Re: 4.11 RC1 KVM Issue: Incorrect hostname/no IP address

2018-01-17 Thread Tutkowski, Mike
Once I run through the rest of my testing for the release candidate, I will 
turn my attention back to this issue. Thanks!

> On Jan 17, 2018, at 10:53 AM, Nux! <n...@li.nux.ro> wrote:
> 
> Mike,
> 
> OK, at least we can rule out the hypervisor firewall side; the problem in your 
> particular case may be with the VR then. But if you feel further testing is 
> not warranted, that's fine.
> 
> Lucian
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Tutkowski, Mike" <mike.tutkow...@netapp.com>
>> To: "dev" <dev@cloudstack.apache.org>
>> Sent: Wednesday, 17 January, 2018 15:08:21
>> Subject: Re: 4.11 RC1 KVM Issue: Incorrect hostname/no IP address
> 
>> The good part for 4.11 is that, per Rohit’s testing and comments, it seems 
>> like
>> it’s just an environment misconfiguration that is leading to these results.
>> That being the case, it’s not an issue we really need to be concerned with 
>> for
>> the 4.11 release candidate.
>> 
>> On 1/17/18, 7:56 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
>> 
>>   Hi Lucian,
>> 
>>   Thanks for the e-mail. I haven’t yet gotten around to trying suggestions 
>> from
>>   others, but I did run that script you pointed me to and then rebooted the 
>> user
>>   VM that was running on that host. Unfortunately, I see the same results: no
>>   specified hostname and no IP address for that VM.
>> 
>>   In case you’re interested, here is the output from that script:
>> 
>>   Stopping firewall and allowing all traffic ...
>> 
>>   # Generated by iptables-save v1.4.21 on Wed Jan 17 07:36:15 2018
>>   *raw
>>   :PREROUTING ACCEPT [103:4120]
>>   :OUTPUT ACCEPT [103:4120]
>>   COMMIT
>>   # Completed on Wed Jan 17 07:36:15 2018
>>   # Generated by iptables-save v1.4.21 on Wed Jan 17 07:36:15 2018
>>   *nat
>>   :PREROUTING ACCEPT [2:133]
>>   :INPUT ACCEPT [0:0]
>>   :OUTPUT ACCEPT [0:0]
>>   :POSTROUTING ACCEPT [0:0]
>>   COMMIT
>>   # Completed on Wed Jan 17 07:36:15 2018
>>   # Generated by iptables-save v1.4.21 on Wed Jan 17 07:36:15 2018
>>   *mangle
>>   :PREROUTING ACCEPT [259:10360]
>>   :INPUT ACCEPT [259:10360]
>>   :FORWARD ACCEPT [0:0]
>>   :OUTPUT ACCEPT [259:10360]
>>   :POSTROUTING ACCEPT [259:10360]
>>   COMMIT
>>   # Completed on Wed Jan 17 07:36:15 2018
>>   # Generated by iptables-save v1.4.21 on Wed Jan 17 07:36:15 2018
>>   *filter
>>   :INPUT ACCEPT [494:19760]
>>   :FORWARD ACCEPT [0:0]
>>   :OUTPUT ACCEPT [494:19760]
>>   COMMIT
>>   # Completed on Wed Jan 17 07:36:15 2018
>> 
>>   Done!
>> 
>>   Thanks,
>>   Mike
>> 
>>   On 1/17/18, 2:04 AM, "Nux!" <n...@li.nux.ro> wrote:
>> 
>>   Mike,
>> 
>>   Run iptables-save on the hypervisor running an actual VM; from the rules
>>   above it looks like you are not running any (except system VMs). If you are
>>   running a VM there, then something seems horribly wrong with the security
>>   groups.
>> 
>>   Another way to check for firewall issues is to disable the firewall
>>   altogether. I'm not sure how Ubuntu handles that, but you can use this
>>   little script [1]. If your problems go away once you do that, then it's a
>>   firewall issue.
>> 
>>   [1] - http://dl.nux.ro/utils/iptflush.sh
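>> 
>>   For reference, a typical flush script does roughly the following (an
>>   untested sketch of the general idea, not necessarily the exact contents of
>>   the script linked above):
>> 
>>   #!/bin/bash
>>   # set the default policies to ACCEPT so nothing gets dropped
>>   iptables -P INPUT ACCEPT
>>   iptables -P FORWARD ACCEPT
>>   iptables -P OUTPUT ACCEPT
>>   # flush all rules and delete user-defined chains in every table
>>   for t in filter nat mangle raw; do
>>       iptables -t "$t" -F
>>       iptables -t "$t" -X
>>   done
>>   # show what is left
>>   iptables-save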
>> 
>>   --
>>   Sent from the Delta quadrant using Borg technology!
>> 
>>   Nux!
>>   www.nux.ro
>> 
>>   - Original Message -
>>> From: "Tutkowski, Mike" <mike.tutkow...@netapp.com>
>>> To: "dev" <dev@cloudstack.apache.org>
>>> Sent: Tuesday, 16 January, 2018 20:31:23
>>> Subject: Re: 4.11 RC1 KVM Issue: Incorrect hostname/no IP address
>> 
>>> Hi,
>>> 
>>> Here are the results of iptables-save (ebtables-save appears not to be
>>> installed):
>>> 
>>> # Generated by iptables-save v1.4.21 on Tue Jan 16 13:23:25 2018
>>> *nat
>>> :PREROUTING ACCEPT [1914053:9571571583]
>>> :INPUT ACCEPT [206:3]
>>> :OUTPUT ACCEPT [4822:348457]
>>> :POSTROUTING ACCEPT [7039:610037]
>>> -A POSTROUTING -s 192.168.122.0/24 -d 224.0.0.0/24 -j RETURN
>>> -A POSTROUTING -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN
>>> -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0

Re: IP Address Discrepancy with VMware

2018-01-14 Thread Tutkowski, Mike
Hi Paul,

I don’t know what was causing it, but I destroyed and re-created the VR and now 
it works just fine.

Seems like we have a bug here, but it’s not easy to reproduce.

Talk to you later!
Mike

On 1/11/18, 3:13 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

Hi Paul,

Thanks for looking into that!

I was just spinning up a VM in a Basic Zone using VMware and NFS…nothing 
really out of the ordinary.

I’m traveling today, but should be able to re-try this use case again 
tomorrow.

Thanks again!
Mike

On 1/11/18, 2:12 AM, "Paul Angus" <paul.an...@shapeblue.com> wrote:

Hey Mike, were you doing anything specific/special?  I haven't yet 
managed to get the wrong IP address.
how does the config for dhcp leases look on the VR?
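
In case it helps, this is roughly how I'd check that on the VR (assuming the
usual dnsmasq setup on the system VM; file locations may differ by template
version):

    # on the virtual router
    cat /etc/dhcphosts.txt              # MAC/IP/hostname mappings pushed by CloudStack
    cat /var/lib/misc/dnsmasq.leases    # leases dnsmasq has actually handed out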


Kind regards,

Paul Angus

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 


-Original Message-----
From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com] 
Sent: 11 January 2018 07:52
To: dev@cloudstack.apache.org
Subject: Re: IP Address Discrepancy with VMware

Thanks, Paul!

> On Jan 11, 2018, at 12:49 AM, Paul Angus <paul.an...@shapeblue.com> 
wrote:
> 
> I'll have a look @ mike
> 
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> 
    > 
    > 
> 
> -Original Message-
> From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com] 
> Sent: 11 January 2018 07:31
> To: dev@cloudstack.apache.org
> Subject: IP Address Discrepancy with VMware
> 
> Hi,
> 
> While I was running some tests related to 4.11 tonight, I noticed the 
following discrepancy with regards to the IP address given to a VM of mine:
> 
> https://imgur.com/3ODXHNe
> 
> According to CloudStack, the IP address should be 10.117.40.28. 
However, when I run ifconfig, I get 10.117.40.115.
> 
> In this situation, I’m running only with VMware (no other hypervisor 
type in use) in a Basic Zone where the root disk of the VM is on NFS.
> 
> Anyone else able to reproduce this issue?
> 
> Thanks,
> Mike






Re: [DISCUSS] Freezing master for 4.11

2018-01-12 Thread Tutkowski, Mike
I’m investigating these now. I have found and fixed two of them so far.

> On Jan 12, 2018, at 2:49 PM, Rohit Yadav  wrote:
> 
> Thanks Rafael and Daan.
> 
> 
>> From: Rafael Weingärtner 
>> 
>> I believe there is no problem in merging Wido’s and Mike’s PRs; they have
>> been extensively discussed and improved (especially Mike’s).
> 
> Thanks, Mike's PR has several regression smoketest failures and can be 
> accepted only when those failures are fixed.
> 
> We'll cut the 4.11 branch and start RC1 on Monday; that will be a hard freeze. If 
> Mike wants, he can help fix them over the weekend, and I can help run smoketests.
> 
>> Having said that, I would be OK with it (no need to revert it), but we need
>> to be more careful with these things. If one wants to merge something,
>> there is no harm in waiting and calling for reviewers via GitHub or Slack, or
>> even emailing them directly.
> 
> Additional review was requested, but mea culpa - thanks for your support, 
> noted.
> 
> - Rohit
> 
> On Fri, Jan 12, 2018 at 3:57 PM, Rohit Yadav 
> wrote:
> 
>> All,
>> 
>> 
>> We're down to one feature PR towards 4.11 milestone now:
>> 
>> https://github.com/apache/cloudstack/pull/2298
>> 
>> 
>> The config drive PR from Frank (Nuage) was accepted today after no
>> regression test failures were seen in yesterday's smoketest run. We've also
>> tested, reviewed, and merged Wido's (blocker fix) PR.
>> 
>> 
>> I've asked Mike to stabilize the branch; based on the smoketest results
>> from today we can see some failures caused by the PR. I'm willing to work
>> with Mike and others to get this PR tested and merged over the weekend if
>> we can demonstrate that no regression is caused by it, i.e. no new
>> smoketest regressions. I'll also try to fix regression and test failures
>> over the weekend.
>> 
>> 
>> Lastly, I would like to discuss a mistake I made today with merging the
>> following PR, which per our guidelines lacks one code-review LGTM/approval:
>> 
>> https://github.com/apache/cloudstack/pull/2152
>> 
>> 
>> The changes in the above (merged) PR are all localized to a xenserver-swift
>> file that is not tested by Travis or Trillian; since no new regression
>> failures were seen, I accepted and merged it at my discretion. The PR was
>> originally on the 4.11 milestone; however, due to it lacking a JIRA id and
>> no response from the author, it was only recently removed from the milestone.
>> 
>> 
>> Please advise whether I need to revert this, or whether we can review/LGTM it
>> post-merge. I'll also ping on the above PR.
>> 
>> 
>> - Rohit
>> 
>> 
> 
> 
> 
>> 
>> 
>> 
>> 
>> From: Wido den Hollander 
>> Sent: Thursday, January 11, 2018 9:17:26 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: [DISCUSS] Freezing master for 4.11
>> 
>> 
>> 
>>> On 01/10/2018 07:26 PM, Daan Hoogland wrote:
>>> I hope we understand each other correctly: no one running an earlier
>>> version than 4.11 should miss out on any functionality they are using now.
>>> 
>>> So if you use IPv6 and multiple CIDRs now, it must continue to work with no
>>> loss of functionality. See my question below.
>>> 
>>> On Wed, Jan 10, 2018 at 7:06 PM, Ivan Kudryavtsev <
>> kudryavtsev...@bw-sw.com>
>>> wrote:
>>> 
 Daan, yes this sounds reasonable; I suppose whoever would like a fix could
 do a custom build for themselves...
 
 But it should still be acknowledged somehow: if you use several CIDRs for a
 network, don't use IPv6, or don't upgrade to 4.11, because things will stop
 running well.
 
>>> Does this mean that several cidrs in ipv6 works in 4.9 and not in 4.11?
>>> 
>> 
>> No, it doesn't. IPv6 was introduced in 4.10 and this broke in 4.10.
>> 
>> You can't run 4.10 with multiple IPv4 CIDRs either when you have
>> IPv6 enabled.
>> 
>> So this is broken in both 4.10 and 4.11 in that case.
>> 
>> Wido
>> 
>>> 
>>> if yes; it is a blocker
>>> 
>>> if no; you might as well upgrade for other features as it doesn't work
>> now
>>> either.
>>> 
>> 
>> rohit.ya...@shapeblue.com
>> www.shapeblue.com
> 
> 
> 
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>> 
>> 
>> 
>> 
> 
> 
> --
> Rafael Weingärtner
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, 

Re: IP Address Discrepancy with VMware

2018-01-11 Thread Tutkowski, Mike
Hi Paul,

Thanks for looking into that!

I was just spinning up a VM in a Basic Zone using VMware and NFS…nothing really 
out of the ordinary.

I’m traveling today, but should be able to re-try this use case again tomorrow.

Thanks again!
Mike

On 1/11/18, 2:12 AM, "Paul Angus" <paul.an...@shapeblue.com> wrote:

Hey Mike, were you doing anything specific/special?  I haven't yet managed 
to get the wrong IP address.
how does the config for dhcp leases look on the VR?


Kind regards,

Paul Angus

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 


-Original Message-
    From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com] 
Sent: 11 January 2018 07:52
To: dev@cloudstack.apache.org
Subject: Re: IP Address Discrepancy with VMware

Thanks, Paul!

> On Jan 11, 2018, at 12:49 AM, Paul Angus <paul.an...@shapeblue.com> wrote:
> 
> I'll have a look @ mike
> 
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> 
    > 
    > 
> 
> -Original Message-
> From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com] 
> Sent: 11 January 2018 07:31
> To: dev@cloudstack.apache.org
> Subject: IP Address Discrepancy with VMware
> 
> Hi,
> 
> While I was running some tests related to 4.11 tonight, I noticed the 
following discrepancy with regards to the IP address given to a VM of mine:
> 
> https://imgur.com/3ODXHNe
> 
> According to CloudStack, the IP address should be 10.117.40.28. However, 
when I run ifconfig, I get 10.117.40.115.
> 
> In this situation, I’m running only with VMware (no other hypervisor type 
in use) in a Basic Zone where the root disk of the VM is on NFS.
> 
> Anyone else able to reproduce this issue?
> 
> Thanks,
> Mike




Re: [VOTE] Apache Cloudstack 4.11.0.0 LTS [RC2]

2018-01-29 Thread Tutkowski, Mike
Hi,

I am +1 on RC2.

Here is the list of tests I ran for RC1:

https://github.com/apache/cloudstack/pull/2416#issuecomment-359220967

In that RC, I found one blocker, which has been fixed and included in RC2.

I went through the list of new commits for RC2 and I didn’t see anything in 
them to suggest that the tests I successfully ran for RC1 would have trouble 
with RC2.

Thanks,
Mike

On 1/26/18, 5:19 AM, "Rohit Yadav"  wrote:

Hi All,

I've created a 4.11.0.0 release (RC2), with the following artifacts up for
testing and a vote:

Git Branch and Commit SH:

https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.0.0-RC20180126T1313
Commit: 5dada1f7ed5fb6a8ee261c763f744583e586f8bf

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.11.0.0/

PGP release keys (signed using 5ED1E1122DC5E8A4A45112C2484248210EE3D884):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open till end of next week, 2nd Feb 2018.

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

Additional information:

For users' convenience, I've built packages from
5dada1f7ed5fb6a8ee261c763f744583e586f8bf and published RC2 repository here:
http://cloudstack.apt-get.eu/testing/4.11-rc2
(the packages are being built and will be available shortly)

The release notes are still work-in-progress, but the systemvmtemplate
upgrade section has been updated. You may refer the following for
systemvmtemplate upgrade testing:

http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/latest/index.html

4.11 systemvmtemplates are available from here:
https://download.cloudstack.org/systemvm/4.11/

Regards,
Rohit Yadav




Re: CS 4.8 KVM VMs will not live migrate

2018-01-31 Thread Tutkowski, Mike
Glad to hear you fixed the issue! :)

> On Jan 31, 2018, at 7:16 AM, David Mabry  wrote:
> 
> Mike and Wei,
> 
> Good news!  I was able to manually live migrate these VMs following the steps 
> outlined below:
> 
> 1.) virsh dumpxml 38 --migratable > 38.xml
> 2.) Change the vnc information in 38.xml to match destination host IP and 
> available VNC port
> 3.) virsh migrate --verbose --live 38 --xml 38.xml 
> qemu+tcp://destination.host.net/system
> 
> To my surprise, Cloudstack was able to discover and properly handle the fact 
> that this VM was live migrated to a new host without issue.  Very cool.
> 
> Wei, I suspect you are correct when you said this was an issue with the 
> cloudstack agent code.  After digging a little deeper, the agent never 
> attempts to talk to libvirt at all after prepping the domain XML to send to the 
> destination host.  I'm going to attempt to reproduce this in my lab, 
> attach a remote debugger, and see if I can get to the bottom of it.
> 
> Thanks again for the help guys!  I really appreciate it.
> 
> Thanks,
> David Mabry
> 
> On 1/30/18, 9:55 AM, "David Mabry"  wrote:
> 
>Ah, understood.  I'll take a closer look at the logs and make sure that I 
> didn't accidentally miss those lines when I pulled together the logs for this 
> email chain.
> 
>Thanks,
>David Mabry
>On 1/30/18, 8:34 AM, "Wei ZHOU"  wrote:
> 
>Hi David,
> 
>I encountered the UnsupportedAnswer once before, when I made some changes in
>the KVM plugin.
> 
>Normally there should be some network configuration entries in the agent.log,
>but I do not see them.
> 
>-Wei
> 
> 
>2018-01-30 15:00 GMT+01:00 David Mabry :
> 
>> Hi Wei,
>> 
>> I detached the iso and received the same error.  Just out of curiosity,
>> what leads you to believe it is something in the vxlan code?  I guess at
>> this point, attaching a remote debugger to the agent in question might be
>> the best way to get to the bottom of what is going on.
>> 
>> Thanks in advance for the help.  I really, really appreciate it.
>> 
>> Thanks,
>> David Mabry
>> 
>> On 1/30/18, 3:30 AM, "Wei ZHOU"  wrote:
>> 
>>The answer is probably caused by an exception in the CloudStack agent.
>>I tried to migrate a VM in our testing env, and it is working.
>> 
>>There are some differences between our env and yours:
>>(1) VLAN vs. VXLAN
>>(2) no ISO vs. attached ISO
>>(3) both of us use Ceph and CentOS 7.
>> 
>>I suspect it is caused by the VXLAN code.
>>However, could you detach the ISO and try again?
>> 
>>-Wei
>> 
>> 
>> 
>>2018-01-29 19:48 GMT+01:00 David Mabry :
>> 
>>> Good day Cloudstack Devs,
>>> 
>>> I've run across a real head scratcher.  I have two VMs, (initially 3
>> VMs,
>>> but more on that later) on a single host, that I cannot live migrate
>> to any
>>> other host in the same cluster.  We discovered this after attempting
>> to
>>> roll out patches going from CentOS 7.2 to CentOS 7.4.  Initially, we
>>> thought it had something to do with the new version of libvirtd or
>> qemu-kvm
>>> on the other hosts in the cluster preventing these VMs from
>> migrating, but
>>> we are able to live migrate other VMs to and from this host without
>> issue.
>>> We can even create new VMs on this specific host and live migrate
>> them
>>> after creation with no issue.  We've put the migration source agent,
>>> migration destination agent and the management server in debug and
>> don't
>>> seem to get anything useful other than "Unsupported command".
>> Luckily, we
>>> did have one VM that was shutdown and restarted, this is the 3rd VM
>>> mentioned above.  Since that VM has been restarted, it has no issues
>> live
>>> migrating to any other host in the cluster.
>>> 
>>> I'm at a loss as to what to try next and I'm hoping that someone out
>> there
>>> might have had a similar issue and could shed some light on what to
>> do.
>>> Obviously, I can contact the customer and have them shutdown their
>> VMs, but
>>> that will potentially just delay this problem to be solved another
>> day.
>>> Even if shutting down the VMs is ultimately the solution, I'd still
>> like to
>>> understand what happened to cause this issue in the first place with
>> the
>>> hopes of preventing it in the future.
>>> 
>>> Here's some information about my setup:
>>> Cloudstack 4.8 Advanced Networking
>>> CentOS 7.2 and 7.4 Hosts
>>> Ceph RBD Primary Storage
>>> NFS Secondary Storage
>>> Instance in Question for Debug: i-532-1392-NSVLTN
>>> 
>>> I have attached relevant debug logs to this email if anyone wishes
>> to take
>>> a look.  I think the most interesting error message that I have
>> received is
>>> the following:
>>> 
>>> 468390:2018-01-27 08:59:35,172 DEBUG [c.c.a.t.Request]
>>> (Work-Job-Executor-6:ctx-188ea30f job-181792/job-181802
>> ctx-8e7f45ad)
>>> 

Re: [VOTE] Apache Cloudstack 4.11.0.0 LTS [RC2]

2018-01-31 Thread Tutkowski, Mike
Yes

+1 binding

> On Jan 31, 2018, at 8:47 AM, Rohit Yadav  wrote:
> 
> 
> Mike - is that a binding +1 vote?
> 
> Regards.
> 
> From: Daan Hoogland 
> Sent: Wednesday, January 31, 2018 2:52:41 PM
> To: dev
> Cc: us...@cloudstack.apache.org
> Subject: Re: [VOTE] Apache Cloudstack 4.11.0.0 LTS [RC2]
> 
> did package verification with all three sig files and including my own
> monkeying and Boris testing;
> +1 (binding)
> 
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> 
> 
> 
>> On Fri, Jan 26, 2018 at 1:19 PM, Rohit Yadav  wrote:
>> 
>> Hi All,
>> 
>> I've created a 4.11.0.0 release (RC2), with the following artifacts up for
>> testing and a vote:
>> 
>> Git Branch and Commit SH:
>> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=
>> shortlog;h=refs/heads/4.11.0.0-RC20180126T1313
>> Commit: 5dada1f7ed5fb6a8ee261c763f744583e586f8bf
>> 
>> Source release (checksums and signatures are available at the same
>> location):
>> https://dist.apache.org/repos/dist/dev/cloudstack/4.11.0.0/
>> 
>> PGP release keys (signed using 5ED1E1122DC5E8A4A45112C2484248210EE3D884):
>> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>> 
>> The vote will be open till end of next week, 2nd Feb 2018.
>> 
>> For sanity in tallying the vote, can PMC members please be sure to indicate
>> "(binding)" with their vote?
>> 
>> [ ] +1  approve
>> [ ] +0  no opinion
>> [ ] -1  disapprove (and reason why)
>> 
>> Additional information:
>> 
>> For users' convenience, I've built packages from
>> 5dada1f7ed5fb6a8ee261c763f744583e586f8bf and published RC2 repository
>> here:
>> http://cloudstack.apt-get.eu/testing/4.11-rc2
>> (the packages are being built and will be available shortly)
>> 
>> The release notes are still work-in-progress, but the systemvmtemplate
>> upgrade section has been updated. You may refer the following for
>> systemvmtemplate upgrade testing:
>> http://docs.cloudstack.apache.org/projects/cloudstack-
>> release-notes/en/latest/index.html
>> 
>> 4.11 systemvmtemplates are available from here:
>> https://download.cloudstack.org/systemvm/4.11/
>> 
>> Regards,
>> Rohit Yadav
>> 
> 
> 
> 
> --
> Daan


Re: CS 4.8 KVM VMs will not live migrate

2018-01-29 Thread Tutkowski, Mike
Well…unfortunately, the serial-number issue that I had seen cause a problem 
before doesn't seem to apply here. On both the working and non-working 
(for live migration) VMs, there is a <serial> element for the applicable <disk> 
elements (per the XML below).

Anyone else have any ideas here?

On 1/29/18, 4:41 PM, "David Mabry" <dma...@ena.com.INVALID> wrote:

Mike,

Thanks for the reply.  As requested:

Will not Migrate

[libvirt domain XML for i-532-1392-NSVLTN; the element tags were stripped in 
archiving. Recoverable details: uuid f7dbf00b-2e15-4991-a407-cf27a3d65d1e, 
"Other PV Virtio-SCSI (64-bit)", 4194304 KiB memory, 2 vCPUs, CPU model 
Haswell-noTSX, emulator /usr/libexec/qemu-kvm, and two disks with serials 
223e08b0929c4c47833d and 97e5a2991efd40ed85f4, each throttled to 524288000 
bytes/s and 500 IOPS.]

Will migrate

[libvirt domain XML for i-532-1298-NSVLTN; element tags likewise stripped. 
Recoverable details: uuid d6ec74b8-4f6a-405c-834e-ece42151b802, "Windows PV", 
4194304 KiB memory, 1 vCPU, CPU model Haswell, emulator /usr/libexec/qemu-kvm, 
and two disks with serials f0b58e22d05a48258a4a and cd0c282239124730ac55.]
David Mabry
Manager of Systems Engineering
On 1/29/18, 5:30 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

Hi David,

So, I don’t know if what I am going to say here will at all be of use 
to you, but maybe. :)

I had a customer one time mention to me that he had trouble with live 
VM migration on KVM with a VM that was created on an older version of 
CloudStack. Live VM migration worked fine for these VMs on the older version of 
CloudStack (I think it was version 4.5) and stopped working when he upgraded to 
4.8. New VMs (VMs created on the newer version of CloudStack) worked fine for 
this feature on 4.8, but old VMs had to be stopped and re-started for live VM 
migration to work. I believe the older version of CloudStack was not placing 
the serial number of the VM in the VM’

Re: CS 4.8 KVM VMs will not live migrate

2018-01-29 Thread Tutkowski, Mike
Hi David,

So, I don’t know if what I am going to say here will at all be of use to you, 
but maybe. :)

I had a customer one time mention to me that he had trouble with live VM 
migration on KVM with a VM that was created on an older version of CloudStack. 
Live VM migration worked fine for these VMs on the older version of CloudStack 
(I think it was version 4.5) and stopped working when he upgraded to 4.8. New 
VMs (VMs created on the newer version of CloudStack) worked fine for this 
feature on 4.8, but old VMs had to be stopped and re-started for live VM 
migration to work. I believe the older version of CloudStack was not placing 
the serial number of the VM in the VM’s XML descriptor file, but newer versions 
of CloudStack were expecting this field.

Can you dump the XML of one or both of your VMs that don’t live migrate and see 
if they have the serial number field in their XML? Then, I’d recommend dumping 
the XML of the VM that works and seeing if it does, in fact, have the serial 
number field in its XML.
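
For example, something along these lines on the KVM host should show whether 
the serial is present (untested; using your problem instance name from below, 
adjust as needed):

    virsh dumpxml i-532-1392-NSVLTN | grep -i -B 2 -A 2 serial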

I hope this is of some help.

Talk to you later,
Mike

On 1/29/18, 11:48 AM, "David Mabry"  wrote:

Good day Cloudstack Devs,

I've run across a real head scratcher.  I have two VMs, (initially 3 VMs, 
but more on that later) on a single host, that I cannot live migrate to any 
other host in the same cluster.  We discovered this after attempting to roll 
out patches going from CentOS 7.2 to CentOS 7.4.  Initially, we thought it had 
something to do with the new version of libvirtd or qemu-kvm on the other hosts 
in the cluster preventing these VMs from migrating, but we are able to live 
migrate other VMs to and from this host without issue.  We can even create new 
VMs on this specific host and live migrate them after creation with no issue.  
We've put the migration source agent, migration destination agent and the 
management server in debug and don't seem to get anything useful other than 
"Unsupported command".  Luckily, we did have one VM that was shutdown and 
restarted, this is the 3rd VM mentioned above.  Since that VM has been 
restarted, it has no issues live migrating to any other host in the cluster.

I'm at a loss as to what to try next and I'm hoping that someone out there 
might have had a similar issue and could shed some light on what to do.  
Obviously, I can contact the customer and have them shutdown their VMs, but 
that will potentially just delay this problem to be solved another day.  Even 
if shutting down the VMs is ultimately the solution, I'd still like to 
understand what happened to cause this issue in the first place with the hopes 
of preventing it in the future.

Here's some information about my setup:
Cloudstack 4.8 Advanced Networking
CentOS 7.2 and 7.4 Hosts
Ceph RBD Primary Storage
NFS Secondary Storage
Instance in Question for Debug: i-532-1392-NSVLTN

I have attached relevant debug logs to this email if anyone wishes to take 
a look.  I think the most interesting error message that I have received is the 
following:

468390:2018-01-27 08:59:35,172 DEBUG [c.c.a.t.Request] 
(Work-Job-Executor-6:ctx-188ea30f job-181792/job-181802 ctx-8e7f45ad) 
(logid:f0888362) Seq 22-942378222027276319: Received:  { Ans: , MgmtId: 
14038012703634, via: 22(csh02c01z01.nsvltn.ena.net), Ver: v1, Flags: 110, { 
UnsupportedAnswer } }
468391:2018-01-27 08:59:35,172 WARN  [c.c.a.m.AgentManagerImpl] 
(Work-Job-Executor-6:ctx-188ea30f job-181792/job-181802 ctx-8e7f45ad) 
(logid:f0888362) Unsupported Command: Unsupported command issued: 
com.cloud.agent.api.PrepareForMigrationCommand.  Are you sure you got the right 
type of server?
468392:2018-01-27 08:59:35,179 ERROR [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-6:ctx-188ea30f job-181792/job-181802 ctx-8e7f45ad) 
(logid:f0888362) Invocation exception, caused by: 
com.cloud.exception.AgentUnavailableException: Resource [Host:22] is 
unreachable: Host 22: Unable to prepare for migration due to Unsupported 
command issued: com.cloud.agent.api.PrepareForMigrationCommand.  Are you sure 
you got the right type of server?
468393:2018-01-27 08:59:35,179 INFO  [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-6:ctx-188ea30f job-181792/job-181802 ctx-8e7f45ad) 
(logid:f0888362) Rethrow exception 
com.cloud.exception.AgentUnavailableException: Resource [Host:22] is 
unreachable: Host 22: Unable to prepare for migration due to Unsupported 
command issued: com.cloud.agent.api.PrepareForMigrationCommand.  Are you sure 
you got the right type of server?

I've tracked this "Unsupported command" down in the CS 4.8 code to 
cloudstack/api/src/com/cloud/agent/api/Answer.java which is the generic answer 
class.  I believe where the error is really being spawned from is 
cloudstack/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java.
  Specifically:
Answer pfma = null;
try {
pfma = 

Re: Copy Volume Failed in CloudStack 4.5 (XenServer 6.5)

2018-02-08 Thread Tutkowski, Mike
If you go to the Global Settings tab in the GUI and search for “wait”, there 
are several possible timeouts that may apply.

The backup.snapshot.wait Global Setting seems like the one that probably 
applies here (per what Pierre-Luc was noting).
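
If it needs to be raised, something like this via CloudMonkey should do it (the 
value below is just an example, in seconds, and the management server may need 
a restart for it to take effect):

    list configurations name=backup.snapshot.wait
    update configuration name=backup.snapshot.wait value=86400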

On 2/8/18, 4:15 PM, "Pierre-Luc Dion"  wrote:

I think there is a timeout global setting you could change so the copy task
can take longer before it times out and fails in CloudStack. This will not
improve your performance but might reduce failures.

On updating the database content, it could work, but only if the VHD
copies successfully and the mappings remain valid.

I hope this can help...



On 6 Feb 2018 at 13:28, "anillakieni"  wrote:

Dear All,

Is somebody available here to assist me on fixing my issue.

Thanks,
Anil.

On Tue, Feb 6, 2018 at 9:00 PM, anillakieni  wrote:

> Hi All,
>
> I'm facing an issue when copying larger volumes, i.e., from Secondary
> Storage to Primary Storage (I mean attaching a DATA volume to a VM); it fails
> after a certain time, around 37670 seconds.
>
> Version of:
> - CloudStack is 4.5.0
> - XenServer 6.5.0
> - MySQL 5.1.73
>
>
> The error and log are provided below. Could someone please advise which
> steps I have to take to fix this issue? Also, is there a chance to update the
> failed status to success through the database tables? Otherwise I have to
> upload the whole disk to secondary storage again and then attach it to the
> VM, which takes more time. My environment has very slow network transfers (I
> have only a 1 Gig switch). Please let me know if we can tweak the DB to
> update the status of the disk, or whether there are any settings to be
> changed to allow more time (wait time) for updating the status.
> "
>
> 2018-02-06 03:20:42,385 DEBUG [c.c.a.t.Request] (Work-Job-Executor-31:ctx-
c1c78a5a
> job-106186/job-106187 ctx-ea1ef3e6) (logid:c59b2359) Seq
> 38-367887794560851961: Received:  { Ans: , MgmtId: 47019105324719, via:
38,
> Ver: v1, Flags: 110, { CopyCmdAnswer } }
> 2018-02-06 03:20:42,389 DEBUG [o.a.c.s.v.VolumeObject]
> (Work-Job-Executor-31:ctx-c1c78a5a job-106186/job-106187 ctx-ea1ef3e6)
> (logid:c59b2359) *Failed to update state*
> *com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
> com.mysql.jdbc.JDBC4PreparedStatement@54bd3a25: SELECT volume_store_ref.id
> , volume_store_ref.store_id,
> volume_store_ref.volume_id, volume_store_ref.zone_id,
> volume_store_ref.created, volume_store_ref.last_updated,
> volume_store_ref.download_pct, volume_store_ref.size,
> volume_store_ref.physical_size, volume_store_ref.download_state,
> volume_store_ref.checksum, volume_store_ref.local_path,
> volume_store_ref.error_str, volume_store_ref.job_id,
> volume_store_ref.install_path, volume_store_ref.url,
> volume_store_ref.download_url, volume_store_ref.download_url_created,
> volume_store_ref.destroyed, volume_store_ref.update_count,
> volume_store_ref.updated, volume_store_ref.state, volume_store_ref.ref_cnt
> FROM volume_store_ref WHERE volume_store_ref.store_id = 1  AND
> volume_store_ref.volume_id = 1178  AND volume_store_ref.destroyed = 0
> ORDER BY RAND() LIMIT 1*
> at com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(
> GenericDaoBase.java:425)
> at com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(
> GenericDaoBase.java:361)
> at com.cloud.utils.db.GenericDaoBase.findOneIncludingRemovedBy(
> GenericDaoBase.java:889)
> at com.cloud.utils.db.GenericDaoBase.findOneBy(
> GenericDaoBase.java:900)
> at org.apache.cloudstack.storage.image.db.VolumeDataStoreDaoImpl.
> findByStoreVolume(VolumeDataStoreDaoImpl.java:209)
> at sun.reflect.GeneratedMethodAccessor306.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.springframework.aop.support.AopUtils.
> invokeJoinpointUsingReflection(AopUtils.java:317)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.
> invokeJoinpoint(ReflectiveMethodInvocation.java:183)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.
> proceed(ReflectiveMethodInvocation.java:150)
> at com.cloud.utils.db.TransactionContextInterceptor.invoke(
> TransactionContextInterceptor.java:34)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.
> proceed(ReflectiveMethodInvocation.java:161)
> at 

Re: [VOTE] Apache Cloudstack 4.11.0.0 LTS [RC2]

2018-02-05 Thread Tutkowski, Mike
Congratulations, everyone!

On 2/5/18, 3:09 AM, "Rohit Yadav"  wrote:

Copy/paste error, the vote count was:


+1 (PMC / binding)
4 person (Mike, Daan, Wido, Rohit)


- Rohit






From: Rohit Yadav 
Sent: Monday, February 5, 2018 10:57:17 AM
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache Cloudstack 4.11.0.0 LTS [RC2]

Hi All,

The vote for CloudStack 4.11.0.0 *passes* with 4 PMC + 2 non-PMC votes.

+1 (PMC / binding)
2 person (Mike, Daan, Wido, Rohit)

+1 (non binding)
2 person (Lucian, Boris)

0
none

-1
none

Thanks to everyone participating.

I will now prepare the release announcement to go out after 48 hours to
give the mirrors time to catch up.

Regards.


rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 

On Fri, Jan 26, 2018 at 1:19 PM, Rohit Yadav  wrote:

> Hi All,
>
> I've created a 4.11.0.0 release (RC2), with the following artifacts up for
> testing and a vote:
>
> Git Branch and Commit SH:
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=
> shortlog;h=refs/heads/4.11.0.0-RC20180126T1313
> Commit: 5dada1f7ed5fb6a8ee261c763f744583e586f8bf
>
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.11.0.0/
>
> PGP release keys (signed using 5ED1E1122DC5E8A4A45112C2484248210EE3D884):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>
> The vote will be open till end of next week, 2nd Feb 2018.
>
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
>
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
>
> Additional information:
>
> For users' convenience, I've built packages from
> 5dada1f7ed5fb6a8ee261c763f744583e586f8bf and published RC2 repository
> here:
> http://cloudstack.apt-get.eu/testing/4.11-rc2
> (the packages are being built and will be available shortly)
>
> The release notes are still work-in-progress, but the systemvmtemplate
> upgrade section has been updated. You may refer the following for
> systemvmtemplate upgrade testing:
> http://docs.cloudstack.apache.org/projects/cloudstack-
> release-notes/en/latest/index.html
>
> 4.11 systemvmtemplates are available from here:
> https://download.cloudstack.org/systemvm/4.11/
>
> Regards,
> Rohit Yadav
>




Re: [VOTE] Apache Cloudstack 4.11.0.0 LTS [RC2]

2018-02-05 Thread Tutkowski, Mike
Also, thanks to Rohit for doing an awesome job as release manager for 4.11!

On 2/5/18, 12:50 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

Congratulations, everyone!

On 2/5/18, 3:09 AM, "Rohit Yadav" <rohit.ya...@shapeblue.com> wrote:

Copy/paste error, the vote count was:


+1 (PMC / binding)
4 person (Mike, Daan, Wido, Rohit)


- Rohit

<https://cloudstack.apache.org>




From: Rohit Yadav <ro...@apache.org>
Sent: Monday, February 5, 2018 10:57:17 AM
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache Cloudstack 4.11.0.0 LTS [RC2]

Hi All,

The vote for CloudStack 4.11.0.0 *passes* with 4 PMC + 2 non-PMC votes.

+1 (PMC / binding)
2 person (Mike, Daan, Wido, Rohit)

+1 (non binding)
2 person (Lucian, Boris)

0
none

-1
none

Thanks to everyone participating.

I will now prepare the release announcement to go out after 48 hours to
give the mirrors time to catch up.

Regards.


rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 

On Fri, Jan 26, 2018 at 1:19 PM, Rohit Yadav <ro...@apache.org> wrote:

> Hi All,
>
> I've created a 4.11.0.0 release (RC2), with the following artifacts 
up for
> testing and a vote:
>
> Git Branch and Commit SH:
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=
> shortlog;h=refs/heads/4.11.0.0-RC20180126T1313
> Commit: 5dada1f7ed5fb6a8ee261c763f744583e586f8bf
>
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.11.0.0/
>
> PGP release keys (signed using 
5ED1E1122DC5E8A4A45112C2484248210EE3D884):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>
> The vote will be open till end of next week, 2nd Feb 2018.
>
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
>
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
>
> Additional information:
>
> For users' convenience, I've built packages from
> 5dada1f7ed5fb6a8ee261c763f744583e586f8bf and published RC2 repository
> here:
> http://cloudstack.apt-get.eu/testing/4.11-rc2
> (the packages are being built and will be available shortly)
>
> The release notes are still work-in-progress, but the systemvmtemplate
> upgrade section has been updated. You may refer the following for
> systemvmtemplate upgrade testing:
> http://docs.cloudstack.apache.org/projects/cloudstack-
> release-notes/en/latest/index.html
>
> 4.11 systemvmtemplates are available from here:
> https://download.cloudstack.org/systemvm/4.11/
>
> Regards,
> Rohit Yadav
>






Re: Open Summit CFP anyone ?

2018-06-21 Thread Tutkowski, Mike
I am interested.

> On Jun 21, 2018, at 2:38 AM, Giles Sirett  wrote:
> 
> Hi Andrija
> - yes I think it would be a great idea for Cloudstack to have some talks 
> there.
> 
> Open Source Summit appears to be the Linux Foundation's replacement for 
> LinuxCon/CloudOpen - people from this community have spoken at these before 
> and had good attendance.
> 
> Let's coordinate some submissions on here.
> 
> First of all, anybody else fancy submitting for this ?
> 
> Kind regards
> Giles
> 
> giles.sir...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> 
> 
> 
> 
> -Original Message-
> From: Andrija Panic  
> Sent: 20 June 2018 21:52
> To: dev 
> Subject: Open Summit CFP anyone ?
> 
> Hi all,
> 
> Just wondering if anyone submitted CFP here:
> 
> https://events.linuxfoundation.org/events/open-source-summit-europe-2018/program/cfp/
> 
> Sounds like an interesting place to present the product (ACS) - if anyone is 
> interested, I'm happy to share the work(load) and present jointly, or 
> similar...
> 
> Deadline for CFP is 1st July...
> 
> Anyone?
> 
> Cheers,
> Andrija


Re: [DISCUSS] Release effort for 4.11.2.0

2018-08-02 Thread Tutkowski, Mike
This sounds good to me, Rohit.

Also, I would like to add this bug to the list for 4.11.2.0:

https://github.com/apache/cloudstack/pull/2776

On 8/2/18, 2:57 AM, "Rohit Yadav"  wrote:

All,


The recent CloudStack 4.11.1.0 release received a good reception. This thread 
is to gather feedback, especially a list of bugs and issues from the community, 
that we should aim to fix in the next minor LTS release, 4.11.2.0.


Here is a rough timeline proposal for the same:


0-4 week: Get feedback from the community, gather and triage list of 
issues, start fixing/testing/reviewing them

4-6 week: Stabilize 4.11 branch towards 4.11.2.0, cut RC and start voting

6-8 week: Iterate over RCs/voting and release!


To limit the scope for RM, blocker/critical issues will take priority. Paul 
will continue as RM for the 4.11.2.0 release, with assistance from Boris, Daan, 
and myself.


For reference, this is the 4.11.2.0 milestone PR/issues list:

https://github.com/apache/cloudstack/milestone/6


Thoughts, issues you want to discuss, feedback? Thanks.


- Rohit

rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 





Re: [GitHub] rafaelweingartner commented on issue #2761: Add managed storage pool constraints to MigrateWithVolume API method

2018-08-02 Thread Tutkowski, Mike
Sure, it’s no rush for me.

Just let me know when you’re ready for me to merge it or feel free to merge it 
yourself.

> On Aug 2, 2018, at 5:08 AM, GitBox  wrote:
> 
> rafaelweingartner commented on issue #2761: Add managed storage pool 
> constraints to MigrateWithVolume API method
> URL: https://github.com/apache/cloudstack/pull/2761#issuecomment-409890686
> 
> 
>   @DaanHoogland that is what I was going to say to Mike. We can squash at the 
> merge "moment".
> 
>   Can we wait one more week or so before merging? I would like to cover this 
> code with unit tests. It is very complicated and delicate code, and it 
> does not currently have unit tests.
> 
> 
> This is an automated message from the Apache Git Service.
> To respond to the message, please log on GitHub and use the
> URL above to go to the specific comment.
> 
> For queries about this service, please contact Infrastructure at:
> us...@infra.apache.org
> 
> 
> With regards,
> Apache Git Services


Re: GUI Wizard Issue on master

2018-08-02 Thread Tutkowski, Mike
Thanks! I can take a look at that PR.

> On Aug 2, 2018, at 10:39 AM, Rafael Weingärtner  
> wrote:
> 
> It seems to be a problem caused by the upgrade of jQuery-UI. It looks like
> jQuery UI is handling the creation of those divs differently. Can you take
> a look at my PR: https://github.com/apache/cloudstack/pull/2787?
> 
> On Wed, Aug 1, 2018 at 6:30 PM, Tutkowski, Mike 
> wrote:
> 
>> Also, there seems to be a weird outline around the Create Instance wizard,
>> as well.
>> 
>> On 8/1/18, 3:28 PM, "Tutkowski, Mike"  wrote:
>> 
>>Here’s what it looks like:
>> 
>>https://imgur.com/a/cV7pc9L
>> 
>>On 8/1/18, 3:25 PM, "Tutkowski, Mike" 
>> wrote:
>> 
>>I don’t have Firefox installed, but I see the same problem on both
>> Chrome and Safari.
>> 
>>On 8/1/18, 3:18 PM, "Rafael Weingärtner" <
>> rafaelweingart...@gmail.com> wrote:
>> 
>>Are you seeing this only with Chrome? Or is it the same in
>> Firefox as well?
>> 
>>On Wed, Aug 1, 2018 at 6:13 PM, Tutkowski, Mike <
>> mike.tutkow...@netapp.com>
>>wrote:
>> 
>>> Hi,
>>> 
>>> Has anyone else noticed that the Create Zone wizard’s
>> navigation buttons
>>> are placed below the bottom edge of the wizard? I just saw
>> this on master
>>> today using Chrome.
>>> 
>>> Thanks,
>>> Mike
>>> 
>> 
>> 
>> 
>>--
>>Rafael Weingärtner
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Rafael Weingärtner


Re: GUI Wizard Issue on master

2018-08-01 Thread Tutkowski, Mike
I don’t have Firefox installed, but I see the same problem on both Chrome and 
Safari.

On 8/1/18, 3:18 PM, "Rafael Weingärtner"  wrote:

Are you seeing this only with Chrome? Or is it the same in Firefox as well?

On Wed, Aug 1, 2018 at 6:13 PM, Tutkowski, Mike 
wrote:

> Hi,
>
> Has anyone else noticed that the Create Zone wizard’s navigation buttons
> are placed below the bottom edge of the wizard? I just saw this on master
> today using Chrome.
>
> Thanks,
> Mike
>



-- 
Rafael Weingärtner




GUI Wizard Issue on master

2018-08-01 Thread Tutkowski, Mike
Hi,

Has anyone else noticed that the Create Zone wizard’s navigation buttons are 
placed below the bottom edge of the wizard? I just saw this on master today 
using Chrome.

Thanks,
Mike


Re: GUI Wizard Issue on master

2018-08-01 Thread Tutkowski, Mike
Also, there seems to be a weird outline around the Create Instance wizard, as 
well.

On 8/1/18, 3:28 PM, "Tutkowski, Mike"  wrote:

Here’s what it looks like:

https://imgur.com/a/cV7pc9L

On 8/1/18, 3:25 PM, "Tutkowski, Mike"  wrote:

I don’t have Firefox installed, but I see the same problem on both 
Chrome and Safari.

On 8/1/18, 3:18 PM, "Rafael Weingärtner"  
wrote:

Are you seeing this only with Chrome? Or is it the same in Firefox 
as well?

On Wed, Aug 1, 2018 at 6:13 PM, Tutkowski, Mike 

wrote:

> Hi,
>
> Has anyone else noticed that the Create Zone wizard’s navigation 
buttons
> are placed below the bottom edge of the wizard? I just saw this 
on master
> today using Chrome.
>
> Thanks,
> Mike
>



-- 
Rafael Weingärtner








Re: GUI Wizard Issue on master

2018-08-01 Thread Tutkowski, Mike
Here’s what it looks like:

https://imgur.com/a/cV7pc9L

On 8/1/18, 3:25 PM, "Tutkowski, Mike"  wrote:

I don’t have Firefox installed, but I see the same problem on both Chrome 
and Safari.

On 8/1/18, 3:18 PM, "Rafael Weingärtner"  
wrote:

Are you seeing this only with Chrome? Or is it the same in Firefox as 
well?

On Wed, Aug 1, 2018 at 6:13 PM, Tutkowski, Mike 

wrote:

> Hi,
>
> Has anyone else noticed that the Create Zone wizard’s navigation 
buttons
> are placed below the bottom edge of the wizard? I just saw this on 
master
> today using Chrome.
>
> Thanks,
> Mike
>



-- 
Rafael Weingärtner






Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-25 Thread Tutkowski, Mike
Thanks, Rafael – I should be able to take a look at this later today.

On 7/23/18, 6:58 AM, "Rafael Weingärtner"  wrote:

Hey Mike, PR created: https://github.com/apache/cloudstack/pull/2761
Can you take a look at it?

On Tue, Jul 17, 2018 at 4:35 PM, Tutkowski, Mike 
wrote:

> Correct, I happened to find it while testing a PR of mine targeted at
> master.
>
> > On Jul 17, 2018, at 1:30 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
> >
> > Correct. I do think the problem here is only in the release notes.
> >
> > Just to confirm, you found the problem while testing 4.12 (from master),
> > right?
> >
> > On Tue, Jul 17, 2018 at 4:22 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> > wrote:
> >
> >> Cool, if it’s just in master, then that makes it easier.
> >>
> >> Also, it means we did not have a process issue by introducing
> enhancement
> >> code in between release candidates.
> >>
> >> It would mean, however, that our documentation is a bit incorrect if, 
in
> >> fact, it states that that feature exists in 4.11.1.
> >>
> >>> On Jul 17, 2018, at 1:20 PM, Rafael Weingärtner <
> >> rafaelweingart...@gmail.com> wrote:
> >>>
    > >>> Ok, thanks. I had the impression that we said it was backported to
> 4.11.
> >>>
> >>> I will get master and work on it then.
> >>>
> >>> On Tue, Jul 17, 2018 at 4:12 PM, Tutkowski, Mike <
> >> mike.tutkow...@netapp.com>
> >>> wrote:
> >>>
> >>>> I only noticed it in master. The example code I was comparing it
> against
> >>>> was from 4.11.0. I never checked against 4.11.1.
> >>>>
> >>>>> On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner <
> >>>> rafaelweingart...@gmail.com> wrote:
> >>>>>
> >>>>> Hey Mike, I got the branch 4.11 to start fixing the problem we
> >> discussed,
> >>>>> but I do not think my commit was backported to 4.11. I mean, I am at
> >>>>> "VirtualMachineManagerImpl" and the code is not here. I also checked
> >> the
> >>>>> commit (
> >>>>> https://github.com/apache/cloudstack/commit/
> >>>> f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
> >>>>> that introduced those changes to master, and according to Github, it
> is
> >>>>> only in the master branch, and not in 4.11.
> >>>>>
> >>>>> I checked the "VirtualMachineManagerImpl" class at the Apache
> >> CloudStack
> >>>>> remote repository in the 4.11 branch, and as you can see, the code
> >> there
> >>>> is
> >>>>> the “old”   one.
> >>>>> https://github.com/apache/cloudstack/blob/4.11/engine/
> >>>> orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
> >>>>>
> >>>>> I got a little confused now. Did you detect the problem in 4.11 or 
in
> >>>>> master?
> >>>>>
> >>>>>
> >>>>> On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike <
> >>>> mike.tutkow...@netapp.com
> >>>>>> wrote:
> >>>>>
> >>>>>> Another comment here: The part that is broken is if you try to let
> >>>>>> CloudStack pick the primary storage on the destination side. That
> code
> >>>> no
> >>>>>> longer exists in 4.11.1.
> >>>>>>
> >>>>>> On 7/16/18, 9:24 PM, "Tutkowski, Mike" 
> >>>> wrote:
> >>>>>>
> >>>>>>  To follow up on this a bit: Yes, you should be able to migrate a 
VM
> >>>>>> and its storage from one cluster to another today using non-managed
> >>>>>> (traditional) primary storage with XenServer (both the source and
> >>>>>> destination primary storages would be cluster scoped). However, 
that
> >> is
> >>>> one
> >>>>>> of the features that was broken in 4.11.1 tha

Re: [GitHub] rafaelweingartner commented on a change in pull request #2761: Add managed storage pool constraints to MigrateWithVolume API method

2018-07-25 Thread Tutkowski, Mike
That is correct (that is how the old behavior worked).

On 7/25/18, 2:56 PM, "GitBox"  wrote:

rafaelweingartner commented on a change in pull request #2761: Add managed 
storage pool constraints to MigrateWithVolume API method
URL: https://github.com/apache/cloudstack/pull/2761#discussion_r205259766
 
 

 ##
 File path: 
engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java
 ##
 @@ -2282,7 +2282,7 @@ protected void migrate(final VMInstanceVO vm, final 
long srcHostId, final Deploy
  * Create the mapping of volumes and storage pools. If the user did 
not enter a mapping on her/his own, we create one using {@link 
#getDefaultMappingOfVolumesAndStoragePoolForMigration(VirtualMachineProfile, 
Host)}.
  * If the user provided a mapping, we use whatever the user has 
provided (check the method {@link 
#createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile, Host, 
Map)}).
  */
-private Map 
getPoolListForVolumesForMigration(VirtualMachineProfile profile, Host 
targetHost, Map volumeToPool) {
+protected Map 
getPoolListForVolumesForMigration(VirtualMachineProfile profile, Host 
targetHost, Map volumeToPool) {
 
 Review comment:
    Ok, so to summarize: if the VM has two volumes, you want to be able to 
define the migration for one of them, and the other, which was not specified, 
should be taken care of by ACS. Is that it? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




Re: advice needed: snapshot DB/table cleanup

2018-08-17 Thread Tutkowski, Mike
Hi Andrija,

I like the way you wrote your SQL for the cloud.snapshot_store_ref table. It 
will make sure SolidFire snapshots remain in that table. Even if some of those 
are in the Deleted state, that’s OK. Perhaps the system will clean them up as 
expected in newer versions of CloudStack (otherwise, you can always delete them 
later).

Also, your plan for marking deleted snapshots as deleted with a removed date 
sounds good to me.
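
In case it helps, here is a rough sketch of that second step as I understand it 
(untested; it assumes the database is named "cloud" and excludes the SolidFire 
rows via min_iops the same way your first query does - please take a DB backup 
first):

    mysql cloud <<'SQL'
    -- give destroyed-but-not-removed snapshots a removed date
    UPDATE snapshots SET removed = NOW()
     WHERE status = 'Destroyed' AND removed IS NULL AND min_iops IS NULL;
    -- and mark removed-but-still-Ready snapshots as Destroyed
    UPDATE snapshots SET status = 'Destroyed'
     WHERE removed IS NOT NULL AND status <> 'Destroyed' AND min_iops IS NULL;
    SQL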

Talk to you later!
Mike

On 8/17/18, 9:39 AM, "Andrija Panic"  wrote:

Hi Daan, thx for the reply,

so yes, I would not touch the SF snapshots just for the sake of it - otherwise I
could remove them in the same fashion.

the second "is this the same" - its not -first SQL -  I delete from
snapshot_store_ref where the template is either DESTROYED or has REMOVED
date set in the main "snapshots" table. then later I just make sure (2nd
SQL) in the main "snapshot" table that if either the REMOVED or DESTROYED
is set - that I also set the missing value :) (previously all store_ref is
gone because of 1st SQL...)

As for the garbage, this has been happening from 4.5 up to now (4.8) for goodness
knows what reason - you remove something, and either the status is set to
DESTROYED with no removal date (mostly it's only for snapshots; I don't
recall seeing it on other resources), or the removed date is set but the state is
still READY (I actually only recently saw this, and only on snapshots -
can't be sure if this is because of ACS, or because of someone changing the DB;
in case snaps are in the ERROR or ALLOCATED state you simply have to
alter the DB, there is no way to clean up via the API). When you deal with CEPH and
long-running snapshots, different gremlins can happen from time to time
- my experience at least...


Hope that makes sense (my answers)

Cheers
ANdrija

On Fri, 17 Aug 2018 at 16:40, Daan Hoogland  wrote:

> andrija,
>
> On Fri, Aug 17, 2018 at 11:23 AM, Andrija Panic 
> wrote:
>
> > HI guys, hi Mike.T.,
> >
> > we have removed all NFS and CEPH storages, and are now purely running on
> > SolidFire (KVM).
> >
> > Now I want to do serious snapshot cleanup (for reason explained at the
> end
> > of email) - since "snapshots" and "snapshot_store_ref" tables are a
> > complete mess (i.e. snapshot is destroyed with/without "removed" date,
> and
> > then there are still references in snapshot_store_ref to these 
fully/half
> > destroyed snapshots...)
> >
> > I would like to ask for a tip - based on my common sense and experience,
> I
> > was thinking on doing something like following:
> >
> > SQL:
> > delete from snapshot_store_ref where snapshot_id in (select id from
> > snapshots where status="destroyed" or removed is NOT NULL and min_iops 
is
> > NULL)
> >
> why do you want to keep the solidfire snapshots when removed?
>
>
>
> >
> > This last "min_iops is NULL" is identifier for snaps that are NOT on
> > SolidFire - I would not touch SF snapshots) - i.e. all snapshots that 
are
> > created from SF volumes have min_iops and max_iops values set - so I 
just
> > exclude them here
> >
> > - So - above I want to remove all references for snapshots that are
> > fully/semi destroyed (status=destroyed but no removed date - or other 
way
> > around - those that have "removed" date but status=Ready.)
> >
> isn't this the same as below?
>
>
>
> >
> > Then I was also thinking does it make sense, to also set (in "snapshots"
> > table) status=Destroyed where removed is NOT NULL and other way around -
> > set removed date where status=Destroyed.
> >
> isn't this the same as above?
>
>
> Also, once you have cleaned all snapshots that are not for solidfire, I would
> first do a check, as your mess should be largely cleaned by then,
> and if a few snapshots still jump out, better to investigate those to see if
> you can find any root cause for the failure.
>
>
> > Sorry for long question - but I had issues with some snaps referencing
> CEPH
> > (and we removed CEPH/NFS from ACS GUI) - i.e. client was unable to list
> > snaps for his account, because some volumes had snaps that were
> referencing
> > CEPH (though they are migrated to SF or deleted)...
> >
> > Thanks a lot
> >
> hope my loose hipshots help,
>
>
>
> >
> > Andrija
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
> Daan
>


-- 

Andrija Panić




Another GUI Issue on master

2018-08-27 Thread Tutkowski, Mike
Hi everyone,

I’ve encountered another GUI issue on master: https://imgur.com/a/RCLPAfB.

This one appears when I try to perform a cross-cluster VM migration on 
XenServer.

We seem to have regressed quite a bit with regards to the GUI on master (this 
is one of around 3 issues that have popped up this release related to the GUI). 
Did we swap in a new GUI library or something that is responsible for all of 
these issues?

Thanks,
Mike


Re: Another GUI Issue on master

2018-08-27 Thread Tutkowski, Mike
That looks like the issue I was seeing – thanks!

On 8/27/18, 7:45 PM, "Rafael Weingärtner"  wrote:

Can you check this PR https://github.com/apache/cloudstack/pull/2803?

There was the jQuery-UI upgrade. I think PRs coming from 4.11 branch are
causing these problems. The new jQuery UI requires us to handle the closing
of modal popups in a different manner.

On Mon, Aug 27, 2018 at 5:15 PM, Tutkowski, Mike 
wrote:

> Hi everyone,
>
> I’ve encountered another GUI issue on master: https://imgur.com/a/RCLPAfB.
>
> This one appears when I try to perform a cross-cluster VM migration on
> XenServer.
>
> We seem to have regressed quite a bit with regards to the GUI on master
> (this is one of around 3 issues that have popped up this release related 
to
> the GUI). Did we swap in a new GUI library or something that is 
responsible
> for all of these issues?
>
> Thanks,
> Mike
>



-- 
Rafael Weingärtner




Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-15 Thread Tutkowski, Mike
Hi,

While running managed-storage regression tests tonight, I noticed a problem 
that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of a virtual 
disk that’s on local storage to shared storage. In the process of enabling this 
feature, the VirtualMachineManagerImpl.getPoolListForVolumesForMigration method 
was re-written in a way that completely breaks at least one use case: Migrating 
a VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It also seems 
that the managed-storage logic has been dropped for some reason in the new 
implementation.

Rafael – It seems that you worked on this feature. Would you be able to look 
into this and create a PR?

Thanks,
Mike


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Tutkowski, Mike
Cool, if it’s just in master, then that makes it easier.

Also, it means we did not have a process issue by introducing enhancement code 
in between release candidates.

It would mean, however, that our documentation is a bit incorrect if, in fact, 
it states that that feature exists in 4.11.1.

> On Jul 17, 2018, at 1:20 PM, Rafael Weingärtner  
> wrote:
> 
> Ok, thanks. I had the impression that we said it was backported to 4.11.
> 
> I will get master and work on it then.
> 
> On Tue, Jul 17, 2018 at 4:12 PM, Tutkowski, Mike 
> wrote:
> 
>> I only noticed it in master. The example code I was comparing it against
>> was from 4.11.0. I never checked against 4.11.1.
>> 
>>> On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>>> 
>>> Hey Mike, I got the branch 4.11 to start fixing the problem we discussed,
>>> but I do not think my commit was backported to 4.11. I mean, I am at
>>> "VirtualMachineManagerImpl" and the code is not here. I also checked the
>>> commit (
>>> https://github.com/apache/cloudstack/commit/
>> f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
>>> that introduced those changes to master, and according to Github, it is
>>> only in the master branch, and not in 4.11.
>>> 
>>> I checked the "VirtualMachineManagerImpl" class at the Apache CloudStack
>>> remote repository in the 4.11 branch, and as you can see, the code there
>> is
>>> the “old”   one.
>>> https://github.com/apache/cloudstack/blob/4.11/engine/
>> orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
>>> 
>>> I got a little confused now. Did you detect the problem in 4.11 or in
>>> master?
>>> 
>>> 
>>> On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike <
>> mike.tutkow...@netapp.com
>>>> wrote:
>>> 
>>>> Another comment here: The part that is broken is if you try to let
>>>> CloudStack pick the primary storage on the destination side. That code
>> no
>>>> longer exists in 4.11.1.
>>>> 
>>>> On 7/16/18, 9:24 PM, "Tutkowski, Mike" 
>> wrote:
>>>> 
>>>>   To follow up on this a bit: Yes, you should be able to migrate a VM
>>>> and its storage from one cluster to another today using non-managed
>>>> (traditional) primary storage with XenServer (both the source and
>>>> destination primary storages would be cluster scoped). However, that is
>> one
>>>> of the features that was broken in 4.11.1 that we are discussing in this
>>>> thread.
>>>> 
>>>>   On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
>>>> wrote:
>>>> 
>>>>   For a bit of info on what managed storage is, please take a look
>>>> at this document:
>>>> 
>>>>   https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
>>>> 20in%20CloudStack.docx?dl=0
>>>> 
>>>>   The short answer is that you can have zone-wide managed storage
>>>> (for XenServer, VMware, and KVM). However, there is no current zone-wide
>>>> non-managed storage for XenServer.
>>>> 
>>>>   On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:
>>>> 
>>>>   I assume by "managed storage", you guys mean primary
>> storages,
>>>> either zone -wide or cluster-wide.
>>>> 
>>>>   For Xen hypervisor, ACS does not support "zone-wide" primary
>>>> storage yet. Still, I can live migrate a VM with data disks between
>>>> clusters with storage migration from web GUI, today.  So, your statement
>>>> below does not reflect current behavior of the code.
>>>> 
>>>> 
>>>>  - If I want to migrate a VM across clusters, but
>> if
>>>> at least one of its
>>>>  volumes is placed in a cluster-wide managed
>>>> storage, the migration is not
>>>>  allowed. Is that it?
>>>> 
>>>>   [Mike] Correct
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> --
>>> Rafael Weingärtner
>> 
> 
> 
> 
> -- 
> Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Tutkowski, Mike
Correct, I happened to find it while testing a PR of mine targeted at master.

> On Jul 17, 2018, at 1:30 PM, Rafael Weingärtner  
> wrote:
> 
> Correct. I do think the problem here is only in the release notes.
> 
> Just to confirm, you found the problem while testing 4.12 (from master),
> right?
> 
> On Tue, Jul 17, 2018 at 4:22 PM, Tutkowski, Mike 
> wrote:
> 
>> Cool, if it’s just in master, then that makes it easier.
>> 
>> Also, it means we did not have a process issue by introducing enhancement
>> code in between release candidates.
>> 
>> It would mean, however, that our documentation is a bit incorrect if, in
>> fact, it states that that feature exists in 4.11.1.
>> 
>>> On Jul 17, 2018, at 1:20 PM, Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>>> 
>>> Ok, thanks. I had the impression that we said it was backported to 4.11.
>>> 
>>> I will get master and work on it then.
>>> 
>>> On Tue, Jul 17, 2018 at 4:12 PM, Tutkowski, Mike <
>> mike.tutkow...@netapp.com>
>>> wrote:
>>> 
>>>> I only noticed it in master. The example code I was comparing it against
>>>> was from 4.11.0. I never checked against 4.11.1.
>>>> 
>>>>> On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner <
>>>> rafaelweingart...@gmail.com> wrote:
>>>>> 
>>>>> Hey Mike, I got the branch 4.11 to start fixing the problem we
>> discussed,
>>>>> but I do not think my commit was backported to 4.11. I mean, I am at
>>>>> "VirtualMachineManagerImpl" and the code is not here. I also checked
>> the
>>>>> commit (
>>>>> https://github.com/apache/cloudstack/commit/
>>>> f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
>>>>> that introduced those changes to master, and according to Github, it is
>>>>> only in the master branch, and not in 4.11.
>>>>> 
>>>>> I checked the "VirtualMachineManagerImpl" class at the Apache
>> CloudStack
>>>>> remote repository in the 4.11 branch, and as you can see, the code
>> there
>>>> is
>>>>> the “old”   one.
>>>>> https://github.com/apache/cloudstack/blob/4.11/engine/
>>>> orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
>>>>> 
>>>>> I got a little confused now. Did you detect the problem in 4.11 or in
>>>>> master?
>>>>> 
>>>>> 
>>>>> On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike <
>>>> mike.tutkow...@netapp.com
>>>>>> wrote:
>>>>> 
>>>>>> Another comment here: The part that is broken is if you try to let
>>>>>> CloudStack pick the primary storage on the destination side. That code
>>>> no
>>>>>> longer exists in 4.11.1.
>>>>>> 
>>>>>> On 7/16/18, 9:24 PM, "Tutkowski, Mike" 
>>>> wrote:
>>>>>> 
>>>>>>  To follow up on this a bit: Yes, you should be able to migrate a VM
>>>>>> and its storage from one cluster to another today using non-managed
>>>>>> (traditional) primary storage with XenServer (both the source and
>>>>>> destination primary storages would be cluster scoped). However, that
>> is
>>>> one
>>>>>> of the features that was broken in 4.11.1 that we are discussing in
>> this
>>>>>> thread.
>>>>>> 
>>>>>>  On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
>>>>>> wrote:
>>>>>> 
>>>>>>  For a bit of info on what managed storage is, please take a look
>>>>>> at this document:
>>>>>> 
>>>>>>  https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
>>>>>> 20in%20CloudStack.docx?dl=0
>>>>>> 
>>>>>>  The short answer is that you can have zone-wide managed storage
>>>>>> (for XenServer, VMware, and KVM). However, there is no current
>> zone-wide
>>>>>> non-managed storage for XenServer.
>>>>>> 
>>>>>>  On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:
>>>>>> 
>>>>>>  I assume by "managed storage", you guys mean primary
>>>> storages,
>>>>>> either zone -wide or cluster-wide.
>>>>>> 
>>>>>>  For Xen hypervisor, ACS does not support "zone-wide" primary
>>>>>> storage yet. Still, I can live migrate a VM with data disks between
>>>>>> clusters with storage migration from web GUI, today.  So, your
>> statement
>>>>>> below does not reflect current behavior of the code.
>>>>>> 
>>>>>> 
>>>>>> - If I want to migrate a VM across clusters, but
>>>> if
>>>>>> at least one of its
>>>>>> volumes is placed in a cluster-wide managed
>>>>>> storage, the migration is not
>>>>>> allowed. Is that it?
>>>>>> 
>>>>>>  [Mike] Correct
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Rafael Weingärtner
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Rafael Weingärtner
>> 
> 
> 
> 
> -- 
> Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Tutkowski, Mike
I only noticed it in master. The example code I was comparing it against was 
from 4.11.0. I never checked against 4.11.1.

> On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner  
> wrote:
> 
> Hey Mike, I got the branch 4.11 to start fixing the problem we discussed,
> but I do not think my commit was backported to 4.11. I mean, I am at
> "VirtualMachineManagerImpl" and the code is not here. I also checked the
> commit (
> https://github.com/apache/cloudstack/commit/f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
> that introduced those changes to master, and according to Github, it is
> only in the master branch, and not in 4.11.
> 
> I checked the "VirtualMachineManagerImpl" class at the Apache CloudStack
> remote repository in the 4.11 branch, and as you can see, the code there is
> the “old”   one.
> https://github.com/apache/cloudstack/blob/4.11/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
> 
> I got a little confused now. Did you detect the problem in 4.11 or in
> master?
> 
> 
> On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike > wrote:
> 
>> Another comment here: The part that is broken is if you try to let
>> CloudStack pick the primary storage on the destination side. That code no
>> longer exists in 4.11.1.
>> 
>> On 7/16/18, 9:24 PM, "Tutkowski, Mike"  wrote:
>> 
>>To follow up on this a bit: Yes, you should be able to migrate a VM
>> and its storage from one cluster to another today using non-managed
>> (traditional) primary storage with XenServer (both the source and
>> destination primary storages would be cluster scoped). However, that is one
>> of the features that was broken in 4.11.1 that we are discussing in this
>> thread.
>> 
>>On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
>> wrote:
>> 
>>For a bit of info on what managed storage is, please take a look
>> at this document:
>> 
>>https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
>> 20in%20CloudStack.docx?dl=0
>> 
>>The short answer is that you can have zone-wide managed storage
>> (for XenServer, VMware, and KVM). However, there is no current zone-wide
>> non-managed storage for XenServer.
>> 
>>On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:
>> 
>>I assume by "managed storage", you guys mean primary storages,
>> either zone -wide or cluster-wide.
>> 
>>For Xen hypervisor, ACS does not support "zone-wide" primary
>> storage yet. Still, I can live migrate a VM with data disks between
>> clusters with storage migration from web GUI, today.  So, your statement
>> below does not reflect current behavior of the code.
>> 
>> 
>>   - If I want to migrate a VM across clusters, but if
>> at least one of its
>>   volumes is placed in a cluster-wide managed
>> storage, the migration is not
>>   allowed. Is that it?
>> 
>>[Mike] Correct
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
When I ran my suite of tests on 4.11.1, I did not encounter this issue. Also, 
looking at the code now, it appears this new code is first in 4.12.

On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:


Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to release note 
here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storages and I am about to 
migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between clusters a lot, 
this is going to be a blocker for us.  Someone please confirm.

Thanks

Yiping


On 7/14/18, 11:20 PM, "Tutkowski, Mike"  wrote:

Hi,

While running managed-storage regression tests tonight, I noticed a 
problem that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of a 
virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It also 
seems that the managed-storage logic has been dropped for some reason in the 
new implementation.

Rafael – It seems that you worked on this feature. Would you be able to 
look into this and create a PR?

Thanks,
Mike






Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests were run 
against 4.11.1. I thought we only allowed bug fixes when going to a new RC, but 
it appears we are not strictly enforcing that rule.

On 7/16/18, 1:40 PM, "Tutkowski, Mike"  wrote:

When I ran my suite of tests on 4.11.1, I did not encounter this issue. 
Also, looking at the code now, it appears this new code is first in 4.12.

On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:


Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to release 
note here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storages and I am about 
to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between clusters a 
lot, this is going to be a blocker for us.  Someone please confirm.

Thanks

Yiping


    On 7/14/18, 11:20 PM, "Tutkowski, Mike"  
wrote:

Hi,

While running managed-storage regression tests tonight, I noticed a 
problem that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of 
a virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It 
also seems that the managed-storage logic has been dropped for some reason in 
the new implementation.

Rafael – It seems that you worked on this feature. Would you be 
able to look into this and create a PR?

Thanks,
Mike








Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
I should be able to do so soon. I’ve been in meetings all day. I’ll try to 
investigate this before my next meeting starts.

On 7/16/18, 1:45 PM, "Rafael Weingärtner"  wrote:

Yes, that is what happened. I also followed this principle. That is why I
create the PR against master, but I think people are not following this.

Mike, can you provide me some feedback regarding those two inquiries? Then,
we can fix this quickly.

On Mon, Jul 16, 2018 at 4:42 PM, Tutkowski, Mike 
wrote:

> OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests were
> run against 4.11.1. I thought we only allowed bug fixes when going to a 
new
> RC, but it appears we are not strictly enforcing that rule.
>
> On 7/16/18, 1:40 PM, "Tutkowski, Mike"  wrote:
>
> When I ran my suite of tests on 4.11.1, I did not encounter this
> issue. Also, looking at the code now, it appears this new code is first in
> 4.12.
>
> On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:
>
>
> Is this code already in ACS 4.11.1.0?
>
> CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to
> release note here, http://docs.cloudstack.apache.org/projects/cloudstack-
> release-notes/ja/master/fixed_issues.html, but in the JIRA ticket itself,
> the "fixed version/s" field says 4.12.
>
> We are using XenServer clusters with shared NFS storages and I am
> about to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between
> clusters a lot, this is going to be a blocker for us.  Someone please
> confirm.
    >
> Thanks
>
> Yiping
>
>
> On 7/14/18, 11:20 PM, "Tutkowski, Mike" 

> wrote:
>
> Hi,
>
> While running managed-storage regression tests tonight, I
> noticed a problem that is not related to managed storage.
>
> CLOUDSTACK-10240 is a ticket asking that we allow the
> migration of a virtual disk that’s on local storage to shared storage. In
> the process of enabling this feature, the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method was re-written in a way that
> completely breaks at least one use case: Migrating a VM across compute
> clusters (at least supported in XenServer). If, say, a virtual disk 
resides
> on shared storage in the source compute cluster, we must be able to copy
> this virtual disk to shared storage in the destination compute cluster.
>
> As the code is currently written, this is no longer possible.
> It also seems that the managed-storage logic has been dropped for some
> reason in the new implementation.
>
> Rafael – It seems that you worked on this feature. Would you
> be able to look into this and create a PR?
>
> Thanks,
> Mike
>
>
>
>
>
>
>


-- 
Rafael Weingärtner




Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
I think I understand the confusion here.

Rafael’s code was put into 4.11.1, but not into the initial release candidate 
(RC). In fact, the most recent version of 4.11 that has been released is 
4.11.1. Somehow Rafael’s code (which is an enhancement) was merged into 4.11 
during the RC process. This is why my automated tests did not find it. I ran 
them against 4.11.1 RC1 and his code was put in after the first RC.

It looks like we had a bit of a process issue during the RC process as only bug 
fixes should be going into the next RC.

In any event, this means the documentation (at least in this regard) should be 
fine for 4.11.1. Also, no 4.11.2 (or 4.11.3) has been publicly released. We 
seem to have been getting those confused with RCs in our e-mail chain here.

On 7/16/18, 1:46 PM, "Yiping Zhang"  wrote:

Why is it listed as fixed in 4.11.1.0 in the release note, If the code only 
exist in 4.11.2?



On 7/16/18, 12:43 PM, "Tutkowski, Mike"  wrote:

OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests 
were run against 4.11.1. I thought we only allowed bug fixes when going to a 
new RC, but it appears we are not strictly enforcing that rule.

On 7/16/18, 1:40 PM, "Tutkowski, Mike"  
wrote:

When I ran my suite of tests on 4.11.1, I did not encounter this 
issue. Also, looking at the code now, it appears this new code is first in 4.12.

On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:


Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to 
release note here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storages and I 
am about to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between 
clusters a lot, this is going to be a blocker for us.  Someone please confirm.

Thanks

Yiping

    
On 7/14/18, 11:20 PM, "Tutkowski, Mike" 
 wrote:

Hi,

While running managed-storage regression tests tonight, I 
noticed a problem that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the 
migration of a virtual disk that’s on local storage to shared storage. In the 
process of enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer 
possible. It also seems that the managed-storage logic has been dropped for 
some reason in the new implementation.

Rafael – It seems that you worked on this feature. Would 
you be able to look into this and create a PR?

Thanks,
Mike












Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
...before starting to code. Thanks for spotting
this issue.

On Sun, Jul 15, 2018 at 9:11 PM, Tutkowski, Mike 
wrote:

> Hi Rafael,
>
> Thanks for your time on this.
>
> Here is an example where the new code deviates from the old code in a
> critical fashion (code right below is new):
>
> private Map getDefaultMappingOfVolumesAndS
> toragePoolForMigration(VirtualMachineProfile profile, Host targetHost) {
> Map volumeToPoolObjectMap = new
> HashMap();
> List allVolumes = _volsDao.findUsableVolumesForInstance(
> profile.getId());
> for (VolumeVO volume : allVolumes) {
> StoragePoolVO currentPool = _storagePoolDao.findById(
> volume.getPoolId());
> if (ScopeType.HOST.equals(currentPool.getScope())) {
> createVolumeToStoragePoolMappingIfNeeded(profile,
> targetHost, volumeToPoolObjectMap, volume, currentPool);
> } else {
> volumeToPoolObjectMap.put(volume, currentPool);
> }
> }
> return volumeToPoolObjectMap;
> }
>
> What happens in the new code (above) is if the user didn’t pass in a
> storage pool to migrate the virtual disk to (but the VM is being migrated
> to a new cluster), this code just assigns the virtual disk to its current
> storage pool (which is not going to be visible to any of the hosts in the
> new compute cluster).
>
> In the old code (I’m looking at 4.11.3 here), you could look around line
> 2337 for the following code (in the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method):
>
> // Find a suitable pool for the volume. Call the
> storage pool allocator to find the list of pools.
>
> final DiskProfile diskProfile = new
> DiskProfile(volume, diskOffering, profile.getHypervisorType());
> final DataCenterDeployment plan = new
> DataCenterDeployment(host.getDataCenterId(), host.getPodId(),
> host.getClusterId(),
> host.getId(), null, null);
>
> final List poolList = new ArrayList<>();
> final ExcludeList avoid = new ExcludeList();
>
> for (final StoragePoolAllocator allocator :
> _storagePoolAllocators) {
> final List poolListFromAllocator =
> allocator.allocateToPool(diskProfile, profile, plan, avoid,
> StoragePoolAllocator.RETURN_UPTO_ALL);
>
> if (poolListFromAllocator != null &&
> !poolListFromAllocator.isEmpty()) {
> poolList.addAll(poolListFromAllocator);
> }
> }
>
> This old code would find an applicable storage pool in the destination
> cluster (one that can be seen by the hosts in that compute cluster).
>
> I think the main error in the new logic is the assumption that a VM can
> only be migrated to a host in the same compute cluster. For XenServer
> (perhaps for other hypervisor types?), we support cross-cluster VM
> migration.
>
> The other issue I noticed is that there is no logic in the new code that
> checks for managed-storage use cases. If you look in the
> VirtualMachineManagerImpl.getPoolListForVolumesForMigration method in the
> old code, there is special handling for managed storage. I don’t see this
> reproduced in the new logic.
>
> I sympathize with your point that all tests passed yet this issue was not
> uncovered. Unfortunately, I suspect we have a fairly low % coverage of
> automated tests on CloudStack. If we ever did get to a high % of automated
> test coverage, we might be able to spin up new releases more frequently. 
As
> the case stands today, however, there are probably many un-tested use 
cases
> when it comes to our automated suite of tests.
>
> Thanks again!
> Mike
>
> On 7/15/18, 4:19 PM, "Rafael Weingärtner" 
> wrote:
>
> Mike, are you able to pin-point in the old/replaced code the bit that
> was
> handling your use case?  I took the most care not to break anything.
> Also, your test case, isn't it in the ACS' integration test suite? In
> theory, all test passed when we merged the PR.
>
> I sure can take a look at it. Can you detail your use case? I mean, 
the
> high level execution flow. What API methods you

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
Actually, I think I answered both of your questions with these two prior 
e-mails. Please let me know if you need further clarification. Thanks!

On 7/16/18, 2:17 PM, "Tutkowski, Mike"  wrote:

Allow me to correct what I said here:

“If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).”

I should have said the following:

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked on a VM 
that is using managed storage that is only at the cluster level (managed 
storage can be at either the zone or cluster level) and we are trying to 
migrate the VM from one cluster to another, this operation should fail (as the 
old code detects). The new code tries to keep the volume in the same storage 
pool (but that storage pool will not be visible to the hosts in the destination 
compute cluster).

On 7/16/18, 2:10 PM, "Tutkowski, Mike"  wrote:

Let me answer the questions in two separate e-mails.

This answer deals with what you wrote about this code:

> if (destPool.getId() == currentPool.getId()) {
> volumeToPoolObjectMap.put(volume, currentPool);
> } else {
>  throw new CloudRuntimeException("Currently, a volume on 
managed
> storage can only be 'migrated' to itself.");
> }
>

The code above is invoked if the user tries to migrate a volume that’s 
on managed storage to another storage pool. At present, such volumes can be 
migrated when a VM is migrated from one compute cluster to another, but those 
volumes have to remain on the same managed storage.

Here’s an example:

Let’s say VM_1 is in Cluster_1. VM_1 has a root (or data) disk on 
managed storage. We try to migrate the VM from Cluster_1 to Cluster_2 and 
specify a new storage pool for the volume. This case should fail. To make it 
work, you need to either 1) not specify a new storage pool or 2) specify the 
same storage pool the volume is already in. If the managed storage in question 
is zone wide, then it can be used from both Cluster_1 and Cluster_2.

The new code might call 
getDefaultMappingOfVolumesAndStoragePoolForMigration (if no storage pools at 
all are passed in to the API) or it might call 
createMappingVolumeAndStoragePoolEnteredByUser.

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).

If createMappingVolumeAndStoragePoolEnteredByUser is invoked, we seem 
to have a bigger problem (code is below):

I do not believe you are required to pass in a new storage pool for 
each and every volume of the VM. If the VM has, say, three volumes, you may 
only try to migrate two of the volumes to new storage pools. This logic seems 
to assume if you want to migrate one of the VM’s volumes, then you necessarily 
want to migrate all of the VM’s volumes. I believe it’s possible for targetPool 
to come back null and later throw a NullPointerException. The old code walks 
through each volume of the VM and checks if there is a new storage pool 
specified for it. If so, do one thing; else, do something else.

private Map 
createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile profile, 
Host host, Map volumeToPool) {
Map volumeToPoolObjectMap = new 
HashMap();
for(Long volumeId: volumeToPool.keySet()) {
VolumeVO volume = _volsDao.findById(volumeId);

Long poolId = volumeToPool.get(volumeId);
StoragePoolVO targetPool = _storagePoolDao.findById(poolId);
StoragePoolVO currentPool = 
_storagePoolDao.findById(volume.getPoolId());

if (_poolHostDao.findByPoolHost(targetPool.getId(), 
host.getId()) == null) {
throw new CloudRuntimeException(String.format("Cannot 
migrate the volume [%s] to the storage pool [%s] while migrating VM [%s] to 
target host [%s]. The host does not have access to the storage pool entered.", 
volume.getUuid(), targetPool.getUuid(), profile.getUuid(), host.getUuid()));
}
if (currentPool.getId() == targetPool.getId()) {
s_logger.info(String.format("The volume [%s] is already 
allo

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
For your feature, Rafael, are you trying to support the migration of a VM that 
has local storage from one cluster to another or is intra-cluster migration of 
local storage sufficient?

There is the migrateVolume API (you can pass in “live migrate” parameter):

http://cloudstack.apache.org/api/apidocs-4.11/apis/migrateVolume.html

There is also the migrateVirtualMachineWithVolume (one or more volumes). This 
is especially useful for moving a VM with its storage from one cluster to 
another:

http://cloudstack.apache.org/api/apidocs-4.11/apis/migrateVirtualMachineWithVolume.html
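
For reference, a migrateVirtualMachineWithVolume call looks roughly like the following (parameter names from memory, so please double-check them against the apidocs linked above; all IDs are placeholders):

    command=migrateVirtualMachineWithVolume
    virtualmachineid=<vm-uuid>
    hostid=<destination-host-uuid>
    migrateto[0].volume=<volume-uuid>
    migrateto[0].pool=<destination-pool-uuid>

If you omit the migrateto[] entries, CloudStack picks the destination pools itself - which is exactly the path that got broken here.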

On 7/16/18, 2:20 PM, "Tutkowski, Mike"  wrote:

Actually, I think I answered both of your questions with these two prior 
e-mails. Please let me know if you need further clarification. Thanks!

On 7/16/18, 2:17 PM, "Tutkowski, Mike"  wrote:

Allow me to correct what I said here:

“If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).”

I should have said the following:

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked on a 
VM that is using managed storage that is only at the cluster level (managed 
storage can be at either the zone or cluster level) and we are trying to 
migrate the VM from one cluster to another, this operation should fail (as the 
old code detects). The new code tries to keep the volume in the same storage 
pool (but that storage pool will not be visible to the hosts in the destination 
compute cluster).

On 7/16/18, 2:10 PM, "Tutkowski, Mike"  
wrote:

Let me answer the questions in two separate e-mails.

This answer deals with what you wrote about this code:

> if (destPool.getId() == currentPool.getId()) {
> volumeToPoolObjectMap.put(volume, currentPool);
> } else {
>  throw new CloudRuntimeException("Currently, a volume on 
managed
> storage can only be 'migrated' to itself.");
> }
>

The code above is invoked if the user tries to migrate a volume 
that’s on managed storage to another storage pool. At present, such volumes can 
be migrated when a VM is migrated from one compute cluster to another, but 
those volumes have to remain on the same managed storage.

Here’s an example:

Let’s say VM_1 is in Cluster_1. VM_1 has a root (or data) disk on 
managed storage. We try to migrate the VM from Cluster_1 to Cluster_2 and 
specify a new storage pool for the volume. This case should fail. To make it 
work, you need to either 1) not specify a new storage pool or 2) specify the 
same storage pool the volume is already in. If the managed storage in question 
is zone wide, then it can be used from both Cluster_1 and Cluster_2.

The new code might call 
getDefaultMappingOfVolumesAndStoragePoolForMigration (if no storage pools at 
all are passed in to the API) or it might call 
createMappingVolumeAndStoragePoolEnteredByUser.

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, 
we silently ignore the (faulty) input (which is a new storage pool) from the 
user and keep the volume in its same managed storage pool (the user may wonder 
why it wasn’t migrated if they don’t get an error message back telling them 
this is not allowed).

If createMappingVolumeAndStoragePoolEnteredByUser is invoked, we 
seem to have a bigger problem (code is below):

I do not believe you are required to pass in a new storage pool for 
each and every volume of the VM. If the VM has, say, three volumes, you may 
only try to migrate two of the volumes to new storage pools. This logic seems 
to assume if you want to migrate one of the VM’s volumes, then you necessarily 
want to migrate all of the VM’s volumes. I believe it’s possible for targetPool 
to come back null and later throw a NullPointerException. The old code walks 
through each volume of the VM and checks if there is a new storage pool 
specified for it. If so, do one thing; else, do something else.

private Map 
createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile profile, 
Host host, Map volumeToPool) {
Map volumeToPoolObjectMap = new 
HashMap();
for(Long volumeId: volumeToPool.keySet()) {
VolumeVO volume = _volsDao.findById(volumeId);

Long poolId = volu

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
   - So, managed storage can be cluster and zone wide. Is that correct?

[Mike] Correct

   - If I want to migrate a VM across clusters, but if at least one of its
   volumes is placed in a cluster-wide managed storage, the migration is not
   allowed. Is that it?

[Mike] Correct

   - A volume placed in managed storage can never (at least not using this
   migrateWithVolume method) be migrated out of the storage pool it resides.
   is this statement right? Do you have alternative/other execution flow
   regarding this scenario?

[Mike] At least for KVM, you can shut the VM down and perform an offline 
migration 
of the volume from managed storage to non-managed storage. It’s possible we may 
support similar behavior with other hypervisor types in the future.

   - When migrating a VM that does not have volumes in managed storage, it
   should be possible to migrate it cross clusters. Therefore, we should try
   to use the volume allocators to find a suitable storage pool for its
   volumes in the target cluster

[Mike] It’s OK here if one or more of the volumes is on managed storage. The 
“trick” is 
that it needs to be on zone-wide managed storage that is visible to both the 
source and 
destination compute clusters. You cannot specify a new storage pool for any of 
these volumes 
(each must remain on its current, zone-wide primary storage).

If you can add these new constraints into the code, I can review them later. 
I’m a bit 
pressed for time this week, so it might not be possible to do so right away. 
Thanks!

On 7/16/18, 3:52 PM, "Rafael Weingärtner"  wrote:

Thanks for your feedback Mike. I actually did not want to change this
“migrateVirtualMachineWithVolume” API method. Everything started when we
wanted to create a feature to allow volume placement overrides. This means,
allowing root admins to place/migrate the volume to a storage pool that
might not be “allowed” (according to its current disk offering). This
feature was later expanded to allow changing the disk offering while
executing a storage migration (this means allowing changes on volume’s
QoS). Thus, creating a mechanism within ACS to allow disk offerings
replacement (as opposed to DB intervention, which was the way it was being
done so far). The rationale behind these extensions/enhancement is that the
root admins are wise/experts (at least we expect them to be). Therefore,
they know what they are doing when overriding or replacing a disk offering
of a user.

So, why am I changing this “migrateVirtualMachineWithVolume” API method?
When we allowed that override procedure, it broke the migration of VMs that
had volumes initially placed in NFS and then replaced (via override) in
local storage. It had something to do with the way ACS was detecting if the
VM has a local storage. Then, when I went to the method to fix it; it was
very convoluted to read and understand. Therefore, I re-wrote, and I missed
your use case. I am sorry for that. Moreover, I do intend to keep with the
current code, as we already have other features developed on top of it, and
this code is well documented and unit tested. It is only a matter of adding
your requirement there.

Now, let’s fix the problem. I will not point code here. I only want to
understand the idea for now.

   - So, managed storage can be cluster and zone wide. Is that correct?
   - If I want to migrate a VM across clusters, but if at least one of its
   volumes is placed in a cluster-wide managed storage, the migration is not
   allowed. Is that it?
   - A volume placed in managed storage can never (at least not using this
   migrateWithVolume method) be migrated out of the storage pool it resides.
   is this statement right? Do you have alternative/other execution flow
   regarding this scenario?
   - When migrating a VM that does not have volumes in managed storage, it
   should be possible to migrate it cross clusters. Therefore, we should try
   to use the volume allocators to find a suitable storage pool for its
   volumes in the target cluster

Are these all of the use cases that were left behind?

On Mon, Jul 16, 2018 at 5:36 PM, Tutkowski, Mike 
wrote:

> For your feature, Rafael, are you trying to support the migration of a VM
> that has local storage from one cluster to another or is intra-cluster
> migration of local storage sufficient?
>
> There is the migrateVolume API (you can pass in “live migrate” parameter):
>
> http://cloudstack.apache.org/api/apidocs-4.11/apis/migrateVolume.html
>
> There is also the migrateVirtualMachineWithVolume (one or more volumes).
> This is especially useful for moving a VM with its storage from one 
cluster
> to another:
>

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
Allow me to correct what I said here:

“If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).”

I should have said the following:

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked on a VM that 
is using managed storage that is only at the cluster level (managed storage can 
be at either the zone or cluster level) and we are trying to migrate the VM 
from one cluster to another, this operation should fail (as the old code 
detects). The new code tries to keep the volume in the same storage pool (but 
that storage pool will not be visible to the hosts in the destination compute 
cluster).
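
A minimal sketch of the guard I have in mind (hypothetical, not the actual patch; the DAO and type names are simply borrowed from the code quoted in this thread): if the volume sits on cluster-scoped managed storage and the destination host is in another cluster, fail fast instead of silently reusing the current pool:

    // Sketch only - illustrative method name, not real CloudStack code.
    private void checkManagedStorageScopeForMigration(VolumeVO volume, Host targetHost) {
        StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());
        if (currentPool.isManaged() && ScopeType.CLUSTER.equals(currentPool.getScope())
                && !java.util.Objects.equals(currentPool.getClusterId(), targetHost.getClusterId())) {
            throw new CloudRuntimeException(String.format(
                    "Volume [%s] is on cluster-scoped managed storage [%s]; it cannot follow the VM to another cluster.",
                    volume.getUuid(), currentPool.getUuid()));
        }
    }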

On 7/16/18, 2:10 PM, "Tutkowski, Mike"  wrote:

Let me answer the questions in two separate e-mails.

This answer deals with what you wrote about this code:

> if (destPool.getId() == currentPool.getId()) {
> volumeToPoolObjectMap.put(volume, currentPool);
> } else {
>  throw new CloudRuntimeException("Currently, a volume on managed
> storage can only be 'migrated' to itself.");
> }
>

The code above is invoked if the user tries to migrate a volume that’s on 
managed storage to another storage pool. At present, such volumes can be 
migrated when a VM is migrated from one compute cluster to another, but those 
volumes have to remain on the same managed storage.

Here’s an example:

Let’s say VM_1 is in Cluster_1. VM_1 has a root (or data) disk on managed 
storage. We try to migrate the VM from Cluster_1 to Cluster_2 and specify a new 
storage pool for the volume. This case should fail. To make it work, you need 
to either 1) not specify a new storage pool or 2) specify the same storage pool 
the volume is already in. If the managed storage in question is zone wide, then 
it can be used from both Cluster_1 and Cluster_2.

The new code might call 
getDefaultMappingOfVolumesAndStoragePoolForMigration (if no storage pools at 
all are passed in to the API) or it might call 
createMappingVolumeAndStoragePoolEnteredByUser.

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).

If createMappingVolumeAndStoragePoolEnteredByUser is invoked, we seem to 
have a bigger problem (code is below):

I do not believe you are required to pass in a new storage pool for each 
and every volume of the VM. If the VM has, say, three volumes, you may only try 
to migrate two of the volumes to new storage pools. This logic seems to assume 
if you want to migrate one of the VM’s volumes, then you necessarily want to 
migrate all of the VM’s volumes. I believe it’s possible for targetPool to come 
back null and later throw a NullPointerException. The old code walks through 
each volume of the VM and checks if there is a new storage pool specified for 
it. If so, do one thing; else, do something else.

private Map<Volume, StoragePool> createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile profile, Host host, Map<Long, Long> volumeToPool) {
    Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<>();
    for (Long volumeId : volumeToPool.keySet()) {
        VolumeVO volume = _volsDao.findById(volumeId);

        Long poolId = volumeToPool.get(volumeId);
        StoragePoolVO targetPool = _storagePoolDao.findById(poolId);
        StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());

        if (_poolHostDao.findByPoolHost(targetPool.getId(), host.getId()) == null) {
            throw new CloudRuntimeException(String.format("Cannot migrate the volume [%s] to the storage pool [%s] while migrating VM [%s] to target host [%s]. The host does not have access to the storage pool entered.", volume.getUuid(), targetPool.getUuid(), profile.getUuid(), host.getUuid()));
        }
        if (currentPool.getId() == targetPool.getId()) {
            s_logger.info(String.format("The volume [%s] is already allocated in storage pool [%s].", volume.getUuid(), targetPool.getUuid()));
        }
        volumeToPoolObjectMap.put(volume, targetPool);
    }
    return volumeToPoolObjectMap;
}
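
For what it's worth, the per-volume handling I'm describing could look roughly like this - a sketch only, reusing the names from the method quoted above; the point is simply to iterate the VM's volumes rather than the user-supplied map, so a volume without an entry can never yield a null targetPool:

    // Sketch, not the actual fix: fall back per volume instead of assuming every volume was mapped.
    for (VolumeVO volume : _volsDao.findUsableVolumesForInstance(profile.getId())) {
        Long poolId = volumeToPool.get(volume.getId());
        if (poolId == null) {
            // No pool was specified for this volume; keep it where it is (or apply the default mapping).
            volumeToPoolObjectMap.put(volume, _storagePoolDao.findById(volume.getPoolId()));
            continue;
        }
        StoragePoolVO targetPool = _storagePoolDao.findById(poolId);
        if (_poolHostDao.findByPoolHost(targetPool.getId(), host.getId()) == null) {
            throw new CloudRuntimeException(String.format(
                    "Cannot migrate volume [%s] to pool [%s]: host [%s] has no access to it.",
                    volume.getUuid(), targetPool.getUuid(), host.getUuid()));
        }
        volumeToPoolObjectMap.put(volume, targetPool);
    }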

On 7/16/18, 5:13 AM, "Rafael Weingärtner"  
wrote:

Ok, I see what happened there with the migration to cluster. When I 
re-did
the code I did not have this case.

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
Yeah, I just meant that was a workaround. As you pointed out, that workaround 
doesn’t make use of the migrateVirtualMachineWithVolume API command, though.

On 7/16/18, 5:23 PM, "Rafael Weingärtner"  wrote:

Thanks for the answers Mike. I will not be able to do it today, but I will
manage to do it this week. There is only one last doubt.

[Mike] At least for KVM, you can shut the VM down and perform an offline
migration
of the volume from managed storage to non-managed storage. It’s possible we
may
support such a similar behavior with other hypervisor types in the future.

[Rafael] I guess that we can shut down XenServer VMs and then migrate the
volumes later, right? However, the method in question here
(migrateVirtualMachineWithVolume) is not supposed to execute such steps, is
it?


On Mon, Jul 16, 2018 at 8:17 PM, Tutkowski, Mike 
wrote:

>- So, managed storage can be cluster and zone wide. Is that 
correct?
>
> [Mike] Correct
>
>- If I want to migrate a VM across clusters, but if at least one of
> its
>volumes is placed in a cluster-wide managed storage, the migration
> is not
>allowed. Is that it?
>
> [Mike] Correct
>
>- A volume placed in managed storage can never (at least not using
> this
>migrateWithVolume method) be migrated out of the storage pool it
> resides.
>is this statement right? Do you have alternative/other execution
> flow
>regarding this scenario?
>
> [Mike] At least for KVM, you can shut the VM down and perform an offline
> migration
> of the volume from managed storage to non-managed storage. It’s possible
> we may
> support such a similar behavior with other hypervisor types in the future.
>
>- When migrating a VM that does not have volumes in managed
> storage, it
>should be possible to migrate it cross clusters. Therefore, we
> should try
>to use the volume allocators to find a suitable storage pool for 
its
>volumes in the target cluster
>
> [Mike] It’s OK here if one or more of the volumes is on managed storage.
> The “trick” is
> that it needs to be on zone-wide managed storage that is visible to both
> the source and
> destination compute clusters. You cannot specify a new storage pool for
> any of these volumes
> (each must remain on its current, zone-wide primary storage).
>
> If you can add these new constraints into the code, I can review them
> later. I’m a bit
> pressed for time this week, so it might not be possible to do so right
> away. Thanks!
>
> On 7/16/18, 3:52 PM, "Rafael Weingärtner" 
> wrote:
>
> Thanks for your feedback Mike. I actually did not want to change this
> “migrateVirtualMachineWithVolume” API method. Everything started when
> we
> wanted to create a feature to allow volume placement overrides. This
> means,
> allowing root admins to place/migrate the volume to a storage pool 
that
> might not be “allowed” (according to its current disk offering). This
> feature was later expanded to allow changing the disk offering while
> executing a storage migration (this means allowing changes on volume’s
> QoS). Thus, creating a mechanism within ACS to allow disk offerings
> replacement (as opposed to DB intervention, which was the way it was
> being
> done so far). The rationale behind these extensions/enhancement is
> that the
> root admins are wise/experts (at least we expect them to be).
> Therefore,
> they know what they are doing when overriding or replacing a disk
> offering
> of a user.
>
> So, why am I changing this “migrateVirtualMachineWithVolume” API
> method?
> When we allowed that override procedure, it broke the migration of VMs
> that
> had volumes initially placed in NFS and then replaced (via override) 
in
> local storage. It had something to do with the way ACS was detecting
> if the
> VM has a local storage. Then, when I went to the method to fix it; it
> was
> very convoluted to read and understand. Therefore, I re-wrote, and I
> missed
> your use case. I am sorry for that. Moreover, I do intend to keep with
> the
> current code, as we already have other features developed on top of
> it, and
> this code is well documented and unit tested. It is only a matter of
> adding
&g

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-15 Thread Tutkowski, Mike
It looks like this is the problematic PR:

https://github.com/apache/cloudstack/pull/2425/

On 7/15/18, 12:20 AM, "Tutkowski, Mike"  wrote:

Hi,

While running managed-storage regression tests tonight, I noticed a problem 
that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of a 
virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It also seems 
that the managed-storage logic has been dropped for some reason in the new 
implementation.

Rafael – It seems that you worked on this feature. Would you be able to 
look into this and create a PR?

Thanks,
Mike




Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-15 Thread Tutkowski, Mike
Hi Rafael,

Thanks for your time on this.

Here is an example where the new code deviates from the old code in a critical 
fashion (code right below is new):

private Map<Volume, StoragePool> getDefaultMappingOfVolumesAndStoragePoolForMigration(VirtualMachineProfile profile, Host targetHost) {
    Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<>();
    List<VolumeVO> allVolumes = _volsDao.findUsableVolumesForInstance(profile.getId());
    for (VolumeVO volume : allVolumes) {
        StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());
        if (ScopeType.HOST.equals(currentPool.getScope())) {
            createVolumeToStoragePoolMappingIfNeeded(profile, targetHost, volumeToPoolObjectMap, volume, currentPool);
        } else {
            volumeToPoolObjectMap.put(volume, currentPool);
        }
    }
    return volumeToPoolObjectMap;
}

What happens in the new code (above) is that, if the user didn't pass in a storage
pool to migrate the virtual disk to (but the VM is being migrated to a new
cluster), the code just assigns the virtual disk to its current storage pool
(which is not going to be visible to any of the hosts in the new compute
cluster).

In the old code (I’m looking at 4.11.3 here), you could look around line 2337 
for the following code (in the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method):

// Find a suitable pool for the volume. Call the storage pool allocator to find the list of pools.

final DiskProfile diskProfile = new DiskProfile(volume, diskOffering, profile.getHypervisorType());
final DataCenterDeployment plan = new DataCenterDeployment(host.getDataCenterId(), host.getPodId(), host.getClusterId(),
        host.getId(), null, null);

final List<StoragePool> poolList = new ArrayList<>();
final ExcludeList avoid = new ExcludeList();

for (final StoragePoolAllocator allocator : _storagePoolAllocators) {
    final List<StoragePool> poolListFromAllocator = allocator.allocateToPool(diskProfile, profile, plan, avoid, StoragePoolAllocator.RETURN_UPTO_ALL);

    if (poolListFromAllocator != null && !poolListFromAllocator.isEmpty()) {
        poolList.addAll(poolListFromAllocator);
    }
}

This old code would find an applicable storage pool in the destination cluster 
(one that can be seen by the hosts in that compute cluster).

I think the main error in the new logic is the assumption that a VM can only be 
migrated to a host in the same compute cluster. For XenServer (perhaps for 
other hypervisor types?), we support cross-cluster VM migration.

The other issue I noticed is that there is no logic in the new code that checks 
for managed-storage use cases. If you look in the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method in the old 
code, there is special handling for managed storage. I don’t see this 
reproduced in the new logic.
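
To make the gap concrete, here is a rough sketch of the shape the default mapping would need - the method and helper names are illustrative (not a patch), the DAO and allocator calls mostly mirror the old and new code quoted in this thread, and the disk-offering lookup is assumed: keep the current pool only when the destination host can actually see it, run the storage pool allocators when it cannot, and fail for cluster-scoped managed storage instead of silently reusing it:

    // Sketch only - illustrative names; combines the old allocator lookup with a managed-storage check.
    private Map<Volume, StoragePool> defaultMappingSketch(VirtualMachineProfile profile, Host targetHost) {
        Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<>();
        for (VolumeVO volume : _volsDao.findUsableVolumesForInstance(profile.getId())) {
            StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());
            boolean targetHostSeesCurrentPool = _poolHostDao.findByPoolHost(currentPool.getId(), targetHost.getId()) != null;
            if (targetHostSeesCurrentPool) {
                // Zone-wide pools (managed or not) and intra-cluster moves: leave the volume where it is.
                volumeToPoolObjectMap.put(volume, currentPool);
            } else if (currentPool.isManaged()) {
                // Cluster-scoped managed storage cannot follow the VM to another cluster.
                throw new CloudRuntimeException(String.format(
                        "Volume [%s] is on managed storage [%s] that the destination host cannot see.",
                        volume.getUuid(), currentPool.getUuid()));
            } else {
                // Let the allocators pick a pool visible to the destination cluster, as the old code did.
                DiskOfferingVO diskOffering = _diskOfferingDao.findById(volume.getDiskOfferingId());
                DiskProfile diskProfile = new DiskProfile(volume, diskOffering, profile.getHypervisorType());
                DataCenterDeployment plan = new DataCenterDeployment(targetHost.getDataCenterId(), targetHost.getPodId(),
                        targetHost.getClusterId(), targetHost.getId(), null, null);
                ExcludeList avoid = new ExcludeList();
                for (StoragePoolAllocator allocator : _storagePoolAllocators) {
                    List<StoragePool> pools = allocator.allocateToPool(diskProfile, profile, plan, avoid, StoragePoolAllocator.RETURN_UPTO_ALL);
                    if (pools != null && !pools.isEmpty()) {
                        volumeToPoolObjectMap.put(volume, pools.get(0));
                        break;
                    }
                }
                if (!volumeToPoolObjectMap.containsKey(volume)) {
                    throw new CloudRuntimeException("No suitable pool was found in the destination cluster for volume " + volume.getUuid());
                }
            }
        }
        return volumeToPoolObjectMap;
    }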

I sympathize with your point that all tests passed yet this issue was not 
uncovered. Unfortunately, I suspect we have a fairly low % coverage of 
automated tests on CloudStack. If we ever did get to a high % of automated test 
coverage, we might be able to spin up new releases more frequently. As the case 
stands today, however, there are probably many un-tested use cases when it 
comes to our automated suite of tests.

Thanks again!
Mike

On 7/15/18, 4:19 PM, "Rafael Weingärtner"  wrote:

Mike, are you able to pin-point in the old/replaced code the bit that was
handling your use case?  I took the most care not to break anything.
Also, your test case, isn't it in the ACS' integration test suite? In
theory, all test passed when we merged the PR.

I sure can take a look at it. Can you detail your use case? I mean, the
high level execution flow. What API methods you do, what you expected to
happen, and what is happening today.

On Sun, Jul 15, 2018 at 3:25 AM, Tutkowski, Mike 
wrote:

> It looks like this is the problematic PR:
>
> https://github.com/apache/cloudstack/pull/2425/
    >
    > On 7/15/18, 12:20 AM, "Tutkowski, Mike"  wrote:
>
> Hi,
>
> While running managed-storage regression tests tonight, I noticed a
> problem that is not related to managed storage.
>
> CLOUDSTACK-10240 is a ticket asking that we allow the migration of a
> virtual disk that’s on local storage to shared storage. In the process of
> enabling this feature, the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method was re-written in a way that
> completely breaks at least one use case: Migrating a VM across compute
> clusters (at least supported in XenServer). If, say, a

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
Another comment here: The part that is broken is if you try to let CloudStack 
pick the primary storage on the destination side. That code no longer exists in 
4.11.1.

On 7/16/18, 9:24 PM, "Tutkowski, Mike"  wrote:

To follow up on this a bit: Yes, you should be able to migrate a VM and its 
storage from one cluster to another today using non-managed (traditional) 
primary storage with XenServer (both the source and destination primary 
storages would be cluster scoped). However, that is one of the features that 
was broken in 4.11.1 that we are discussing in this thread.

On 7/16/18, 9:20 PM, "Tutkowski, Mike"  wrote:

For a bit of info on what managed storage is, please take a look at 
this document:


https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0

The short answer is that you can have zone-wide managed storage (for 
XenServer, VMware, and KVM). However, there is no current zone-wide non-managed 
storage for XenServer.

On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:

I assume by "managed storage", you guys mean primary storages, 
either zone -wide or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary 
storage yet. Still, I can live migrate a VM with data disks between clusters 
with storage migration from web GUI, today.  So, your statement below does not 
reflect current behavior of the code.


   - If I want to migrate a VM across clusters, but if at 
least one of its
   volumes is placed in a cluster-wide managed storage, the 
migration is not
   allowed. Is that it?

[Mike] Correct


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
For a bit of info on what managed storage is, please take a look at this 
document:

https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0

The short answer is that you can have zone-wide managed storage (for XenServer, 
VMware, and KVM). However, there is no current zone-wide non-managed storage 
for XenServer.

On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:

I assume by "managed storage", you guys mean primary storages, either zone 
-wide or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary storage yet. 
Still, I can live migrate a VM with data disks between clusters with storage 
migration from web GUI, today.  So, your statement below does not reflect 
current behavior of the code.


   - If I want to migrate a VM across clusters, but if at least one 
of its
   volumes is placed in a cluster-wide managed storage, the 
migration is not
   allowed. Is that it?

[Mike] Correct


Re: [PROPOSE] Combining Apache CloudStack Documentation

2018-07-24 Thread Tutkowski, Mike
I like this, too, Paul.

On 7/24/18, 4:01 AM, "ilya musayev"  wrote:

I like it but wonder if an Upgrade section needs to be added?

On Tue, Jul 24, 2018 at 2:25 AM Paul Angus  wrote:

> Hi All,
>
> We currently have four sources of documentation [1], which makes managing
> the documentation convoluted and, worse, makes navigating and searching the
> documentation really difficult.
>
> I have taken the current documentation and combined it into one repo,
> then created 7 sections:
>
> CloudStack Concepts and Terminology
> Quick Installation Guide
> Installation Guide
> Usage Guide
> Developers Guide
> Plugins Guide
> Release Notes
>
> I haven't changed any of the content, but I've moved some of it around to
> make more sense (to me).  You can see the result on RTD [2]
>
> I'd like to PROPOSE to move this demo version of the documentation over to
> the Apache repos and make it THE documentation source, update the website,
> and mark the current repos/sites as archive data.
>
> [1]
> https://github.com/apache/cloudstack-docs.git is a bit of a hodgepodge
> of resources
> https://github.com/apache/cloudstack-docs-install.git is the install guide
> https://github.com/apache/cloudstack-docs-admin.git is the current admin
> manual.
> https://github.com/apache/cloudstack-docs-rn.git is the release notes for
> individual releases
>
> [2]  https://beta-cloudstack-docs.readthedocs.io/en/latest/
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>




Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
To follow up on this a bit: Yes, you should be able to migrate a VM and its 
storage from one cluster to another today using non-managed (traditional) 
primary storage with XenServer (both the source and destination primary 
storages would be cluster scoped). However, that is one of the features that 
was broken in 4.11.1 that we are discussing in this thread.

On 7/16/18, 9:20 PM, "Tutkowski, Mike"  wrote:

For a bit of info on what managed storage is, please take a look at this 
document:


https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0

The short answer is that you can have zone-wide managed storage (for 
XenServer, VMware, and KVM). However, there is no current zone-wide non-managed 
storage for XenServer.

On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:

I assume by "managed storage", you guys mean primary storages, either 
zone -wide or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary storage 
yet. Still, I can live migrate a VM with data disks between clusters with 
storage migration from web GUI, today.  So, your statement below does not 
reflect current behavior of the code.


   - If I want to migrate a VM across clusters, but if at least 
one of its
   volumes is placed in a cluster-wide managed storage, the 
migration is not
   allowed. Is that it?

[Mike] Correct


GUI Issue

2018-07-11 Thread Tutkowski, Mike
Hi,

Has anyone else noticed that you can no longer use the Quickview column to 
manipulate (i.e. delete, in this case) Disk Offerings in the GUI in master? The 
Quickview menu doesn’t show up. It works fine for Compute Offerings still, 
though.

Thanks,
Mike


Re: GUI Issue

2018-07-12 Thread Tutkowski, Mike
Thanks, Bobby!

> On Jul 11, 2018, at 11:55 PM, Boris Stoyanov  
> wrote:
> 
> I’ve just checked and it does not work for me either. It doesn’t look browser 
> related; I tried it on both Chrome and FF and it's still not working. Let me log an 
> issue about it. 
> 
> Bobby.
> 
> 
> boris.stoya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> 
> 
> 
>> On 11 Jul 2018, at 22:28, Tutkowski, Mike  wrote:
>> 
>> Hi,
>> 
>> Has anyone else noticed that you can no longer use the Quickview column to 
>> manipulate (i.e. delete, in this case) Disk Offerings in the GUI in master? 
>> The Quickview menu doesn’t show up. It works fine for Compute Offerings 
>> still, though.
>> 
>> Thanks,
>> Mike
> 


Re: Document required

2018-01-23 Thread Tutkowski, Mike
Hi,

Welcome to the CloudStack Community!

I would recommend you take a look at the CloudStack Wiki. In particular, this 
page is useful for setting up CloudStack with Linux (there are other docs on 
the Wiki for running under Mac OS X or Windows):

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Setting+up+CloudStack+Development+Environment+on+Linux

Instead of the version of Java that it notes on the page, it’s probably best to 
install Java 8.

The source code can be found on GitHub here:

https://github.com/apache/cloudstack

Talk to you later!
Mike

On Jan 23, 2018, at 11:28 PM, RAKSHITH PAI 
> wrote:

Hi team,

I am new to this CloudStack community and also new to open source. Could you
please share a document to set up the code locally? Also, some documents to get
insight into the product would be helpful.

Cheers,
Rakshith


Re: Cloudstack collab conference 2018

2018-03-13 Thread Tutkowski, Mike
I am personally good with this venue and what it provides us. The dates look 
good, as well.

On 3/13/18, 3:01 AM, "Giles Sirett"  wrote:

All
I've been speaking with Rich Bowen about the feasibility of holding another 
Cloudstack Collaboration Conference in conjunction with Apachecon

https://www.apachecon.com/acna18/schedule.html
Apachecon is 24-27 September, Montreal


Rich has said that we can have a room for Monday 24th + one other day 
(I'm hoping that would be Tuesday), so CCC could be 24-25 September

We could probably squeeze in 20 talks across the 2 days and would look to 
bolt on a hackathon at the end on Wednesday 26th (if Apachecon isn't able to 
accommodate the hackathon, we do know a friendly company in Montreal who *may* 
be able to host that)

WE NEED TO MOVE QUICKLY: The CFP for apachecon is already open and closes 
March 30
So, I'd like to get a broad feeling here of support to do this


There's not much needed to make this happen:

  1.  Confirm with Rich that we want to do this
  2.  Get http://cloudstackcollab.org/ updated - volunteers needed
   3.  Point our community at the existing CFP - and then help on the 
submissions process - volunteers needed
  4.  Promote, etc


Kind regards
Giles


giles.sir...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 





Re: Committee to Sort through CCC Presentation Submissions

2018-04-05 Thread Tutkowski, Mike
Will – What do you think? With only 26 presentations, do you think it would be 
reasonable to just ask each reviewer to review each one? One time when I was on 
one of these panels a couple of years ago, we each reviewed the roughly dozen 
presentations that were submitted. Of course, people may not be able to spend 
that amount of time on this.

> On Apr 5, 2018, at 7:14 PM, Ron Wheeler <rwhee...@artifact-software.com> 
> wrote:
> 
> We still need to manage the review process and make sure that it is 
> adequately staffed.
> 
> The allocation of presentations to reviewers has to be managed to be sure 
> that the reviewers have the support that they need to do a proper review and 
> that the reviews get done.
> 
> Ron
> 
> 
>> On 05/04/2018 11:45 AM, Tutkowski, Mike wrote:
>> Perfect…then, unless anyone has other opinions they’d like to share on the 
>> topic, let’s follow that approach.
>> 
>> On 4/5/18, 9:43 AM, "Rafael Weingärtner" <rafaelweingart...@gmail.com> wrote:
>> 
>> That is exactly it.
>>  On Thu, Apr 5, 2018 at 12:37 PM, Tutkowski, Mike 
>> <mike.tutkow...@netapp.com>
>> wrote:
>>  > Hi Rafael,
>> >
>> > I think as long as we (the CloudStack Community) have the final say on 
>> how
>> > we fill our allotted slots in the CloudStack track of ApacheCon in
>> > Montreal, then it’s perfectly fine for us to leverage Apache’s normal
>> > review process to gather all the feedback from the larger Apache 
>> Community.
>> >
>> > As you say, we could wait for the feedback to come in via that 
>> mechanism
>> > and then, as per Will’s earlier comments, we could advertise on our 
>> users@
>> > and dev@ mailing lists when we plan to get together for a call and make
>> > final decisions on the CFP.
>> >
>> > Is that, in fact, what you were thinking, Rafael?
>> >
>> > Talk to you soon,
>> > Mike
>> >
>> > On 4/4/18, 2:58 PM, "Rafael Weingärtner" <rafaelweingart...@gmail.com>
>> > wrote:
>> >
>> > I think everybody that “raised their hands here” already signed up 
>> to
>> > review.
>> >
>> > Mike, what about if we only gathered the reviews from Apache main
>> > review
>> > system, and then we use that to decide which presentations will 
>> get in
>> > CloudStack tracks? Then, we reduce the work on our side (we also 
>> remove
>> > bias…). I do believe that the review from other peers from Apache
>> > community
>> > (even the one outside from our small community) will be fair and
>> > technical
>> > (meaning, without passion and or favoritism).
>> >
>> > Having said that, I think we only need a small group of PMCs to 
>> gather
>> > the
>> > results and out of the best ranked proposals, we pick the ones to 
>> our
>> > tracks.
>> >
>> > What do you (Mike) and others think?
>> >
>> >
>> > On Tue, Apr 3, 2018 at 5:07 PM, Tutkowski, Mike <
>> > mike.tutkow...@netapp.com>
>> > wrote:
>> >
>> > > Hi Ron,
>> > >
>> > > I don’t actually have insight into how many people have currently
>> > signed
>> > > up online to be CFP reviewers for ApacheCon. At present, I’m only
>> > aware of
>> > > those who have responded to this e-mail chain.
>> > >
>> > > We should be able to find out more in the coming weeks. We’re 
>> still
>> > quite
>> > > early in the process.
>> > >
>> > > Thanks for your feedback,
>> > > Mike
>> > >
>> > > On 4/1/18, 9:18 AM, "Ron Wheeler" 
>> <rwhee...@artifact-software.com>
>> > wrote:
>> > >
>> > > How many people have signed up to be reviewers?
>> > >
>> > > I don't think that scheduling is part of the review process 
>> and
>> > that
>> > > can
>> > > be done by the person/team "organizing" ApacheCon on behalf 
>> of
>> > th

Re: System VM Template

2018-04-10 Thread Tutkowski, Mike
Sounds good!

> On Apr 10, 2018, at 2:04 AM, Rohit Yadav <rohit.ya...@shapeblue.com> wrote:
> 
> Hi Mike,
> 
> 
> Please use the systemvmtemplate from the URL Rafael has mentioned. There 
> seems to be a systemd locking issue; the cloud-postinit process is locked for 
> a few minutes at the restart of apache2:
> 
>2018-04-05 16:45:06,107  CsHelper.py execute:188 Executing: systemctl 
> restart apache2
> 
> 
> I've reproduced this and will try to fix it for the 4.11.1.0 milestone next 
> week.
> 
> 
> - Rohit
> 
> <https://cloudstack.apache.org>
> 
> 
> 
> 
> From: Rafael Weingärtner <rafaelweingart...@gmail.com>
> Sent: Thursday, April 5, 2018 10:43:35 PM
> To: dev
> Subject: Re: System VM Template
> 
> I am using this template for system VMs:
> http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.0-xen.vhd.bz2
> And, right now, the ACS version I am using was built using the branch of
> this PR: https://github.com/apache/cloudstack/pull/2524. Everything seems
> to be just fine here.
> 
> Could you get some details regarding the VR status that ACS is seeing?
> 
> On Thu, Apr 5, 2018 at 1:56 PM, Tutkowski, Mike <mike.tutkow...@netapp.com>
> wrote:
> 
>> Thanks for your feedback, Rafael.
>> 
>> I re-created my 4.12 cloud today (after fetching the latest code and using
>> the master branch) and still seem to be having trouble with the VR. The
>> hypervisor type I’m using here is XenServer 6.5.
>> 
>> When I examine the VR in the CloudStack GUI, the “Requires Upgrade” column
>> says, “Yes”. However, when I try to initiate the upgrade, I get an error
>> message stating that the VR is not in the proper state (because it’s stuck
>> in the Starting state).
>> 
>> The system VM template I am working with is the following:
>> http://cloudstack.apt-get.eu/systemvm/4.11/
>> 
>> In case anyone sees something, I’ve included the contents of my VR’s
>> cloud.log file below.
>> 
>> Thanks!
>> 
>> Thu Apr  5 16:45:01 UTC 2018 Executing cloud-early-config
>> Thu Apr  5 16:45:01 UTC 2018 Detected that we are running inside xen-domU
>> Thu Apr  5 16:45:02 UTC 2018 Scripts checksum detected: oldmd5=
>> 60703a62ef9d1666975ec0a8ce421270 newmd5=7f8c303cd3303ff902e7ad9f3f1f092b
>> Thu Apr  5 16:45:02 UTC 2018 Patched scripts using
>> /media/cdrom/cloud-scripts.tgz
>> Thu Apr  5 16:45:02 UTC 2018 Patching cloud service
>> Thu Apr  5 16:45:02 UTC 2018 Configuring systemvm type=dhcpsrvr
>> Thu Apr  5 16:45:02 UTC 2018 Setting up dhcp server system vm
>> Thu Apr  5 16:45:04 UTC 2018 Setting up dnsmasq
>> Thu Apr  5 16:45:05 UTC 2018 Setting up apache web server
>> Thu Apr  5 16:45:05 UTC 2018 Processors = 1  Enable service  = 0
>> Thu Apr  5 16:45:05 UTC 2018 cloud: enable_fwding = 0
>> Thu Apr  5 16:45:05 UTC 2018 enable_fwding = 0
>> Thu Apr  5 16:45:05 UTC 2018 Finished setting up systemvm
>> 2018-04-05 16:45:05,924  merge.py load:296 Continuing with the processing
>> of file '/var/cache/cloud/cmd_line.json'
>> 2018-04-05 16:45:05,927  merge.py process:101 Command of type cmdline
>> received
>> 2018-04-05 16:45:05,928  merge.py process:101 Command of type ips received
>> 2018-04-05 16:45:05,929  merge.py process:101 Command of type ips received
>> 2018-04-05 16:45:05,930  CsHelper.py execute:188 Executing: ip addr show
>> dev eth1
>> 2018-04-05 16:45:05,941  CsHelper.py execute:188 Executing: ip addr show
>> dev eth0
>> 2018-04-05 16:45:05,950  CsHelper.py execute:188 Executing: ip addr show
>> dev eth1
>> 2018-04-05 16:45:05,958  CsAddress.py process:108 Address found in DataBag
>> ==> {u'public_ip': u'169.254.3.171', u'one_to_one_nat': False,
>> u'nic_dev_id': u'1', u'network': u'169.254.0.0/16', u'netmask':
>> u'255.255.0.0', u'source_nat': False, u'broadcast': u'169.254.255.255',
>> u'add': True, u'nw_type': u'control', u'device': u'eth1', u'cidr': u'
>> 169.254.3.171/16', u'gateway': u'None', u'size': u'16'}
>> 2018-04-05 16:45:05,959  CsAddress.py process:116 Address 169.254.3.171/16
>> on device eth1 already configured
>> 2018-04-05 16:45:05,959  CsRoute.py defaultroute_exists:103 Checking if
>> default ipv4 route is present
>> 2018-04-05 16:45:05,959  CsHelper.py execute:188 Executing: ip -4 route
>> list 0/0
>> 2018-04-05 16:45:05,967  CsRoute.py defaultroute_exists:107 Default route
>> found: default via 10.117.40.126 dev eth0
>> 2018-04-05 16:45:05,967  CsHelper.py execute:188 Executing: ip addr show
>> dev eth0
>> 2018-04-05 16:45:05,976  CsAddress.py process:108 Add

Re: [DISCUSS] CloudStack graceful shutdown

2018-04-05 Thread Tutkowski, Mike
Wow, there’s been a lot of good details noted from several people on how this 
process works today and how we’d like it to work in the near future.

1) Any chance this is already documented on the Wiki?

2) If not, any chance someone would be willing to do so (a flow diagram would 
be particularly useful).

> On Apr 5, 2018, at 3:37 AM, Marc-Aurèle Brothier  wrote:
> 
> Hi all,
> 
> Good point ilya, but as stated by Sergey there are more things to consider
> before being able to do a proper shutdown. I augmented the script I gave you
> originally and changed code in CS. What we're doing for our environment is
> as follows:
> 
> 1. The MGMT looks for a change in the file /etc/lb-agent, which contains
> keywords for HAProxy[2] (ready, maint), so that HAProxy can disable the
> mgmt on the keyword "maint" and the mgmt server stops a couple of
> threads[1] to stop processing async jobs in the queue.
> 2. Look for the async jobs and wait until there are none, to ensure you can
> send the reconnect commands (if jobs are running, a reconnect will result
> in a failed job since the result will never reach the management server -
> the agent waits for the current job to be done before reconnecting, and
> discards the result... room for improvement here!).
> 3. Issue a reconnectHost command to all the hosts connected to the mgmt
> server so that they reconnect to another one; otherwise the mgmt must stay up,
> since it is used to forward commands to agents.
> 4. When all agents are reconnected, we can shut down the management server
> and perform the maintenance.
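
Just so I follow step 1, here is a minimal sketch of that drain toggle, 
assuming (as in your setup) that a small service exposes the content of 
/etc/lb-agent on the HAProxy agent-check port; the path and keywords are 
taken from your description:

from pathlib import Path

LB_AGENT_FILE = Path("/etc/lb-agent")  # path taken from the workflow above

def set_lb_state(state):
    # "maint" tells HAProxy, via its agent-check, to stop sending new API
    # traffic to this management server; "ready" cancels the drain.
    if state not in ("ready", "maint"):
        raise ValueError("state must be 'ready' or 'maint'")
    LB_AGENT_FILE.write_text(state + "\n")

# set_lb_state("maint")   # start draining before maintenance
# set_lb_state("ready")   # cancel the drain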
> 
> One issue remains for me: during the reconnect, the commands that are
> processed at the same time should be kept in a queue until the agents have
> finished any current jobs and have reconnected. Today, the small time
> window during which the reconnect happens can lead to failed jobs due to
> the agent not being connected at the right moment.
> 
> I could push a PR for the change to stop some processing threads based on
> the content of a file. It's also possible to cancel the drain of the
> management server by simply changing the content of the file back to "ready"
> again, instead of "maint" [2].
> 
> [1] AsyncJobMgr-Heartbeat, CapacityChecker, StatsCollector
> [2] HA proxy documentation on agent checker: https://cbonte.github.io/
> haproxy-dconv/1.6/configuration.html#5.2-agent-check
> 
> Regarding your issue on the port blocking, I think it's fair to consider
> that if you want to shut down your server at some point, you have to stop
> serving (some) requests. Here the only way is to stop serving everything.
> If the API had a REST design, we could reject any POST/PUT/DELETE
> operations and allow GET ones. I don't know how hard it would be today to
> only allow listBaseCmd operations, to be more friendly to the users.
> 
> Marco
> 
> 
> On Thu, Apr 5, 2018 at 2:22 AM, Sergey Levitskiy 
> wrote:
> 
>> Now without spellchecking :)
>> 
>> This is not simple, e.g. for VMware. Each management server also acts as an
>> agent proxy, so tasks against a particular ESX host will always be
>> forwarded. The right answer will be to support a native “maintenance mode”
>> for the management server. When entered into such a mode, the management server
>> should release all agents including SSVM, block/redirect API calls and
>> login requests, and finish all async jobs it originated.
>> 
>> 
>> 
>> On Apr 4, 2018, at 5:15 PM, Sergey Levitskiy  serg...@hotmail.com>> wrote:
>> 
>> This is not simple e.g. for VMware. Each management server also acts as an
>> agent proxy so tasks against a particular ESX host will be always
>> forwarded. That right answer will be to a native support for “maintenance
>> mode” for management server. When entered to such mode the management
>> server should release all agents including save, block/redirect API calls
>> and login request and finish all a sync job it originated.
>> 
>> Sent from my iPhone
>> 
>> On Apr 4, 2018, at 3:31 PM, Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>> 
>> Ilya, still regarding the issue of the management server that is being shut down:
>> if other MSs (or maybe system VMs - I am not sure whether they are able to
>> do such tasks) can direct/redirect/send new jobs to this management server
>> (the one being shut down), the process might never end, because new tasks
>> are always being created for the management server that we want to shut
>> down. Is this scenario possible?
>> 
>> That is why I mentioned blocking the port 8250 for the “graceful-shutdown”.
>> 
>> If this scenario is not possible, then everything is fine.
>> 
>> 
>> On Wed, Apr 4, 2018 at 7:14 PM, ilya musayev > >
>> wrote:
>> 
>> I'm thinking of using a configuration from "job.cancel.threshold.minutes" -
>> it will be the longest
>> 
>>"category": "Advanced",
>> 
>>"description": "Time (in 

Re: Committee to Sort through CCC Presentation Submissions

2018-04-05 Thread Tutkowski, Mike
Hi Rafael,

I think as long as we (the CloudStack Community) have the final say on how we 
fill our allotted slots in the CloudStack track of ApacheCon in Montreal, then 
it’s perfectly fine for us to leverage Apache’s normal review process to gather 
all the feedback from the larger Apache Community.

As you say, we could wait for the feedback to come in via that mechanism and 
then, as per Will’s earlier comments, we could advertise on our users@ and dev@ 
mailing lists when we plan to get together for a call and make final decisions 
on the CFP.

Is that, in fact, what you were thinking, Rafael?

Talk to you soon,
Mike

On 4/4/18, 2:58 PM, "Rafael Weingärtner" <rafaelweingart...@gmail.com> wrote:

I think everybody that “raised their hands here” already signed up to
review.

Mike, what about if we only gathered the reviews from Apache main review
system, and then we use that to decide which presentations will get in
CloudStack tracks? Then, we reduce the work on our side (we also remove
bias…). I do believe that the review from other peers from Apache community
(even the one outside from our small community) will be fair and technical
(meaning, without passion and or favoritism).

Having said that, I think we only need a small group of PMCs to gather the
results and out of the best ranked proposals, we pick the ones to our
tracks.

What do you (Mike) and others think?


On Tue, Apr 3, 2018 at 5:07 PM, Tutkowski, Mike <mike.tutkow...@netapp.com>
wrote:

> Hi Ron,
>
> I don’t actually have insight into how many people have currently signed
> up online to be CFP reviewers for ApacheCon. At present, I’m only aware of
> those who have responded to this e-mail chain.
>
> We should be able to find out more in the coming weeks. We’re still quite
> early in the process.
>
> Thanks for your feedback,
> Mike
>
> On 4/1/18, 9:18 AM, "Ron Wheeler" <rwhee...@artifact-software.com> wrote:
>
> How many people have signed up to be reviewers?
>
> I don't think that scheduling is part of the review process and that
> can
> be done by the person/team "organizing" ApacheCon on behalf of the 
PMC.
>
> To me review is looking at content for
> - relevance
> - quality of the presentations (suggest fixes to content, English,
> graphics, etc.)
> This should result in a consensus score
> - Perfect - ready for prime time
> - Needs minor changes as documented by the reviewers
> - Great topic but needs more work - perhaps a reviewer could volunteer
> to work with the presenter to get it ready if chosen
> - Not recommended for topic or content reasons
>
> The reviewers could also make non-binding recommendations about the
> balance between topics - marketing(why Cloudstack),
> Operations/implementation, Technical details, Roadmap, etc. based on
> what they have seen.
>
> This should be used by the organizers to make the choices and organize
> the program.
> The organizers have the final say on the choice of presentations and
> schedule
>
> Reviewers are there to help the process not control it.
>
> I would be worried that you do not have enough reviewers rather than
> too
> many.
> Then the work falls on the PMC and organizers.
>
> When planning meetings, I would recommend that you clearly separate 
the
> roles and only invite the reviewers to the meetings about review. Get
> the list of presentation to present to the reviewers and decide if
> there
> are any instructions that you want to give to reviewers.
> I would recommend that you keep the organizing group small. Membership
> should be set by the PMC and should be people that are committed to 
the
> ApacheCon project and have the time. The committee can request help 
for
> specific tasks from others in the community who are not on the
> committee.
>
> I would also recommend that organizers do not do reviews. They should
> read the finalists but if they do reviews, there may be a suggestion 
of
> favouring presentations that they reviewed. It also ensures that the
> organizers are not getting heat from rejected presenters - "it is the
> reviewers fault you did not get selected".
>
> My advice is to get as many reviewers as you can so that no one is
> essential and each reviewer has a limited number of presentations to
&

Re: Committee to Sort through CCC Presentation Submissions

2018-04-05 Thread Tutkowski, Mike
Perfect…then, unless anyone has other opinions they’d like to share on the 
topic, let’s follow that approach.

On 4/5/18, 9:43 AM, "Rafael Weingärtner" <rafaelweingart...@gmail.com> wrote:

That is exactly it.

On Thu, Apr 5, 2018 at 12:37 PM, Tutkowski, Mike <mike.tutkow...@netapp.com>
wrote:

> Hi Rafael,
>
> I think as long as we (the CloudStack Community) have the final say on how
> we fill our allotted slots in the CloudStack track of ApacheCon in
> Montreal, then it’s perfectly fine for us to leverage Apache’s normal
> review process to gather all the feedback from the larger Apache 
Community.
>
> As you say, we could wait for the feedback to come in via that mechanism
> and then, as per Will’s earlier comments, we could advertise on our users@
> and dev@ mailing lists when we plan to get together for a call and make
> final decisions on the CFP.
>
> Is that, in fact, what you were thinking, Rafael?
>
> Talk to you soon,
> Mike
>
> On 4/4/18, 2:58 PM, "Rafael Weingärtner" <rafaelweingart...@gmail.com>
> wrote:
>
> I think everybody that “raised their hands here” already signed up to
> review.
>
> Mike, what about if we only gathered the reviews from Apache main
> review
> system, and then we use that to decide which presentations will get in
> CloudStack tracks? Then, we reduce the work on our side (we also 
remove
> bias…). I do believe that the review from other peers from Apache
> community
> (even the one outside from our small community) will be fair and
> technical
> (meaning, without passion and or favoritism).
>
> Having said that, I think we only need a small group of PMCs to gather
> the
> results and out of the best ranked proposals, we pick the ones to our
> tracks.
>
> What do you (Mike) and others think?
>
>
> On Tue, Apr 3, 2018 at 5:07 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> wrote:
>
> > Hi Ron,
> >
> > I don’t actually have insight into how many people have currently
> signed
> > up online to be CFP reviewers for ApacheCon. At present, I’m only
> aware of
> > those who have responded to this e-mail chain.
> >
> > We should be able to find out more in the coming weeks. We’re still
> quite
> > early in the process.
> >
> > Thanks for your feedback,
> > Mike
> >
> > On 4/1/18, 9:18 AM, "Ron Wheeler" <rwhee...@artifact-software.com>
> wrote:
> >
> > How many people have signed up to be reviewers?
> >
> > I don't think that scheduling is part of the review process and
> that
> > can
> > be done by the person/team "organizing" ApacheCon on behalf of
> the PMC.
> >
> > To me review is looking at content for
> > - relevance
> > - quality of the presentations (suggest fixes to content,
> English,
> > graphics, etc.)
> > This should result in a consensus score
> > - Perfect - ready for prime time
> > - Needs minor changes as documented by the reviewers
> > - Great topic but needs more work - perhaps a reviewer could
> volunteer
> > to work with the presenter to get it ready if chosen
> > - Not recommended for topic or content reasons
> >
> > The reviewers could also make non-binding recommendations about
> the
> > balance between topics - marketing(why Cloudstack),
> > Operations/implementation, Technical details, Roadmap, etc.
> based on
> > what they have seen.
> >
> > This should be used by the organizers to make the choices and
> organize
> > the program.
> > The organizers have the final say on the choice of presentations
> and
> > schedule
> >
> > Reviewers are there to help the process not control it.
> >
> > I would be worried that you do not have enough reviewers rather
> than
> > too
> > many.
> > Then the work falls on the PMC and organizers.
> >
>

Re: System VM Template

2018-04-05 Thread Tutkowski, Mike
proto static' returned non-zero exit 
status 1
2018-04-05 16:45:06,076  CsRoute.py set_route:74 Add dev eth0 table Table_eth0 
throw 10.117.40.0/25 proto static
2018-04-05 16:45:06,076  CsHelper.py execute:188 Executing: ip route add dev 
eth0 table Table_eth0 throw 10.117.40.0/25 proto static
2018-04-05 16:45:06,085  CsHelper.py execute:193 Command 'ip route add dev eth0 
table Table_eth0 throw 10.117.40.0/25 proto static' returned non-zero exit 
status 2
2018-04-05 16:45:06,086  CsHelper.py execute:188 Executing: sudo ip route flush 
cache
2018-04-05 16:45:06,103  CsHelper.py copy:263 Copied 
/etc/apache2/vhost.template to 
/etc/apache2/sites-enabled/vhost-10.117.40.33.conf
2018-04-05 16:45:06,107  CsFile.py commit:66 Wrote edited file 
/etc/apache2/sites-enabled/vhost-10.117.40.33.conf
2018-04-05 16:45:06,107  CsFile.py commit:68 Updated file in-cache configuration
2018-04-05 16:45:06,107  CsHelper.py execute:188 Executing: systemctl restart 
apache2

On 4/4/18, 7:04 AM, "Rafael Weingärtner" <rafaelweingart...@gmail.com> wrote:

Hey Mike,

This week I have been using ACS 4.12 to do some testing. VRs and system VMs
are deploying just fine with the system VM template of 4.11. Of course, by
using this template (the 4.11) I am not receiving the changes already made
to it in both 4.11 and current master branch.

During my testes, I allocated a public IP, created some NAT rules,
allocated directly attach IPs. Everything was working as expected.


The hypervisor I am using is XenServer both 6.5 and 7.2.

On Tue, Apr 3, 2018 at 1:56 AM, Tutkowski, Mike <mike.tutkow...@netapp.com>
wrote:

> Hi,
>
> I may have missed an e-mail about this recently.
>
> Can someone provide me with the current URL I can use to download system
> VM templates for 4.12?
>
> I’ve tried 4.11 from here:
>
> http://cloudstack.apt-get.eu/systemvm/4.11/
>
> and master from here:
>
> https://builds.cloudstack.org/job/build-master-systemvm/
>
> However, in neither case can I get the VR up and running on 4.12.
>
> Thanks!
> Mike
>



-- 
Rafael Weingärtner




Re: System VM Template

2018-04-05 Thread Tutkowski, Mike
OK, wait a second. :)

It works now. It just took a longer time than normal.

When I examine the VR in the GUI, it no longer says it requires an upgrade and 
has transitioned to the Running state.

It usually only takes a minute or so for it to come up and get into the Running 
state. It took about 10 minutes in this case, but it did end up working.

On 4/5/18, 10:56 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

Thanks for your feedback, Rafael.

I re-created my 4.12 cloud today (after fetching the latest code and using 
the master branch) and still seem to be having trouble with the VR. The 
hypervisor type I’m using here is XenServer 6.5.

When I examine the VR in the CloudStack GUI, the “Requires Upgrade” column 
says, “Yes”. However, when I try to initiate the upgrade, I get an error 
message stating that the VR is not in the proper state (because it’s stuck in 
the Starting state).

The system VM template I am working with is the following: 
http://cloudstack.apt-get.eu/systemvm/4.11/

In case anyone sees something, I’ve included the contents of my VR’s 
cloud.log file below.

Thanks!

Thu Apr  5 16:45:01 UTC 2018 Executing cloud-early-config
Thu Apr  5 16:45:01 UTC 2018 Detected that we are running inside xen-domU
Thu Apr  5 16:45:02 UTC 2018 Scripts checksum detected: 
oldmd5=60703a62ef9d1666975ec0a8ce421270 newmd5=7f8c303cd3303ff902e7ad9f3f1f092b
Thu Apr  5 16:45:02 UTC 2018 Patched scripts using 
/media/cdrom/cloud-scripts.tgz
Thu Apr  5 16:45:02 UTC 2018 Patching cloud service
Thu Apr  5 16:45:02 UTC 2018 Configuring systemvm type=dhcpsrvr
Thu Apr  5 16:45:02 UTC 2018 Setting up dhcp server system vm
Thu Apr  5 16:45:04 UTC 2018 Setting up dnsmasq
Thu Apr  5 16:45:05 UTC 2018 Setting up apache web server
Thu Apr  5 16:45:05 UTC 2018 Processors = 1  Enable service  = 0
Thu Apr  5 16:45:05 UTC 2018 cloud: enable_fwding = 0
Thu Apr  5 16:45:05 UTC 2018 enable_fwding = 0
Thu Apr  5 16:45:05 UTC 2018 Finished setting up systemvm
2018-04-05 16:45:05,924  merge.py load:296 Continuing with the processing 
of file '/var/cache/cloud/cmd_line.json'
2018-04-05 16:45:05,927  merge.py process:101 Command of type cmdline 
received
2018-04-05 16:45:05,928  merge.py process:101 Command of type ips received
2018-04-05 16:45:05,929  merge.py process:101 Command of type ips received
2018-04-05 16:45:05,930  CsHelper.py execute:188 Executing: ip addr show 
dev eth1
2018-04-05 16:45:05,941  CsHelper.py execute:188 Executing: ip addr show 
dev eth0
2018-04-05 16:45:05,950  CsHelper.py execute:188 Executing: ip addr show 
dev eth1
2018-04-05 16:45:05,958  CsAddress.py process:108 Address found in DataBag 
==> {u'public_ip': u'169.254.3.171', u'one_to_one_nat': False, u'nic_dev_id': 
u'1', u'network': u'169.254.0.0/16', u'netmask': u'255.255.0.0', u'source_nat': 
False, u'broadcast': u'169.254.255.255', u'add': True, u'nw_type': u'control', 
u'device': u'eth1', u'cidr': u'169.254.3.171/16', u'gateway': u'None', u'size': 
u'16'}
2018-04-05 16:45:05,959  CsAddress.py process:116 Address 169.254.3.171/16 
on device eth1 already configured
2018-04-05 16:45:05,959  CsRoute.py defaultroute_exists:103 Checking if 
default ipv4 route is present
2018-04-05 16:45:05,959  CsHelper.py execute:188 Executing: ip -4 route 
list 0/0
2018-04-05 16:45:05,967  CsRoute.py defaultroute_exists:107 Default route 
found: default via 10.117.40.126 dev eth0 
2018-04-05 16:45:05,967  CsHelper.py execute:188 Executing: ip addr show 
dev eth0
2018-04-05 16:45:05,976  CsAddress.py process:108 Address found in DataBag 
==> {u'public_ip': u'10.117.40.33', u'one_to_one_nat': False, u'nic_dev_id': 
u'0', u'network': u'10.117.40.0/25', u'netmask': u'255.255.255.128', 
u'source_nat': False, u'broadcast': u'10.117.40.127', u'add': True, u'nw_type': 
u'guest', u'device': u'eth0', u'cidr': u'10.117.40.33/25', u'gateway': u'None', 
u'size': u'25'}
2018-04-05 16:45:05,976  CsAddress.py process:116 Address 10.117.40.33/25 
on device eth0 already configured
2018-04-05 16:45:05,976  CsRoute.py add_table:37 Adding route table: 0 
Table_eth0 to /etc/iproute2/rt_tables if not present 
2018-04-05 16:45:05,978  CsHelper.py execute:188 Executing: sudo echo 0 
Table_eth0 >> /etc/iproute2/rt_tables
2018-04-05 16:45:06,015  CsHelper.py execute:188 Executing: ip rule show
2018-04-05 16:45:06,026  CsHelper.py execute:188 Executing: ip rule show
2018-04-05 16:45:06,034  CsHelper.py execute:188 Executing: ip rule add 
fwmark 0 table Table_eth0
2018-04-05 16:45:06,042  CsRule.py addMark:49 Added fwmark rule for 
Table_eth0
2018-04-05 16:45:06,043  CsHelper.py execute:188 Executing: ip link show 
eth0 | grep 'state DOWN'
2018-04-05 16:45:06,053  CsHelper.py execute:193 Command 'ip link show eth0 
| grep 'state DOWN'' returned non-zero exit status 1

Re: Committee to Sort through CCC Presentation Submissions

2018-04-05 Thread Tutkowski, Mike
Hi Ron,

We (mainly Giles and Will, from what I am aware) are still in the process of 
finalizing how many rooms we get and for how long, so – unfortunately – we 
can’t answer your questions at least at this time.

We’re making progress on that front, though.

Thanks,
Mike

On 4/5/18, 10:28 PM, "Ron Wheeler" <rwhee...@artifact-software.com> wrote:


By the time you go through one and write up a commentary, you have used 
quite a bit of your discretionary time.
How many days are in the review period?

How many reviewers have volunteered?

I would hope that key organizers of the conference are only reviewing 
finalists where the author has already done a revision to address the 
reviewers comments and the reviewers have given it a passing grade.

How many presentations are going to be given?
Are there any "reserved" slots for presentations that will be given on 
behalf of the PMC as official project reports such as a roadmap or 
project overview?

Ron

On 05/04/2018 9:21 PM, Will Stevens wrote:
> I need to get through a couple reviews to figure out the commitment. I 
> have been a bit slammed at the moment.
>
> On Thu, Apr 5, 2018, 9:19 PM Tutkowski, Mike, 
> <mike.tutkow...@netapp.com <mailto:mike.tutkow...@netapp.com>> wrote:
>
> Will – What do you think? With only 26 presentations, do you think
> it would be reasonable to just ask each reviewer to review each
> one? One time that I was on one of these panels a couple years
> ago, we each reviewed the roughly dozen presentations that were
> submitted. Of course, people may not be able to spend that amount
> of time on this.
>
> > On Apr 5, 2018, at 7:14 PM, Ron Wheeler
> <rwhee...@artifact-software.com
> <mailto:rwhee...@artifact-software.com>> wrote:
> >
> > We still need to manage the review process and make sure that it
> is adequately staffed.
> >
> > The allocation of presentations to reviewers has to be managed
> to be sure that the reviewers have the support that they need to
> do a proper review and that the reviews get done.
> >
> > Ron
> >
> >
> >> On 05/04/2018 11:45 AM, Tutkowski, Mike wrote:
> >> Perfect…then, unless anyone has other opinions they’d like to
> share on the topic, let’s follow that approach.
> >>
> >> On 4/5/18, 9:43 AM, "Rafael Weingärtner"
    > <rafaelweingart...@gmail.com <mailto:rafaelweingart...@gmail.com>>
> wrote:
> >>
> >> That is exactly it.
> >>  On Thu, Apr 5, 2018 at 12:37 PM, Tutkowski, Mike
> <mike.tutkow...@netapp.com <mailto:mike.tutkow...@netapp.com>>
> >> wrote:
> >>  > Hi Rafael,
> >> >
> >> > I think as long as we (the CloudStack Community) have the
> final say on how
> >> > we fill our allotted slots in the CloudStack track of
> ApacheCon in
> >> > Montreal, then it’s perfectly fine for us to leverage
> Apache’s normal
> >> > review process to gather all the feedback from the larger
> Apache Community.
> >> >
> >> > As you say, we could wait for the feedback to come in via
> that mechanism
> >> > and then, as per Will’s earlier comments, we could
> advertise on our users@
> >> > and dev@ mailing lists when we plan to get together for a
> call and make
> >> > final decisions on the CFP.
> >> >
> >> > Is that, in fact, what you were thinking, Rafael?
> >> >
> >> > Talk to you soon,
> >> > Mike
> >> >
> >> > On 4/4/18, 2:58 PM, "Rafael Weingärtner"
> <rafaelweingart...@gmail.com <mailto:rafaelweingart...@gmail.com>>
> >> > wrote:
> >> >
> >> > I think everybody that “raised their hands here”
> already signed up to
> >> > review.
> >> >
> >> > Mike, what about if we only gathered the reviews from
> Apache main
> >> > review
> >

Re: [DISCUSS] CloudStack graceful shutdown

2018-04-04 Thread Tutkowski, Mike
I may be remembering this incorrectly, but from what I recall, if a resource is 
owned by one MS and a request related to that resource comes in to another MS, 
the MS that received the request passes it on to the other MS.

> On Apr 4, 2018, at 2:36 PM, Rafael Weingärtner  
> wrote:
> 
> Big +1 for this feature; I only have a few doubts.
> 
> * Regarding the tasks/jobs that management servers (MSs) execute: do these
> tasks originate from requests that come to the MS, or is it possible for
> requests received by one management server to be executed by another? I mean,
> if I execute a request against MS1, will this request always be
> executed/treated by MS1, or is it possible that this request is executed
> by another MS (e.g. MS2)?
> 
> * I would suggest that after we block traffic coming from 8080/8443/8250 (we
> will need to block this as well, right?), we can log the execution of tasks.
> I mean, something saying: there are XXX tasks (enumerate tasks) still being
> executed, and we will wait for them to finish before shutting down.
> 
> * The timeout (60 minutes suggested) could be a global setting that we
> load before executing the graceful shutdown.
> 
> On Wed, Apr 4, 2018 at 5:15 PM, ilya musayev 
> wrote:
> 
>> Use case:
>> In any environment - from time to time - an administrator needs to perform
>> maintenance. The current stop sequence of the CloudStack management server will
>> ignore the fact that there may be long-running async jobs - and terminate
>> the process. This in turn can create a poor user experience and occasional
>> inconsistency in the CloudStack DB.
>> 
>> This is especially painful in large environments where the user has
>> thousands of nodes and continuous patching happens around
>> the clock - which requires migration of workload from one node to another.
>> 
>> With that said - I've created a script that monitors the async job queue
>> for a given MS and waits for it to complete all jobs. More details are posted
>> below.
>> 
>> I'd like to introduce "graceful-shutdown" into the systemctl/service of the
>> cloudstack-management service.
>> 
>> The details of how it will work is below:
>> 
>> Workflow for graceful shutdown:
>>  Using iptables/firewalld - block any connection attempts on 8080/8443 (we
>> can identify the ports dynamically)
>>  Identify the MSID for the node, using the proper msid - query async_job
>> table for
>> 1) any jobs that are still running (or job_status=“0”)
>> 2) job_dispatcher not like “pseudoJobDispatcher"
>> 3) job_init_msid=$my_ms_id
>> 
>> Monitor this async_job table for 60 minutes - until all async jobs for MSID
>> are done, then proceed with shutdown
>>If failed for any reason or terminated, catch the exit via trap command
>> and unblock the 8080/8443
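
To make the monitoring step concrete, here is a rough sketch of what I 
understand that check to be (this is not ilya's actual script; the connection 
details and MSID are placeholders, and the iptables blocking/unblocking is only 
noted in comments):

import time

import pymysql  # assumes the MySQL "cloud" database is reachable from this host

MY_MS_ID = 123456789          # placeholder: this node's msid
TIMEOUT_SECONDS = 60 * 60     # the 60-minute window described above

PENDING_JOBS_SQL = """
    SELECT id, job_cmd
      FROM async_job
     WHERE job_status = 0
       AND job_dispatcher NOT LIKE 'pseudoJobDispatcher'
       AND job_init_msid = %s
"""

def pending_jobs(conn):
    with conn.cursor() as cur:
        cur.execute(PENDING_JOBS_SQL, (MY_MS_ID,))
        return cur.fetchall()

# (Before this point the script would block 8080/8443 via iptables/firewalld.)
conn = pymysql.connect(host="localhost", user="cloud", password="secret",
                       db="cloud")
deadline = time.time() + TIMEOUT_SECONDS
try:
    while time.time() < deadline:
        jobs = pending_jobs(conn)
        if not jobs:
            print("No pending async jobs; safe to stop cloudstack-management.")
            break
        print("Still waiting on %d async jobs" % len(jobs))
        time.sleep(30)
    else:
        print("Timed out after 60 minutes; aborting the graceful shutdown.")
finally:
    # (Equivalent of the trap: unblock 8080/8443 again on exit.)
    conn.close()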
>> 
>> Comments are welcome
>> 
>> Regards,
>> ilya
>> 
> 
> 
> 
> -- 
> Rafael Weingärtner


Re: Community opinion regarding Apache events banner in CloudStack's website

2018-04-18 Thread Tutkowski, Mike
I like option 3 the best.

On 4/17/18, 12:43 PM, "Dag Sonstebo"  wrote:

A biased +1 for option 3 from me.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 17/04/2018, 19:42, "Rafael Weingärtner"  
wrote:

Third option (suggested by Dag) -
https://drive.google.com/open?id=16FEu9tD1HZqwxLp2lrnUBmsuRJNLpDqU

On Tue, Apr 17, 2018 at 3:39 PM, Dag Sonstebo 

wrote:

> Hi Rafael – in my opinion you need it fairly prominent so people 
notice it
> – so option 1, but maybe put it underneath the CloudMonkey logo on the
> right hand side?
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 17/04/2018, 19:35, "Rafael Weingärtner" 

> wrote:
>
> Ah damm.. I forgot about the file stripping in our mailing list.
> Sorry guys. Here they go.
>
> - first one:
> https://drive.google.com/open?id=1vSqni_GEj3YJjuGehxe-_dnrNqQP7x8y
>
> - second one:
> https://drive.google.com/open?id=1LEmt9g5ceAUeTuc2a1Cb4uctOwyz5eQ8
>
> On Tue, Apr 17, 2018 at 3:31 PM, Dag Sonstebo <
> dag.sonst...@shapeblue.com>
> wrote:
>
> > The white one is quite nice ☺
> >
> > Joking aside – looks like they got stripped from your email 
Rafael.
> >
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> >
> > From: Rafael Weingärtner 
> > Reply-To: "dev@cloudstack.apache.org" 

> > Date: Tuesday, 17 April 2018 at 19:13
> > To: users , dev <
> dev@cloudstack.apache.org>
> > Subject: Community opinion regarding Apache events banner in
> CloudStack's
> > website
> >
> > Hello folks,
> > I am trying to work out something to put Apache events banner 
on our
> > website. So far I came up with two proposals. Which one of them 
do
> you guys
> > prefer?
> > First one:
> > [cid:ii_jg3zjco00_162d4ce7db0cd3da]
> >
> >
> > Second:
> > [cid:ii_jg3zk0e01_162d4cefaef3a1ce]
> >
> > --
> > Rafael Weingärtner
> >
> > dag.sonst...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
>
>
> --
> Rafael Weingärtner
>
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


-- 
Rafael Weingärtner



dag.sonst...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 





Re: Community opinion regarding Apache events banner in CloudStack's website

2018-04-18 Thread Tutkowski, Mike
My vote is for 3.

On 4/18/18, 5:13 AM, "Rafael Weingärtner"  wrote:

I am confused as well. Here goes a link with all image suggestions [1]. The
tally so far is the following:

   - Option 1  (banner in the middle of the main CloudStack section)- 1
   vote (Rafael)
   - Option 2  (a new section called “Apache events”)- 1 vote (Gabriel)
   - Option 3 (Banner aligned right and under CloudMonkey image – we can
   apply some improvements suggested by Nitin here) – 8 votes (Dag, Swen,
   Stephan, Boris, Ron, Grégorie, Nitin, Mauricio)
   - Option 4  (aligned left) – 1 vote (Nicolás)


[1] https://drive.google.com/open?id=1dqjPI2PBZ3hvKr2eEWaH3Fj_hLhXUxvB

On Wed, Apr 18, 2018 at 8:06 AM, Will Stevens  wrote:

> Can we list the options and their voting numbers?  I am a bit confused
> right now.
>
> I like the one that is left aligned under the text and keeps the logo on
> the right full size.
>
> On Tue, Apr 17, 2018, 10:41 PM Nitin Maharana, 
> wrote:
>
> > +1 for the third option. I think it would even look good if we adjust the
> > vertical alignment of the words "Apache CloudStack" to center.
> > vertical alignment of the word "Apache CloudStack" to center.
> >
> > On Wed, Apr 18, 2018 at 12:12 AM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > Third option (suggested by Dag) -
> > > https://drive.google.com/open?id=16FEu9tD1HZqwxLp2lrnUBmsuRJNLpDqU
> > >
> > > On Tue, Apr 17, 2018 at 3:39 PM, Dag Sonstebo <
> > dag.sonst...@shapeblue.com>
> > > wrote:
> > >
> > > > Hi Rafael – in my opinion you need it fairly prominent so people
> notice
> > > it
> > > > – so option 1, but maybe put it underneath the CloudMonkey logo on
> the
> > > > right hand side?
> > > >
> > > > Regards,
> > > > Dag Sonstebo
> > > > Cloud Architect
> > > > ShapeBlue
> > > >
> > > > On 17/04/2018, 19:35, "Rafael Weingärtner" <
> > rafaelweingart...@gmail.com>
> > > > wrote:
> > > >
> > > > Ah damm.. I forgot about the file stripping in our mailing list.
> > > > Sorry guys. Here they go.
> > > >
> > > > - first one:
> > > > https://drive.google.com/open?id=1vSqni_GEj3YJjuGehxe-_
> dnrNqQP7x8y
> > > >
> > > > - second one:
> > > > https://drive.google.com/open?id=1LEmt9g5ceAUeTuc2a1Cb4uctOwyz5
> eQ8
> > > >
> > > > On Tue, Apr 17, 2018 at 3:31 PM, Dag Sonstebo <
> > > > dag.sonst...@shapeblue.com>
> > > > wrote:
> > > >
> > > > > The white one is quite nice ☺
> > > > >
> > > > > Joking aside – looks like they got stripped from your email
> > Rafael.
> > > > >
> > > > > Regards,
> > > > > Dag Sonstebo
> > > > > Cloud Architect
> > > > > ShapeBlue
> > > > >
> > > > > From: Rafael Weingärtner 
> > > > > Reply-To: "dev@cloudstack.apache.org" <
> dev@cloudstack.apache.org
> > >
> > > > > Date: Tuesday, 17 April 2018 at 19:13
> > > > > To: users , dev <
> > > > dev@cloudstack.apache.org>
> > > > > Subject: Community opinion regarding Apache events banner in
> > > > CloudStack's
> > > > > website
> > > > >
> > > > > Hello folks,
> > > > > I am trying to work out something to put Apache events banner
> on
> > > our
> > > > > website. So far I came up with two proposals. Which one of 
them
> > do
> > > > you guys
> > > > > prefer?
> > > > > First one:
> > > > > [cid:ii_jg3zjco00_162d4ce7db0cd3da]
> > > > >
> > > > >
> > > > > Second:
> > > > > [cid:ii_jg3zk0e01_162d4cefaef3a1ce]
> > > > >
> > > > > --
> > > > > Rafael Weingärtner
> > > > >
> > > > > dag.sonst...@shapeblue.com
> > > > > www.shapeblue.com
> > > > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > > > @shapeblue
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Rafael Weingärtner
> > > >
> > > >
> > > >
> > > > dag.sonst...@shapeblue.com
> > > > www.shapeblue.com
> > > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > > @shapeblue
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
>



-- 
Rafael Weingärtner




Re: Committee to Sort through CCC Presentation Submissions

2018-04-24 Thread Tutkowski, Mike
Hi everyone,

This note is a follow-up to this discussion thread.

Around the middle of April, the CloudStack PMC received an e-mail indicating 
(to our surprise) that we needed to provide the people organizing Montreal’s 
upcoming ApacheCon (which includes the CloudStack Collab Conf) with a schedule 
by Thursday, April 19th.

As this was sooner than we had expected, we were not able to get the group of 
people who had volunteered to look at the CFP together for a call.

Giles, Will Stevens, and I ended up examining the 29 submissions. We determined 
that four people had submitted more than one proposal (which is perfectly fine, 
of course). Since we needed to respond to the organizers quickly, we decided 
that if we limited each person who submitted one or more abstracts to a single 
accepted talk, we would be able to accept a presentation from every submitter.

Please let me know if you have questions.

Thanks,
Mike

On 3/27/18, 1:39 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

Hi everyone,

As you may be aware, this coming September in Montreal, the CloudStack 
Community will be hosting the CloudStack Collaboration Conference:

http://ca.cloudstackcollab.org/

Even though the event is six months away, we are on a tight schedule with 
regards to the Call For Participation (CFP):

https://www.apachecon.com/acna18/schedule.html

If you are interested in submitting a talk, please do so before March 30th.

That being said, as usual, we will have need of a small committee to sort 
through these presentation submissions.

If you are interested in helping out in this process, please reply to this 
message.

Thanks!
Mike




Re: Welcoming Mike as the new Apache CloudStack VP

2018-03-27 Thread Tutkowski, Mike
Thanks, everyone!

Great job, Wido!

> On Mar 27, 2018, at 4:39 AM, Nicolas Vazquez  
> wrote:
> 
> Thanks Wido! Congratulations Mike!
> 
> 
> From: Wido den Hollander 
> Sent: Monday, March 26, 2018 11:11:02 AM
> To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
> Subject: Welcoming Mike as the new Apache CloudStack VP
> 
> Hi all,
> 
> It's been a great pleasure working with the CloudStack project as the
> ACS VP over the past year.
> 
> A big thank you from my side for everybody involved with the project in
> the last year.
> 
> Hereby I would like to announce that Mike Tutkowski has been elected to
> replace me as the Apache Cloudstack VP in our annual VP rotation.
> 
> Mike has a long history with the project and I am happy to welcome him
> as the new VP for CloudStack.
> 
> Welcome Mike!
> 
> Thanks,
> 
> Wido
> 
> nicolas.vazq...@shapeblue.com 
> www.shapeblue.com
> ,   
> @shapeblue
> 
> 
> 


Re: Committee to Sort through CCC Presentation Submissions

2018-03-31 Thread Tutkowski, Mike
Hi Ron,

From what I understand, the CloudStack proposals will be mixed in with all of 
the ApacheCon proposals.

In the past when I’ve participated in these CloudStack panels to review 
proposals, we had to compare each proposal against the others to arrive at a 
balance of topics (i.e. not all networking focused, not all XenServer focused, 
etc.) and to suggest improvements for proposals that we did not accept for 
other reasons.

From what I understand (but Giles can comment further on this), we have a track 
at ApacheCon and will need to fill it with X number of presentations. To do 
this, it seems like a CloudStack-focused panel would be a good approach, but I 
am definitely open to another approach. We don’t want to exclude anyone (in or 
out of the CloudStack Community) who might like to provide input. Anyone who is 
interested would, of course, be free to join us in combing through the 
proposals.

We don’t need to get started on this right away. The CFP just closed yesterday. 
Let’s wait for feedback from Giles (who is currently on vacation) and go from 
there.

Thanks!
Mike

On 3/31/18, 6:59 AM, "Ron Wheeler" <rwhee...@artifact-software.com> wrote:

Is this a real concern?
Why would a large number of Apache contributors who are not interested 
in Cloudstack (enough to outvote those "part of the Cloudstack 
community") get involved as reviewers?

Reviewing involves some commitment of time so I am hard pressed to guess 
why some Apache contributor would volunteer to do the work in order to 
veto a presentation that they have not yet seen or have no interest in 
seeing.

Are we guaranteed a fixed number of hours of presentations or is the 
review process part of the allocation of overall time?

On what basis can some group veto a presentation?
That would seem to be a very strong action and I would hope that it 
requires a strong reason.

OTOH if a large??? number of Apache contributors (regardless of their 
affiliation) say that a presentation has serious issues or very limited 
interest, that would seem to be a red flag that the presentation 
requires improvement or needs to be dropped in favour of another 
Cloudstack presentation, if it can not be fixed.

We should also be aware that this is an opportunity to "market" 
Cloudstack to the broader Apache community.
Outside reviewers might have valuable input into how presentations can 
attract new adopters or be clearer to the broader DevOps community.
We also need to remember that we do have an active community and other 
opportunities during the year to present presentations that do not get 
selected for this conference.

If there is a real fear that a lot of "outsiders" are going to disrupt 
the review process, a more reasonable response would seem to be to get 
more reviewers from the community.

I have volunteered already.

Ron
    
On 30/03/2018 11:11 PM, Tutkowski, Mike wrote:
> Hi Rafael,
>
> It’s a little bit tricky in our particular situation. Allow me to explain:
>
> As you are likely aware, the CloudStack Collaboration Conference will be 
held as a track in the larger ApacheCon conference in Montreal this coming 
September.
>
> It is true, as you say, that anyone who wishes to do so can contribute to 
reviewing the CFP for ApacheCon.
>
> What is a bit of a concern, however, is that we might get certain 
CloudStack CFP proposals vetoed by people who are not, per se, a part of our 
community.
>
> That being the case, I have contacted the organizers for ApacheCon to see 
if there is some way we can section off the CloudStack CFP from the larger 
ApacheCon CFP for review purposes.
>
> Assuming we can do this, the panel that I am proposing here would handle 
this review task.
>
> I hope that helps clarify the situation.
>
> Thanks!
> Mike
>
> On 3/30/18, 8:38 AM, "Rafael Weingärtner" <rafaelweingart...@gmail.com> 
wrote:
>
>  Are we going to have a separate review process?
>  
>  I thought anybody could go here [1] and apply for a reviewer position and
>  start reviewing. Well, that is what I did. I have already reviewed some
>  CloudStack proposals (of course I did not review mine). After asking to
>  review presentations, Rich has given me access to the system. I thought
>  everybody interested in helping was going to do the same.
asking to
>  review presentations, Rich has giving me access to the system. I 
thought
>  everybody interest in helping was going to do the same.
>  
>  [1] 
https://cfp.apachecon.com/conference.html?apachecon-north-america-2018
>  
>  
>  On Wed, Mar 28, 2018 at 4:05 AM, Swen - swen.io <m...@swen.io> wrote:
>  
>  

Re: Committee to Sort through CCC Presentation Submissions

2018-03-31 Thread Tutkowski, Mike
Hi Ron,

I am definitely open to working this however makes the most sense.

It looks like Will’s e-mail indicates that the process I suggested has been 
followed in the past (which is how I recall, as well).

Let’s make sure I understood Will correctly.

Will – Are you, in fact, indicating that what I was suggesting is how we have 
reviewed the CFP in the past? If so, are you able to address Ron’s concerns?

Also, Will – I am not sure about a hackathon. Let’s chat with Giles once he’s 
back from vacation since he’s been the most involved with organizing the 
CloudStack track within ApacheCon.

Thanks!
Mike

On 3/31/18, 9:00 PM, "Ron Wheeler" <rwhee...@artifact-software.com> wrote:

I am not sure about your concern in that case.
I am not sure why people not interested in Cloudstack would volunteer as 
reviewers and want to pick bad presentations.

I would be more worried that there are not enough good presentations 
proposed rather than some meritorious presentation will get rejected due 
to "outsiders" voting it down in favour of less useful presentations.

It may be tricky to get balance if that means taking "bad" proposals 
that cannot be fixed, simply because they cover topics that are not 
otherwise covered, at the expense of great presentations that are in 
areas with many choices.

We should wait to see how many presentations have to be rejected and the 
number of reviewers before getting too exercised over the loyalty of 
reviewers.

Getting more reviewers is likely the most effective way to see that a 
wider range of topics is covered.

Ron

    On 31/03/2018 7:15 PM, Tutkowski, Mike wrote:
> Hi Ron,
>
>  From what I understand, the CloudStack proposals will be mixed in with 
all of the ApacheCon proposals.
>
> In the past when I’ve participated in these CloudStack panels to review 
proposals, we had to compare each proposal against the others to arrive at a 
balance of topics (i.e. not all networking focused, not all XenServer focused, 
etc.) and to suggest improvements for proposals that we did not accept for 
other reasons.
>
>  From what I understand (but Giles can comment further on this), we have 
a track at ApacheCon and will need to fill it with X number of presentations. 
To do this, it seems like a CloudStack-focused panel would be a good approach, 
but I am definitely open to another approach. We don’t want to exclude anyone 
(in or out of the CloudStack Community) who might like to provide input. Anyone 
who is interested would, of course, be free to join us in combing through the 
proposals.
>
> We don’t need to get started on this right away. The CFP just closed 
yesterday. Let’s wait for feedback from Giles (who is currently on vacation) 
and go from there.
>
> Thanks!
> Mike
>
> On 3/31/18, 6:59 AM, "Ron Wheeler" <rwhee...@artifact-software.com> wrote:
>
>  Is this a real concern?
>  Why would a large number of Apache contributors who are not 
interested
>  in Cloudstack (enough to outvote those "part of the Cloudstack
>  community") get involved as reviewers
>  
>  Reviewing involves some commitment of time so I am hard pressed to 
guess
>  why some Apache contributor would volunteer to do the work in order 
to
>  veto a presentation that they have not yet seen or have no interest 
in
>  seeing.
>  
>  Are we guaranteed a fixed number of hours of presentations or is the
>  review process part of the allocation of overall time?
>  
>  On what basis can some group veto a presentation?
>  That would seem to be a very strong action and I would hope that it
>  requires a strong reason.
>  
>  OTOH if a large??? number of Apache contributors (regardless of their
>  affiliation) say that a presentation has serious issues or very 
limited
>  interest, that would seem to be a red flag that the presentation
>  requires improvement or needs to be dropped in favour of another
>  Cloudstack presentation, if it can not be fixed.
>  
>  We should also be aware that this is an opportunity to "market"
>  Cloudstack to the broader Apache community.
>  Outside reviewers might have valuable input into how presentations 
can
>  attract new adopters or be clearer to the broader DevOps community.
>  We also need to remember that we do have an active community and 
other
>  opportunities during the year to present presentations that do not 
get
>  selected for this conference.
>  
>  If there is 

Re: Committee to Sort through CCC Presentation Submissions

2018-03-31 Thread Tutkowski, Mike
Thanks for the feedback, Will!

I agree with the approach you outlined.

Thanks for being so involved in the process! Let’s chat with Giles once he’s 
back to see if we can get your questions answered.

> On Mar 31, 2018, at 10:14 PM, Will Stevens <wstev...@cloudops.com> wrote:
> 
> In the past the committee was chosen as a relatively small group in order
> to make it easier to manage feedback.  In order to make it fair to everyone
> in the community, I would suggest that instead of doing it with a small
> group, we do it out in the open on a scheduled call.
> 
> We will have to get a list of the talks that are CloudStack specific from
> ApacheCon, but that should be possible.
> 
> Once we have the talks selected, then a smaller number of us can work on
> setting up the actual ordering and the details.
> 
> I have been quite involved so far.  Giles and I have been organizing the
> sponsors, website and dealing with ApacheCon so far.  Obviously, Mike is
> also working on this as well.
> 
> I think we are headed in the right direction on this.
> 
> Cheers,
> 
> Will
> 
> On Mar 31, 2018 11:49 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com>
> wrote:
> 
> Hi Ron,
> 
> I am definitely open to working this however makes the most sense.
> 
> It looks like Will’s e-mail indicates that the process I suggested has been
> followed in the past (which is how I recall, as well).
> 
> Let’s make sure I understood Will correctly.
> 
> Will – Are you, in fact, indicating that what I was suggesting is how we
> have reviewed the CFP in the past? If so, are you able to address Ron’s
> concerns?
> 
> Also, Will – I am not sure about a hackathon. Let’s chat with Giles once
> he’s back from vacation since he’s been the most involved with organizing
> the CloudStack track within ApacheCon.
> 
> Thanks!
> 
> Mike
> 
> 
> On 3/31/18, 9:00 PM, "Ron Wheeler" <rwhee...@artifact-software.com> wrote:
> 
>I am not sure about your concern in that case.
>I am not sure why people not interested in Cloudstack would volunteer as
>reviewers and want to pick bad presentations.
> 
>I would be more worried that there are not enough good presentations
>proposed rather than some meritorious presentation will get rejected due
>to "outsiders" voting it down in favour of less useful presentations.
> 
>It may be tricky to get balance if that means taking "bad" proposals
>that cannot be fixed, simply because they cover topics that are not
>otherwise covered, at the expense of great presentations that are in
>areas with many choices.
> 
>We should wait to see how many presentations have to be rejected and the
>number of reviewers before getting too exercised over the loyalty of
>reviewers.
> 
>Getting more reviewers is likely the most effective way to see that a
>wider range of topics is covered.
> 
>Ron
> 
>>On 31/03/2018 7:15 PM, Tutkowski, Mike wrote:
>> Hi Ron,
>> 
>> From what I understand, the CloudStack proposals will be mixed in
> with all of the ApacheCon proposals.
>> 
>> In the past when I’ve participated in these CloudStack panels to
> review proposals, we had to compare each proposal against the others to
> arrive at a balance of topics (i.e. not all networking focused, not all
> XenServer focused, etc.) and to suggest improvements for proposals that we
> did not accept for other reasons.
>> 
>> From what I understand (but Giles can comment further on this), we
> have a track at ApacheCon and will need to fill it with X number of
> presentations. To do this, it seems like a CloudStack-focused panel would
> be a good approach, but I am definitely open to another approach. We don’t
> want to exclude anyone (in or out of the CloudStack Community) who might
> like to provide input. Anyone who is interested would, of course, be free
> to join us in combing through the proposals.
>> 
>> We don’t need to get started on this right away. The CFP just closed
> yesterday. Let’s wait for feedback from Giles (who is currently on
> vacation) and go from there.
>> 
>> Thanks!
>> Mike
>> 
>> On 3/31/18, 6:59 AM, "Ron Wheeler" <rwhee...@artifact-software.com>
> wrote:
>> 
>> Is this a real concern?
>> Why would a large number of Apache contributors who are not
> interested
>> in Cloudstack (enough to outvote those "part of the Cloudstack
>> community") get involved as reviewers
>> 
>> Reviewing involves some commitment of time so I am hard pressed
> to guess
>> why some Apache contrib

Re: [VOTE] Move to Github issues

2018-03-29 Thread Tutkowski, Mike
+1

On 3/26/18, 12:33 AM, "Rohit Yadav"  wrote:

All,

Based on the discussion last week [1], I would like to start a vote to put
the proposal into effect:

- Enable Github issues, wiki features in CloudStack repositories.
- Both user and developers can use Github issues for tracking issues.
- Developers can use #id references while fixing an existing/open issue in
a PR [2]. PRs can be sent without requiring to open/create an issue.
- Use Github milestone to track both issues and pull requests towards a
CloudStack release, and generate release notes.
- Relax requirement for JIRA IDs, JIRA still to be used for historical
reference and security issues. Use of JIRA will be discouraged.
- The current requirement of two(+) non-author LGTMs will continue for PR
acceptance. The two(+) PR non-authors can advise resolution to any issue
that we've not already discussed/agreed upon.

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

Vote will be open for 120 hours. If the vote passes the following actions
will be taken:
- Get Github features enabled from ASF INFRA
- Update CONTRIBUTING.md and other relevant cwiki pages.
- Update project website

[1] https://markmail.org/message/llodbwsmzgx5hod6
[2] https://blog.github.com/2013-05-14-closing-issues-via-pull-requests/
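
(As a concrete illustration of the convention referenced in [2] -- the issue number below is made up -- a pull request whose description or a commit message contains "Fixes #1234" will automatically close issue #1234 when the PR is merged.)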

Regards,
Rohit Yadav




System VM Template

2018-04-02 Thread Tutkowski, Mike
Hi,

I may have missed an e-mail about this recently.

Can someone provide me with the current URL I can use to download system VM 
templates for 4.12?

I’ve tried 4.11 from here:

http://cloudstack.apt-get.eu/systemvm/4.11/

and master from here:

https://builds.cloudstack.org/job/build-master-systemvm/

However, in neither case can I get the VR up and running on 4.12.

Thanks!
Mike


Re: Committee to Sort through CCC Presentation Submissions

2018-04-03 Thread Tutkowski, Mike
Hi Ron,

I don’t actually have insight into how many people have currently signed up 
online to be CFP reviewers for ApacheCon. At present, I’m only aware of those 
who have responded to this e-mail chain.

We should be able to find out more in the coming weeks. We’re still quite early 
in the process.

Thanks for your feedback,
Mike

On 4/1/18, 9:18 AM, "Ron Wheeler" <rwhee...@artifact-software.com> wrote:

How many people have signed up to be reviewers?

I don't think that scheduling is part of the review process; that can 
be done by the person/team "organizing" ApacheCon on behalf of the PMC.

To me review is looking at content for
- relevance
- quality of the presentations (suggest fixes to content, English, 
graphics, etc.)
This should result in a consensus score
- Perfect - ready for prime time
- Needs minor changes as documented by the reviewers
- Great topic but needs more work - perhaps a reviewer could volunteer 
to work with the presenter to get it ready if chosen
- Not recommended for topic or content reasons

The reviewers could also make non-binding recommendations about the 
balance between topics - marketing (why Cloudstack), 
Operations/implementation, Technical details, Roadmap, etc. based on 
what they have seen.

This should be used by the organizers to make the choices and organize 
the program.
The organizers have the final say on the choice of presentations and 
schedule

Reviewers are there to help the process not control it.

I would be worried that you do not have enough reviewers rather than too 
many.
Then the work falls on the PMC and organizers.

When planning meetings, I would recommend that you clearly separate the 
roles and only invite the reviewers to the meetings about review. Get 
the list of presentations to present to the reviewers and decide if there 
are any instructions that you want to give to reviewers.
I would recommend that you keep the organizing group small. Membership 
should be set by the PMC and should be people that are committed to the 
ApacheCon project and have the time. The committee can request help for 
specific tasks from others in the community who are not on the committee.

I would also recommend that organizers do not do reviews. They should 
read the finalists but if they do reviews, there may be a suggestion of 
favouring presentations that they reviewed. It also ensures that the 
organizers are not getting heat from rejected presenters - "it is the 
reviewers' fault you did not get selected".

My advice is to get as many reviewers as you can so that no one is 
essential and each reviewer has a limited number of presentations to 
review but each presentation gets reviewed by multiple people. Also bear 
in mind that not all reviewers have the same ability to review each 
presentation.
Reviews should be anonymous and only the summary comments given to the 
presenter. Reviewers of a presentation should be able to discuss the 
presentation during the review to make sure that reviewers do not feel 
isolated or get lost when they hit content that they don't understand fully.



Ron
    

    On 01/04/2018 12:20 AM, Tutkowski, Mike wrote:
> Thanks for the feedback, Will!
>
> I agree with the approach you outlined.
>
> Thanks for being so involved in the process! Let’s chat with Giles once 
he’s back to see if we can get your questions answered.
>
>> On Mar 31, 2018, at 10:14 PM, Will Stevens <wstev...@cloudops.com> wrote:
>>
>> In the past the committee was chosen as a relatively small group in order
>> to make it easier to manage feedback.  In order to make it fair to 
everyone
>> in the community, I would suggest that instead of doing it with a small
>> group, we do it out in the open on a scheduled call.
>>
>> We will have to get a list of the talks that are CloudStack specific from
>> ApacheCon, but that should be possible.
>>
>> Once we have the talks selected, then a smaller number of us can work on
>> setting up the actual ordering and the details.
>>
>> I have been quite involved so far.  Giles and I have been organizing the
>> sponsors, website and dealing with ApacheCon so far.  Obviously, Mike is
>> also working on this as well.
    >>
>> I think we are headed in the right direction on this.
>>
>> Cheers,
>>
>> Will
>>
>> On Mar 31, 2018 11:49 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com>
>> wrote:
>>
>> Hi Ron,
>>
>

Re: Committee to Sort through CCC Presentation Submissions

2018-03-30 Thread Tutkowski, Mike
Hi Rafael,

It’s a little bit tricky in our particular situation. Allow me to explain:

As you are likely aware, the CloudStack Collaboration Conference will be held 
as a track in the larger ApacheCon conference in Montreal this coming September.

It is true, as you say, that anyone who wishes to do so can contribute to 
reviewing the CFP for ApacheCon.

What is a bit of a concern, however, is that we might get certain CloudStack 
CFP proposals vetoed by people who are not, per se, a part of our community.

That being the case, I have contacted the organizers for ApacheCon to see if 
there is some way we can section off the CloudStack CFP from the larger 
ApacheCon CFP for review purposes.

Assuming we can do this, the panel that I am proposing here would handle this 
review task.

I hope that helps clarify the situation.

Thanks!
Mike

On 3/30/18, 8:38 AM, "Rafael Weingärtner" <rafaelweingart...@gmail.com> wrote:

Are we going to have a separated review process?

I thought anybody could go here [1] and apply for a reviewer position and
start reviewing. Well, that is what I did. I have already reviewed some
CloudStack proposals (of course I did not review mine). After asking to
review presentations, Rich has given me access to the system. I thought
everybody interested in helping was going to do the same.

[1] https://cfp.apachecon.com/conference.html?apachecon-north-america-2018


On Wed, Mar 28, 2018 at 4:05 AM, Swen - swen.io <m...@swen.io> wrote:

> Hi Mike,
>
> congrats!
>
> I can help sort through presentations.
>
> Best regards,
> Swen
>
    > -----Original Message-----
> From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
> Sent: Tuesday, 27 March 2018 21:40
> To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
> Subject: Committee to Sort through CCC Presentation Submissions
>
> Hi everyone,
>
> As you may be aware, this coming September in Montreal, the CloudStack
> Community will be hosting the CloudStack Collaboration Conference:
>
> http://ca.cloudstackcollab.org/
>
> Even though the event is six months away, we are on a tight schedule with
> regards to the Call For Participation (CFP):
>
> https://www.apachecon.com/acna18/schedule.html
>
> If you are interested in submitting a talk, please do so before March 
30th.
>
> That being said, as usual, we will have need of a small committee to sort
> through these presentation submissions.
>
> If you are interested in helping out in this process, please reply to this
> message.
>
> Thanks!
> Mike
>
>
>


-- 
Rafael Weingärtner




Committee to Sort through CCC Presentation Submissions

2018-03-27 Thread Tutkowski, Mike
Hi everyone,

As you may be aware, this coming September in Montreal, the CloudStack 
Community will be hosting the CloudStack Collaboration Conference:

http://ca.cloudstackcollab.org/

Even though the event is six months away, we are on a tight schedule with 
regards to the Call For Participation (CFP):

https://www.apachecon.com/acna18/schedule.html

If you are interested in submitting a talk, please do so before March 30th.

That being said, as usual, we will have need of a small committee to sort 
through these presentation submissions.

If you are interested in helping out in this process, please reply to this 
message.

Thanks!
Mike


Re: CFP- Cloudstack collab conference 2018

2018-03-20 Thread Tutkowski, Mike
Hi Giles,

Yes, I can run the panel this year to sift through presentation submissions.

Talk to you later!
Mike

> On Mar 19, 2018, at 4:19 AM, Giles Sirett  wrote:
> 
> All
> CFP - CLOUDSTACK COLLAB 2018 - DEADLINE 30 MARCH
> 
> We have an agreement in principle to co-locate this year's CloudStack 
> conference with ApacheCon
> Montreal  24-25 September
> https://www.apachecon.com/
> As last year in Miami, we will be having a "conference in a conference" - so 
> tickets will allow people to experience the whole Apachecon event
> 
> There are still some details to work out (on the name, exact format and 
> exactly what facilities we can have), but the key thing right now is to get 
> talks submitted. We're using the ApacheCon CFP:
> https://www.apachecon.com/acna18/
> 
> I apologise for the short notice, but the CFP closes on March 30.
> There is no way to tell the CFP that you are submitting something 
> specifically for Cloudstack - so please tag your talk title with 
> "[Cloudstack]:talk name"
> 
> There is lots to do around sponsorship, exact format, etc., but the key 
> thing at this stage is to get plenty of talks submitted
> 
> Other things worth considering:
> 
>  1.  We will need a panel to sift/select talks. Wido led on this last year. 
> MikeT, do you want to lead this year?
>  2.  It will be important that we encourage interested companies to help 
> sponsor the event. Anybody who think their company may sponsor, please ping 
> Kevin A. McGrail kmcgr...@apache.org
> 
> Kind regards
> Giles
> 
> 
> giles.sir...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> 
> 
> 


Re: New committer: Dag Sonstebo

2018-03-20 Thread Tutkowski, Mike
Congratulations, Dag!

> On Mar 20, 2018, at 7:59 AM, John Kinsella  wrote:
> 
> The Project Management Committee (PMC) for Apache CloudStack has
> invited Dag Sonstebo to become a committer and we are pleased to
> announce that he has accepted.
> 
> I’ll take a moment here to remind folks that being an ASF committer
> isn’t purely about code - Dag has been helping out for quite a while
> on users@, and seems to have a strong interest around ACS and the
> community. We welcome this activity, and encourage others to help
> out as they can - it doesn’t necessarily have to be purely code-related.
> 
> Being a committer enables easier contribution to the project since
> there is no need to go via the patch submission process. This should
> enable better productivity.
> 
> Please join me in welcoming Dag!
> 
> John


Re: Introductions

2018-10-14 Thread Tutkowski, Mike
Welcome to the CloudStack Community, Anurag! I’m Mike Tutkowski. I work at 
NetApp SolidFire, have been in the CloudStack Community for about six years now, 
and currently am serving as the Chair of the CloudStack PMC (Project Management 
Committee).

On 10/14/18, 11:29 PM, "Anurag Awasthi"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.




Hello Everyone!

It is with great pleasure that I would like to introduce myself to this 
community. I am Anurag and have recently joined ShapeBlue to work on 
CloudStack. Formerly, I have worked at companies like Microsoft and Twitter, 
and I am really looking forward to future collaborations and learnings from all 
of you.

Thank you,
Best Regards,
Anurag Awasthi



anurag.awas...@shapeblue.com
www.shapeblue.com
,
@shapeblue







Re: CloudStack Collab in Brazil

2018-10-22 Thread Tutkowski, Mike
Hi Rafael,

Do you have a specific date in mind for CCC Brazil? It sounds like, in general, 
we are looking at April.

Thanks!
Mike

On 10/1/18, 12:51 PM, "Rafael Weingärtner"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.




Yes, that is what I also believe. From the feedback, I think we can easily
use 10 presentations. I will move on with the organization. I think it is
feasible to get more room space in case we receive more presentations and
people. I will try not to overlap presentations though (like we did in
ApacheCon).

On Mon, Oct 1, 2018 at 3:36 PM Tutkowski, Mike 
wrote:

> I guess it depends on how many people expect to be able to attend.
>
> Ten presentation slots is probably a good starting point.
>
> Get Outlook for iOS<https://aka.ms/o0ukef>
> 
> From: Rafael Weingärtner 
> Sent: Monday, October 1, 2018 10:10:55 AM
> To: users
> Cc: dev
> Subject: Re: CloudStack Collab in Brazil
>
> NetApp Security WARNING: This is an external email. Do not click links or
> open attachments unless you recognize the sender and know the content is
> safe.
>
>
>
>
> Thank you guys for the feedback!
>
> I will reach out the organizers to discuss our requirements. What do you
> guys think that we need?
> Would 10 presentation slots (50min. each) be enough? Or, do you guys think
> that we need more?
>
> Also, I think that we should do a Hackathon. Therefore, I will also 
be
> asking for a room such as the one we used in Montreal.
>
> On Mon, Oct 1, 2018 at 12:03 PM Nicolas Vazquez <
> nicolas.vazq...@shapeblue.com> wrote:
>
> > I would be interested in an event in Brazil as well.
> >
> >
> > Regards,
> >
> > Nicolas Vazquez
> >
> > 
> > From: Gabriel Beims Bräscher 
> > Sent: Monday, October 1, 2018 11:58:07 AM
> > To: users
> > Cc: dev
> > Subject: Re: CloudStack Collab in Brazil
> >
> > As a Brazilian that lives in Florianópolis, I cannot pass up this
> opportunity
> > ;)
> > Count on me!
> >
> > On Mon, Oct 1, 2018 at 11:27, Tutkowski, Mike <
> > mike.tutkow...@netapp.com> wrote:
> >
> > > I would be really interested in an event in Brazil.
> > >
> > > 
> > > From: Rafael Weingärtner 
> > > Sent: Monday, October 1, 2018 5:38 AM
> > > To: users
> > > Cc: dev
> > > Subject: Re: CloudStack Collab in Brazil
> > >
> > > NetApp Security WARNING: This is an external email. Do not click links
> or
> > > open attachments unless you recognize the sender and know the content
> is
> > > safe.
> > >
> > >
> > >
> > >
> > > Hey Marco,
> > > Yes, they run a very successful conference every year. I have just got
> > back
> > > from Montreal, and I talked with people there regarding the 
conference.
> > >
> > > Now, for all CloudStackers (users and devs); I will repeat what I said
> in
> > > Montreal. The TDC conference will happen with or without us. 
Therefore,
> > we
> > > only need to decide if we will join them in their Cloud tracks. We did
> > not
> > > hear much feedback here, but I will try again.
> > >
> > > If you are part of the CloudStack community (as a contributor,
> committer,
> > > user, operator, and so on), please do provide your feedback. Would you
> > like
> > > to see a CloudStack Collab Conference in Florianopolis, Brazil, 2019? 
I
> > am
> > > only asking you guys, what you think. I do understand the logistics
> > > problems for some folks to attend a conference this far.
> > >
> > > Now, about the city; the island has an airport (airport code = FLN).
> > > However, most flights to FLN will have a connection either on GRU (Sao
> > > Paulo airport) or GIG (Rio de Janeiro airport); KLM, AA, Delta,
> > AirFrance,
> > > Tap, and others have flights to FLN. I have also found some useful
> links
> > in
> > >

Re: CloudStack Collab in Brazil

2018-10-24 Thread Tutkowski, Mike
Thanks, Rafael!

The dates work for me.

Get Outlook for iOS<https://aka.ms/o0ukef>

From: Rafael Weingärtner 
Sent: Wednesday, October 24, 2018 5:02:14 PM
To: users
Cc: dev
Subject: Re: CloudStack Collab in Brazil

NetApp Security WARNING: This is an external email. Do not click links or open 
attachments unless you recognize the sender and know the content is safe.




Yes, they already have a date set. It should be 23 - 27 April, 2019.
I should be talking with them again this week to check what we need to move
things forward.

What do you guys think about these dates?

On Mon, Oct 22, 2018 at 5:07 PM Tutkowski, Mike 
wrote:

> Hi Rafael,
>
> Do you have a specific date in mind for CCC Brazil? It sounds like, in
> general, we are looking at April.
>
> Thanks!
> Mike
>
> On 10/1/18, 12:51 PM, "Rafael Weingärtner" 
> wrote:
>
> NetApp Security WARNING: This is an external email. Do not click links
> or open attachments unless you recognize the sender and know the content is
> safe.
>
>
>
>
> Yes, that is what I also believe. From the feedback, I think we can
> easily
> use 10 presentations. I will move on with the organization. I think it
> is
> feasible to get more room space in case we receive more presentations
> and
> people. I will try not to overlap presentations though (like we did in
> ApacheCon).
>
> On Mon, Oct 1, 2018 at 3:36 PM Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> wrote:
>
> > I guess it depends on how many people expect to be able to attend.
> >
> > Ten presentation slots is probably a good starting point.
> >
> > Get Outlook for iOS<https://aka.ms/o0ukef>
> > 
> > From: Rafael Weingärtner 
> > Sent: Monday, October 1, 2018 10:10:55 AM
> > To: users
> > Cc: dev
> > Subject: Re: CloudStack Collab in Brazil
> >
> > NetApp Security WARNING: This is an external email. Do not click
> links or
> > open attachments unless you recognize the sender and know the
> content is
> > safe.
> >
> >
> >
> >
> > Thank you guys for the feedback!
> >
> > I will reach out the organizers to discuss our requirements. What do
> you
> > guys think that we need?
> > Would 10 presentation slots (50min. each) be enough? Or, do you guys
> think
> > that we need more?
> >
> > Also, I think that we should also do a Hackathon. Therefore, I will
> also be
> > asking for a room such as the one we used in Montreal.
> >
> > On Mon, Oct 1, 2018 at 12:03 PM Nicolas Vazquez <
> > nicolas.vazq...@shapeblue.com> wrote:
> >
> > > I would be interested in an event in Brazil as well.
> > >
> > >
> > > Regards,
> > >
> > > Nicolas Vazquez
> > >
> > > ________
> > > From: Gabriel Beims Bräscher 
> > > Sent: Monday, October 1, 2018 11:58:07 AM
> > > To: users
> > > Cc: dev
> > > Subject: Re: CloudStack Collab in Brazil
> > >
> > > As a Brazilian that lives in Florianópolis, I cannot pass up this
> > opportunity
> > > ;)
> > > Count on me!
> > >
> > > On Mon, Oct 1, 2018 at 11:27, Tutkowski, Mike <
> > > mike.tutkow...@netapp.com> wrote:
> > >
> > > > I would be really interested in an event in Brazil.
> > > >
> > > > 
> > > > From: Rafael Weingärtner 
> > > > Sent: Monday, October 1, 2018 5:38 AM
> > > > To: users
> > > > Cc: dev
> > > > Subject: Re: CloudStack Collab in Brazil
> > > >
> > > > NetApp Security WARNING: This is an external email. Do not click
> links
> > or
> > > > open attachments unless you recognize the sender and know the
> content
> > is
> > > > safe.
> > > >
> > > >
> > > >
> > > >
> > > > Hey Marco,
> > > > Yes, they run a very successful conference every year. I have
> just got
> > > back
> > > > from Montreal, and I talked with people there regarding the
> conference.
> > > >
> > > > Now, for all CloudStackers (users and devs); I will repe

Re: Exception in MS on master

2018-11-12 Thread Tutkowski, Mike
Thanks, Rohit!

I must have sent this e-mail while I was in Montréal at CCC. I had forgotten 
that I sent this out. :)


From: Rohit Yadav 
Sent: Sunday, November 11, 2018 11:26:38 PM
To: dev@cloudstack.apache.org
Subject: Re: Exception in MS on master

NetApp Security WARNING: This is an external email. Do not click links or open 
attachments unless you recognize the sender and know the content is safe.




Hi Mike,


I came across this as well, looks like a regression on master. I've opened a 
bug: https://github.com/apache/cloudstack/issues/2935


- Rohit

<https://cloudstack.apache.org>




From: Tutkowski, Mike 
Sent: Wednesday, September 26, 2018 1:58:51 AM
To: dev@cloudstack.apache.org
Subject: Exception in MS on master

Hi everyone,

I was building a new cloud and came across the following exception when running 
the management server (below).

Any thoughts on what this is about?

Thanks!
Mike

WARN  [o.a.c.e.o.NetworkOrchestrator] (Network-Scavenger-1:ctx-8cd291c0) (logid:4f70c744) Caught exception while running network gc:
com.cloud.utils.exception.CloudRuntimeException: Caught: com.mysql.jdbc.JDBC4PreparedStatement@6ffe01cc: SELECT networks.id FROM networks  INNER JOIN network_offerings ON networks.network_offering_id=network_offerings.id  INNER JOIN op_networks ON networks.id=op_networks.id WHERE networks.removed IS NULL  AND  (op_networks.nics_count = ** NOT SPECIFIED **  AND op_networks.gc = ** NOT SPECIFIED **  AND op_networks.check_for_gc = ** NOT SPECIFIED ** )
    at com.cloud.utils.db.GenericDaoBase.customSearchIncludingRemoved(GenericDaoBase.java:507)
    at com.cloud.utils.db.GenericDaoBase.customSearch(GenericDaoBase.java:518)
    at com.cloud.network.dao.NetworkDaoImpl.findNetworksToGarbageCollect(NetworkDaoImpl.java:461)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:338)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
    at com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:174)
    at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
    at com.sun.proxy.$Proxy96.findNetworksToGarbageCollect(Unknown Source)
    at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$NetworkGarbageCollector.reallyRun(NetworkOrchestrator.java:2761)
    at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$NetworkGarbageCollector.runInContext(NetworkOrchestrator.java:2745)
    at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
    at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at com.cloud.utils.db.GenericDaoBase.prepareAttribute(GenericDaoBase.java:1519)
    at com.cloud.utils.db.GenericDaoBase.addJoinAttributes(GenericDaoBase.java:774)
    at com.cloud.utils.db.GenericDaoBase.customSearchIncludingRemoved(GenericDaoBase.java:476)
    ... 29 more
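
In case it helps whoever digs into this: below is a minimal, self-contained Java sketch -- not CloudStack's actual GenericDaoBase code; the class and the registry map are made up for illustration -- of the failure mode the "Caused by" section suggests, where a search condition whose attribute was never resolved shows up as "** NOT SPECIFIED **" in the generated SQL and then triggers a NullPointerException when the statement parameters are prepared.

    import java.util.HashMap;
    import java.util.Map;

    public class UnsetJoinAttributeSketch {

        // Hypothetical registry mapping column names to their SQL types; a missing
        // entry plays the role of the join attribute that could not be resolved.
        private static final Map<String, Integer> ATTRIBUTE_TYPES = new HashMap<>();

        // Pretend to bind one condition of the WHERE clause.
        private static void prepareAttribute(String column, Object value) {
            Integer sqlType = ATTRIBUTE_TYPES.get(column); // null for unknown columns
            // Unboxing a null Integer is a classic source of a bare NullPointerException.
            int type = sqlType;
            System.out.printf("bound %s (sql type %d) = %s%n", column, type, value);
        }

        public static void main(String[] args) {
            ATTRIBUTE_TYPES.put("op_networks.nics_count", java.sql.Types.BIGINT);

            prepareAttribute("op_networks.nics_count", 0L);   // works: attribute registered
            prepareAttribute("op_networks.check_for_gc", 1L); // throws NPE: never registered
        }
    }

Running it, the first call binds fine and the second fails with the same kind of bare NullPointerException as the trace above; the actual fix would of course be in how the search criteria are set before the query is built.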


rohit.ya...@shapeblue.com
www.shapeblue.com<http://www.shapeblue.com>
Amadeus House, Floral Street, 

Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-14 Thread Tutkowski, Mike
The VR’s IP address was not being reported on the main GUI window, but I found 
it in the VR details (and also via ifconfig on the VR itself).

I can ping both ways successfully: From the management server to the VR and 
vice versa.
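
As a side note for anyone debugging something similar: assuming the usual system VM setup (sshd listening on port 3922 and the management server key at /var/lib/cloudstack/management/.ssh/id_rsa -- paths and ports may differ in your install), something like "ssh -i /var/lib/cloudstack/management/.ssh/id_rsa -p 3922 root@<VR IP>" run from the management server is a quick way to confirm the SSH path Paul mentions below.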

On 11/14/18, 3:52 PM, "Tutkowski, Mike"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.




The VR doesn’t show as having an IP address.

On 11/14/18, 3:50 PM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links 
or open attachments unless you recognize the sender and know the content is 
safe.




Mgmt. server should ssh into VR.
I'll fire a similar build in our lab.


paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-
        From: Tutkowski, Mike 
Sent: 14 November 2018 22:41
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

Right, I’ve compiled and run with -Dnoredist. The SSVM and CPVM come up 
fine. They both show the VM and agent running. The VR boots up. I can see it at 
the login prompt in vSphere Client. I don’t see any obvious errors in 
cloud.log. Maybe a port is blocked and it can’t talk to the management server?

On 11/14/18, 3:10 PM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click 
links or open attachments unless you recognize the sender and know the content 
is safe.




I've tested in an ubuntu16 basic zone but not a vmware basic zone - I 
guess it goes without saying that you know to use a nonoss build for VMware... 
have you used the 6.5 SDK?
I think that it always says requires upgrade until the VR checks in.
Have the SSVM and CPVM checked in ok?  If you go on the VR through 
the vCenter console what do you see?

paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-----
    From: Tutkowski, Mike 
Sent: 14 November 2018 21:16
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

I’m having a hard time getting the VR with vSphere to come up 
successfully.

I built the code from source (the specified commit) and am using 
the system VM template specified in this e-mail chain.

I’m running in a Basic Zone. When I look at the details for the VR 
in the GUI, it says Requires Upgrade is Yes. When I click the button to upgrade 
the template, the operation fails.

Thoughts?

On 11/13/18, 6:59 AM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not 
click links or open attachments unless you recognize the sender and know the 
content is safe.




Hi All,

I've created a 4.11.2.0 release (RC5), with the following 
artefacts up for testing and a vote:


Git Branch and Commit SH:

https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181113T0924
Commit: 5aae410dfce2bef5cc21a0892370cb5d0628f681

Source release (checksums and signatures are available at the 
same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/

PGP release keys (signed using 51EE0BC8):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open for 72 hours - until 14:00 GMT on Friday 
16th Nov.

For sanity in tallying the vote, can PMC members please be sure 
to indicate "(binding)" with their vote?

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Additional information:

For users' convenience, I've built packages from 
5aae410dfce2bef5cc21a0892370cb5d0628f681 and published RC5 repository here:
http://packages.shapeblue.com/testing/41120rc5/

The release notes are still work-in-progress, but the systemvm 
template upgrade section has been updated.

4.11.2.0 systemvm templates are available from here:
http://packages.shapeblue.com/testing/systemvm/41120rc5/

   

Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-14 Thread Tutkowski, Mike
Right, I’ve compiled and run with -Dnoredist. The SSVM and CPVM come up fine. 
They both show the VM and agent running. The VR boots up. I can see it at the 
login prompt in vSphere Client. I don’t see any obvious errors in cloud.log. 
Maybe a port is blocked and it can’t talk to the management server?

On 11/14/18, 3:10 PM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.




I've tested in an ubuntu16 basic zone but not a vmware basic zone - I guess 
that it goes without saying that you know to use a nonoss build for VMware... have 
you used the 6.5 SDK?
I think that it always says requires upgrade until the VR checks in.
Have the SSVM and CPVM checked in ok?  If you go on the VR through the 
vCenter console what do you see?

paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-
From: Tutkowski, Mike 
Sent: 14 November 2018 21:16
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

I’m having a hard time getting the VR with vSphere to come up successfully.

I built the code from source (the specified commit) and am using the system 
VM template specified in this e-mail chain.

I’m running in a Basic Zone. When I look at the details for the VR in the 
GUI, it says Requires Upgrade is Yes. When I click the button to upgrade the 
template, the operation fails.

Thoughts?

On 11/13/18, 6:59 AM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links 
or open attachments unless you recognize the sender and know the content is 
safe.




Hi All,

I've created a 4.11.2.0 release (RC5), with the following artefacts up 
for testing and a vote:


Git Branch and Commit SH:

https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181113T0924
Commit: 5aae410dfce2bef5cc21a0892370cb5d0628f681

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/

PGP release keys (signed using 51EE0BC8):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open for 72 hours - until 14:00 GMT on Friday 16th Nov.

For sanity in tallying the vote, can PMC members please be sure to 
indicate "(binding)" with their vote?

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Additional information:

For users' convenience, I've built packages from 
5aae410dfce2bef5cc21a0892370cb5d0628f681 and published RC5 repository here:
http://packages.shapeblue.com/testing/41120rc5/

The release notes are still work-in-progress, but the systemvm template 
upgrade section has been updated.

4.11.2.0 systemvm templates are available from here:
http://packages.shapeblue.com/testing/systemvm/41120rc5/

Only the following changes have been added to RC5:


+----------+----------+------+----------+------------------------------------------------------------+
| Version  | Github   | Type | Priority | Description                                                |
+==========+==========+======+==========+============================================================+
| 4.11.2.0 | `#3018`_ |      |          | Prevent error on GroupAnswers on VR creation               |
+----------+----------+------+----------+------------------------------------------------------------+
| 4.11.2.0 | `#3007`_ |      |          | Add missing ConfigDrive entries on existing zones after    |
|          |          |      |          | upgrade                                                    |
+----------+----------+------+----------+------------------------------------------------------------+
| 4.11.2.0 | `#2980`_ |      |          | [4.11] Fix set initial reservation on public IP ranges     |
+----------+----------+------+----------+------------------------------------------------------------+
| 4.11.2.0 | `#3010`_ |      |          | Fix DirectNetworkGuru canHandle checks for lowercase       |
|          |          |      |          | isolation methods                                          |
+----------+----------+------+----------+------------------------------------------------------------+

Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-14 Thread Tutkowski, Mike
The VR doesn’t show as having an IP address.

On 11/14/18, 3:50 PM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.




Mgmt. server should ssh into VR.
I'll fire a similar build in our lab.


paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-
From: Tutkowski, Mike 
Sent: 14 November 2018 22:41
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

Right, I’ve compiled and run with -Dnoredist. The SSVM and CPVM come up 
fine. They both show the VM and agent running. The VR boots up. I can see it at 
the login prompt in vSphere Client. I don’t see any obvious errors in 
cloud.log. Maybe a port is blocked and it can’t talk to the management server?

On 11/14/18, 3:10 PM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links 
or open attachments unless you recognize the sender and know the content is 
safe.




I've tested in an ubuntu16 basic zone but not a vmware basic zone - I 
guess it goes without saying that you know to use a nonoss build for VMware... 
have you used the 6.5 SDK?
I think that it always says requires upgrade until the VR checks in.
Have the SSVM and CPVM checked in ok?  If you go on the VR through the 
vCenter console what do you see?

paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-
        From: Tutkowski, Mike 
Sent: 14 November 2018 21:16
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

I’m having a hard time getting the VR with vSphere to come up 
successfully.

I built the code from source (the specified commit) and am using the 
system VM template specified in this e-mail chain.

I’m running in a Basic Zone. When I look at the details for the VR in 
the GUI, it says Requires Upgrade is Yes. When I click the button to upgrade 
the template, the operation fails.

Thoughts?

On 11/13/18, 6:59 AM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click 
links or open attachments unless you recognize the sender and know the content 
is safe.




Hi All,

I've created a 4.11.2.0 release (RC5), with the following artefacts 
up for testing and a vote:


Git Branch and Commit SH:

https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181113T0924
Commit: 5aae410dfce2bef5cc21a0892370cb5d0628f681

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/

PGP release keys (signed using 51EE0BC8):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open for 72 hours - until 14:00 GMT on Friday 16th 
Nov.

For sanity in tallying the vote, can PMC members please be sure to 
indicate "(binding)" with their vote?

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Additional information:

For users' convenience, I've built packages from 
5aae410dfce2bef5cc21a0892370cb5d0628f681 and published RC5 repository here:
http://packages.shapeblue.com/testing/41120rc5/

The release notes are still work-in-progress, but the systemvm 
template upgrade section has been updated.

4.11.2.0 systemvm templates are available from here:
http://packages.shapeblue.com/testing/systemvm/41120rc5/

Only the following changes have been added to RC5:


+----------+----------+------+----------+------------------------------------------------------------+
| Version  | Github   | Type | Priority | Description                                                |
+==========+==========+======+==========+============================================================+
| 4.11.2.0 | `#3018`_ |      |          | Prev

Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-14 Thread Tutkowski, Mike
I should note that I’m running vSphere 5.5, by the way. I believe that’s still 
supported in CS 4.11.

On 11/14/18, 3:52 PM, "Tutkowski, Mike"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.




The VR doesn’t show as having an IP address.

On 11/14/18, 3:50 PM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links 
or open attachments unless you recognize the sender and know the content is 
safe.




Mgmt. server should ssh into VR.
I'll fire a similar build in our lab.


paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-
        From: Tutkowski, Mike 
Sent: 14 November 2018 22:41
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

Right, I’ve compiled and run with -Dnoredist. The SSVM and CPVM come up 
fine. They both show the VM and agent running. The VR boots up. I can see it at 
the login prompt in vSphere Client. I don’t see any obvious errors in 
cloud.log. Maybe a port is blocked and it can’t talk to the management server?

On 11/14/18, 3:10 PM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click 
links or open attachments unless you recognize the sender and know the content 
is safe.




I've tested in an ubuntu16 basic zone but not a vmware basic zone - I 
guess it goes without saying that you know to use a nonoss build for VMware... 
have you used the 6.5 SDK?
I think that it always says requires upgrade until the VR checks in.
Have the SSVM and CPVM checked in ok?  If you go on the VR through 
the vCenter console what do you see?

paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-----
    From: Tutkowski, Mike 
Sent: 14 November 2018 21:16
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

I’m having a hard time getting the VR with vSphere to come up 
successfully.

I built the code from source (the specified commit) and am using 
the system VM template specified in this e-mail chain.

I’m running in a Basic Zone. When I look at the details for the VR 
in the GUI, it says Requires Upgrade is Yes. When I click the button to upgrade 
the template, the operation fails.

Thoughts?

On 11/13/18, 6:59 AM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not 
click links or open attachments unless you recognize the sender and know the 
content is safe.




Hi All,

I've created a 4.11.2.0 release (RC5), with the following 
artefacts up for testing and a vote:


Git Branch and Commit SH:

https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181113T0924
Commit: 5aae410dfce2bef5cc21a0892370cb5d0628f681

Source release (checksums and signatures are available at the 
same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/

PGP release keys (signed using 51EE0BC8):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open for 72 hours - until 14:00 GMT on Friday 
16th Nov.

For sanity in tallying the vote, can PMC members please be sure 
to indicate "(binding)" with their vote?

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Additional information:

For users' convenience, I've built packages from 
5aae410dfce2bef5cc21a0892370cb5d0628f681 and published RC5 repository here:
http://packages.shapeblue.com/testing/41120rc5/

The release notes are still work-in-progress, but the systemvm 
template upgrade section has been updated.

4.11.2.0 systemvm templates are available from here:
http://packages.shapeblue.com/testing/systemvm/41120rc5/

   

Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-14 Thread Tutkowski, Mike
Thanks, Rohit!



From: Rohit Yadav 
Sent: Wednesday, November 14, 2018 9:07 PM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

NetApp Security WARNING: This is an external email. Do not click links or open 
attachments unless you recognize the sender and know the content is safe.




Hi Mike, Paul, everyone,


I tested the same on a 4.9.3.1 based VMware 5.5u3 + svs + basic zone and could 
see the same behaviour. Therefore, it's not a regression but a limitation from 
the past. Basic zone provides L3 isolation by means of security groups 
(host-level firewall), which are not supported for VMware. I think nobody 
reported this in the past because nobody uses VMware + basic zone. I've opened an 
issue for it: https://github.com/apache/cloudstack/issues/3031


Let's continue testing and voting for RC5, and let's aim to fix for this 
limitation in future 4.11.3+, 4.12.0+.


- Rohit

<https://cloudstack.apache.org>




From: Tutkowski, Mike 
Sent: Thursday, November 15, 2018 4:24:46 AM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

I should note that I’m running vSphere 5.5, by the way. I believe that’s still 
supported in CS 4.11.

On 11/14/18, 3:52 PM, "Tutkowski, Mike"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or open 
attachments unless you recognize the sender and know the content is safe.




The VR doesn’t show as having an IP address.

On 11/14/18, 3:50 PM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or open 
attachments unless you recognize the sender and know the content is safe.




Mgmt. server should ssh into VR.
I'll fire a similar build in our lab.


paul.an...@shapeblue.com
www.shapeblue.com<http://www.shapeblue.com>
Amadeus House, Floral Street, London WC2E 9DPUK
@shapeblue




-Original Message-
From: Tutkowski, Mike 
Sent: 14 November 2018 22:41
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

Right, I’ve compiled and run with -Dnoredist. The SSVM and CPVM come up fine. 
They both show the VM and agent running. The VR boots up. I can see it at the 
login prompt in vSphere Client. I don’t see any obvious errors in cloud.log. 
Maybe a port is blocked and it can’t talk to the management server?

On 11/14/18, 3:10 PM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or open 
attachments unless you recognize the sender and know the content is safe.




I've tested in an ubuntu16 basic zone but not a vmware basic zone - I guess that 
it goes without saying that you know to use a nonoss build for VMware... have you 
used the 6.5 SDK?
I think that it always says requires upgrade until the VR checks in.
Have the SSVM and CPVM checked in ok? If you go on the VR through the vCenter 
console what do you see?

paul.an...@shapeblue.com
www.shapeblue.com<http://www.shapeblue.com>
Amadeus House, Floral Street, London WC2E 9DPUK
@shapeblue




-Original Message-
From: Tutkowski, Mike 
Sent: 14 November 2018 21:16
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

I’m having a hard time getting the VR with vSphere to come up successfully.

I built the code from source (the specified commit) and am using the system VM 
template specified in this e-mail chain.

I’m running in a Basic Zone. When I look at the details for the VR in the GUI, 
it says Requires Upgrade is Yes. When I click the button to upgrade the 
template, the operation fails.

Thoughts?

On 11/13/18, 6:59 AM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or open 
attachments unless you recognize the sender and know the content is safe.




Hi All,

I've created a 4.11.2.0 release (RC5), with the following artefacts up for 
testing and a vote:


Git Branch and Commit SH:
https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181113T0924
Commit: 5aae410dfce2bef5cc21a0892370cb5d0628f681

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/

PGP release keys (signed using 51EE0BC8):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open for 72 hours - until 14:00 GMT on Friday 16th Nov.

For sanity in tallying the vote, can PMC members please be sure to indicate 
"(binding)" with their vote?

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Additional information:

For users' convenience, I've built packages from 
5aae410dfce2bef5cc21a0892370cb5d0628f681 and published RC5 repository here:
http://packages.shapeblue

Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-14 Thread Tutkowski, Mike
I’m having a hard time getting the VR with vSphere to come up successfully.

I built the code from source (the specified commit) and am using the system VM 
template specified in this e-mail chain.

I’m running in a Basic Zone. When I look at the details for the VR in the GUI, 
it says Requires Upgrade is Yes. When I click the button to upgrade the 
template, the operation fails.

Thoughts?

On 11/13/18, 6:59 AM, "Paul Angus"  wrote:

NetApp Security WARNING: This is an external email. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.




Hi All,

I've created a 4.11.2.0 release (RC5), with the following artefacts up for 
testing and a vote:


Git Branch and Commit SH:

https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181113T0924
Commit: 5aae410dfce2bef5cc21a0892370cb5d0628f681

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/

PGP release keys (signed using 51EE0BC8):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open for 72 hours - until 14:00 GMT on Friday 16th Nov.

For sanity in tallying the vote, can PMC members please be sure to indicate 
"(binding)" with their vote?

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Additional information:

For users' convenience, I've built packages from 
5aae410dfce2bef5cc21a0892370cb5d0628f681 and published RC5 repository here:
http://packages.shapeblue.com/testing/41120rc5/

The release notes are still work-in-progress, but the systemvm template 
upgrade section has been updated.

4.11.2.0 systemvm templates are available from here:
http://packages.shapeblue.com/testing/systemvm/41120rc5/

Only the following changes have been added to RC5:


+----------+----------+------+----------+------------------------------------------------------------+
| Version  | Github   | Type | Priority | Description                                                |
+==========+==========+======+==========+============================================================+
| 4.11.2.0 | `#3018`_ |      |          | Prevent error on GroupAnswers on VR creation               |
+----------+----------+------+----------+------------------------------------------------------------+
| 4.11.2.0 | `#3007`_ |      |          | Add missing ConfigDrive entries on existing zones after    |
|          |          |      |          | upgrade                                                    |
+----------+----------+------+----------+------------------------------------------------------------+
| 4.11.2.0 | `#2980`_ |      |          | [4.11] Fix set initial reservation on public IP ranges     |
+----------+----------+------+----------+------------------------------------------------------------+
| 4.11.2.0 | `#3010`_ |      |          | Fix DirectNetworkGuru canHandle checks for lowercase       |
|          |          |      |          | isolation methods                                          |
+----------+----------+------+----------+------------------------------------------------------------+

.. _`#3012`: https://github.com/apache/cloudstack/pull/3012
.. _`#3018`: https://github.com/apache/cloudstack/pull/3018
.. _`#3007`: https://github.com/apache/cloudstack/pull/3007
.. _`#2980`: https://github.com/apache/cloudstack/pull/2980
.. _`#3010`: https://github.com/apache/cloudstack/pull/3010



Kind regards,

Paul Angus


paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue







Re: Montréal Hackathon

2018-10-02 Thread Tutkowski, Mike
Yes, they are in Trello and thanks to Bobby for putting them there. :)

From: Boris Stoyanov 
Sent: Tuesday, October 2, 2018 12:59:13 AM
To: dev@cloudstack.apache.org
Subject: Re: Montréal Hackathon

Hi guys,

I think we’ve got most of those in Trello, here’s a link 
https://trello.com/invite/b/ztgaumtx/fd51a88ce97b2de02201ed52e99384b9/cloudstack-hackathon-18

Maybe having those ideas there would be easier to manage, + people can comment, 
add details etc to each individually.

Bobby.


boris.stoya...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue



On 2 Oct 2018, at 4:51, Simon Weller <swel...@ena.com.INVALID> wrote:

Mike,


I've got a PR in for the KVM HyperV Enlightenment feature against master. It 
looks like Jenkins is broken right now,  so might need someone to kick it.


- Si

____
From: Tutkowski, Mike <mike.tutkow...@netapp.com>
Sent: Monday, October 1, 2018 1:28 PM
To: dev@cloudstack.apache.org
Subject: Montréal Hackathon

Hi everyone,

I wanted to send out an e-mail about the hackathon that we held in Montréal 
this past Wednesday (after the two days of the CloudStack Collaboration 
Conference that took place on Monday and Tuesday).

We spent the first 1.5 hours discussing issues we’d like to see addressed 
and/or new features we might be considering. I’ve provided the current list at 
the bottom of this message.

In particular, one item of note is that people seemed interested in quarterly 
remote meetups. The intent of such meetups would be to sync with each other on 
what we’re working on so as to not duplicate effort. We may also have people 
present a bit about a recent feature or item of interest (similar to what we do 
at conferences). In addition, these meetups could provide a nice checkpoint to 
see how we are doing with regards to the items listed below.

Please take a moment, scan through the list, ask questions, and/or send out 
additional areas that you feel the CloudStack Community should be focusing on.

If you were present at the hackathon, feel free to update us on what progress 
you might have made at the hackathon with regards to any topic below.

Thanks!
Mike

Hyper-V enlightenment

Version 5.x of CloudStack

KVM IO bursting

Live VM Migration

RPC Standard interface to VR

Getting INFO easily out of the SSVM

Deprecate old code (OVM?)

CloudMonkey testing

NoVNC in CPVM

CentOS SIG + packaging

VR Programming Optimization

New UI working with API Discovery

Network Models refactoring + designer UI

Marketing Plan

Video series for CloudStack (ex. developers series, users series)

Use GitHub to document aspects of CloudStack (how to build an environment, how 
to start writing code for it, etc.)

Figure out a process for how we'd like issues to be opened, assigned, closed, 
and resolved (using JIRA and GitHub Issues)

Create a true REST API (it can use the existing API behind the scenes).

Logic to generate code in particular use cases so you can focus mainly on your 
business logic.

Use standard libraries that implement JPA, HTTP, etc.

Remote Meetups every quarter

Support IPv6




Re: Montréal Hackathon

2018-10-02 Thread Tutkowski, Mike
Thanks, Simon!

Is anyone able to help Simon with this Jenkins issue?


From: Simon Weller 
Sent: Monday, October 1, 2018 7:52 PM
To: dev@cloudstack.apache.org
Subject: Re: Montréal Hackathon

Mike,


I've got a PR in for the KVM HyperV Enlightenment feature against master. It 
looks like Jenkins is broken right now, so might need someone to kick it.


- Si


From: Tutkowski, Mike 
Sent: Monday, October 1, 2018 1:28 PM
To: dev@cloudstack.apache.org
Subject: Montréal Hackathon

Hi everyone,

I wanted to send out an e-mail about the hackathon that we held in Montréal 
this past Wednesday (after the two days of the CloudStack Collaboration 
Conference that took place on Monday and Tuesday).

We spent the first 1.5 hours discussing issues we’d like to see addressed 
and/or new features we might be considering. I’ve provided the current list at 
the bottom of this message.

In particular, one item of note is that people seemed interested in quarterly 
remote meetups. The intent of such meetups would be to sync with each other on 
what we’re working on so as to not duplicate effort. We may also have people 
present a bit about a recent feature or item of interest (similar to what we do 
at conferences). In addition, these meetups could provide a nice checkpoint to 
see how we are doing with regards to the items listed below.

Please take a moment, scan through the list, ask questions, and/or send out 
additional areas that you feel the CloudStack Community should be focusing on.

If you were present at the hackathon, feel free to update us on what progress 
you might have made at the hackathon with regards to any topic below.

Thanks!
Mike

Hyper-V enlightenment

Version 5.x of CloudStack

KVM IO bursting

Live VM Migration

RPC Standard interface to VR

Getting INFO easily out of the SSVM

Deprecate old code (OVM?)

CloudMonkey testing

NoVNC in CPVM

CentOS SIG + packaging

VR Programming Optimization

New UI working with API Discovery

Network Models refactoring + designer UI

Marketing Plan

Video series for CloudStack (ex. developers series, users series)

Use GitHub to document aspects of CloudStack (how to build an environment, how 
to start writing code for it, etc.)

Figure out a process for how we'd like issues to be opened, assigned, closed, 
and resolved (using JIRA and GitHub Issues)

Create a true REST API (it can use the existing API behind the scenes).

Logic to generate code in particular use cases so you can focus mainly on your 
business logic.

Use standard libraries that implement JPA, HTTP, etc.

Remote Meetups every quarter

Support IPv6



Re: Marketing page update

2018-10-10 Thread Tutkowski, Mike
I kind of like the idea of a redirect.

On 10/10/18, 12:20 PM, "Rafael Weingärtner"  wrote:

Can I delete the wiki users page then? Or, should I replace it with a
redirect to the cloustack's website?

On Wed, Oct 10, 2018 at 3:14 PM Tutkowski, Mike 
wrote:

> It definitely seems like it would be best to just have this information in
> one place so we don’t have to update two places whenever a change is 
needed.
>
> On 10/10/18, 12:10 PM, "Rafael Weingärtner" 
> wrote:
>
>
> What if we remove the wiki page of users? I think it makes more sense
> to
> use only the one from cloudstack.apache.org.
>
> On Wed, Oct 10, 2018 at 3:07 PM Andrija Panic  >
> wrote:
>
> > Perhaps I'm wrong but
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30744222
> > has a VERY short,brief list of "users" while
> > https://cloudstack.apache.org/users.html (and
> >
> https://www.shapeblue.com/are-these-people-really-all-using-cloudstack/ )
> > -
> > so I assume its out of date or something
> >
> > Anyway, to do (if you like :) );
> > - remove "Anolim",
> > - make sure HIAG Data AG (www.hiagdata.com) is present
> > - leave Safe Swiss Cloud if already listed (I cant speak for them,
> since
> > initially they stopped being ACS users/owners, but because of
> ownership
> > changes etc etc, might again be engaged with ACS - so can't speak
> for them)
> >
> > Thx Rafael
> >
> > On Wed, 10 Oct 2018 at 19:54, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > Then, I can update the wiki for you.
> > >
> > > What do you mean by original page outdated?
> > >
> > > On Wed, Oct 10, 2018 at 2:51 PM Andrija Panic <
> andrija.pa...@gmail.com>
> > > wrote:
> > >
> > > > Done PR: https://github.com/apache/cloudstack-www/pull/47
> > > >
> > > > As for original page (it seems out of date ???) - I do not have
> WIKI
> > > > access...
> > > >
> > > > Thx
> > > >
> > > > On Wed, 10 Oct 2018 at 19:42, Rafael Weingärtner <
> > > > rafaelweingart...@gmail.com> wrote:
> > > >
> > > > > The source code of the second link is managed here:
> > > > > https://github.com/apache/cloudstack-www
> > > > > You can even open a PR yourself to fix that.
> > > > >
> > > > > The first one, you need wiki write access. I guess, I can give
> it to
> > > you.
> > > > > What is your wiki user name?
> > > > >
> > > > > On Wed, Oct 10, 2018 at 1:23 PM Andrija Panic <
> > andrija.pa...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Actually, I see there is new page
> > > > > > https://cloudstack.apache.org/users.html
> > > > > >
> > > > > > Here please remove Anolim, since this company has been
> RENAMED (3
> > > years
> > > > > > ago) to Safe Swiss Cloud (it's also present on this page).
> > > > > >
> > > > > > I have done Survey, to add HIAG Data AG also to the list.
> > > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > On Wed, 10 Oct 2018 at 18:03, Andrija Panic <
> > andrija.pa...@gmail.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > 

Re: Marketing page update

2018-10-10 Thread Tutkowski, Mike
It definitely seems like it would be best to just have this information in one 
place so we don’t have to update two places whenever a change is needed.

On 10/10/18, 12:10 PM, "Rafael Weingärtner"  wrote:

What if we remove the wiki page of users? I think it makes more sense to
use only the one from cloudstack.apache.org.

On Wed, Oct 10, 2018 at 3:07 PM Andrija Panic 
wrote:

> Perhaps I'm wrong but
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30744222
> has a VERY short,brief list of "users" while
> https://cloudstack.apache.org/users.html (and
> https://www.shapeblue.com/are-these-people-really-all-using-cloudstack/ )
> -
> so I assume its out of date or something
>
> Anyway, to do (if you like :) );
> - remove "Anolim",
> - make sure HIAG Data AG (www.hiagdata.com) is present
> - leave Safe Swiss Cloud if already listed (I cant speak for them, since
> initially they stopped being ACS users/owners, but because of ownership
> changes etc etc, might again be engaged with ACS - so can't speak for 
them)
>
> Thx Rafael
>
> On Wed, 10 Oct 2018 at 19:54, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> > Then, I can update the wiki for you.
> >
> > What do you mean by original page outdated?
> >
> > On Wed, Oct 10, 2018 at 2:51 PM Andrija Panic 
> > wrote:
> >
> > > Done PR: https://github.com/apache/cloudstack-www/pull/47
> > >
> > > As for original page (it seems out of date ???) - I do not have WIKI
> > > access...
> > >
> > > Thx
> > >
> > > On Wed, 10 Oct 2018 at 19:42, Rafael Weingärtner <
> > > rafaelweingart...@gmail.com> wrote:
> > >
> > > > The source code of the second link is managed here:
> > > > https://github.com/apache/cloudstack-www
> > > > You can even open a PR yourself to fix that.
> > > >
> > > > The first one, you need wiki write access. I guess, I can give it to
> > you.
> > > > What is your wiki user name?
> > > >
> > > > On Wed, Oct 10, 2018 at 1:23 PM Andrija Panic <
> andrija.pa...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Actually, I see there is new page
> > > > > https://cloudstack.apache.org/users.html
> > > > >
> > > > > Here please remove Anolim, since this company has been RENAMED (3
> > years
> > > > > ago) to Safe Swiss Cloud (it's also present on this page).
> > > > >
> > > > > I have done Survey, to add HIAG Data AG also to the list.
> > > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > On Wed, 10 Oct 2018 at 18:03, Andrija Panic <
> andrija.pa...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30744222
> > > > > > ("Who uses CloudStack")
> > > > > >
> > > > > > lists "Anolim", which is former name for my company where I
> worked
> > > and
> > > > > > there have been different company changes (company changed name 
a
> > few
> > > > > years
> > > > > > ago...), and that doman doesn't exist any more..
> > > > > >
> > > > > > If someone can update it to "HIAG Data" that would be great. (
> > > > > > www.hiagdata.com)
> > > > > >
> > > > > > Cheers
> > > > > >
> > > > > > --
> > > > > >
> > > > > > Andrija Panić
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Andrija Panić
> > > > >
> > > >
> > > >
> > > > --
> > > > Rafael Weingärtner
> > > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
> --
>
> Andrija Panić
>


--
Rafael Weingärtner




Re: Marketing page update

2018-10-10 Thread Tutkowski, Mike
Thanks, Rafael!

Get Outlook for iOS<https://aka.ms/o0ukef>

From: Rafael Weingärtner 
Sent: Wednesday, October 10, 2018 12:58:27 PM
To: dev
Subject: Re: Marketing page update

Done

On Wed, Oct 10, 2018 at 3:23 PM Tutkowski, Mike 
wrote:

> I kind of like the idea of a redirect.
>
> On 10/10/18, 12:20 PM, "Rafael Weingärtner" 
> wrote:
>
>
> Can I delete the wiki users page then? Or, should I replace it with a
> redirect to the cloustack's website?
>
> On Wed, Oct 10, 2018 at 3:14 PM Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> wrote:
>
> > It definitely seems like it would be best to just have this
> information in
> > one place so we don’t have to update two places whenever a change is
> needed.
> >
> > On 10/10/18, 12:10 PM, "Rafael Weingärtner" <
> rafaelweingart...@gmail.com>
> > wrote:
> >
> >
> > What if we remove the wiki page of users? I think it makes more
> sense
> > to
> > use only the one from cloudstack.apache.org.
> >
> > On Wed, Oct 10, 2018 at 3:07 PM Andrija Panic <
> andrija.pa...@gmail.com
> > >
> > wrote:
> >
> > > Perhaps I'm wrong but
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30744222
> > > has a VERY short,brief list of "users" while
> > > https://cloudstack.apache.org/users.html (and
> > >
> >
> https://www.shapeblue.com/are-these-people-really-all-using-cloudstack/ )
> > > -
> > > so I assume its out of date or something
> > >
> > > Anyway, to do (if you like :) );
> > > - remove "Anolim",
> > > - make sure HIAG Data AG (www.hiagdata.com) is present
> > > - leave Safe Swiss Cloud if already listed (I cant speak for
> them,
> > since
> > > initially they stopped being ACS users/owners, but because of
> > ownership
> > > changes etc etc, might again be engaged with ACS - so can't
> speak
> > for them)
> > >
> > > Thx Rafael
> > >
> > > On Wed, 10 Oct 2018 at 19:54, Rafael Weingärtner <
> > > rafaelweingart...@gmail.com> wrote:
> > >
> > > > Then, I can update the wiki for you.
> > > >
> > > > What do you mean by original page outdated?
> > > >
> > > > On Wed, Oct 10, 2018 at 2:51 PM Andrija Panic <
> > andrija.pa...@gmail.com>
> > > > wrote:
> > > >
> > > > > Done PR: https://github.com/apache/cloudstack-www/pull/47
> > > > >
> > > > > As for original page (it seems out of date ???) - I do not
> have
> > WIKI
> > > > > access...
> > > > >
> > > > > Thx
> > > > >
> > > > > On Wed, 10 Oct 2018 at 19:42, Rafael Weingärtner <
> > > > > rafaelweingart...@gmail.com> wrote:
> > > > >
> > > > > > The source code of the second link is managed here:
> > > > > > https://github.com/apache/cloudstack-www
> > > > > > You can even open a PR yourself to fix that.
> > > > > >
> > > > > > The first one, you need wiki write access. I guess, I
> can give
> > it to
> > > > you.
> > > > > > What is your wiki user name?
> > > > > >
> > > > > > On Wed, Oct 10, 2018 at 1:23 PM Andrija Panic <
> > > andrija.pa...@gmail.com
> > > >

Re: CloudStack Collab in Brazil

2018-10-01 Thread Tutkowski, Mike
I would be really interested in an event in Brazil.


From: Rafael Weingärtner 
Sent: Monday, October 1, 2018 5:38 AM
To: users
Cc: dev
Subject: Re: CloudStack Collab in Brazil

Hey Marco,
Yes, they run a very successful conference every year. I have just got back
from Montreal, and I talked with people there regarding the conference.

Now, for all CloudStackers (users and devs); I will repeat what I said in
Montreal. The TDC conference will happen with or without us. Therefore, we
only need to decide if we will join them in their Cloud tracks. We did not
hear much feedback here, but I will try again.

If you are part of the CloudStack community (as a contributor, committer,
user, operator, and so on), please do provide your feedback. Would you like
to see a CloudStack Collab Conference in Florianopolis, Brazil, 2019? I am
only asking you guys, what you think. I do understand the logistics
problems for some folks to attend a conference this far.

Now, about the city; the island has an airport (airport code = FLN).
However, most flights to FLN will have a connection either on GRU (Sao
Paulo airport) or GIG (Rio de Janeiro airport); KLM, AA, Delta, AirFrance,
Tap, and others have flights to FLN. I have also found some useful links in
English that can be used by your guys to check the city. In this link [1]
you can information not only about the city, but the State as well; there
are pages in different languages such as English, Spanish, and German (to
change the language there is a button in the top-right corner). On these
other links [2-3], you can find a guide (English only) of the city; it
contains a brief overview and some details about Museums, Beaches, events
and so on.

I would also be happy to answer any other question that you might have.

[1] http://turismo.sc.gov.br/en/cidade/florianopolis/#
[2] https://www.floripa-guide.com/attractions/about-florianopolis.html
[3] http://www.vivendofloripa.com.br/en/home/

On Tue, Sep 25, 2018 at 5:42 PM Marco Sinhoreli <
marco.sinhor...@shapeblue.com> wrote:

> Hello Rafael,
>
> I know this conference, last year in TDC Porto Alegre I spoke about ACS
> and ansible.
>
> I was very impressed with their event support and organization, they have a
> nice approach involving community in the organization. They also have a
> good penetration to prospect sponsors.
>
> I am able to help you in this subject since I am in Brazil as well.
>
> Best regards,
>
> Marco Sinhoreli
> marco.sinhor...@shapeblue.com
> mobile: +55 21 98276 3636
>
> Av. Brigadeiro Faria Lima, 3144 - 2º andar – Jardim
> Paulistano, São Paulo, SP, Brasil, 01451-000
> Phone: + 55 11 3568-2877
> http://www.shapeblue.com/ | twitter: @shapeblue
>
> On 21/09/2018 08:37, "Rafael Weingärtner" 
> wrote:
>
> Hello fellow devs and users (pardon me for the cross post),
>
> I already contacted the PMC on this matter, and I am now opening the
> discussion to the whole community. Let’s see what you guys think about
> a
> CloudStack conference in Brazil, and let’s work to make it happen ;)
>
> Since the rather shameful? situation with the CloudStack Collab
> Conference
> (CCC) in Brazil last year, I have been looking for possible ways to
> enable
> the event. And, it seems that I found it, and it is something that we
> are
> already used to do.
>
> There is a conference series in Brazil called TDC (The Developer’s
> conference) [1]. They run three conferences a year (Florianopolis, São
> Paulo, and Porto Alegre). They have been running for over a decade
> now. I
> attended the last one in São Paulo (July 2018) and it was awesome. To
> give
> you guys some numbers for the São Paulo event:
>
> - 4524 attendants
> - 6040 online-viewers (some talks are live streamed)
> - 2927 corporate registrations
> - 63 different tracks, and more than 300 talks.
> - 5 days of conference (Tuesday-Saturday)
>
> They are in the process of internationalizing the conference now. The
> hot
> site is being translated, and they are preparing things for
> English/Spanish
> CFPs and tracks. While talking with one of the organizers during the
> event,
> I mentioned that we have been doing collocated CCC with ApacheCon and
> that
> I would love to see CCC in Brazil. Ant it turns out, they would love
> to see
> us there in Brazil as well. They offered to provide the same support as
> ApacheCon provides us. We would only need to organize the CFP and
> selection of the presentations. We would also need to tell them our
> needs:
> rooms, hackathon spaces, and so on. They are proposing for us space in
> their Florianopolis TDC 2019, which will be held in Florianopolis
> city. The
> event will take place in April 2019 (there is not a possibility for the
> event to be canceled this time!).
>
> I can coordinate this process with them, but I might need some help
> 

Montréal Hackathon

2018-10-01 Thread Tutkowski, Mike
Hi everyone,

I wanted to send out an e-mail about the hackathon that we held in Montréal 
this past Wednesday (after the two days of the CloudStack Collaboration 
Conference that took place on Monday and Tuesday).

We spent the first 1.5 hours discussing issues we’d like to see addressed 
and/or new features we might be considering. I’ve provided the current list at 
the bottom of this message.

In particular, one item of note is that people seemed interested in quarterly 
remote meetups. The intent of such meetups would be to sync with each other on 
what we’re working on so as to not duplicate effort. We may also have people 
present a bit about a recent feature or item of interest (similar to what we do 
at conferences). In addition, these meetups could provide a nice checkpoint to 
see how we are doing with regards to the items listed below.

Please take a moment, scan through the list, ask questions, and/or send out 
additional areas that you feel the CloudStack Community should be focusing on.

If you were present at the hackathon, feel free to update us on what progress 
you might have made at the hackathon with regards to any topic below.

Thanks!
Mike

Hyper-V enlightenment

Version 5.x of CloudStack

KVM IO bursting

Live VM Migration

RPC Standard interface to VR

Getting INFO easily out of the SSVM

Deprecate old code (OVM?)

CloudMonkey testing

NoVNC in CPVM

CentOS SIG + packaging

VR Programming Optimization

New UI working with API Discovery

Network Models refactoring + designer UI

Marketing Plan

Video series for CloudStack (ex. developers series, users series)

Use GitHub to document aspects of CloudStack (how to build an environment, how 
to start writing code for it, etc.)

Figure out a process for how we'd like issues to be opened, assigned, closed, 
and resolved (using JIRA and GitHub Issues)

Create a true REST API (it can use the existing API behind the scenes).

Logic to generate code in particular use cases so you can focus mainly on your 
business logic.

Use standard libraries that implement JPA, HTTP, etc.

Remote Meetups every quarter

Support IPv6



Re: CloudStack Collab in Brazil

2018-10-01 Thread Tutkowski, Mike
I guess it depends on how many people expect to be able to attend.

Ten presentation slots is probably a good starting point.

Get Outlook for iOS<https://aka.ms/o0ukef>

From: Rafael Weingärtner 
Sent: Monday, October 1, 2018 10:10:55 AM
To: users
Cc: dev
Subject: Re: CloudStack Collab in Brazil

Thank you guys for the feedback!

I will reach out the organizers to discuss our requirements. What do you
guys think that we need?
Would 10 presentation slots (50min. each) be enough? Or, do you guys think
that we need more?

Also, I think that we should also do a Hackathon. Therefore, I will also be
asking for a room such as the one we used in Montreal.

On Mon, Oct 1, 2018 at 12:03 PM Nicolas Vazquez <
nicolas.vazq...@shapeblue.com> wrote:

> I would be interested in an event in Brazil as well.
>
>
> Regards,
>
> Nicolas Vazquez
>
> 
> From: Gabriel Beims Bräscher 
> Sent: Monday, October 1, 2018 11:58:07 AM
> To: users
> Cc: dev
> Subject: Re: CloudStack Collab in Brazil
>
> As a Brazilian, that lives in Florianópolis, I cannot pass this opportunity
> ;)
> Count on me!
>
> On Mon, Oct 1, 2018 at 11:27 AM, Tutkowski, Mike <
> mike.tutkow...@netapp.com> wrote:
>
> > I would be really interested in an event in Brazil.
> >
> > 
> > From: Rafael Weingärtner 
> > Sent: Monday, October 1, 2018 5:38 AM
> > To: users
> > Cc: dev
> > Subject: Re: CloudStack Collab in Brazil
> >
> >
> > Hey Marco,
> > Yes, they run a very successful conference every year. I have just got
> back
> > from Montreal, and I talked with people there regarding the conference.
> >
> > Now, for all CloudStackers (users and devs); I will repeat what I said in
> > Montreal. The TDC conference will happen with or without us. Therefore,
> we
> > only need to decide if we will join them in their Cloud tracks. We did
> not
> > hear much feedback here, but I will try again.
> >
> > If you are part of the CloudStack community (as a contributor, committer,
> > user, operator, and so on), please do provide your feedback. Would you
> like
> > to see a CloudStack Collab Conference in Florianopolis, Brazil, 2019? I
> am
> > only asking you guys, what you think. I do understand the logistics
> > problems for some folks to attend a conference this far.
> >
> > Now, about the city; the island has an airport (airport code = FLN).
> > However, most flights to FLN will have a connection either on GRU (Sao
> > Paulo airport) or GIG (Rio de Janeiro airport); KLM, AA, Delta,
> AirFrance,
> > Tap, and others have flights to FLN. I have also found some useful links
> in
> > English that can be used by your guys to check the city. In this link [1]
> > you can information not only about the city, but the State as well; there
> > are pages in different languages such as English, Spanish, and German (to
> > change the language there is a button in the top-right corner). On these
> > other links [2-3], you can find a guide (English only) of the city; it
> > contains a brief overview and some details about Museums, Beaches, events
> > and so on.
> >
> > I would also be happy to answer any other question that you might have.
> >
> > [1] http://turismo.sc.gov.br/en/cidade/florianopolis/#
> > [2] https://www.floripa-guide.com/attractions/about-florianopolis.html
> > [3] http://www.vivendofloripa.com.br/en/home/
> >
> > On Tue, Sep 25, 2018 at 5:42 PM Marco Sinhoreli <
> > marco.sinhor...@shapeblue.com> wrote:
> >
> > > Hello Rafael,
> > >
> > > I know this conference, last year in TDC Porto Alegre I spoke about ACS
> > > and ansible.
> > >
> > > I was very impressed with them event support and organization, they
> have
> > a
> > > nice approach involving community in the organization. They also have a
> > > good penetration to prospect sponsors.
> > >
> > > I am able to help you in this subject since I am in Brazil as well.
> > >
> > > Best regards,
> > >
> > > Marco Sinhoreli
> > > marco.sinhor...@shapeblue.com
> > > mobile: +55 21 98276 3636
> > >
> > > Av. Brigad

Re: Ansible 2.7: CloudStack related changes and future

2018-10-08 Thread Tutkowski, Mike
Thanks, Rene, for all of the work you’ve done!

On 10/8/18, 10:02 AM, "Giles Sirett"  wrote:


Rene
Really sorry to hear that. I want to say a massive thank you for all of 
your work with the ansible/cloudstack modules.  I know lots of people have 
benefitted from the modules, a testament to some very cool work

Thank you and good luck with whatever's next for you

Kind regards
Giles

giles.sir...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-
From: Rene Moser 
Sent: 08 October 2018 12:43
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Ansible 2.7: CloudStack related changes and future

Hi all

First, please note I am leaving my current job by the end of November and I 
don't see that CloudStack will play any role in my professional future.

As a result, I officially announce the end of my maintenance for the Ansible 
CloudStack modules with the release of Ansible v2.8.0 in spring 2019.

If anyone is interested to take over, please let me know so I can 
officially introduce him/her to the Ansible community.

Thanks for all the support and joy I have had with CloudStack and the 
community!

Ansible v2.7.0 is released with the following, CloudStack related changes:

David Passante (1):
  cloudstack: new module cs_disk_offering (#41795)

Rene Moser (4):
  cs_firewall: fix idempotence and tests for cloudstack v4.11 (#42458)
  cs_vpc: fix disabled or wrong vpc offering taken (#42465)
  cs_pod: workaround for 4.11 API break (#43944)
  cs_template: implement update and revamp (#37015)

Yoan Blanc (1):
  cs instance root_disk size update resizes the root volume (#43817)

nishiokay (2):
  [cloudstack] fix cs_host example (#42419)
  Update cs_storage_pool.py (#42454)


Best wishes
René




Exception in MS on master

2018-09-25 Thread Tutkowski, Mike
Hi everyone,

I was building a new cloud and came across the following exception when running 
the management server (below).

Any thoughts on what this is about?

Thanks!
Mike

WARN  [o.a.c.e.o.NetworkOrchestrator] (Network-Scavenger-1:ctx-8cd291c0) 
(logid:4f70c744) Caught exception while running network gc:
com.cloud.utils.exception.CloudRuntimeException: Caught: 
com.mysql.jdbc.JDBC4PreparedStatement@6ffe01cc: SELECT networks.id FROM 
networks  INNER JOIN network_offerings ON 
networks.network_offering_id=network_offerings.id  INNER JOIN op_networks ON 
networks.id=op_networks.id WHERE networks.removed IS NULL  AND  
(op_networks.nics_count = ** NOT SPECIFIED **  AND op_networks.gc = ** NOT 
SPECIFIED **  AND op_networks.check_for_gc = ** NOT SPECIFIED ** )
at 
com.cloud.utils.db.GenericDaoBase.customSearchIncludingRemoved(GenericDaoBase.java:507)
at com.cloud.utils.db.GenericDaoBase.customSearch(GenericDaoBase.java:518)
at 
com.cloud.network.dao.NetworkDaoImpl.findNetworksToGarbageCollect(NetworkDaoImpl.java:461)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:338)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at 
com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:174)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
at 
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
at com.sun.proxy.$Proxy96.findNetworksToGarbageCollect(Unknown Source)
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$NetworkGarbageCollector.reallyRun(NetworkOrchestrator.java:2761)
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$NetworkGarbageCollector.runInContext(NetworkOrchestrator.java:2745)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
com.cloud.utils.db.GenericDaoBase.prepareAttribute(GenericDaoBase.java:1519)
at 
com.cloud.utils.db.GenericDaoBase.addJoinAttributes(GenericDaoBase.java:774)
at 
com.cloud.utils.db.GenericDaoBase.customSearchIncludingRemoved(GenericDaoBase.java:476)
... 29 more
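
For what it's worth, the "** NOT SPECIFIED **" markers in the logged statement are simply how the MySQL JDBC driver prints placeholders that were never bound, so the three op_networks parameters apparently never got set; the NullPointerException is thrown in prepareAttribute while those join attributes are being prepared. A minimal plain-JDBC sketch of the same query, with hypothetical connection settings, showing where the parameters have to be bound:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class GcQuerySketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection settings -- adjust for your own database.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/cloud", "cloud", "secret")) {

                String sql = "SELECT networks.id FROM networks"
                        + " INNER JOIN op_networks ON networks.id = op_networks.id"
                        + " WHERE networks.removed IS NULL"
                        + " AND op_networks.nics_count = ? AND op_networks.gc = ? AND op_networks.check_for_gc = ?";

                try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                    // If any of these setters is skipped, the MySQL driver renders the placeholder as
                    // "** NOT SPECIFIED **" (exactly as in the log above) and execution fails.
                    stmt.setInt(1, 0);        // nics_count
                    stmt.setBoolean(2, true); // gc
                    stmt.setBoolean(3, true); // check_for_gc

                    try (ResultSet rs = stmt.executeQuery()) {
                        while (rs.next()) {
                            System.out.println("network id: " + rs.getLong(1));
                        }
                    }
                }
            }
        }
    }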



Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-16 Thread Tutkowski, Mike
+1 (binding)

I performed manual testing of a Basic Zone making use of vSphere and XenServer.

On 11/16/18, 5:49 AM, "Paul Angus"  wrote:

We're looking good so far, but I'd still like some more votes (hopefully 
+1s  ), Please do test and cast your vote.

Thanks.


Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-
From: Andrija Panic 
Sent: 16 November 2018 12:12
To: dev 
Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

+1

Tested:
- KVM environment
- building DEB packages for Ubuntu
- advanced and basic zone deployment, MGMT srv on Ubuntu 14.04, KVM Ubuntu
14.04 (qemu versions equivalent to Ubuntu 16.04)
- a bunch of integration tests done from in-house suite of tests (system 
and user tests)
- online and offline storage migration from NFS/CEPH to SolidFire


On Fri, 16 Nov 2018 at 10:13, Dag Sonstebo 
wrote:

> +1
>
> Various customer related configurations and lifecycle operations, all
> looking good.
>
> Environment:
> - Mgmt: CentOS7.5
> - HV: VMware 5.5
> - Advanced zone
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 16/11/2018, 08:48, "Boris Stoyanov" 
> wrote:
>
> +1
>
> I did an upgrade all the way from 4.6 to rc5 and it was
> successful. I’ve also managed to run some basic lifecycle operations
> and they were looking good.
>
> Regards,
> Bobby.
>
>
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>
>
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK @shapeblue
>
>
>
> > On 15 Nov 2018, at 21:44, Gabriel Beims Bräscher 
> wrote:
> >
> > +1 Deployed local environment and tested VM/host/storage lifecycles.
> >
> > Hosts: KVM running on Ubuntu 16.04
> > Management server and database: running on Ubuntu 16.04
> > - create/use/delete system and user VMs
    >     > - register new template
> > - register service offering
> > - work with hosts (add, maintenance, remove)
> >
> > On Thu, Nov 15, 2018 at 3:21 AM, Tutkowski, Mike <
> > mike.tutkow...@netapp.com> wrote:
> >
> >> Thanks, Rohit!
> >>
> >>
> >> 
> >> From: Rohit Yadav 
> >> Sent: Wednesday, November 14, 2018 9:07 PM
> >> To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
> >> Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5
> >>
> >>
> >> Hi Mike, Paul, everyone,
> >>
> >>
> >> I tested the same on a 4.9.3.1 based VMware 5.5u3 + svs + basic
> zone and
> >> could see the same behaviour. Therefore, it's not a regression but 
a
> >> limitation from the past. Basic zone provides L3 isolation by means
> of
> >> security group (host-level firewall) which is not supported for
> VMware. I
> >> think nobody reported this in the past because nobody uses
> VMware+basic
> >> zone, I've opened an issue for this issue:
> >> https://github.com/apache/cloudstack/issues/3031
> >>
> >>
> >> Let's continue testing and voting for RC5, and let's aim to fix for
> this
> >> limitation in future 4.11.3+, 4.12.0+.
> >>
> >>
> >> - Rohit
>     >>
> >> <https://cloudstack.apache.org>
> >>
> >>
> >>
> >> 

Re: Introduction

2019-01-02 Thread Tutkowski, Mike
Welcome, Abhishek!

On 1/2/19, 3:54 AM, "Abhishek Kumar"  wrote:

Hello all!


This is Abhishek Kumar. I've recently joined ShapeBlue as Software Engineer 
to work on Cloudstack.
Looking forward to learn and contribute in the project and community in a 
meaningful manner.


Regards,


Abhishek Kumar

Software Engineer

ShapeBlue

abhishek.ku...@shapeblue.com

www.shapeblue.com

abhishek.ku...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue







Re: new committer: Boris Stoyanov (AKA Bobby)

2018-12-13 Thread Tutkowski, Mike
Congratulations, Bobby!



From: Paul Angus 
Sent: Thursday, December 13, 2018 2:29 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Cc: Boris Stoyanov
Subject: new committer: Boris Stoyanov (AKA Bobby)

Hi Everyone,

The Project Management Committee (PMC) for Apache CloudStack
has invited Boris Stoyanov to become a committer and we are pleased
to announce that he has accepted.

Please join me in congratulating Bobby!


Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.
Being a PMC member enables assistance with the management
and to guide the direction of the project.



paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London WC2E 9DPUK
@shapeblue





Re: CloudStack Collab in Brazil

2018-12-17 Thread Tutkowski, Mike
I can help out with the CFP (looking through and helping to select 
presentations).



From: Rafael Weingärtner 
Sent: Monday, December 17, 2018 1:49 PM
To: users
Cc: dev@cloudstack.apache.org
Subject: Re: CloudStack Collab in Brazil

Hey guys,

Have you guys had time to read through this e-mail? Are there volunteers to
help us make CCC happen in Brazil? We need to provide them the topics of
tracks that we will be participating until 21/12/2018.

On Thu, Dec 13, 2018 at 7:11 PM Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Hello CloudStackers,
>
> I had a few meetings with the TDC folks, and we seem to be moving on. They
> have a slightly different organization than ApacheCon though. Therefore, we
> were asked to provide them with some “track topics” that fit in the area of
> Cloud Computing. Then, we could direct presentations to one of these
> tracks. The idea is that the international tracks (the ones that will be in
> English) will not be parallelized to enable the audience to attend all of
> them (this means, one for each day). Also, the tracks will receive
> presentations from other people that are not in our bubble, and this is
> great (at least I found this awesome), because different people with
> different backgrounds would come together on the same track, which in turn
> means, people that might not know ACS would have the opportunity not just
> to meet the solution, but also the people behind it.
>
> So, this is what I have in mind:
>
> - Cloud computing (area/topic)
> - cloud orchestration -- this would be the track where topics
> regarding features, and cloud orchestration systems (e.g. CloudStack)
> design and structure would be presented
> - DevOps -- track for presentations that address the day-to-day of
> CloudStack (or OpenStack) developers and the daily life of operators with
> tasks such as debugging and troubleshooting
> - tests -- track for discussing the Q process and testing methods
> for clouds
> - cloud open source ecosystem -- track focusing on the cloud
> ecosystem, where people can address things relating the job market,
> business opportunities, and the management process of highly heterogeneous
> and distributed communities in OpenSource (such as CloudStack)
>
>
> What do you guys think of these divisions for the CFP?
> Also, we might need help to review and select presentation proposals.
> Would some of you guys be willing to help on this process?
>
> And last, but not least, it would be awesome if companies linked to ACS
> are interested to be the sponsors of tracks or the event. They have sent me
> the brochure and sponsorship prospects from 2018 so we can get to know
> better the conference [1]. The attendance report and prospectus are in
> English, and for instance, in 2018 the TDC event in Florianopolis (where we
> are proposing to have CCC in 2019) received about 4000 people. The
> sponsorship prospectus for 2019 events is being prepared, and I guess if
> there are interested parties on this, you can reach them directly, or if
> you have some problems to do that, I can help you guys as well.
>
> [1]
> https://www.dropbox.com/sh/53ujp2usf402dlj/AAA1a2jZPddGcAT8ZosRiGCAa?dl=0
>
> On Wed, Oct 24, 2018 at 8:16 PM Tutkowski, Mike 
> wrote:
>
>> Thanks, Rafael!
>>
>> The dates work for me.
>>
>> Get Outlook for iOS<https://aka.ms/o0ukef>
>> 
>> From: Rafael Weingärtner 
>> Sent: Wednesday, October 24, 2018 5:02:14 PM
>> To: users
>> Cc: dev
>> Subject: Re: CloudStack Collab in Brazil
>>
>>
>> Yes, they already have a date set. It should be 23 - 27 April, 2019.
>> I should be talking with them again this week to check what we need to
>> move
>> thing forward.
>>
>> What do you guys think about these dates?
>>
>> On Mon, Oct 22, 2018 at 5:07 PM Tutkowski, Mike <
>> mike.tutkow...@netapp.com>
>> wrote:
>>
>> > Hi Rafael,
>> >
>> > Do you have a specific date in mind for CCC Brazil? It sounds like, in
>> > general, we are looking at April.
>> >
>> > Thanks!
>> > Mike
>> >
>> > On 10/1/18, 12:51 PM, "Rafael Weingärtner" > >
>> > wrote:
>> >
>> > NetApp Security WARNING: This is an external email. Do not 

Re: CloudStack Collab in Brazil

2018-12-24 Thread Tutkowski, Mike
> > > > >>
> > > > >> Em 19/12/2018 15:55, "Tim Mackey"  escreveu:
> > > > >>
> > > > >> Gabriel,
> > > > >>
> > > > >> I'm happy to help review proposals if required.
> > > > >>
> > > > >> -tim
> > > > >>
> > > > >> On Wed, Dec 19, 2018 at 12:35 PM Gabriel Beims Bräscher <
> > > > >> gabrasc...@gmail.com> wrote:
> > > > >>
> > > > >> > Hi Rafael,
> > > > >> >
> > > > >> > I am available to help, count on me!
> > > > >> > I have one question. Can anyone (one that is not a
> > > PMC/Committer)
> > > > >> help to
> > > > >> > review presentations?
> > > > >> >
> > > > >> > The divisions for the CFP looks good, adding security
> aspects
> > as
> > > > >> Ricardo
> > > > >> > Makino proposed is also interesting.
> > > > >> >
> > > > >> > Regards,
> > > > >> > Gabriel.
> > > > >> >
> > > > >> > Em qua, 19 de dez de 2018 às 11:12, Cristian Latapiat <
> > > > >> latap...@gmail.com>
> > > > >> > escreveu:
> > > > >> >
> > > > >> > > Hi Rafael ,
> > > > >> > >
> > > > >> > > I am, therefore, available to collaborate and to help you
> in
> > > > >> everything
> > > > >> > > that may be necessary.
> > > > >> > >
> > > > >> > > Regards,
> > > > >> > >
> > > > >> > > Cristian
> > > > >> > >
> > > > >> > > Em seg, 17 de dez de 2018 às 18:49, Rafael Weingärtner <
> > > > >> > > rafaelweingart...@gmail.com> escreveu:
> > > > >> > >
> > > > >> > > > Hey guys,
> > > > >> > > >
> > > > >> > > > Have you guys had time to read through this e-mail? Are
> > > there
> > > > >> > volunteers
> > > > >> > > to
> > > > >> > > > help us make CCC happen in Brazil? We need to provide
> them
> > > the
> > > > >> topics
> > > > >> > of
> > > > >> > > > tracks that we will be participating until 21/12/2018.
> > > > >> > > >
> > > > >> > > > On Thu, Dec 13, 2018 at 7:11 PM Rafael Weingärtner <
> > > > >> > > > rafaelweingart...@gmail.com> wrote:
> > > > >> > > >
> > > > >> > > > > Hello CloudStackers,
> > > > >> > > > >
> > > > >> > > > > I had a few meetings with the TDC folks, and we seem
> to
> > be
> > > > >> moving on.
> > > > >> > > > They
> > > > >> > > > > have a slightly different organization than ApacheCon
> > > > though.
> > > > >> > > Therefore,
> > > > >> > > > we
> > > > >> > > > > were asked to provide them with some “track topics”
> that
> > > fit
> > > > >> in the
> > > > >> > > area
> > > > >> > > > of
> > > > >> > > > > Cloud Computing. Then, we could direct presentations
> to
> > > one
> > > > >> of these
> > > > >> > > > > tracks. The idea is that the international tracks (the
> > > ones
> > > > >> that will
> > > > >> > > be
> > > > >> > > > in
> > > > >> > > > > English) will not be parallelized to enable the
> audience
> > > to
> > > > >> attend
> > > > >> > all
> > > > >> > > of
> > > > >> > > > > them (this means, one for each day). Also, the tracks
> > will
> > > > >> receive
> > > > >> > > > > presentations from other people that are not in our
> > > bubble,

Videos from September CCC Montreal now live

2018-12-06 Thread Tutkowski, Mike
Hi everyone,

Just an FYI that the videos from the CloudStack Collaboration Conference that 
took place in Montreal last September are available here: 
https://www.youtube.com/playlist?list=PLW7vgBNPiQhkJOwgkEw1bEc4IGDXnkzs7

Thanks to ShapeBlue for recording the presentations!
Mike


[ANNOUNCE] New committer: Andrija Panić

2018-11-18 Thread Tutkowski, Mike
Hi everyone,

The Project Management Committee (PMC) for Apache CloudStack
has invited Andrija Panić to become a committer and I am pleased
to announce that he has accepted.

Please join me in congratulating Andrija on this accomplishment.

Thanks!
Mike


Re: [ANNOUNCE] Apache CloudStack LTS Maintenance Release 4.11.2.0

2018-11-26 Thread Tutkowski, Mike
Thanks, Paul and all those who participated in this release!



From: Paul Angus 
Sent: Monday, November 26, 2018 8:05 AM
To: dev@cloudstack.apache.org; market...@cloudstack.apache.org; 
annou...@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: [ANNOUNCE] Apache CloudStack LTS Maintenance Release 4.11.2.0

Announcing Apache CloudStack LTS Maintenance Release 4.11.2.0

The Apache CloudStack project is pleased to announce the release of CloudStack 
4.11.2.0 as part of its LTS 4.11.x releases. The CloudStack 4.11.2.0 release 
contains more than 70 fixes since the CloudStack 4.11.1.0 release. CloudStack 
LTS branches are supported for 20 months and will receive updates for the first 
14 months. For the final six months only security updates are provided.

Apache CloudStack is an integrated Infrastructure-as-a-Service (IaaS) software 
platform allowing users to build feature-rich public and private cloud 
environments. CloudStack includes an intuitive user interface and rich API for 
managing the compute, networking, software, and storage resources. The project 
became an Apache top level project in March, 2013. More information about 
Apache CloudStack can be found at:
http://cloudstack.apache.org/

# Documentation

The 4.11.2.0 release notes include a full list of issues fixed:
http://docs.cloudstack.apache.org/en/4.11.2.0/releasenotes/index.html

The CloudStack documentation includes upgrade instructions from previous 
versions of Apache CloudStack, and can be found at:
http://docs.cloudstack.apache.org/en/4.11.2.0/upgrading/index.html


The official installation, administration and API documentation for each of the 
releases are available on our documentation page:
http://docs.cloudstack.apache.org/

# Downloads

The official source code for the 4.11.2.0 release can be downloaded from our 
downloads page:
http://cloudstack.apache.org/downloads.html

In addition to the official source code release, individual contributors have 
also made convenience binaries available on the Apache CloudStack download 
page, and can be found at:

http://download.cloudstack.org/ubuntu/dists/
http://download.cloudstack.org/centos/6/
http://download.cloudstack.org/centos/7/
http://www.shapeblue.com/packages/


Kind regards,

Paul Angus


paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London WC2E 9DPUK
@shapeblue




