Re: Launch a VM (not from a template)

2014-09-23 Thread Nitin Mehta
Marcus - I think it's possible even now in CS. I don't think we need
another API.
1. You can create a VM using the deployvm API with the flag startvm=false.
This would create the VM and its corresponding resources' DB records without
actually creating them. You can give a dummy template for now as it still
needs a templateid (we should remove this dependency in the future).
2. You can then use updateVolume to update the volume with the storage pool
id and mark it ready.
3. Finally you use the start VM API to start the VM. It would create all the
resources and start the VM. Since the volume is already ready, it won't go
into creating the root volume.
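
Roughly, in Marvin terms - a sketch only; the uuids, the path, and the
apiclient below are placeholders, and I'm assuming updateVolume's
storageid/state parameters behave as described above:

from marvin.cloudstackAPI import (deployVirtualMachine, listVolumes,
                                  updateVolume, startVirtualMachine)

# 1. Create only the VM's db records (startvm=false), with a dummy template.
deploy = deployVirtualMachine.deployVirtualMachineCmd()
deploy.serviceofferingid = "offering-uuid"      # placeholder
deploy.templateid = "dummy-template-uuid"       # placeholder
deploy.zoneid = "zone-uuid"                     # placeholder
deploy.startvm = "false"
vm = apiclient.deployVirtualMachine(deploy)     # apiclient: a Marvin client

# 2. Point the root volume at the pre-existing disk and mark it Ready.
lv = listVolumes.listVolumesCmd()
lv.virtualmachineid = vm.id
lv.type = "ROOT"
root = apiclient.listVolumes(lv)[0]

upd = updateVolume.updateVolumeCmd()
upd.id = root.id
upd.storageid = "primary-storage-uuid"          # pool holding the disk
upd.path = "path-of-existing-disk"              # placeholder
upd.state = "Ready"
apiclient.updateVolume(upd)

# 3. Start the VM; the Ready root volume is reused, not recreated.
start = startVirtualMachine.startVirtualMachineCmd()
start.id = vm.id
apiclient.startVirtualMachine(start)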

Thanks,
-Nitin

On 22/09/14 10:08 PM, Marcus shadow...@gmail.com wrote:

So, we have thought about this a bit as well. Our solution, which we
haven't actually implemented yet, was to create a registerVirtualMachine
api call that would be similar to deployVirtualMachine but accept a
rootdiskid and storage pool id. This would essentially enter a vm into
the db but just copy the rootdiskid into the root volume's space rather
than creating a new one. This would allow root disks to be created outside
of cloudstack and then made known to cloudstack. It wouldn't be terribly
difficult to implement (the current deploy only creates a root disk if it
doesn't already exist, so it would be easy to short-circuit); you would want
to ensure the service offering matches the storage tags, though.

A registerVolume would also be useful.
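
To make the shape concrete, something like this - purely hypothetical,
since neither registerVirtualMachine nor registerVolume exists today;
every name below is invented for illustration:

# Hypothetical API only - nothing below exists in CloudStack today; the
# parameter names are invented to mirror deployVirtualMachine's shape.
register_vm_params = {
    "command": "registerVirtualMachine",
    "serviceofferingid": "offering-uuid",     # checked against storage tags
    "zoneid": "zone-uuid",
    "rootdiskid": "disk-created-outside-cs",  # existing disk to adopt
    "storageid": "primary-pool-uuid",         # pool where that disk lives
}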
On Sep 22, 2014 9:18 PM, Will Stevens wstev...@cloudops.com wrote:

 ws: inline...

 Thanks for the response Mike.  :)


 *Will STEVENS*
 Lead Developer

 *CloudOps* *| *Cloud Solutions Experts
 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
 w cloudops.com *|* tw @CloudOps_

 On Mon, Sep 22, 2014 at 7:39 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

  If you did #1, how would you pick a relevant compute offering? You
would
  probably need to look first to make sure at least one existed that
could
  satisfy your requirement(s) and then make sure the resources could be
  marked in advance as if they were being consumed.
 

 ws: so in this case the original VM would be managed by CS and the 3rd
 party software would be backing it up and replicating it to other
 zones/regions.  technically, we should have all of the information we
need
 about the VM because CS will already know about the original VM which
this
 VM is getting spun up to replace.  essentially this would be used for
DR in
 a different DC.  if one DC goes down for some reason, this would
basically
 behave like a cold standby in a different DC.

 you did touch on a pipe dream of mine though (which I am still in the
 process of thinking through).  I want to be able to spin up a fresh CS
 install and then discover an existing xen pool and then configure CS to
 step in as an orchestration tool for the existing infra.  I am still
 thinking through how this would be possible, but this is outside the
scope
 of this specific problem, so I won't derail this topic.

 
  #2 might be easier.
 

 ws: I agree that this might be an easier way to approach the problem.  I
 still need to think through where the gotchas are with this approach.

 
  #3 could be really useful if storage vendors allow you to take
snapshots
  that reside on their own SAN (instead of secondary storage). Then a
  template could be spun up from the SAN snapshot.
 
  ws: I think this is the worst solution for my specific situation
(because
 it still requires a copy), but as a general approach for simplifying
this
 process, I think it has the most potential.  this approach would be
useful
 to a much wider audience and could potentially reduce the frustration
and
 migration time for onboarding and migrating customers to CS.

 
 
  On Mon, Sep 22, 2014 at 4:58 PM, Will Stevens wstev...@cloudops.com
  wrote:
 
   Hey All,
   I am looking for some advice on the following problem.  I am fully
 aware
   that I will probably have to build this functionality into CS, but I
 want
   to get your ideas before I go too far down one path.
  
   *Intro:*
   We have a backup/DR solution that can basically take stateful
 incremental
   snapshots of our systems at a hypervisor level.  It does a lot of
other
   magic, but I will limit the scope for now.  It can also make the
  snapshots
   available directly to the hypervisor (in multiple datacenters) so
they
  can
   be spun up almost instantly (if CS is not in the picture) by the
   hypervisor.
  
   *Problem:*
   If we spin up the VM directly on the hypervisor, CS will not know
about
  it,
   so that currently is not an option (although detecting that VM
would be
   ideal).
   If we need to spin up the VM through CS, the current process is
 entirely
   too inefficient.  My understanding is that the only option would be
to
   import the snapshot as a template (over http) and then once
uploaded,
 it
   would then have to be transferred from secondary storage to primary
  storage
   to get launched.  For the 

Re: Review Request 25771: Adding new test case to verify the fix provided in bug CLOUDSTACK-6172

2014-09-23 Thread sanjeev n

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25771/
---

(Updated Sept. 23, 2014, 6:29 a.m.)


Review request for cloudstack and SrikanteswaraRao Talluri.


Changes
---

Fixed Review Comments


Bugs: CS-6172
https://issues.apache.org/jira/browse/CS-6172


Repository: cloudstack-git


Description
---

@Desc:Volume is not retaining same uuid when migrating from one storage to 
another.
Step1:Create a volume/data disk
Step2:Verify UUID of the volume
Step3:Migrate the volume to another primary storage within the cluster
Step4:Migrating volume to new primary storage should succeed
Step5:volume UUID should not change even after migration
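
A condensed sketch of the check (assuming the test has already set up
apiclient, the volume, and a second pool target_pool in the same cluster):

from marvin.cloudstackAPI import migrateVolume, listVolumes

uuid_before = volume.id                  # UUID recorded after creation

cmd = migrateVolume.migrateVolumeCmd()
cmd.volumeid = volume.id
cmd.storageid = target_pool.id           # another pool in the cluster
apiclient.migrateVolume(cmd)

lv = listVolumes.listVolumesCmd()
lv.id = uuid_before
migrated = apiclient.listVolumes(lv)[0]  # still resolvable by the old UUID
assert migrated.id == uuid_before        # UUID must survive the migration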


Diffs (updated)
-

  test/integration/component/test_volumes.py 122f2d1 

Diff: https://reviews.apache.org/r/25771/diff/


Testing
---

Yes
@Desc:Volume is not retaining same uuid when migrating from one storage to 
another. ... === TestName: test_01_migrateVolume | Status : SUCCESS ===
ok

--
Ran 1 test in 343.250s

OK


Thanks,

sanjeev n



Re: Review Request 25685: Fixed the test_usage.py script bug - CLOUDSTACK-7555

2014-09-23 Thread SrikanteswaraRao Talluri


 On Sept. 16, 2014, 11:25 a.m., SrikanteswaraRao Talluri wrote:
  test/integration/component/test_usage.py, line 748
  https://reviews.apache.org/r/25685/diff/1/?file=690448#file690448line748
 
  Please change this to self.account.name and self.account.domainid
 
 Chandan Purushothama wrote:
 Talluri,
 account is a class attribute. It is not conventionally correct to refer to a 
 class attribute using an object reference (self). Hence I intentionally 
 referenced the account via its class name,
 
 Thank you,
 Chandan.

cool. I suggested the above since those class attributes can also be accessed 
by their instances.
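
For example:

# Python looks the attribute up on the instance first and falls back to the
# class, so both forms below read the same class attribute.
class SomeTest:
    account = "admin-account"   # class attribute

t = SomeTest()
print(SomeTest.account)         # access via the class
print(t.account)                # same attribute via the instance (self)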


- SrikanteswaraRao


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25685/#review53506
---


On Sept. 16, 2014, 6:46 a.m., Chandan Purushothama wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/25685/
 ---
 
 (Updated Sept. 16, 2014, 6:46 a.m.)
 
 
 Review request for cloudstack, sangeetha hariharan, sanjeev n, and 
 SrikanteswaraRao Talluri.
 
 
 Bugs: CLOUDSTACK-7555
 https://issues.apache.org/jira/browse/CLOUDSTACK-7555
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 
 TestTemplateUsage.test_01_template_usage fails with the following error 
 message:
 
 Stacktrace
 
   File "/usr/lib/python2.7/unittest/case.py", line 332, in run
 testMethod()
   File "/root/cloudstack/test/integration/component/test_usage.py", line 802, 
 in test_01_template_usage
 Check TEMPLATE.CREATE event in events table
   File "/usr/lib/python2.7/unittest/case.py", line 516, in assertEqual
 assertion_func(first, second, msg=msg)
   File "/usr/lib/python2.7/unittest/case.py", line 509, in _baseAssertEqual
 raise self.failureException(msg)
 'Check TEMPLATE.CREATE event in events table\n
 
 This is because the Template is being created as admin and it belongs to the 
 admin account. The template should belong to the Regular User in order to 
 check for the TEMPLATE.CREATE Event.
 
 Fixed the script such that the Template now belongs to the regular account.
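 
 The gist of the fix, sketched with Marvin's base library (self.services and
 self.account are assumed test fixtures; the keyword arguments route ownership
 to the regular user):
 
 # Sketch only: create the template as the regular user's account so the
 # TEMPLATE.CREATE event is recorded against that account, not admin.
 from marvin.lib.base import Template
 
 template = Template.create(
     self.apiclient,
     self.services["template"],
     volumeid=volume.id,               # assumed volume fixture
     account=self.account.name,        # regular user, not admin
     domainid=self.account.domainid,
 )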
 
 
 Diffs
 -
 
   test/integration/component/test_usage.py e99bb81 
 
 Diff: https://reviews.apache.org/r/25685/diff/
 
 
 Testing
 ---
 
 No testing is done.
 
 
 Thanks,
 
 Chandan Purushothama
 




Re: schema-430to440.sql: not same file in master and 4.4 branches

2014-09-23 Thread Daan Hoogland
On Mon, Sep 22, 2014 at 11:01 PM, Pierre-Luc Dion pd...@cloudops.com
wrote:

 Looks like the current file setup/db/db/schema-430to440.sql is not the same
 in the master branch and 4.4.
 This is a problem, I guess.

 At least this file is the reason why in 4.4.0 and 4.4.1 we can't create VMs
 using Windows 2012 R2 or CentOS 6.5 on XenServer.

 please see here:

 4.4 file history

 https://github.com/apache/cloudstack/commits/4.4/setup/db/db/schema-430to440.sql

 master file history:

 https://github.com/apache/cloudstack/commits/master/setup/db/db/schema-430to440.sql

 would it make sense if the one in master were replaced by the one in the 4.4
 branch?

I think a more intelligent merge is needed. I can see two-way changes.

@Ian: can you have a quick look at your changes to master?


 Also, I'd like to fix the new OS types introduced in 4.4.0 in 4.4.1; can we
 update 4.4.1 for the missing XenServer mappings?

ok, go ahead.

Thanks

 *Pierre-Luc DION*


-- 
Daan


Re: Review Request 25771: Adding new test case to verify the fix provided in bug CLOUDSTACK-6172

2014-09-23 Thread SrikanteswaraRao Talluri

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25771/#review54258
---

Ship it!


Ship It!

- SrikanteswaraRao Talluri


On Sept. 23, 2014, 6:29 a.m., sanjeev n wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/25771/
 ---
 
 (Updated Sept. 23, 2014, 6:29 a.m.)
 
 
 Review request for cloudstack and SrikanteswaraRao Talluri.
 
 
 Bugs: CS-6172
 https://issues.apache.org/jira/browse/CS-6172
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 @Desc:Volume is not retaining same uuid when migrating from one storage to 
 another.
 Step1:Create a volume/data disk
 Step2:Verify UUID of the volume
 Step3:Migrate the volume to another primary storage within the cluster
 Step4:Migrating volume to new primary storage should succeed
 Step5:volume UUID should not change even after migration
 
 
 Diffs
 -
 
   test/integration/component/test_volumes.py 122f2d1 
 
 Diff: https://reviews.apache.org/r/25771/diff/
 
 
 Testing
 ---
 
 Yes
 @Desc:Volume is not retaining same uuid when migrating from one storage to 
 another. ... === TestName: test_01_migrateVolume | Status : SUCCESS ===
 ok
 
 --
 Ran 1 test in 343.250s
 
 OK
 
 
 Thanks,
 
 sanjeev n
 




Re: git commit: updated refs/heads/master to b8795d8

2014-09-23 Thread Daan Hoogland
Frank, I had a look at this commit and found an issue with it. Please look
at my comments in the last file:

On Tue, Sep 23, 2014 at 12:53 AM, frankzh...@apache.org wrote:
...



 Branch: refs/heads/master
 Commit: b8795d88796aa5b9a736fff3ad6fbcf5d6f2b825
 Parents: 6655d8f
 Author: Frank Zhang frank.zh...@citrix.com
 Authored: Mon Sep 22 15:56:57 2014 -0700
 Committer: Frank Zhang frank.zh...@citrix.com
 Committed: Mon Sep 22 15:56:57 2014 -0700

...

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/b8795d88/plugins/hypervisors/baremetal/src/org/apache/cloudstack/api/ListBaremetalRctCmd.java
 --
 diff --git
 a/plugins/hypervisors/baremetal/src/org/apache/cloudstack/api/ListBaremetalRctCmd.java
 b/plugins/hypervisors/baremetal/src/org/apache/cloudstack/api/ListBaremetalRctCmd.java
 new file mode 100755
 index 000..3a69f3c
 --- /dev/null
 +++
 b/plugins/hypervisors/baremetal/src/org/apache/cloudstack/api/ListBaremetalRctCmd.java
 @@ -0,0 +1,69 @@
 +// Licensed to the Apache Software Foundation (ASF) under one
 +// or more contributor license agreements.  See the NOTICE file
 +// distributed with this work for additional information
 +// regarding copyright ownership.  The ASF licenses this file
 +// to you under the Apache License, Version 2.0 (the
 +// "License"); you may not use this file except in compliance
 +// with the License.  You may obtain a copy of the License at
 +//
 +//   http://www.apache.org/licenses/LICENSE-2.0
 +//
 +// Unless required by applicable law or agreed to in writing,
 +// software distributed under the License is distributed on an
 +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 +// KIND, either express or implied.  See the License for the
 +// specific language governing permissions and limitations
 +// under the License.
 +//
 +// Automatically generated by addcopyright.py at 01/29/2013
 +package org.apache.cloudstack.api;
 +
 +import com.cloud.baremetal.manager.BaremetalVlanManager;
 +import com.cloud.baremetal.networkservice.BaremetalRctResponse;
 +import com.cloud.exception.ConcurrentOperationException;
 +import com.cloud.exception.InsufficientCapacityException;
 +import com.cloud.exception.NetworkRuleConflictException;
 +import com.cloud.exception.ResourceAllocationException;
 +import com.cloud.exception.ResourceUnavailableException;
 +import org.apache.cloudstack.acl.RoleType;
 +import org.apache.cloudstack.api.response.ListResponse;
 +import org.apache.log4j.Logger;
 +
 +import javax.inject.Inject;
 +import java.util.ArrayList;
 +import java.util.List;
 +
 +@APICommand(name = "listBaremetalRct", description = "list baremetal rack
 configuration", responseObject = BaremetalRctResponse.class,
 +requestHasSensitiveInfo = false, responseHasSensitiveInfo =
 false, authorized = {RoleType.Admin})
 +public class ListBaremetalRctCmd extends BaseListCmd {
 +private static final Logger s_logger =
 Logger.getLogger(ListBaremetalRctCmd.class);
 +private static final String s_name = "listbaremetalrctresponse";
 +@Inject
 +BaremetalVlanManager vlanMgr;
 +


here Exception is caught, only ServerApiException is thrown, but a long
list of exceptions is declared:


 +@Override
 +public void execute() throws ResourceUnavailableException,
 InsufficientCapacityException, ServerApiException,
 ConcurrentOperationException,
 +ResourceAllocationException, NetworkRuleConflictException {
 +try {
 +ListResponse<BaremetalRctResponse> response = new
 ListResponse<>();
 +List<BaremetalRctResponse> rctResponses = new ArrayList<>();
 +BaremetalRctResponse rsp = vlanMgr.listRct();
 +if (rsp != null) {
 +rctResponses.add(rsp);
 +}
 +response.setResponses(rctResponses);
 +response.setResponseName(getCommandName());
 +response.setObjectName("baremetalrcts");
 +this.setResponseObject(response);


only what is actually thrown should be caught

 +} catch (Exception e) {
 +s_logger.debug("Exception happened while executing
 ListBaremetalRctCmd", e);
 +throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR,
 e.getMessage());
 +}
 +}
 +
 +@Override
 +public String getCommandName() {
 +return s_name;
 +}
 +
 +}




-- 
Daan


Re: Review Request 25580: Adding test case to verify fix for issue Create volume from custom disk offering does not work as expected

2014-09-23 Thread sanjeev n

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25580/#review54259
---

Ship it!


commit 0d5a435f7669eddd44a6b62317fe26bb1d96e96c
Author: sanjeev sanj...@apache.org
Date:   Tue Sep 23 14:15:39 2014 +0530

Creating custom disk does not work as expected

- sanjeev n


On Sept. 12, 2014, 1:32 p.m., sanjeev n wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/25580/
 ---
 
 (Updated Sept. 12, 2014, 1:32 p.m.)
 
 
 Review request for cloudstack, Santhosh Edukulla and SrikanteswaraRao Talluri.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 @Desc:Create volume from custom disk offering does not work as expected
 Step1:Create custom disk offering
 Step2:Create Volume with size x
 Step3:Attach that volume to a vm
 Step4:Create another volume with size y
 Step5:Verify that the new volume is created with size Y but not with size X
   
 
 
 Diffs
 -
 
   test/integration/component/test_escalations_volumes.py db4c3d8 
   tools/marvin/marvin/lib/base.py 04217b2 
 
 Diff: https://reviews.apache.org/r/25580/diff/
 
 
 Testing
 ---
 
 Yes
 
 @Desc:Create volume from custom disk offering does not work as expected ... 
 === TestName: test_13_volume_custom_disk_size | Status : SUCCESS ===
 ok
 
 --
 Ran 1 test in 303.508s
 
 OK
 
 
 Thanks,
 
 sanjeev n
 




baremetal in master and 4.4

2014-09-23 Thread Daan Hoogland
Frank,

Can you have a look at commit 781ad96b04c9030fc1c0a0401145414e7f3978aa ? It
is only on master, not on 4.4 but it contains a change in
schema-430to440.sql

I'd say the easiest remedy is to revert and recreate with the db change in
another db upgrade file.


-- 
Daan


Re: console proxy HTTPS in 4.3.1, static option, clarification please

2014-09-23 Thread France
Thank you for your answer.

On 22 Sep 2014, at 18:58, Amogh Vasekar amogh.vase...@citrix.com wrote:

 Hi,
 
 No, option (3) is not fully supported yet since it needs integration with
 a load balancer for all console proxy VM ips. Please see the note at
  https://cwiki.apache.org/confluence/display/CLOUDSTACK/Realhost+IP+changes#RealhostIPchanges-ConsoleProxy.1
 
 Amogh
 
 On 9/22/14 4:47 AM, France mailingli...@isg.si wrote:
 
 Hi,
 
 Because I get confusing information on the internet, I would like to ask
 here for clarification.
 
 There are three options for the
 consoleproxy.url.domain
 configuration setting.
 
 1. Disable it with an empty string.
 Not really an option, because an iframe to http from https is silently
 blocked by browsers nowadays. If there were a link to click instead
 of an iframe it could work and I would be done with it.
 
 2. *.somedomain.xxx
 Wildcard option. Requires running our own DNS server and buying expensive
 certificates. Not really an option, due to the high cost of a wildcard
 certificate and setting up another unnecessary service.
 
 3. secure.somedomain.xxx
 Static option which would allow us to use a single FQDN certificate. This
 is acceptable for us, but upon testing with 4.3.1 (restart of ACS,
 destruction of the console proxy SVM) it did not link to secure.somedomain.xxx.

 Before I lose any more time with this option: does it work with 4.3.1?
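 
 For completeness, this is roughly how we set the value - a minimal sketch
 assuming the unauthenticated integration port 8096 is open (a signed API
 call works too):
 
 import requests
 
 # Set the static console proxy domain via updateConfiguration.
 requests.get("http://management-server:8096/client/api", params={
     "command": "updateConfiguration",
     "name": "consoleproxy.url.domain",
     "value": "secure.somedomain.xxx",
     "response": "json",
 })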
 
 
 According to the documentation for 4.3 at:
 http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.3/search.html?q=consoleproxy.url.domain&check_keywords=yes&area=default
 in:
 Working with System Virtual Machines
 Console Proxy
 
 Load-balancing Console Proxies
 
 An alternative to using dynamic DNS or creating a range of DNS entries as
 described in the last section would be to create an SSL certificate for a
 specific domain name, configure CloudStack to use that particular FQDN,
 and then configure a load balancer to load balance the console proxy's IP
 address behind the FQDN. As the functionality for this is still new, please
 see https://cwiki.apache.org/confluence/display/CLOUDSTACK/Realhost+IP+changes
 for more details.
 
 
 Regards,
 F



Review Request 25933: CLOUDSTACK-7408: Fixed - Private key of the ssh keypair was getting corrupted

2014-09-23 Thread Gaurav Aradhye

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25933/
---

Review request for cloudstack, Santhosh Edukulla and SrikanteswaraRao Talluri.


Bugs: CLOUDSTACK-7408
https://issues.apache.org/jira/browse/CLOUDSTACK-7408


Repository: cloudstack-git


Description
---

Test cases in the test suite failed while trying SSH using the private key file 
after resetting the SSH keypair (CS sends the private key to the client and the 
public key to the ssh host - in this case the VM).

SSH failed because the private key file was getting corrupted: it was passed to 
Paramiko's load_host_keys() function, which loads the host key (fingerprint) 
into the passed file. The path to the known_hosts file should be passed here 
instead of the path to the private key file.

Also set the look_for_keys option to False because we don't want Paramiko to 
look for ssh keys at the default location; we are passing the private key 
itself and it is stored in a temporary location.

I have added an extra parameter knownHostsFilePath which is initialised in the 
sshClient class. This can be used in future if a user has his/her known_hosts 
file at a location other than ~/.ssh/known_hosts. However, there is no need to 
pass this value from test cases as of now; the default will be used.
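
A minimal sketch of the corrected calls (host, username and key path are
placeholders):

import os
import paramiko

# load_host_keys() gets the known_hosts path (not the private key), and
# look_for_keys=False stops Paramiko from scanning ~/.ssh for other keys.
client = paramiko.SSHClient()
client.load_host_keys(os.path.expanduser("~/.ssh/known_hosts"))
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "10.0.1.50",                      # placeholder VM address
    username="root",
    key_filename="/tmp/private.key",  # key returned by resetSSHKeyForVirtualMachine
    look_for_keys=False,
)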


Diffs
-

  tools/marvin/marvin/lib/base.py b0dd6e2 
  tools/marvin/marvin/lib/utils.py 8788b3b 
  tools/marvin/marvin/sshClient.py df2 

Diff: https://reviews.apache.org/r/25933/diff/


Testing
---

Yes. I ran two test classes present in this test suite separately, hence adding 
separate logs.

Log:

[I]
Test Reset SSH keys for VM  already having SSH key ... === TestName: 
test_01_reset_ssh_keys | Status : SUCCESS ===
ok
Reset SSH keys for VM  created from password enabled template and ... === 
TestName: test_02_reset_ssh_key_password_enabled_template | Status : SUCCESS ===
ok
Reset SSH key for VM  having no SSH key ... === TestName: 
test_03_reset_ssh_with_no_key | Status : SUCCESS ===
ok
Reset SSH keys for VM  created from password enabled template and ... === 
TestName: test_04_reset_key_passwd_enabled_no_key | Status : SUCCESS ===
ok
Reset SSH keys for VM  already having SSH key when VM is in running ... === 
TestName: test_05_reset_key_in_running_state | Status : SUCCESS ===
ok
Reset SSH keys for VM  created from password enabled template and ... === 
TestName: test_06_reset_key_passwd_enabled_vm_running | Status : SUCCESS ===
ok
Verify API resetSSHKeyForVirtualMachine with incorrect parameters ... === 
TestName: test_07_reset_keypair_invalid_params | Status : SUCCESS ===
ok

--
Ran 7 tests in 2247.949s

OK


[II]

Verify API resetSSHKeyForVirtualMachine for non admin non root ... === 
TestName: test_01_reset_keypair_normal_user | Status : SUCCESS ===
ok
Verify API resetSSHKeyForVirtualMachine for domain admin non root ... === 
TestName: test_02_reset_keypair_domain_admin | Status : SUCCESS ===
ok
Verify API resetSSHKeyForVirtualMachine for domain admin root ... === TestName: 
test_03_reset_keypair_root_admin | Status : SUCCESS ===
ok

--
Ran 3 tests in 1866.305s

OK


Thanks,

Gaurav Aradhye



Review Request 25934: Fixed various bugs in AlertsSyslogAppender

2014-09-23 Thread Anshul Gangwar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25934/
---

Review request for cloudstack, Devdeep Singh and Sateesh Chodapuneedi.


Bugs: CLOUDSTACK-7610, CLOUDSTACK-7611 and CLOUDSTACK-7612
https://issues.apache.org/jira/browse/CLOUDSTACK-7610
https://issues.apache.org/jira/browse/CLOUDSTACK-7611
https://issues.apache.org/jira/browse/CLOUDSTACK-7612


Repository: cloudstack-git


Description
---

Fixed the following bugs in AlertsSyslogAppender:
1. Added sync alert.
2. Unrecognised alerts are now sent as unknown instead of null.
3. Added unit tests to cover some more scenarios.


Diffs
-

  
plugins/alert-handlers/syslog-alerts/src/org/apache/cloudstack/syslog/AlertsSyslogAppender.java
 5f6e8ec 
  
plugins/alert-handlers/syslog-alerts/test/org/apache/cloudstack/syslog/AlertsSyslogAppenderTest.java
 5799348 

Diff: https://reviews.apache.org/r/25934/diff/


Testing
---

Added unit tests and all were passing fine


Thanks,

Anshul Gangwar



Build failed in Jenkins: simulator-singlerun #421

2014-09-23 Thread jenkins
See http://jenkins.buildacloud.org/job/simulator-singlerun/421/changes

Changes:

[sanjeev] Creating custom disk does not work as expected

[Daan Hoogland] CLOUDSTACK-6603 [Upgrade]DB Exception while Autoscale 
monitoring after upgrading from 4.3 to 4.4

--
[...truncated 8521 lines...]
hard linking marvin/cloudstackAPI/updateIsoPermissions.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateLBHealthCheckPolicy.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateLBStickinessPolicy.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateLoadBalancer.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateLoadBalancerRule.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateNetwork.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateNetworkACLItem.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateNetworkACLList.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateNetworkOffering.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateNetworkServiceProvider.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updatePhysicalNetwork.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updatePod.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updatePortForwardingRule.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateProject.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateProjectInvitation.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateRegion.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateRemoteAccessVpn.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateResourceCount.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateResourceLimit.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateServiceOffering.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateSnapshotPolicy.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateStorageNetworkIpRange.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateStoragePool.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateTemplate.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateTemplatePermissions.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateTrafficType.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateUser.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateVMAffinityGroup.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateVPC.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateVPCOffering.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateVirtualMachine.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateVolume.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateVpnConnection.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateVpnCustomerGateway.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateVpnGateway.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/updateZone.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/upgradeRouterTemplate.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/uploadCustomCertificate.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/uploadSslCert.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/cloudstackAPI/uploadVolume.py - 
Marvin-4.5.0-SNAPSHOT/marvin/cloudstackAPI
hard linking marvin/config/__init__.py - Marvin-4.5.0-SNAPSHOT/marvin/config
hard linking marvin/config/test_data.py - Marvin-4.5.0-SNAPSHOT/marvin/config
hard linking marvin/lib/__init__.py - Marvin-4.5.0-SNAPSHOT/marvin/lib
hard linking marvin/lib/base.py - Marvin-4.5.0-SNAPSHOT/marvin/lib
hard linking marvin/lib/common.py - Marvin-4.5.0-SNAPSHOT/marvin/lib
hard linking marvin/lib/utils.py - Marvin-4.5.0-SNAPSHOT/marvin/lib
hard linking marvin/sandbox/__init__.py - Marvin-4.5.0-SNAPSHOT/marvin/sandbox
hard linking marvin/sandbox/testSetupSuccess.py - 

conf values for 4.4 from master

2014-09-23 Thread Daan Hoogland
Sheng,

these two lines were added by you to the file schema-430to440.sql in
master but not in 4.4:

INSERT INTO `cloud`.`configuration`(category, instance, component, name,
value, description, default_value) VALUES ('Advanced', 'DEFAULT',
'NetworkOrchestrationService', 'router.redundant.vrrp.interval', '1',
'seconds between VRRP broadcast. It would 3 times broadcast fail to trigger
fail-over mechanism of redundant router', '1') ON DUPLICATE KEY UPDATE
category='Advanced';
INSERT INTO `cloud`.`configuration`(category, instance, component, name,
value, description, default_value) VALUES ('Advanced', 'DEFAULT',
'NetworkOrchestrationService', 'router.aggregation.command.each.timeout',
'3', 'timeout in seconds for each Virtual Router command being aggregated.
The final aggregation command timeout would be determined by this timeout *
commands counts ', '3') ON DUPLICATE KEY UPDATE category='Advanced';

Is it sensible to add them in 4.4 as well?

-- 
Daan


Build failed in Jenkins: simulator-singlerun #422

2014-09-23 Thread jenkins
See http://jenkins.buildacloud.org/job/simulator-singlerun/422/

--
[...truncated 8851 lines...]
 WARNING: Provided file does not exist: 
http://jenkins.buildacloud.org/job/simulator-singlerun/ws/developer/../utils/conf/db.properties.override
 Initializing database=simulator with host=localhost port=3306 
username=cloud password=cloud
 Running query: drop database if exists `simulator`
 Running query: create database `simulator`
 Running query: GRANT ALL ON simulator.* to 'cloud'@`localhost` 
identified by 'cloud'
 Running query: GRANT ALL ON simulator.* to 'cloud'@`%` identified 
by 'cloud'
 Processing SQL file at 
http://jenkins.buildacloud.org/job/simulator-singlerun/ws/developer/target/db/create-schema-simulator.sql
 Processing SQL file at 
http://jenkins.buildacloud.org/job/simulator-singlerun/ws/developer/target/db/templates.simulator.sql
 Processing SQL file at 
http://jenkins.buildacloud.org/job/simulator-singlerun/ws/developer/target/db/hypervisor_capabilities.simulator.sql
 Processing upgrade: com.cloud.upgrade.DatabaseUpgradeChecker
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
cloud-developer ---
[INFO] Installing 
http://jenkins.buildacloud.org/job/simulator-singlerun/ws/developer/pom.xml 
to 
/var/lib/jenkins/.m2/repository/org/apache/cloudstack/cloud-developer/4.5.0-SNAPSHOT/cloud-developer-4.5.0-SNAPSHOT.pom
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 33.948s
[INFO] Finished at: Tue Sep 23 05:53:44 EDT 2014
[INFO] Final Memory: 42M/202M
[INFO] 
[simulator-singlerun] $ /bin/bash -x /tmp/hudson4873913338587795182.sh
+ jps -l
+ grep -q Launcher
+ echo Killing leftover management servers
Killing leftover management servers
++ jps -l
++ awk '{print $1}'
++ grep Launcher
+ kill -KILL 11174
+ sleep 10
+ rm -f xunit.xml
+ rm -rf /tmp/MarvinLogs
+ echo Check for initialization of the management server
Check for initialization of the management server
+ COUNTER=0
+ SERVER_PID=18603
+ mvn -P systemvm,simulator -pl :cloud-client-ui jetty:run
+ '[' 0 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=1
+ '[' 1 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=2
+ '[' 2 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=3
+ '[' 3 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=4
+ '[' 4 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=5
+ '[' 5 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=6
+ '[' 6 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=7
+ '[' 7 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=8
+ '[' 8 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=9
+ '[' 9 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=10
+ '[' 10 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=11
+ '[' 11 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=12
+ '[' 12 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=13
+ '[' 13 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=14
+ '[' 14 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=15
+ '[' 15 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=16
+ '[' 16 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=17
+ '[' 17 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=18
+ '[' 18 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=19
+ '[' 19 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=20
+ '[' 20 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=21
+ '[' 21 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=22
+ '[' 22 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ 

Re: conf values for 4.4 from master

2014-09-23 Thread Daan Hoogland
I see one more of them:

INSERT INTO `cloud`.`configuration`(category, instance, component, name,
value, description, default_value) VALUES ('Advanced', 'DEFAULT',
'NetworkOrchestrationService', 'router.redundant.vrrp.interval', '1',
'seconds between VRRP broadcast. It would 3 times broadcast fail to trigger
fail-over mechanism of redundant router', '1') ON DUPLICATE KEY UPDATE
category='Advanced';


On Tue, Sep 23, 2014 at 11:38 AM, Daan Hoogland daan.hoogl...@gmail.com
wrote:

 Sheng,

 these two lines were added by you to the file schema-430to440.sql in
 master but not in 4.4:

 INSERT INTO `cloud`.`configuration`(category, instance, component, name,
 value, description, default_value) VALUES ('Advanced', 'DEFAULT',
 'NetworkOrchestrationService', 'router.redundant.vrrp.interval', '1',
 'seconds between VRRP broadcast. It would 3 times broadcast fail to trigger
 fail-over mechanism of redundant router', '1') ON DUPLICATE KEY UPDATE
 category='Advanced';
 INSERT INTO `cloud`.`configuration`(category, instance, component, name,
 value, description, default_value) VALUES ('Advanced', 'DEFAULT',
 'NetworkOrchestrationService', 'router.aggregation.command.each.timeout',
 '3', 'timeout in seconds for each Virtual Router command being aggregated.
 The final aggregation command timeout would be determined by this timeout *
 commands counts ', '3') ON DUPLICATE KEY UPDATE category='Advanced';

 Is it sensible to add them in 4.4 as well?

 --
 Daan




-- 
Daan


Re: Marvin Package?

2014-09-23 Thread Rohit Yadav
Hi David,

On 22-Sep-2014, at 4:54 am, David Nalley da...@gnsa.us wrote:
 Sharing things on pypi is a distribution channel for end users - if
 it's done by the project, it must be voted on and released. If it's
 done by an individual, they can post anything they want, but we'll
 have to insist that they don't use our trademarks (e.g. both Marvin
 and CloudStack will need to be removed from the package name)

For all the previously voted releases of CloudStack, can we still build marvin 
from those source releases and publish on pypi (keeping the marvin version same 
as CloudStack version to avoid confusion)?

Regards,
Rohit Yadav
Software Architect, ShapeBlue
M. +41 779015219 | rohit.ya...@shapeblue.com
Blog: bhaisaab.org | Twitter: @_bhaisaab





RE: Launch a VM (not from a template)

2014-09-23 Thread Adrian Lewis
I'm glad to see that there's a potential workaround for this. Perhaps this
should be somehow incorporated into the installation process to allow
smaller setups to be implemented on hypervisors with existing VMs. The
main benefit in my mind though is, as the OP suggests, DRaaS.

I think that having a registerVolume that would import the VM's metadata
and register it into CS management, including renaming to suit the
conventions, would be a great addition. As the OP has already mentioned,
this would also open up the possibility of using a number of backup and DR
tools already on the market with minimal effort to integrate with CS - it
would bring the RTO down significantly if the volumes were already in
primary storage, ready to be fired up at will.

This would certainly gain favour in scenarios where CS operators are
looking to run BDR as a service for clients (such as ourselves, the OP and
a growing number of managed service providers currently just offering
backup as a service but not DR). I'm pretty sure that vCloud Director has
a few commercial BDR software options that currently won’t work easily
with CS due to this, or at least the perception by software vendors that
it can't easily be done. Simplifying the process of importing existing VMs
would definitely be something that would increase takeup of CS in certain
use-cases and at the same time boost CS's public visibility through its
'ecosystem' of partners. If the 'product' is made as partner-friendly as
possible, they in turn help the marketing efforts. Vision's DoubleTake had
a CS integration but from what I can tell it's largely been forgotten, as
it is no longer supported with 4.3 or 4.4 and there doesn't seem to be a
roadmap for this update.

Adrian

-Original Message-
From: Nitin Mehta [mailto:nitin.me...@citrix.com]
Sent: 23 September 2014 07:15
To: dev@cloudstack.apache.org
Subject: Re: Launch a VM (not from a template)

Marcus - I think it's possible even now in CS. I don't think we need
another API.
1. You can create a VM using the deployvm API with the flag startvm=false.
This would create the VM and its corresponding resources' DB records without
actually creating them. You can give a dummy template for now as it still
needs a templateid (we should remove this dependency in the future).
2. You can then use updateVolume to update the volume with the storage pool
id and mark it ready.
3. Finally you use the start VM API to start the VM. It would create all the
resources and start the VM. Since the volume is already ready, it won't go
into creating the root volume.

Thanks,
-Nitin

On 22/09/14 10:08 PM, Marcus shadow...@gmail.com wrote:

So, we have thought about this a bit as well. Our solution, which we
haven't actually implemented yet, was to create a
registerVirtualMachine
api call that would be similar to deployVirtualMachine but accept a
rootdiskid and storage pool id. This would essentially enter a vm into
the db but just copy the rootdiskid into the root volume's space rather
than creating a new one. This would allow root disks to be created
outside
of cloudstack and then made known to cloudstack. It wouldn't be terribly
difficult to implement (the current deploy only creates a root disk if it
doesn't already exist so would be easy to short circuit), you would want
to
ensure the service offering matches the storage tags though.

A registerVolume would also be useful.
On Sep 22, 2014 9:18 PM, Will Stevens wstev...@cloudops.com wrote:

 ws: inline...

 Thanks for the response Mike.  :)


 *Will STEVENS*
 Lead Developer

 *CloudOps* *| *Cloud Solutions Experts
 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
 w cloudops.com *|* tw @CloudOps_

 On Mon, Sep 22, 2014 at 7:39 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

  If you did #1, how would you pick a relevant compute offering? You
would
  probably need to look first to make sure at least one existed that
could
  satisfy your requirement(s) and then make sure the resources could be
  marked in advance as if they were being consumed.
 

 ws: so in this case the original VM would be managed by CS and the 3rd
 party software would be backing it up and replicating it to other
 zones/regions.  technically, we should have all of the information we
need
 about the VM because CS will already know about the original VM which
this
 VM is getting spun up to replace.  essentially this would be used for
DR in
 a different DC.  if one DC goes down for some reason, this would
basically
 behave like a cold standby in a different DC.

 you did touch on a pipe dream of mine though (which I am still in the
 process of thinking through).  I want to be able to spin up a fresh CS
 install and then discover an existing xen pool and then configure CS to
 step in as an orchestration tool for the existing infra.  I am still
 thinking through how this would be possible, but this is outside the
scope
 of this specific problem, so I won't derail this topic.

 
  #2 might be easier.
 

 ws: I agree that this might 

Re: FTP connection tracking modules missing in VR after upgrade to 4.3.1; Which release will contain the fix?

2014-09-23 Thread Rohit Yadav
Hi France,

On 22-Sep-2014, at 1:09 pm, France mailingli...@isg.si wrote:
 Hi guys,

 I have upgraded from 4.1.1 to 4.3.1 over the weekend. Amongst other 
 regressions I have found this one:

 https://issues.apache.org/jira/browse/CLOUDSTACK-7517

 So it looks like it is already a well-known bug which already has a fix.

 What can _we_ do to get the fix for 4.3 releases on our production system?

I've cherry-picked it to the 4.3 branch; you may build from source to get this 
fix, or use the RPMs from the Jenkins build.

 Is anyone willing to build a RPM including this fix?
 I see it was fixed only in 4.5. Taking into account how small and how 
 important the fix is, maybe include it in the 4.4.1 release also, since it 
 is still not released?
 Also, why not add it along with other fixes to 4.3 and release 4.3.2?
 In my personal opinion, ACS has way too few bugfix releases, and lots of 
 people have to build their own packages because of that.

RPMs and DEBs from the latest 4.3 branch will be made available by Jenkins here:
http://jenkins.buildacloud.org/view/4.3

Regards,
Rohit Yadav
Software Architect, ShapeBlue
M. +41 779015219 | rohit.ya...@shapeblue.com
Blog: bhaisaab.org | Twitter: @_bhaisaab
http://shapeblue.com/cloudstack-software-engineering





Re: Review Request 25536: Adding new test case to verify the fix for issue Exception when attaching data disk to Rhel vm on vSphere

2014-09-23 Thread sanjeev n


 On Sept. 19, 2014, 6:03 a.m., SrikanteswaraRao Talluri wrote:
  test/integration/component/test_escalations_vmware.py, line 19
  https://reviews.apache.org/r/25536/diff/1/?file=685097#file685097line19
 
  can you please avoid import * ?

commit 5fb2b3a0d24c2bcda0745a6b8ff59fae5651e054
Author: sanjeev sanj...@apache.org
Date:   Wed Sep 10 11:55:26 2014 +0530

Test to verify fix for issue Exception when attaching data disk to RHEL vm 
on vSphere

Added Rhel6 template details to test_data.py

Signed-off-by: sanjeev sanj...@apache.org

Fixed review comments provided in RR 25536


- sanjeev


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25536/#review53939
---


On Sept. 11, 2014, 10:31 a.m., sanjeev n wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/25536/
 ---
 
 (Updated Sept. 11, 2014, 10:31 a.m.)
 
 
 Review request for cloudstack, Santhosh Edukulla and SrikanteswaraRao Talluri.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 @desc: Exception when attaching data disk to RHEL VM on vSphere
 Step1: Confirm that vmware.root.disk.controller = ide in Global Settings.
 Step2: Register RHEl 6.0 template and deploy a VM.
 Step3: Note that the root disk is attached to IDE.
 Step4: Create new DATA disk and attempt to attach it to the VM.
 Verify that step4 succeeds without any exception
 
 
 Diffs
 -
 
   test/integration/component/test_escalations_vmware.py PRE-CREATION 
   tools/marvin/marvin/config/test_data.py 4133aba 
 
 Diff: https://reviews.apache.org/r/25536/diff/
 
 
 Testing
 ---
 
 Yes
 
 
 Thanks,
 
 sanjeev n
 




Re: CloudStack Docker and Mesos Support

2014-09-23 Thread sebgoa

On Sep 17, 2014, at 12:28 PM, Sebastien Goasguen run...@gmail.com wrote:

 
 On Sep 16, 2014, at 7:20 PM, ilya musayev ilya.mailing.li...@gmail.com 
 wrote:
 
 Hi all,
 
 Would you know where we stand with Mesos and Docker?
 
 
 That's a big question.
 
 Mesos is a resource allocator that multiple frameworks can use to run 
 workloads of various sorts.
 The interest is to mix workloads: big data, long running services, parallel 
 computing, docker in order to maximize utilization of your resources.
 
 For instance Aurora (mesos framework) can execute long running services 
 within docker containers.
 
 The challenge with docker is the coordination of multiple containers. 
 Kubernetes for example coordinate docker containers to run HA applications.
 
 What we see (IMHO) is things like Kubernetes being deployed in the cloud 
 (GCE, Azure, Rackspace are currently supported in Kubernetes). And at 
 MesosCon, there was a small demo of running Kubernetes as a Mesos framework.
 
 So…bottom line for me is that I see Mesos and everything on top as a workload 
 that can be run in CloudStack. Similar thing with CoreOS. If a CloudStack 
 cloud makes CoreOS templates available, then users can start a CoreOS cluster 
 and manage Docker straight up or via Kubernetes (because of course there is 
 CoreOS support in Kubernetes).
 
 Hence, there is nothing to do, except for CloudStack clouds to show that they 
 can offer Mesos* or Kubernetes* on demand.
 
 However, if we were to re-architect CloudStack entirely, we could use Mesos as 
 a base resource allocator and write a VM framework. The framework would ask 
 Mesos for hypervisors and, once they were allocated, CloudStack would start 
 them…etc. The issue would still be in the networking. The advantage is that a 
 user could run a Mesos cluster and mix workloads: CloudStack + Big Data + 
 docker….
 
 Anything we can do to make CoreOS CloudStack-able and create a cloudstack 
 driver in Kubernetes would be really nice.
 
 Thanks
 ilya
 

I did not mean to kill this discussion….so to get pardoned I wrote this:

https://github.com/runseb/kubernetes-exoscale

-sebastien




[ERROR] Failed create template from snapshot The uuid you supplied was invalid.

2014-09-23 Thread raja sekhar
Hi All,

I have upgraded ACS 4.3 to 4.4.0. From that time onwards the root volume
snapshots are not being taken properly; sometimes the snapshots are taken and
sometimes they get stuck in the ERROR state. Whenever I want to create a
template from a backed-up snapshot I'm getting the error Failed create
template from snapshot
The uuid you supplied was invalid.

The log file shows,

2014-09-23 18:55:54,197 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
Timer:ctx-5e74939f) Acquiring hosts for clusters already owned by this
management server
2014-09-23 18:55:54,199 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
Timer:ctx-5e74939f) Completed acquiring hosts for clusters already owned by
this management server
2014-09-23 18:55:54,199 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
Timer:ctx-5e74939f) Acquiring hosts for clusters not owned by any
management server
2014-09-23 18:55:54,200 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
Timer:ctx-5e74939f) Completed acquiring hosts for clusters not owned by any
management server
2014-09-23 18:56:02,808 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-2:null) SeqA 23-69610: Processing Seq 23-69610:  {
Cmd , MgmtId: -1, via: 23, Ver: v1, Flags: 11,
[{com.cloud.agent.api.ConsoleProxyLoadReportCommand:{_proxyVmId:155,_loadInfo:{\n
 \connections\: []\n},wait:0}}] }
2014-09-23 18:56:02,813 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-2:null) SeqA 23-69610: Sending Seq 23-69610:  { Ans:
, MgmtId: 52244012385, via: 23, Ver: v1, Flags: 100010,
[{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
2014-09-23 18:56:07,809 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-4:null) SeqA 23-69611: Processing Seq 23-69611:  {
Cmd , MgmtId: -1, via: 23, Ver: v1, Flags: 11,
[{com.cloud.agent.api.ConsoleProxyLoadReportCommand:{_proxyVmId:155,_loadInfo:{\n
 \connections\: []\n},wait:0}}] }
2014-09-23 18:56:07,814 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-4:null) SeqA 23-69611: Sending Seq 23-69611:  { Ans:
, MgmtId: 52244012385, via: 23, Ver: v1, Flags: 100010,
[{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
2014-09-23 18:56:10,833 DEBUG [c.c.c.ConsoleProxyManagerImpl]
(consoleproxy-1:ctx-120d1e42) Zone 1 is ready to launch console proxy
2014-09-23 18:56:10,949 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
(secstorage-1:ctx-2b827d35) Zone 1 is ready to launch secondary storage VM
2014-09-23 18:56:16,215 DEBUG [c.c.a.ApiServlet]
(catalina-exec-2:ctx-1b03b697) ===START===  10.0.1.100 -- GET
 
command=createTemplateresponse=jsonsessionkey=bKgNoaskvKmQI0pYS%2F9DuT4raSA%3Dsnapshotid=907fc0ec-d782-4407-8560-83e335f5c79ename=testtmpdisplayText=testtmposTypeId=d84de96a-1495-11e4-ba8d-000c29a411ceisPublic=falsepasswordEnabled=falseisdynamicallyscalable=false_=1411524124265
2014-09-23 18:56:16,246 DEBUG [c.c.t.TemplateManagerImpl]
(catalina-exec-2:ctx-1b03b697 ctx-11184fc3) This template is getting
created from other template, setting source template Id to: 5
2014-09-23 18:56:16,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(catalina-exec-2:ctx-1b03b697 ctx-11184fc3) submit async job-4429, details:
AsyncJobVO {id:4429, userId: 2, accountId: 2, instanceType: Template,
instanceId: 230, cmd:
org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin,
cmdInfo:
{sessionkey:bKgNoaskvKmQI0pYS/9DuT4raSA\u003d,cmdEventType:TEMPLATE.CREATE,ctxUserId:2,httpmethod:GET,osTypeId:d84de96a-1495-11e4-ba8d-000c29a411ce,isPublic:false,isdynamicallyscalable:false,response:json,id:230,ctxDetails:{\com.cloud.storage.Snapshot\:\907fc0ec-d782-4407-8560-83e335f5c79e\,\com.cloud.template.VirtualMachineTemplate\:\1c9dff2f-22e9-4349-9bee-3c80d0bb4eea\,\com.cloud.storage.GuestOS\:\d84de96a-1495-11e4-ba8d-000c29a411ce\},displayText:testtmp,snapshotid:907fc0ec-d782-4407-8560-83e335f5c79e,passwordEnabled:false,name:testtmp,_:1411524124265,uuid:1c9dff2f-22e9-4349-9bee-3c80d0bb4eea,ctxAccountId:2,ctxStartEventId:16996},
cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
result: null, initMsid: 52244012385, completeMsid: null, lastUpdated: null,
lastPolled: null, created: null}
2014-09-23 18:56:16,293 INFO  [o.a.c.f.j.i.AsyncJobMonitor]
(API-Job-Executor-65:ctx-5c66f5fd job-4429) Add job-4429 into job monitoring
2014-09-23 18:56:16,293 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-65:ctx-5c66f5fd job-4429) Executing AsyncJobVO {id:4429,
userId: 2, accountId: 2, instanceType: Template, instanceId: 230, cmd:
org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin,
cmdInfo:

[GitHub] cloudstack pull request: VPC and Virtual Network Managers refactor...

2014-09-23 Thread wilderrodrigues
Github user wilderrodrigues commented on the pull request:

https://github.com/apache/cloudstack/pull/19#issuecomment-56522671
  
Hi @bhaisaab,

I fixed the DHCP problem, tested all Advanced and Basic Marvin tests 
against the simulator and my XenServer environment. The commit has been added 
to the pull request.

Could you please have a look and let me know?

Once I get a GO from you, I will do the rebase and let Travis do its magic.

Cheers,
Wilder


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request: VPC and Virtual Network Managers refactor...

2014-09-23 Thread bhaisaab
Github user bhaisaab commented on the pull request:

https://github.com/apache/cloudstack/pull/19#issuecomment-56533161
  
Hi @wilderrodrigues, I'll get back to you soon on this. Meanwhile, just to 
reconfirm, have you tested it for both basic and advanced networks for Xen? If 
so, I'll just do it for KVM.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Sorry for the Spam.

2014-09-23 Thread David Bierce
Sorry for the spam.  Apple mail decided apache dev list was the destination for 
our internal mailing list.  It has been fixed.


Thanks,
David Bierce
Senior System Administrator  | Appcore

Office +1.800.735.7104
Direct +1.515.612.7801 
www.appcore.com



[GitHub] cloudstack pull request: VPC and Virtual Network Managers refactor...

2014-09-23 Thread bhaisaab
Github user bhaisaab commented on the pull request:

https://github.com/apache/cloudstack/pull/19#issuecomment-56536147
  
Yeah, please do it for Xen and see if it works for the basic VM life cycle. 
I have some $dayjob stuff to do, so I may not be able to test your PR with 
KVM this week.




[GitHub] cloudstack pull request: VPC and Virtual Network Managers refactor...

2014-09-23 Thread wilderrodrigues
Github user wilderrodrigues commented on the pull request:

https://github.com/apache/cloudstack/pull/19#issuecomment-56535474
  
Hi Rohit,

For Xen I did only advanced. With the simulator, both.

I can do basic for Xen if you need some help.

Cheers,
Wilder

Sent from my iPhone

On 23 Sep 2014, at 16:58, Rohit Yadav notificati...@github.com wrote:


Hi @wilderrodrigues (https://github.com/wilderrodrigues), I'll get back to 
you soon on this. Meanwhile, just to reconfirm: have you tested it for both 
basic and advanced networks on Xen? If so, I'll just do it for KVM.

—
Reply to this email directly or view it on GitHub: 
https://github.com/apache/cloudstack/pull/19#issuecomment-56533161




Re: Marvin Package?

2014-09-23 Thread David Nalley
On Tue, Sep 23, 2014 at 7:26 AM, Rohit Yadav rohit.ya...@shapeblue.com wrote:
 Hi David,

 On 22-Sep-2014, at 4:54 am, David Nalley da...@gnsa.us wrote:
 Sharing things on PyPI is a distribution channel for end users - if
 it's done by the project, it must be voted on and released. If it's
 done by an individual, they can post anything they want, but we'll
 have to insist that they don't use our trademarks (e.g. both Marvin
 and CloudStack will need to be removed from the package name)

 For all the previously voted releases of CloudStack, can we still build 
 Marvin from those source releases and publish it on PyPI (keeping the Marvin 
 version the same as the CloudStack version, to avoid confusion)?


Yes. If it's been voted on successfully that can be published.

--David
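
A minimal sketch of doing exactly that, assuming the tools/marvin layout of
the 4.x source tree; the Maven profile and module names below are taken from
that tree as I recall them and should be verified against the release:

mvn -P developer -pl :cloud-marvin     # builds tools/marvin/dist/Marvin-*.tar.gz
cd tools/marvin
python setup.py sdist                  # rebuild the source distribution into dist/
twine upload dist/*                    # publish; PyPI project name and credentials assumed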


Cloudstack Mirror

2014-09-23 Thread David Bierce
We now have a local mirror of CloudStack.

http://mirror.appcore.com/cloudstack/

It contains both the signed CloudStack RPMs and DEBs:

/rhel holds the RPM repo for YUM
/ubuntu holds the DEB repo for APT

It also mirrors all the system VM templates, for all versions, in /systemvms



Thanks,
David Bierce
Senior System Administrator  | Appcore

Office +1.800.735.7104
Direct +1.515.612.7801 
www.appcore.com
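
Pointing a machine at the mirror might look like the sketch below; the exact
version directories under /rhel and /ubuntu are assumptions, so check the
mirror layout first:

# YUM (RHEL/CentOS); the 4.4 path segment is an assumption
cat > /etc/yum.repos.d/cloudstack.repo << EOF
[cloudstack]
name=CloudStack (Appcore mirror)
baseurl=http://mirror.appcore.com/cloudstack/rhel/4.4/
enabled=1
gpgcheck=0
EOF

# APT (Ubuntu); distribution and component names are assumptions
echo "deb http://mirror.appcore.com/cloudstack/ubuntu precise 4.4" \
  > /etc/apt/sources.list.d/cloudstack.list
apt-get update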



Jenkins build is back to normal : simulator-singlerun #423

2014-09-23 Thread jenkins
See http://jenkins.buildacloud.org/job/simulator-singlerun/423/changes



RE: baremetal in master and 4.4

2014-09-23 Thread Frank Zhang
Thanks, Daan. When I wrote this schema, schema-430to440.sql was the latest one 
I could find. I have moved it to schema-441to450.sql.

From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
Sent: Tuesday, September 23, 2014 2:00 AM
To: Frank Zhang
Cc: dev
Subject: baremetal in master and 4.4

Frank,
Can you have a look at commit 781ad96b04c9030fc1c0a0401145414e7f3978aa? It is 
only on master, not on 4.4, but it contains a change to schema-430to440.sql. 
I'd say the easiest remedy is to revert it and recreate the db change in 
another db upgrade file.


--
Daan
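
A concrete version of that remedy, as a sketch (verify first which branches
actually contain the commit):

git branch -a --contains 781ad96b04c9030fc1c0a0401145414e7f3978aa   # confirm it is master-only
git checkout master
git revert 781ad96b04c9030fc1c0a0401145414e7f3978aa                 # back out the schema change
# then re-apply the DDL in the next upgrade file (e.g. schema-441to450.sql)
# as a separate commit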


Re: conf values for 4.4 from master

2014-09-23 Thread Sheng Yang
Hi Daan,

These have already been added to 4.4's schema-430to440.sql (and your second
mail is the same as the first case in your first mail...)

dcb0db6084 (Sheng Yang  2014-04-16 20:13:16 -0700 1660) INSERT INTO
`cloud`.`configuration`(category, instance, component, name, value,
description, default_value) VALUES ('Advanced', 'DEFAULT',
'NetworkOrchestrationService', 'router.r

3578c7137f (Sheng Yang  2014-04-18 23:27:12 -0700 1669)
3578c7137f (Sheng Yang  2014-04-18 23:27:12 -0700 1670) INSERT INTO
`cloud`.`configuration`(category, instance, component, name, value,
description, default_value) VALUES ('Advanced', 'DEFAULT',
'NetworkOrchestrationService', 'router.a

--Sheng

On Tue, Sep 23, 2014 at 3:05 AM, Daan Hoogland daan.hoogl...@gmail.com
wrote:

 I see one more of them:

 INSERT INTO `cloud`.`configuration`(category, instance, component, name,
 value, description, default_value) VALUES ('Advanced', 'DEFAULT',
 'NetworkOrchestrationService', 'router.redundant.vrrp.interval', '1',
 'seconds between VRRP broadcast. It would 3 times broadcast fail to trigger
 fail-over mechanism of redundant router', '1') ON DUPLICATE KEY UPDATE
 category='Advanced';


 On Tue, Sep 23, 2014 at 11:38 AM, Daan Hoogland daan.hoogl...@gmail.com
 wrote:

  Sheng,
 
  these two lines were added by you to the file schema-430to440.sql in
  master but not in 4.4
 
  INSERT INTO `cloud`.`configuration`(category, instance, component, name,
  value, description, default_value) VALUES ('Advanced', 'DEFAULT',
  'NetworkOrchestrationService', 'router.redundant.vrrp.interval', '1',
  'seconds between VRRP broadcast. It would 3 times broadcast fail to
 trigger
  fail-over mechanism of redundant router', '1') ON DUPLICATE KEY UPDATE
  category='Advanced';
  INSERT INTO `cloud`.`configuration`(category, instance, component, name,
  value, description, default_value) VALUES ('Advanced', 'DEFAULT',
  'NetworkOrchestrationService', 'router.aggregation.command.each.timeout',
  '3', 'timeout in seconds for each Virtual Router command being
 aggregated.
  The final aggregation command timeout would be determined by this
 timeout *
  commands counts ', '3') ON DUPLICATE KEY UPDATE category='Advanced';
 
  Is it sensible to add them in 4.4 as well?
 
  --
  Daan
 



 --
 Daan
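
One quick way to check whether a given key ever landed on a branch is git's
pickaxe; the schema file path below is the usual location in the source tree
and is an assumption here:

git log -S 'router.redundant.vrrp.interval' --oneline origin/4.4 -- \
    setup/db/db/schema-430to440.sql
# no output means the INSERT never landed on the 4.4 branch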



Re: [01/50] git commit: updated refs/heads/master to 1290e10

2014-09-23 Thread Hugo Trippaers
Hey David,

This is one of the requests that came in using the "new GitHub pull request" 
thing. The big advantage is that we leverage the nice features of GitHub. Part 
of doing it that way means we keep the history of the original developer 
intact; with Review Board we typically get one squashed commit with the entire 
change. This one is an entire commit history working up to a rewrite of the 
build scripts for the systemvm. I'm not exactly sure if that is what we want 
from GitHub yet, but let's see what we all think about it. Procedurally this 
is the same as merging something in from Review Board after review.

As for the content, I'm pretty biased, as it is part of the ongoing project to 
introduce the redundant VPC router. Leo did a great job rewriting the build 
scripts for the systemvms: not changing any functionality, but mainly making 
the code and scripts more accessible. I'm thinking of this as a change to 
packaging more than a merge of new features. There are changes pending that 
will change functionality, but those are planned for after 4.5 happens.

Cheers,

Hugo
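
The mechanical difference between the two approaches, sketched with GitHub's
pull/N/head refspec (branch and commit-message names are placeholders):

# GitHub PR style: preserve the contributor's individual commits
git fetch https://github.com/apache/cloudstack.git pull/16/head:pr-16
git checkout master
git merge --no-ff pr-16

# Review Board style: one squashed commit carrying the whole change
git merge --squash pr-16
git commit -m "CLOUDSTACK-7143: refactor systemvm build scripts"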



On 23 Sep 2014, at 20:31, David Nalley da...@gnsa.us wrote:

 Where was the merge request for this huge merge to master? (It was at
 50 commit emails when it stopped sending.)
 We have passed feature freeze for 4.5.0, so I am confused as to why this
 was merged. Is there a reason not to revert all of this?
 
 --David
 
 On Mon, Sep 22, 2014 at 3:44 PM,  bhais...@apache.org wrote:
 Repository: cloudstack
 Updated Branches:
  refs/heads/master a6ee4112a -> 1290e1010
 
 
 CLOUDSTACK-7143: move fix_acpid to its own file
 
 
 Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
 Commit: http://git-wip-us.apache.org/repos/asf/cloudstack/commit/5627b67f
 Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/5627b67f
 Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/5627b67f
 
 Branch: refs/heads/master
 Commit: 5627b67ff3a6af70949ee1622b3e5a572d39a0b7
 Parents: 6a688a0
 Author: Leo Simons lsim...@schubergphilis.com
 Authored: Mon Jul 21 11:19:03 2014 +0200
 Committer: Rohit Yadav rohit.ya...@shapeblue.com
 Committed: Mon Sep 22 21:31:35 2014 +0200
 
 --
 .../definitions/systemvmtemplate/configure_acpid.sh  | 15 +++
 .../definitions/systemvmtemplate/definition.rb   |  1 +
 .../definitions/systemvmtemplate/postinstall.sh  | 15 ---
 3 files changed, 16 insertions(+), 15 deletions(-)
 --
 
 
 http://git-wip-us.apache.org/repos/asf/cloudstack/blob/5627b67f/tools/appliance/definitions/systemvmtemplate/configure_acpid.sh
 --
 diff --git a/tools/appliance/definitions/systemvmtemplate/configure_acpid.sh 
 b/tools/appliance/definitions/systemvmtemplate/configure_acpid.sh
 new file mode 100644
 index 000..70abe30
 --- /dev/null
 +++ b/tools/appliance/definitions/systemvmtemplate/configure_acpid.sh
 @@ -0,0 +1,15 @@
 +fix_acpid() {
 +  # Fix acpid
 +  mkdir -p /etc/acpi/events
 +  cat > /etc/acpi/events/power << EOF
 +event=button/power.*
 +action=/usr/local/sbin/power.sh %e
 +EOF
 +  cat > /usr/local/sbin/power.sh << EOF
 +#!/bin/bash
 +/sbin/poweroff
 +EOF
 +  chmod a+x /usr/local/sbin/power.sh
 +}
 +
 +fix_acpid
 
 http://git-wip-us.apache.org/repos/asf/cloudstack/blob/5627b67f/tools/appliance/definitions/systemvmtemplate/definition.rb
 --
 diff --git a/tools/appliance/definitions/systemvmtemplate/definition.rb 
 b/tools/appliance/definitions/systemvmtemplate/definition.rb
 index be0b403..a2eb82b 100644
 --- a/tools/appliance/definitions/systemvmtemplate/definition.rb
 +++ b/tools/appliance/definitions/systemvmtemplate/definition.rb
 @@ -63,6 +63,7 @@ config = {
 'configure_locale.sh',
 'configure_login.sh',
 'postinstall.sh',
 +'configure_acpid.sh',
 'cleanup.sh',
 'configure_networking.sh',
 'zerodisk.sh'
 
 http://git-wip-us.apache.org/repos/asf/cloudstack/blob/5627b67f/tools/appliance/definitions/systemvmtemplate/postinstall.sh
 --
 diff --git a/tools/appliance/definitions/systemvmtemplate/postinstall.sh 
 b/tools/appliance/definitions/systemvmtemplate/postinstall.sh
 index 893b521..f2ce1ae 100644
 --- a/tools/appliance/definitions/systemvmtemplate/postinstall.sh
 +++ b/tools/appliance/definitions/systemvmtemplate/postinstall.sh
 @@ -116,20 +116,6 @@ nameserver 8.8.4.4
 EOF
 }
 
 -fix_acpid() {
 -  # Fix acpid
 -  mkdir -p /etc/acpi/events
 -  cat > /etc/acpi/events/power << EOF
 -event=button/power.*
 -action=/usr/local/sbin/power.sh %e
 -EOF
 -  cat > /usr/local/sbin/power.sh << EOF
 -#!/bin/bash
 -/sbin/poweroff
 -EOF
 -  chmod a+x /usr/local/sbin/power.sh
 -}
 -
 fix_hostname() {
   # Fix 

Re: [01/50] git commit: updated refs/heads/master to 1290e10

2014-09-23 Thread Rohit Yadav
Hi David,

On 23-Sep-2014, at 8:31 pm, David Nalley da...@gnsa.us wrote:
 Where was the merge request for this huge merge to master? (It was at
 50 commit emails when it stopped sending.)
 We have passed feature freeze for 4.5.0, so I am confused as to why this
 was merged. Is there a reason not to revert all of this?

This was the request: https://github.com/apache/cloudstack/pull/16
And JIRA: https://issues.apache.org/jira/browse/CLOUDSTACK-7143
We all get emails from Github PR (just re-checked) so you may find them on the 
ML.

I was reviewing this GitHub pull request, which came in a couple of weeks ago. 
It is well tested and successfully refactors the way we build systemvms. It 
was only merged once it had passed the build tests and painful manual QA 
(which I did on KVM). Just to mention: this is not my code or $dayjob work, I 
was just trying to help out with a contribution from community member(s).

Pardon my ignorance, but I was not aware of the official status of 
master/4.5.0 and that we are already past feature freeze. Can you confirm the 
official status of master/4.5.0, when we plan to cut the release branch, and 
whether we are changing anything about the way we do releases?

I see a lot of changes come in every other day on master. The pattern I see is 
that developers check in a barely working feature and then find excuses to 
check in more patches/commits as bug fixes, when they are actually trying to 
complete an incomplete feature. I think this is a chronic issue which should 
be kept in check: everyone should make an effort to work in a feature branch, 
work on their git-foo, rebase and fix conflicts, and then send a merge 
request, a GitHub pull request, or a Review Board request. I think this would 
allow all of us to participate in ongoing efforts and would help improve 
their quality through QA and code review.
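
As a sketch, the suggested workflow (branch names are placeholders):

git checkout -b feature/my-change origin/master   # develop in a feature branch
# ...commit work...
git fetch origin
git rebase origin/master                          # resolve conflicts locally
git push origin feature/my-change                 # then send a [MERGE] request,
                                                  # GitHub PR, or Review Board entry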

Regards,
Rohit Yadav
Software Architect, ShapeBlue
M. +41 779015219 | rohit.ya...@shapeblue.com
Blog: bhaisaab.org | Twitter: @_bhaisaab



[GitHub] cloudstack pull request: CLOUDSTACK-7143: Refactoring of the syste...

2014-09-23 Thread lsimons
Github user lsimons commented on the pull request:

https://github.com/apache/cloudstack/pull/16#issuecomment-56589622
  
Thanks for all the help, Rohit! I'm sure that to most people some bash script 
rearrangement looks boring, but never mind that; to me it just feels good to 
submit more than simple one-line patches to an Apache project after such a 
long time :-)




Re: [01/50] git commit: updated refs/heads/master to 1290e10

2014-09-23 Thread David Nalley
On Tue, Sep 23, 2014 at 4:44 PM, Rohit Yadav rohit.ya...@shapeblue.com wrote:
 Hi David,

 On 23-Sep-2014, at 8:31 pm, David Nalley da...@gnsa.us wrote:
 Where was the merge request for this huge merge to master? (It was at
 50 commit emails when it stopped sending.)
 We have passed feature freeze for 4.5.0, so I am confused as to why this
 was merged. Is there a reason not to revert all of this?

 This was the request: https://github.com/apache/cloudstack/pull/16
 And JIRA: https://issues.apache.org/jira/browse/CLOUDSTACK-7143
 We all get emails from Github PR (just re-checked) so you may find them on 
 the ML.


Yes, GH PRs are exactly like the Review Board emails in this particular
aspect. My question is why this was merged into master rather than a
feature branch, and why there was no [MERGE] email as per:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Branch+Merge+Expectations

 I was reviewing this GitHub pull request, which came in a couple of weeks 
 ago. It is well tested and successfully refactors the way we build systemvms. 
 It was only merged once it had passed the build tests and painful manual QA 
 (which I did on KVM). Just to mention: this is not my code or $dayjob work, I 
 was just trying to help out with a contribution from community member(s).

 Pardon my ignorance, but I was not aware of the official status of 
 master/4.5.0 and that we are already past feature freeze. Can you confirm the 
 official status of master/4.5.0, when we plan to cut the release branch, and 
 whether we are changing anything about the way we do releases?


See:
http://markmail.org/message/jctcystkzv4sqrvz

Based on that thread, we decided not to branch 4.5.0 and instead keep
master = 4.5.0 until much later in the schedule.
Hence merging features into master is breaking feature freeze.

 I see a lot of changes come in every other day on master. The pattern I see 
 is that developers check in a barely working feature and then find excuses 
 to check in more patches/commits as bug fixes, when they are actually trying 
 to complete an incomplete feature. I think this is a chronic issue which 
 should be kept in check: everyone should make an effort to work in a feature 
 branch, work on their git-foo, rebase and fix conflicts, and then send a 
 merge request, a GitHub pull request, or a Review Board request. I think 
 this would allow all of us to participate in ongoing efforts and would help 
 improve their quality through QA and code review.


This is a separate issue, but I tend to agree with your analysis of
how things happen. If you see this happening, though, please speak up.
As a committer you have veto power. Speaking frankly, we all have to
be on guard for quality issues.

--David


Re: [01/50] git commit: updated refs/heads/master to 1290e10

2014-09-23 Thread Rohit Yadav
Hi David,

On 23-Sep-2014, at 11:50 pm, David Nalley da...@gnsa.us wrote:
 Yes, GH PR is exactly like the Review Board emails in this particular
 aspect. My question is why is this merged into master rather than a
 feature branch, and why no [MERGE] email as per:
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Branch+Merge+Expectations

Ah, my bad; I did not consider the process around it. There was an initial 
email about merging this to master, but it started with "rfc:" rather than 
"[MERGE]":
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201407.mbox/%3C7A6CF878-7A28-4D4A-BCD2-0C264F8C90B7%40schubergphilis.com%3E

I also forgot that this process should apply to merges from Review Board and 
GitHub PRs as well. That's completely my fault for not enforcing the branch 
merge process on this PR. The only thing I can offer now is to make sure that 
this work won't break the build or anything else on master/4.5.0.

Regards,
Rohit Yadav
Software Architect, ShapeBlue
M. +41 779015219 | rohit.ya...@shapeblue.com
Blog: bhaisaab.org | Twitter: @_bhaisaab



[GitHub] cloudstack pull request: VPC and Virtual Network Managers refactor...

2014-09-23 Thread bhaisaab
Github user bhaisaab commented on the pull request:

https://github.com/apache/cloudstack/pull/19#issuecomment-56598885
  
Hi, I was recently reminded about the branch merging process. Once you're 
confident with your testing on Xen/KVM, please send a merge request on the 
dev ML and follow: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Branch+Merge+Expectations

Hope this helps.




SystemVM file name changes

2014-09-23 Thread Ian Duffy
Hi All,

I noticed that the filenames of the generated system VM templates changed on
http://jenkins.buildacloud.org/job/build-systemvm64-master/

They now include a [0-9]* build number before the hypervisor name.

Why do we do this? It's annoying for external resources that link to the last
successful build.
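
For context, external consumers typically link through Jenkins'
lastSuccessfulBuild alias, which breaks when the artifact name embeds the
build number; the artifact path below is illustrative only, not the job's
actual layout:

wget http://jenkins.buildacloud.org/job/build-systemvm64-master/lastSuccessfulBuild/artifact/systemvm64template-master-xen.vhd.bz2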


Build failed in Jenkins: simulator-singlerun #428

2014-09-23 Thread jenkins
See http://jenkins.buildacloud.org/job/simulator-singlerun/428/changes

Changes:

[anthony.xu] removed unused class

[sheng.yang] CLOUDSTACK-7436: Fix automation test on RvR status detection

--
[...truncated 8872 lines...]
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 21.787s
[INFO] Finished at: Tue Sep 23 20:34:19 EDT 2014
[INFO] Final Memory: 40M/204M
[INFO] 
[simulator-singlerun] $ /bin/bash -x /tmp/hudson9208507046767249637.sh
+ jps -l
+ grep -q Launcher
+ rm -f xunit.xml
+ rm -rf /tmp/MarvinLogs
+ echo Check for initialization of the management server
Check for initialization of the management server
+ COUNTER=0
+ SERVER_PID=31411
+ mvn -P systemvm,simulator -pl :cloud-client-ui jetty:run
+ '[' 0 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=1
[... identical grep/sleep iterations repeat, COUNTER incrementing through 37, where the log truncates ...]
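
For readability, the loop driving the bash -x trace above amounts to the
following reconstruction:

# wait up to 44 * 5 = 220 seconds for the management server to come up
COUNTER=0
while [ $COUNTER -lt 44 ]; do
    grep -q 'Management server node 127.0.0.1 is up' jetty-output.out && break
    sleep 5
    COUNTER=$((COUNTER + 1))
done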

[ACS441] merge request: hotfix/4.4-7574

2014-09-23 Thread Pierre-Luc Dion
Hi Daan,

Can you merge this branch into 4.4? I've tested creation of Windows 8.1,
2012 R2 and CentOS 6.5 on XenServer.

Thanks,

Pierre-Luc


Re: [ACS441] merge request: hotfix/4.4-7574

2014-09-23 Thread Amogh Vasekar
Hi,

Do we claim support for these OSes on VMware too?

Amogh

On 9/23/14 5:44 PM, Pierre-Luc Dion pd...@cloudops.com wrote:

Hi Daan,

Can you merge this branch into 4.4? I've tested creation of Windows 8.1,
2012 R2 and CentOS 6.5 on XenServer.

Thanks,

Pierre-Luc



Build failed in Jenkins: simulator-singlerun #429

2014-09-23 Thread jenkins
See http://jenkins.buildacloud.org/job/simulator-singlerun/429/changes

Changes:

[pdion891] CLOUDSTACK-7574, CREATE TABLE cloud.baremetal_rct

[pdion891] remove table baremetal_rct crate from schema-440to441.sql,already in 
schema-441to450.sql

[anthony.xu] throw timeout exception when lock acquire times out

--
[...truncated 8880 lines...]
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 22.732s
[INFO] Finished at: Tue Sep 23 22:10:06 EDT 2014
[INFO] Final Memory: 41M/176M
[INFO] 
[simulator-singlerun] $ /bin/bash -x /tmp/hudson794521949021670983.sh
+ grep -q Launcher
+ jps -l
+ rm -f xunit.xml
+ rm -rf /tmp/MarvinLogs
+ echo Check for initialization of the management server
Check for initialization of the management server
+ COUNTER=0
+ SERVER_PID=13106
+ mvn -P systemvm,simulator -pl :cloud-client-ui jetty:run
+ '[' 0 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=1
[... identical grep/sleep iterations repeat, COUNTER incrementing through 36, where the log truncates ...]

Re: [ACS441] merge request: hotfix/4.4-7574

2014-09-23 Thread Pierre-Luc Dion
I guest so, but I don't know the OS name to define in VMware :-S  I don't
have a VMware system in hand.


*Pierre-Luc DION*
Architecte de Solution Cloud | Cloud Solutions Architect
t 855.652.5683

*CloudOps* Votre partenaire infonuagique* | *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_


On Tue, Sep 23, 2014 at 9:22 PM, Amogh Vasekar amogh.vase...@citrix.com
wrote:

 Hi,

 Do we claim support for these OSes on VMware too?

 Amogh

 On 9/23/14 5:44 PM, Pierre-Luc Dion pd...@cloudops.com wrote:

 Hi Daan,
 
 Can you merge this branch into 4.4? I've tested creation of Windows 8.1,
 2012 R2 and CentOS 6.5 on XenServer.
 
 Thanks,
 
 Pierre-Luc