[NOTICE] Meeting with Accelerite Leadership

2017-07-10 Thread ilya musayev
Dear CloudStackers,

Last week, John Kinsella and I were supposed to meet with the Accelerite
leadership team. Unfortunately, John could not make it, so I attended alone.

We discussed ways we can improve community collaboration and leverage
Accelerite's resources to align and drive a larger community agenda, including
an extended roadmap.

Many topics were mentioned; below is a summary of our discussion. I
will list items in the order I consider most important.


---
1) A proposal was made to have a quarterly call (or more often as needed)
with all interested parties to discuss:
    Upcoming features you are developing (to avoid collisions and maintain
    the roadmap)
    Blockers that are impacting release and adoption
    Other topics

The length of the call would be 90 minutes, and each party will get a fair
amount of time. The agenda will be collated and presented prior to the
call, with a link to the FS on Confluence and the time allotted for each
topic.

Minutes will be taken and posted to the dev list. If there are issues or
suggestions, we will note them down in a few sentences, identify the
interested parties, and have them follow up with a discussion on the mailing list.

The proposed date and time: Thursday, August 17th, 9 AM PT

--
2) Accelerite is considering funding a position for a person who will
work within the community as a community manager: helping organize and
facilitate discussions, making sure Confluence and JIRA are up to date,
and helping new users by answering basic questions or finding the right
individual to assist with a solution. While funded by Accelerite, it must
be clear that this person is working with/for the Apache CloudStack project.

3) Marketing was mentioned; I suggested we do more press releases and
possibly make use of interns.

4) OpenStack vs. CloudStack (unbiased technology comparison): this is a
common question. We need to come up with something that can help justify
Apache CloudStack to clients' leadership.

5) Cinder integration with CloudStack was mentioned, but there are no solid
plans yet.

6) Creating CloudStack appliances that are ready to be consumed, so users
can spin up nested VMs to try CloudStack effortlessly.

7) CloudStack Template Repository (plugin): there is code written for it by
Citrix that resides on ASF git, but for some reason it was dropped or
never completed. If we can give users a rich marketplace of appliances to
consume, we will certainly gain a good edge and improve adoption.

8) MeetUps: we need to re-kickstart this initiative within the SF Bay Area
and stream it to other locations/meetups.

9) Demo environment of CloudStack: David mentioned that Citrix-donated gear
is sitting idle in one of the ASF locations. I proposed we make use of
it and let new CloudStack explorers try it out without the hassle of
deploying it.

10) If we can get CloudStack into the EPEL/Fedora and Ubuntu upstream
repositories, it will help with adoption as well.

Please let me know if you would be interested in item #1, the quarterly
meeting. The proposed time is 9 AM PT on Thursday, August 17th.

I will help set up the first few calls and act as moderator.

Looking forward to your comments

Regards
ilya


[JOB OPPORTUNITY] LeaseWeb is looking for CloudStack Developer

2017-07-10 Thread ilya
Hi Folks,

I promised to help a LeaseWeb recruiter with posting a job to the "dev" and
"user" lists; apologies for cross-posting.

Please reach out to the recruiter directly; the job description can be seen here:

https://drive.google.com/open?id=0B06G3DVBuP9zQXRwMzVKZTJOVlU

The recruiter for this position can be reached here:
https://www.linkedin.com/in/darwinbpoveda/

Regards
ilya


Re: KVM VM Snapshots

2017-07-10 Thread Rubens Malheiro
Sorry to butt in, but will the VM snapshot support for KVM in 4.10 be hot
(i.e. work while the VM is running)?
On Mon, 10 Jul 2017 at 21:28 Simon Weller  wrote:

> Asai,
>
> 4.10 was approved last week. It should hit the repos with the next few
> days.
>
> - Si
>
> Simon Weller/615-312-6068
>
> -Original Message-
> From: Asai [a...@globalchangemusic.org]
> Received: Monday, 10 Jul 2017, 4:49PM
> To: users@cloudstack.apache.org [users@cloudstack.apache.org]
> Subject: Re: KVM VM Snapshots
>
> Rather than 9.10 I meant 4.10.  Rather than 9.2 I meant 4.9.2. Sorry.
>
>
> On 7/10/2017 2:46 PM, Asai wrote:
> > Greetings,
> >
> > Back in January there was a push to integrate the KVM snapshotting
> > ability into the 9.10 trunk.  I think this did get merged in, but 9.10
> > doesn't seem to be anywhere near release yet, so wondering if the devs
> > can push the KVM snapshotting patch into the 9.2 trunk and release as
> > a minor update?
> >
> > Asai
> >
>
>


RE: KVM VM Snapshots

2017-07-10 Thread Simon Weller
Asai,

4.10 was approved last week. It should hit the repos within the next few days.

- Si

Simon Weller/615-312-6068

-Original Message-
From: Asai [a...@globalchangemusic.org]
Received: Monday, 10 Jul 2017, 4:49PM
To: users@cloudstack.apache.org [users@cloudstack.apache.org]
Subject: Re: KVM VM Snapshots

Rather than 9.10 I meant 4.10.  Rather than 9.2 I meant 4.9.2. Sorry.


On 7/10/2017 2:46 PM, Asai wrote:
> Greetings,
>
> Back in January there was a push to integrate the KVM snapshotting
> ability into the 9.10 trunk.  I think this did get merged in, but 9.10
> doesn't seem to be anywhere near release yet, so wondering if the devs
> can push the KVM snapshotting patch into the 9.2 trunk and release as
> a minor update?
>
> Asai
>



RE: [DISCUSS] CloudStack 4.9.3.0 (LTS)

2017-07-10 Thread Sean Lair
Here are three issues we ran into in 4.9.2.0.  We have been running all of 
these fixes for several months without issues.  The code changes are all very 
easy/small, but had a big impact for us.

I'd respectfully suggest they go into 4.9.3.0:

https://github.com/apache/cloudstack/pull/2041 (VR related jobs scheduled and 
run twice on mgmt servers)
https://github.com/apache/cloudstack/pull/2040 (Bug in monitoring of S2S VPNs - 
also exists in 4.10)
https://github.com/apache/cloudstack/pull/1966 (IPSEC VPNs do not work after 
vRouter reboot)

Thanks
Sean

-Original Message-
From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com] 
Sent: Friday, July 7, 2017 1:14 AM
To: d...@cloudstack.apache.org
Cc: users@cloudstack.apache.org
Subject: [DISCUSS] CloudStack 4.9.3.0 (LTS)

All,


With 4.10.0.0 voted, I would like to start some initial discussion around the
next minor LTS release, 4.9.3.0. At the moment I don't have a timeline, plans or
dates to share, but I would like to engage with the community to gather a list
of issues, commits, and PRs that we should consider for the next LTS release.


To reduce our test and QA scope, we don't want to consider changes that are new
features or enhancements, but strictly blocker/critical/major bugfixes and
security-related fixes. We can also consider reverting any already
committed/merged PR(s) on the 4.9 branch (committed since 4.9.2.0).


Please go through the list of commits since 4.9.2.0 (you can also run: git log
4.9.2.0..4.9) and let us know if there is any change we should consider
reverting:

https://github.com/apache/cloudstack/commits/4.9
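
For reference, a minimal way to do that check from a local clone (assuming the
4.9 branch and the 4.9.2.0 tag have been fetched from the apache remote, here
called origin):

    # fetch branches and release tags, then list what landed on 4.9 since 4.9.2.0
    git fetch origin --tags
    git log --oneline 4.9.2.0..origin/4.9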


I have started backporting some fixes on the 4.9 branch; please go through the
following PR and raise objections to any changes/commits that we should not
backport or that should be reverted:

https://github.com/apache/cloudstack/pull/2052


Lastly, please also share any PRs that we should consider reviewing and merging
on the 4.9 branch for the 4.9.3.0 release effort.


- Rohit

rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue



Re: KVM VM Snapshots

2017-07-10 Thread Asai

Rather than 9.10 I meant 4.10.  Rather than 9.2 I meant 4.9.2. Sorry.


On 7/10/2017 2:46 PM, Asai wrote:

Greetings,

Back in January there was a push to integrate the KVM snapshotting 
ability into the 9.10 trunk.  I think this did get merged in, but 9.10 
doesn't seem to be anywhere near release yet, so wondering if the devs 
can push the KVM snapshotting patch into the 9.2 trunk and release as 
a minor update?


Asai





KVM VM Snapshots

2017-07-10 Thread Asai

Greetings,

Back in January there was a push to integrate the KVM snapshotting 
ability into the 9.10 trunk.  I think this did get merged in, but 9.10 
doesn't seem to be anywhere near release yet, so wondering if the devs 
can push the KVM snapshotting patch into the 9.2 trunk and release as a 
minor update?


Asai



Integration of external/physical devices in advanced networking

2017-07-10 Thread S. Reddit
Hi List

I wonder what your ideas are for integrating externally managed VMs/physical
devices into a CloudStack environment. All I need is to enable the CloudStack
VPC vRouter to configure static NAT (or forwarding/firewall) rules for such
devices.

One possibility could be to deploy a small "dummy VM" with secondary addresses
that the NAT rules could point to, and then attach the external devices using
these addresses to the same isolated network. That seems like a crutch, though.
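
Roughly, that workaround would look like this with cloudmonkey (an untested
sketch; the IDs are placeholders for my hypothetical setup):

    # add the external device's address as a secondary IP on the dummy VM's NIC
    cloudmonkey add iptonic nicid=<dummy-vm-nic-id> ipaddress=<external-device-ip>

    # point static NAT for a public IP at that secondary guest IP
    cloudmonkey enable staticnat ipaddressid=<public-ip-id> virtualmachineid=<dummy-vm-id> vmguestip=<external-device-ip>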

other ideas?

many thanks!


Re: Database connector failure on Upgrade from CS 4.3.2 to 4.9.2

2017-07-10 Thread Patrick Miller
Thank you.

I missed the usage entry, though I do not have the usage server installed.

It is starting up now.

Thanks again.

Patrick Miller


From: Rene Moser 
Sent: Monday, July 10, 2017 9:49:22 AM
To: users@cloudstack.apache.org
Subject: Re: Database connector failure on Upgrade from CS 4.3.2 to 4.9.2

Hi

On 07/10/2017 06:27 PM, Rafael Weingärtner wrote:
> Did you try to set in the db.properties the "db.usage.driver"?
> something like: *db*.cloud.*driver*=jdbc:mysql

Related to CLOUDSTACK-9765

https://github.com/apache/cloudstack/commit/bcc6b4fbaf74865b971a72122d15d5bfde4ab7ba

Regards
René


Re: Database connector failure on Upgrade from CS 4.3.2 to 4.9.2

2017-07-10 Thread Rene Moser
Hi

On 07/10/2017 06:27 PM, Rafael Weingärtner wrote:
> Did you try to set in the db.properties the "db.usage.driver"?
> something like: *db*.cloud.*driver*=jdbc:mysql

Related to CLOUDSTACK-9765

https://github.com/apache/cloudstack/commit/bcc6b4fbaf74865b971a72122d15d5bfde4ab7ba

Regards
René


Re: Database connector failure on Upgrade from CS 4.3.2 to 4.9.2

2017-07-10 Thread Rafael Weingärtner
Did you try to set "db.usage.driver" in db.properties?
Something like the existing line: db.cloud.driver=jdbc:mysql
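
Looking at the pasted db.properties, the cloud and simulator drivers are there
but the usage one seems to be missing; if so, adding the line below (untested
on my side) should be enough:

    db.usage.driver=jdbc:mysql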


On Mon, Jul 10, 2017 at 11:48 AM, Patrick Miller <
patrick.mil...@sungardas.com> wrote:

> Hello all,
>
>  In preparation for the 4.10 release I tried to do an upgrade from 4.3.2
> to 4.9.2
>
>  I followed the release notes for the upgrade process including installing
> mysql-connector-python but I keep getting "DB driver type null is not
> supported!"  I googled, and all I found was notes about no end of line in
> the db.properties file.
>
>
>Thanks
>
>
>Patrick Miller
>
>
> Here are the mysql rpms installed on the system:
>
> rpm -qa | grep mysql
>
> mysql-connector-odbc-5.1.5r1144-7.el6.x86_64
>
> mysql-libs-5.1.73-8.el6_8.x86_64
>
> mysql-5.1.73-8.el6_8.x86_64
>
> mysql-connector-java-5.1.17-6.el6.noarch
>
> mysql-connector-python-2.1.6-1.el6.x86_64
>
> mysql-server-5.1.73-8.el6_8.x86_64
>
>
> Here is my db.properties
>
>
> # Licensed to the Apache Software Foundation (ASF) under one
>
> db.cloud.minEvictableIdleTimeMillis=24
>
> # or more contributor license agreements.  See the NOTICE file
>
> # distributed with this work for additional information
>
> db.simulator.password=cloud
>
> db.simulator.maxIdle=30
>
> # with the License.  You may obtain a copy of the License at
>
> # to you under the Apache License, Version 2.0 (the
>
> # "License"); you may not use this file except in compliance
>
> # regarding copyright ownership.  The ASF licenses this file
>
> db.usage.host=localhost
>
> #   http://www.apache.org/licenses/LICENSE-2.0
>
> # Encryption Settings
>
> # software distributed under the License is distributed on an
>
> db.usage.maxActive=100
>
> # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
>
> #
>
> # usage database settings
>
> # specific language governing permissions and limitations
>
> db.simulator.maxActive=250
>
> # management server clustering parameters, change cluster.node.IP to the
> machine
>
>  IP address
>
> db.cloud.username=cloud
>
> # KIND, either express or implied.  See the License for the
>
> # under the License.
>
> # Unless required by applicable law or agreed to in writing,
>
> cluster.node.IP=172.31.97.50
>
> db.usage.port=3306
>
> db.cloud.name=cloud
>
>
>
> # CloudStack database settings
>
> cluster.servlet.port=9090
>
> db.cloud.maxActive=250
>
> # in which the management server(Tomcat) is running
>
> db.cloud.host=localhost
>
> region.id=1
>
> # CloudStack database tuning parameters
>
>
> db.cloud.password=ENC(Hnwt8z1u0mzQABf/X9ZkKziHRHHrT4Y3)
>
> # CloudStack database SSL settings
>
> db.cloud.validationQuery=SELECT 1
>
> db.cloud.testOnBorrow=true
>
> db.awsapi.username=cloud
>
> db.cloud.port=3306
>
> db.cloud.timeBetweenEvictionRunsMillis=4
>
> db.cloud.keyStorePassword=
>
> db.cloud.autoReconnect=true
>
> db.cloud.keyStore=
>
>
> db.cloud.poolPreparedStatements=false
>
>
> db.cloud.maxIdle=30
>
> db.cloud.trustStorePassword=
>
>
> db.cloud.testWhileIdle=true
>
> db.cloud.url.params=prepStmtCacheSize=517=true
>
>
>
> # usage database tuning parameters
>
>
> db.cloud.maxWait=1
>
> db.simulator.autoReconnect=true
>
> # Simulator database settings
>
> db.usage.name=cloud_usage
>
> db.simulator.port=3306
>
> db.usage.url.params=
>
> db.usage.maxIdle=30
>
>
> db.usage.username=cloud
>
> db.cloud.trustStore=
>
> db.usage.maxWait=1
>
> db.cloud.useSSL=false
>
> db.cloud.encryption.type=file
>
> db.cloud.encrypt.secret=ENC(uAoObwkGyDGXTJKZyHcNRKruRCWJCSg0)
>
> db.awsapi.password=cloud
>
> db.simulator.username=cloud
>
> # awsapi database settings
>
> db.awsapi.host=localhost
>
> db.usage.password=ENC(5JfmqSDGmMvz4DeP6f+yaI9JWh2MUsuI)
>
> db.awsapi.port=3306
>
> db.simulator.maxWait=1
>
> db.usage.autoReconnect=true
>
> db.awsapi.name=cloudbridge
>
> db.simulator.name=simulator
>
> db.simulator.host=localhost
>
> db.cloud.driver=jdbc:mysql
>
> db.simulator.driver=jdbc:mysql
>
>
> This is an excerpt from my management-server.log
>
>
> 2017-07-10 14:55:51,599 INFO  [factory.support.DefaultListableBeanFactory]
> (main:null) Pre-instantiating singletons in org.springframework.beans.
> factory.support.DefaultListableBeanFact
>
> ory@71a40770: defining beans [ManagedContext,org.apache.
> cloudstack.managed.context.ManagedContextRunnable#0,
> databaseUpgradeChecker,versionDaoImpl,configurationDaoImpl,
> configDepot,scope
>
> dConfigStorageRegistry,entityManagerImpl,lockMasterListener,
> cloudStackLifeCycle,moduleStartup,transactionContextInterceptor,
> actionEventInterceptor,org.springframework.aop.config.intern
>
> alAutoProxyCreator,org.springframework.aop.support.
> DefaultBeanFactoryPointcutAdvisor#0,org.springframework.aop.support.
> DefaultBeanFactoryPointcutAdvisor#1,org.springframework.aop.suppo
>
> rt.DefaultBeanFactoryPointcutAdvisor#2,org.springframework.aop.support.
> DefaultBeanFactoryPointcutAdvisor#3,org.apache.cloudstack.
> spring.lifecycle.registry.RegistryLifecycle#0,org.apach
>
> 

Database connector failure on Upgrade from CS 4.3.2 to 4.9.2

2017-07-10 Thread Patrick Miller
Hello all,

 In preparation for the 4.10 release, I tried to do an upgrade from 4.3.2 to
4.9.2.

 I followed the release notes for the upgrade process, including installing
mysql-connector-python, but I keep getting "DB driver type null is not
supported!". I googled, and all I found were notes about a missing end-of-line
in the db.properties file.


   Thanks,

   Patrick Miller


Here are the mysql rpms installed on the system:

rpm -qa | grep mysql

mysql-connector-odbc-5.1.5r1144-7.el6.x86_64

mysql-libs-5.1.73-8.el6_8.x86_64

mysql-5.1.73-8.el6_8.x86_64

mysql-connector-java-5.1.17-6.el6.noarch

mysql-connector-python-2.1.6-1.el6.x86_64

mysql-server-5.1.73-8.el6_8.x86_64


Here is my db.properties


# Licensed to the Apache Software Foundation (ASF) under one

db.cloud.minEvictableIdleTimeMillis=24

# or more contributor license agreements.  See the NOTICE file

# distributed with this work for additional information

db.simulator.password=cloud

db.simulator.maxIdle=30

# with the License.  You may obtain a copy of the License at

# to you under the Apache License, Version 2.0 (the

# "License"); you may not use this file except in compliance

# regarding copyright ownership.  The ASF licenses this file

db.usage.host=localhost

#   http://www.apache.org/licenses/LICENSE-2.0

# Encryption Settings

# software distributed under the License is distributed on an

db.usage.maxActive=100

# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY

#

# usage database settings

# specific language governing permissions and limitations

db.simulator.maxActive=250

# management server clustering parameters, change cluster.node.IP to the machine IP address

db.cloud.username=cloud

# KIND, either express or implied.  See the License for the

# under the License.

# Unless required by applicable law or agreed to in writing,

cluster.node.IP=172.31.97.50

db.usage.port=3306

db.cloud.name=cloud



# CloudStack database settings

cluster.servlet.port=9090

db.cloud.maxActive=250

# in which the management server(Tomcat) is running

db.cloud.host=localhost

region.id=1

# CloudStack database tuning parameters


db.cloud.password=ENC(Hnwt8z1u0mzQABf/X9ZkKziHRHHrT4Y3)

# CloudStack database SSL settings

db.cloud.validationQuery=SELECT 1

db.cloud.testOnBorrow=true

db.awsapi.username=cloud

db.cloud.port=3306

db.cloud.timeBetweenEvictionRunsMillis=4

db.cloud.keyStorePassword=

db.cloud.autoReconnect=true

db.cloud.keyStore=


db.cloud.poolPreparedStatements=false


db.cloud.maxIdle=30

db.cloud.trustStorePassword=


db.cloud.testWhileIdle=true

db.cloud.url.params=prepStmtCacheSize=517=true



# usage database tuning parameters


db.cloud.maxWait=1

db.simulator.autoReconnect=true

# Simulator database settings

db.usage.name=cloud_usage

db.simulator.port=3306

db.usage.url.params=

db.usage.maxIdle=30


db.usage.username=cloud

db.cloud.trustStore=

db.usage.maxWait=1

db.cloud.useSSL=false

db.cloud.encryption.type=file

db.cloud.encrypt.secret=ENC(uAoObwkGyDGXTJKZyHcNRKruRCWJCSg0)

db.awsapi.password=cloud

db.simulator.username=cloud

# awsapi database settings

db.awsapi.host=localhost

db.usage.password=ENC(5JfmqSDGmMvz4DeP6f+yaI9JWh2MUsuI)

db.awsapi.port=3306

db.simulator.maxWait=1

db.usage.autoReconnect=true

db.awsapi.name=cloudbridge

db.simulator.name=simulator

db.simulator.host=localhost

db.cloud.driver=jdbc:mysql

db.simulator.driver=jdbc:mysql


This is an excerpt from my management-server.log


2017-07-10 14:55:51,599 INFO  [factory.support.DefaultListableBeanFactory] 
(main:null) Pre-instantiating singletons in 
org.springframework.beans.factory.support.DefaultListableBeanFact

ory@71a40770: defining beans 
[ManagedContext,org.apache.cloudstack.managed.context.ManagedContextRunnable#0,databaseUpgradeChecker,versionDaoImpl,configurationDaoImpl,configDepot,scope

dConfigStorageRegistry,entityManagerImpl,lockMasterListener,cloudStackLifeCycle,moduleStartup,transactionContextInterceptor,actionEventInterceptor,org.springframework.aop.config.intern

alAutoProxyCreator,org.springframework.aop.support.DefaultBeanFactoryPointcutAdvisor#0,org.springframework.aop.support.DefaultBeanFactoryPointcutAdvisor#1,org.springframework.aop.suppo

rt.DefaultBeanFactoryPointcutAdvisor#2,org.springframework.aop.support.DefaultBeanFactoryPointcutAdvisor#3,org.apache.cloudstack.spring.lifecycle.registry.RegistryLifecycle#0,org.apach

e.cloudstack.spring.lifecycle.ConfigDepotLifeCycle#0,contrailEventInterceptor,org.springframework.aop.support.DefaultBeanFactoryPointcutAdvisor#4,org.springframework.aop.support.Defaul

tBeanFactoryPointcutAdvisor#5,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcesso

r,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.beans.factory.


Re: Snapshot and secondary storage utilisation.

2017-07-10 Thread Anshul Gangwar
By default, XenServer takes delta snapshots, i.e. a snapshot of the differential
disk since the last snapshot taken. This is configurable via the global setting
"snapshot.delta.max", which controls how many delta snapshots are taken before a
full snapshot is taken again. The default value is 16; if you don't want that
behaviour, set it to 0. Taking only differential disk snapshots makes the
snapshot operation faster, but at the cost of additional storage.
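
If you want to change it (for example to disable deltas entirely), the setting
can be updated in the UI under Global Settings, or with cloudmonkey along these
lines (a sketch; a management server restart may be needed before it takes
effect):

    cloudmonkey update configuration name=snapshot.delta.max value=0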

Regards,
Anshul 

On 10/07/17, 12:43 PM, "Makrand"  wrote:

Hi all,

My setup is:- ACS 4.4. XENserver 6.2 SP1. 4TB of secondary storage coming
from NFS.

I am observing some issues the way *.vhd* files are stored and cleaned up
in secondary storage. Let's take an example of a VM-813. It has 250G root
disk (disk ID 1015) The snapshot is scheduled to happen once every week
(sat night) and supposes to keep only 1 snapshot. From GUI I am seeing its
only keeping the latest week snapshot.

But resource utilization on CS GUI is increasing day by day. So I just ran
du -smh and found there are multiple vhd files of different sizes under
secondary storage.

Here is snippet:-

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# du -smhh *
1.5K1002
1.5K1003
1.5K1004
*243G1015*
1.5K1114

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# ls -lht *
*1015:*
*total 243G*
*-rw-r--r-- 1 nobody nogroup  32G Jul  8 21:19
8a7e6580-5191-4eb0-9eb1-3ec8e75ce104.vhd*
*-rw-r--r-- 1 nobody nogroup  40G Jul  1 21:30
f52b82b0-0eaf-4297-a973-1f5477c10b5e.vhd*
*-rw-r--r-- 1 nobody nogroup  43G Jun 24 21:35
3dc72a3b-91ad-45ae-b618-9aefb7565edb.vhd*
*-rw-r--r-- 1 nobody nogroup  40G Jun 17 21:30
c626a9c5-1929-4489-b181-6524af1c88ad.vhd*
*-rw-r--r-- 1 nobody nogroup  29G Jun 10 21:16
697cf9bd-4433-426d-a4a1-545f03aae3e6.vhd*
*-rw-r--r-- 1 nobody nogroup  29G Jun  3 21:00
bff859b3-a51c-4186-8c19-1ba94f99f9e7.vhd*
*-rw-r--r-- 1 nobody nogroup  43G May 27 21:35
127e3f6e-4fa5-45ed-a95d-7d0b850a053d.vhd*
*-rw-r--r-- 1 nobody nogroup  60G May 20 22:01
619fe1ed-6807-441c-9526-526486d7a6d2.vhd*
*-rw-r--r-- 1 nobody nogroup  35G May 13 21:23
71b0d6a8-3c93-493f-b82c-732b7a808f6d.vhd*
*-rw-r--r-- 1 nobody nogroup  31G May  6 21:19
ccbfb3ec-abd8-448c-ba79-36631b227203.vhd*
*-rw-r--r-- 1 nobody nogroup  32G Apr 29 21:18
52215821-ed4d-4283-9aed-9f9cc5acd5bd.vhd*
*-rw-r--r-- 1 nobody nogroup  38G Apr 22 21:26
4cb6ea42-8450-493a-b6f2-5be5b0594a30.vhd*
*-rw-r--r-- 1 nobody nogroup 248G Apr 16 00:44
243f50d6-d06a-47af-ab45-e0b8599aac8d.vhd*


Observed same behavior for root disks of other 4 VMs. So the number of vhds
are ever growing on secondary storage and one will eventually run out of
secondary storage size.

Simple Question:-

1) Why is cloud stack creating multiple vhd files? Should not it supposed
to keep only one vhd at secondary storage defined in snap policy?

Any thoughts? As explained earlier...from GUI I am seeing last weeks snap
as backed up.



--
Makrand




AW: Snapshot and secondary storage utilisation.

2017-07-10 Thread S . Brüseke - proIO GmbH
Hi Makrand,

please take a look at the global setting "snapshot.delta.max". As far as I
understand, for scheduled snapshots ACP uses deltas to minimize time and
transferred data. So after the first full snapshot has been taken, each
following snapshot is only a delta until you hit snapshot.delta.max.
Because ACP needs the full snapshot for as long as you need one of its deltas,
you will still see that vhd file on your secondary storage, but not in the UI.

Hope that helped.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-Original Message-
From: Makrand [mailto:makrandsa...@gmail.com]
Sent: Monday, 10 July 2017 09:14
To: users@cloudstack.apache.org
Subject: Snapshot and secondary storage utilisation.

Hi all,

My setup is:- ACS 4.4. XENserver 6.2 SP1. 4TB of secondary storage coming from 
NFS.

I am observing some issues the way *.vhd* files are stored and cleaned up in 
secondary storage. Let's take an example of a VM-813. It has 250G root disk 
(disk ID 1015) The snapshot is scheduled to happen once every week (sat night) 
and supposes to keep only 1 snapshot. From GUI I am seeing its only keeping the 
latest week snapshot.

But resource utilization on CS GUI is increasing day by day. So I just ran du 
-smh and found there are multiple vhd files of different sizes under secondary 
storage.

Here is snippet:-

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# du -smhh *
1.5K1002
1.5K1003
1.5K1004
*243G1015*
1.5K1114

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# ls -lht *
*1015:*
*total 243G*
*-rw-r--r-- 1 nobody nogroup  32G Jul  8 21:19
8a7e6580-5191-4eb0-9eb1-3ec8e75ce104.vhd*
*-rw-r--r-- 1 nobody nogroup  40G Jul  1 21:30
f52b82b0-0eaf-4297-a973-1f5477c10b5e.vhd*
*-rw-r--r-- 1 nobody nogroup  43G Jun 24 21:35
3dc72a3b-91ad-45ae-b618-9aefb7565edb.vhd*
*-rw-r--r-- 1 nobody nogroup  40G Jun 17 21:30
c626a9c5-1929-4489-b181-6524af1c88ad.vhd*
*-rw-r--r-- 1 nobody nogroup  29G Jun 10 21:16
697cf9bd-4433-426d-a4a1-545f03aae3e6.vhd*
*-rw-r--r-- 1 nobody nogroup  29G Jun  3 21:00
bff859b3-a51c-4186-8c19-1ba94f99f9e7.vhd*
*-rw-r--r-- 1 nobody nogroup  43G May 27 21:35
127e3f6e-4fa5-45ed-a95d-7d0b850a053d.vhd*
*-rw-r--r-- 1 nobody nogroup  60G May 20 22:01
619fe1ed-6807-441c-9526-526486d7a6d2.vhd*
*-rw-r--r-- 1 nobody nogroup  35G May 13 21:23
71b0d6a8-3c93-493f-b82c-732b7a808f6d.vhd*
*-rw-r--r-- 1 nobody nogroup  31G May  6 21:19
ccbfb3ec-abd8-448c-ba79-36631b227203.vhd*
*-rw-r--r-- 1 nobody nogroup  32G Apr 29 21:18
52215821-ed4d-4283-9aed-9f9cc5acd5bd.vhd*
*-rw-r--r-- 1 nobody nogroup  38G Apr 22 21:26
4cb6ea42-8450-493a-b6f2-5be5b0594a30.vhd*
*-rw-r--r-- 1 nobody nogroup 248G Apr 16 00:44
243f50d6-d06a-47af-ab45-e0b8599aac8d.vhd*


Observed same behavior for root disks of other 4 VMs. So the number of vhds are 
ever growing on secondary storage and one will eventually run out of secondary 
storage size.

Simple Question:-

1) Why is cloud stack creating multiple vhd files? Should not it supposed to 
keep only one vhd at secondary storage defined in snap policy?

Any thoughts? As explained earlier...from GUI I am seeing last weeks snap as 
backed up.



--
Makrand


- proIO GmbH -
Geschäftsführer: Swen Brüseke
Sitz der Gesellschaft: Frankfurt am Main

USt-IdNr. DE 267 075 918
Registergericht: Frankfurt am Main - HRB 86239





Snapshot and secondary storage utilisation.

2017-07-10 Thread Makrand
Hi all,

My setup: ACS 4.4, XenServer 6.2 SP1, 4TB of secondary storage coming
from NFS.

I am observing some issues with the way .vhd files are stored and cleaned
up in secondary storage. Let's take the example of VM-813. It has a 250G
root disk (disk ID 1015). The snapshot is scheduled to happen once every
week (Saturday night) and is supposed to keep only 1 snapshot. From the
GUI I see it is only keeping the latest week's snapshot.

But resource utilization in the CS GUI is increasing day by day. So I just
ran du -smh and found there are multiple vhd files of different sizes under
secondary storage.

Here is a snippet:

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# du -smhh *
1.5K    1002
1.5K    1003
1.5K    1004
243G    1015
1.5K    1114

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# ls -lht *
1015:
total 243G
-rw-r--r-- 1 nobody nogroup  32G Jul  8 21:19 8a7e6580-5191-4eb0-9eb1-3ec8e75ce104.vhd
-rw-r--r-- 1 nobody nogroup  40G Jul  1 21:30 f52b82b0-0eaf-4297-a973-1f5477c10b5e.vhd
-rw-r--r-- 1 nobody nogroup  43G Jun 24 21:35 3dc72a3b-91ad-45ae-b618-9aefb7565edb.vhd
-rw-r--r-- 1 nobody nogroup  40G Jun 17 21:30 c626a9c5-1929-4489-b181-6524af1c88ad.vhd
-rw-r--r-- 1 nobody nogroup  29G Jun 10 21:16 697cf9bd-4433-426d-a4a1-545f03aae3e6.vhd
-rw-r--r-- 1 nobody nogroup  29G Jun  3 21:00 bff859b3-a51c-4186-8c19-1ba94f99f9e7.vhd
-rw-r--r-- 1 nobody nogroup  43G May 27 21:35 127e3f6e-4fa5-45ed-a95d-7d0b850a053d.vhd
-rw-r--r-- 1 nobody nogroup  60G May 20 22:01 619fe1ed-6807-441c-9526-526486d7a6d2.vhd
-rw-r--r-- 1 nobody nogroup  35G May 13 21:23 71b0d6a8-3c93-493f-b82c-732b7a808f6d.vhd
-rw-r--r-- 1 nobody nogroup  31G May  6 21:19 ccbfb3ec-abd8-448c-ba79-36631b227203.vhd
-rw-r--r-- 1 nobody nogroup  32G Apr 29 21:18 52215821-ed4d-4283-9aed-9f9cc5acd5bd.vhd
-rw-r--r-- 1 nobody nogroup  38G Apr 22 21:26 4cb6ea42-8450-493a-b6f2-5be5b0594a30.vhd
-rw-r--r-- 1 nobody nogroup 248G Apr 16 00:44 243f50d6-d06a-47af-ab45-e0b8599aac8d.vhd


I observed the same behavior for the root disks of 4 other VMs. So the number
of vhds keeps growing on secondary storage, and we will eventually run out of
secondary storage space.

Simple question:

1) Why is CloudStack creating multiple vhd files? Shouldn't it keep only the
single vhd on secondary storage, as defined in the snapshot policy?

Any thoughts? As explained earlier, from the GUI I only see last week's
snapshot as backed up.



--
Makrand


Re: Apply firewall rule to multiple IPs

2017-07-10 Thread Boris Stoyanov
Hi Cristian,
I think the quickest way would be through cloudmonkey using ‘create 
firewallrule’.

(local) SBCM5> create firewallrule
cidrlist= endport= filter= fordisplay= icmpcode= icmptype= ipaddressid= protocol= startport= type=

https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI
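
For example, to open port 80 on several public IPs in one go, you could wrap
that call in a small shell loop (an untested sketch; the IP address IDs are
placeholders you would fetch via "list publicipaddresses"):

    # apply the same ingress rule to each public IP in the list
    for ipid in <ip-id-1> <ip-id-2> <ip-id-3>; do
      cloudmonkey create firewallrule ipaddressid=$ipid protocol=tcp startport=80 endport=80 cidrlist=0.0.0.0/0
    done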




boris.stoya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue

On Jul 7, 2017, at 9:08 PM, Ciobanu Cristian wrote:

Hello,





  Is there a way to apply a firewall rule to multiple IPs? For example,
I have VM01 with 10 allocated public IPs and I need to allow ingress
traffic on port 80. How can I apply this rule to all of the IPs and avoid
creating a separate rule for each public IP?

  My environment is configured with advanced networking, ACS 4.9.2 and
VMware 5.5.





Thank you.

Cristian