Re: [PROPOSE] Combining Apache CloudStack Documentation

2018-07-24 Thread ilya musayev
I like it, but I wonder if an Upgrade section needs to be added?

On Tue, Jul 24, 2018 at 2:25 AM Paul Angus  wrote:

> Hi All,
>
> We currently have four sources of documentation [1], which makes managing
> the documentation convoluted and, worse, makes navigating and searching the
> documentation really difficult.
>
> I have taken the current documentation and combined them into one repo,
> then created 7 sections:
>
> CloudStack Concepts and Terminology
> Quick Installation Guide
> Installation Guide
> Usage Guide
> Developers Guide
> Plugins Guide
> Release Notes
>
> I haven't changed any of the content, but I've moved some of it around to
> make more sense (to me).  You can see the result on RTD [2]
>
> I'd like to PROPOSE to move this demo version of the documentation over to
> the Apache repos and make it THE documentation source, update the website,
> and mark the current repos/sites as archive data.
>
> [1]
> https://github.com/apache/cloudstack-docs.git is a bit of a hodge-podge
> of resources.
> https://github.com/apache/cloudstack-docs-install.git is the install guide.
> https://github.com/apache/cloudstack-docs-admin.git is the current admin
> manual.
> https://github.com/apache/cloudstack-docs-rn.git is the release notes for
> individual releases.
>
> [2]  https://beta-cloudstack-docs.readthedocs.io/en/latest/
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HS, UK
> @shapeblue
>
>
>
>


Re: [DISCUSS] Blocking the creation of new Basic Networking zones

2018-06-20 Thread ilya musayev
I think the simplicity of Basic Zone was that you can get away with one VLAN
for everything (great for a POC setup), whereas Advanced Shared with VLAN
isolation requires several VLANs to get going.

How would we cover this use case?
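
(For a POC, assuming the untagged-VLAN approach Wido describes below, the
single shared network might be created roughly like this via cloudmonkey -
the zone/offering UUIDs and IP range are hypothetical:

  create network zoneid=<zone-uuid> networkofferingid=<shared-sg-offering-uuid> \
    name=poc-shared displaytext=poc-shared vlan=untagged \
    gateway=10.0.0.1 netmask=255.255.255.0 startip=10.0.0.10 endip=10.0.0.250
)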

On Wed, Jun 20, 2018 at 11:34 AM Tutkowski, Mike 
wrote:

> Also, yes, I agree with the list you provided, Wido. We might have to
> break “other fancy stuff” into more detail, though. ;)
>
> On 6/20/18, 12:32 PM, "Tutkowski, Mike"  wrote:
>
> Sorry, Wido :) I missed that part.
>
> On 6/20/18, 5:03 AM, "Wido den Hollander"  wrote:
>
>
>
> On 06/20/2018 12:31 AM, Tutkowski, Mike wrote:
> > If this initiative goes through, perhaps that’s a good time to
> bump CloudStack’s release number to 5.0.0?
> >
>
> That's what I said in my e-mail :-) But yes, I agree with you, this
> might be a good time to bump it to 5.0
>
> With that we would:
>
> - Drop creation of new Basic Networking Zones
> - Support IPv6 in shared IPv6 networks
> - Java 9?
> - Drop support for Ubuntu 12.04
> - Other fancy stuff?
> - Support ConfigDrive in all scenarios properly
>
> How would that sound?
>
> Wido
>
> >> On Jun 19, 2018, at 3:17 PM, Wido den Hollander 
> wrote:
> >>
> >>
> >>
> >>> On 06/19/2018 11:07 PM, Daan Hoogland wrote:
> >>> I like this initiative, and here comes the big 'but', even though
> I myself
> >>> might think it is not valid: Basic zones are there to give a
> simple start
> >>> for new users. If we can give a one-knob start/one page wizard
> for creating
> >>> a shared network in advanced zone with security groups and
> userdata, great.
> >>
> >> That would be a UI thing, but it would be a matter of using VLAN
> >> isolation and passing in VLAN 0 or 'untagged', because that's
> basically
> >> what Basic Networking does.
> >>
> >> It plugs the VM on top of usually cloudbr0 (KVM).
> >>
> >> If you use vlan://untagged for the broadcast_uri in Advanced
> Networking
> >> you get exactly the same result.
> >>
> >>> And I really fancy this idea. Let's make ACS simpler by
> throwing out as
> >>> much code as we can in a gradual and controlled way :+1:
> >>
> >> I would love to. But I'm a real novice when it comes to the UI
> though.
> >> So that would be something I wouldn't be good at doing.
> >>
> >> Blocking Basic Networking creation is a few if-statements at
> the right
> >> location and you're done.
> >>
> >> Wido
> >>
> >>>
>  On Tue, Jun 19, 2018 at 10:57 PM, Wido den Hollander <
> w...@widodh.nl> wrote:
> 
>  Hi,
> 
>  We (PCextreme) are a big-time user of Basic Networking and
> recently
>  started to look into Advanced Networking with VLAN isolation
> and a
>  shared network.
> 
>  This provides (from what we can see) all the features Basic
> Networking
>  provides, like the VR just doing DHCP and UserData while the
> Hypervisor
>  does the Security Grouping.
> 
>  That made me wonder why we still have Basic Networking.
> 
>  Dropping all the code would be a big problem for users as you
> can't
>  simply migrate from Basic to Advanced. In theory we found out
> that it's
>  possible by changing the database, but I wouldn't guarantee
> it works in
>  every use-case. So doing this automatically during an upgrade
> would be
>  difficult.
> 
>  To prevent us from having to maintain the Basic Networking
> code for ever
>  I would like to propose and discuss the matter of preventing
> the
>  creation of new Basic Networking zones.
> 
>  In the future this can get rid of a lot of if-else
> statements in the
>  code, and it would also make testing easier as we'd have fewer
> things to test.
> 
>  Most of the development also seems to go in the Advanced
> Networking
>  direction.
> 
>  We are currently also working on IPv6 in Advanced Shared
> Networks and
>  that's progressing very well.
> 
>  Would this be something to call the 5.0 release where we
> simplify the
>  networking and in the UI/API get rid of Basic Networking
> while keeping
>  it alive for existing users?
> 
>  Wido
> 
> >>>
> >>>
> >>>
>
>
>
>
>


Convert KVM Instance to CloudStack

2018-06-13 Thread ilya musayev
Hi Users and Dev

I apologize for cross-posting.

I have a bunch of VMs that were deployed by CloudStack; however, the management
server, along with its DB, is no longer available.

This is a POC environment, but I would love not to lose and have to recreate
the VMs if possible.

Hence I'm thinking of writing a re-ingestion process for existing running KVM
instances back into a new CloudStack, without doing template imports and such.

Has anyone created tooling for this endeavour by any chance? If not, I might
have to create one :(
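
(A rough starting point for the inventory half, assuming shell access to each
KVM host with virsh available; actually re-registering the domains in a new
CloudStack is the harder part this sketch doesn't cover:

  # enumerate running domains with their disk paths and NICs on one hypervisor
  for dom in $(virsh list --name); do
    echo "== $dom =="
    virsh dumpxml "$dom" | grep -E "source (file|dev)"   # disk sources
    virsh domiflist "$dom"                               # MACs / bridges
  done
)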


Thanks
ilya

Re: {ANNOUNCE] 4.11.1 RC2 cut

2018-06-08 Thread ilya musayev
Daan and Rohit

Come to think of it, you are correct. I looked through and noticed my
install does 'cloudstack-*' against the 4.11 repo, and Marvin, being in the
repo, gets installed.

I will change my process. Thanks for the update.
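
(For anyone else who hits this: since cloudstack-marvin is not a dependency of
the production packages, installing them explicitly rather than globbing should
keep Marvin - and its pypi downloads - out entirely. A minimal sketch; trim the
package list to your setup:

  yum install cloudstack-management cloudstack-common cloudstack-usage

rather than 'yum install cloudstack-*'.)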

-ilya

On Fri, Jun 8, 2018 at 1:08 AM Rohit Yadav 
wrote:

> Hi Ilya,
>
>
> The cloudstack-marvin package does not need to be installed for normal
> CloudStack setup/use, nor is it added as a dependency of any of the other
> production packages such as cloudstack-management, cloudstack-agent,
> cloudstack-common, cloudstack-usage.
>
>
> We created these additional packages (cloudstack-integration-tests and
> cloudstack-marvin) to make it easier for people to test a CloudStack
> release using release-specific marvin and integration tests (as far as I
> know the current support/usage of these packages is with Trillian). We can
> discuss having a wheel/frozen marvin package that includes all python
> dependencies, but in a separate thread.
>
>
> - Rohit
>
> <https://cloudstack.apache.org>
>
>
>
> 
> From: ilya musayev 
> Sent: Friday, June 8, 2018 1:52:00 AM
> To: dev@cloudstack.apache.org
> Subject: Re: {ANNOUNCE] 4.11.1 RC2 cut
>
> Hi Daan
>
> I've tried installing 4.11 RC1 (I know I'm in the wrong thread) and noticed
> it does something funky - like installing Marvin and going to pypi to get
> dependencies. If it's already fixed in RC2, please ignore this request -
> but if not...
>
> Unfortunately some environments don't have the luxury of an open outbound
> internet connection.
>
> Do you think we can refine the requirements and drop packages that aren't
> needed?
>
> Here is an example:
>
>   Installing : cloudstack-marvin-4.11.1.0-rc1.el7.centos.x86_64
>
>
>
> 191/297
>
> Collecting
>
> http://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.0.4.zip#md5=3df394d89300db95163f17c843ef49df
>
>   Downloading
>
> http://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.0.4.zip
> (277kB)
>
> Installing collected packages: mysql-connector-python
>
>   Found existing installation: mysql-connector-python 1.1.6
>
> DEPRECATION: Uninstalling a distutils installed project
> (mysql-connector-python) has been deprecated and will be removed in a
> future version. This is due to the fact that uninstalling a distutils
> project will only partially uninstall the project.
>
> Uninstalling mysql-connector-python-1.1.6:
>
>   Successfully uninstalled mysql-connector-python-1.1.6
>
>   Running setup.py install for mysql-connector-python: started
>
> Running setup.py install for mysql-connector-python: finished with
> status 'done'
>
> Successfully installed mysql-connector-python-2.0.4
>
> You are using pip version 8.1.2, however version 10.0.1 is available.
>
> You should consider upgrading via the 'pip install --upgrade pip' command.
>
> Processing /usr/share/cloudstack-marvin/Marvin-4.11.1.0.tar.gz
>
> Collecting mysql-connector-python>=1.1.6 (from Marvin==4.11.1.0)
>
>   Downloading
>
> https://files.pythonhosted.org/packages/dc/48/32c715d2cef42d0791c5b2f21b4f1f280c8e45afa66a02f4d1828c77f3ea/mysql_connector_python-8.0.11-cp27-cp27mu-manylinux1_x86_64.whl
> (8.1MB)
>
> Collecting requests>=2.2.1 (from Marvin==4.11.1.0)
>
>   Downloading
>
> https://files.pythonhosted.org/packages/49/df/50aa1999ab9bde74656c2919d9c0c085fd2b3775fd3eca826012bef76d8c/requests-2.18.4-py2.py3-none-any.whl
> (88kB)
>
> Collecting paramiko>=1.13.0 (from Marvin==4.11.1.0)
>
>   Downloading
>
> https://files.pythonhosted.org/packages/3e/db/cb7b6656e0e7387637ce850689084dc0b94b44df31cc52e5fc5c2c4fd2c1/paramiko-2.4.1-py2.py3-none-any.whl
> (194kB)
>
> Collecting nose>=1.3.3 (from Marvin==4.11.1.0)
>
>   Downloading
>
> https://files.pythonhosted.org/packages/99/4f/13fb671119e65c4dce97c60e67d3fd9e6f7f809f2b307e2611f4701205cb/nose-1.3.7-py2-none-any.whl
> (154kB)
>
> Collecting ddt>=0.4.0 (from Marvin==4.11.1.0)
>
>   Downloading
>
> https://files.pythonhosted.org/packages/54/eb/b39eec5f24414cb5d7393ed9cb1bafac740d005846019473d6fc8df18db2/ddt-1.1.3-py2.py3-none-any.whl

Re: {ANNOUNCE] 4.11.1 RC2 cut

2018-06-07 Thread ilya musayev
Hi Daan

I've tried installing 4.11 RC1 (I know I'm in the wrong thread) and noticed
it does something funky - like installing Marvin and going to pypi to get
dependencies. If it's already fixed in RC2, please ignore this request -
but if not...

Unfortunately some environments don't have the luxury of an open outbound
internet connection.

Do you think we can refine the requirements and drop packages that aren't
needed?

Here is an example:

  Installing : cloudstack-marvin-4.11.1.0-rc1.el7.centos.x86_64



191/297

Collecting
http://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.0.4.zip#md5=3df394d89300db95163f17c843ef49df

  Downloading
http://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.0.4.zip
(277kB)

Installing collected packages: mysql-connector-python

  Found existing installation: mysql-connector-python 1.1.6

DEPRECATION: Uninstalling a distutils installed project
(mysql-connector-python) has been deprecated and will be removed in a
future version. This is due to the fact that uninstalling a distutils
project will only partially uninstall the project.

Uninstalling mysql-connector-python-1.1.6:

  Successfully uninstalled mysql-connector-python-1.1.6

  Running setup.py install for mysql-connector-python: started

Running setup.py install for mysql-connector-python: finished with
status 'done'

Successfully installed mysql-connector-python-2.0.4

You are using pip version 8.1.2, however version 10.0.1 is available.

You should consider upgrading via the 'pip install --upgrade pip' command.

Processing /usr/share/cloudstack-marvin/Marvin-4.11.1.0.tar.gz

Collecting mysql-connector-python>=1.1.6 (from Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/dc/48/32c715d2cef42d0791c5b2f21b4f1f280c8e45afa66a02f4d1828c77f3ea/mysql_connector_python-8.0.11-cp27-cp27mu-manylinux1_x86_64.whl
(8.1MB)

Collecting requests>=2.2.1 (from Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/49/df/50aa1999ab9bde74656c2919d9c0c085fd2b3775fd3eca826012bef76d8c/requests-2.18.4-py2.py3-none-any.whl
(88kB)

Collecting paramiko>=1.13.0 (from Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/3e/db/cb7b6656e0e7387637ce850689084dc0b94b44df31cc52e5fc5c2c4fd2c1/paramiko-2.4.1-py2.py3-none-any.whl
(194kB)

Collecting nose>=1.3.3 (from Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/99/4f/13fb671119e65c4dce97c60e67d3fd9e6f7f809f2b307e2611f4701205cb/nose-1.3.7-py2-none-any.whl
(154kB)

Collecting ddt>=0.4.0 (from Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/54/eb/b39eec5f24414cb5d7393ed9cb1bafac740d005846019473d6fc8df18db2/ddt-1.1.3-py2.py3-none-any.whl

Collecting pyvmomi>=5.5.0 (from Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/ba/45/d6e4a87004f1c87bdee2942a8896289684e660dbd76e868047d3319b245f/pyvmomi-6.7.0-py2.py3-none-any.whl
(249kB)

Collecting netaddr>=0.7.14 (from Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/ba/97/ce14451a9fd7bdb5a397abf99b24a1a6bb7a1a440b019bebd2e9a0dbec74/netaddr-0.7.19-py2.py3-none-any.whl
(1.6MB)

Collecting dnspython (from Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/a6/72/209e18bdfedfd78c6994e9ec96981624a5ad7738524dd474237268422cb8/dnspython-1.15.0-py2.py3-none-any.whl
(177kB)

Collecting ipmisim>=0.7 (from Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/58/95/6acd215ec4eaa523b1bfd3b9e16f1defaaf03717a2ed7193077ecf96fa7e/ipmisim-0.7.tar.gz

Collecting protobuf>=3.0.0 (from
mysql-connector-python>=1.1.6->Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/9d/61/54c3a9cfde6ffe0ca6a1786ddb8874263f4ca32e7693ad383bd8cf935015/protobuf-3.5.2.post1-cp27-cp27mu-manylinux1_x86_64.whl
(6.4MB)

Collecting urllib3<1.23,>=1.21.1 (from requests>=2.2.1->Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl
(132kB)

Collecting idna<2.7,>=2.5 (from requests>=2.2.1->Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/27/cc/6dd9a3869f15c2edfab863b992838277279ce92663d334df9ecf5106f5c6/idna-2.6-py2.py3-none-any.whl
(56kB)

Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.2.1->Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl
(133kB)

Collecting certifi>=2017.4.17 (from requests>=2.2.1->Marvin==4.11.1.0)

  Downloading
https://files.pythonhosted.org/packages/7c/e6/92ad559b7192d846975fc916b65f667c7b8c3a32bea7372340bfe9a15fa5/certifi-2018.4.16-py2.py3-none-any.whl
(150kB)

Collecting cryptography>=1.5 (from paramiko>=1.13.0->Marvin==4.11.1.0)

  Downloading

Re: Snapshots only on Primary Storage feature

2018-05-18 Thread ilya musayev
Perhaps bring it back into 4.11.1?
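
(For reference, the behavior being discussed is driven by the global setting
from PR 1697; if restored, toggling it would presumably look roughly like this
via cloudmonkey - setting name per the thread below:

  update configuration name=snapshot.backup.rightafter value=false
)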

On Fri, May 18, 2018 at 9:28 AM Suresh Kumar Anaparti <
sureshkumar.anapa...@gmail.com> wrote:

> Si / Will,
>
> That is just FYI, if anyone uses VMware with that flag set to false. I'm
> neither against the feature nor telling to rip that out.
>
> You are correct, the PR 2081 supports KVM and Xen as the volume snapshots
> are directly supported on them and backup operation is not tightly coupled
> with the create operation.
>
> -Suresh
>
> On Fri, May 18, 2018 at 7:38 PM, Simon Weller 
> wrote:
>
> > There are plenty of features in ACS that are particular to a certain
> > hypervisor (or hypervisor set), including VMware specific items.
> >
> > It was never claimed this feature worked across all hypervisors. In
> > addition to that, the default was to leave the existing functionality
> > exactly the way it was originally implemented and if a user wished to
> > change the functionality they could via a global config variable.
> >
> > Your original spec for PR 2081 in confluence states that the PR was
> > targeted towards KVM and Xen, so I'm confused as to why VMware is even
> > being mentioned here.
> >
> >
> > This is a major feature regression that a number of organizations/service
> > providers are relying on and it wasn't called out when the PR was
> submitted.
> >
> >
> > 
> > From: Will Stevens 
> > Sent: Friday, May 18, 2018 6:12 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: Snapshots only on Primary Storage feature
> >
> > Just because it does not work for VMware should not be a reason to rip out
> the
> > functionality for other hypervisors where it is being used though.
> >
> > I know we also have the requirement that snapshots are not automatically
> > replicated to secondary storage, so this feature is useful to us.
> >
> > I don't understand the rationale for removing the feature just because it
> > does not work on VMware.
> >
> > On Fri, May 18, 2018, 6:27 AM Suresh Kumar Anaparti, <
> > sureshkumar.anapa...@gmail.com> wrote:
> >
> > > Si,
> > >
> > > The PR #1697 with the global setting snapshot.backup.rightafter =
> > > false doesn't
> > > work for VMware, as create snapshot never takes a snapshot in the Primary
> > pool,
> > > it just returns the snapshot uuid. The backup snapshot does the
> complete
> > > job - creates a VM snapshot with the uuid, extracts and exports the
> > target
> > > volume to secondary. On demand backup snapshot doesn't work as there is
> > no
> > > snapshot in primary. Also, there'll be only one entry with Primary
> store
> > > role in snapshot_store_ref, which is the latest snapshot taken for that
> > > volume.
> > >
> > > -Suresh
> > >
> > > On Fri, May 18, 2018 at 1:03 AM, Simon Weller
> > > wrote:
> > >
> > > > The whole point of the original PR was to optionally disable this
> > > > functionality.
> > > >
> > > > We don't expose views of the backup state to our customers (we have
> our
> > > > own customer interfaces) and it's a large waste of space for us to be
> > > > backing up tons of VM images when we have a solid primary storage
> > > > infrastructure that already has lots of resiliency.
> > > >
> > > >
> > > > I guess we're going to have to revisit this again before we can
> > consider
> > > > rebasing on 4.11.
> > > >
> > > > 
> > > > From: Suresh Kumar Anaparti 
> > > > Sent: Thursday, May 17, 2018 2:21 PM
> > > > To: dev
> > > > Subject: Re: Snapshots only on Primary Storage feature
> > > >
> > > > Hi Si,
> > > >
> > > > No, it is not possible to disable the backup to secondary. It copies the
> > volume
> > > > snapshot to secondary in a background thread using asyncBackup param
> > (set
> > > > to true) and allows other operations during that time.
> > > >
> > > > I understand that the backup was on demand when any operations are
> > > > performed on the snapshot. But, backup during that time may take
> > > > considerable time (depending on the snapshot size and the network
> > > > bandwidth), which can result in the job timeout and the User may
> assume
> > > > that it is already Backed up based on its state, unless it is
> > documented.
> > > >
> > > > -Suresh
> > > >
> > > > On Fri, May 18, 2018 at 12:23 AM, Simon Weller
> > > > wrote:
> > > >
> > > > > Suresh,
> > > > >
> > > > >
> > > > > With this new merged  PR, is it possible to disable the backup to
> > > > > secondary completely? I can't tell from the reference spec and
> we're
> > > not
> > > > on
> > > > > a 4.10/4.11 base yet.
> > > > >
> > > > > For the record, in the instances where a volume or template from
> > > snapshot
> > > > > was required, the backup image was copied on demand to secondary.
> > > > >
> > > > > In an ideal world, secondary storage wouldn't even be involved in
> > most
> > > of
> > > > > these options, instead using the native 

Re: Dynamic roles question

2018-05-18 Thread ilya musayev
Ivan

This is already done in 4.11. I'm not next to a computer to check, but ShapeBlue
has created a feature that allows for movement between different roles.

Regards
ilya

On Thu, May 17, 2018 at 6:03 AM Ivan Kudryavtsev <kudryavtsev...@bw-sw.com>
wrote:

> Hello, community.
>
> I'm thinking about implementing a feature for accounts which permits
> changing the account role. Basically, the rationale is trial or demonstration
> modes which restrict users from doing extra stuff, like VM creation,
> service offering changes, etc. Basically, after the trial the account should be
> switched to a normal mode or removed. By permitting such role switching we
> can support the feature; otherwise we have to create a unique role for every
> user and manage it separately. Please let me know your thoughts about
> that. Have a good day.
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/
>


Re: [DISCUSS] CloudStack graceful shutdown

2018-04-21 Thread ilya musayev
Rafael

What you are suggesting was already implemented. We've created load
balancing algorithms, but we did not take into account the LB algo for
maintenance (yet). Rohit and ShapeBlue were the developers behind the
feature.

What needs to happen is a tweak to the LB algorithms to become MS-maintenance
aware, or new LB algos altogether. Essentially we need to merge
your work and this feature. Please read the FS below.

Functional Spec:


The new CA framework introduced basic support for a comma-separated
list of management servers for the agent, which makes an external LB
unnecessary.

This extends that feature to implement LB sorting algorithms that
sort the management server list before it is sent to the agents.
This adds a central intelligence in the management server and adds
additional enhancements to Agent class to be algorithm aware and
have a background mechanism to check/fallback to preferred management
server (assumed as the first in the list). This is support for any
indirect agent such as the KVM, CPVM and SSVM agent, and would
provide support for management server host migration during upgrade
(when instead of in-place, new hosts are used to setup new mgmt server).

This FR introduces two new global settings:

   - indirect.agent.lb.algorithm: The algorithm for the indirect agent LB.
   - indirect.agent.lb.check.interval: The preferred host check interval
   for the agent's background task that checks and switches to agent's
   preferred host.

The indirect.agent.lb.algorithm supports following algorithm options:

   - static: use the list as provided.
   - roundrobin: evenly spreads hosts across management servers based on
   host's id.
   - shuffle: (pseudo) randomly sorts the list (not recommended for
   production).

Any changes to the global settings - indirect.agent.lb.algorithm and
host - do not require restarting the management server(s) or the
agents. A message-bus-based system dynamically reacts to changes in these
global settings and propagates them to all connected agents.

The comma-separated management server list is propagated to agents in the
following cases:

   - Addition of a host (including SSVM and CPVM system VMs).
   - Connection or reconnection by the agents to a management server.
   - After the admin changes the 'host' and/or the
   'indirect.agent.lb.algorithm' global settings.

On the agent side, the 'host' setting is saved in its properties file as:
host=<comma-separated management server list>@<algorithm>.

First the agent connects to the management server and sends its current
management server list, which is compared by the management server and
in case of failure a new/updated list is sent for the agent to persist.

From the agent's perspective, the first address in the propagated list
will be considered the preferred host. A new background task can be
activated by configuring the indirect.agent.lb.check.interval which is
a cluster level global setting from CloudStack and admins can also
override this by configuring the 'host.lb.check.interval' in the
agent.properties file.

Every time the agent gets a ms-host list and the algorithm, the host-specific
background check interval is also sent, and it dynamically reconfigures
the background task without needing to restart agents.

Note: The 'static' and 'roundrobin' algorithms strictly check for the
order as expected by them; the 'shuffle' algorithm just checks
for content and not the order of the comma-separated ms host addresses.
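
(For illustration, an agent.properties under the static algorithm might carry
something like the following - the addresses and interval value are
hypothetical, and the list/algorithm are normally pushed down by the MS:

  # preferred management server first
  host=10.1.1.10,10.1.1.11@static
  # optional per-agent override of the cluster-wide check interval
  host.lb.check.interval=60
)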

Regards
ilya


On Fri, Apr 20, 2018 at 1:01 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Is that management server load balancing feature using static
> configurations? I heard about it on the mailing list, but I did not follow
> the implementation.
>
> I do not see many problems with agents reconnecting. We can implement in
> agents (not just KVM, but also system VMs) a logic that instead of using a
> static pool of management servers configured in a properties file, they
> dynamically request a list of available management servers via that list
> management servers API method. This would require us to configure agents
> with a load balancer URL that executes the balancing between multiple
> management servers.
>
> I am +1 to remove the need for that VIP, which executes the load balance
> for connecting agents to management servers.
>
> On Fri, Apr 20, 2018 at 4:41 PM, ilya musayev <
> ilya.mailing.li...@gmail.com>
> wrote:
>
> > Rafael and Community
> >
> > All is well and good and i think we are thinking along the similar lines
> -
> > the only issue that i see right now with any approach is KVM Agents (or
> > direct agents) and using LoadBalancer on 8250.
> >
> > Here is a scenario:
> >
> > You have 2 Management Server setup fronted with a VIP on 8250.
> > The LB Algorithm is either Round Robin or Least Connections used.
> > You initiate a maintenance mode operation on o

Re: [DISCUSS] CloudStack graceful shutdown

2018-04-20 Thread ilya musayev
Rafael and Community

All is well and good, and I think we are thinking along similar lines.
The only issue that I see right now with any approach is KVM agents (or
direct agents) and using a load balancer on 8250.

Here is a scenario:

You have 2 Management Server setup fronted with a VIP on 8250.
The LB Algorithm is either Round Robin or Least Connections used.
You initiate a maintenance mode operation on one of the MS servers (call it
MS1) - assume you have a long running migration job that needs 60 minutes
to complete.
We attempt to evacuate the agents by telling them to disconnect and
reconnect again.
If we are using an LB on 8250 with:
1) Least Connections - all agents will continuously try to connect
to the MS1 node that is attempting to go down for maintenance. Essentially,
with this LB configuration, the operation will never complete.
2) Round Robin - this will take a while, but eventually you will get all
nodes connected to MS2.

The current limitation is the usage of an external LB on 8250. For this
operation to work without issue, agents must connect to the MS servers without
an LB. This is a recent feature we've developed with ShapeBlue, where we
maintain the list of CloudStack management servers in the agent.properties
file.

Unless you can think of another solution, it appears we may be forced
to bypass the 8250 VIP LB and use the new feature to maintain the list of
management servers within agent.properties.


I need to run now, let me know what your thoughts are.

Regards
ilya



On Tue, Apr 17, 2018 at 8:27 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Ilya and others,
>
> We have been discussing this idea of graceful/nice shutdown.  Our feeling
> is that we (in CloudStack community) might have been trying to solve this
> problem with too much scripting. What if we developed a more integrated
> (native) solution?
>
> Let me explain our idea.
>
> ACS has a table called “mshost”, which is used to store management server
> information. During balancing and when jobs are dispatched to other
> management servers this table is consulted/queried.  Therefore, we have
> been discussing the idea of creating a management API for management
> servers.  We could have an API method that changes the state of management
> servers to “prepare to maintenance” and then “maintenance” (as soon as all
> of the task/jobs it is managing finish). The idea is that during
> rebalancing we would remove the hosts of servers that are not in “Up” state
> (of course we would also ignore hosts in the aforementioned state to
> receive hosts to manage).  Moreover, when we send/dispatch jobs to other
> management servers, we could ignore the ones that are not in “Up” state
> (which is something already done).
>
> By doing this, the graceful shutdown could be executed in a few steps.
>
> 1 – issue the maintenance method for the management server you desire
> 2 – wait until the MS goes into maintenance mode, while there are still
> running jobs it (the management server) will be maintained in prepare for
> maintenance
> 3 – execute the Linux shutdown command
>
> We would need other API methods to manage MSs then. An (i) API method to
> list MSs, and we could even create an (ii) API to remove old/de-activated
> management servers, which we currently do not have (forcing users to apply
> changed directly in the database).
>
> Moreover, in this model, we would not kill hanging jobs; we would wait
> until they expire and ACS expunges them. Of course, it is possible to
> develop a forceful maintenance method as well. Then, when the “prepare for
> maintenance” takes longer than a parameter, we could kill hanging jobs.
>
> All of this would allow the MS to be kept up and receiving requests until
> it can be safely shut down. What do you guys think about this approach?
>
> On Tue, Apr 10, 2018 at 6:52 PM, Yiping Zhang <yzh...@marketo.com> wrote:
>
> > As a cloud admin, I would love to have this feature.
> >
> > It so happens that I just accidentally restarted my ACS management server
> > while two instances are migrating to another Xen cluster (via storage
> > migration, not live migration).  As results, both instances
> > ends up with corrupted data disk which can't be reattached or migrated.
> >
> > Any feature which prevents this from happening would be great.  A low
> > hanging fruit is simply checking
> > if there are any async jobs running, especially any kind of migration
> jobs
> > or other known long-running types of
> > jobs, and warning the operator so that he has a chance to abort server
> > shutdowns.
> >
> > Yiping
> >
> > On 4/5/18, 3:13 PM, "ilya musayev" <ilya.mailing.li...@gmail.com>
> wrote:
> >
> > Andrija
> >

Re: [DISCUSS] CloudMonkey 6.0.0-alpha (about six years after initial version in 2012)

2018-04-10 Thread ilya musayev
This is great news, and CloudMonkey is used more than you think :)

I will share the news with my team.

On Tue, Apr 10, 2018 at 5:07 AM Will Stevens  wrote:

> +1. It has been a great tool for years.  Looking forward to the golang
> version.
>
> On Apr 10, 2018 7:59 AM, "Rohit Yadav"  wrote:
>
> All,
>
>
> A few months ago, I started porting the current code to be compatible with
> both Python2 and Python3 to make it run with both Python2 (for older
> systems such as CentOS6 etc.) and Python3 (for newer platforms). The work
> was not a success; another problem was that cloudmonkey was not easy to
> install and required several dependencies that would certainly fail on
> older systems with Python 2.6.x.
>
>
> Considering all things, I started working on an experimental golang port
> [2] and am happy to announce that the initial alpha version shows a lot of
> promise and is 5-20x faster than the python based cli [1]. The compiled
> binary runs on several targets, including windows [1].
>
>
> I cannot commit to a timeline/release date yet but the aim of this thread
> is to discuss and propose the simplification of the CLI which may require
> removal of some features and some breaking changes may be introduced:
>
>
> - Make json the default output format
>
> - Remove coloured output
>
> - Remove unpopular, least-used output formats? xml, default (line-separated
> key=value), table?
>
> - Remove `set` options: color, expires, (custom) prompt
>
> - Remove `paramcompletion` option, this will be true/enabled by default
>
> - Remove signature version and expires (I'm not sure why this is needed or
> used)
>
> - Remove history_file, cache_file, log_file options; use the default paths
> in the folder at (user's home directory)/.cloudmonkey.
>
> - Remove shell based execution from interactive interpreter mode (using !
> or shell keywords)
>
> - Remove support for CloudStack older than 4.5, i.e. it won't be tested
> against older cloudstacks.
>
> - Remove a default API cache with the client, for a fresh env without any
> ~/.cloudmonkey/cache; users can run `sync` command against a management
> server.
>
> - Interactive API parameter completion in CLI mode: the current API
> parameter completion requires the user to manually copy/paste the uuids, or
> autocomplete by typing parts of the uuids/option.
>
> - Improve how maps are passed.
>
> - Good to have: bash/zsh completion.
>
>
> Please share your thoughts and objections (especially if you're using any of
> the features proposed for removal in version 6.x).
>
>
> [1] https://twitter.com/rhtyd/status/983448788059770882
>
> [2] https://github.com/rhtyd/cmk
>
>
> - Rohit
>
> 
>
>
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HS, UK
> @shapeblue
>


Re: [DISCUSS] CloudStack graceful shutdown

2018-04-05 Thread ilya musayev
Andrija

This is a tough scenario.

As an admin, the way I would have handled this situation is to advertise
the upcoming outage and then take away specific API commands from a user a
day before, so he does not cause any long-running async jobs. Once
maintenance completes, enable the API commands for the user again. However,
I don't know who your user base is and whether this would be an acceptable
solution.
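
(With dynamic roles this could even be scripted - a sketch via cloudmonkey,
with the role UUID and the exact commands to deny being hypothetical:

  # a day ahead, deny the long-running-job commands for the affected role
  create rolepermission roleid=<role-uuid> rule=migrateVirtualMachine permission=deny
  create rolepermission roleid=<role-uuid> rule=createSnapshot permission=deny
  # after maintenance, list and delete the deny rules again
  list rolepermissions roleid=<role-uuid>
  delete rolepermission id=<permission-uuid>
)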

Perhaps also investigate what can be done to speed up your long running
tasks...

As a side note, we will be working on a feature that would allow for
graceful termination of the process/job, meaning that if the agent notices a
disconnect or termination request, it will abort the command in flight. We
can also consider restarting these tasks again or whatnot, but that would
not be part of this enhancement.

Regards
ilya

On Thu, Apr 5, 2018 at 6:47 AM, Andrija Panic <andrija.pa...@gmail.com>
wrote:

> Hi Ilya,
>
> thanks for the feedback - but in "real world", you need to "understand"
> that 60min is next to useless timeout for some jobs (if I understand this
> specific parameter correctly ?? - job is really canceled, not only job
> monitoring is canceled ???) -
>
> My value for the  "job.cancel.threshold.minutes" is 2880 minutes (2 days?)
>
> I can tell you when you have CEPH/NFS (CEPH even "worse" case, since slower
> read during qemu-img convert process...) of 500GB, then imagine snapshot
> job will take many hours. Should I mention 1TB volumes (yes, we had
> clients like that...)
> Then attaching a 1TB volume that was uploaded to ACS (lives originally on
> Secondary Storage, and takes time to be copied over to NFS/CEPH) will take
> up to few hours.
> Then migrating 1TB volume from NFS to CEPH, or CEPH to NFS, also takes
> time...etc.
>
> I'm just giving you feedback as "user", admin of the cloud, zero DEV skills
> here :) , just to make sure you make practical decisions (and I admit I
> might be wrong with my stuff, but just giving you feedback from our public
> cloud setup)
>
>
> Cheers!
>
>
>
>
> On 5 April 2018 at 15:16, Tutkowski, Mike <mike.tutkow...@netapp.com>
> wrote:
>
> > Wow, there’s been a lot of good details noted from several people on how
> > this process works today and how we’d like it to work in the near future.
> >
> > 1) Any chance this is already documented on the Wiki?
> >
> > 2) If not, any chance someone would be willing to do so (a flow diagram
> > would be particularly useful).
> >
> > > On Apr 5, 2018, at 3:37 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> > wrote:
> > >
> > > Hi all,
> > >
> > > Good point ilya, but as stated by Sergey there are more things to consider
> > > before being able to do a proper shutdown. I augmented the script I gave
> > you
> > > originally and changed code in CS. What we're doing for our environment
> > is
> > > as follow:
> > >
> > > 1. the MGMT looks for a change in the file /etc/lb-agent which contains
> > > keywords for HAproxy[2] (ready, maint) so that HA-proxy can disable the
> > > mgmt on the keyword "maint" and the mgmt server stops a couple of
> > > threads[1] to stop processing async jobs in the queue
> > > 2. Looks for the async jobs and wait until there is none to ensure you
> > can
> > > send the reconnect commands (if jobs are running, a reconnect will
> result
> > > in a failed job since the result will never reach the management
> server -
> > > the agent waits for the current job to be done before reconnecting, and
> > > discard the result... rooms for improvement here!)
> > > 3. Issue a reconnectHost command to all the hosts connected to the mgmt
> > > server so that they reconnect to another one, otherwise the mgmt must
> be
> > up
> > > since it is used to forward commands to agents.
> > > 4. when all agents are reconnected, we can shutdown the management
> server
> > > and perform the maintenance.
> > >
> > > One issue remains for me, during the reconnect, the commands that are
> > > processed at the same time should be kept in a queue until the agents
> > have
> > > finished any current jobs and have reconnected. Today the little time
> > > window during which the reconnect happens can lead to failed jobs due
> to
> > > the agent not being connected at the right moment.
> > >
> > > I could push a PR for the change to stop some processing threads based
> on
> > > the content of a file. It's possible also to cancel the drain of the
> > > management by simply changing 

Re: [DISCUSS] CloudStack graceful shutdown

2018-04-05 Thread ilya musayev
After much useful input from many of you, I realize my approach is
somewhat incomplete and possibly very optimistic.

Speaking to Marcus, here is what we propose as an alternate solution. I was
hoping to stay outside of the "core", but it looks like there is no other
way around it.

Proposed functionality: Management Server feature to prepare for
maintenance (I'm thinking this should be applicable to multi-node setups only):
* Drain all connections on 8250 for KVM and other agents, by issuing a
reconnect command on the agents.
* While 8250 is still listening, any new attempt to connect will be blocked and
the agent will be asked to reconnect (if you have an LB, it will route it to
another node and eventually reconnect all agents to other nodes - this
might be an area where Marc's HAProxy solution would plug in). In 4.11
there is a new framework for managing agent connectivity without needing a
load balancer; need to investigate how this will work.
* Allow the existing running async tasks to complete, up to the
"job.cancel.threshold.minutes"
max value.
* Queue the new tasks and process them on the next management server.
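
(A rough sketch of the drain step via cloudmonkey, assuming it points at a
surviving MS - the host UUID is hypothetical:

  # find agents still pinned to the MS going into maintenance
  list hosts type=Routing filter=id,name,managementserverid
  # nudge each such agent; with an LB or the 4.11 agent host-list it
  # reconnects to another MS
  reconnect host id=<host-uuid>
)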

Still don't know what will happen with Xen or VMware in this case - perhaps
the ShapeBlue team can help answer or fill in the blanks for us.

Regards,
ilya

On Thu, Apr 5, 2018 at 2:48 PM, ilya musayev <ilya.mailing.li...@gmail.com>
wrote:

> Hi Sergey
>
> Glad to see you are doing well,
>
> I was gonna say drop "enterprise virtualization company" and save a
> $fortune$ - but it's not for everyone :)
>
> I'll post another proposed solution to bottom of this thread.
>
> Regards
> ilya
>
>
> On Wed, Apr 4, 2018 at 5:22 PM, Sergey Levitskiy <serg...@hotmail.com>
> wrote:
>
>> Now without spellchecking :)
>>
>> This is not simple e.g. for VMware. Each management server also acts as
>> an agent proxy, so tasks against a particular ESX host will always be
>> forwarded. The right answer will be to support a native “maintenance mode”
>> for management server. When entered to such mode the management server
>> should release all agents including SSVM, block/redirect API calls and
>> login request and finish all async job it originated.
>>
>>
>>
>> On Apr 4, 2018, at 5:15 PM, Sergey Levitskiy <serg...@hotmail.com> wrote:
>>
>> This is not simple e.g. for VMware. Each management server also acts as
>> an agent proxy so tasks against a particular ESX host will be always
>> forwarded. That right answer will be to a native support for “maintenance
>> mode” for management server. When entered to such mode the management
>> server should release all agents including save, block/redirect API calls
>> and login request and finish all a sync job it originated.
>>
>> Sent from my iPhone
>>
>> On Apr 4, 2018, at 3:31 PM, Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>>
>> Ilya, still regarding the management server that is being shut down issue;
>> if other MSs/or maybe system VMs (I am not sure to know if they are able
>> to
>> do such tasks) can direct/redirect/send new jobs to this management server
>> (the one being shut down), the process might never end because new tasks
>> are always being created for the management server that we want to shut
>> down. Is this scenario possible?
>>
>> That is why I mentioned blocking the port 8250 for the
>> “graceful-shutdown”.
>>
>> If this scenario is not possible, then everything s fine.
>>
>>
>> On Wed, Apr 4, 2018 at 7:14 PM, ilya musayev <
>> ilya.mailing.li...@gmail.com>
>> wrote:
>>
>> I'm thinking of using a configuration from "job.cancel.threshold.minutes"
>> -
>> it will be the longest
>>
>> "category": "Advanced",
>>
>> "description": "Time (in minutes) for async-jobs to be forcely
>> cancelled if it has been in process for long",
>>
>> "name": "job.cancel.threshold.minutes",
>>
>> "value": "60"
>>
>>
>>
>>
>> On Wed, Apr 4, 2018 at 1:36 PM, Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>>
>> Big +1 for this feature; I only have a few doubts.
>>
> * Regarding the tasks/jobs that management servers (MSs) execute: do
> these
> tasks originate from requests that come to the MS, or is it possible for
> requests received by one management server to be executed by another? I
>> 

Re: [DISCUSS] CloudStack graceful shutdown

2018-04-05 Thread ilya musayev
Hi Sergey

Glad to see you are doing well,

I was gonna say drop "enterprise virtualization company" and save a
$fortune$ - but it's not for everyone :)

I'll post another proposed solution to bottom of this thread.

Regards
ilya


On Wed, Apr 4, 2018 at 5:22 PM, Sergey Levitskiy <serg...@hotmail.com>
wrote:

> Now without spellchecking :)
>
> This is not simple e.g. for VMware. Each management server also acts as an
> agent proxy, so tasks against a particular ESX host will always be
> forwarded. The right answer will be to support a native “maintenance mode”
> for management server. When entered to such mode the management server
> should release all agents including SSVM, block/redirect API calls and
> login request and finish all async job it originated.
>
>
>
> On Apr 4, 2018, at 5:15 PM, Sergey Levitskiy <serg...@hotmail.com> wrote:
>
> This is not simple e.g. for VMware. Each management server also acts as an
> agent proxy so tasks against a particular ESX host will be always
> forwarded. That right answer will be to a native support for “maintenance
> mode” for management server. When entered to such mode the management
> server should release all agents including save, block/redirect API calls
> and login request and finish all a sync job it originated.
>
> Sent from my iPhone
>
> On Apr 4, 2018, at 3:31 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> Ilya, still regarding the management server that is being shut down issue;
> if other MSs/or maybe system VMs (I am not sure to know if they are able to
> do such tasks) can direct/redirect/send new jobs to this management server
> (the one being shut down), the process might never end because new tasks
> are always being created for the management server that we want to shut
> down. Is this scenario possible?
>
> That is why I mentioned blocking the port 8250 for the “graceful-shutdown”.
>
> If this scenario is not possible, then everything s fine.
>
>
> On Wed, Apr 4, 2018 at 7:14 PM, ilya musayev <ilya.mailing.li...@gmail.com>
> wrote:
>
> I'm thinking of using a configuration from "job.cancel.threshold.minutes" -
> it will be the longest
>
> "category": "Advanced",
>
> "description": "Time (in minutes) for async-jobs to be forcely
> cancelled if it has been in process for long",
>
> "name": "job.cancel.threshold.minutes",
>
> "value": "60"
>
>
>
>
> On Wed, Apr 4, 2018 at 1:36 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> Big +1 for this feature; I only have a few doubts.
>
> * Regarding the tasks/jobs that management servers (MSs) execute: do
> these
> tasks originate from requests that come to the MS, or is it possible for
> requests received by one management server to be executed by another? I
> mean,
> if I execute a request against MS1, will this request always be
> executed/threated by MS1, or is it possible that this request is executed
> by another MS (e.g. MS2)?
>
> * I would suggest that after we block traffic coming from
> 8080/8443/8250(we
> will need to block this as well right?), we can log the execution of
> tasks.
> I mean, something saying, there are XXX tasks (enumerate tasks) still
> being
> executed, we will wait for them to finish before shutting down.
>
> * The timeout (60 minutes suggested) could be global settings that we can
> load before executing the graceful-shutdown.
>
> On Wed, Apr 4, 2018 at 5:15 PM, ilya musayev <
> ilya.mailing.li...@gmail.com>
>
> wrote:
>
> Use case:
> In any environment - time to time - administrator needs to perform a
> maintenance. Current stop sequence of cloudstack management server will
> ignore the fact that there may be long running async jobs - and
> terminate
> the process. This in turn can create a poor user experience and
> occasional
> inconsistency  in cloudstack db.
>
> This is especially painful in large environments where the user has
> thousands of nodes and there is a continuous patching that happens
> around
> the clock - that requires migration of workload from one node to
> another.
>
> With that said - i've created a script that monitors the async job
> queue
> for given MS and waits for it complete all jobs. More details are
> posted
> below.
>
> I'd like to introduce "graceful-shutdown" into the systemctl/service of
> cloudstack-management service.
>

Re: [DISCUSS] CloudStack graceful shutdown

2018-04-05 Thread ilya musayev
Marc

Thank you for posting the details on how your implementation works.
Unfortunately for us, HAProxy is not an option, hence we can't take
advantage of this implementation - but please do share it with the community;
perhaps it will help someone else.

I'm going to post to the bottom of this thread with new proposed solution.

Regards
ilya

On Thu, Apr 5, 2018 at 2:36 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
wrote:

> Hi all,
>
> Good point ilya, but as stated by Sergey there are more things to consider
> before being able to do a proper shutdown. I augmented the script I gave you
> originally and changed code in CS. What we're doing for our environment is
> as follow:
>
> 1. the MGMT looks for a change in the file /etc/lb-agent which contains
> keywords for HAproxy[2] (ready, maint) so that HA-proxy can disable the
> mgmt on the keyword "maint" and the mgmt server stops a couple of
> threads[1] to stop processing async jobs in the queue
> 2. Looks for the async jobs and wait until there is none to ensure you can
> send the reconnect commands (if jobs are running, a reconnect will result
> in a failed job since the result will never reach the management server -
> the agent waits for the current job to be done before reconnecting, and
> discard the result... rooms for improvement here!)
> 3. Issue a reconnectHost command to all the hosts connected to the mgmt
> server so that they reconnect to another one, otherwise the mgmt must be up
> since it is used to forward commands to agents.
> 4. when all agents are reconnected, we can shutdown the management server
> and perform the maintenance.
>
> One issue remains for me, during the reconnect, the commands that are
> processed at the same time should be kept in a queue until the agents have
> finished any current jobs and have reconnected. Today the little time
> window during which the reconnect happens can lead to failed jobs due to
> the agent not being connected at the right moment.
>
> I could push a PR for the change to stop some processing threads based on
> the content of a file. It's possible also to cancel the drain of the
> management by simply changing the content of the file back to "ready"
> again, instead of "maint" [2].
>
> [1] AsyncJobMgr-Heartbeat, CapacityChecker, StatsCollector
> [2] HA proxy documentation on agent checker: https://cbonte.github.io/
> haproxy-dconv/1.6/configuration.html#5.2-agent-check
>
> Regarding your issue on the port blocking, I think it's fair to consider
> that if you want to shut down your server at some point, you have to stop
> serving (some) requests. Here the only way it's to stop serving everything.
> If the API had a REST design, we could reject any POST/PUT/DELETE
> operations and allow GET ones. I don't know how hard it would be today to
> only allow listBaseCmd operations to be more friendly with the users.
>
> Marco
>
>
> On Thu, Apr 5, 2018 at 2:22 AM, Sergey Levitskiy <serg...@hotmail.com>
> wrote:
>
> > Now without spellchecking :)
> >
> > This is not simple e.g. for VMware. Each management server also acts as
> an
> > agent proxy, so tasks against a particular ESX host will always be
> > forwarded. The right answer will be to support a native “maintenance
> mode”
> > for management server. When entered to such mode the management server
> > should release all agents including SSVM, block/redirect API calls and
> > login request and finish all async job it originated.
> >
> >
> >
> > On Apr 4, 2018, at 5:15 PM, Sergey Levitskiy <serg...@hotmail.com>
> > wrote:
> >
> > This is not simple e.g. for VMware. Each management server also acts as
> an
> > agent proxy so tasks against a particular ESX host will be always
> > forwarded. That right answer will be to a native support for “maintenance
> > mode” for management server. When entered to such mode the management
> > server should release all agents including save, block/redirect API calls
> > and login request and finish all a sync job it originated.
> >
> > Sent from my iPhone
> >
> > On Apr 4, 2018, at 3:31 PM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com<mailto:rafaelweingart...@gmail.com>> wrote:
> >
> > Ilya, still regarding the management server that is being shut down
> issue;
> > if other MSs/or maybe system VMs (I am not sure to know if they are able
> to
> > do such tasks) can direct/redirect/send new jobs to this management
> server
> > (the one being shut down), the process might never end because new tasks
> > are always being created for the management server that we want to shu

Re: [DISCUSS] CloudStack graceful shutdown

2018-04-04 Thread ilya musayev
I'm thinking of using a configuration from "job.cancel.threshold.minutes" -
it will be the longest

  "category": "Advanced",

  "description": "Time (in minutes) for async-jobs to be forcely
cancelled if it has been in process for long",

  "name": "job.cancel.threshold.minutes",

  "value": "60"




On Wed, Apr 4, 2018 at 1:36 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Big +1 for this feature; I only have a few doubts.
>
> * Regarding the tasks/jobs that management servers (MSs) execute: do these
> tasks originate from requests that come to the MS, or is it possible for
> requests received by one management server to be executed by another? I mean,
> if I execute a request against MS1, will this request always be
> executed/threated by MS1, or is it possible that this request is executed
> by another MS (e.g. MS2)?
>
> * I would suggest that after we block traffic coming from 8080/8443/8250(we
> will need to block this as well right?), we can log the execution of tasks.
> I mean, something saying, there are XXX tasks (enumerate tasks) still being
> executed, we will wait for them to finish before shutting down.
>
> * The timeout (60 minutes suggested) could be global settings that we can
> load before executing the graceful-shutdown.
>
> On Wed, Apr 4, 2018 at 5:15 PM, ilya musayev <ilya.mailing.li...@gmail.com
> >
> wrote:
>
> > Use case:
> > In any environment - time to time - administrator needs to perform a
> > maintenance. Current stop sequence of cloudstack management server will
> > ignore the fact that there may be long running async jobs - and terminate
> > the process. This in turn can create a poor user experience and
> occasional
> > inconsistency  in cloudstack db.
> >
> > This is especially painful in large environments where the user has
> > thousands of nodes and there is a continuous patching that happens around
> > the clock - that requires migration of workload from one node to another.
> >
> > With that said - I've created a script that monitors the async job queue
> > for a given MS and waits for it to complete all jobs. More details are posted
> > below.
> >
> > I'd like to introduce "graceful-shutdown" into the systemctl/service of
> > cloudstack-management service.
> >
> > The details of how it will work is below:
> >
> > Workflow for graceful shutdown:
> >   Using iptables/firewalld - block any connection attempts on 8080/8443
> (we
> > can identify the ports dynamically)
> >   Identify the MSID for the node, using the proper msid - query async_job
> > table for
> > 1) any jobs that are still running (or job_status=“0”)
> > 2) job_dispatcher not like “pseudoJobDispatcher"
> > 3) job_init_msid=$my_ms_id
> >
> > Monitor this async_job table for 60 minutes - until all async jobs for
> MSID
> > are done, then proceed with shutdown
> > If failed for any reason or terminated, catch the exit via trap
> command
> > and unblock the 8080/8443
> >
> > Comments are welcome
> >
> > Regards,
> > ilya
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: [DISCUSS] CloudStack graceful shutdown

2018-04-04 Thread ilya musayev
Rafael

> * Regarding the tasks/jobs that management servers (MSs) execute: do
these
tasks originate from requests that come to the MS, or is it possible for
requests received by one management server to be executed by another? I mean,
if I execute a request against MS1, will this request always be
executed/threated by MS1, or is it possible that this request is executed
by another MS (e.g. MS2)?

Yes, it's possible, but it will be tracked in async_job with the proper MS
that is responsible for the task.

My initial goal was to prevent the user from creating more async jobs on the
node that's about to go down for maintenance, but as I'm thinking about it,
I don't know if it matters, since an async job will be executed on the MS
node that tracks a specific hypervisor/agent, as defined in the cloud.host
table.

Maybe I'll leave off blocking 8080/8443 and just focus on tracking
async_jobs instead. Assuming you are managing your MSs with a load balancer,
it should be smart enough to shift user traffic to an MS that is up.
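
(For what it's worth, the per-MS check itself is a single query against the
async_job table - the msid value is hypothetical:

  mysql -u cloud -p cloud -e "SELECT id, job_cmd, created FROM async_job \
      WHERE job_status = 0 \
        AND job_dispatcher NOT LIKE 'pseudoJobDispatcher' \
        AND job_init_msid = 345049103441;"
)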

> * I would suggest that after we block traffic coming from
8080/8443/8250(we
will need to block this as well right?), we can log the execution of tasks.
I mean, something saying, there are XXX tasks (enumerate tasks) still being
executed, we will wait for them to finish before shutting down

Blocking 8250 is a bit too aggressive in my opinion, and we don't want to do
that. If you block 8250 and you have a long-running task you are waiting on to
complete, it may fail, because you've blocked agent communication on 8250.

Thanks
ilya


On Wed, Apr 4, 2018 at 1:36 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Big +1 for this feature; I only have a few doubts.
>
> * Regarding the tasks/jobs that management servers (MSs) execute; are these
> tasks originate from requests that come to the MS, or is it possible that
> requests received by one management server to be executed by other? I mean,
> if I execute a request against MS1, will this request always be
> executed/threated by MS1, or is it possible that this request is executed
> by another MS (e.g. MS2)?
>
> * I would suggest that after we block traffic coming from 8080/8443/8250(we
> will need to block this as well right?), we can log the execution of tasks.
> I mean, something saying, there are XXX tasks (enumerate tasks) still being
> executed, we will wait for them to finish before shutting down.
>
> * The timeout (60 minutes suggested) could be global settings that we can
> load before executing the graceful-shutdown.
>
> On Wed, Apr 4, 2018 at 5:15 PM, ilya musayev <ilya.mailing.li...@gmail.com
> >
> wrote:
>
> > Use case:
> > In any environment - time to time - administrator needs to perform a
> > maintenance. Current stop sequence of cloudstack management server will
> > ignore the fact that there may be long running async jobs - and terminate
> > the process. This in turn can create a poor user experience and
> occasional
> > inconsistency  in cloudstack db.
> >
> > This is especially painful in large environments where the user has
> > thousands of nodes and there is a continuous patching that happens around
> > the clock - that requires migration of workload from one node to another.
> >
> > With that said - i've created a script that monitors the async job queue
> > for given MS and waits for it complete all jobs. More details are posted
> > below.
> >
> > I'd like to introduce "graceful-shutdown" into the systemctl/service of
> > cloudstack-management service.
> >
> > The details of how it will work is below:
> >
> > Workflow for graceful shutdown:
> >   Using iptables/firewalld - block any connection attempts on 8080/8443
> (we
> > can identify the ports dynamically)
> >   Identify the MSID for the node, using the proper msid - query async_job
> > table for
> > 1) any jobs that are still running (or job_status=“0”)
> > 2) job_dispatcher not like “pseudoJobDispatcher"
> > 3) job_init_msid=$my_ms_id
> >
> > Monitor this async_job table for 60 minutes - until all async jobs for
> MSID
> > are done, then proceed with shutdown
> > If failed for any reason or terminated, catch the exit via trap
> command
> > and unblock the 8080/8443
> >
> > Comments are welcome
> >
> > Regards,
> > ilya
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: [DISCUSS] CloudStack graceful shutdown

2018-04-04 Thread ilya musayev
Andrija

This is the reason for this enhancement: snapshots, migrations and others -
are all async jobs - and are therefore tracked in the async_job table
under a specific MS. It is known they may take a while to complete, and the
last thing we want is to interrupt them.

Depending on what value you have set in Configurations - a job may time out -
but continue working in the background, meaning CloudStack will stop
tracking the async job beyond a specific interval - but the CloudStack agent
will push forward.

I don't see any harm in taking the server offline - if there are no jobs that
are being tracked.

However - we should not stop the server - if we identify any jobs that are
still active. The user can decide to append the forceful shutdown after the
graceful one if they feel like it. For example:

[shell] # service cloudstack-management graceful-shutdown; service
cloudstack-management shutdown

For your issue,

Please check the value for "job.cancel.threshold.minutes"

  "category": "Advanced",

  "description": "Time (in minutes) for async-jobs to be forcely
cancelled if it has been in process for long",

  "name": "job.cancel.threshold.minutes",

  "value": "60"


I propose that the graceful shutdown command source
"job.cancel.threshold.minutes"
as the max wait - before giving up on the endeavor.
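
A sketch of how the wrapper could pick that value up (assuming the global
settings live in the cloud.configuration table, and falling back to 60 if the
query returns nothing):

    # use the job-cancel threshold as the upper bound for the graceful wait
    TIMEOUT_MIN=$(mysql -N cloud -e \
        "SELECT value FROM configuration WHERE name = 'job.cancel.threshold.minutes';")
    TIMEOUT_MIN=${TIMEOUT_MIN:-60}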


The only issue I'm on the fence about - is blocking access to 8080/8443 - if
you have a single-node setup.


There is a chance you may block access to CloudStack for over an hour - and
that may not be what you intended.


Perhaps we add a parameter in db.properties for
"graceful.shutdown.block.api.server = true/false"


Regards,

ilya

On Wed, Apr 4, 2018 at 2:22 PM, Andrija Panic <andrija.pa...@gmail.com>
wrote:

> One comment here (I had to shutdown whole DC for few hours recently),
> please make sure to perhaps at least consider snapshoting process as the
> special case - it can take few hours for snapshot to complete really (copy
> process from Primary to Secondary Storage)
>
> I did (in my recent unfortunate DC shutdown), actually stop MS (we also
> have script to identify running async jobs), so we stop it once safe, but
> any running qemu-img processes (we use kVM) need to be killed manually
> (ansbile) after MS is stopped, etc,etc...
>
> I can assume most jobs can take reasonable long time to complete, but
> snapshots are probably the biggest exceptions as can take extremely long
> time to complete...
>
> Cheers
>
> On 4 April 2018 at 22:46, Tutkowski, Mike <mike.tutkow...@netapp.com>
> wrote:
>
> > I may be remembering this incorrectly, but from what I recall, if a
> > resource is owned by one MS and a request related to that resource comes
> in
> > to another MS, the MS that received the request passes it on to the other
> > MS.
> >
> > > On Apr 4, 2018, at 2:36 PM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> > >
> > > Big +1 for this feature; I only have a few doubts.
> > >
> > > * Regarding the tasks/jobs that management servers (MSs) execute; are
> > these
> > > tasks originate from requests that come to the MS, or is it possible
> that
> > > requests received by one management server to be executed by other? I
> > mean,
> > > if I execute a request against MS1, will this request always be
> > > executed/threated by MS1, or is it possible that this request is
> executed
> > > by another MS (e.g. MS2)?
> > >
> > > * I would suggest that after we block traffic coming from
> > 8080/8443/8250(we
> > > will need to block this as well right?), we can log the execution of
> > tasks.
> > > I mean, something saying, there are XXX tasks (enumerate tasks) still
> > being
> > > executed, we will wait for them to finish before shutting down.
> > >
> > > * The timeout (60 minutes suggested) could be global settings that we
> can
> > > load before executing the graceful-shutdown.
> > >
> > > On Wed, Apr 4, 2018 at 5:15 PM, ilya musayev <
> > ilya.mailing.li...@gmail.com>
> > > wrote:
> > >
> > >> Use case:
> > >> In any environment - time to time - administrator needs to perform a
> > >> maintenance. Current stop sequence of cloudstack management server
> will
> > >> ignore the fact that there may be long running async jobs - and
> > terminate
> > >> the process. This in turn can create a poor user experience and
> > occasional
> > >> inconsistency  in cloudstack db.
> > >>
> > >> This is especially 

Re: [DISCUSS] New VPN implementation based on IKEv2 backed by Vault

2018-04-04 Thread ilya musayev
Khosrow

My 2c: it's a little less than ideal to manage yet another external endpoint
like this.

While I understand that it makes it easier to manage certificates - it also
means that going forward - a Vault implementation will become a requirement
to validate future ACS releases.

With that said - I do like the proposal and am not against it, but:
1) Please consider decoupling it from the cloudstack-management server - and
release it as a server plugin
2) Test coverage must be sufficient to validate the functionality
(perhaps mock Vault endpoints and responses)

Regards,
ilya
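
For context, the PKI flow being proposed maps to only a handful of Vault
calls - a sketch only, with the mount path, role and domain names purely
illustrative:

    # enable the PKI secrets engine and generate an internal root CA
    vault secrets enable pki
    vault write pki/root/generate/internal common_name="cloudstack-root-ca" ttl=87600h

    # role describing what certificates CloudStack may request
    vault write pki/roles/cloudstack allowed_domains="cloud.internal" allow_subdomains=true

    # issue a server certificate, e.g. for a virtual router
    vault write pki/issue/cloudstack common_name="r-1234-vm.cloud.internal" ttl=720h

Mocking those endpoints for tests should be straightforward, since it is all
plain HTTP under the hood.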

On Wed, Apr 4, 2018 at 10:49 AM, Khosrow Moossavi <kmooss...@cloudops.com>
wrote:

> Thanks Paul, the proposed feature will enable the functionality to use
> Vault to
> act as CA if enabled in ACS, otherwise will fall back to "default"
> implementation
> which Rohit has already done.
>
>
> On Wed, Apr 4, 2018 at 12:29 PM, Paul Angus <paul.an...@shapeblue.com>
> wrote:
>
> > You guys should speak to Rohit about the CA framework.  CloudStack can
> > manage certificates now, including creating them itself and acting as a
> > root CA.
> >
> >
> >
> >
> > Kind regards,
> >
> > Paul Angus
> >
> > paul.an...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
> > -Original Message-
> > From: Rafael Weingärtner <rafaelweingart...@gmail.com>
> > Sent: 04 April 2018 16:51
> > To: dev <dev@cloudstack.apache.org>
> > Subject: Re: [DISCUSS] New VPN implementation based on IKEv2 backed by
> > Vault
> >
> > Thanks for sharing the details. Now I have a better perspective of the
> > proposal.It is an interesting integration of CloudStack VPN service with
> > Vault PKI feature.
> >
> > On Wed, Apr 4, 2018 at 12:38 PM, Khosrow Moossavi <
> kmooss...@cloudops.com>
> > wrote:
> >
> > > One of the things Vault does is essentially one of the thing Let's
> > > Encrypt does, acting as CA and generating/signing certificates.
> > >
> > > From the Vault website itself:
> > >
> > > "HashiCorp Vault secures, stores, and tightly controls access to
> > > tokens, passwords, certificates, API keys, and other secrets in modern
> > > computing. Vault handles leasing, key revocation, key rolling, and
> > > auditing. Through a unified API, users can access an encrypted
> > > Key/Value store and network encryption-as-a-service, or generate AWS
> > > IAM/STS credentials, SQL/NoSQL databases, X.509 certificates, SSH
> > > credentials, and more."
> > >
> > > In our case we are going to use Vault as PKI backend engine, to act as
> > > Root CA, sign certificates, handle CRL (Certificate Revocation List),
> > > etc.
> > > Technically we can
> > > do these with Let's Encrypt, but I haven't started exploring the
> > > possibilities or potential limitation. Using external services (such
> > > as Let's Encrypt) or going forward with Bring You Own Certificate
> > > model would be for future, it they ever made sense to do.
> > >
> > >
> > >
> > > On Wed, Apr 4, 2018 at 11:20 AM, Rafael Weingärtner <
> > > rafaelweingart...@gmail.com> wrote:
> > >
> > > > Got it. Thanks for the explanations.
> > > > There is one other thing I do not understand. This Vault thing that
> > > > you mention, how does it work? Is it similar to let's encrypt?
> > > >
> > > > On Wed, Apr 4, 2018 at 12:15 PM, Khosrow Moossavi <
> > > kmooss...@cloudops.com>
> > > > wrote:
> > > >
> > > > > On Wed, Apr 4, 2018 at 10:36 AM, Rafael Weingärtner <
> > > > > rafaelweingart...@gmail.com> wrote:
> > > > >
> > > > > > So, you need a certificate that is signed by the CA that is used
> > > > > > by
> > > the
> > > > > VPN
> > > > > > service. Is that it?
> > > > > >
> > > > > >
> > > > > Correct, a self signed "server certificate" against CA, to be
> > > > > installed directly on VR.
> > > > >
> > > > >
> > > > > >
> > > > > > It has been a while that I do not configure these VPN systems;
> > > > > > do you
> > > > > need
> > > > > > access to the private key of the CA? Or, does the program simply
> > > > validate

[DISCUSS] CloudStack graceful shutdown

2018-04-04 Thread ilya musayev
Use case:
In any environment - from time to time - an administrator needs to perform
maintenance. The current stop sequence of the CloudStack management server
ignores the fact that there may be long-running async jobs - and terminates
the process. This in turn can create a poor user experience and occasional
inconsistency in the CloudStack DB.

This is especially painful in large environments where the user has
thousands of nodes and continuous patching happens around the clock -
requiring migration of workloads from one node to another.

With that said - I've created a script that monitors the async job queue
for a given MS and waits for it to complete all jobs. More details are posted
below.

I'd like to introduce a "graceful-shutdown" action into the systemctl/service
handling of the cloudstack-management service.

The details of how it will work is below:

Workflow for graceful shutdown:
  Using iptables/firewalld - block any connection attempts on 8080/8443 (we
can identify the ports dynamically)
  Identify the MSID for the node, then using that msid - query the async_job
table for
1) any jobs that are still running (job_status = "0")
2) job_dispatcher not like "pseudoJobDispatcher"
3) job_init_msid = $my_ms_id

Monitor the async_job table for up to 60 minutes - until all async jobs for
the MSID are done, then proceed with the shutdown.
If the wait fails for any reason or is terminated, catch the exit via the
trap command and unblock 8080/8443.
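
A minimal sketch of that sequence in shell (assuming a local mysql client
with access to the cloud DB and MY_MS_ID resolved beforehand; the REJECT
rules and the 30-second poll interval are illustrative):

    #!/bin/bash
    # re-open the API ports no matter how we exit
    unblock() { iptables -D INPUT -p tcp -m multiport --dports 8080,8443 -j REJECT 2>/dev/null; }
    trap unblock EXIT INT TERM

    # stop accepting new API requests on this node
    iptables -I INPUT -p tcp -m multiport --dports 8080,8443 -j REJECT

    deadline=$(( $(date +%s) + 60 * 60 ))   # 60-minute budget
    pending=1
    while [ "$(date +%s)" -lt "$deadline" ]; do
        pending=$(mysql -N cloud -e "SELECT COUNT(*) FROM async_job \
            WHERE job_status = 0 \
              AND job_dispatcher NOT LIKE 'pseudoJobDispatcher' \
              AND job_init_msid = ${MY_MS_ID};")
        [ "$pending" -eq 0 ] && break
        echo "still waiting on ${pending} async job(s)..."
        sleep 30
    done

    # only stop the MS if the queue actually drained
    [ "$pending" -eq 0 ] && service cloudstack-management stop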

Comments are welcome

Regards,
ilya


Re: ConfigDrive status

2018-03-26 Thread ilya musayev
Lucian

We reported 3-4 issues with config drive. It's being worked on. It does
work - but not per the agreed-upon specification.

See below


https://issues.apache.org/jira/browse/CLOUDSTACK-10287
https://issues.apache.org/jira/browse/CLOUDSTACK-10288
https://issues.apache.org/jira/browse/CLOUDSTACK-10289
https://issues.apache.org/jira/browse/CLOUDSTACK-10290

Regards
Ilya

On Mon, Mar 26, 2018 at 8:58 AM Dag Sonstebo <dag.sonst...@shapeblue.com>
wrote:

> Hi Lucian,
>
> I’m maybe not the right person to answer this – but my understanding is it
> only kicks in when you have a network offering without a VR, at which point
> the metadata etc. is presented as the config drive. Happy to be corrected
> on this.
>
> We did however have a really major gotcha 18 months ago – when a customer
> did a CloudStack upgrade and ended up with new unexpected config drives
> causing changes to all the VMware disk controller addressing – meaning VMs
> wouldn’t boot, couldn’t see disks, etc. If you use VMware I would test
> beforehand.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 26/03/2018, 14:50, "Nux!" <n...@li.nux.ro> wrote:
>
> Hi,
>
> I am interested in the ConfigDrive feature.
> Before I potentially waste time on it, is anyone around here using it
> or can clarify whether it's usable or not or gotchas etc?
>
> Regards,
> Lucian
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: Welcoming Mike as the new Apache CloudStack VP

2018-03-26 Thread ilya musayev
Welcome Mike, thank you Wido!

On Mon, Mar 26, 2018 at 8:59 AM Simon Weller 
wrote:

> Thanks for all of your hard work Wido, we really appreciate it.
>
>
> Congratulations Mike!
>
>
> - Si
>
> 
> From: Wido den Hollander 
> Sent: Monday, March 26, 2018 9:11 AM
> To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
> Subject: Welcoming Mike as the new Apache CloudStack VP
>
> Hi all,
>
> It's been a great pleasure working with the CloudStack project as the
> ACS VP over the past year.
>
> A big thank you from my side for everybody involved with the project in
> the last year.
>
> Hereby I would like to announce that Mike Tutkowski has been elected to
> replace me as the Apache Cloudstack VP in our annual VP rotation.
>
> Mike has a long history with the project and I am are happy welcome him
> as the new VP for CloudStack.
>
> Welcome Mike!
>
> Thanks,
>
> Wido
>


Re: [DISCUSS] CloudStack Connection Pools

2018-03-14 Thread ilya musayev
When everything works smoothly - the end user does not need to know what's
under the hood.

However, when things begin to swing and your hands are tied to just one CP
- what do you do?

On Wed, Mar 14, 2018 at 10:50 AM, Khosrow Moossavi 
wrote:

> Why would we want to expose this choice to administrator of Cloudstack
> whose responsibility
> is to keep it running and not knowing about the inner-mechanic of how it
> works. right? It's not
> like that we're giving them a choice of which database to connect to.
>
> So on that note, I would say we need to agree on any of those CP libraries
> and implement, the
> same way we chose for example log4j or slf4j over one another, or any other
> _library_ we use.
>
> Khosrow Moossavi
>
> CloudOps
>
>
>
> On Wed, Mar 14, 2018 at 10:36 AM, Nicolas Vazquez <
> nicolas.vazq...@shapeblue.com> wrote:
>
> > Thanks Khosrow and Rafael. You both agree on Spring Data as the best
> > option, I see it would require a big effort and commitment to migrate to
> > it, therefore it can take some (long) time to achieve it.
> >
> > As a more viable option, would you agree on supporting different
> > connection pool management libraries and letting the administrator choose
> > which one to use? (DBCP 1.4 as default)
> >
> > 
> > From: Rafael Weingärtner 
> > Sent: Tuesday, March 13, 2018 8:52:50 AM
> > To: dev
> > Subject: Re: [DISCUSS] CloudStack Connection Pools
> >
> > Spring data would be awesome. It is very flexible and has a very good
> API.
> > However, this would require commitment from our side to slowly migrate
> > things to it.
> >
> > Regarding the connection pool management libraries; I would prefer either
> > C3P0 or 2.* DBCP. The other two sound trendy, but I worry about this type
> > of project in the long run. Both DBCP from Apache and C3P0 from Hibernate
> > (RedHat) sound a more reasonable selection for me. They have been around
> > for years, and have a solid community base already.
> >
> > On Mon, Mar 12, 2018 at 11:31 PM, Khosrow Moossavi <
> kmooss...@cloudops.com
> > >
> > wrote:
> >
> > > Hi Nicolas
> > >
> > > From my past experiences, I prefer 1) HikariCP 2) Tomcat Pool 3) C3P0
> 4)
> > > DBCP in that order. Although I don't have
> > > any benchmark of my own to provide, and the ones you mentioned are
> really
> > > informative anyway.
> > >
> > > To me the broader subject is the _one_ who uses the pool, I mean if the
> > > transactions are handled in a faster way and
> > > released sooner and with shorter locks, generally speaking if it's more
> > > efficient, I don't think from ACS point of view
> > > there won't be much difference between the above mentioned options.
> > >
> > > On the same subject, it might be more interesting to use Spring Boot in
> > > general and Spring Boot Data in particular
> > > rather than only changing the CP functionality, and slowly
> > refactor/retire
> > > the DAO layer in favor of Spring Boot equivalent
> > > implementation.
> > >
> > >
> > > Khosrow Moossavi
> > >
> > > CloudOps
> > >
> > >
> > >
> > > On Mon, Mar 12, 2018 at 9:32 PM, Nicolas Vazquez <
> > > nicolas.vazq...@shapeblue.com> wrote:
> > >
> > > > Hi all,
> > > >
> > > >
> > > > I would like to introduce a topic for discussion, regarding DB
> > connection
> > > > pools used in CloudStack, currently Apache Commons DBCP 1.4 (
> > > > http://commons.apache.org/) is used. I've been investigating this
> > topic
> > > > as we are having complains of random issues on MySQL connection pool
> on
> > > > large environments. Please let me know if this topic has already been
> > > > discussed before.
> > > >
> > > >
> > > > First of all, DBCP 1.4 has been released on 2010 (
> > > > https://commons.apache.org/proper/commons-dbcp/changes-report.html),
> > and
> > > > no minor/patch version has been released since then. It seems to work
> > in
> > > > high performance with relatively low traffic and low load
> applications.
> > > > However, it is single threaded, and in order to be thread-safe, the
> > > entire
> > > > pool needs to be locked. It is also reported that an CPU and
> concurrent
> > > > threads increases, the performance gets affected. This is a serious
> > issue
> > > > on highly concurrent systems, such as CloudStack.
> > > >
> > > >
> > > > I've been investigating some options to replace it:
> > > > - The first option can be upgrading to version 2.x. Issues on
> > performance
> > > > and concurrency could be solved using this version.
> > > > - Tomcat JDBC Connection Pool. Please check:
> > https://tomcat.apache.org/
> > > > tomcat-7.0-doc/jdbc-pool.html.
> > > >
> > > > - Other replacement options found: BoneCP, C3P0, HikariCP
> > > >
> > > >
> > > > Given these options, I've been looking for benchmarks to compare them
> > > (*).
> > > > Looks like HikariCP (http://brettwooldridge.github.io/HikariCP/)
> could
> > > be
> > > > the best replacement, improving performance and 

Re: [DISCUSS] CloudStack Connection Pools

2018-03-14 Thread ilya musayev
Rafael and Khosrow,

I actually think quite the opposite about DBCP development. Please pull up
the latest commits and release notes on DBCP.
The assumption that because it's an Apache project it will live and
flourish - is a dangerous one. It comes down to who supports the project
and whether there is funding to back it.

https://github.com/apache/commons-dbcp

While there are some commits - there is no major innovation happening, and
it's limited to a few individuals.

In terms of performance, HikariCP outperforms all other contenders by leaps
and bounds (see the benchmark chart at
https://github.com/brettwooldridge/HikariCP/wiki/HikariCP-bench-2.6.0.png).
It also carries an ASF license - so we should have no issues integrating it.

You can see active development, and it's backed by a company that is funded
sufficiently well.

C3P0 - is GNU-licensed (LGPL) - which, if I understand correctly, cannot be
used with an ASF project.

The reason it was proposed to allow flexibility in connection pool
providers - is to let the user switch between one and the other - in case
you come across an issue with one of the two.

It would be something an advanced user knows and understands. Not providing
the flexibility to choose between different CPs - makes troubleshooting
extremely difficult.

Lastly - DBCP 1.4's log4j support is non-existent - therefore you can't
troubleshoot connection pool issues with ease. We can upgrade to 2.x
and see if it helps - but given the slow development and performance
limitations - I don't see a huge benefit. Nevertheless, we know DBCP 2.x is
stable - it would be nice to have it as a fallback option in case the user
suspects their issues are due to connection pool handling.

Just as a point of reference: internally - some of our apps switched from
DBCP to HikariCP and we saw a tremendous boost in performance (10x+), and
troubleshooting became much easier. I don't want to assume we will get a 10x
boost - but I think we will certainly do much better than what the
8-year-old DBCP has to offer.

Regards,
ilya

On Wed, Mar 14, 2018 at 10:58 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> I agree with Khosrow. Even though the idea of externalizing a configuration
> like this seems interesting,  I believe that it would bring more
> complications than benefits. And, at the end of the day operators would
> only use the default.
>
> On Wed, Mar 14, 2018 at 2:50 PM, Khosrow Moossavi <kmooss...@cloudops.com>
> wrote:
>
> > Why would we want to expose this choice to administrator of Cloudstack
> > whose responsibility
> > is to keep it running and not knowing about the inner-mechanic of how it
> > works. right? It's not
> > like that we're giving them a choice of which database to connect to.
> >
> > So on that note, I would say we need to agree on any of those CP
> libraries
> > and implement, the
> > same way we chose for example log4j or slf4j over one another, or any
> other
> > _library_ we use.
> >
> > Khosrow Moossavi
> >
> > CloudOps
> >
> >
> >
> > On Wed, Mar 14, 2018 at 10:36 AM, Nicolas Vazquez <
> > nicolas.vazq...@shapeblue.com> wrote:
> >
> > > Thanks Khosrow and Rafael. You both agree on Spring Data as the best
> > > option, I see it would require a big effort and commitment to migrate
> to
> > > it, therefore it can take some (long) time to achieve it.
> > >
> > > As a more viable option, would you agree on supporting different
> > > connection pool management libraries and letting the administrator
> choose
> > > which one to use? (DBCP 1.4 as default)
> > >
> > > 
> > > From: Rafael Weingärtner <rafaelweingart...@gmail.com>
> > > Sent: Tuesday, March 13, 2018 8:52:50 AM
> > > To: dev
> > > Subject: Re: [DISCUSS] CloudStack Connection Pools
> > >
> > > Spring data would be awesome. It is very flexible and has a very good
> > API.
> > > However, this would require commitment from our side to slowly migrate
> > > things to it.
> > >
> > > Regarding the connection pool management libraries; I would prefer
> either
> > > C3P0 or 2.* DBCP. The other two sound trendy, but I worry about this
> type
> > > of project in the long run. Both DBCP from Apache and C3P0 from
> Hibernate
> > > (RedHat) sound a more reasonable selection for me. They have been
> around
> > > for years, and have a solid community base already.
> > >
> > > On Mon, Mar 12, 2018 at 11:31 PM, Khosrow Moossavi <
> > kmooss...@cloudops.com
> > > >
> > > wrote:
> > >
> > > > Hi Nicolas
> > > >
> > > > From my past expe

Re: I'd like to introduce you to Khosrow

2018-02-22 Thread ilya musayev
Great news!

On Thu, Feb 22, 2018 at 3:40 PM Pierre-Luc Dion  wrote:

> Hi fellow colleagues,
>
> I might be a bit late with this email...
>
> I'd like to introduce Khosrow Moossavi, who recently join our team and his
> focus is currently exclusively on dev for Cloudstack with cloud.ca.
>
> Our 2 current priorities are:
> -fixing VRs,SVMs to run has HVM VMs in xenserver.
> - redesign, or rewrite, the remote management vpn for vpc, poc in progress
> for IKEv2...
>
>
>
> Some of you might have interact with him already.
>
>
> Also, we are going to be more active for the upcomming 4.12 release.
>
>
> Cheers!
>


Re: Performance considerations related to Intel Meltdown on KVM CPU types

2018-01-08 Thread ilya musayev
Thanks for sharing

On Mon, Jan 8, 2018 at 7:11 AM Nux!  wrote:

> Hello,
>
> Just stumbled upon this
> https://twitter.com/berrange/status/950209752486817792
>
> "ensure KVM guest CPU model you choose has the "pcid" feature, otherwise
> guests will suffer terrible performance from the Meltdown fixes. This means
> using a named Haswell, Broadwell or Skylake based model or host passthrough"
>
>
> This means whoever is running with the KVM default CPU (like I do) as
> opposed to specific ones or host passthrough needs to change this in order
> to avoid bad performance once the new mitigating kernel is installed.
>
>
> Bad news is older Xeons do not support this, check if "invpcid" flag shows
> up in /proc/cpuinfo (you might see "pcid", that one is not enough).
>
>
>
>
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
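
A quick way to check a given host or guest for the relevant flag (a
one-liner sketch; remember it is "invpcid" we need to see, not just "pcid"):

    grep -qw invpcid /proc/cpuinfo && echo "invpcid: present" || echo "invpcid: missing"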


Re: [VOTE] Clean up old and obsolete branches.

2018-01-02 Thread ilya musayev
+1

On Tue, Jan 2, 2018 at 2:41 PM Boris Stoyanov 
wrote:

> 0
>
>
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
> > On 2 Jan 2018, at 22:13, Khosrow Moossavi 
> wrote:
> >
> > +1
> >
> > Khosrow Moossavi
> > CloudOps
> >
> > On Jan 2, 2018 14:07, "Nicolas Vazquez" 
> > wrote:
> >
> >> +1
> >>
> >> 
> >> From: Simon Weller 
> >> Sent: Tuesday, January 2, 2018 3:38:00 PM
> >> To: dev
> >> Subject: Re: [VOTE] Clean up old and obsolete branches.
> >>
> >> +0
> >>
> >> 
> >> From: Daan Hoogland 
> >> Sent: Tuesday, January 2, 2018 12:19 PM
> >> To: dev
> >> Subject: Re: [VOTE] Clean up old and obsolete branches.
> >>
> >> 0
> >>
> >> On Tue, Jan 2, 2018 at 1:51 PM, Gabriel Beims Bräscher <
> >> gabrasc...@gmail.com
> >>> wrote:
> >>
> >>> +1
> >>>
> >>> 2018-01-02 9:46 GMT-02:00 Rafael Weingärtner <
> >> rafaelweingart...@gmail.com
>  :
> >>>
>  Hope you guys had great holy days!
> 
>  Resuming the discussion we started last year in [1]. It is time to
> vote
> >>> and
>  then to push (if the vote is successful) the protocol defined to our
> >>> wiki.
>  Later we can start enforcing it.
>  I will summarize the protocol for branches in the official repository.
> 
>    1. We only maintain the master and major release branches. We
> >>> currently
>    have a system of X.Y.Z.S. I define major release here as a release
> >>> that
>    changes either ((X or Y) or (X and Y));
>    2. We will use tags for versioning. Therefore, all versions we
> >> release
>    are tagged accordingly, including minor and security releases;
>    3. When releasing the “SNAPSHOT” is removed and the branch of the
>    version is created (if the version is being cut from master). Rule
> >> (1)
>  one
>    is applied here; therefore, only major releases will receive
> >> branches.
>    Every release must have a tag according to the format X.Y.Z.S. After
>    releasing, we bump the POM of the version to next available
> >> SNAPSHOT;
>    4. If there's a need to fix an old version, we work on HEAD of
>    corresponding release branch. For instance, if we want to fix
> >>> something
>  in
>    release 4.1.1.0, we will work on branch 4.1, which will have the POM
>  set to
>    4.1.2.0-SNAPSHOT;
>    5. People should avoid (it is not forbidden though) using the
> >> official
>    apache repository to store working branches. If we want to work
>  together on
>    some issues, we can set up a fork and give permission to interested
>  parties
>    (the official repository is restricted to committers). If one uses
> >> the
>    official repository, the branch used must be cleaned right after
>  merging;
>    6. Branches not following these rules will be removed if they have
> >> not
>    received attention (commits) for over 6 (six) months;
>    7. Before the removal of a branch in the official repository it is
>    mandatory to create a Jira ticket and send a notification email to
>    CloudStack’s dev mailing list. If there are no objections, the
> >> branch
>  can
>    be deleted seven (7) business days after the notification email is
> >>> sent;
>    8. After the branch removal, the Jira ticket must be closed.
> 
>  Let’s go to the poll:
>  (+1) – I want to work using this protocol
>  (0) – Indifferent to me
>  (-1) – I prefer the way it is not, without any protocol/guidelines
> 
> 
>  [1]
>  http://mail-archives.apache.org/mod_mbox/cloudstack-dev/
>  201711.mbox/%3CCAHGRR8ozDBX%3DJJewLz_cu-YP9vA3TEmesvxGArTDBPerAOj8Cw%
>  40mail.gmail.com%3E
> 
>  --
>  Rafael Weingärtner
> 
> >>>
> >>
> >>
> >>
> >> --
> >> Daan
> >>
> >> nicolas.vazq...@shapeblue.com
> >> www.shapeblue.com
> >> ,
> >> @shapeblue
> >>
> >>
> >>
> >>
>
>
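
For reference, steps 2, 3 and 7 of the protocol above map to plain git
operations, e.g. (version and branch names illustrative):

    # tag a release on the corresponding release branch and publish the tag
    git tag -a 4.1.2.0 -m "Apache CloudStack 4.1.2.0"
    git push origin 4.1.2.0

    # remove a stale working branch from the official repo (after the Jira
    # ticket and the seven-business-day notice period)
    git push origin --delete stale-working-branch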


Re: [DISCUSS] Management server (pre-)shutdown to avoid killing jobs

2017-12-18 Thread ilya musayev
I very much agree with Paul; we should consider moving to a resilient model
with the least dependence, i.e. HA-proxy.

Sending a notification to a partner MS to take over the job management would
be ideal.

On Mon, Dec 18, 2017 at 9:28 AM Paul Angus  wrote:

> Hi Marc-Aurèle,
>
> Personally, my utopia would be to be able to pass async jobs between mgmt.
> servers.
> So rather than waiting in indeterminate time for a snapshot to complete,
> monitoring the job is passed to another management server.
>
> I would LOVE that something like Zookeeper monitored the state of the
> mgmt. servers, so that 'other' management servers could take over the async
> jobs in the (unlikely) event that a management server becomes unavailable.
>
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> Sent: 18 December 2017 13:56
> To: dev@cloudstack.apache.org
> Subject: [DISCUSS] Management server (pre-)shutdown to avoid killing jobs
>
> Hi everyone,
>
> Another point, another thread. Currently when shutting down a management
> server, despite all the "stop()" method not being called as far as I know,
> the server could be in the middle of processing an async job task. It will
> lead to a failed job since the response won't be delivered to the correct
> management server even though the job might have succeed on the agent. To
> overcome this limitation due to our weekly production upgrades, we added a
> pre-shutdown mechanism which works along side HA-proxy. The management
> server keeps a eye onto a file "lb-agent" in which some keywords can be
> written following the HA proxy guide (
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#5.2-agent-check
> ).
> When it finds "maint", "stopped" or "drain", it stops those threads:
>  - AsyncJobManager._heartbeatScheduler: responsible to fetch and start
> execution of AsyncJobs
>  - AlertManagerImpl._timer: responsible to send capacity check commands
>  - StatsCollector._executor: responsible to schedule stats command
>
> Then the management server stops most of its scheduled tasks. The correct
> thing to do before shutting down the server would be to send
> "rebalance/reconnect" commands to all agents connected on that management
> server to ensure that commands won't go through this server at all.
>
> Here, HA-proxy is responsible to stop sending API requests to the
> corresponding server with the help of this local agent check.
>
> In case you want to cancel the maintenance shutdown, you could write
> "up/ready" in the file and the different schedulers will be restarted.
>
> This is really more a change for operation around CS for people doing live
> upgrade on a regular basis, so I'm unsure if the community would want such
> a change in the code base. It goes a bit in the opposite direction of the
> change for removing the need of HA-proxy
> https://github.com/apache/cloudstack/pull/2309
>
> If there is enough positive feedback for such a change, I will port them
> to match with the upstream branch in a PR.
>
> Kind regards,
> Marc-Aurèle
>
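
To illustrate the mechanism, the watcher side can be as small as a loop over
the agent-check file (a sketch; the file path and what "entering drain mode"
triggers are assumptions about the wiring inside the mgmt server):

    # poll the HAProxy agent-check file for operator-written keywords
    while sleep 5; do
        state=$(cat /var/run/cloudstack/lb-agent 2>/dev/null)
        case "$state" in
            maint|stopped|drain) echo "entering drain mode"; break ;;
            up|ready)            echo "serving normally" ;;
        esac
    done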


Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-09-17 Thread ilya musayev
Hi Folks

Apologies for dropping the ball on this initiative. I'm going to be away
for the next week or two dealing with personal issues.

Regards
Ilya

On Wed, Sep 6, 2017 at 11:21 AM ilya musayev <ilya.mailing.li...@gmail.com>
wrote:

> Hi All,
>
> We had a great discussion earlier today. I will update the meeting
> notes and start additional mail threads on behalf of pending parties
> for things we need to discuss.
>
> Meeting notes will be here:
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/2017-09-06+Meeting+notes
>
> The next call will be held 3 months from now. If you feel we need to
> do something sooner or can think of a way to make the communication
> better, please let me know.
>
> Thank you everyone for taking the time out of your busy schedule to
> attend this call.
>
> Regards,
> ilya
>


Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-09-06 Thread ilya musayev
Hi All,

We had a great discussion earlier today. I will update the meeting
notes and start additional mail threads on behalf of pending parties
for things we need to discuss.

Meeting notes will be here:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/2017-09-06+Meeting+notes

The next call will be held 3 months from now. If you feel we need to
do something sooner or can think of a way to make the communication
better, please let me know.

Thank you everyone for taking the time out of your busy schedule to
attend this call.

Regards,
ilya




Re: [New Feature] noVNC console in Cloudstack

2017-09-05 Thread ilya
Great work Sachin, we will review and merge.

On 8/29/17 12:57 PM, sachin patil wrote:
> @wei...yeah I can share some screen shots. I have already attached one in
> my design document.
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/
> noVNC+support+for+Cloudstack
> 
> On Wed, Aug 30, 2017 at 12:09 AM, Jan-Arve Nygård > wrote:
> 
>> Well done!
>>
>> tir. 29. aug. 2017 kl. 13.15 skrev Wei ZHOU :
>>
>>> nice job!
>>>
>>> Could you paste some screenshots ?
>>>
>>> -Wei
>>>
>>> 2017-08-29 8:39 GMT+02:00 Haijiao <18602198...@163.com>:
>>>
 Awesome work,  I know lots of community users will love it .






 在2017年08月29 13时57分, "sachin patil"写道:

 Hello,

 I have integrated noVNC to cloudstack as my gsoc project for this year,
 under the guidance of my mentors @Syed Ahmed and @Rohit Yadav.

 The features that have been added are :

 1. noVNC support added and tested for KVM / Xenservers
 2. SSL/TLS security.
 3. Tested for RFB versions 3.8 and below.

 I have created a design doc that will help you understand the working.
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/
 noVNC+support+for+Cloudstack

 PR for the same : https://github.com/apache/cloudstack/pull/2204

 Please let us know your views for this feature we have built.


 regards,
 Sachin Patil

>>>
>>
> 


Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-09-05 Thread ilya
Apologies - the link for talks below is pointing to a "draft" and won't work.

Please use this page instead for tomorrow's call:

> https://cwiki.apache.org/confluence/display/CLOUDSTACK/2017-09-06+Meeting+notes




On 9/5/17 1:59 PM, ilya wrote:
> Acknowledged, will be noted.
> 
> I'm working on the doc for this call here:
>> https://cwiki.apache.org/confluence/plugins/createcontent/draft-createpage.action?draftId=73636518
> 
> -- Please note - its work in progress
> 
> 
> 
> Calendar invite can be downloaded from here:
> https://drive.google.com/open?id=0B31Adqcpao6eRWh2R0ZXeFc3VXc
> 
> Copy and paste of the invite detail below..
> 
> 
> 
> 
> 
> -- Do not delete or change any of the following text. --
>> -- Do not delete or change any of the following text. --
>>
>> Join WebEx 
>> meeting<https://persistent.webex.com/persistent/j.php?MTID=m144fe53c6b989da2b0bd0d950662d22f>
>> Meeting number (access code): 644 427 929
>> Meeting password: fbqBz2Jm
>>
>> Join by Phone
>>
>> Please connect using VOIP wherever possible and help the organization to 
>> control telecommunications expenses.
>>
>>
>> +91-20-6001-9797INDIA Pune Toll
>> +91-80-6001-9797INDIA Bengaluru Toll
>> +91-40-6001-9797INDIA Hyderabad Toll
>> +1-949-259-2970 United States Toll
>> +60-392-122-599 Malaysia Kuala Lumpur Toll
>> +44-203-713-5073United Kingdom Toll
>> +61-28-880-3212 Australia Toll
>> 1800-206-547Australia Toll Free
>> 1-800-266-0614  India Toll Free
>> 1-844-802-4451  United States Toll Free
>> Global call-in 
>> numbers<https://persistent.webex.com/persistent/globalcallin.php> | 
>> Toll-free calling 
>> restrictions<https://www.webex.com/pdf/tollfree_restrictions.pdf>
>>
>> Can't join the meeting?<https://help.webex.com/docs/DOC-5412>
>>
>> In case of any technical issues/queries, You can contact 24 X 7 Global IT 
>> Service Desk.
>> Service Desk Number: +91-20-669-65400
>> US toll free Number: 1844-249-2623
>> India toll free Number: 1800-2100 -144
>>
>>
>> If you are a host, go 
>> here<https://persistent.webex.com/persistent/j.php?MTID=m7b3d9c33fddaeb0386e6be4f09d256a3>
>>  to view host information.
>>
>> IMPORTANT NOTICE: Please note that this WebEx service allows audio and other 
>> information sent during the session to be recorded, which may be 
>> discoverable in a legal matter. By joining this session, you automatically 
>> consent to such recordings. If you do not consent to being recorded, discuss 
>> your concerns with the host or do not join the session.
> 
> 
> On 9/5/17 12:52 PM, David Mabry wrote:
>> Hi Ilya,
>>
>> We are also interested in contributing support for the RBD/Ceph Storage 
>> backend for the new KVM HA feature that was rolled in 4.10.  We haven’t 
>> started work on that effort at this time, so I don’t have a PR to reference.
>>
>> Thanks,
>> David Mabry
>>
>> On 9/5/17, 2:13 PM, "ilya" <ilya.mailing.li...@gmail.com> wrote:
>>
>> Hi ENA team
>> 
>> Please send updates today before 5pm PST if you'd like to have more
>> items discussed.
>> 
>> On 8/18/17 5:12 AM, Simon Weller wrote:
>> > Ilya,
>> > 
>> > 
>> > I'll be attending with a few other folks from ENA.
>> > 
>> > 
>> > Here's one for the Dev efforts -
>> > 
>> > 
>> > 
>> >  Ability to Specify Mac Address when plugging a network
>> >   We're working on cloud migration strategies and part 
>> of that is making the move as seamless as possible.
>> > 
>> >  The ability to specify a mac address when shifting a 
>> VM workload from another environment makes the transition a lot easier.
>> >   https://issues.apache.org/jira/browse/CLOUDSTACK-9949
>> > 
>> >   https://github.com/apache/cloudstack/pull/2143
>> >   Nathan Johnson
>> >   PR has been submitted as of 7/13 is is awaiting 
>> review from the community (Targetting 4.11)
>> > 
>> > 
>> > We'll discuss our roadmap internally for the next half and get back to 
>> you with additions before the call.
>> > 
>> > 
>> > - Si
>> > 
>> > 

Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-09-05 Thread ilya
Acknowledged, will be noted.

I'm working on the doc for this call here:
> https://cwiki.apache.org/confluence/plugins/createcontent/draft-createpage.action?draftId=73636518

-- Please note - it's a work in progress



Calendar invite can be downloaded from here:
https://drive.google.com/open?id=0B31Adqcpao6eRWh2R0ZXeFc3VXc

Copy and paste of the invite detail below..





-- Do not delete or change any of the following text. --
> -- Do not delete or change any of the following text. --
> 
> Join WebEx 
> meeting<https://persistent.webex.com/persistent/j.php?MTID=m144fe53c6b989da2b0bd0d950662d22f>
> Meeting number (access code): 644 427 929
> Meeting password: fbqBz2Jm
> 
> Join by Phone
> 
> Please connect using VOIP wherever possible and help the organization to 
> control telecommunications expenses.
> 
> 
> +91-20-6001-9797INDIA Pune Toll
> +91-80-6001-9797INDIA Bengaluru Toll
> +91-40-6001-9797INDIA Hyderabad Toll
> +1-949-259-2970 United States Toll
> +60-392-122-599 Malaysia Kuala Lumpur Toll
> +44-203-713-5073United Kingdom Toll
> +61-28-880-3212 Australia Toll
> 1800-206-547Australia Toll Free
> 1-800-266-0614  India Toll Free
> 1-844-802-4451  United States Toll Free
> Global call-in 
> numbers<https://persistent.webex.com/persistent/globalcallin.php> | Toll-free 
> calling restrictions<https://www.webex.com/pdf/tollfree_restrictions.pdf>
> 
> Can't join the meeting?<https://help.webex.com/docs/DOC-5412>
> 
> In case of any technical issues/queries, You can contact 24 X 7 Global IT 
> Service Desk.
> Service Desk Number: +91-20-669-65400
> US toll free Number: 1844-249-2623
> India toll free Number: 1800-2100 -144
> 
> 
> If you are a host, go 
> here<https://persistent.webex.com/persistent/j.php?MTID=m7b3d9c33fddaeb0386e6be4f09d256a3>
>  to view host information.
> 
> IMPORTANT NOTICE: Please note that this WebEx service allows audio and other 
> information sent during the session to be recorded, which may be discoverable 
> in a legal matter. By joining this session, you automatically consent to such 
> recordings. If you do not consent to being recorded, discuss your concerns 
> with the host or do not join the session.


On 9/5/17 12:52 PM, David Mabry wrote:
> Hi Ilya,
> 
> We are also interested in contributing support for the RBD/Ceph Storage 
> backend for the new KVM HA feature that was rolled in 4.10.  We haven’t 
> started work on that effort at this time, so I don’t have a PR to reference.
> 
> Thanks,
> David Mabry
> 
> On 9/5/17, 2:13 PM, "ilya" <ilya.mailing.li...@gmail.com> wrote:
> 
> Hi ENA team
> 
> Please send updates today before 5pm PST if you'd like to have more
> items discussed.
> 
> On 8/18/17 5:12 AM, Simon Weller wrote:
> > Ilya,
> > 
> > 
> > I'll be attending with a few other folks from ENA.
> > 
> > 
> > Here's one for the Dev efforts -
> > 
> > 
> > 
> >  Ability to Specify Mac Address when plugging a network
> >   We're working on cloud migration strategies and part 
> of that is making the move as seamless as possible.
> > 
> >  The ability to specify a mac address when shifting a 
> VM workload from another environment makes the transition a lot easier.
> >   https://issues.apache.org/jira/browse/CLOUDSTACK-9949
> > 
> >   https://github.com/apache/cloudstack/pull/2143
> >   Nathan Johnson
>     >   PR has been submitted as of 7/13 is is awaiting 
> review from the community (Targetting 4.11)
> > 
> > 
> > We'll discuss our roadmap internally for the next half and get back to 
> you with additions before the call.
> > 
> > 
> > - Si
> > 
> > 
> > From: ilya <ilya.mailing.li...@gmail.com>
> > Sent: Thursday, August 17, 2017 7:29 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: [README][Quarterly Call] - CloudStack Development, 
> Blockers and Community Efforts
> > 
> > Hi All,
> > 
> > I'd like to pick this thread back up and see if you are joining. As a
> > reminder, proposed date is September 6th 2017, time 9AM PST.
> > 
> > If you are, please kindly respond. If you 

Re: Need to ask for help again (Migration in cloudstack)

2017-09-05 Thread ilya
Personal experience with KVM (not CloudStack related): non-shared-storage
migration works most of the time - but can be very slow - even with a 10G
backplane.
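
For anyone wanting to experiment by hand, as Marc-Aurèle suggests below, the
libvirt side of a non-shared-storage live migration looks roughly like this
(a sketch; the VM name and destination host are illustrative):

    # live-migrate the VM, copying its local disks to the destination as it goes
    virsh migrate --live --persistent --copy-storage-all --verbose \
        i-2-10-VM qemu+ssh://dest-host/system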

On 9/5/17 6:27 AM, Marc-Aurèle Brothier wrote:
> Hi Dimitriy,
> 
> I wrote the PR for the live migration in cloudstack (PR 1709). We're using
> an older version than upstream so it's hard for me to fix the integration
> tests errors. All I can tell you, is that you should first configure
> libvirt correctly for migration. You can play with it by manually running
> virsh commands to initiate the migration. The networking part will not work
> after the VM is on the other machine if the migration is done manually.
> 
> Marc-Aurèle
> 
> On Tue, Sep 5, 2017 at 2:07 PM, Dmitriy Kaluzhniy <
> dmitriy.kaluzh...@gmail.com> wrote:
> 
>> Hello,
>> That's what I want, thank you!
>> I want to have Live migration on KVM with non-shared storages.
>> As I understood, migration is performed by LibVirt.
>>
>> 2017-09-01 17:04 GMT+03:00 Simon Weller :
>>
>>> Dmitriy,
>>>
>>> Can you give us a bit more information about what you're trying to do?
>>> If you're looking for live migration on non shared storage with KVM,
>> there
>>> is an outstanding PR  in the works to support that:
>>>
>>> https://github.com/apache/cloudstack/pull/1709
>>>
>>> - Si
>>>
>>>
>>> 
>>> From: Rajani Karuturi 
>>> Sent: Friday, September 1, 2017 4:07 AM
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: Need to ask for help again (Migration in cloudstack)
>>>
>>> You might start with this commit
>>> https://github.com/apache/cloudstack/commit/
>> 21ce3befc8ea9e1a6de449a21499a5
>>> 0ff141a183
>>>
>>>
>>> and storage_motion_supported column in hypervisor_capabilities
>>> table.
>>>
>>> Thanks,
>>>
>>> ~ Rajani
>>>
>>> http://cloudplatform.accelerite.com/
>>>
>>> On August 31, 2017 at 6:29 PM, Dmitriy Kaluzhniy
>>> (dmitriy.kaluzh...@gmail.com) wrote:
>>>
>>> Hello!
>>> I contacted this mail before, but I wasn't subscribed to mailing
>>> list.
>>> The reason I'm contacting you - I need advise.
>>> During last week I was learning cloudstack code to find where is
>>> implemented logic of this statements I found in cloudstack
>>> documentation:
>>> "(KVM) The VM must not be using local disk storage. (On
>>> XenServer and
>>> VMware, VM live migration with local disk is enabled by
>>> CloudStack support
>>> for XenMotion and vMotion.)
>>>
>>> (KVM) The destination host must be in the same cluster as the
>>> original
>>> host. (On XenServer and VMware, VM live migration from one
>>> cluster to
>>> another is enabled by CloudStack support for XenMotion and
>>> vMotion.)"
>>>
>>> I made up a long road through source code but still can't see
>>> it. If you
>>> can give me any advise - it will be amazing.
>>> Anyway, thank you.
>>>
>>> --
>>>
>>> *Best regards,Dmitriy Kaluzhniy+38 (073) 101 14 73*
>>>
>>
>>
>>
>> --
>>
>>
>>
>> *--Best regards, Dmitriy Kaluzhniy +38 (073) 101 14 73*
>>
> 


Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-09-05 Thread ilya
Hi ENA team

Please send updates today before 5pm PST if you'd like to have more
items discussed.

On 8/18/17 5:12 AM, Simon Weller wrote:
> Ilya,
> 
> 
> I'll be attending with a few other folks from ENA.
> 
> 
> Here's one for the Dev efforts -
> 
> 
> 
>  Feature: Ability to Specify Mac Address when plugging a network
>  Summary: We're working on cloud migration strategies and part of that is
>  making the move as seamless as possible. The ability to specify a mac
>  address when shifting a VM workload from another environment makes the
>  transition a lot easier.
>  Jira: https://issues.apache.org/jira/browse/CLOUDSTACK-9949
>  PR: https://github.com/apache/cloudstack/pull/2143
>  Lead: Nathan Johnson
>  Status: PR has been submitted as of 7/13 and is awaiting review from
>  the community (targeting 4.11)
> 
> 
> We'll discuss our roadmap internally for the next half and get back to you 
> with additions before the call.
> 
> 
> - Si
> 
> 
> From: ilya <ilya.mailing.li...@gmail.com>
> Sent: Thursday, August 17, 2017 7:29 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [README][Quarterly Call] - CloudStack Development, Blockers and 
> Community Efforts
> 
> Hi All,
> 
> I'd like to pick this thread back up and see if you are joining. As a
> reminder, proposed date is September 6th 2017, time 9AM PST.
> 
> If you are, please kindly respond. If you have things to discuss -
> please use the outline below:
> 
>   1) Development efforts - 60 minutes
> Upcoming Features you are working on developing (to avoid
> collision and maintain the roadmap).
>   Depending on number of topics we need to discuss - time for
> each topic will be set accordingly.
>   If you would like to participate - please respond to this
> thread and adhere to sample format below:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2) Release Blockers - 20 minutes
>   If you would like to participate - please respond to this
> thread and adhere to sample format below:
> 
> 
> 
> 
> 
> 
> 3) Community Efforts - 10+ minutes
> 
> 
> 
> 
> 
> 
> 
> Thanks
> ilya
> 
> On 8/1/17 10:55 AM, ilya wrote:
>> Hi Team
>>
>> Proposed new date for first quarterly call
>>
>> September 6th 2017, time 9AM PST.
>>
>> This is a month out and hopefully can work with most folks. If it does
>> not work with your timing - please consider finding delegates and/or
>> representatives.
>>
>> Regards
>> ilya
>>
>> On 7/20/17 6:11 AM, Wido den Hollander wrote:
>>>
>>>> Op 20 juli 2017 om 14:58 schreef Giles Sirett <giles.sir...@shapeblue.com>:
>>>>
>>>>
>>>> Hi Ilya
>>>> Sorry, I should have highlighted that User Group meeting clash before
>>>>
>>>> Under normal circumstances, I would say: its futile trying to coordinate 
>>>> calendars with such a broad audience - there will always be some people 
>>>> not available , just set a regular date, keep it rolling (build and they 
>>>> will come)
>>>>
>>>> However, for the first call, there will be at least Wido, Mike, Paul, 
>>>> Daan, me and probably a lot more PMC members not available because of the 
>>>> user group meeting
>>>>
>>>
>>> +1 I will be present!
>>>
>>> Wido
>>>
>>>> To keep it simple, I'd therefore say, go with the following day (Friday 
>>>> 18th) or the next Thursday (24th )
>>>>
>>>> I'm not even going to respond to Simons pub/phone suggestion.
>>>>
>>>> Kind regards
>>>> Giles
>>>>
>>>> giles.sir...@shapeblue.com
>>>> www.shapeblue.com

Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-09-05 Thread ilya
I assume Raja forgot to CC: dev@cloudstack.apache.org - this is the
forwarded email.

On 9/4/17 10:21 PM, Raja Pullela wrote:
> Hi Ilya,
> 
> 
> Here are few requests from us at Accelerite.  Request you to
> kindly accommodate these.
> 
> 
> Best,
> 
> Raja
> 
> Engineering, Accelerite
> 
> http://www.accelerite.com 
> 
> 
> Development Efforts:
> 
>   Feature: Affinity Group at Domain Level
>   Summary: Currently the affinity groups are scoped at account level. The
>   enhancement is to scope them at domain level or project level.
>   FS and/or JIRA: None
> 
>   Feature: Bulk Provisioning of VMs from UI
>   Summary: Ability to provision VMs in bulk from the UI.
>   FS and/or JIRA: None
> 
>   Feature: Strongswan: Multiple subnets in IKEv1
>   Summary: We lost the ability to specify multiple subnets for S2S VPN
>   using IKEv1 after we moved to Strongswan. Would like to enhance this.
>   FS and/or JIRA: None
> 
>   Feature: Hyper-V Snapshots
>   Summary: Ability to create snapshots in Hyper-V setups.
>   FS and/or JIRA: None
> 
>   Feature: CPU cores per socket
>   Summary: Ability to update CPU cores per socket for a VM.
>   FS and/or JIRA: None
> 
>   Feature: Cinder Integration
>   Summary: Integrate with Cinder block storage.
>   FS and/or JIRA: None
> 
>   Feature: OVA Import Robustness
>   Summary: Currently there are many issues with OVA import and usage. Will
>   be enhancing to fix all the issues.
>   FS and/or JIRA: None
> 
>   Feature: Hyper-V Clustering
>   Summary: Support Hyper-V clustering.
>   FS and/or JIRA: None
> 
> Community Efforts -
> 
>   Release Management
>   Summary: Accelerite has been doing release management for many CloudStack
>   releases, most recently 4.10. We will continue these efforts.
>   Lead(s): Rajani
> 
>   PR Process
>   Summary: The current PR process takes quite a long time for a PR to get
>   merged. A couple of examples: (1) there are comments on the PR that were
>   addressed by the author but no one responds; (2) there are comments from
>   reviewers and the author does not respond. The whole process can
>   potentially take a long time if either the author or the reviewers take
>   time to respond. Can we think about implementing some kind of validation
>   on the PRs? Bottom line: how can we improve?
>   TBD
> 
>  ——end--
> 
> DISCLAIMER
> ==
> This e-mail may contain privileged and confidential information which is
> the property of Accelerite, a Persistent Systems business. It is
> intended only for the use of the individual or entity to which it is
> addressed. If you are not the intended recipient, you are not authorized
> to read, retain, copy, print, distribute or use this message. If you
> have received this communication in error, please notify the sender and
> delete all copies of this message. Accelerite, a Persistent Systems
> business does not accept any liability for virus infected mails.


Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-09-05 Thread ilya
I assume Raja forgot to CC: dev@cloudstack.apache.org - this is the
forwarded email.

 Forwarded Message 
Subject:Re: [README][Quarterly Call] - CloudStack Development,
Blockers and Community Efforts
Date:   Mon, 4 Sep 2017 13:12:36 +
From:   Raja Pullela <raja.pull...@accelerite.com>
To: ilya.mailing.li...@gmail.com <ilya.mailing.li...@gmail.com>



Hi Ilya,

Here are couple of items, I would like to bring up.

Best,
Raja
Engineering, Accelerite

COMMUNITY WORK:
 
Regression Tests
    Summary: There are a number of tests in the integration folder which are
    not run. Need to group these together and start running them.
    Lead(s): Raja / looking for support
    TBD

Release Qualification
    Summary: RC to readiness takes quite a bit of time. Can we discuss ways
    to improve this?
    Lead(s): Raja / looking for support
    TBD

Performance Testing
    Summary: Add performance tests. Need hardware that we can run these
    tests on.
    Lead(s): Raja / looking for support
    TBD
DISCLAIMER
==
This e-mail may contain privileged and confidential information which is
the property of Accelerite, a Persistent Systems business. It is
intended only for the use of the individual or entity to which it is
addressed. If you are not the intended recipient, you are not authorized
to read, retain, copy, print, distribute or use this message. If you
have received this communication in error, please notify the sender and
delete all copies of this message. Accelerite, a Persistent Systems
business does not accept any liability for virus infected mails.


Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-08-29 Thread ilya
Hi All

Friendly reminder,

The call is next week Wednesday the 6th of September @ 9AM.

Please kindly submit your drafts by EOD Monday September 4th.

I need to collate it all and set time aside for each speaker.

If you miss the EOD Monday deadline - in all fairness to everyone, I can't
promise you will have an opportunity to talk.

Sorry
-ilya

On 8/25/17 9:42 AM, Paul Angus wrote:
> Hi Ilya and everyone,
> 
>  
> 
> Please find below our ShapeBlue submission.  We shouldn’t have any items
> in our list that we need to bring particular attention to on the call,
> but I’ll happily discuss any that may conflict or overlap to get a
> resolution or cooperation.
> 
>  
> 
> BTW. I’m away for the coming week.
> 
>  
> 
>  
> 
>  
> 
> COMMUNITY WORK:
> 
>  
> 
> CloudStack fat jar packaging 
> 
>     CloudStack installation has a huge dependency on the
> distro-provided Tomcat; by moving to embedded Jetty (as we already use
> Jetty for development) we can align how developers develop/test the
> mgmt server with how users run it. In addition, it will be easier to
> publish pkgs without depending on distro-provided dependencies (see
> the sketch after this entry).
> 
>     <Lead(s)>R Yadav/MA Brothier
> 
>     
> 
>  
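> A minimal sketch of such an embedded-Jetty launcher (illustrative
> only - the class name, port and war path below are hypothetical, not
> the actual CloudStack ones):
> 
>     import org.eclipse.jetty.server.Server;
>     import org.eclipse.jetty.webapp.WebAppContext;
> 
>     public class MgmtServerLauncher {
>         public static void main(String[] args) throws Exception {
>             Server server = new Server(8080);    // mgmt server HTTP port
>             WebAppContext webapp = new WebAppContext();
>             webapp.setContextPath("/client");    // UI/API context path
>             // hypothetical path to the packaged webapp
>             webapp.setWar("client/target/cloud-client-ui.war");
>             server.setHandler(webapp);
>             server.start();    // no distro-provided tomcat needed
>             server.join();
>         }
>     }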
> 
> Debian9 systemvmtemplate 
> 
>     Required update: Debian 7 is EOL. Moving to Debian 9
> will give us a newer kernel, newer packages and a smaller disk
> footprint (faster deployment and more space efficient), and systemd
> should give quicker boot times.
> 
>     <Lead(s)>R Yadav/W Hollander
> 
>     
> 
>  
> 
> 4.9.3.0 
> 
>     Backport of bug fixes to 4.9 branch and release
> 4.9.3.0.
> 
>     <Lead(s)>R Yadav
> 
>     
> 
>  
> 
> Winston
> 
>     WIP - PoC of Phase 1 (consolidated data storage) in
> ShapeBlue Lab - an Elasticsearch instance receiving test run data from
> Trillian and Blue Orangutan. Once it has a decent amount of data I'll
> come to the community for ideas and feedback (and help). By the time of
> the conf call I will have updated:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Project+Winston+-+Consolidated+Test+Data
> 
>     <Lead(s)>P Angus
> 
>     
> 
>  
> 
> Marvin Test Categories 
> 
>     Enable quicker, targeted test runs in Marvin, by
> allowing the user to specify which element(s) are touched, ie.
> Networking.
> 
>     <Lead(s)>B Stoyanov
> 
>     
> 
>  
> 
> DEVELOPMENT WORK
> 
>  
> 
>  
> 
>  
> 
> <Feature Name> Securing Agent Comms (CA Framework)
>     <Description> The aim of this feature is to provide pluggable CA
> (certificate authority) management in CloudStack that can
> fetch/provision certificates to (new) host(s) and systemvms. As a
> default CA plugin, a root CA plugin will be implemented where CloudStack
> becomes a self-signed Root Certificate Authority. Developers will have
> the option to implement further integration with their TLS/SSL cert
> providers such as letsencrypt and other vendors.
>     <FS_Jira> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Secure+Agent+Communications
>     <Lead Dev> R Yadav
>     <Release Target> 4.11
> 
> <Feature Name> Error code framework
>     <Description> Aim is to get understandable, actionable errors out of
> CloudStack, ie not "insufficient capacity".
> BitWork Software have implemented some context-based interpretation in
> their UI. We hope to work with them and others to implement a framework
> inside CloudStack.
>     <FS_Jira> TBC
>     <Lead Dev> D Hoogland
>     <Release Target> TBC
> 
> <Feature Name> Enable Dedication of Public IP range to CPVM/SSVM
>     <Description> By dedicating a small public IP range to only CPVM and
> SSVM, firewall rules can be used to control inbound access to this
> smaller range without affecting public traffic destined for the VRs.
> Such security is a general best practice, but also highly desirable in
> PCI DSS compliant environments.
>     <FS_Jira> TBC
>     <Lead Dev> TBC
>     <Release Target> Q4 2017
> 
> <Feature Name> Multi-disk OVA import + Additional OVA metadata
>     <Description> Many users add data disks to their instances; migration
> of these between platforms (ie onboarding) is greatly simplified by
> being able to import multi-disk OVAs.
>     <FS_Jira> TBC
>     <Lead Dev> N Vasquez
>     <Release Target> Q4 2017
> 
> <Feature Name> Fix and Update template checksum validation
>     <Description> The obsoletion of the md5 checksum algorithm requires
> that CloudStack checks

Re: [DISCUSS][SECURITY] Feature: Secure CloudStack Communications

2017-08-23 Thread ilya
Awesome work - thank you Rohit.

On 8/23/17 12:49 PM, Rohit Yadav wrote:
> All,
> 
> 
> No regression is seen in the smoke test run, however, I'll leave the PR open 
> for some time to gather further feedback and reviews.
> 
> 
> - Rohit
> 
> 
> From: Rohit Yadav 
> Sent: Friday, August 18, 2017 4:09:30 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS][SECURITY] Feature: Secure CloudStack Communications
> 
> All,
> 
> 
> The feature is ready for your review, please see:
> 
> https://github.com/apache/cloudstack/pull/2239
> 
> 
> Thanks and regards.
> 
> 
> From: Rohit Yadav 
> Sent: Thursday, July 13, 2017 12:59:02 PM
> To: dev@cloudstack.apache.org
> Subject: [DISCUSS][SECURITY] Feature: Secure CloudStack Communications
> 
> All,
> 
> 
> With upcoming features such as the application service (container service), 
> and existing features such as SAML, they all need some sort of certificate 
> management and the idea with the proposed feature is to build a pluggable 
> certificate authority manager (CA Manager). I would like to kick off an 
> initial discussion around how we can secure the components of CloudStack. A CA 
> service/manager that can create/provision/deploy certificates providing both 
> automated and semi-automated ways for deploying/setup of certificates using 
> in-band (ssh, command-answer pattern) and out-of-band (ssh, ansible, chef 
> etc) to CloudStack services (such as systemvm agents, KVM agents, possible 
> webservices running in systemvms, VRs etc).
> 
> 
> While we do have some APIs and mechanisms to secure user/external facing 
> services where we can use custom or failsafe SSL/TLS certificates, it's far 
> from a complete solution. The present communication between the CloudStack 
> management server, its peers and agents (served on port 8250) is a one-way 
> SSL-handshaked connection; it is not authenticated and, while encrypted, may 
> be secured by insecure certificates.
> 
> 
> As a first step, it is proposed to create a general purpose pluggable CA 
> service with a default plugin implementation where CloudStack becomes a 
> Root-CA and can issue self-signed certificates. Such certificates may be 
> consumed by CloudStack agents (CPVM/SSVM/KVM) and other components/services 
> (such as SAML, container services etc). The pluggable CA framework should 
> allow developers to extend the functionality by implementing provider plugins 
> that may work with other CA providers such as LetsEncrypt, an 
> existing/internal CA infrastructure, or other certificate vendors.
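> 
> As a rough illustration, a provider plugin might implement an interface 
> shaped like the following (names and signatures here are hypothetical, 
> for illustration only - the authoritative contract is in the FS/PR below):
> 
>     // sketch only; not the actual CloudStack interface
>     public interface CAProvider {
>         String getProviderName();
>         // issue a certificate covering the given hostnames/addresses
>         Certificate issueCertificate(List<String> domainNames, int validityDays);
>         // revoke a previously issued certificate
>         boolean revokeCertificate(BigInteger certSerial, String issuerCN);
>     }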
> 
> 
> Please see an initial FS and ideas on implementation in the following FS. 
> Looking forward to your feedback.
> 
> 
> FS: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Secure+Agent+Communications
> 
> JIRA: https://issues.apache.org/jira/browse/CLOUDSTACK-9993
> 
> 
> Regards.
> 
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> 
>   
>  
> 
> 


Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-08-17 Thread ilya
Hi All,

I'd like to pick this thread back up and see if you are joining. As a
reminder, proposed date is September 6th 2017, time 9AM PST.

If you are, please kindly respond. If you have things to discuss -
please use the outline below:

  1) Development efforts - 60 minutes
Upcoming Features you are working on developing (to avoid
collision and maintain the roadmap).
  Depending on the number of topics we need to discuss - time for
each topic will be set accordingly.
  If you would like to participate - please respond to this
thread and adhere to sample format below:













2) Release Blockers - 20 minutes
  If you would like to participate - please respond to this
thread and adhere to sample format below:






3) Community Efforts - 10+ minutes







Thanks
ilya

On 8/1/17 10:55 AM, ilya wrote:
> Hi Team
> 
> Proposed new date for first quarterly call
> 
> September 6th 2017, time 9AM PST.
> 
> This is a month out and hopefully can work with most folks. If it does
> not work with your timing - please consider finding delegates and/or
> representatives.
> 
> Regards
> ilya
> 
> On 7/20/17 6:11 AM, Wido den Hollander wrote:
>>
>>> Op 20 juli 2017 om 14:58 schreef Giles Sirett <giles.sir...@shapeblue.com>:
>>>
>>>
>>> Hi Ilya
>>> Sorry, I should have highlighted that User Group meeting clash before
>>>
>>> Under normal circumstances, I would say: it's futile trying to coordinate 
>>> calendars with such a broad audience - there will always be some people not 
>>> available , just set a regular date, keep it rolling (build and they will 
>>> come)
>>>
>>> However, for the first call, there will be at least Wido, Mike, Paul, Daan, 
>>> me and probably a lot more PMC members not available because of the user 
>>> group meeting
>>>
>>
>> +1 I will be present!
>>
>> Wido
>>
>>> To keep it simple, I'd therefore say, go with the following day (Friday 
>>> 18th) or the next Thursday (24th )
>>>
>>> I'm not even going to respond to Simons pub/phone suggestion.
>>>
>>> Kind regards
>>> Giles
>>>
>>> giles.sir...@shapeblue.com 
>>> www.shapeblue.com
>>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>>> @shapeblue
>>>   
>>>  
>>>
>>>
>>> -Original Message-
>>> From: ilya [mailto:ilya.mailing.li...@gmail.com] 
>>> Sent: 19 July 2017 22:25
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: [README][Quarterly Call] - CloudStack Development, Blockers 
>>> and Community Efforts
>>>
>>> The date conflict is noted - please provide a range of alternative dates.
>>>
>>> Thanks,
>>> ilya
>>>
>>> On 7/19/17 12:35 PM, Tutkowski, Mike wrote:
>>>> I thought about that, but sometimes you can barely hear the person 
>>>> next to you there let alone on a phone. :-)
>>>>
>>>>> On Jul 19, 2017, at 1:28 PM, Simon Weller <swel...@ena.com.INVALID> wrote:
>>>>>
>>>>> ...unless you take a conference phone to the pub 
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> 
>>>>> From: Tutkowski, Mike <mike.tutkow...@netapp.com>
>>>>> Sent: Wednesday, July 19, 2017 2:19 PM
>>>>> To: dev@cloudstack.apache.org
>>>>> Cc: us...@cloudstack.apache.org
>>>>> Subject: Re: [README][Quarterly Call] - CloudStack Development, 
>>>>> Blockers and Community Efforts
>>>>>
>>>>> Hi Ilya,
>>>>>
>>>>> I think this is a good idea and thanks for the proposed breakdown of the 
>>>>> contents of the call.
>>>>>
>>>>> One thing to note about August 17 is that there is a CloudStack Meetup in 
>>>>> London on that day. The call would be happening around 5 PM local time 
>>>>> (during part of the meetup or during the after-meetup activities). As 
>>>>> such, I believe the participants of that meetup probably won't be 
>>&

Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-08-01 Thread ilya
Hi Team

Proposed new date for first quarterly call

September 6th 2017, time 9AM PST.

This is a month out and hopefully can work with most folks. If it does
not work with your timing - please consider finding delegates and/or
representatives.

Regards
ilya

On 7/20/17 6:11 AM, Wido den Hollander wrote:
> 
>> Op 20 juli 2017 om 14:58 schreef Giles Sirett <giles.sir...@shapeblue.com>:
>>
>>
>> Hi Ilya
>> Sorry, I should have highlighted that User Group meeting clash before
>>
>> Under normal circumstances, I would say: it's futile trying to coordinate 
>> calendars with such a broad audience - there will always be some people not 
>> available , just set a regular date, keep it rolling (build and they will 
>> come)
>>
>> However, for the first call, there will be at least Wido, Mike, Paul, Daan, 
>> me and probably a lot more PMC members not available because of the user 
>> group meeting
>>
> 
> +1 I will be present!
> 
> Wido
> 
>> To keep it simple, I'd therefore say, go with the following day (Friday 
>> 18th) or the next Thursday (24th )
>>
>> I'm not even going to respond to Simons pub/phone suggestion.
>>
>> Kind regards
>> Giles
>>
>> giles.sir...@shapeblue.com 
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>   
>>  
>>
>>
>> -Original Message-
>> From: ilya [mailto:ilya.mailing.li...@gmail.com] 
>> Sent: 19 July 2017 22:25
>> To: dev@cloudstack.apache.org
>> Subject: Re: [README][Quarterly Call] - CloudStack Development, Blockers and 
>> Community Efforts
>>
>> The date conflict is noted - please provide a range of alternative dates.
>>
>> Thanks,
>> ilya
>>
>> On 7/19/17 12:35 PM, Tutkowski, Mike wrote:
>>> I thought about that, but sometimes you can barely hear the person 
>>> next to you there let alone on a phone. :-)
>>>
>>>> On Jul 19, 2017, at 1:28 PM, Simon Weller <swel...@ena.com.INVALID> wrote:
>>>>
>>>> ...unless you take a conference phone to the pub 
>>>>
>>>>
>>>>
>>>>
>>>> 
>>>> From: Tutkowski, Mike <mike.tutkow...@netapp.com>
>>>> Sent: Wednesday, July 19, 2017 2:19 PM
>>>> To: dev@cloudstack.apache.org
>>>> Cc: us...@cloudstack.apache.org
>>>> Subject: Re: [README][Quarterly Call] - CloudStack Development, 
>>>> Blockers and Community Efforts
>>>>
>>>> Hi Ilya,
>>>>
>>>> I think this is a good idea and thanks for the proposed breakdown of the 
>>>> contents of the call.
>>>>
>>>> One thing to note about August 17 is that there is a CloudStack Meetup in 
>>>> London on that day. The call would be happening around 5 PM local time 
>>>> (during part of the meetup or during the after-meetup activities). As 
>>>> such, I believe the participants of that meetup probably won't be 
>>>> attending the call.
>>>>
>>>> Perhaps we should consider another day?
>>>>
>>>> Talk to you later,
>>>> Mike
>>>>
>>>>> On Jul 19, 2017, at 12:59 PM, ilya <ilya.mailing.li...@gmail.com> wrote:
>>>>>
>>>>> Hi Devs and Users
>>>>>
>>>>> Hope this message finds you well,
>>>>>
>>>>> As mentioned earlier, we would like to start with quarterly calls to 
>>>>> discuss the direction of cloudstack project.
>>>>>
>>>>> I propose to split the 90 minute call into 3 topics:
>>>>>
>>>>>   1) Development efforts - 60 minutes
>>>>>   Upcoming Features you are working on developing (to avoid 
>>>>> collision and maintain the roadmap).
>>>>> Depending on number of topics we need to discuss - time for 
>>>>> each topic will be set accordingly.
>>>>> If you would like to participate - please respond to this 
>>>>> thread and adhere to sample format below:
>>>>>
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>
>>>>>   2) Release Blockers - 20 minutes
>>>>> If you would like to participate - please respond to this 
>>>>> thread and adhere to sample format below:
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>
>>>>>   3) Community Efforts - 10+ minutes
>>>>>   
>>>>>   
>>>>>   
>>>>>   
>>>>>
>>>>> The proposed date and time  - Thursday August 17th 9AM PT.
>>>>>
>>>>> Minutes will be taken and posted on dev list. Due to number of 
>>>>> things we need to discuss - we have to keep the call very 
>>>>> structured, each topic - timed and very high level.
>>>>> If there are issues and/or suggestions, we will note them down in a few
>>>>> sentences, identify interested parties and have them do a "post"
>>>>> discussion on the mailing list.
>>>>>
>>>>> Looking forward to your comments,
>>>>>
>>>>> Regards,
>>>>> ilya
>>>>>


Re: Introduction

2017-08-01 Thread ilya
Welcome!

On 7/31/17 8:07 AM, Nicolas Vazquez wrote:
> Hi all,
> 
> 
> My name is Nicolas Vazquez, today is my first day at @ShapeBlue as a Software 
> Engineer. I am based in Montevideo, Uruguay and I've been working with 
> CloudStack since mid-2015. Looking forward to working with you!
> 
> 
> Thanks,
> 
> Nicolas
> 
> nicolas.vazq...@shapeblue.com 
> www.shapeblue.com
> ,   
> @shapeblue
>   
>  
> 
> 


Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-07-19 Thread ilya
The date conflict is noted - please provide a range of alternative dates.

Thanks,
ilya

On 7/19/17 12:35 PM, Tutkowski, Mike wrote:
> I thought about that, but sometimes you can barely hear the person next to 
> you there let alone on a phone. :-)
> 
>> On Jul 19, 2017, at 1:28 PM, Simon Weller <swel...@ena.com.INVALID> wrote:
>>
>> ...unless you take a conference phone to the pub 
>>
>>
>>
>>
>> 
>> From: Tutkowski, Mike <mike.tutkow...@netapp.com>
>> Sent: Wednesday, July 19, 2017 2:19 PM
>> To: dev@cloudstack.apache.org
>> Cc: us...@cloudstack.apache.org
>> Subject: Re: [README][Quarterly Call] - CloudStack Development, Blockers and 
>> Community Efforts
>>
>> Hi Ilya,
>>
>> I think this is a good idea and thanks for the proposed breakdown of the 
>> contents of the call.
>>
>> One thing to note about August 17 is that there is a CloudStack Meetup in 
>> London on that day. The call would be happening around 5 PM local time 
>> (during part of the meetup or during the after-meetup activities). As such, 
>> I believe the participants of that meetup probably won't be attending the 
>> call.
>>
>> Perhaps we should consider another day?
>>
>> Talk to you later,
>> Mike
>>
>>> On Jul 19, 2017, at 12:59 PM, ilya <ilya.mailing.li...@gmail.com> wrote:
>>>
>>> Hi Devs and Users
>>>
>>> Hope this message finds you well,
>>>
>>> As mentioned earlier, we would like to start with quarterly calls to
>>> discuss the direction of cloudstack project.
>>>
>>> I propose to split the 90 minute call into 3 topics:
>>>
>>>   1) Development efforts - 60 minutes
>>>   Upcoming Features you are working on developing (to avoid
>>> collision and maintain the roadmap).
>>> Depending on number of topics we need to discuss - time for
>>> each topic will be set accordingly.
>>> If you would like to participate - please respond to this
>>> thread and adhere to sample format below:
>>>
>>>   
>>>   
>>>   
>>>   
>>>   
>>>   
>>>   
>>>   
>>>   
>>>   
>>>   
>>>
>>>   2) Release Blockers - 20 minutes
>>> If you would like to participate - please respond to this
>>> thread and adhere to sample format below:
>>>   
>>>   
>>>   
>>>   
>>>   
>>>
>>>   3) Community Efforts - 10+ minutes
>>>   
>>>   
>>>   
>>>   
>>>
>>> The proposed date and time  - Thursday August 17th 9AM PT.
>>>
>>> Minutes will be taken and posted on dev list. Due to number of things we
>>> need to discuss - we have to keep the call very structured, each topic -
>>> timed and very high level.
>>> If there are issues and/or suggestions, we will note them down in a few
>>> sentences, identify interested parties and have them do a "post"
>>> discussion on the mailing list.
>>>
>>> Looking forward to your comments,
>>>
>>> Regards,
>>> ilya
>>>


[README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-07-19 Thread ilya
Hi Devs and Users

Hope this message finds you well,

As mentioned earlier, we would like to start with quarterly calls to
discuss the direction of the CloudStack project.

I propose to split the 90 minute call into 3 topics:

1) Development efforts - 60 minutes
Upcoming Features you are working on developing (to avoid
collision and maintain the roadmap).
  Depending on the number of topics we need to discuss - time for
each topic will be set accordingly.
  If you would like to participate - please respond to this
thread and adhere to sample format below:











   

2) Release Blockers - 20 minutes
  If you would like to participate - please respond to this
thread and adhere to sample format below:  






3) Community Efforts - 10+ minutes





The proposed date and time  - Thursday August 17th 9AM PT.

Minutes will be taken and posted on the dev list. Due to the number of
things we need to discuss, we have to keep the call very structured; each
topic timed and very high level.
If there are issues and/or suggestions, we will note them down in a few
sentences, identify interested parties and have them do a "post"
discussion on the mailing list.

Looking forward to your comments,

Regards,
ilya



[NOTICE] Meeting with Accelerite Leadership

2017-07-10 Thread ilya musayev
Dear CloudStackers,

Last week, John Kinsella and I were supposed to meet with the Accelerite
leadership team. Unfortunately John could not make it - so I was alone.

We discussed ways we can improve community collaboration and leverage
Accelerite's resources to align and drive the larger community agenda,
including an extended roadmap.

Many topics were mentioned; below is a summary of our discussion. I
will list things in the order I see as important.


---
1) Proposal was made to have a quarterly call (or more often as needed)
with all interested parties to discuss:
Upcoming Features you are working on developing (to avoid collision
and
maintain the roadmap)
Blockers that are impacting release and adoption
Other topics

The length of the call would be 90 minutes. Each party will get a
fair
amount of time. The agenda will be collated and presented prior to the
call with a link to FS on Confluence and time allotted for each topic.

Minutes will be taken and posted on the dev list. If there are issues and/or
suggestions, we will note them down in a few sentences, identify interested
parties and have them do a "post" discussion on the mailing list.

The proposed date and time  - Thursday August 17th 9AM PT

--
2) Accelerite is considering funding a position for a person who will be
working within the community - as a community manager: helping organize and
facilitate discussions, making sure Confluence and JIRA are up to date, and
helping new users by answering basic questions or finding the right
individual to assist with a solution. While funded by Accelerite - it must
be clear that the person is working with/for the Apache CloudStack project.

3) Marketing was mentioned; I suggested we do more press releases - and
possibly make use of interns.

4) OpenStack vs CloudStack (unbiased technology comparison) is a
common question - we need to come up with something that can help justify
Apache CloudStack to clients' leadership.

5) Cinder integration with CloudStack was mentioned - but no solid plans
yet.

6) Creating appliances of CloudStack - that are ready to be consumed, so
users can spin up nested VMs to try CloudStack effortlessly.

7) CloudStack Template Repository (plugin) - there is code written for it by
Citrix that resides on ASF git - but for some reason it was dropped or
never completed. If we can give users a rich marketplace of appliances to
consume - we will certainly get a good edge. This can improve adoption.

8) Meetups - we need to re-kickstart this initiative within the SF Bay Area
and stream it to other locations/meetups.

9) Demo environment of CloudStack. David mentioned Citrix-donated gear
is in one of the ASF locations - but sitting idle. I proposed we make use of
it and let new CloudStack explorers try it out - without the hassle of
deploying it.

10) If we can get CloudStack into the EPEL/Fedora and Ubuntu upstream
repositories - it will help with adoption as well.

Please let me know if you would be interested in item #1, which is the
quarterly meeting. The proposed time is 9am PST, August 17th.

I will help setting up the first few initial calls and be a moderator.

Looking forward to your comments

Regards
ilya


[JOB OPPORTUNITY] LeaseWeb is looking for CloudStack Developer

2017-07-10 Thread ilya
Hi Folks,

Promised to help LeaseWeb recruiter with posting Job to "dev" and "user"
list - apology for cross posting.

Please reach out to recruiter directly, job description can be seen here:

https://drive.google.com/open?id=0B06G3DVBuP9zQXRwMzVKZTJOVlU

Recruiter for this position can be reached here:
https://www.linkedin.com/in/darwinbpoveda/

Regards
ilya


Re: Weekly update on GSoC project - Adding new noVNC console

2017-06-09 Thread ilya
Thanks, please keep us posted.

On 6/6/17 11:49 PM, Rohit Yadav wrote:
> Good work Sachin.
> 
> 
> - Rohit
> 
> 
> From: sachin patil 
> Sent: 06 June 2017 09:58:27
> To: dev
> Subject: Weekly update on GSoC project - Adding new noVNC console
> 
> Hello All,
> 
>  I have started integrating the noVNC console into CloudStack. As my
> mentor Syed suggested, it will be available alongside the existing console
> and can be accessed by passing a new parameter (websocket=true) to the
> existing console access URL.
> 
> I have created a new context "/novnc" to handle the noVNC request and a
> noVNC handler for the same.
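> 
> For illustration, the access URL might then look like this (hypothetical
> host and VM id, shown only to make the new parameter concrete):
> 
>     http://<mgmt-server>:8080/client/console?cmd=access&vm=<vm-uuid>&websocket=true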
> 
> I am  unable to push my local commits to my fork due to slow internet but
> will try to push them soon.
> 
> 
> Regards,
> Sachin Patil
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
> 


Re: Extend CloudStack for a new hypervisor

2017-05-26 Thread ilya
Cool concept.

Basically - you are trying to do what we do with VMware and XenServer. We
have direct and indirect agent models.

CloudStack talks to VMware vCenter directly for all of its operations
via the indirect agent model.

Look into either VMware or XenServer as a point of reference.

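For orientation, the agent-side entry point a hypervisor plugin implements
is the ServerResource interface. A very rough sketch (simplified from
memory - method names and bodies below are illustrative; treat the
VMware/XenServer plugins in the tree as the authoritative reference):

    // sketch only; remaining interface methods omitted for brevity
    public class NewHypervisorResource implements ServerResource {
        @Override
        public StartupCommand[] initialize() {
            // report host capabilities/capacity when the host connects;
            // real plugins return a populated StartupRoutingCommand here
            return new StartupCommand[0];
        }

        @Override
        public Answer executeRequest(Command cmd) {
            // translate CloudStack commands (StartCommand, StopCommand,
            // CopyCommand, ...) into calls against the hypervisor's API
            return Answer.createUnsupportedCommandAnswer(cmd);
        }
    }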


On 5/18/17 2:29 PM, John Smith wrote:
> Greetings!
> 
> I have a need to extend CloudStack to support an additional hypervisor.
> This is not something I consider strategic for CloudStack itself, but I
> have a project with a very specific need.
> 
> I have a development background but am not an active developer right now
> ... so looking forward to getting back in the saddle!  I've never developed
> against the CloudStack tree before.
> 
> I can't find any docs on how one would introduce support for a new
> hypervisor (eg. what classes, methods, etc, need to be implemented,
> extended, etc) and checking the source tree I can't easily see if there is
> a base to build from.  I would appreciate any pointers about where to start
> looking to save me going through the entire tree from scratch.
> 
> The standard CloudStack concepts should be easy enough (ha!) to map 1:1 to
> this additional hypervisor (including primary & secondary storage, router &
> secondary storage VMs, the networking concepts, etc) so I'm hoping that I
> can simply implement it like a VMware or Xen backend ...
> 
> Thanks in advance!
> 
> John.
> 


Re: some issues after upgrade cloudstack to 4.9.2

2017-05-26 Thread ilya
Marc,

I recall there was a configuration setting for the VMware NIC during
template registration - why are you changing the code?

VMware is fairly simple in comparison. When you register a template -
you can preset the NIC like so (this is a cloudmonkey example, but you can
do the same in the UI):

The key is
--
details[0].nicAdapter=Vmxnet3

-

register template format=ova hypervisor=vmware ispublic=true
isfeatured=true passwordenabled=false details[0].rootDiskController=scsi
details[0].nicAdapter=Vmxnet3 details[0].keyboard=us
details[0].keyboard.typematicMindelay=200 ostypeid=148 zoneid=-1
displaytext=VM_TMPL_OL63-88 name=VM_TMPL_OL63-88
url=http://mydomain.com/ovas/ol-6.3-88.ova

Regards,
ilya


On 5/25/17 3:06 AM, Marc Poll Garcia wrote:
> Hello again,
> 
> About the CloudStack version upgrade, we've built it from source, to
> create a new self-compiled 4.9.2 package with "noredist", in order to
> achieve VMware hypervisor compatibility.
> 
> During that process, we changed some files, trying to get Vmxnet3 as
> the default virtual NIC instead of E1000 when new instances are created.
> 
> Unfortunately it does not work either.
> 
> That's what we did, in /tmp/apache-cloudstack-4.9.2.0-src.
> 
> Changed the fallback:
> 
>     // Fallback to E1000 if no specific nicAdapter is passed
>     VirtualEthernetCardType nicDeviceType = VirtualEthernetCardType.E1000;
> 
> apache-cloudstack-4.9.2.0-src]# cat ./plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java | grep VirtualEthernetCardType
>     import com.cloud.hypervisor.vmware.mo.VirtualEthernetCardType;
>     VirtualEthernetCardType nicDeviceType = VirtualEthernetCardType.Vmxnet3;
> 
> 
> 
> Any help on it?
> 
> More info:
> http://users.cloudstack.apache.narkive.com/LTPpdCh3/vmxnet3-by-default
> 
> Many thanks!
> 
> Kind regards.
> 
> 2017-05-25 11:14 GMT+02:00 Marc Poll Garcia <marc.poll.gar...@upcnet.es>:
> 
>> Hi all,
>>
>> we have just upgraded our cloud environment from CloudStack 4.5.2 to
>> 4.9.2 and we're experiencing some issues since then.
>>
>> Our setup is the following one:
>>
>> - 1 x CloudStack management server
>> - 1 x database server with the CloudStack database on it
>> - 2 x VMware hypervisors (hosts)
>>
>> I'm performing a list of tests:
>>
>> - Sometimes, and randomly, the console from instances does not load.
>> - Not possible to upload a template from local.
>>
>> We see the following on log:
>>
>> 2017-05-25 09:00:31,665 ERROR [c.c.s.ImageStoreUploadMonitorImpl]
>> (Upload-Monitor-1:ctx-0b3bf6e9) (logid:e9c82a0f) Template
>> b87459ac-8fbe-4b34-ae25-21235c3fcd1d failed to upload due to operation
>> timed out
>> 2017-05-25 09:02:18,265 ERROR [c.c.c.ClusterServiceServletContainer]
>> (Thread-11:null) (logid:) Unexpected exception
>> 2017-05-25 09:03:35,940 ERROR [c.c.c.ClusterManagerImpl] (main:null)
>> (logid:) Unable to ping management server at 192.168.100.2:9090
>> due to ConnectException
>>
>>
>> Why is it happening?
>> It does not happen on our old 4.5.2 version.
>>
>> Is there any way to fix it - by changing a global parameter, or is it a
>> permissions issue?
>>
>> We need a clue about this because it is affecting our production
>> environment.
>>
>> Thanks in advance.
>>
>> Kind regards.
>>
> 


Re: [VOTE] Apache Cloudstack should join the gitbox experiment.

2017-04-10 Thread ilya
+1

On 4/10/17 9:22 AM, Daan Hoogland wrote:
> In the Apache foundation an experiment has been going on to host
> mirrors of Apache projects on GitHub with more write access than just
> the mirror-bot. For those projects committers can merge on GitHub
> and put labels on PRs.
> 
> I move to have the project added to the gitbox experiment
> please cast your votes
> 
> +1 CloudStack should be added to the gitbox experiment
> +-0 I don't care
> -1 CloudStack shouldn't be added to the gitbox experiment and give your 
> reasons
> 
> thanks,
> 


Re: Need help in getting CentOS 7 templates to run on Cloudstack 4.9 and VMWare

2017-04-10 Thread ilya
Nux

FYI - I had a similar issue - the password was not set properly (or at
all). I tracked it down to a VR problem on 4.5 with Ubuntu images.

If possible, please enable a static username/password as a fallback in
case the password update via CloudStack fails.

Thanks
ilya


On 3/31/17 1:08 PM, Nux! wrote:
> Hm, ok, so this is a corner case I'll need to cover then.
> But how did the password/sshkey feature work, if at 
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Syed Ahmed" <sah...@cloudops.com>
>> To: "dev" <dev@cloudstack.apache.org>
>> Sent: Friday, 31 March, 2017 19:10:22
>> Subject: Re: Need help in getting CentOS 7 templates to run on Cloudstack 
>> 4.9 and VMWare
> 
>> I'm using a shared network so the VR is not the gateway.
>>
>> On Fri, Mar 31, 2017 at 12:49 PM, Nux! <n...@li.nux.ro> wrote:
>>> Syed,
>>>
>>> I just checked and centos user is added to sudoers, if it was not added to 
>>> your
>>> instance, then cloud-init did not complete properly.
>>> I have seen this in the past when the data source is not reached properly.
>>> I would double check the cloud-init logs if I were you, make sure eth0 was 
>>> up
>>> properly and the VR accessible.
>>>
>>> http://storage1.static.itmages.com/i/17/0331/h_1490978820_8688282_efdf2d86f5.png
>>>
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>>
>>> Nux!
>>> www.nux.ro
>>>
>>> - Original Message -
>>>> From: "Nux!" <n...@li.nux.ro>
>>>> To: "dev" <dev@cloudstack.apache.org>
>>>> Sent: Friday, 31 March, 2017 17:01:43
>>>> Subject: Re: Need help in getting CentOS 7 templates to run on Cloudstack 
>>>> 4.9
>>>> and VMWare
>>>
>>>> Thanks, I'll check.
>>>> Cloud-init is supposed to add the user to sudo.
>>>>
>>>> --
>>>> Sent from the Delta quadrant using Borg technology!
>>>>
>>>> Nux!
>>>> www.nux.ro
>>>>
>>>> - Original Message -
>>>>> From: "Syed Ahmed" <sah...@cloudops.com>
>>>>> To: "dev" <dev@cloudstack.apache.org>
>>>>> Sent: Friday, 31 March, 2017 16:54:21
>>>>> Subject: Re: Need help in getting CentOS 7 templates to run on Cloudstack 
>>>>> 4.9
>>>>> and VMWare
>>>>
>>>>> Hi Nux,
>>>>>
>>>>> One of the things that I've seen is that the user centos is not added
>>>>> to the sudoers.
>>>>>
>>>>> On Fri, Mar 31, 2017 at 8:06 AM, Nux! <n...@li.nux.ro> wrote:
>>>>>> Excellent, let me know if you hit any more issues.
>>>>>>
>>>>>> --
>>>>>> Sent from the Delta quadrant using Borg technology!
>>>>>>
>>>>>> Nux!
>>>>>> www.nux.ro
>>>>>>
>>>>>> - Original Message -
>>>>>>> From: "Syed Ahmed" <sah...@cloudops.com>
>>>>>>> To: "dev" <dev@cloudstack.apache.org>
>>>>>>> Sent: Friday, 31 March, 2017 12:58:21
>>>>>>> Subject: Re: Need help in getting CentOS 7 templates to run on 
>>>>>>> Cloudstack 4.9
>>>>>>> and VMWare
>>>>>>
>>>>>>> Hey Nux,
>>>>>>>
>>>>>>> It worked! Thanks for fixing this. I am able to ping the VM now.
>>>>>>>
>>>>>>> On Fri, Mar 31, 2017 at 4:17 AM, Nux! <n...@li.nux.ro> wrote:
>>>>>>>> Syed,
>>>>>>>>
>>>>>>>> I am aware of the renaming issue and avoid it, if you check my 
>>>>>>>> kickstart I
>>>>>>>> specifically add biosdevname=0 and net.ifnames=0
>>>>>>>> http://jenkins.openvm.eu/cloudstack/config/centos/centos7-vmware.cfg
>>>>>>>>
>>>>>>>> However yesterday Rohit brought to my attention the ova generation 
>>>>>>>> script
>>>>>>>> inserts a e1000 eth0 which apparently breaks things in certain 
>>>>>>>> situations.
>>>>>>>> https://github.com/apache/cloudstack/pull/2022#

Re: ACS - Some VMs unable to get DHCP IP from VR

2016-11-07 Thread ilya
Can you move the router VM onto the same hypervisor as the guest VM (or
the guest VM to the same hypervisor as the router VM)? If it works, then
move the router VM out to another hypervisor within the same cluster (but
on the same switch).

Are you running vSphere? I've seen a similar problem where ARPs would not
make it through due to some intricacy with VMware and a Cisco (or perhaps
Juniper) switch upstream. The solution was to console in to the vRouter
and ping a host 2 hops away. That would fix the ARP issue (albeit
temporarily).

Let me know if it helps,

Regards
ilya

On 11/7/16 12:35 PM, Cloud List wrote:
> Hi Chiradeep and Wei, thanks for your reply.
> 
> Wei, here's the result you requested:
> 
> root@r-4155-VM:/var/log2# cat /etc/dnsmasq.conf |grep -v "^#" |grep -v "^$"
> domain-needed
> bogus-priv
> resolv-file=/etc/dnsmasq-resolv.conf
> local=/cs1cloud.internal/
> except-interface=eth1
> except-interface=eth2
> except-interface=lo
> no-dhcp-interface=eth1
> no-dhcp-interface=eth2
> expand-hosts
> domain=cs1cloud.internal
> domain=cs1cloud.internal
> domain=cs1cloud.internal
> dhcp-range=X.Y.202.1,static
> dhcp-hostsfile=/etc/dhcphosts.txt
> dhcp-ignore=tag:!known
> dhcp-option=15,"cs1cloud.internal"
> dhcp-option=vendor:MSFT,2,1i
> dhcp-boot=pxelinux.0
> enable-tftp
> tftp-root=/opt/tftpboot
> dhcp-lease-max=2100
> domain=cs1cloud.internal
> log-dhcp
> log-facility=/var/log2/dnsmasq.log
> conf-dir=/etc/dnsmasq.d
> dhcp-optsfile=/etc/dhcpopts.txt
> dhcp-option=option:router,X.Y.202.1
> dhcp-option=6,X.Y.202.2,8.8.8.8,8.8.4.4
> dhcp-client-update
> 
> We actually have 3 class-C subnets assigned to the network.
> 
> X.Y.202.0/24
> X.Y.203.0/24
> Z.A.107.0/24
> 
> After enabling log-dhcp as per Chiradeep Vittal, I can see that the
> available DHCP subnet is only the first subnet.
> 
> Nov  7 20:27:49 dnsmasq-dhcp[22462]: 1979424125 available DHCP subnet:
> X.Y.202.2/255.255.255.0
> Nov  7 20:27:49 dnsmasq-dhcp[22462]: 1979424125 available DHCP subnet:
> X.Y.202.1/255.255.255.0
> Nov  7 20:27:49 dnsmasq-dhcp[22462]: 1979424125 client provides name:
> Debian-81-64b
> Nov  7 20:27:49 dnsmasq-dhcp[22462]: 1979424125 DHCPDISCOVER(eth0)
> 06:c5:38:01:13:40 ignored
> Nov  7 20:27:51 dnsmasq-dhcp[22462]: 2670909966 available DHCP subnet:
> X.Y.202.2/255.255.255.0
> Nov  7 20:27:51 dnsmasq-dhcp[22462]: 2670909966 available DHCP subnet:
> X.Y.202.1/255.255.255.0
> Nov  7 20:27:51 dnsmasq-dhcp[22462]: 2670909966 client provides name:
> WIN-H4INMOBBRJA
> Nov  7 20:27:51 dnsmasq-dhcp[22462]: 2670909966 vendor class: MSFT 5.0
> Nov  7 20:27:51 dnsmasq-dhcp[22462]: 2670909966 DHCPDISCOVER(eth0)
> 06:31:ac:01:13:f0 ignored
> 
> Coincidentally, all affected VMs are coming from the second subnet:
> X.Y.203.0/24, while the third subnet is rarely used.
> 
> Do we need to specifically include the second and third subnets into the
> dnsmasq.conf file? I tried adding below line:
> 
> dhcp-range=X.Y.203.1,static
> 
> so it will become:
> 
> dhcp-range=X.Y.202.1,static
> dhcp-range=X.Y.203.1,static
> 
> but it doesn't seem to work.
> 
> Any advice is highly appreciated.
> 
> Thank you.
> 
> On Tue, Nov 8, 2016 at 4:19 AM, Wei ZHOU <ustcweiz...@gmail.com> wrote:
> 
>> can you paste the result of following command?
>>
>> cat /etc/dnsmasq.conf |grep -v "^#" |grep -v "^$"
>>
>> -Wei
>>
>>
>> 2016-11-07 20:27 GMT+01:00 Cloud List <cloud-l...@sg.or.id>:
>>
>>> Hi Wei,
>>>
>>> In addition,
>>>
>>> The VR is serving a shared not isolated network, meaning the IP it serves
>>> is 'guest' not 'public' IP. Will that make a difference on the iptables
>>> command we need to execute?
>>>
>>> Looking forward to your reply, thank you.
>>>
>>> Cheers.
>>>
>>>
>>> On Tue, Nov 8, 2016 at 3:19 AM, Cloud List <cloud-l...@sg.or.id> wrote:
>>>
>>>> Hi Wei and Ozhan,
>>>>
>>>> Thanks for your reply.
>>>>
>>>> The problem doesn't affect only Debian-based guest VMs, but also
>> affected
>>>> some Windows and Ubuntu-based VMs as well. I have executed the command
>> on
>>>> the VR and reset the NIC of the guest VM, but unfortunately the issue
>>> still
>>>> persists.
>>>>
>>>> iptables -t mangle -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM
>>>> --checksum-fill
>>>>
>>>> After issuing the above command on VR and reset the NIC on guest vm
>>>> (ifdown eth0, ifup eth0):
>>>>
>>&g

Re: patchviasocket seems to be broken with qemu 2.3(+?)

2016-10-28 Thread ilya
Hi Linas

Thank you for posting the solution, I've seen this issue in my lab env
as well.

Much appreciated.
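
For anyone else hitting this: the patching mechanism essentially boils
down to writing the patch payload into the VM's unix chardev socket. A
minimal sketch (the socket path and payload below are hypothetical; the
real logic lives in patchviasocket.py in the KVM agent):

    import socket

    SOCK_PATH = "/var/lib/libvirt/qemu/v-123-VM.agent"  # hypothetical VM
    payload = "cmdline:..."  # boot args the systemvm expects

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(5)  # fail fast instead of hanging on a broken chardev
    s.connect(SOCK_PATH)
    s.sendall(payload.encode())
    s.close()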

Regards
ilya

On 10/26/16 4:44 AM, Linas Žilinskas wrote:
> So after some investigation I've found out that qemu 2.3.0 is indeed
> broken, at least the way CS uses the qemu chardev/socket.
> 
> Not sure in which specific version it happened, but it was fixed in
> 2.4.0-rc3, specifically noting that CloudStack 4.2 was not working.
> 
> qemu git commit: 4bf1cb03fbc43b0055af60d4ff093d6894aa4338
> 
> Also attaching the patch from that commit.
> 
> 
> For our own purposes i've included the patch to the qemu-kvm-ev package
> (2.3.0) and all is well.
> 
> 
> On 2016-10-20 09:59, Linas Žilinskas wrote:
>>
>> Hi.
>>
>> We have made an upgrade to 4.9.
>>
>> Custom build packages with our own patches, which in my mind (i'm the
>> only one patching those) should not affect the issue i'll describe.
>>
>> I'm not sure whether we didn't notice it before, or it's actually
>> related to something in 4.9
>>
>> Basically our system vm's were unable to be patched via the qemu
>> socket. The script simply error'ed out with a timeout while trying to
>> push the data to the socket.
>>
>> Executing it manually (with cmd line from the logs) resulted the same.
>> I even tried the old perl variant, which also had same result.
>>
>> So finally we found out that this issue happens only on our HVs which
>> run qemu 2.3.0, from the centos 7 special interest virtualization
>> repo. Other ones that run qemu 1.5, from official repos, can patch the
>> system vms fine.
>>
>> So I'm wondering if anyone has tested 4.9 with KVM with qemu >= 2.x? Maybe
>> it's something else special in our setup, e.g. we're running the HVs
>> from a preconfigured netboot image (pxe), but all of them, including
>> those with qemu 1.5, so i have no idea.
>>
>>
>> Linas Žilinskas
>> Head of Development
>> website <http://www.host1plus.com/> facebook
>> <https://www.facebook.com/Host1Plus> twitter
>> <https://twitter.com/Host1Plus> linkedin
>> <https://www.linkedin.com/company/digital-energy-technologies-ltd.>
>>
>> Host1Plus is a division of Digital Energy Technologies Ltd.
>>
>> 26 York Street, London W1U 6PZ, United Kingdom
>>
>>  
>>
> 
> Linas Žilinskas
> Head of Development
> website <http://www.host1plus.com/> facebook
> <https://www.facebook.com/Host1Plus> twitter
> <https://twitter.com/Host1Plus> linkedin
> <https://www.linkedin.com/company/digital-energy-technologies-ltd.>
> 
> Host1Plus is a division of Digital Energy Technologies Ltd.
> 
> 26 York Street, London W1U 6PZ, United Kingdom
> 
>  
> 


Re: 4.8, 4.9, and master Testing Status

2016-10-04 Thread ilya
John and Team

Thanks for the amazing work and for contributing back.

Regards,
ilya

On 10/3/16 9:48 PM, John Burwell wrote:
> All,
> 
> A quick update on our progress to pass all smoke tests aka super green.  We 
> have reduced the failures and errors for XenServer from 93 to 9 and for 
> VMware from 51 to 14.  A CentOS 6/CentOS 6 KVM run is currently executing.  
> Based on manual tests/fixes, we are expecting to be the first super green 
> configuration.  We have also found the following additional defects:
> 
>   * CLOUDSTACK-9528 [2]: SSVM Downloads (built-in) Template Multiple Times 
>   * CLOUDSTACK-9529 [3]: Marvin Tests Do Not Clean Up Properly
> 
> 9528 is causing XenServer environments to fail to install and startup 
> cleanly.  A lack of cleanup described in 9529 is causing XenServer to exhaust 
> available resources before a test run completes.  We believe that resolution 
> of these issues will address most, if not all, of the XenServer issues.
> 
> Thanks,
> -John
> 
> [1]: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65873020
> [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-9528
> [3]: https://issues.apache.org/jira/browse/CLOUDSTACK-9529
> 
>>
> john.burw...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London VA WC2N 4HSUK
> @shapeblue
>   
>  
> 
> On Sep 30, 2016, at 2:40 AM, John Burwell <john.burw...@shapeblue.com> wrote:
>>
>> All,
>>
>> Using blueorganutan, Rohit, Murali, Boris, Paul, Abhi, and I are executing 
>> the smoke tests for the 4.8, 4.9, and master branches against the following 
>> environments:
>>
>>  * CentOS 7.2 Management Server + VMware 5.5u3 + NFS Primary/Secondary 
>> Storage
>>  * CentOS 7.2 Management Server + XenServer 6.5SP1 + NFS 
>> Primary/Secondary Storage
>>  * CentOS 7.2 Management Server + CentOS 7.2 KVM + NFS Primary/Secondary 
>> Storage
>>
>> Thus far, we have found seven (7) test case and/or CloudStack defects in the 
>> VMware run for the 4.8 branch [1].  We are currently triaging fifty-one (51) 
>> new issues from the XenServer run to determine which issues were 
>> environmental and defects.  This triage work should be completed today (30 
>> Sept 2016).  Finally, we are awaiting the results of the KVM run.  
>>
>> We are using PR #1692 [2] as the master tracking PR to fix all defects in 
>> the 4.8 branch.  Our goal is to get all non-skip tests to pass and then 
>> merge this PR to the 4.8, 4.9, and master.  For each bug, we are creating a 
>> JIRA ticket and adding a commit to the PR.  Currently, the branch for this 
>> PR is in the shapeblue repo (the branch started with a much smaller fix from 
>> Paul and we just kept using it).  However, if others are interested in 
>> picking up defects, we will move it to ASF repo.  Once the 4.8 branch is 
>> stabilized, we plan to re-execute these tests on the 4.9 and master branches 
>> as we expect that the 4.9 and master branches will have additional issues.
>>
>> Since we are in a test freeze, I propose that no further PRs are merged to 
>> the 4.8, 4.9, and master branches until they are stabilized.  The following 
>> PRs will be re-based, re-tested, and merged to 4.8, 4.9.1.0, and/or 4.10.0.0 
>> post-stabilization:
>>
>>  * 1696
>>  * 1694
>>  * 1684
>>  * 1681
>>  * 1680
>>  * 1678
>>  * 1677
>>  * 1676
>>  * 1674
>>  * 1673
>>  * 1642
>>  * 1624
>>  * 1615
>>  * 1600
>>  * 1545
>>  * 1542
>>
>> I recognize that this a large backlog of contributions ready to merge, and 
>> apologize for asking folks to wait.  However, given current state of the 
>> release branches, merging them before we complete fixing the smoke tests 
>> would create a moving target that further delay stabilization.  
>>
>> Obviously, it is unlikely we will make the 10 October 2016 release date for 
>> the 4.8.2.0, 4.9.1.0, and 4.10.0.0 releases.  At this point, it is difficult 
>> to estimate the size of the schedule slip because we still have issues to 
>> triage and test runs to complete.  I have created a wiki page [2] to track 
>> progress on this effort.  
>>
>> Does this approach sound reasonable?  Any suggestions to speed up this 
>> process will be greatly appreciated as stabilizing and re-opening these 
>> branches stable ASAP is critical for the community.
>>
>> Thanks,
>> -John
>>
>> [1]: 
>> https://issues.apache.org/jira/browse/CLOUDSTACK-9518?jql=project%20%3D%20CL

Re: [DISCUSS] Replacing the VR

2016-09-17 Thread ilya
Our options become much better if we consider BSD based routers.

Would that be on the table?

https://en.wikipedia.org/wiki/List_of_router_and_firewall_distributions


On 9/16/16 12:04 PM, Will Stevens wrote:
> Ya, your points are all valid Simon.  The lack of standard libraries to
> handle a lot of the details is a problem.  I don't think it is an
> unsolvable problem, but if we spend the time to do that, will we have
> something that will work for us for the next 5 years?  This may be the
> shortest path to getting us where we need to be for the time being.
> 
> What is the best case scenario for the VR going forward which will last us
> the next 5 years?  Maybe we just clean up what we have to do a major
> restructuring of the pieces and how they are implemented.  We need to keep
> in mind how maintainable this implementation is because that is going to be
> key going forward IMO.
> 
> 
> 
> *Will STEVENS*
> Lead Developer
> 
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
> 
> On Fri, Sep 16, 2016 at 2:29 PM, Simon Weller <swel...@ena.com> wrote:
> 
>> I think our other option is to take a real look at what it would take to
>> fix the VR. In my opinion, a lot of the problems are related to the
>> monolithic python code base and the fact nothing is actually separated.
>>
>> Secondly, the python scripts (and bash scripts) don't use any established
>> libraries to complete tasks and instead shell out and run commands that are
>> both hard to track and hard to parse on return.
>>
>>
>> If we daemonized this, used a real api for Agent to VR communication, used
>> common already existing libraries for the system service and network
>> interactions and spent a bit of time separating out code into distinct
>> modules, everything would behave a lot better.
>>
>>
>> The pain and suffering is due to years and years of patches and constant
>> shelling out to complete tasks in my opinion. If we spend time to rethink
>> how we interact with the VR in general and we abstract the systems and
>> networking stuff and use well known and stable libraries to do the work,
>> the VR would be much easier to maintain.
>>
>>
>> - Si
>>
>>
>>
>>
>> 
>> From: Marty Godsey <ma...@gonsource.com>
>> Sent: Friday, September 16, 2016 12:24 PM
>> To: dev@cloudstack.apache.org
>> Subject: RE: [DISCUSS] Replacing the VR
>>
>> So based upon this discussion, would it be prudent to wait on VyOS 2.0? The
>> current VR is giving us issues, but the time invested in another "solution"
>> could be wasted if another option is chosen, coded, tested and implemented
>> right around the time VyOS 2.0 is released. Of course, you said they are
>> just in the scoping stage, so this could still be a year or more out.
>>
>> Thoughts?
>>
>> Regards,
>> Marty Godsey
>> nSource Solutions
>>
>> -Original Message-
>> From: williamstev...@gmail.com [mailto:williamstev...@gmail.com] On
>> Behalf Of Will Stevens
>> Sent: Friday, September 16, 2016 10:31 AM
>> To: dev@cloudstack.apache.org
>> Cc: dan...@baturin.org
>> Subject: Re: [DISCUSS] Replacing the VR
>>
>> I just had a quick chat with a couple of the guys over on the VyOS chat.
>> I have CC'ed one of them in case we have more licensing questions.
>>
>> So here is the status with the license "the code inherited from Vyatta and
>> our modifications from it is GPLv2 (strict, not v2+). The config reading
>> library is GPLv2 too, so anything that links to is is GPLv2.
>> Some auxiliary components we made after the fork are more permissive,
>> LGPLv2+ or MIT."
>>
>> They are currently in the process of scoping a redesign (VyOS 2.0), "we
>> are planning a clean rewrite that will solve issues of the current config
>> system".
>> This will include the ability to configure via the API.
>>
>> If we have more questions for VyOS, they are very friendly and responsive,
>> so we should be able to get answers.
>>
>> *Will STEVENS*
>> Lead Developer
>>
>> *CloudOps* *| *Cloud Solutions Experts
>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6 w cloudops.com *|* tw
>> @CloudOps_
>>
>> On Fri, Sep 16, 2016 at 9:37 AM, Syed Ahmed <sah...@cloudops.com> wrote:
>>
>>> I agree with Will Ilya. There are so many problems with the VR right now.
>>> Mo

Re: [DISCUSS] Replacing the VR

2016-09-15 Thread ilya
Hi folks, please kindly keep in mind the open source licensing when
choosing the next router. AGPL or GPL v3 are a "no go" for many shops.

I could not find the license for Vyatta. I do know the Vyatta dev folks
were open to working with ACS a few years back, but somehow the
initiative was dropped.


On 9/15/16 9:21 AM, Will Stevens wrote:
> Ya, we would need to add a daemon for VPN as well.  Load balancing is
> another aspect which we will need to consider if we went this route.
> Something like https://traefik.io/ could potentially be a good fit due to
> its API driven configuration, but it may be more than what we need.
> 
> We should probably try define which pieces make sense to be solved together
> and which pieces would be best suited to be broken out.
> 
> I think the network connectivity, routing and firewalling should probably
> all stay together since the majority of the tools we would potentially use
> would handle all of that together in a single implementation.
> 
> The password server and userdata seem like a good option for being broken
> out and handled independently (and probably rewritten completely since they
> currently have some issues).
> 
> Load balancing is another that could warrant splitting out, but that
> depends on what direction we go and how we would be managing it.  DHCP and
> DNS are others which could go either way.
> 
> If we do split out services, I think we should consolidate as much as we
> can into each service we break out.  Ideally a network packet would never
> hit more than one, maybe two, services.  I don't think we should be
> splitting services 'just because', I think we need a valid case for
> splitting any service out because it adds complexity.  Our project is
> already complex enough, we need to avoid adding complexity unless it is
> really needed.
> 
> Some more of my thoughts on this anyway...
> 
> *Will STEVENS*
> Lead Developer
> 
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
> 
> On Thu, Sep 15, 2016 at 10:28 AM, Simon Weller  wrote:
> 
>> I do agree with you that this probably isn't the right place the password
>> service and user data.
>>
>>
>> Having said that, after taking a cursory look at the dev docs, it doesn't
>> seem that difficult to add new daemons: https://opensnaproute.github.
>> io/docs/developer.html#creating-new-component
>>
>> > creating-new-component>
>>
>>
>> They've definitely build it with a microservices architecture in mind, so
>> each individual feature is abstracted into it's own small daemon process.
>> We could just create a daemon for the password server and the userdata
>> components if we really had to.
>>
>>
>> - Si
>>
>>
>> 
>> From: williamstev...@gmail.com  on behalf of
>> Will Stevens 
>> Sent: Thursday, September 15, 2016 9:17 AM
>> To: dev@cloudstack.apache.org
>> Subject: Re: [DISCUSS] Replacing the VR
>>
>> A big part of why I know about it is because it is written in Go.  :P
>>
>> Yes, it is definitely interesting for the routing and traffic handling
>> aspects of the VR.  We will likely have to rethink some of the pieces a
>> little bit like the password server and userdata if we are to adopt a
>> different VR approach.  This is where I think some of JohnB and Chiradeep's
>> ideas make sense.  In many ways, it does not make sense for the device
>> handling routing and network traffic to also be responsible for passwords
>> and userdata.
>>
>> *Will STEVENS*
>> Lead Developer
>>
>> *CloudOps* *| *Cloud Solutions Experts
>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>> w cloudops.com *|* tw @CloudOps_
>>
>> On Thu, Sep 15, 2016 at 9:10 AM, Simon Weller  wrote:
>>
>>> I hadn't heard of Flexswitch until you mentioned it. It looks pretty
>> cool!
>>> It even supports ONIE install.
>>>
>>> To be honest, the ipsec feature could be added, or we could offload it to
>>> separate vm if we needed to. The fact it is so feature rich from a
>> routing
>>> perspective (and all API driven) is really nice.
>>>
>>>
>>> Based on the roadmap, it looks like they plan to also support
>> capabilities
>>> such as BGP-MPLS based L3VPN, EVPN, VPLS in the future. This will be huge
>>> for our carrier community that rely on these technologies to do private
>>> gateway and inter-VPC interconnections today. We handle this stuff on our
>>> ASRs right now with a vlan interconnect into the VR. Being able to do
>> MPLS
>>> all the way to the VR would be awesome.
>>>
>>>
>>> It also seems to be written in GO (a language here at ENA we know very
>>> well).
>>>
>>>
>>> - Si
>>>
>>>
>>>
>>> 
>>> From: Will Stevens 
>>> Sent: Thursday, September 15, 2016 7:06 AM
>>> To: dev@cloudstack.apache.org
>>> Subject: RE: [DISCUSS] Replacing the VR
>>>
>>> 

Re: Virtual Router execute python slowly

2016-08-19 Thread ilya
Hi Gust

Are you proposing the "optimized CsAddress.py in VR" as a solution, or
just an analysis of where the time is being spent?

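FWIW, the profile below suggests the expensive part is materializing the
whole network with set(net) - that creates one IPAddress object per
address in the subnet (hence the ~68M __init__ calls). A minimal sketch
of a cheaper membership test, assuming netaddr's standard "in" operator
(untested against the VR codebase):

    from netaddr import IPAddress, IPNetwork

    def ip_in_subnet(self, ip):
        net = IPNetwork("%s/%s" % (self.get_ip(), self.get_size()))
        # IPNetwork.__contains__ compares integer bounds, so this is
        # O(1) instead of enumerating every address in the subnet
        return IPAddress(ip) in net
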
Regards
ilya
On 8/18/16 9:34 PM, Gust wrote:
> 
> Hi all,
> 
> We constructed an advanced network in CloudStack. It worked fine early on,
> but as I added VR rules it executed more and more slowly.
> 
> With about 100 rules already in place, adding a port forward rule takes
> about 3 minutes to execute, so we changed the agent source to allow a
> longer timeout (otherwise the agent times out at 120s) and optimized
> CsAddress.py in the VR.
> 
> So I logged in to the VR VM and modified update_config.py with cProfile,
> and printed out the execution result. Because of the added cProfile
> overhead, the Python run took more than 8 min.
> 
> It shows too many netaddr object inits being invoked.
> 
> VR hypervisor hardware: E3 1230v2 3.4GHz; VR allocated 3GHz
> 
> --
> With the optimized CsAddress.py it executes in 3 min, versus 5 min otherwise
> 
> CsAddress.py (line 145)
>def ip_in_subnet(self, ip):
>ipo = IPAddress(ip) 
>net = IPNetwork("%s/%s" % (self.get_ip(), self.get_size()))
>aset = set(net)
>return  ipo in aset
> 
> --
> def mainp():
>     if not (os.path.isfile(jsonCmdConfigPath) and os.access(jsonCmdConfigPath, os.R_OK)):
>         print "[ERROR] update_config.py :: You are telling me to process %s, but i can't access it" % jsonCmdConfigPath
>         sys.exit(1)
> 
>     # If the command line json file is unprocessed process it
>     # This is important or, the control interfaces will get deleted!
>     if os.path.isfile(jsonPath % "cmd_line.json"):
>         qf = QueueFile()
>         qf.setFile("cmd_line.json")
>         qf.load(None)
> 
>     # If the guest network is already configured and has the same IP, do not
>     # try to configure it again, otherwise it will break
>     if sys.argv[1] == "guest_network.json":
>         if os.path.isfile(currentGuestNetConfig):
>             file = open(currentGuestNetConfig)
>             guestnet_dict = json.load(file)
> 
>             if not is_guestnet_configured(guestnet_dict, ['eth1', 'eth2', 'eth3', 'eth4', 'eth5', 'eth6', 'eth7', 'eth8', 'eth9']):
>                 print "[INFO] update_config.py :: Processing Guest Network."
>                 process_file()
>             else:
>                 print "[INFO] update_config.py :: No need to process Guest Network."
>                 finish_config()
>         else:
>             print "[INFO] update_config.py :: No GuestNetwork configured yet. Configuring first one now."
>             process_file()
>     else:
>         print "[INFO] update_config.py :: Processing incoming file => %s" % sys.argv[1]
>         process_file()
> 
> cProfile.run("mainp()", "/var/log/py_pro.data")
> 
> --
> 
> Fri Aug 19 11:27:50 2016    d:\temp\py_pro.data
> 
> 614887355 function calls (614884838 primitive calls) in 548.846 seconds
> 
>   Ordered by: internal time
> 
>     ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
>   68177344  182.925    0.000  277.051    0.000  __init__.py:248(__init__)
>   68170816   65.435    0.000  342.427    0.000  __init__.py:1682(iter_iprange)
>   68170816   54.655    0.000  140.186    0.000  __init__.py:63(__hash__)
>       1088   53.936    0.050  536.665    0.493  CsAddress.py:145(ip_in_subnet)
>   68170976   48.577    0.000   69.112    0.000  __init__.py:439(key)
>   68187121   42.976    0.000   42.976    0.000  {hasattr}
>   68188576   26.202    0.000   26.205    0.000  {isinstance}
>   68178432   24.945    0.000   24.945    0.000  __init__.py:34(__init__)
>   68177504   20.537    0.000   20.537    0.000  __init__.py:232(version)
>   68170816   16.419    0.000   16.419    0.000  {hash}
>         32    5.869    0.183  351.378   10.981  configure.py:747(getNetworkByIp)
>         48    3.685    0.077  194.845    4.059  configure.py:741(getDeviceByIp)
>       2352    1.021    0.000    1.021    0.000  {built-in method poll}
>     245441    0.352    0.000    0.510    0.000  CsNetfilter.py:296(__eq__)
>       2149    0.251    0.000    0.251    0.000  {posix.read}
>        798    0.111    0.000    0.608    0.001  CsNetfilter.py:116(has_rule)
>     493585    0.098    0.000    0.098    0.000  CsNetfilter.py:258(get_table)
>        329    0.077    0.000    0.077    0.000  {posix.fork}
>       2444    0.059    0.000    0.059    0.000  {method 'flush' of 'file' objects}
>       2444    0.039    0.000    0.074    0.000  __init__.py:24

Re: 4.10.0 release

2016-08-08 Thread ilya
Hi Guys,

Gave this thread a read - sorry i'm a bit late on this topic.

I agree with what Will, John and Rohit proposed. I also understand
Rajani's hesitancy - we dont want master to become a zoo.

In summary, i think the proposed workflow should avoid the zoo case and
give us structure that will yield some stability.

* 1 PR
* 1 Test (at a minimum) - be it Blue Orangutan, Bubble, Marvin,
screenshot - when tests dont apply, etc..
* 2 LGTMs
-> then merge

This should be a decent gating framework to avoid poorly tested commits.

We just need a consistent way of determining which tests should be run.

Regards
ilya

On 8/4/16 10:19 AM, Will Stevens wrote:
> Yes, I agree with this.
> 
> CVEs need to be handled in security@ and will be added to the branches
> manually once they have been agreed upon there, so no PRs are needed for
> them.
> 
> I also agree that exceptions can be made for version changes in POMs and
> such because those are scripted changes which are part of the release
> process.  We may want to update the Release Procedure documentation to
> include some details around the commits which Rohit made (which I
> highlighted earlier) as those probably fall into this type of situation as
> well.  Not sure those can be scripted as part of cutting a release, but
> they are related to the release process, so detailed instructions for
> making those changes would be helpful to include.
> 
> In general, yes, your statements are all correct.  We may want to send out
> a bit of a notice on the dev@ list to highlight this.  For the last little
> while we have been having the RMs handle all of the merging of code, so we
> may want to officially inform the dev community that if you have commit
> access you can commit, but you need to follow these guidelines [1].  I
> would even go so far as to give a summary of the process.
> 
> *Example:*
> Create a GH PR with the change and get 2 LGTM (including proof of tests
> passing).
> 
> Once a PR is ready, commit it with the following flow.  Let's assume the
> change is for 4.8 and needs to be forward merged.
> 
> $ git fetch origin
> $ git checkout 4.8
> $ git rebase origin/4.8
> $ git pr 
> $ git log -p
> $ git push origin 4.8
> $ git checkout 4.9
> $ git rebase origin/4.9
> $ git fwd-merge 4.8
> $ git log -p
> $ git push origin 4.9
> $ git checkout master
> $ git rebase origin/master
> $ git fwd-merge 4.9
> $ git log -p
> $ git push origin master
> 
> You can decipher this workflow from the Release Principles [1] document,
> but it is not nearly this clear.  I suggest we make this process more
> obvious so everyone knows what they are doing if they make commits...
> 
> [1]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+principles+for+Apache+CloudStack+4.6+and+up
> 
> *Will STEVENS*
> Lead Developer
> 
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
> 
> On Thu, Aug 4, 2016 at 12:17 PM, John Burwell <john.burw...@shapeblue.com>
> wrote:
> 
>> Will,
>>
>> My understanding of the release principles is that all changes must have a
>> PR with the exception of CVE fixes.  Since we must accept CVE fixes in
>> private, the 2 LGTM rule is applied on the security@ mailing list and on
>> private JIRA security ticket.  I would also say that the release commits
>> (e.g. tags, change of Maven versions in the POMs, etc) could also be
>> granted an exception to the rule.  Otherwise, yes, my understanding is that
>> everything else requires a PR.  Do you agree with that interpretation?
>>
>> Thanks,
>> -John
>>
>> P.S. I plan to consolidate the release section of the wiki shortly as we
>> have a number of topics that ostensibly conflict with each other.
>>
>>>
>> john.burw...@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>
>>
>>
>> On Aug 4, 2016, at 12:02 PM, Will Stevens <wstev...@cloudops.com> wrote:
>>>
>>>> john.burw...@shapeblue.com
>>
>>
> 


Re: SSLv3 on Apache Cloudstack 4.9.0 RC2

2016-08-08 Thread ilya
Hi Rohit,

No luck. We do have a custom internal CA.

Also, why is "SSLV3_ALERT_HANDSHAKE_FAILURE" causing an issue? i thought a
failed SSLv3 handshake was a good response...

Running the steps you've outlined did not seem to make any difference for
this issue. Setting verifysslcert to false also doesn't help.




imusayev:~$ sudo pip install --upgrade requests[security] requests
Double requirement given: requests (already in requests[security],
name='requests')
Storing debug log for failure in /Users/imusayev/Library/Logs/pip.log


imusayev:~$ sudo pip install --upgrade
git+https://github.com/apache/cloudstack-cloudmonkey.git
Downloading/unpacking
git+https://github.com/apache/cloudstack-cloudmonkey.git
  Cloning https://github.com/apache/cloudstack-cloudmonkey.git to
/tmp/pip-ayrDw8-build
  Running setup.py (path:/tmp/pip-ayrDw8-build/setup.py) egg_info for
package from git+https://github.com/apache/cloudstack-cloudmonkey.git
If you're upgrading, run the following to enable parameter completion:
  cloudmonkey sync
  cloudmonkey set paramcompletion true
Parameter completion may fail, if the above is not run!

Requirement already up-to-date: Pygments>=1.5 in
/Library/Python/2.7/site-packages (from cloudmonkey==5.3.3)
Requirement already up-to-date: argcomplete in
/Library/Python/2.7/site-packages (from cloudmonkey==5.3.3)
Requirement already up-to-date: dicttoxml in
/Library/Python/2.7/site-packages (from cloudmonkey==5.3.3)
Requirement already up-to-date: prettytable>=0.6 in
/Library/Python/2.7/site-packages/prettytable-0.7.2-py2.7.egg (from
cloudmonkey==5.3.3)
Downloading/unpacking requests from
https://pypi.python.org/packages/f8/90/42d5e0d9b5c4c3629a3d99823bbc3748fb85616f0f7a45e79ba7908d4642/requests-2.11.0-py2.py3-none-any.whl#md5=369b7333bf2f710143a1b6678f2f214c
(from cloudmonkey==5.3.3)
  Downloading requests-2.11.0-py2.py3-none-any.whl (514kB): 514kB downloaded
Requirement already up-to-date: requests-toolbelt in
/Library/Python/2.7/site-packages (from cloudmonkey==5.3.3)
Installing collected packages: requests, cloudmonkey
  Found existing installation: requests 2.10.0
Uninstalling requests:
  Successfully uninstalled requests
  Found existing installation: cloudmonkey 5.3.2
Uninstalling cloudmonkey:
  Successfully uninstalled cloudmonkey
  Running setup.py install for cloudmonkey
If you're upgrading, run the following to enable parameter completion:
  cloudmonkey sync
  cloudmonkey set paramcompletion true
Parameter completion may fail, if the above is not run!

Installing cloudmonkey script to /usr/local/bin
Successfully installed requests cloudmonkey
Cleaning up...


imusayev:~$ cloudmonkey
☁ Apache CloudStack  cloudmonkey 5.3.3. Type help or ? to list commands.

Using management server profile: lab1-ssl

(lab1-ssl) > list zones
Connection refused by server: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3
alert handshake failure (_ssl.c:590)
Error Authentication failed


(lab1-ssl) > set verifysslcert false
(lab1-ssl) > list zones
Connection refused by server: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3
alert handshake failure (_ssl.c:590)
Error Authentication failed
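
To see which protocol versions the management server actually accepts, a
quick probe with Python's ssl module can help (a sketch; the host and port
are placeholders). Note that OpenSSL reports TLS handshake failures using
the legacy "SSLV3_ALERT" naming even when SSLv3 itself is not involved, so
the error above does not necessarily mean SSLv3 is being served:

    import socket
    import ssl

    HOST, PORT = "mgmt.example.com", 8443  # placeholders

    for proto, name in [(ssl.PROTOCOL_TLSv1_2, "TLSv1.2"),
                        (ssl.PROTOCOL_TLSv1, "TLSv1")]:
        ctx = ssl.SSLContext(proto)  # exactly one version enabled
        try:
            sock = ctx.wrap_socket(socket.create_connection((HOST, PORT)),
                                   server_hostname=HOST)
            print("%s accepted, cipher: %s" % (name, sock.cipher()))
            sock.close()
        except ssl.SSLError as err:
            print("%s rejected: %s" % (name, err))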




On 8/6/16 1:33 AM, Rohit Yadav wrote:
> Hi Ilya,
> 
> 
> Can you try this:
> 
> sudo pip install --upgrade requests[security] requests
> 
> sudo pip install --upgrade 
> git+https://github.com/apache/cloudstack-cloudmonkey.git
> 
> 
> Then try again?
> 
> 
> As a workaround (in case of custom CA etc.) if it fails we can ask 
> cloudmonkey to ignore ssl certificate verification (connection will be still 
> secure, but cloudmonkey/requests won't verify the certificate) by running: 
> set verifysslcert false.
> 
> 
> Regards.
> 
> 
> From: ilya <ilya.mailing.li...@gmail.com>
> Sent: 06 August 2016 06:15:25
> To: dev@cloudstack.apache.org
> Subject: Re: SSLv3 on Apache Cloudstack 4.9.0 RC2
> 
> Looks more like cloudmonkey 5.3.2 issue - i guess..
> 
> I confirmed i'm not serving SSLv3 - which would be Tomcat configuration
> issue anyway.
> 
> (lab1-ssl) > list zones
> Connection refused by server: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3
> alert handshake failure (_ssl.c:590)
> Error Authentication failed
> 
> regards
> ilya
> 
> 
> On 8/5/16 3:32 PM, ilya wrote:
>> Has anyone tested Cloudstack 4.9.0 RC2 with SSL?
>>
>> Somehow, in my case tomcat reverted back to SSL v3 on port 8443 - which
>> is a big no-no.
>>
>> Please kindly check, alternatively if i dont hear from anyone i will
>> raise a blocker.
>>
>> On 8/3/16 10:39 PM, ilya wrote:
>>> Hi Will and Team
>>>
>>> Can someone point me to upgrade instructions if such exist.
>>>
>>> Would like to avoid learning through trial and e

Re: Please add me to Jira Contributors List

2016-08-08 Thread ilya
Who has the karma?

++ David Nalley

On 8/8/16 6:47 AM, Frank Maximus wrote:
> Hi,
> 
> I want to assign a jira issue I created to myself.
> Can my account *fmaximus* be added to the contributors list.
> 
> Thanks in advance,
> 
> 
> 
> *Frank Maximus *
> 
> Senior Software Development Engineer
> 
> *nuage*networks.net
> 


Re: SSLv3 on Apache Cloudstack 4.9.0 RC2

2016-08-05 Thread ilya
Looks more like cloudmonkey 5.3.2 issue - i guess..

I confirmed i'm not serving SSLv3 - which would be Tomcat configuration
issue anyway.

(lab1-ssl) > list zones
Connection refused by server: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3
alert handshake failure (_ssl.c:590)
Error Authentication failed

regards
ilya


On 8/5/16 3:32 PM, ilya wrote:
> Has anyone tested Cloudstack 4.9.0 RC2 with SSL?
> 
> Somehow, in my case tomcat reverted back to SSL v3 on port 8443 - which
> is a big no-no.
> 
> Please kindly check, alternatively if i dont hear from anyone i will
> raise a blocker.
> 
> On 8/3/16 10:39 PM, ilya wrote:
>> Hi Will and Team
>>
>> Can someone point me to upgrade instructions if such exist.
>>
>> Would like to avoid learning through trial and error if possible.
>>
>> I will be testing upgrade and functionality of KVM & VMware Advanced
>> Shared Zones from ACS4.5.2 to latest.
>>
>> Thanks
>> ilya
>>
>> On 7/29/16 11:06 AM, ilya wrote:
>>> Hi Will
>>>
>>> What Remi mentioned sounds reasonable..
>>>
>>> I'll be spending sometime today and next week to test out the issue
>>> reported in 4.8 with VR not starting in Basic Zone - as well latest 4.9..
>>>
>>> i know i'm late to the party - but this is the best i could do :(
>>>
>>> Regards,
>>> ilya
>>>
>>>
>>>
>>> On 7/29/16 9:19 AM, Will Stevens wrote:
>>>> I think everything is up to date and correct now.  Please let me know if
>>>> anything seems out of place (this is the first time I have done this).
>>>>
>>>> I will wait to do an official announcement until Monday in case anything
>>>> comes up.  I will also wait to update the following things until Monday:
>>>> http://cloudstack.apache.org/downloads.html and the release notes (cause I
>>>> have to finish them).
>>>>
>>>> Let me know if you have questions.
>>>>
>>>> Should I be cutting a 4.8.1 release as well?  Not sure how that works.
>>>> Remi said to do the 4.9.0 release first and then take care of the 4.8.1
>>>> release after.  Ideas?
>>>>
>>>> *Will STEVENS*
>>>> Lead Developer
>>>>
>>>> *CloudOps* *| *Cloud Solutions Experts
>>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>>> w cloudops.com *|* tw @CloudOps_
>>>>
>>>> On Fri, Jul 29, 2016 at 12:13 PM, Will Stevens <wstev...@cloudops.com>
>>>> wrote:
>>>>
>>>>> Yep, in the process of getting the release cut.  Got side tracked by
>>>>> people a few times, but I am almost finished...  I will keep you posted...
>>>>>
>>>>> *Will STEVENS*
>>>>> Lead Developer
>>>>>
>>>>> *CloudOps* *| *Cloud Solutions Experts
>>>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>>>> w cloudops.com *|* tw @CloudOps_
>>>>>
>>>>> On Fri, Jul 29, 2016 at 12:10 PM, Rohit Yadav <rohit.ya...@shapeblue.com>
>>>>> wrote:
>>>>>
>>>>>> Thank you Will. Please cut the 4.9 branch so it can be picked for LTS
>>>>>> release work.
>>>>>>
>>>>>> I'll publish the rpm/deb packages in the sb hosted upstream repo shortly.
>>>>>>
>>>>>> Regards.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> rohit.ya...@shapeblue.com
>>>>>> www.shapeblue.com
>>>>>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>>>>>> @shapeblue
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Jul 29, 2016 at 7:27 PM +0530, "Will Stevens" <
>>>>>> wstev...@cloudops.com<mailto:wstev...@cloudops.com>> wrote:
>>>>>>
>>>>>> Sorry, I did not follow the correct format.  :P
>>>>>>
>>>>>> After 72 hours, the vote for CloudStack 4.9.0 *passes* with 6 PMC + 2
>>>>>> non-PMC votes.
>>>>>>
>>>>>> +1 (PMC / binding)
>>>>>> * Rohit Yadav
>>>>>> * Mike Tutkowski
>>>>>> * Wido den Hollander
>>>>>> * Milamber
>>>>>> * Nux!
>>>>>> * John Burwell
>>>>>>
>>>>>> +

SSLv3 on Apache Cloudstack 4.9.0 RC2

2016-08-05 Thread ilya
Has anyone tested Cloudstack 4.9.0 RC2 with SSL?

Somehow, in my case tomcat reverted back to SSL v3 on port 8443 - which
is a big no-no.

Please kindly check, alternatively if i dont hear from anyone i will
raise a blocker.

On 8/3/16 10:39 PM, ilya wrote:
> Hi Will and Team
> 
> Can someone point me to upgrade instructions if such exist.
> 
> Would like to avoid learning through trial and error if possible.
> 
> I will be testing upgrade and functionality of KVM & VMware Advanced
> Shared Zones from ACS4.5.2 to latest.
> 
> Thanks
> ilya
> 
> On 7/29/16 11:06 AM, ilya wrote:
>> Hi Will
>>
>> What Remi mentioned sounds reasonable..
>>
>> I'll be spending sometime today and next week to test out the issue
>> reported in 4.8 with VR not starting in Basic Zone - as well latest 4.9..
>>
>> i know i'm late to the party - but this is the best i could do :(
>>
>> Regards,
>> ilya
>>
>>
>>
>> On 7/29/16 9:19 AM, Will Stevens wrote:
>>> I think everything is up to date and correct now.  Please let me know if
>>> anything seems out of place (this is the first time I have done this).
>>>
>>> I will wait to do an official announcement until Monday in case anything
>>> comes up.  I will also wait to update the following things until Monday:
>>> http://cloudstack.apache.org/downloads.html and the release notes (cause I
>>> have to finish them).
>>>
>>> Let me know if you have questions.
>>>
>>> Should I be cutting a 4.8.1 release as well?  Not sure how that works.
>>> Remi said to do the 4.9.0 release first and then take care of the 4.8.1
>>> release after.  Ideas?
>>>
>>> *Will STEVENS*
>>> Lead Developer
>>>
>>> *CloudOps* *| *Cloud Solutions Experts
>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>> w cloudops.com *|* tw @CloudOps_
>>>
>>> On Fri, Jul 29, 2016 at 12:13 PM, Will Stevens <wstev...@cloudops.com>
>>> wrote:
>>>
>>>> Yep, in the process of getting the release cut.  Got side tracked by
>>>> people a few times, but I am almost finished...  I will keep you posted...
>>>>
>>>> *Will STEVENS*
>>>> Lead Developer
>>>>
>>>> *CloudOps* *| *Cloud Solutions Experts
>>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>>> w cloudops.com *|* tw @CloudOps_
>>>>
>>>> On Fri, Jul 29, 2016 at 12:10 PM, Rohit Yadav <rohit.ya...@shapeblue.com>
>>>> wrote:
>>>>
>>>>> Thank you Will. Please cut the 4.9 branch so it can be picked for LTS
>>>>> release work.
>>>>>
>>>>> I'll publish the rpm/deb packages in the sb hosted upstream repo shortly.
>>>>>
>>>>> Regards.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> rohit.ya...@shapeblue.com
>>>>> www.shapeblue.com
>>>>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>>>>> @shapeblue
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Jul 29, 2016 at 7:27 PM +0530, "Will Stevens" <
>>>>> wstev...@cloudops.com<mailto:wstev...@cloudops.com>> wrote:
>>>>>
>>>>> Sorry, I did not follow the correct format.  :P
>>>>>
>>>>> After 72 hours, the vote for CloudStack 4.9.0 *passes* with 6 PMC + 2
>>>>> non-PMC votes.
>>>>>
>>>>> +1 (PMC / binding)
>>>>> * Rohit Yadav
>>>>> * Mike Tutkowski
>>>>> * Wido den Hollander
>>>>> * Milamber
>>>>> * Nux!
>>>>> * John Burwell
>>>>>
>>>>> +1 (non binding)
>>>>> * Paul Angus
>>>>> * Abhinandan Prateek
>>>>>
>>>>> 0
>>>>> none
>>>>>
>>>>> -1
>>>>> none
>>>>>
>>>>> Thanks to everyone participating.
>>>>>
>>>>> *Will STEVENS*
>>>>> Lead Developer
>>>>>
>>>>> *CloudOps* *| *Cloud Solutions Experts
>>>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>>>> w cloudops.com *|* tw @CloudOps_
>>>>>
>>>>> On Fri, Jul 29, 2016 at 9:44 AM, Will Stevens <wstev...@cloudops.com>
>>>>> wrote:
>>>>>
>>>>>> The vote is clo

Re: [VOTE] Apache Cloudstack 4.9.0 RC2

2016-08-03 Thread ilya
Hi Will and Team

Can someone point me to upgrade instructions if such exist.

Would like to avoid learning through trial and error if possible.

I will be testing upgrade and functionality of KVM & VMware Advanced
Shared Zones from ACS4.5.2 to latest.

Thanks
ilya

On 7/29/16 11:06 AM, ilya wrote:
> Hi Will
> 
> What Remi mentioned sounds reasonable..
> 
> I'll be spending sometime today and next week to test out the issue
> reported in 4.8 with VR not starting in Basic Zone - as well latest 4.9..
> 
> i know i'm late to the party - but this is the best i could do :(
> 
> Regards,
> ilya
> 
> 
> 
> On 7/29/16 9:19 AM, Will Stevens wrote:
>> I think everything is up to date and correct now.  Please let me know if
>> anything seems out of place (this is the first time I have done this).
>>
>> I will wait to do an official announcement until Monday in case anything
>> comes up.  I will also wait to update the following things until Monday:
>> http://cloudstack.apache.org/downloads.html and the release notes (cause I
>> have to finish them).
>>
>> Let me know if you have questions.
>>
>> Should I be cutting a 4.8.1 release as well?  Not sure how that works.
>> Remi said to do the 4.9.0 release first and then take care of the 4.8.1
>> release after.  Ideas?
>>
>> *Will STEVENS*
>> Lead Developer
>>
>> *CloudOps* *| *Cloud Solutions Experts
>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>> w cloudops.com *|* tw @CloudOps_
>>
>> On Fri, Jul 29, 2016 at 12:13 PM, Will Stevens <wstev...@cloudops.com>
>> wrote:
>>
>>> Yep, in the process of getting the release cut.  Got side tracked by
>>> people a few times, but I am almost finished...  I will keep you posted...
>>>
>>> *Will STEVENS*
>>> Lead Developer
>>>
>>> *CloudOps* *| *Cloud Solutions Experts
>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>> w cloudops.com *|* tw @CloudOps_
>>>
>>> On Fri, Jul 29, 2016 at 12:10 PM, Rohit Yadav <rohit.ya...@shapeblue.com>
>>> wrote:
>>>
>>>> Thank you Will. Please cut the 4.9 branch so it can be picked for LTS
>>>> release work.
>>>>
>>>> I'll publish the rpm/deb packages in the sb hosted upstream repo shortly.
>>>>
>>>> Regards.
>>>>
>>>>
>>>>
>>>>
>>>> rohit.ya...@shapeblue.com
>>>> www.shapeblue.com
>>>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>>>> @shapeblue
>>>>
>>>>
>>>>
>>>> On Fri, Jul 29, 2016 at 7:27 PM +0530, "Will Stevens" <
>>>> wstev...@cloudops.com<mailto:wstev...@cloudops.com>> wrote:
>>>>
>>>> Sorry, I did not follow the correct format.  :P
>>>>
>>>> After 72 hours, the vote for CloudStack 4.9.0 *passes* with 6 PMC + 2
>>>> non-PMC votes.
>>>>
>>>> +1 (PMC / binding)
>>>> * Rohit Yadav
>>>> * Mike Tutkowski
>>>> * Wido den Hollander
>>>> * Milamber
>>>> * Nux!
>>>> * John Burwell
>>>>
>>>> +1 (non binding)
>>>> * Paul Angus
>>>> * Abhinandan Prateek
>>>>
>>>> 0
>>>> none
>>>>
>>>> -1
>>>> none
>>>>
>>>> Thanks to everyone participating.
>>>>
>>>> *Will STEVENS*
>>>> Lead Developer
>>>>
>>>> *CloudOps* *| *Cloud Solutions Experts
>>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>>> w cloudops.com *|* tw @CloudOps_
>>>>
>>>> On Fri, Jul 29, 2016 at 9:44 AM, Will Stevens <wstev...@cloudops.com>
>>>> wrote:
>>>>
>>>>> The vote is closed.  The RC passed with the following votes.
>>>>>
>>>>> +1 : 8 (including 6 binding)
>>>>> +0 : 0
>>>>> -1 : 0
>>>>>
>>>>> Thanks everyone, I will get this pushed out today...
>>>>>
>>>>> *Will STEVENS*
>>>>> Lead Developer
>>>>>
>>>>> *CloudOps* *| *Cloud Solutions Experts
>>>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>>>> w cloudops.com *|* tw @CloudOps_
>>>>>
>>>>> On Fri, Jul 29, 2016 at 5:24 AM, Abhinandan Prateek <
>>>>> abhinandan.prat...@shapeblue.com> wrote:
>>>>>
>>

RPMs for testing ACS4.9 RC2

2016-08-01 Thread ilya
Hi Team

Curious if there are noredist RPMs of ACS 4.9 RC2 readily available
somewhere i can use for testing..

Thanks
ilya


Re: [VOTE] Apache Cloudstack 4.9.0 RC2

2016-07-29 Thread ilya
Hi Will

What Remi mentioned sounds reasonable..

I'll be spending sometime today and next week to test out the issue
reported in 4.8 with VR not starting in Basic Zone - as well latest 4.9..

i know i'm late to the party - but this is the best i could do :(

Regards,
ilya



On 7/29/16 9:19 AM, Will Stevens wrote:
> I think everything is up to date and correct now.  Please let me know if
> anything seems out of place (this is the first time I have done this).
> 
> I will wait to do an official announcement until Monday in case anything
> comes up.  I will also wait to update the following things until Monday:
> http://cloudstack.apache.org/downloads.html and the release notes (cause I
> have to finish them).
> 
> Let me know if you have questions.
> 
> Should I be cutting a 4.8.1 release as well?  Not sure how that works.
> Remi said to do the 4.9.0 release first and then take care of the 4.8.1
> release after.  Ideas?
> 
> *Will STEVENS*
> Lead Developer
> 
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
> 
> On Fri, Jul 29, 2016 at 12:13 PM, Will Stevens <wstev...@cloudops.com>
> wrote:
> 
>> Yep, in the process of getting the release cut.  Got side tracked by
>> people a few times, but I am almost finished...  I will keep you posted...
>>
>> *Will STEVENS*
>> Lead Developer
>>
>> *CloudOps* *| *Cloud Solutions Experts
>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>> w cloudops.com *|* tw @CloudOps_
>>
>> On Fri, Jul 29, 2016 at 12:10 PM, Rohit Yadav <rohit.ya...@shapeblue.com>
>> wrote:
>>
>>> Thank you Will. Please cut the 4.9 branch so it can be picked for LTS
>>> release work.
>>>
>>> I'll publish the rpm/deb packages in the sb hosted upstream repo shortly.
>>>
>>> Regards.
>>>
>>>
>>>
>>>
>>> rohit.ya...@shapeblue.com
>>> www.shapeblue.com
>>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>>> @shapeblue
>>>
>>>
>>>
>>> On Fri, Jul 29, 2016 at 7:27 PM +0530, "Will Stevens" <
>>> wstev...@cloudops.com<mailto:wstev...@cloudops.com>> wrote:
>>>
>>> Sorry, I did not follow the correct format.  :P
>>>
>>> After 72 hours, the vote for CloudStack 4.9.0 *passes* with 6 PMC + 2
>>> non-PMC votes.
>>>
>>> +1 (PMC / binding)
>>> * Rohit Yadav
>>> * Mike Tutkowski
>>> * Wido den Hollander
>>> * Milamber
>>> * Nux!
>>> * John Burwell
>>>
>>> +1 (non binding)
>>> * Paul Angus
>>> * Abhinandan Prateek
>>>
>>> 0
>>> none
>>>
>>> -1
>>> none
>>>
>>> Thanks to everyone participating.
>>>
>>> *Will STEVENS*
>>> Lead Developer
>>>
>>> *CloudOps* *| *Cloud Solutions Experts
>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>> w cloudops.com *|* tw @CloudOps_
>>>
>>> On Fri, Jul 29, 2016 at 9:44 AM, Will Stevens <wstev...@cloudops.com>
>>> wrote:
>>>
>>>> The vote is closed.  The RC passed with the following votes.
>>>>
>>>> +1 : 8 (including 6 binding)
>>>> +0 : 0
>>>> -1 : 0
>>>>
>>>> Thanks everyone, I will get this pushed out today...
>>>>
>>>> *Will STEVENS*
>>>> Lead Developer
>>>>
>>>> *CloudOps* *| *Cloud Solutions Experts
>>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>>> w cloudops.com *|* tw @CloudOps_
>>>>
>>>> On Fri, Jul 29, 2016 at 5:24 AM, Abhinandan Prateek <
>>>> abhinandan.prat...@shapeblue.com> wrote:
>>>>
>>>>> +1
>>>>>
>>>>> Did manual testing with a cluster of Xen 6.5 in advanced zone.
>>>>> Vm life cycle
>>>>> VM Snapshot, volume snapshots
>>>>> Volume and Template from snapshots
>>>>> Migration
>>>>> Change Password
>>>>> Change service offering
>>>>> VPC, multiple tiers, VMs, ACLs
>>>>>
>>>>> Regards,
>>>>> -abhi
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 29/07/16, 1:43 AM, "John Burwell" <john.burw...@shapeblue.com>
>>> wrote:
>>>>>
>>>>>> All,

Re: [Feature Request] VM Snapshot based on KVM+Ceph

2016-07-12 Thread ilya
I guess ENA beat us to it :) hats off to the ENA folks!

On 7/12/16 6:39 AM, Simon Weller wrote:
> Wei's KVM QCOW2 snapshot PR works very well. I tested it a few months back -  
> It would be great to get this into 4.9.10.
> 
> 
> In regards to the Ceph snapshot challenges, it is possible to emulate the 
> behaviour you want (sans the RAM component). We pushed a PR a couple of 
> months ago to enable root detach of volumes on KVM (It has been merged into 
> 4.7-forward, 4.8-forward and 4.9 RC).
> 
> We use it to emulate a snapshot revert with Ceph by using createVolume 
> sourced from a snapshot, then detaching and reattaching the root volume of a VM 
> with device id of 0 (as sketched below).
> This preserves the previous volume history and allows the user to switch back 
> and forth between different snapshots.
> 
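A sketch of that detach/re-attach flow driven through the API with the
community "cs" Python client (pip install cs). All UUIDs and credentials
here are hypothetical; the VM must be stopped, root-volume detach must be
enabled per the PR mentioned above, and async job polling is omitted:

    from cs import CloudStack

    api = CloudStack(endpoint="https://mgmt.example.com/client/api",
                     key="API-KEY", secret="SECRET-KEY")

    # 1. Create a fresh volume from the snapshot we want to "revert" to.
    api.createVolume(name="root-from-snap", snapshotid="SNAPSHOT-UUID")

    # 2. Detach the VM's current root volume (kept around as history).
    api.detachVolume(id="CURRENT-ROOT-UUID")

    # 3. Attach the snapshot-derived volume as the new root (device id 0).
    api.attachVolume(id="NEW-VOLUME-UUID",
                     virtualmachineid="VM-UUID", deviceid=0)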
> 
> We would like a fully featured "VM + memory" Ceph based snapshot as well, but 
> as Wido has indicated, a lot has to happen before that will be possible and 
> we have more research to do in order to understand the challenges. I would 
> guess Redhat is going to be very interested in getting this into libvirt, as 
> they are obviously pushing Ceph aggressively.
> 
> 
> - Si
> 
> 
> From: Wei ZHOU <ustcweiz...@gmail.com>
> Sent: Tuesday, July 12, 2016 2:25 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [Feature Request] VM Snapshot based on KVM+Ceph
> 
> Paul,
> 
> If you mean VM snapshot on QCOW2, there is a PR on github for it.
> 
> -Wei
> 
> 2016-07-12 5:58 GMT+02:00 Paul Shadwell <p...@shadwell.ch>:
> 
>> This is a feature that has been missing from Cloudstack since the
>> beginning when using KVM as the hypervisor.
>> It's been asked for many times but still no support.
>> This would be a huge plus for Cloudstack if this feature was added and
>> make it a much more viable product for end users.
>> KVM is usually chosen because 1: it's free 2: it has many features that
>> make it a much better choice for large installations.
>> Many have ended up either writing their own VM snapshot solution, or
>> simply hiding the icon from the customer.
>> I'm surprised that after years of waiting for this feature to be added,
>> it's still missing.
>> I know it's not totally the Cloudstack Dev team's fault that VM Snapshots
>> are missing from KVM support, but I do think this is a real opportunity for
>> a big win if they did something about it.
>>
>> Just my 2 penneth on this subject that has plagued me on a daily basis
>> supporting Cloudstack installs.
>>
>> Regards
>> Paul
>>
>>
>>> On Jul 12, 2016, at 05:19, ilya <ilya.mailing.li...@gmail.com> wrote:
>>>
>>> Just FYI:
>>>
>>> You can ask on this list for sponsored feature development if this is
>>> something that needs prompt resolution.
>>>
>>> Regards
>>> ilya
>>>
>>>> On 7/8/16 7:07 PM, 吕海蛟 wrote:
>>>> Hi,  Developers
>>>>
>>>>
>>>>
>>>> We deployed ACS+KVM+Ceph in our environment.  Everything looks fine
>>>> except the VM snapshot feature (not volume backup) is missing in current
>>>> ACS version which seems only supported by XenServer/ESXi.
>>>>
>>>>
>>>>
>>>> Wondering if we can have this on the roadmap as well.  @Wido, do you
>>>> have any plan ?
>>>>
>>>>
>>>>
>>>> Thanks !
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> private cloud2  *We Deliver Enterprise-Grade Cloud !*
>>>>
>>>>
>> --
>>>>
>>>> *吕海蛟*
>>>>
>>>> *Product Engineering & Innovation Center (PEIC)*
>>>>
>>>> *华胜蓝泰科技(天津)有限责任公司*
>>>>
>>>> * *
>>>>
>>>> *上海闵行区合川路**2679**号虹桥国际商务广场**B**栋**601**室*
>>>>
>>>> *邮编:**201103*
>>>>
>>>> *Office: +86-21-62351222*
>>>>
>>>> *Mobile: +86-18602198181*
>>>>
>>>>
>>>>
>>>>
>>>>
>>
>>
> 


Re: [Feature Request] VM Snapshot based on KVM+Ceph

2016-07-12 Thread ilya
Hi Paul

Are you talking about a Ceph + KVM snapshot, or a non-Ceph filesystem using
qemu/libvirtd?

KVM snapshots for non-Ceph storage using the qemu/libvirtd APIs will be
released in upcoming versions. We have code written for it - but releasing
it to the open source community takes a bit of time, and it might need a rewrite.

Regards
ilya

On 7/11/16 8:58 PM, Paul Shadwell wrote:
> This is a feature that has been missing from Cloudstack since the beginning 
> when using KVM as the hypervisor. 
> It's been asked for many times but still no support. 
> This would be a huge plus for Cloudstack if this feature was added and make 
> it a much more viable product for end users. 
> KVM is usually chosen because 1: it's free 2: it has many features that make 
> it a much better choice for large installations. 
> Many have ended up either writing their own VM snapshot solution, or simply 
> hiding the icon from the customer. 
> I'm surprised that after years of waiting for this feature to be added, it's 
> still missing. 
> I know it's not totally the Cloudstack Dev team's fault that VM Snapshots are 
> missing from KVM support but I do think this is a real opportunity for a big 
> win if they did something about it. 
> 
> Just my 2 penneth on this subject that has plagued me on a daily basis 
> supporting Cloudstack installs. 
> 
> Regards
> Paul
> 
> 
>> On Jul 12, 2016, at 05:19, ilya <ilya.mailing.li...@gmail.com> wrote:
>>
>> Just FYI:
>>
>> You can ask on this list for sponsored feature development if this is
>> something that needs prompt resolution.
>>
>> Regards
>> ilya
>>
>>> On 7/8/16 7:07 PM, 吕海蛟 wrote:
>>> Hi,  Developers
>>>
>>>
>>>
>>> We deployed ACS+KVM+Ceph in our environment.  Everything looks fine
>>> except the VM snapshot feature (not volume backup) is missing in current
>>> ACS version which seems only supported by XenServer/ESXi.
>>>
>>>
>>>
>>> Wondering if we can have this on the roadmap as well.  @Wido, do you
>>> have any plan ?
>>>
>>>
>>>
>>> Thanks !
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> private cloud2  *We Deliver Enterprise-Grade Cloud !*
>>>
>>> --
>>>
>>> *吕海蛟*
>>>
>>> *Product Engineering & Innovation Center (PEIC)*
>>>
>>> *华胜蓝泰科技(天津)有限责任公司*
>>>
>>> * *
>>>
>>> *上海闵行区合川路**2679**号虹桥国际商务广场**B**栋**601**室*
>>>
>>> *邮编:**201103*
>>>
>>> *Office: +86-21-62351222*
>>>
>>> *Mobile: +86-18602198181*
>>>
>>>
>>>
>>>
>>>
> 


Re: [Feature Request] VM Snapshot based on KVM+Ceph

2016-07-11 Thread ilya
Just FYI:

You can ask on this list for sponsored feature development if this is
something that needs prompt resolution.

Regards
ilya

On 7/8/16 7:07 PM, 吕海蛟 wrote:
> Hi,  Developers
> 
>  
> 
> We deployed ACS+KVM+Ceph in our environment.  Everything looks fine
> except the VM snapshot feature (not volume backup) is missing in current
> ACS version which seems only supported by XenServer/ESXi.
> 
>  
> 
> Wondering if we can have this on the roadmap as well.  @Wido, do you
> have any plan ?
> 
>  
> 
> Thanks !
> 
>  
> 
>  
> 
>  
> 
>  
> 
> private cloud2  *We Deliver Enterprise-Grade Cloud !*
> 
> --
> 
> *吕海蛟*
> 
> *Product Engineering & Innovation Center (PEIC)*
> 
> *华胜蓝泰科技(天津)有限责任公司*
> 
> * *
> 
> *上海闵行区合川路**2679**号虹桥国际商务广场**B**栋**601**室*
> 
> *邮编:**201103*
> 
> *Office: +86-21-62351222*
> 
> *Mobile: +86-18602198181*
> 
>  
> 
>  
> 


Re: Trillian.

2016-07-09 Thread ilya
Paul,

Thanks for pushing hard on getting Trillian GA.

We've lately become VMware-allergic :(

Hoping KVM would be on the roadmap soon.

Thanks
ilya

On 7/8/16 7:38 AM, Will Stevens wrote:
> I can answer this for you.  No, it does not support KVM as the base
> hypervisor.  This was a design decision early in the Trillian project.  We
> just need to get some VMware in the lab for testing (we need to do this
> anyway).
> 
> *Will STEVENS*
> Lead Developer
> 
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
> 
> On Fri, Jul 8, 2016 at 9:59 AM, Syed Mushtaq <syed1.mush...@gmail.com>
> wrote:
> 
>> Followup question: Does Trillian support KVM as the "Base" hypervisor
>> instead of ESXi? If not, what would it take to enable this.
>>
>> -Syed
>>
>> On Fri, Jul 8, 2016 at 9:52 AM, Syed Mushtaq <syed1.mush...@gmail.com>
>> wrote:
>>
>>> Awesome job Paul. I was in such dire need of a tool like this, everything
>>> I need to check if a feature works for different hardware, I have to
>> spend
>>> an obscene amount of time to set things up. This would really be
>>> productivity booster for sure.
>>>
>>> -Syed
>>>
>>>
>>>
>>> On Thu, Jul 7, 2016 at 5:53 AM, Paul Angus <paul.an...@shapeblue.com>
>>> wrote:
>>>
>>>> Hi all
>>>>
>>>> As some of you may know, I have been working for some time on a project
>>>> called Trillian. This started out as an internal project at $dayjob with
>>>> the intentions of being able to quickly build environments to:
>>>>
>>>>
>>>> a)   Test new feature software builds (manually and via Marvin)
>>>>
>>>> b)  Test community releases (manually and via Marvin)
>>>>
>>>> c)   Replicate failure scenarios
>>>>
>>>> d)  Evaluate new features
>>>>
>>>> To meet these and a number of other requirements I started Trillian.  At
>>>> Trillian's core are Ansible, CloudStack and vSphere virtualisation.
>> Ansible
>>>> takes a command line input and requests VM instances from CloudStack and
>>>> then configures all of the hypervisor hosts and mgmt. servers, finally
>>>> creating a zone which incorporates all of the components which were
>>>> requested.
>>>>
>>>> The environments are built in projects and the accounts which are
>> allowed
>>>> access are specified on the commandline.
>>>>
>>>> The commandline arguments look like this:
>>>>
>>>> --extra-vars "env_name=myACSenv env_version=cs45 hvtype=x hv=2
>>>> xs_ver=xs65sp1 env_accounts=all pri=1  mgmt_os=6"
>>>>
>>>> There is a global file which holds the mapping of ACS version to
>> relevant
>>>> URLs or OS types to specific templates, however EVERY mapping can be
>>>> overridden from commandline.
>>>>
>>>> --extra-vars  "env_name=cs49-vmw55-pga env_version=cs49 mgmt_os=6
>>>> hvtype=v vmware_ver=55u3 hv=2 pri=2 env_accounts=all build_marvin=yes
>>>> baseurl_cloudstack=http://10.2.0.4/shapeblue/cloudstack/testing/
>>>> mgmtsrv_template=Testc6Template"
>>>>
>>>> CloudStack deploys the virtualised hypervisor hosts and mgmt. servers as
>>>> and when required (CloudStack mgmt. server, MySQL hosts, Marvin host,
>>>> vCenter server) onto the ESXi hosts.
>>>>
>>>> The output from each request is a bespoke, fully working virtualised
>>>> CloudStack environment.
>>>>
>>>>
>>>>
>>>> The reason trillion came about in its current form is that at $dayjob we
>>>> have to deal with lots of different types of environment, hypervisors
>> and
>>>> configurations.
>>>> I know others have put a lot of work into similar tools, which I've used
>>>> over the years and found useful but I needed some tooling that could
>> also
>>>> support vSphere hosts and Hyper-V and also be easy to connect to
>> external
>>>> integration points such as SolidFire storage, NetScalers, Cloudian S3
>>>> installations etc. as well as supporting multi-tenancy.
>>>>
>>>> For some time, it's been my intention to make this open source and
>>>> generally available for this community. While I could have done this
>> sooner

Re: Roadmap for 4.x and 5.0

2016-07-05 Thread ilya
Marc

You are correct that my shell script is not the most robust - it should be
rewritten in java and invoked on "graceful" shutdown - this script should
be treated as a POC i guess.

What it guards against is more than just snapshots, though. Basically - any
async operation that would be harmful to the end user experience if i was
to take down one of the MS servers.

I front my MS servers with a VIP; as i take down one of the MS servers
gracefully via the script below, the agents all reconnect to the next MS.



The current "Cold Cross Cluster" migration as it stands is done by copying
the data disk to secondary storage and then back to primary. If you have
VMs with 4TB data disks, that's not feasible for several reasons (the NFS
export for the SSVM may not be as large, and it's pretty slow to copy to
NFS and back to primary - even if you have a robust network). Hence direct
migration bypassing the secondary store would be far more efficient.

In regards to secure KVM migration: each migrate call establishes a
one-time SSH key pair between the two KVM hosts that is used only for the
duration of that migration. It is cleared once the operation completes,
which avoids the possibility of someone exploiting the cloud user's SSH keys.

This is not a big deal to cloud hosting companies - but it is a big deal to
enterprise security folks who run cloudstack as a private cloud. We don't
want cloud user keys littered everywhere - not very ideal in terms of
security. A rough sketch of the idea follows.
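
Run from the source host, the flow could look like this (a sketch only,
not the actual plugin code; host names, paths, and the pre-existing root
SSH trust used to seed the key are all assumptions; libvirt's SSH
transport accepts keyfile and no_verify URI parameters):

    import os
    import subprocess
    import tempfile

    def migrate_with_one_time_key(vm_name, dst_host):
        workdir = tempfile.mkdtemp()
        key = os.path.join(workdir, "migration_key")
        # 1. Generate a throwaway keypair for this single migration.
        subprocess.check_call(["ssh-keygen", "-q", "-t", "rsa", "-N", "", "-f", key])
        pubkey = open(key + ".pub").read().strip()
        try:
            # 2. Authorize it on the destination over the existing channel.
            subprocess.check_call(["ssh", "root@" + dst_host,
                "echo '%s' >> /root/.ssh/authorized_keys" % pubkey])
            # 3. Migrate over SSH using only the throwaway identity.
            uri = "qemu+ssh://root@%s/system?keyfile=%s&no_verify=1" % (dst_host, key)
            subprocess.check_call(["virsh", "migrate", "--live", vm_name, uri])
        finally:
            # 4. Revoke the key on the destination and delete it locally.
            blob = pubkey.split()[1]
            subprocess.check_call(["ssh", "root@" + dst_host,
                "sed -i '\\#%s#d' /root/.ssh/authorized_keys" % blob])
            subprocess.check_call(["rm", "-rf", workdir])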

Regards
ilya



On 7/3/16 10:41 PM, ma...@exoscale.ch wrote:
> Hi Ilya,
> 
> Regarding the live migration, we are using it in production and did migrate a 
> couple of VMs until we reached some corner cases, for which I wrote a few 
> fixes. We'll verify them during the following weeks. The code is based on CS 
> 4.4 but I started porting it to master. I have to finish that and merge the 
> fixes too. For the cold migration, it's already in CS and we have been using 
> it for a while.
> What do you mean by secure KVM migration? My code reads configuration values 
> with which you can have a TLS peer-to-peer connection between the agents to 
> transfer all the data over it, using the features in libvirt. That's the setup 
> we have in production.
> 
> For the graceful shutdown, we have an HA proxy in front so we just edit the 
> configuration to turn off one MS. We are also checking manually that there 
> aren't any snapshots ongoing before launching the stop-start. But I don't find 
> this very robust. Therefore I read a lot of the code managing the agents and 
> how the agents are connected to the MS. There is already a command to 
> rebalance agents between MS, so I'm developing a solution around that.
> 
> Kind regards,
> Marc-Aurèle
> 
> 
>> On 02 Jul 2016, at 02:03, ilya <ilya.mailing.li...@gmail.com> wrote:
>>
>> Marco,
>>
>> I written a tiny shell script that does following:
>>
>> Make's sure there are async_jobs that arent running, also block 8080 via
>> iptables - to avoid user connecting to MS thats about to go down.
>>
>> It needs a bit of enhancement - and should lookup the MSID of that
>> specific server, it looks something like this - consider borrowing
>> concepts if applicable..
>>
>>> #!/bin/bash
>>> DATESTAMP=$(date +%m%d%y-%H%M%S)
>>> DBPASS=$(java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar 
>>> org.jasypt.intf.cli.JasyptPBEStringDecryptionCLI input="$(cat 
>>> /etc/cloudstack/management/db.properties | grep db.cloud.password | awk 
>>> -F'(' '{print $2}' | sed 's/)//g')" password="$(cat 
>>> /etc/cloudstack/management/key)" | grep -A2 OUTPUT | tail -1)
>>> DBHOST=$(cat /etc/cloudstack/management/db.properties | grep db.cloud.host 
>>> | awk -F'=' '{print $2}' | tail -1 )
>>> DBUSER=$(cat /etc/cloudstack/management/db.properties | grep 
>>> db.cloud.username | awk -F'=' '{print $2}')
>>> DB=$(cat /etc/cloudstack/management/db.properties | grep db.cloud.name | 
>>> awk -F'=' '{print $2}')
>>> DBPORT=$(cat /etc/cloudstack/management/db.properties | grep db.cloud.port 
>>> | awk -F'=' '{print $2}')
>>> MYSQLCMD="mysql -h $DBHOST -u $DBUSER -P $DBPORT -p$DBPASS $DB"
>>> #echo $DBPASS $DBHOST $DBUSER $DB $DBPORT
>>>
>>>
>>> JOBS=$(echo 'SELECT * FROM cloud.async_job where job_status=0 and 
>>> job_dispatcher not like "pseudoJobDispatcher"' | $MYSQLCMD | wc -l)
>>>
>>> if [ $JOBS -gt 0 ]
>>>then
>>>echo "WARN: Looks like i have active jobs in flight, please 
>>> try again later"
>>>echo 'SELECT * FROM cloud.async_job where job_status=0 and 
>>> 

Re: Roadmap for 4.x and 5.0

2016-07-01 Thread ilya
Marco,

I've written a tiny shell script that does the following:

It makes sure there are no async jobs still running, and also blocks 8080 via
iptables - to avoid users connecting to an MS that's about to go down.

It needs a bit of enhancement - it should look up the MSID of that
specific server (see the sketch after the script). It looks something like
this - consider borrowing concepts if applicable..

> #!/bin/bash
> DATESTAMP=$(date +%m%d%y-%H%M%S)
> DBPASS=$(java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar 
> org.jasypt.intf.cli.JasyptPBEStringDecryptionCLI input="$(cat 
> /etc/cloudstack/management/db.properties | grep db.cloud.password | awk -F'(' 
> '{print $2}' | sed 's/)//g')" password="$(cat 
> /etc/cloudstack/management/key)" | grep -A2 OUTPUT | tail -1)
> DBHOST=$(cat /etc/cloudstack/management/db.properties | grep db.cloud.host | 
> awk -F'=' '{print $2}' | tail -1 )
> DBUSER=$(cat /etc/cloudstack/management/db.properties | grep 
> db.cloud.username | awk -F'=' '{print $2}')
> DB=$(cat /etc/cloudstack/management/db.properties | grep db.cloud.name | awk 
> -F'=' '{print $2}')
> DBPORT=$(cat /etc/cloudstack/management/db.properties | grep db.cloud.port | 
> awk -F'=' '{print $2}')
> MYSQLCMD="mysql -h $DBHOST -u $DBUSER -P $DBPORT -p$DBPASS $DB"
> #echo $DBPASS $DBHOST $DBUSER $DB $DBPORT
> 
> 
> JOBS=$(echo 'SELECT * FROM cloud.async_job where job_status=0 and 
> job_dispatcher not like "pseudoJobDispatcher"' | $MYSQLCMD | wc -l)
> 
> if [ $JOBS -gt 0 ]
> then
> echo "WARN: Looks like i have active jobs in flight, please 
> try again later"
> echo 'SELECT * FROM cloud.async_job where job_status=0 and 
> job_dispatcher not like "pseudoJobDispatcher"' | $MYSQLCMD
> exit
> else
> echo "NOTE: No jobs running, good to go!"
> echo "NOTE: Blocking incoming 8080"
> /sbin/iptables -A INPUT -p tcp --destination-port 8080 -j DROP
> service cloudstack-management stop
> CSPID=$(cat /var/run/cloudstack-management.pid )
> ps -p $CSPID >/dev/null 2>&1 && (kill -9 $CSPID)
> ps -p $CSPID >/dev/null 2>&1 && (echo "ERROR: Could not 
> terminate cloudstack service on `hostname` with pid $CSPID"; /sbin/iptables 
> -D INPUT -p tcp --destination-port 8080 -j DROP; exit 1)
> service cloudstack-management start
> echo "NOTE: Unblocking incoming 8080"
> /sbin/iptables -D INPUT -p tcp --destination-port 8080 -j DROP
> fi
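
The MSID lookup mentioned above could reuse the script's own $MYSQLCMD (a
sketch extending the same script; it assumes the cloud.mshost table and
that `hostname -i` returns this management server's service IP):

    MSID=$(echo "SELECT msid FROM cloud.mshost WHERE service_ip='$(hostname -i)' AND removed IS NULL" | $MYSQLCMD | tail -1)
    echo "NOTE: draining management server msid $MSID"
    # The async_job check could then be narrowed to jobs owned by this
    # server, e.g. "... AND job_executing_msid=$MSID".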

Regards,
ilya

On 7/1/16 3:30 AM, ma...@exoscale.ch wrote:
> Hi,
> 
> I can't edit the page but I'll be glad to put some effort for the V5:
> - Live migration for KVM
> - Improve logging using UUIDs (as I already did part of that for us at 
> exoscale)
> 
> I'm in the process of adding another feature we need: graceful shutdown of a 
> management server when running a cluster of MS. The goal is to send a 
> "prepareForShutdown" command to one or more MS and have them rebalance their 
> agents to the ones still running so that no command will be lost. Then there 
> shouldn't be any downtime with any agent during an update.
> 
> Kind regards,
> Marc-Aurèle
> 
> PS: Is there any architectural discussion going on on the Slack channel? I 
> saw that the IRC is not so active...
> 
> 
>> On 01 Jul 2016, at 11:55, Paul Angus <paul.an...@shapeblue.com> wrote:
>>
>> There's not been much response to this, but I'll start clearing away the 
>> unclaimed items, people can always add them back.
>>
>>
>> Kind regards,
>>
>> Paul Angus
>>
>>
>> paul.an...@shapeblue.com 
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>
>>
>>
> 


Re: Roadmap for 4.x and 5.0

2016-07-01 Thread ilya
Marco

RE:
> - Live migration for KVM

How far are you from completing this?

Reason i'm asking:

We have this feature completed as a plugin for cross-cluster migration of
LIVE VMs.
We would like to take it out of the plugin and contribute it to the
cloudstack core.
Also considering adding COLD VM cross-cluster migration for KVM (which
should be much faster than LIVE).

We also would like to contribute the "secure" KVM migration feature.

Marcus has a list of other things he has developed as plugins that we
need to work on contributing back to cloudstack.

Regards
ilya

On 7/1/16 6:26 AM, Paul Angus wrote:
> That sounds great. I'll get it added.
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> paul.an...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
> 
> 
> -Original Message-
> From: ma...@exoscale.ch [mailto:ma...@exoscale.ch] 
> Sent: 01 July 2016 11:31
> To: dev@cloudstack.apache.org
> Subject: Re: Roadmap for 4.x and 5.0
> 
> Hi,
> 
> I can't edit the page but I'll be glad to put some effort for the V5:
> - Live migration for KVM
> - Improve logging using UUIDs (as I already did part of that for us at 
> exoscale)
> 
> I'm in the process of adding another feature we need: graceful shutdown of a 
> management server when running a cluster of MS. The goal is to send a 
> "prepareForShutdown" command to one or more MS and have them rebalance their 
> agents to the ones still running so that no command will be lost. Then there 
> shouldn't be any downtime with any agent during an update.
> 
> Kind regards,
> Marc-Aurèle
> 
> PS: Is there any architectural discussion going on on the Slack channel? I 
> saw that the IRC is not so active...
> 
> 
>> On 01 Jul 2016, at 11:55, Paul Angus <paul.an...@shapeblue.com> wrote:
>>
>> There's not been much response to this, but I'll start clearing away the 
>> unclaimed items, people can always add them back.
>>
>>
>> Kind regards,
>>
>> Paul Angus
>>
>>
>> paul.an...@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>>
>>
>>
> 


Re: 4.9+ release

2016-06-14 Thread ilya
I agree and support John's comments below.

Regards
ilya

On 6/14/16 2:44 PM, John Burwell wrote:
> All,
> 
> Completely agree with Daan.  Per semantic versioning, a major revision 
> increase must introduce a backwards incompatible change in the public API, 
> removal of one or more supported devices, or a reduction in the list of supported 
> distributions.  I agree that when we require Java8+, drop Ubuntu 12.04 
> support, drop support for an old hypervisor version, etc,  we will need to 
> increment the major revision to reflect the fact that the release is not 
> backwards compatible.
> 
> For 4.10 and LTS 4.9.0_1, I see it as critical that we support running on 
> either Java7 or Java8.  In particular, producing an LTS release that only 
> supports a JVM that has been unsupported for nearly 18 months would make it 
> DOA in many shops.
> 
> It seems like it would make sense to have a 5.0.0 release that removed 
> support for a number of legacy components (e.g. Xen 6.0 possibly 6.2, Java7, 
> CentOS 5, etc), as well as, internal improvements (e.g. simplified 
> configuration).  The focus of this release would be to reduce the footprint 
> of codebase, as well as, make a set of backwards incompatible changes that 
> further decouples plugins from core.  We would then plan for a 6.0.0 in 
> 4Q2017 to introduce further architectural changes and API revisions.  The 
> advantage to this approach is that it breaks up the large refactorings and 
> architectural design changes — allowing us to gain velocity by removing 
> legacy components, reducing the risk of these changes, and providing user 
> benefit earlier.  Based on the release plan I previously proposed we have the 
> following releases remaining in 2016 and in early 2017: 
> 
> * 4.10 releasing on or about 28 August 2016
> * 4.11 releasing on or about 23 October 2016
> * 4.12 releasing on or about 18 December 2016 
> * 4.13 release on or about 5 February 2017
> 
> 4.12 seems to be the sweet spot in the schedule to cut the 5.0.0 release 
> described above.  It would give us sometime to plan and gain consensus around 
> the changes in both the user and dev communities.  It would also allow the 
> second LTS release to be based on 5.0.0 — allowing both release cycles to 
> take advantage of the reduced support requirements and Java8 language 
> features. Based on this proposal, the releases above would change to the 
> following:
> 
> * 4.10 releasing on or about 28 August 2016
> * 4.11 releasing on or about 23 October 2016
> * 5.0.0 releasing on or about 18 December 2016 
> * 5.1.0 release on or about 5 February 2017
> 
> I am in the process of moving my proposal into the wiki.  If this approach is 
> acceptable, I will reflect it there, and open a thread to discuss 5.0.0.
> 
> Thanks,
> -John
> 
> 
>>
> john.burw...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> 
> 
> On Jun 14, 2016, at 2:02 PM, Paul Angus <paul.an...@shapeblue.com> wrote:
>>
>> +1 Daan.
>>
>> My recollection was that major version number changes were only to be 
>> triggered by breaks in backward compatibility (API).
>>
>>
>> Kind regards,
>>
>> Paul Angus
>>
>> paul.an...@shapeblue.com 
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>
>>
>>
>> -Original Message-
>> From: Daan Hoogland [mailto:daan.hoogl...@gmail.com] 
>> Sent: 14 June 2016 14:47
>> To: dev <dev@cloudstack.apache.org>
>> Cc: Rajani Karuturi <raj...@apache.org>
>> Subject: Re: 4.9+ release
>>
>> You know that would require more than one byte for our minor version, Will.
>> I would be very pleased to go to 5.0 before that time.
>>
>> On Tue, Jun 14, 2016 at 3:43 PM, Will Stevens <wstev...@cloudops.com> wrote:
>>
>>> Daan is just trying to get us to version 4.256.  :P
>>>
>>> *Will STEVENS*
>>> Lead Developer
>>>
>>> *CloudOps* *| *Cloud Solutions Experts
>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6 w cloudops.com *|* tw 
>>> @CloudOps_
>>>
>>> On Tue, Jun 14, 2016 at 9:41 AM, Daan Hoogland 
>>> <daan.hoogl...@gmail.com>
>>> wrote:
>>>
>>>> -1 to what Wido said. None of those points warrant a major release 
>>>> number upgrade. These should all be in 4.10, 4.11, 4.12, etc.
>>>>
>>>> Major incompatibilities like an API refactor, dropping backend support 
>>>> for this or that hypervisor, or a DB refactor are the things that 
>>>> warrant 5

Re: Introducing Mukul

2016-06-09 Thread ilya
Hi Mukul

Welcome,

You can also convert your system to linux and run Windows as a VM :)

Then you can do nested KVMs and be fine - see the sketch below.
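
Enabling nested virtualization on the Linux host is usually also required,
or the nested KVM guests fall back to emulation (a sketch for an Intel
host; on AMD use kvm_amd and its "nested" parameter instead):

    # Enable nested VMX on the host; takes effect after a module reload.
    echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
    modprobe -r kvm_intel && modprobe kvm_intel
    # Verify (should print Y or 1):
    cat /sys/module/kvm_intel/parameters/nested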

Regards,
ilya

On 6/8/16 6:26 AM, Mukul Rajarshi wrote:
> Hello CloudStack team,
> 
> My name is Mukul Rajarshi and I have joined the Accelerite CloudPlatform testing 
> team. Earlier I worked on storage systems, cloud computing, 
> virtualization and data management systems.  This is the first time I am 
> working on CloudStack; in the process I have followed posts/blogs from some of 
> the members in this community and found them really helpful. Excited to be 
> part of this community and hopefully to contribute in future.
> 
> Currently I am trying to put up a scratchpad CloudStack environment on 
> VirtualBox. Facing issues when I use the KVM hypervisor; XenServer is fine. Let me 
> know if there are known issues or gotchas with KVM as a VM.
> 
> 
> ~Mukul
> www.accelerite.com<http://www.accelerite.com>
> 
> 
> 
> 
> DISCLAIMER
> ==
> This e-mail may contain privileged and confidential information which is the 
> property of Accelerite, a Persistent Systems business. It is intended only 
> for the use of the individual or entity to which it is addressed. If you are 
> not the intended recipient, you are not authorized to read, retain, copy, 
> print, distribute or use this message. If you have received this 
> communication in error, please notify the sender and delete all copies of 
> this message. Accelerite, a Persistent Systems business does not accept any 
> liability for virus infected mails.
> 


Re: Master is frozen for the 4.9 release

2016-06-07 Thread ilya
Sounds like a bug fix to me.. Bug fixes should be allowed in my opinion.

On 6/7/16 4:07 AM, Wido den Hollander wrote:
>> On 28 May 2016 at 00:34, Will Stevens wrote:
>>
>>
>> Hey All,
>> I think I have done what I can do at this point.  I am sorry if you have a
>> PR that you wanted to get in that didn't make it.  I pushed my deadline for
>> the freeze a bit because I had a lot of PRs that were close and I was able
>> to get a bunch of them in.
>>
>> I plan to wait about a week before I cut the first RC to give people a
>> chance to test master and get me the details of their testing.  This will
>> reduce the number of RCs we will need to have in order to get this release
>> out the door.
>>
>> Please start testing master and let me know if you run into any issues.
>> There are a couple periodic issues that show up in my CI environments, so I
>> will probably spend some time to see if I can get those sorted out before I
>> cut the first release.
>>
>> I plan to create a Github PR that will never be merged where I can post CI
>> results against master for this release so we can troubleshoot anything we
>> find.  This approach is mainly because my workflow with `upr` lets me post
>> results easily and include the logs for the run.
>>
> 
> I would like to get some attention for this PR: 
> https://github.com/apache/cloudstack/pull/1547
> 
> Currently in Basic Networking multiple ranges per VLAN/POD are broken; the VR 
> doesn't take the additional IP addresses.
> 
> That PR fixes that (and it is running in production with us), but this has been 
> broken since 4.7
> 
> I would like 4.9 to have this fixed as it's a known issue right now. Can we 
> do that?
> 
> Wido
> 
>> Cheers,
>>
>> Will


Re: Introduction from Adwait

2016-06-02 Thread ilya
Welcome!

On 5/31/16 7:07 AM, Daan Hoogland wrote:
> welcome Adwait,
> 
> Don't forget to try out the bubble:
> https://github.com/MissionCriticalCloud/bubble-toolkit !
> good luck and have fun
> 
> On Mon, May 30, 2016 at 1:41 PM, Wei ZHOU  wrote:
> 
>> Welcome Adwait!
>>
>> 2016-05-30 10:54 GMT+02:00 Adwait Patankar >> :
>>
>>> Hello folks,
>>>
>>> I've joined the Accelerite CloudPlatform team recently and gearing up on
>>> the CloudStack.
>>> I'm predominantly a Java Developer and worked on different products and
>>> varied domains (Data Analytics, Backup & Recovery, IAAS Management) .
>>> My previous recent experience has been working on a heterogeneous cloud
>>> management solution supporting KVM, Xen, Hyper-V and VMWare
>> virtualization
>>> environments.
>>>
>>> I've been following the dev mailing list for CloudStack community for a
>>> few weeks and see a lot of traction with members actively participating
>> and
>>> helping each other to build a robust CloudStack ecosystem.
>>> Feeling excited to join and hopefully contribute in a meaningful way to
>>> the CloudStack community.
>>>
>>> Currently going through the publicly available videos on CloudStack
>>> features and trying out building the Devcloud setup to play around.
>>>
>>>
>>> Regards,
>>> Adwait Patankar
>>> Principal Product Engineer | www.accelerite.com<
>> http://www.accelerite.com>
>>>
>>>
>>>
>>>
>>
> 
> 
> 


Re: how to go about codebase quality when colisions occur?

2016-05-03 Thread ilya
Linas

Congrats on your first Java project ever!

CloudStack supports custom pluggable APIs that you don't have to bake
into the code base - unless you feel many other users will benefit from it.
We do it all the time, and I'm sure many other orgs do the same...

Search the web for CloudStack custom API; there is a baked-in example of
creating a custom API plugin that returns the time of day from the
Management Server.

Also, this thread is a good example:
https://www.mail-archive.com/dev@cloudstack.apache.org/msg9.html

I recall there was a doc that explained it in greater detail, but I
haven't had a chance to look for it.

Regards
ilya
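
[For readers digging this out of the archive: the baked-in example boils
down to an API command class plus a Spring module that registers it. A
rough sketch of the command-class shape follows - hedged, since the exact
annotation and base-class details should be checked against the CloudStack
version you build on; the command name here is illustrative.]

import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.BaseCmd;
import org.apache.cloudstack.api.response.SuccessResponse;
import com.cloud.user.Account;

@APICommand(name = "getTimeOfDay", description = "Returns the MS time of day",
        responseObject = SuccessResponse.class)
public class GetTimeOfDayCmd extends BaseCmd {
    @Override
    public void execute() {
        // A real command would build a typed response carrying the time;
        // SuccessResponse is used here only to keep the sketch short.
        SuccessResponse response = new SuccessResponse(getCommandName());
        response.setSuccess(true);
        setResponseObject(response);
    }

    @Override
    public String getCommandName() {
        return "gettimeofdayresponse";
    }

    @Override
    public long getEntityOwnerId() {
        return Account.ACCOUNT_ID_SYSTEM;
    }
}

[This only compiles inside the CloudStack source tree (or against its api
artifact), and still needs to be wired into a Spring module so the
management server discovers it.]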

On 5/3/16 5:36 AM, Linas Žilinskas wrote:
> Hello.
> 
> I implemented a new API call for myself, which returns the VNC URL for a VM.
> Currently I'm using some classes from the consoleproxy servlet package, as
> well as having copied some code over to my own package/class (specifically the
> encryption stuff).
> Now I have questions regarding this way of doing things.
> 
> I'm all about isolation and making stuff as portable as possible. To me
> the consoleproxy seems like a semi-standalone project. Therefore it doesn't
> seem nice to be using packages/classes willy-nilly across other packages.
> 
> So should I split the reusable code into a separate package and store it
> somewhere in the shared codebase, or should I just duplicate the code? I saw that
> the servlet itself has a copied-over class from the standalone consoleproxy
> project.
> 
> Or maybe there's some way of implementing the API calls in the consoleproxy
> project itself?
> 
> PS. This is my first Java project ever, so I'm learning as I go through the
> code. So bear with me if the language I use seems off.
> 
> Regards
> 
> Linas Žilinskas
> 
> Development Lead
> 
> 
> Phone: +44 870 8200222
> 
> Fax: +44 870 8200222
> 
> Host1Plus is a division of Digital Energy Technologies Ltd.
> 
> 26 York Street, London W1U 6PZ, United Kingdom
> 
> 
> 


Re: [ANNOUNCE] New committer: Simon Weller

2016-04-30 Thread ilya
thanks Simon and congrats!

On 4/28/16 2:30 PM, Simon Weller wrote:
> Thanks everyone! I look forward to continuing to help out where I can as we 
> march towards 4.9.
> 
> - Si
> 
> From: Pierre-Luc Dion 
> Sent: Thursday, April 28, 2016 4:20 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [ANNOUNCE] New committer: Simon Weller
> 
> Congrats Simon!
> 
> On Thu, Apr 28, 2016 at 3:52 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
> 
>> Congratulations Simon,
>>
>> Let’s keep up with the good work ;)
>>
>> On Thu, Apr 28, 2016 at 8:34 AM, Daan Hoogland 
>> wrote:
>>
>>> welcome, kiwi. Good work and keep going.
>>>
>>> On Thu, Apr 28, 2016 at 11:25 AM, Nick LIVENS <
>>> nick.liv...@nuagenetworks.net
 wrote:
>>>
 Congrats Simon!

 Kind regards,
 Nick Livens

 On Thu, Apr 28, 2016 at 10:15 AM, Nux!  wrote:

> Congrats! :-)
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> - Original Message -
>> From: "Paul Angus" 
>> To: dev@cloudstack.apache.org
>> Sent: Thursday, 28 April, 2016 08:44:36
>> Subject: RE: [ANNOUNCE] New committer: Simon Weller
>
>> Congratulations Simon!
>>
>>
>>
>> Kind regards,
>>
>> Paul Angus
>>
>> Regards,
>>
>> Paul Angus
>>
>> paul.an...@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>
>> -Original Message-
>> From: Erik Weber [mailto:terbol...@gmail.com]
>> Sent: 28 April 2016 08:24
>> To: dev 
>> Subject: [ANNOUNCE] New committer: Simon Weller
>>
>> The Project Management Committee (PMC) for Apache CloudStack has
>>> asked
> Simon
>> Weller to become a committer and we are pleased to announce that
>> they
> have
>> accepted.
>>
>>
>> Being a committer allows many contributors to contribute more
> autonomously. For
>> developers, it makes it easier to submit changes and eliminates the
 need
> to
>> have contributions reviewed via the patch submission process.
>> Whether
>> contributions are development-related or otherwise, it is a
>>> recognition
> of a
>> contributor's participation in the project and commitment to the
 project
> and
>> the Apache Way.
>>
>> Please join me in congratulating Simon
>>
>> --Erik
>> on behalf of the CloudStack PMC
>

>>>
>>>
>>>
>>> --
>>> Daan
>>>
>>
>>
>>
>> --
>> Rafael Weingärtner
>>


[ASF PERKS] JetBrains products - free to committers

2016-04-28 Thread ilya
Just found a cool perk many would benefit from on this list.

JetBrains is the company behind many development tools.

AppCode
CLion
DataGrip
dotCover
dotMemory
dotPeek
dotTrace
Hub
IntelliJ IDEA
Kotlin
MPS
PhpStorm
PyCharm
ReSharper
RubyMine
TeamCity
Upsource
WebStorm
YouTrack



If you are an ASF committer with a valid @apache.org email, you can get
access to JetBrains products free of charge (assuming the products will be
used to improve open source projects under the ASF umbrella).

Follow this link to request your license:

https://www.jetbrains.com/shop/eform/apache?product=I

Regards
ilya


Re: anybody doing load testing?

2016-04-19 Thread ilya
We load test with real production workload :)

I know that's not the answer you want to hear. By the way, don't hit 8250
without a recent non-blocking SSL handshake patch - it will disconnect
all other agents.

Or perhaps - try without the patch and with the patch - to see if you can
find a breaking point.

Please do post the slides and talk somewhere; load testing is on my
agenda for later this year.

Thanks,
ilya


On 4/19/16 6:20 AM, Daan Hoogland wrote:
> I will go to a talk this evening about http://gatling.io/. Is there anybody
> testing CloudStack this way? What tools are you using?
> 
> nice to hear,
> 


Re: ACS PRs Status - 2016/04/18

2016-04-18 Thread ilya
Hi Will

Thanks for the detailed update and effort.

Please keep us posted.

Regards
ilya

On 4/18/16 1:48 PM, Will Stevens wrote:
> ACS PRs
> 
>- 1452 - master (ready, pending LGTM)
>- 1475 - 4.7 (pending clarification)
>- 1420 - master + svm (CI running)
>- 1365 - 4.7 (pending ALL)
>- 1402 - 4.7 (needs work)
>- 1454 - master (ready, pending LGTM)
>- 1459 - master (rerun CI) NOTE: This closes #561
>- 1409 - master (pending CI)
>- 1433 - master (pending CI)
>- 1230 - master (pending CI)
>- 1326 - master (*pending CI)
>- 1436 - master (*pending CI)
>- 1455 - master (*pending CI)
>- 1423 - master + svm (*pending CI)
>- 1428 - master (pending ALL)
>- 1450 - 4.7 (pending ALL)
>- 1453 - master (pending ALL)
>- 1403 - master (pending ALL)
>- 1331 - 4.7 (pending ALL)
>- 1475 - 4.7 (pending ALL)
>- 1458 - master (pending ALL)
>- 1297 - master (pending CI)
>- 1410 - 4.7 (pending ALL)
>- 1483 - 4.7 (pending ALL)
>- 1470 - 4.7 (pending ALL)
>- 1471 - 4.7 (pending ALL)
>- 1472 - 4.7 (pending ALL)
>- 1473 - 4.7 (pending LGTM)
>- 1474 - 4.7 (pending ALL)
>- 1486 - 4.7 (pending ALL)
>- 1483 - 4.7 (pending ALL)
>- 1488 - master (pending ALL)
>- 872 - master + svm (pending CI)
>- 1489 - master (pending CI)
>- 1456 - 4.7 (pending ALL)
>- 1412 - 4.6 (pending ALL)
>- 1406 - 4.6 (pending LGTM)
>- 1378 - 4.6 (pending LGTM)
>- 1491 - 4.7 (pending ALL)
>- 1360 - master (pending LGTM)
>- 1490 - 4.7 (pending ALL)
>- 1493 - master (pending ALL)
>- 1397 - master (pending CI)
>- 1499 - master (pending ALL)
>- 1371 - master + svm (pending ALL)
>- 1500 - master (pending ALL)
> 
> * Denotes special requirements for CI testing
> svm = specifies that the PR will require an updated systemvm template
> 
> ---
> 
> Here is this Monday's status report.  It is looking like we will need a new
> systemvm template with the 4.9 release, so I would like to try to get on
> top of the PRs that will require systemvm template changes so we can make
> sure to get them tested and in earlier in this release window.  This will
> give us more time to work out any kinks prior to the RC.
> 
> Sorry for the slowdown on the CI.  I have been having some hardware
> issues, but I think I have resolved them (fingers crossed).  We now
> have Daan up and running with a CI environment as well, so that will help
> too.  I owe a couple of people instructions for getting a CI environment
> set up so they can start testing.  I will hopefully be able to get to that
> tomorrow.
> 
> Please review the list above and try to give me code reviews on
> anything that has a status of 'pending LGTM', since I am only missing code
> review for those PRs (assuming my status is up to date).  Also, I need to
> start getting code reviews on the PRs that have a status of 'pending ALL'.
> 
> There are a few PRs that have come up that are targeting 4.6.  I would like
> some guidance for how I should be handling them.  My understanding is that
> the 4.7 release is the oldest supported release, so should I be asking the
> author to close the PR and open a PR against the 4.7 code?
> 
> Hope you all had a great weekend.  Looking forward to a productive week
> this week.  :)
> 
> Cheers,
> 
> Will
> 


Re: Feature proposal: Resource naming policies

2016-04-14 Thread ilya
Awesome and long awaited



On 4/14/16 4:40 AM, Jeff Hair wrote:
> Yesterday, we submitted this pull request:
> https://github.com/apache/cloudstack/pull/1492
> 
> This originally grew out of making the VirtualMachineName class non-static
> (original PR is mentioned in the above link). We're presenting this as a
> refactoring of the existing code to enable more extensibility and
> flexibility, make unit testing easier, and unify the way CloudStack
> generates resource names.
> 
> There is an associated JIRA ticket at CLOUDSTACK-9003. I will be writing up
> a design document for the CS wiki in the next few days.
> 
> jburwell wanted me to open a discussion on the developer list about the PR
> and how it is implemented:
> 
> There is now a ResourceNamingPolicyManager and a bunch of
> ResourceNamingPolicies. The manager exposes a method to get a policy for a
> type of resource, and the naming policies generate UUIDs + resource names.
> 
> The default naming policies generate names exactly the same way as they are
> created now. This is in a new module called default-naming-policies. By
> excluding this module and loading our own naming policies, we gain the
> ability to change how names are generated.
> 
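
[For archive readers, the shape of the refactoring described above is
roughly the following. This is a minimal self-contained sketch, not the
PR's actual code - the interface and class names are simplified here:]

import java.util.UUID;

// Simplified stand-in for the naming-policy idea: policies generate UUIDs
// and resource names, and a default policy reproduces today's
// "i-<account>-<id>-VM" instance names.
interface ResourceNamingPolicy {
    String generateUuid();
    String generateName(long accountId, long resourceId);
}

class DefaultVmNamingPolicy implements ResourceNamingPolicy {
    @Override
    public String generateUuid() {
        return UUID.randomUUID().toString();
    }

    @Override
    public String generateName(long accountId, long resourceId) {
        return "i-" + accountId + "-" + resourceId + "-VM";
    }
}

public class NamingPolicyDemo {
    public static void main(String[] args) {
        ResourceNamingPolicy policy = new DefaultVmNamingPolicy();
        System.out.println(policy.generateName(2, 123)); // prints i-2-123-VM
    }
}

[Loading a different implementation in place of the default-naming-policies
module is what gives deployers control over the generated names.]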


Re: [DISCUSS] Request for comments : VPC Inline LoadBalancer (new plugin)

2016-04-11 Thread ilya
Kris and Nick

Noticed an update in the FS:

>> The VPC Inline LB appliance therefore is a regular System VM, exactly
>> the same as the Internal LB appliance today. Meaning it has 1 guest nic
>> and 1 control (link-local / management) nic.



This should be OK. The premise behind my concern was: if the Inline LB VM
were to get hacked (re: SSL Heartbleed) and an intruder gained root-level
privileges, he could try to go further into the network in an attempt to
get access to the MGMT layer. Since we have link-local - it's limited to 1
hypervisor only.

Assuming the iptables/firewall on the hypervisor blocks incoming traffic
from the VR link-local address - we should be OK. I guess I need to
double-check this.

Regards
ilya



On 4/10/16 1:26 PM, Kris Sterckx wrote:
> Hi all
> 
> 
> Thanks for reviewing the FS. Based on the received comments I clarified
> further in the FS that the Vpc Inline LB appliance solution is based on the
> Internal LB appliance solution, only now extended with secondary IP's and
> static NAT to Public IP.
> 
> I also corrected the "management" nic to "control" nic. The text really
> meant eth1, i.e the link-local nic on KVM.
> 
> Pls find the updated text :
> 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61340894
> 
> Architecture and Design description
> 
> We will introduce a new CloudStack network plugin “VpcInlineLbVm” which is
> based on the Internal LoadBalancer plugin and which just like the Internal
> LB plugin is implementing load balancing based on at-run-time deployed
> appliances based on the VR (Router VM) template (which defaults to the
> System VM template), but the LB solution now extended with static NAT to
> secondary IP's.
> 
> The VPC Inline LB appliance therefore is a regular System VM, exactly the
> same as the Internal LB appliance today. Meaning it has 1 guest nic and 1
> control (link-local / management) nic.
> 
> With the new proposed VpcInlineLbVm set as the Public LB provider of a VPC,
> when a Public IP is acquired for this VPC and LB rules are configured on
> this public IP, a VPC Inline LB appliance is deployed if not yet existing,
> and an additional guest IP is allocated and set as secondary IP on the
> appliance guest nic, upon which static NAT is configured from the Public IP
> to the secondary guest IP.  (See below outline for the detailed algorithm.)
> 
> *In summary*, the VPC Inline LB appliance is reusing the Internal LB
> appliance but its solution now extended with Static NAT from Public IP's to
> secondary (load balanced) IP's at the LB appliance guest nic.
> 
> 
> Hi Ilya,
> 
> Let me know pls whether that clarifies and brings new light to the
> questions asked.
> 
> Can you pls indicate, given the suggested approach of reusing the appliance
> mechanism already used for Internal LB, whether this addresses the concern
> or, when it doesn't, pls further clarify the issue seen in this approach.
> 
> Thanks!
> 
> 
> Hi Sanjeev, to your 1st question:
> 
> Will this LB appliance be placed between guest vm's and the Nuage VSP
> provider(Nuage VSP and lb appliance will have one nic in guest network)?
> 
>> Please note that the LB appliance is a standard System VM, having 1 nic
> in Guest network and 1 nic in Control. There is as such no relation between
> this Appliance and the Nuage VSP.
> 
> In the case where Nuage VSP is the Connectivity provider, the appliance has
> a guest nic in a Nuage VSP managed (VXLAN) network, like all guest VM's
> would have. But that is dependent on the provider selection.
> 
> In the specific case of Nuage VSP, publicly load balanced traffic will
> indeed flow as : (pls read-on to your 2nd question also) :
> -> incoming traffic on Public IP  (Nuage VSP managed)
> -> .. being Static NAT'ted to Secondary IP on Vpc Inline LB VM
>  (NAT'ting is Nuage VSP managed)
> -> .. being load balanced to real-server guest VM IP's  (Vpc Inline LB
> VM appliance managed)
> -> .. reaching the real-server guest VM IP
> 
> To your 2nd question:
> 
> Is there any specific reason for traffic filtering on lb appliance instead
> of Nuage VSP ? If we configure firewall rules for LB services on the Nuage
> VSP instead of the inline lb appliance (iptable rules  for lb traffic),
> traffic can be filtered on the Nuage VSP before Natting?
> 
>> Please note that the generic Static NAT delegation is applicable : the
> realization of the Static NAT rules being set up, depends on the Static NAT
> provider in the VPC offering. In case Nuage VSP is the provider for Static
> NAT (which it would be in the case of a Nuage SDN backed deployment), the
> NAT’ting is effectively done by the Nuage VSP.  If anyone else is the
> provider, than this provider i

Re: Introduction

2016-04-10 Thread ilya
Welcome Rashmi!

On 4/7/16 9:58 PM, Rashmi Dixit wrote:
> Hello!
> 
> I am Rashmi Dixit and have recently joined the CloudPlatform team in 
> Accelerite. I have worked on a hybrid cloud management solution supporting 
> hypervisors such as KVM, Xen, VMware, HyperV and public clouds such as EC2. 
> My areas of interest are User Interface, networking.
> 
> I am really looking forward to contributing on CloudStack.
> 
> See you around!
> Rashmi
> 
> Rashmi Dixit
> Principal Product Engineer | CloudPlatform | www.accelerite.com
> 
> 
> 
> 
> 


Re: [ACP Doctor] What is it?

2016-04-06 Thread ilya
Wow, a whole one-page site dedicated to this endeavor :)

On 4/6/16 9:22 PM, Will Stevens wrote:
> FYI, It does still seem to be freely available:  http://ccpdoctor.com/
> 
> *Will STEVENS*
> Lead Developer
> 
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
> 
> On Thu, Apr 7, 2016 at 12:11 AM, ilya <ilya.mailing.li...@gmail.com> wrote:
> 
>> Thanks for the explanation, Ian and Will.
>>
>> Much appreciated.
>>
>> On 4/5/16 8:33 PM, Ian Rae wrote:
>>> I don't believe this is freely available, rather is a tool Citrix
>> developed
>>> for helping troubleshoot CCP customer deployments. I would imagine that
>>> Accelerite owns this tool now and it is likely available if you are an
>> ACP
>>> customer, but not necessarily for ACS users.
>>>
>>> Probably best for Accelerite to comment.
>>>
>>> On Tuesday, 5 April 2016, Will Stevens <wstev...@cloudops.com> wrote:
>>>
>>>> It used to be CCP Doctor and it is not in ACS from my understanding.
>> It is
>>>> a set of scripts that will do basic validation of a CloudStack setup.
>> It
>>>> does things like verify the system VMs are running and the connectivity
>> is
>>>> working between all of the systems.  It also does some checking to make
>>>> sure the versions of software is correct and checks some things in the
>> DB
>>>> as well.  It also collects a whole crap ton of logs and database dumps
>> (i
>>>> think) and zips them up for easy transfer to support so they can get a
>>>> solid feel for your setup.
>>>>
>>>> It also has 'suggestions' for things you can do to fix different
>> aspects of
>>>> your setup.  Things like setting 'ulimit' to 'unlimited' and will give
>> you
>>>> the command to run.  It also lets you pass a 'fix' flag and it will
>>>> automagically make all the changes for you.  I am too paranoid to have
>>>> actually used the fix flag because I was always using this in production
>>>> environments and I am a little too risk averse to let a script do
>> anything
>>>> for me (unless I wrote it).
>>>>
>>>> Does that answer your question?  It should be freely available and you
>>>> should be able to run it against ACS, so you should be able to try it
>>>> out...
>>>>
>>>> It is a pretty useful tool to be honest.  Especially if you are
>>>> troubleshooting an environment you didn't setup.
>>>>
>>>> *Will STEVENS*
>>>> Lead Developer
>>>>
>>>> *CloudOps* *| *Cloud Solutions Experts
>>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>>> w cloudops.com *|* tw @CloudOps_
>>>>
>>>> On Tue, Apr 5, 2016 at 5:28 PM, ilya <ilya.mailing.li...@gmail.com
>>>> <javascript:;>> wrote:
>>>>
>>>>> Saw ACP Doctor in CCP release notes from Accelerite.
>>>>>
>>>>> Curious what it is, is it integrated into cloudstack or collection of
>>>>> shell scripts?
>>>>>
>>>>> Thanks
>>>>> ilya
>>>>>
>>>>
>>>
>>>
>>
> 


Re: [ACP Doctor] What is it?

2016-04-06 Thread ilya
Thanks for the explanation, Ian and Will.

Much appreciated.

On 4/5/16 8:33 PM, Ian Rae wrote:
> I don't believe this is freely available, rather is a tool Citrix developed
> for helping troubleshoot CCP customer deployments. I would imagine that
> Accelerite owns this tool now and it is likely available if you are an ACP
> customer, but not necessarily for ACS users.
> 
> Probably best for Accelerite to comment.
> 
> On Tuesday, 5 April 2016, Will Stevens <wstev...@cloudops.com> wrote:
> 
>> It used to be CCP Doctor and it is not in ACS from my understanding.  It is
>> a set of scripts that will do basic validation of a CloudStack setup.  It
>> does things like verify the system VMs are running and the connectivity is
>> working between all of the systems.  It also does some checking to make
>> sure the versions of software is correct and checks some things in the DB
>> as well.  It also collects a whole crap ton of logs and database dumps (i
>> think) and zips them up for easy transfer to support so they can get a
>> solid feel for your setup.
>>
>> It also has 'suggestions' for things you can do to fix different aspects of
>> your setup.  Things like setting 'ulimit' to 'unlimited' and will give you
>> the command to run.  It also lets you pass a 'fix' flag and it will
>> automagically make all the changes for you.  I am too paranoid to have
>> actually used the fix flag because I was always using this in production
>> environments and I am a little too risk averse to let a script do anything
>> for me (unless I wrote it).
>>
>> Does that answer your question?  It should be freely available and you
>> should be able to run it against ACS, so you should be able to try it
>> out...
>>
>> It is a pretty useful tool to be honest.  Especially if you are
>> troubleshooting an environment you didn't setup.
>>
>> *Will STEVENS*
>> Lead Developer
>>
>> *CloudOps* *| *Cloud Solutions Experts
>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>> w cloudops.com *|* tw @CloudOps_
>>
>> On Tue, Apr 5, 2016 at 5:28 PM, ilya <ilya.mailing.li...@gmail.com
>> <javascript:;>> wrote:
>>
>>> Saw ACP Doctor in CCP release notes from Accelerite.
>>>
>>> Curious what it is, is it integrated into cloudstack or collection of
>>> shell scripts?
>>>
>>> Thanks
>>> ilya
>>>
>>
> 
> 


[ACP Doctor] What is it?

2016-04-05 Thread ilya
Saw ACP Doctor in CCP release notes from Accelerite.

Curious what it is - is it integrated into CloudStack, or a collection of
shell scripts?

Thanks
ilya


Re: [SSL CERTS] Importing ROOT and INTERMEDIATE certs for SSVM

2016-03-31 Thread ilya
I have a web service that serves CloudStack templates; the SSL cert on the
download web service is signed by an internal CA. This means I need to
inject the intermediate CA as well as the ROOT CA into the SSVM's Java
keystore - for the Java client to be able to recognize the certs and
download the template from the remote repository.





On 3/29/16 4:48 AM, Daan Hoogland wrote:
> Ilya, to my knowledge the certificate won't be saved on file. It will be
> loaded from the command coming from the MS in the agent directly. Why are
> you looking to update the ssvm? I thought these are only used in the
> consoleproxy.
> 
> On Tue, Mar 29, 2016 at 12:17 AM, ilya <ilya.mailing.li...@gmail.com> wrote:
> 
>> I'm having difficulty getting ROOT and INTERMEDIATE certificates to show
>> up in SSVM java keystore.
>>
>>
>> I've followed the procedure on
>>
>> http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/systemvm.html?highlight=pkcs
>>
>> and
>>
>>
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Procedure+to+Replace+realhostip.com+with+Your+Own+Domain+Name
>>
>> But after restart of SSVM and MS - the keystore still has default Go
>> Daddy certs.
>>
>> Would any know how to troubleshoot it?
>>
>> Also, one thing to note: I'm not uploading the actual wildcard cert, as
>> it's against security policy. It will be impossible for me to get a
>> wildcard cert.
>>
>> Regards
>> ilya
>>
> 
> 
> 


Re: Introduction

2016-03-28 Thread ilya
Hi Boris

Welcome!

On 3/28/16 5:21 AM, Boris Stoyanov wrote:
> Hi CloudStack, 
> 
> My name is Boris Stoyanov (Bobby) and today is my first day @ShapeBlue. I’m 
> based in Sofia, Bulgaria. I will be taking the role of Software Engineer in 
> Test, and as you may have guessed I’ll mostly focus on testing CloudStack. I 
> have about 10 years of experience in testing, which I’ve mostly spend in 
> doing test automation frameworks and deployment automation. I’m new to the 
> CloudStack business and I have a lot to learn, but I hope I’ll get up to 
> speed in short time. Looking forward to working with you! 
> 
> Best Regards,
> Bobby.
> Regards,
> 
> Boris Stoyanov
> 
> boris.stoya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> 


[SSL CERTS] Importing ROOT and INTERMEDIATE certs for SSVM

2016-03-28 Thread ilya
I'm having difficulty getting the ROOT and INTERMEDIATE certificates to
show up in the SSVM Java keystore.


I've followed the procedure on
http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/systemvm.html?highlight=pkcs

and

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Procedure+to+Replace+realhostip.com+with+Your+Own+Domain+Name

But after restart of SSVM and MS - the keystore still has default Go
Daddy certs.

Would anyone know how to troubleshoot it?
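
[One way to verify whether the import landed is to list the keystore
aliases directly - a minimal sketch; pass your SSVM's keystore path and
store password as arguments (`keytool -list` does the same thing):]

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Enumeration;

// Prints every alias in a Java keystore, so you can see whether the ROOT
// and INTERMEDIATE CA entries actually made it in after the restart.
// Usage: java ListKeystoreAliases <keystore-path> <store-password>
public class ListKeystoreAliases {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(args[0])) {
            ks.load(in, args[1].toCharArray());
        }
        Enumeration<String> aliases = ks.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            System.out.println(alias + " (certificate entry: "
                    + ks.isCertificateEntry(alias) + ")");
        }
    }
}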

Also, one thing to note: I'm not uploading the actual wildcard cert, as
it's against security policy. It will be impossible for me to get a
wildcard cert.

Regards
ilya


Re: Migrating CloudStack content from download.cloud.com

2016-03-25 Thread ilya
Raja

Would you know if cloud.com domain will be transferred to accelerite?

Regards
ilya

On 3/24/16 5:38 AM, Wido den Hollander wrote:
> 
>> On 24 March 2016 at 13:33, Raja Pullela <raja.pull...@accelerite.com> wrote:
>>
>>
>> Hi,
>>
>> Citrix has been hosting "download.cloud.com" for quite some time now,
>> and it holds the System Templates for all the releases and some tools.
>> Going forward, this content needs to be moved from "download.cloud.com".
>> So, we will be moving this content to "cloudstack.accelerite.com". I will
>> also be updating the links in the documentation to reflect these changes
>> and will provide an update once the content move is complete.
>>
>> @Wido, if you could also copy this content to "cloudstack.apt-get.eu"  that
>>  will be great.  I can provide you the details in a separate email.
> 
> Super! If you have a rsync source I will set it up.
> 
> Wido
> 
>>
>> Best,
>> Raja
>> Senior Manager, Product Development,
>> Accelerite,
>> www.accelerite.com
>>
>>
>>
>>


Re: [DISCUSS] Request for comments : VPC Inline LoadBalancer (new plugin)

2016-03-24 Thread ilya
Hi Nick,

Being fan of SDN, I gave this proposal a thorough read.

I have only 1 comment - that you can perhaps use to reconsider:

"Each appliance will have 2 nics, one for management, and one in the
guest network. "

In general, 2 NICs - one going to management and one going to guest - are
looked upon very negatively by internal InfoSec teams. This implementation
will make the LB non-compliant from a SOX or PCI perspective.

Proposed alternate solution:
Deploy a VM with 2 NICs but put them both on the same guest network (I
believe support for 2 NICs on the *same* guest network has already been
submitted upstream): 1 NIC for MGMT and 1 NIC for GUEST.

Using the SDN's ability to restrict communication flows (openvswitch or
whatnot), only allow specific connections from the CloudStack MS to the
Inline LB on the MGMT NIC. You will need to block all external GUEST
communication to the MGMT NIC and only let it talk to the CloudStack MS
on specific ports.

This approach should preserve internal compliance and won't raise any
red flags.

Perhaps reach out to a client who requested this feature and ask what
they think; maybe they have not thought this through.

Regards
ilya

PS: If we were to entertain the idea of InLine LB, we would most likely
ask for approach mentioned above.




On 3/24/16 1:18 AM, Nick LIVENS wrote:
> Hi all,
> 
> I'd like to propose a new plugin called the "VPC Inline LB" plugin.
> The design document can be found at :
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61340894
> 
> Looking forward to hear your reviews / thoughts.
> 
> Thanks!
> 
> Kind regards,
> Nick Livens
> 


Re: introduction

2016-03-23 Thread ilya
Welcome back Murali!

On 3/23/16 4:50 AM, Murali Reddy wrote:
> All,
> 
> Just wanted to let the community know that I have decided to work on Apache 
> CloudStack again. I have taken up a position at ShapeBlue, so you will be 
> seeing me around @dev and @users. For the new members of the community and 
> others who I have not interacted with, here is small self introduction. I 
> started my journey with CloudStack, in late 2010 as cloud.com employee. Later 
> spent 4 years till late 2014 working on various CloudStack features (EIP, 
> GSLB, NAAS, NetScaler integration, native SDN controller, distributed virtual 
> router, event bus to name a few). I worked on an NFV solution on OpenStack 
> for over a year. I am excited to join back the community, and look forward 
> to interacting with you.
> 
> Thanks,
> Murali
> 


[IMPORTANT] Huge Github PR Backlog

2016-03-19 Thread ilya
Hi Folks,

What can we do about the PR backlog in GitHub? As we all know, it will
become very difficult to merge the changes, as things will get out of sync.

Feedback is welcome,

Thanks,
ilya


Re: [VOTE] Move 'apache/cloudstack' -> 'apache-cloudstack/cloudstack'

2016-03-18 Thread ilya
+1 Binding.

I don't see anything wrong with this approach, especially if it helps
solve our backlog issue.

On 3/18/16 3:44 PM, Will Stevens wrote:
> We are discussing this proposal in 3 or 4 threads, so I will not try to
> recap everything.  Instead I will try to give a brief overview of the
> problem and a proposal for solving it.
> 
> *Problem:*
> The Apache CloudStack community needs additional github permissions in
> order to integrate CI for the purpose of maintaining code quality.  The ASF
> does not have enough granular control via the 'apache' github organization
> to give the 'apache/cloudstack' repository the needed permissions.
> 
> *Proposal:*
> Transfer ownership of the 'apache/cloudstack' mirrored repository out of
> the 'apache' github organization into the 'apache-cloudstack' github
> organization (which I have already setup and started inviting users to).
> Both members of the ACS community and the ASF board will have 'owner'
> permissions on this new organization.  This will allow for permissions to
> be applied specifically to the 'apache-cloudstack' organization and not
> have to be applied to the entire 'apache' organization.
> 
> By transferring ownership, all of the PRs will be copied to the new
> repository and redirects will be created on github from 'apache/cloudstack'
> to 'apache-cloudstack/cloudstack'.
> 
> The developer workflow and commit workflow will remain unchanged.  The
> canonical ASF repository (git://git.apache.org/cloudstack.git) will remain
> the source of truth and commits will be made to that repository.
> 
> Please ask if anything is unclear or needs to be better defined in order
> for you to cast a vote.
> 


Re: 4.9 Release Management

2016-03-08 Thread ilya
Will, Samir, Koushik and Patrick,

Thanks for your commitment and energy.

Regards
ilya

On 3/2/16 11:43 AM, Samir Agarwal wrote:
> Kudos Will!
> 
> I had received many private notes wondering if Accelerite will continue to 
> play a strong role in contributions to the community; here is your proof!
> 
> We wanted to take on the biggest pain points in the community, and see how we 
> can make positive contributions. Koushik Das will work alongside Will and 
> Patrick to address both of these problem areas. I believe that this will put 
> the community on a path to more manageable releases going forward.
> 
> Best
> 
> Samir
> 
>   
> 
> 
> -Original Message-
> From: Will Stevens [mailto:williamstev...@gmail.com] 
> Sent: Wednesday, March 02, 2016 9:15 AM
> To: dev@cloudstack.apache.org
> Subject: 4.9 Release Management
> 
> Hello Everyone,
> I have mentioned this in other related threads, but I wanted to make an 
> official thread on the topic.
> 
> I am nominating myself as the release manager for 4.9.  Please feel free to 
> discuss if you have comments or concerns.
> 
> I will not be working alone, I will be assisted by Koushik Das and Patrick 
> Dube.  I will be running point, but all three of us will be working together 
> as a unit for this release.
> 
> Our main focus for this release is the integration of hardware Continuous 
> Integration (CI) into the PR flow.  Koushik and his team will be setting up a 
> CI environment which will be used for testing PRs and I will also be setting 
> up a CI environment for testing PRs.
> 
> The details of the CI integration will be handled publicly, but we will 
> likely have to work with a minimum viable implementation initially and move 
> forward from there.  Here are some of the key aspects of the CI which are top 
> of mind for me.
> 
> - Standardize a feedback mechanism to post the result of CI runs back to the 
> relevant PR.  I believe the best way to do this would be to post a summary of 
> the CI run in the PR thread on Github.  With the existing integration, this 
> will then get pushed to the mailing list (since all comments on a PR are 
> pushed to the mailing list).
> - Ideally, we will also make the CI logs available for the run.  We are still 
> working out the details of how we do this, but we will likely be pushing the 
> logs to an object store with a cleanup window to remove the logs after a set 
> period of time (probably a week).  This should give people the opportunity to 
> pull the logs if they are interested in the test results, but will reduce the 
> need for ever growing storage.
> - In order to parallelize the CI operations, we will not be automatically 
> kicking off a CI run for every PR for now.  Instead, we will communicate 
> between us and each run distinct PRs so we can maximize the utilization of 
> our hardware.
> 
> Some longer term goals of the CI in my mind are as follows:
> 
> - I would like the core CI framework to be easily distributed and accessible 
> to anyone who has hardware available.  This would enable anyone to setup a CI 
> on their hardware and it would automatically be hooked up to feedback the 
> results to the Github PRs.  I feel this is very important long term because 
> every individual or organization depends on a different configuration and 
> hardware setup, so it empowers them to validate their own use case while 
> adding value back to the community.
> 
> Additional details will follow, namely the release schedule etc.
> 
> Please contribute your ideas and feedback.
> 
> Cheers,
> 
> Will
> 
> 
> 
> 


Re: LDAP auth failures

2016-03-08 Thread ilya
I could not get LDAP to work in 4.5.x either; I could get it to work in 4.3.

I also get no stacktrace as to what could be wrong.



On 3/3/16 4:53 AM, Rene Moser wrote:
> We are experiencing authentication issues with LDAP since upgrade to 4.5.1.
> 
> After some time (...), users cannot authenticate anymore; however,
> authentication in other services using LDAP works during this time. The
> issue seems to be related only to the CloudStack login.
> 
> We haven't found the root cause yet, a network setup issue or openldap
> config issue can not be excluded.
> 
> Stacktrace:
> 
> 2016-02-29 10:05:36,375 DEBUG [cloudstack.ldap.LdapContextFactory]
> (catalina-exec-4:ctx-9ffa7c60) initializing ldap with provider url:
> ldap://ldap.example.com:389
> 2016-02-29 10:05:42,382 DEBUG [cloudstack.ldap.LdapManagerImpl]
> (catalina-exec-4:ctx-9ffa7c60) ldap Exception:
> javax.naming.NamingException: LDAP response read timed out, timeout
> used:6000ms.; remaining name 'dc=foo,dc=bar'
>   at com.sun.jndi.ldap.Connection.readReply(Connection.java:485)
>   at com.sun.jndi.ldap.LdapClient.getSearchReply(LdapClient.java:639)
>   at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:562)
>   at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1985)
>   at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1847)
>   at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
>   at
> org.apache.cloudstack.ldap.LdapUserManager.searchUsers(LdapUserManager.java:206)
>   at
> org.apache.cloudstack.ldap.LdapUserManager.getUser(LdapUserManager.java:122)
>   at
> org.apache.cloudstack.ldap.LdapManagerImpl.getUser(LdapManagerImpl.java:173)
>   at
> org.apache.cloudstack.ldap.LdapManagerImpl.canAuthenticate(LdapManagerImpl.java:97)
>   at
> org.apache.cloudstack.ldap.LdapAuthenticator.authenticate(LdapAuthenticator.java:61)
> 2016-02-29 10:05:42,383 DEBUG [cloudstack.ldap.LdapManagerImpl]
> (catalina-exec-4:ctx-9ffa7c60) Exception while doing an LDAP bind for
> user  johndoe
> org.apache.cloudstack.ldap.NoLdapUserMatchingQueryException: No users
> matching: No Ldap User found for username: johndoe
> 
> As I understand it, there is a username lookup (bind with top reader
> credentials) to see if a user exists in the LDAP. If found, a new
> connection will be established for auth. In the above stacktrace it seems
> that the username lookup fails.
> 
> What we further see on the ACS management server, however, is that LDAP
> connections are never closed.
> 
> For _every_ successful auth, the TCP connection remains established forever.
> 
> In my understanding of
> http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/config.html
> these connections will become idle after successful authentication and
> reused for new authentication.
> 
> However, the reuse for auth doesn't seem to work. _Every_ new
> successful auth of a user _creates_ a new LDAP connection. We don't know
> if this is related to our problem, but at least it doesn't look like
> wanted behavior.
> 
> In the docs we read: "By default, idle connections remain in the pool
> indefinitely until they are garbage-collected"
> 
> But as said, they seem never to be gc-ed. After we added
> -Dcom.sun.jndi.ldap.connect.pool.timeout=6 to
> /etc/cloudstack/management/tomcat6.conf, the connections are being gc-ed
> and we haven't had any report about failed logins since then.
> 
> Has anyone else seen such an issue? Any thoughts?
> 
> René
> 
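
[For anyone hitting the same pooling behaviour: the knobs involved are
standard JNDI. A minimal sketch of a pooled LDAP bind follows - values are
illustrative, and this is not CloudStack's actual LdapContextFactory. Note
that the pool idle timeout is a JVM *system* property, which is why it had
to go into tomcat6.conf above, while pooling itself is opted into per
context:]

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// Minimal pooled JNDI LDAP bind. Pool sizing and idle timeout come from
// com.sun.jndi.ldap.connect.pool.* JVM system properties (e.g. the
// -Dcom.sun.jndi.ldap.connect.pool.timeout setting mentioned above);
// the per-context environment only opts the connection into the pool.
public class LdapPoolExample {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=reader,dc=foo,dc=bar");
        env.put(Context.SECURITY_CREDENTIALS, "secret");
        env.put("com.sun.jndi.ldap.connect.pool", "true"); // use the shared pool
        env.put("com.sun.jndi.ldap.read.timeout", "6000"); // the 6000ms seen in the trace

        DirContext ctx = new InitialDirContext(env); // binds, or reuses a pooled connection
        try {
            // ... perform the user search here ...
        } finally {
            ctx.close(); // returns the connection to the pool instead of closing the socket
        }
    }
}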


Re: [PROPOSAL] Minimum Viable CI Integration

2016-03-04 Thread ilya
I see where Daan is coming from :)  I thought this would be a 4th
language, not exactly a 7th.

I'm not against Golang by any means (if anything - it's my next "go"-to
language these days).

Things to consider:

Would notify-pr support a proxy? I've been thinking of ways of
contributing test runs; there are a few things I'd need to do.

1) Massage the log content so that no environment details are
exposed; I can probably handle this with sed/awk.

2) I'm behind multiple firewalls with no internet access. However, some
lab environments might have a proxy, so it would be nice to have
support for it.

Thanks
ilya
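
[For context on what such a tool does on the wire: posting CI results back
to a PR boils down to a single authenticated POST against the GitHub API.
A hedged Java sketch of that call follows - this is not the actual
notify_pr tool (which is Golang), just the documented commit-comment
endpoint it would reduce to. Note that HttpURLConnection honors the
standard https.proxyHost/https.proxyPort JVM properties, which speaks to
the proxy question above:]

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Posts a CI summary as a commit comment via the GitHub v3 API:
// POST /repos/{owner}/{repo}/commits/{sha}/comments
public class PostCommitComment {
    public static void main(String[] args) throws Exception {
        String owner = "apache", repo = "cloudstack";
        String sha = "c8443496d3fad78a4b1a48a0992ce545bde299e8";
        String token = System.getenv("GITHUB_TOKEN"); // personal access token
        URL url = new URL("https://api.github.com/repos/" + owner + "/" + repo
                + "/commits/" + sha + "/comments");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "token " + token);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        String body = "{\"body\": \"CI run summary: all smoke tests passed.\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}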



On 3/4/16 6:56 AM, Will Stevens wrote:
> Yes, I have most of it already built and will be releasing it later today
> or over the weekend.  The reason I chose Golang is because it can be cross
> compiled to be run on any system and distributed as a single binary with no
> dependencies.  This means that no one will have to worry about building it
> or having to change their environment at all in order to use it.  I am
> trying to lower the barrier to entry and make it as easy as possible for
> people to contribute back their CI execution details.
> 
> *Will STEVENS*
> Lead Developer
> 
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
> 
> On Fri, Mar 4, 2016 at 8:53 AM, Daan Hoogland <daan.hoogl...@gmail.com>
> wrote:
> 
>> Will, Do you have an implementation of notify-pr? I am asking as you
>> specify it will be implemented in golang which seems odd. It is not amongst
>> the 7 or so languages already in use.
>>
>> On Fri, Mar 4, 2016 at 1:54 AM, Will Stevens <williamstev...@gmail.com>
>> wrote:
>>
>>> Hey Everyone,
>>> As I am sure most of you are aware, I have been focusing a lot on ways to
>>> get CI integrated back into the community.
>>>
>>> Today I build a little POC to validate some ideas and get a feel for a
>>> potential approach for getting CI integrated into the Github pull request
>>> workflow.
>>>
>>> There are multiple individuals/companies focusing on CI right now (which
>>> is a good thing), but there has not really been much discussion (that I
>> am
>>> aware of) for how these different CI runs will push back results to the
>>> community.  I want to make sure that nobody's work on this topic goes to
>>> waste, so my goal is to provide a simple and consistent way for everyone
>> to
>>> push their results back to the community.
>>>
>>> Here is the basic idea (please give feedback):
>>> - A simple cross platform command line tool with zero dependencies will
>> be
>>> provided (and open sourced) which will handle the submission of the CI
>>> results back to the community.  It is written in Golang and is currently
>>> called 'notify_pr'.
>>> - At the end of a CI execution, the CI should automate the execution of
>>> this tool to handle updating the appropriate PR with the results.
>>>
>>> Configuration can be done via the command line or through an INI file.
>>> Here is an example of the configuration details.  The commit is the
>>> commit that the CI just executed against.
>>>
>>> token  = <github token>
>>> owner  = apache
>>> repo   = cloudstack
>>> commit = c8443496d3fad78a4b1a48a0992ce545bde299e8
>>>
>>> summary_file = <path to the summary file>
>>> full_detail_dir = <directory with files to push to the object store>
>>> full_detail_files = <files to push to the object store>
>>> store_api = <object store api>
>>> store_endpoint = <object store endpoint>
>>> store_identity = <object store identity>
>>> store_secret = <object store secret>
>>>
>>> I have not yet implemented the object storage endpoints, but I have code
>>> to do it from a different project, so I just need to add it.  I will be
>>> able to host my CI output in a swift object store, but others may need to
>>> use AWS S3.  Maybe we can get sponsorship for this storage.  We will only
>>> keep the logs for a window of like a week or so on the object store so
>> the
>>> storage usage will not be ever growing.
>>>
>>> Basically, the tool takes the details of the repository you are
>> validating
>>> a Pull Request for and the commit you are building.  It also takes the
>>> files you would like to push to the pull request.  The summary file will
>> be
>>> shown as text in the pull request comment and the other files will be
>>> uploaded to an object store and will be publically available for a period
>>> of time (probably about a week and then get cleaned up, details TBD).
>> The
>

Re: [DISCUSS] Request for comments: Out-of-band Management for CloudStack (new feature)

2016-03-04 Thread ilya
Rohit,

Great job!

Not certain if this was mentioned, but the premise behind the IPMI
integration was partially driven by HA and being able to "fence" the host
in question to avoid a split-brain scenario - as well as to handle other
issues when hypervisors malfunction.

With that said, Will brings up a good point: what can we offer to avoid an
accidental power-down while the host is functional? I see plenty of
curious CloudStack admins who identify the function of a button by
pressing it and saying "let's see what happens". Better yet, some
learn on the job in production environments.

Do we give the end user a warning of any kind? If possible, I would
suggest we give a one-line warning - something like "The host is not in
Maintenance Mode, proceed at your own risk!" or something to that effect.

Obviously this would only be a CloudStack UI safeguard, and no such
warning would be shown when you use the APIs directly.

Lastly, please consider allowing IPMI to be disabled at the cluster and
zone level; I don't believe we need it for the pod. Please don't cascade
over every host IPMI object and change its state; instead, have a separate
DB entry (or similar) that tracks this selection for the cluster or zone.

It could be as simple as a configuration setting for the zone or cluster,
resolved as in the sketch below.

thanks
ilya
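
[A sketch of the kind of scoped toggle meant here - purely illustrative,
with made-up names rather than CloudStack code: resolve the setting from
the most specific scope that defines it, instead of flipping a flag on
every host row.]

import java.util.HashMap;
import java.util.Map;

// Illustrative only: an out-of-band-management toggle resolved
// host -> cluster -> zone, stored as at most one entry per scope
// rather than cascaded onto every host.
public class OobmToggleDemo {
    enum Scope { ZONE, CLUSTER, HOST }

    private final Map<String, Boolean> settings = new HashMap<>();

    void set(Scope scope, long id, boolean enabled) {
        settings.put(scope + ":" + id, enabled);
    }

    boolean isEnabled(long zoneId, long clusterId, long hostId) {
        // Most specific scope wins; the default is enabled.
        for (String key : new String[] {
                Scope.HOST + ":" + hostId,
                Scope.CLUSTER + ":" + clusterId,
                Scope.ZONE + ":" + zoneId }) {
            Boolean v = settings.get(key);
            if (v != null) {
                return v;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        OobmToggleDemo d = new OobmToggleDemo();
        d.set(Scope.CLUSTER, 5, false); // disable IPMI actions for cluster 5
        System.out.println(d.isEnabled(1, 5, 42)); // false, inherited from the cluster
    }
}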


On 3/4/16 1:20 AM, Rohit Yadav wrote:
> 
> 
> Rohit Yadav
> Software Architect, ShapeBlue
> 
> 
>> On 03-Mar-2016, at 12:58 PM, Will Stevens <williamstev...@gmail.com
>> <mailto:williamstev...@gmail.com>> wrote:
>>
>> Maybe I am not understanding something here.
>>
>> Does this control the power cycle of the management server(s) or the
>> hypervisor hosts?  The wording is throwing me off.
> 
> Fixed. The feature applies to hypervisor hosts only, not management
> server hosts (unless, of course, the mgmt server host is a hypervisor host
> as well - for example, running mgmt server + KVM agent on a KVM host).
> 
>> I am guessing it is for managing the hypervisor hosts. If this is the
>> case,
>> does it also handle the "maintenance mode" for the host as well?
> 
> Maintenance mode is a hypervisor semantic, this is not related to the
> out-of-band management interface (the BMC, such as iLO, iDRAC) available
> on the hypervisor host.
> 
> Even when you enable/disable maintenance mode, you can use a tool like
> ‘ipmitool’ to execute a power management operation such as
> on/off/reset etc., so you should be able to perform the same using this
> feature. Therefore, presently there is no such enforcement.
> 
>> At least
>> with XenServer, if you do a power cycle without putting the host into
>> maintenance mode first, all the VRs will have to be restarted on that host
>> once it is back up in order for their networking to work again.
> 
> We can put in a rule to avoid executing any power operation when hosts
> are put in maintenance mode, though some users may still want to be able
> to execute power operations. Comments?
> 
> Regards.
> 


Re: PR validation using proposed CI.

2016-02-25 Thread ilya
Bharat,

Hope all is well.

Perhaps you can explain the workflow and extensiveness of your tests.

Thanks
ilya



On 2/24/16 9:23 PM, Bharat Kumar wrote:
> Hi,
> 
> As all of you know from earlier discussions in the community, we have been 
> working on implementing a CI to test github PRs and publish results. We have 
> implemented this 
> internally and will start testing PRs and publish the results by mail to dev 
> list. 
> 
> At this point, if a test fails, we do not know for sure whether it is due to
> a bug in the test, in CloudStack, or in the CI environment. We do not have
> anything to cross-reference. If a test is being
> reported as a failure but is passing in your local environment, please let us
> know and we will try to fix this in the CI environment.
> 
> We also do not have a way to share the management server logs and test run 
> logs; we are working on it. 
> 
> Please let us know if you guys think any improvements are needed.
> 
> Thanks,
> Bharat.
> 


Re: [DISCUSS] Hiding the instance name from users

2016-02-23 Thread ilya
There is no good reason I am aware of.

CloudStack was designed under the premise that it would be run by cloud
hosting providers and not enterprises. Hence there was not much emphasis
on the CloudStack instance name vs. the display name.

With that said, down the road a feature was added (VMware only) to show
the display name in vCenter.

What hypervisors are you running?

On 2/23/16 12:41 AM, Erik Weber wrote:
> Hi,
> 
> Before I file a bug I'd like to check if anyone know a good reason for
> hiding the instance name, that is the typical i-2-123-VM name, from users?
> 
> There are cases where an error returns the instancename rather than
> name/uuid, and this is confusing when users have no means to map that to
> something they know.
> 
> Example log where instance name is used:
> 
> Failed to attach volume myvolume to VM mymachine; Device 1 is used in VM
> i-2-123-VM"}
> 
> If you do operations manually, or one by one then it is easy to correlate
> it to your last action, but couple this with any kind of automation where
> you do things in parallell and it becomes a nightmare to figure out of.
> 
> Opinions?
> 

