API rate limiting

2018-11-01 Thread Marc-Aurèle Brothier
Hi,

Are people using the API rate limiting feature?

I'm working on removing the ehcache and was wondering if I need to find an 
alternate solution for that plugin or if it could be removed too.

Marc-Aurèle




Re: [PROPOSE] Combining Apache CloudStack Documentation

2018-07-25 Thread Marc-Aurèle Brothier
I like the idea and the RTD. Thanks Paul for moving the data around.

On Wed, Jul 25, 2018 at 11:10 AM, Rohit Yadav 
wrote:

> +1 I like it, I'm okay with release notes being a separate (version
> specific) document/website.
>
>
> - Rohit
>
> 
>
>
>
> 
> From: Paul Angus 
> Sent: Tuesday, July 24, 2018 2:55:26 PM
> To: d...@cloudstack.apache.org; users@cloudstack.apache.org
> Subject: [PROPOSE] Combining Apache CloudStack Documentation
>
> Hi All,
>
> We currently have four sources of documentation [1], which makes managing
> the documentation convoluted and, worse, makes navigating and searching the
> documentation really difficult.
>
> I have taken the current documentation and combined them into one repo,
> then created 7 sections:
>
> CloudStack Concepts and Terminology
> Quick Installation Guide
> Installation Guide
> Usage Guide
> Developers Guide
> Plugins Guide
> Release Notes
>
> I haven't changed any of the content, but I've moved some of it around to
> make more sense (to me). You can see the result on RTD [2].
>
> I'd like to PROPOSE to move this demo version of the documentation over to
> the Apache repos and make it THE documentation source, update the website,
> and mark the current repos/sites as archive data.
>
> [1]
> https://github.com/apache/cloudstack-docs.git is a bit of a hodge-podge of
> resources
> https://github.com/apache/cloudstack-docs-install.git is the install guide
> https://github.com/apache/cloudstack-docs-admin.git is the current admin
> manual.
> https://github.com/apache/cloudstack-docs-rn.git is the release notes for
> individual releases
>
> [2]  https://beta-cloudstack-docs.readthedocs.io/en/latest/
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: Github Issues

2018-07-17 Thread Marc-Aurèle Brothier
Hi Paul,

My 2 cents on the topic.

people are commenting on issues when it should be on the PR and vice-versa
>

I think this is simply due to the fact that with one login you can do both,
whereas before you had to have a JIRA login, which people might have tried to
avoid, preferring to use GitHub directly, ensuring the conversation would
only be on the PR. Most of the issues in Jira didn't have any conversation
at all.

But I do also feel the pain of searching the issues on GitHub as it's more
free-form than a Jira system. At the same time it's easier and quicker to
navigate, so it eases the pain as well ;-)
I would say that the current labels aren't organized well enough to allow
searching like in Jira, but they could be. For example, every label could
have a prefix describing the Jira attribute type (component, version, ...).
Then a bot scanning the issue content could set some of them, as other open
source projects are doing. The downside here is that you might end up with
too many labels. Maybe @resmo can give his point of view on how things are
managed in Ansible (https://github.com/ansible/ansible/pulls - lots of
labels, lots of issues and PRs). I don't know if that's a solution, but
labels seem to be the only way to organize things.

Marc-Aurèle

On Tue, Jul 17, 2018 at 10:53 AM, Paul Angus 
wrote:

> Hi All,
>
> We have been trialling replacing Jira with Github Issues. I think that
> we should have a conversation about it before it becomes the new standard by
> default.
>
> From my perspective, I don't like it.  Searching has become far more
> difficult, and so has categorising. When there is a bug fix it can only be
> targeted at a single version, which makes fixes easy to lose track of, and
> when looking at milestones, issues and PRs get jumbled up and people are
> commenting on issues when it should be on the PR and vice-versa (yes I've done
> it too).
> In summary, from an administrative point of view it causes a lot more
> problems than it solves.
>
> I yield the floor to hear other people's opinions...
>
>
> Kind regards,
>
> Paul Angus
>
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: Can't KVM migrate between local storage.

2018-05-14 Thread Marc-Aurèle Brothier
Can you give us the results of these API calls:

listZones
listZones id=2
listHosts
listHosts id=5
listStoragePools
listStoragePools id=1
listVirtualMachines id=19
listVolumes id=70

On Mon, May 14, 2018 at 5:30 PM, Daznis <daz...@gmail.com> wrote:

> Hi,
>
> It has 1 zone. I'm not sure how it got zoneid=2. Probably the zone failed to
> be added completely and was added again. We have 4 hosts with local storage
> on them for system VMs and VMs that need SSD storage, and Ceph primary
> storage for everything else, plus one secondary storage server.
>
> On Mon, May 14, 2018 at 5:38 PM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
> > Hi Daznis,
> >
> > Reading the logs I see some inconsistency in the values. Can you describe
> > the infrastructure you set up? The thing that disturbs me is a zoneid=2,
> > and a destination pool id=1. Aren't you trying to migrate a volume of a
> VM
> > between 2 regions/zones?
> >
> > On Sat, May 12, 2018 at 2:33 PM, Daznis <daz...@gmail.com> wrote:
> >
> >> Hi,
> >> Actually that's the whole log. Above it there's just the job starting. I have
> >> attached the missing part of the log. Which tables do you need from
> >> the database?
> >> There are multiple records with allocated/creating inside
> >> volume_store_ref. There is nothing that looks wrong with
> >> volumes/snapshots/snapshot_store_ref.
> >>
> >> On Thu, May 10, 2018 at 9:27 PM, Suresh Kumar Anaparti
> >> <sureshkumar.anapa...@gmail.com> wrote:
> >> > Hi Darius,
> >> >
> >> > From the logs, I could observe that the image volume is already in the
> >> > creating state and the same is being used for copying the volume
> >> > between pools. So, the state transition failed. Could you please
> >> > provide the complete log for the use case to root-cause the issue.
> >> > Also, include volumes and snapshots db details for the mentioned
> >> > volume and snapshot.
> >> >
> >> > -Suresh
> >> >
> >> >
> >> > On Thu, May 10, 2018 at 1:22 PM, Daznis <daz...@gmail.com> wrote:
> >> >
> >> >> Snapshots work fine. I can make a snapshot -> convert it to a template
> >> >> and start the VM on a new node from that template. That's what I did
> >> >> when I needed to move one VM for balancing purposes. But I want to fix
> >> >> the migration process. I have attached the error log to this email.
> >> >> Maybe I'm looking at the wrong place where I get the error?
> >> >>
> >> >> On Wed, May 9, 2018 at 9:23 PM, Marc-Aurèle Brothier <
> ma...@exoscale.ch
> >> >
> >> >> wrote:
> >> >> > Can you try to perform a snapshot of the volume on VM's that are on
> >> your
> >> >> > host, to see if they get copied correctly over the NFS too.
> >> >> >
> >> >> > Otherwise you need to look into the management logs to catch the
> >> >> exception
> >> >> > (stack trace) to have a better understanding of the issue.
> >> >> >
> >> >> > On Wed, May 9, 2018 at 1:58 PM, Daznis <daz...@gmail.com> wrote:
> >> >> >
> >> >> >> Hello,
> >> >> >>
> >> >> >>
> >> >> >> Yeah it's offline. I'm running version 4.9.2. Running it in the
> >> >> >> same zone with only NFS secondary storage.
> >> >> >>
> >> >> >> On Wed, May 9, 2018 at 10:49 AM, Marc-Aurèle Brothier <
> >> >> ma...@exoscale.ch>
> >> >> >> wrote:
> >> >> >> > Hi Darius,
> >> >> >> >
> >> >> >> > Are you trying to perform an offline migration within the same
> >> zone,
> >> >> >> > meaning that the source and destination hosts have the same set
> of
> >> NFS
> >> >> >> > secondary storage?
> >> >> >> >
> >> >> >> > Marc-Aurèle
> >> >> >> >
> >> >> >> > On Tue, May 8, 2018 at 3:37 PM, Daznis <daz...@gmail.com>
> wrote:
> >> >> >> >
> >> >> >> >> Hi,
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> I'm having an issue while migrating an offline VM disk between
> >> >> >> >> local storages. The particular error that has me baffled is
> >> >> >> >> "Can't find staging storage in zone". From what I have gathered,
> >> >> >> >> "staging storage" refers to secondary storage in CloudStack, and
> >> >> >> >> it's working perfectly fine with both the source and destination
> >> >> >> >> node. Not sure where to go next. Any help would be appreciated.
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> Regards,
> >> >> >> >> Darius
> >> >> >> >>
> >> >> >>
> >> >>
> >>
>


Re: Can't KVM migrate between local storage.

2018-05-14 Thread Marc-Aurèle Brothier
Hi Daznis,

Reading the logs I see some inconsistency in the values. Can you describe
the infrastructure you set up? The thing that disturbs me is a zoneid=2,
and a destination pool id=1. Aren't you trying to migrate a volume of a VM
between 2 regions/zones?

On Sat, May 12, 2018 at 2:33 PM, Daznis <daz...@gmail.com> wrote:

> Hi,
> Actually that's the whole log. Above it there's just the job starting. I have
> attached the missing part of the log. Which tables do you need from
> the database?
> There are multiple records with allocated/creating inside
> volume_store_ref. There is nothing that looks wrong with
> volumes/snapshots/snapshot_store_ref.
>
> On Thu, May 10, 2018 at 9:27 PM, Suresh Kumar Anaparti
> <sureshkumar.anapa...@gmail.com> wrote:
> > Hi Darius,
> >
> > From the logs, I could observe that the image volume is already in the
> > creating state and the same is being used for copying the volume between
> > pools. So, the state transition failed. Could you please provide the
> > complete log for the use case to root-cause the issue. Also, include
> > volumes and snapshots db details for the mentioned volume and snapshot.
> >
> > -Suresh
> >
> >
> > On Thu, May 10, 2018 at 1:22 PM, Daznis <daz...@gmail.com> wrote:
> >
> >> Snapshots work fine. I can make a snapshot -> convert it to a template
> >> and start the VM on a new node from that template. That's what I did when
> >> I needed to move one VM for balancing purposes. But I want to fix the
> >> migration process. I have attached the error log to this email. Maybe I'm
> >> looking at the wrong place where I get the error?
> >>
> >> On Wed, May 9, 2018 at 9:23 PM, Marc-Aurèle Brothier <ma...@exoscale.ch
> >
> >> wrote:
> >> > Can you try to perform a snapshot of the volume on VM's that are on
> your
> >> > host, to see if they get copied correctly over the NFS too.
> >> >
> >> > Otherwise you need to look into the management logs to catch the
> >> exception
> >> > (stack trace) to have a better understanding of the issue.
> >> >
> >> > On Wed, May 9, 2018 at 1:58 PM, Daznis <daz...@gmail.com> wrote:
> >> >
> >> >> Hello,
> >> >>
> >> >>
> >> >> Yeah it's offline. I'm running version 4.9.2. Running it in the same
> >> >> zone with only NFS secondary storage.
> >> >>
> >> >> On Wed, May 9, 2018 at 10:49 AM, Marc-Aurèle Brothier <
> >> ma...@exoscale.ch>
> >> >> wrote:
> >> >> > Hi Darius,
> >> >> >
> >> >> > Are you trying to perform an offline migration within the same
> zone,
> >> >> > meaning that the source and destination hosts have the same set of
> NFS
> >> >> > secondary storage?
> >> >> >
> >> >> > Marc-Aurèle
> >> >> >
> >> >> > On Tue, May 8, 2018 at 3:37 PM, Daznis <daz...@gmail.com> wrote:
> >> >> >
> >> >> >> Hi,
> >> >> >>
> >> >> >>
> >> >> >> I'm having an issue while migrating an offline VM disk between
> >> >> >> local storages. The particular error that has me baffled is "Can't
> >> >> >> find staging storage in zone". From what I have gathered, "staging
> >> >> >> storage" refers to secondary storage in CloudStack, and it's working
> >> >> >> perfectly fine with both the source and destination node. Not sure
> >> >> >> where to go next. Any help would be appreciated.
> >> >> >>
> >> >> >>
> >> >> >> Regards,
> >> >> >> Darius
> >> >> >>
> >> >>
> >>
>


Re: Can't KVM migrate between local storage.

2018-05-09 Thread Marc-Aurèle Brothier
Can you try to perform a snapshot of the volume on VMs that are on your
host, to see if they get copied correctly over the NFS too.

Otherwise you need to look into the management logs to catch the exception
(stack trace) to have a better understanding of the issue.

On Wed, May 9, 2018 at 1:58 PM, Daznis <daz...@gmail.com> wrote:

> Hello,
>
>
> Yeah it's offline. I'm running version 4.9.2. Running it in the same
> zone with only NFS secondary storage.
>
> On Wed, May 9, 2018 at 10:49 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
> > Hi Darius,
> >
> > Are you trying to perform an offline migration within the same zone,
> > meaning that the source and destination hosts have the same set of NFS
> > secondary storage?
> >
> > Marc-Aurèle
> >
> > On Tue, May 8, 2018 at 3:37 PM, Daznis <daz...@gmail.com> wrote:
> >
> >> Hi,
> >>
> >>
> >> I'm having an issue while migrating an offline VM disk between local
> >> storages. The particular error that has me baffled is "Can't find
> >> staging storage in zone". From what I have gathered, "staging storage"
> >> refers to secondary storage in CloudStack, and it's working perfectly
> >> fine with both the source and destination node. Not sure where to go
> >> next. Any help would be appreciated.
> >>
> >>
> >> Regards,
> >> Darius
> >>
>


Re: Can't KVM migrate between local storage.

2018-05-09 Thread Marc-Aurèle Brothier
Hi Darius,

Are you trying to perform an offline migration within the same zone,
meaning that the source and destination hosts have the same set of NFS
secondary storage?

Marc-Aurèle

On Tue, May 8, 2018 at 3:37 PM, Daznis  wrote:

> Hi,
>
>
> I'm having an issue while migrating an offline VM disk between local
> storages. The particular error that has me baffled is "Can't find
> staging storage in zone". From what I have gathered, "staging storage"
> refers to secondary storage in CloudStack, and it's working perfectly
> fine with both the source and destination node. Not sure where to go
> next. Any help would be appreciated.
>
>
> Regards,
> Darius
>


Re: How to stop Cloudstack Storage Scavenger

2018-04-26 Thread Marc-Aurèle Brothier
It's in the database, so you should log in to the UI and go to Global Settings
(menu entry on the left), then search for the key to edit it.
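
(For reference, a sketch of the equivalent API call, assuming an admin
session; the key name is the one discussed below:

    updateConfiguration name=storage.cleanup.enabled value=false

followed by the management server restart mentioned in the earlier reply.)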

On Thu, Apr 26, 2018 at 3:25 PM, Felipe Arturo Polanco <
felipeapola...@gmail.com> wrote:

> Hi Marc,
>
> Thanks for the answer.
>
> In which file should I make that change of configuration key?
>
> On Thu, Apr 26, 2018 at 12:54 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
>
> > Hi Felipe,
> >
> > You cannot disable it without shutting down the management server. You
> can
> > turn off by chaging the value of that configuration key:
> > "storage.cleanup.enabled"
> > to false. You will have to restart your management servers to take into
> > account the change.
> >
> > Cheers
> >
> > On Wed, Apr 25, 2018 at 5:48 PM, Felipe Arturo Polanco <
> > felipeapola...@gmail.com> wrote:
> >
> > > Hello,
> > >
> > > Somehow the Cloudstack storage scavenger got out of control in our
> > > deployment and has been evicting legitimate disk files of production
> VMs.
> > >
> > > Does anyone know how to stop or prevent this from happening without
> > > shutting down the management server?
> > >
> > > Thanks,
> > >
> >
>


Re: How to stop Cloudstack Storage Scavenger

2018-04-25 Thread Marc-Aurèle Brothier
Hi Felipe,

You cannot disable it without shutting down the management server. You can
turn it off by changing the value of the configuration key
"storage.cleanup.enabled" to false. You will have to restart your
management servers to take the change into account.

Cheers

On Wed, Apr 25, 2018 at 5:48 PM, Felipe Arturo Polanco <
felipeapola...@gmail.com> wrote:

> Hello,
>
> Somehow the Cloudstack storage scavenger got out of control in our
> deployment and has been evicting legitimate disk files of production VMs.
>
> Does anyone know how to stop or prevent this from happening without
> shutting down the management server?
>
> Thanks,
>


Re: Default API response type: XML -> JSON

2018-04-24 Thread Marc-Aurèle Brothier
@rafael - I think it's overkill to have this as a configuration option. We
should have one default response type, or maybe not have a default one and
enforce the use of the response type the client is willing to receive.
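
(For context: clients can already request a specific serialization on every
call, independent of the server default, e.g.

    GET /client/api?command=listZones&response=json

where response=json is the standard request parameter; omitting it is what
triggers the server-side default being discussed here.)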

On Mon, Apr 23, 2018 at 3:39 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> I do think it is an interesting proposal. I have been thinking, and what if
> we do something different; what about a global parameter where the root
> admin can define the default serialization mechanism (XML, JSON, RDF,
> others...)? The default value could be XML to maintain backward
> compatibility. Then, it is up to the root admin to define this behavior.
>
>
> On Mon, Apr 23, 2018 at 10:34 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
>
> > Hi everyone,
> >
> > I thought it would be good to move from XML to JSON by default in the
> > response of the API if no response type is sent to the server along with
> > the request. I'm wondering that's the opinion of people on the mailing
> > list.
> >
> > Moreover, if anyone knows of tools working with the API in XML, can you
> > list them, so I can check the code and see if the change can be done
> > without breaking them.
> >
> > PR to change default response type: (
> > https://github.com/apache/cloudstack/pull/2593).
> > If this change would cause more trouble, or is not needed in your
> opinion,
> > I don't mind closing the PR.
> >
> > Kind regards,
> > Marc-Aurèle
> >
>
>
>
> --
> Rafael Weingärtner
>


Default API response type: XML -> JSON

2018-04-23 Thread Marc-Aurèle Brothier
Hi everyone,

I thought it would be good to move from XML to JSON by default in the
response of the API if no response type is sent to the server along with
the request. I'm wondering what the opinion of people on the mailing list is.

Moreover, if anyone knows of tools working with the API in XML, can you list
them, so I can check the code and see if the change can be done without
breaking them.

PR to change default response type: (
https://github.com/apache/cloudstack/pull/2593).
If this change would cause more trouble, or is not needed in your opinion,
I don't mind closing the PR.

Kind regards,
Marc-Aurèle


Re: Welcoming Mike as the new Apache CloudStack VP

2018-03-26 Thread Marc-Aurèle Brothier
Thanks for the work Wido, and congrats Mike

On Tue, Mar 27, 2018 at 6:31 AM, Jayapal Uradi  wrote:

> Congratulations, Mike!
>
> -Jayapal
>
> > On Mar 27, 2018, at 2:38 AM, Nitin Kumar Maharana wrote:
> >
> > Congratulations, Mike!!
> >
> > -
> > Nitin
> > On 26-Mar-2018, at 7:41 PM, Wido den Hollander wrote:
> >
> > Hi all,
> >
> > It's been a great pleasure working with the CloudStack project as the
> > ACS VP over the past year.
> >
> > A big thank you from my side for everybody involved with the project in
> > the last year.
> >
> > Hereby I would like to announce that Mike Tutkowski has been elected to
> > replace me as the Apache Cloudstack VP in our annual VP rotation.
> >
> > Mike has a long history with the project and I am happy to welcome him
> > as the new VP for CloudStack.
> >
> > Welcome Mike!
> >
> > Thanks,
> >
> > Wido
> >
> > DISCLAIMER
> > ==
> > This e-mail may contain privileged and confidential information which is
> the property of Accelerite, a Persistent Systems business. It is intended
> only for the use of the individual or entity to which it is addressed. If
> you are not the intended recipient, you are not authorized to read, retain,
> copy, print, distribute or use this message. If you have received this
> communication in error, please notify the sender and delete all copies of
> this message. Accelerite, a Persistent Systems business does not accept any
> liability for virus infected mails.
>
>


Re: [VOTE] Move to Github issues

2018-03-26 Thread Marc-Aurèle Brothier
+1

On Mon, Mar 26, 2018 at 3:05 PM, Will Stevens  wrote:

> +1
>
> On Mon, Mar 26, 2018, 5:51 AM Nicolas Vazquez, <
> nicolas.vazq...@shapeblue.com> wrote:
>
> > +1
> >
> > 
> > From: Dag Sonstebo 
> > Sent: Monday, March 26, 2018 5:06:29 AM
> > To: users@cloudstack.apache.org; d...@cloudstack.apache.org
> > Subject: Re: [VOTE] Move to Github issues
> >
> > +1
> >
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> >
> > On 26/03/2018, 07:33, "Rohit Yadav"  wrote:
> >
> > All,
> >
> > Based on the discussion last week [1], I would like to start a vote
> to
> > put
> > the proposal into effect:
> >
> > - Enable Github issues, wiki features in CloudStack repositories.
> > - Both user and developers can use Github issues for tracking issues.
> > - Developers can use #id references while fixing an existing/open
> > issue in
> > a PR [2]. PRs can be sent without requiring to open/create an issue.
> > - Use Github milestone to track both issues and pull requests
> towards a
> > CloudStack release, and generate release notes.
> > - Relax requirement for JIRA IDs, JIRA still to be used for
> historical
> > reference and security issues. Use of JIRA will be discouraged.
> > - The current requirement of two(+) non-author LGTMs will continue
> for
> > PR
> > acceptance. The two(+) PR non-authors can advise resolution to any
> > issue
> > that we've not already discussed/agreed upon.
> >
> > For sanity in tallying the vote, can PMC members please be sure to
> > indicate
> > "(binding)" with their vote?
> >
> > [ ] +1  approve
> > [ ] +0  no opinion
> > [ ] -1  disapprove (and reason why)
> >
> > Vote will be open for 120 hours. If the vote passes the following
> > actions
> > will be taken:
> > - Get Github features enabled from ASF INFRA
> > - Update CONTRIBUTING.md and other relevant cwiki pages.
> > - Update project website
> >
> > [1] https://markmail.org/message/llodbwsmzgx5hod6
> > [2]
> > https://blog.github.com/2013-05-14-closing-issues-via-pull-requests/
> >
> > Regards,
> > Rohit Yadav
> >
> >
> >
> > dag.sonst...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
> > nicolas.vazq...@shapeblue.com
> > www.shapeblue.com
> > ,
> > @shapeblue
> >
> >
> >
> >
>


Re: Iptables on Virtual router

2018-03-07 Thread Marc-Aurèle Brothier
Hi Varun,

The files for the firewall all come from the system VM image; you can find
them here, depending on the type of the system:
https://github.com/apache/cloudstack/tree/master/systemvm/debian/etc/iptables.
After the system VM has booted and SSH is available, the agent daemon
sends a command through SSH to set up the system VM with its corresponding
type (consoleproxy, dhcpsrv, secstorage...), which configures it differently
for each use case. To overcome this, your best bet is to build a custom
system VM on which you add an extra systemd unit to set up the rules you
need.
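
(For illustration only, a minimal sketch of such a unit; the unit name, and
the idea of shipping the extra rules in /etc/iptables/custom_rules.v4 inside
the custom image, are assumptions for the example:

    # /etc/systemd/system/custom-iptables.service (hypothetical)
    [Unit]
    Description=Apply custom iptables rules on top of the CloudStack ones
    After=network.target

    [Service]
    Type=oneshot
    # --noflush appends instead of wiping the rules CloudStack wrote
    ExecStart=/sbin/iptables-restore --noflush /etc/iptables/custom_rules.v4

    [Install]
    WantedBy=multi-user.target

enabled at image build time with "systemctl enable custom-iptables".)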

On Wed, Mar 7, 2018 at 10:05 AM, Dag Sonstebo 
wrote:

> Hi Varun,
>
> Not sure if I follow your use case – the VR is built to provide services
> to VMs on the internal isolated network / VPC tier, the public interface is
> there for port forwarding / NATing to services hosted on the VMs.
> Hosting DHCP on the VR for clients on the public interface isn’t a
> supported use case – anything on the public interface is by definition
> considered untrusted.
>
> I may have misunderstood you though?
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 07/03/2018, 03:21, "Kumar, Varun"  wrote:
>
> Thanks Dag.
>
> I am running into a scenario where a VR is required for dhcp service
> on the public Internet facing vlan and want to restrict connections to
> known trusted sources only.
>
> Has anyone in the community run into such a situation before and found
> a workaround?
>
> Thanks,
> Varun
>
>
> -Original Message-
> From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com]
> Sent: Tuesday, March 06, 2018 05:41 PM
> To: users@cloudstack.apache.org
> Subject: Re: Iptables on Virtual router
>
> EXTERNAL EMAIL
>
> Hi Varun,
>
> No there’s no method for this, all firewall rules for the VR are
> contained in the CloudStack database and written on demand when the VR is
> created or firewall changes made.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 06/03/2018, 11:56, "Kumar, Varun"  wrote:
>
> Hello,
>
> Is it possible to write custom iptables rules on the virtual router
> that's created by CloudStack and make them persistent across restarts?
>
> It looks like /etc/iptables/router_rules.v4  on the VR is the file
> that's being created  but I am looking for the script that creates this
> file.
>
> Any insight is appreciated.
>
> Thanks,
> Varun
>
>
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: kvm live volume migration

2018-01-18 Thread Marc-Aurèle Brothier
There's a PR waiting to be fixed about live migration with local volumes for
KVM, so it will come at some point. I'm the one who made this PR, but I'm
not using the upstream release so it's hard for me to debug the problem.
You can add yourself to the PR to get notified when things are moving on it.

https://github.com/apache/cloudstack/pull/1709

On Wed, Jan 17, 2018 at 10:56 AM, Eric Green 
wrote:

> Theoretically on Centos 7 as the host KVM OS it could be done with a
> couple of pauses and the snapshotting mechanism built into qcow2, but there
> is no simple way to do it directly via virsh, the libvirtd/qemu control
> program that is used to manage virtualization. It's not like issuing a
> simple vMotion 'migrate volume' call in VMware.
>
> I scripted out how it would work without that direct support in
> libvirt/virsh and after looking at all the points where things could go
> wrong, honestly, I think we need to wait until there is support in
> libvirt/virsh to do this. virsh clearly has the capability internally to do
> live migration of storage, since it does this for live domain migration of
> local storage between machines when migrating KVM domains from one host to
> another, but that capability is not currently exposed in a way Cloudstack
> could use, at least not on Centos 7.
>
>
> > On Jan 17, 2018, at 01:05, Piotr Pisz  wrote:
> >
> > Hello,
> >
> > Is there a chance that one day it will be possible to migrate volume
> (root disk) of a live VM in KVM between storage pools (in CloudStack)?
> > Like a storage vMotion in Vmware.
> >
> > Best regards,
> > Piotr
> >
>
>


Re: why instance must be stopped in order to update its affinity groups?

2018-01-11 Thread Marc-Aurèle Brothier
Hi Yiping,

To add to Paul's comment, you also need to understand the goal of the
anti-affinity groups. If your users don't care about them, you should simply
block the command so that they don't use it (you should list the
createAffinityGroup command as a root admin call in the commands.properties
file by changing its flag value).
The goal is to spread a group of VMs, a cluster of a service, so that in
case of a hardware failure on one hypervisor, the cluster can be sure that
only 1 of its instances will go down and the service can keep running.
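
(A sketch of both halves of that advice; the "1 = root admin" ACL bitmask
convention and the group name are assumptions for the example:

    # commands.properties: restrict to root admin only
    createAffinityGroup=1

and, for users who do want the spreading behaviour:

    createAffinityGroup name=web-cluster type='host anti-affinity'
    deployVirtualMachine ... affinitygroupnames=web-cluster ...)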

On Thu, Jan 11, 2018 at 9:01 AM, Paul Angus 
wrote:

> Hi Yiping,
>
> Anti-affinity groups deal with the placement of VMs when they are started,
> but don't/can't 'move' running VMs (it isn't like vSphere DRS).  If you
> change a VM's anti-affinity group, its current placement on a host may
> suddenly become invalid.  As the anti-affinity group code isn't designed to
> move VMs, the safest option is to ensure that the VM is stopped when its
> group is changed so that when it is started again, CloudStack can then
> properly decide where it can/should go.
>
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Yiping Zhang [mailto:yzh...@marketo.com]
> Sent: 10 January 2018 19:51
> To: users@cloudstack.apache.org
> Subject: why instance must be stopped in order to update its affinity
> groups?
>
> Hi, List:
>
> Can someone please explain why a VM instance must be in stopped state when
> updating its affinity group memberships?   This requirement is in “Feature
> assumptions” section of the original 4.2 design document (
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/
> FS+-+Affinity-Anti-affinity+groups).
>
> My users either don’t understand or don’t care about affinity groups and I
> see a large number of instances with sub-optimal host placement (from
> anti-host affinity group point of view).  But it is too much trouble for me
> to coordinate with so many users to shut them down in order to fix their
> host placement.  What bad things would happen if a running instance’s
> affinity group is changed?
>
> Thanks,
>
> Yiping
>
>


Re: [Discuss] Management cluster / Zookeeper holding locks

2017-12-18 Thread Marc-Aurèle Brothier
I understand your point, but there isn't any "transaction" in ZK. The
transaction and commit stuff is really for DBs and not part of ZK. All
entries (if you start writing data in some nodes) are versioned. For
example, you could enforce that to overwrite a node value you must submit
the node data with the same last version id, to ensure you are
overwriting from the latest value/state of that node. Bear in mind that you
should not put too much data into your ZK; it's not a database replacement,
nor a NoSQL DB.

The ZK client (CuratorFramework object) is started at server startup,
and you only need to pass it along with your calls so that the connection is
reused, or retried, depending on the state. Nothing manual has to be done;
it's all in the curator library.

On Mon, Dec 18, 2017 at 11:44 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> I did not check the link before. Sorry about that.
>
> Reading some of the pages there, I see curator more like a client library
> such as MySQL JDBC client.
>
> When I mentioned framework, I was looking for something like Spring-data.
> So, we could simply rely on the framework to manage connections and
> transactions. For instance, we could define a pattern that would open
> connection with a read-only transaction. And then, we could annotate
> methods that would write in the database something with
> @Transactional(readonly = false). If we are going to a change like this we
> need to remove manually open connections and transactions. Also, we have to
> remove the transaction management code from our code base.
>
> I would like to see something like this [1] in our future. No manually
> written transaction code, and no transaction management in our code base.
> Just simple annotation usage or transaction pattern in Spring XML files.
>
> [1]
> https://github.com/rafaelweingartner/daily-tasks/
> blob/master/src/main/java/br/com/supero/desafio/services/TaskService.java
>
> On Mon, Dec 18, 2017 at 8:32 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
>
> > @rafael, yes there is a framework (curator), it's the link I posted in my
> > first message: https://curator.apache.org/curator-recipes/shared-lock.
> html
> > This framework helps handle all the complexity of ZK.
> >
> > The ZK client stays connected all the time (like the DB connection pool),
> and
> > only one connection (ZKClient) is needed to communicate with the ZK
> server.
> > The framework handles reconnection as well.
> >
> > Have a look at the curator website to understand its goal:
> > https://curator.apache.org/
> >
> > On Mon, Dec 18, 2017 at 11:01 AM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > Do we have a framework to do this kind of locking in ZK?
> > > I mean, you said "create a new InterProcessSemaphoreMutex which
> handles
> > > the locking mechanism". This feels like we would have to continue
> > opening
> > > and closing this transaction manually, which is what causes a lot of
> our
> > > headaches with transactions (it is not MySQL locks fault entirely, but
> > our
> > > code structure).
> > >
> > > On Mon, Dec 18, 2017 at 7:47 AM, Marc-Aurèle Brothier <
> ma...@exoscale.ch
> > >
> > > wrote:
> > >
> > > > We added a ZK lock to fix this issue, but we will remove all current
> > locks
> > > in
> > > > CS in favor of the ZK one. The ZK lock is already encapsulated in a
> project
> > > > with an interface, but more work should be done to have a proper
> > > interface
> > > > for locks which could be implemented with the "tool" you want,
> either a
> > > DB
> > > > lock for simplicity, or ZK for more advanced scenarios.
> > > >
> > > > @Daan you will need to add the ZK libraries in CS and have a running
> ZK
> > > > server somewhere. The configuration value is read from the
> > > > server.properties. If the line is empty, the ZK client is not created
> > and
> > > > any lock request will immediately return (not holding any lock).
> > > >
> > > > @Rafael: ZK is pretty easy to set up and keep running, as long as you
> > > don't
> > > > put too much data in it. Regarding our scenario here, with only
> locks,
> > > it's
> > > > easy. ZK would only be the gatekeeper to locks in the code, ensuring
> > that
> > > > multiple JVMs can request a true lock.
> > > > From the code point of view, you're opening a connection to a ZK node
> > (any
> > > > of a cluster) and you create a new InterProcessSemaphoreMutex which
> > > > handles the locking mechanism.

Re: [Discuss] Management cluster / Zookeeper holding locks

2017-12-18 Thread Marc-Aurèle Brothier
@rafael, yes there is a framework (curator), it's the link I posted in my
first message: https://curator.apache.org/curator-recipes/shared-lock.html
This framework helps handle all the complexity of ZK.

The ZK client stays connected all the time (like the DB connection pool), and
only one connection (ZKClient) is needed to communicate with the ZK server.
The framework handles reconnection as well.

Have a look at the curator website to understand its goal:
https://curator.apache.org/

On Mon, Dec 18, 2017 at 11:01 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Do we have a framework to do this kind of locking in ZK?
> I mean, you said "create a new InterProcessSemaphoreMutex which handles
> the locking mechanism". This feels like we would have to continue opening
> and closing this transaction manually, which is what causes a lot of our
> headaches with transactions (it is not MySQL locks' fault entirely, but our
> code structure).
>
> On Mon, Dec 18, 2017 at 7:47 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
>
> > We added a ZK lock to fix this issue, but we will remove all current locks
> in
> > CS in favor of the ZK one. The ZK lock is already encapsulated in a project
> > with an interface, but more work should be done to have a proper
> interface
> > for locks which could be implemented with the "tool" you want, either a
> DB
> > lock for simplicity, or ZK for more advanced scenarios.
> >
> > @Daan you will need to add the ZK libraries in CS and have a running ZK
> > server somewhere. The configuration value is read from the
> > server.properties. If the line is empty, the ZK client is not created and
> > any lock request will immediately return (not holding any lock).
> >
> > @Rafael: ZK is pretty easy to set up and keep running, as long as you
> don't
> > put too much data in it. Regarding our scenario here, with only locks,
> it's
> > easy. ZK would only be the gatekeeper to locks in the code, ensuring that
> > multiple JVMs can request a true lock.
> > From the code point of view, you're opening a connection to a ZK node (any
> > of a cluster) and you create a new InterProcessSemaphoreMutex which
> handles
> > the locking mechanism.
> >
> > On Mon, Dec 18, 2017 at 10:24 AM, Ivan Kudryavtsev <
> > kudryavtsev...@bw-sw.com
> > > wrote:
> >
> > > Rafael,
> > >
> > > - It's easy to configure and run ZK either in single node or cluster
> > > - zookeeper should replace mysql locking mechanism used inside ACS code
> > > (places where ACS locks tables or rows).
> > >
> > > I don't think, from the other side, that moving from MySQL locks to ZK
> > locks
> > > is an easy and light (or even implementable) way.
> > >
> > > 2017-12-18 16:20 GMT+07:00 Rafael Weingärtner <
> > rafaelweingart...@gmail.com
> > > >:
> > >
> > > > How hard is it to configure Zookeeper and get everything up and
> > running?
> > > > BTW: what zookeeper would be managing? CloudStack management servers
> or
> > > > MySQL nodes?
> > > >
> > > > On Mon, Dec 18, 2017 at 7:13 AM, Ivan Kudryavtsev <
> > > > kudryavtsev...@bw-sw.com>
> > > > wrote:
> > > >
> > > > > Hello, Marc-Aurele, I strongly believe that all mysql locks should
> > > > > be removed in favour of a truly DLM solution like Zookeeper. The
> > > > > performance of a 3-node ZK ensemble should be enough to hold up to
> > > > > 1000-2000 locks per second and it helps to move to a truly clustered
> > > > > MySQL like Galera without a single master server.
> > > > >
> > > > > 2017-12-18 15:33 GMT+07:00 Marc-Aurèle Brothier <ma...@exoscale.ch
> >:
> > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > I was wondering how many of you are running CloudStack with a
> > cluster
> > > > of
> > > > > > management servers. I would think most of you, but it would be
> nice
> > > to
> > > > > hear
> > > > > > everyone's voices. And do you get hosts going over their capacity
> > > limits?
> > > > > >
> > > > > > We discovered that during the VM allocation, if you get a lot of
> > > > parallel
> > > > > > requests to create new VMs, most notably with large profiles, the
> > > > > capacity
> > > > > >

Re: [Discuss] Management cluster / Zookeeper holding locks

2017-12-18 Thread Marc-Aurèle Brothier
We added a ZK lock to fix this issue, but we will remove all current locks in
CS in favor of the ZK one. The ZK lock is already encapsulated in a project
with an interface, but more work should be done to have a proper interface
for locks, which could be implemented with the "tool" you want, either a DB
lock for simplicity, or ZK for more advanced scenarios.

@Daan you will need to add the ZK libraries in CS and have a running ZK
server somewhere. The configuration value is read from the
server.properties. If the line is empty, the ZK client is not created and
any lock request will immediately return (not holding any lock).

@Rafael: ZK is pretty easy to set up and keep running, as long as you don't
put too much data in it. Regarding our scenario here, with only locks, it's
easy. ZK would only be the gatekeeper to locks in the code, ensuring that
multiple JVMs can request a true lock.
From the code point of view, you're opening a connection to a ZK node (any
of a cluster) and you create a new InterProcessSemaphoreMutex which handles
the locking mechanism.
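
A minimal sketch of that pattern with the curator recipes (the connection
string, lock path and timeout are made up for the example):

    import java.util.concurrent.TimeUnit;

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessSemaphoreMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class HostLockExample {
        public static void main(String[] args) throws Exception {
            // One client per JVM, started once and reused; curator
            // handles reconnection on its own.
            CuratorFramework zk = CuratorFrameworkFactory.newClient(
                    "zk1:2181,zk2:2181,zk3:2181",
                    new ExponentialBackoffRetry(1000, 3));
            zk.start();

            // The lock is scoped by its znode path, e.g. one lock per host.
            InterProcessSemaphoreMutex lock = new InterProcessSemaphoreMutex(
                    zk, "/cloudstack/locks/host/42");
            if (lock.acquire(10, TimeUnit.SECONDS)) {
                try {
                    // critical section: check host capacity and reserve it
                } finally {
                    lock.release();
                }
            }
            zk.close();
        }
    }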

On Mon, Dec 18, 2017 at 10:24 AM, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com
> wrote:

> Rafael,
>
> - It's easy to configure and run ZK either in single node or cluster
> - zookeeper should replace mysql locking mechanism used inside ACS code
> (places where ACS locks tables or rows).
>
> I don't think, from the other side, that moving from MySQL locks to ZK locks
> is an easy and light (or even implementable) way.
>
> 2017-12-18 16:20 GMT+07:00 Rafael Weingärtner <rafaelweingart...@gmail.com
> >:
>
> > How hard is it to configure Zookeeper and get everything up and running?
> > BTW: what zookeeper would be managing? CloudStack management servers or
> > MySQL nodes?
> >
> > On Mon, Dec 18, 2017 at 7:13 AM, Ivan Kudryavtsev <
> > kudryavtsev...@bw-sw.com>
> > wrote:
> >
> > > Hello, Marc-Aurele, I strongly believe that all mysql locks should be
> > > removed in favour of a truly DLM solution like Zookeeper. The
> > > performance of a 3-node ZK ensemble should be enough to hold up to
> > > 1000-2000 locks per second and it helps to move to a truly clustered
> > > MySQL like Galera without a single master server.
> > >
> > > 2017-12-18 15:33 GMT+07:00 Marc-Aurèle Brothier <ma...@exoscale.ch>:
> > >
> > > > Hi everyone,
> > > >
> > > > I was wondering how many of you are running CloudStack with a cluster
> > of
> > > > management servers. I would think most of you, but it would be nice
> to
> > > hear
> > > > everyone's voices. And do you get hosts going over their capacity
> limits?
> > > >
> > > > We discovered that during the VM allocation, if you get a lot of
> > parallel
> > > > requests to create new VMs, most notably with large profiles, the
> > > capacity
> > > > increase is done too far after the host capacity checks and results
> in
> > > > hosts going over their capacity limits. To detail the steps: the
> > > deployment
> > > > planner checks for cluster/host capacity and pick up one deployment
> > plan
> > > > (zone, cluster, host). The plan is stored in the database under a
> > VMwork
> > > > job and another thread picks that entry and starts the deployment,
> > > > increasing the host capacity and sending the commands. Here there's a
> > > time
> > > > gap between the host being picked up and the capacity increase for
> that
> > > > host of a couple of seconds, which is well enough to go over the
> > capacity
> > > > on one or more hosts. A few VMwork jobs can be added to the DB queue
> > > > targeting the same host before one gets picked up.
> > > >
> > > > To fix this issue, we're using Zookeeper to act as the multi JVM lock
> > > > manager thanks to their curator library (
> > > > https://curator.apache.org/curator-recipes/shared-lock.html). We
> also
> > > > changed the time when the capacity is increased, which occurs now
> > pretty
> > > > much after the deployment plan is found and inside the zookeeper
> lock.
> > > This
> > > > ensure we don't go over the capacity of any host, and it has been
> > proven
> > > > efficient since a month in our management server cluster.
> > > >
> > > > This adds another potential requirement which should be discussed
> before
> > > > proposing a PR. Today the code works seamlessly without ZK too, to
> > ensure
> > > > it's not a hard requirement, for example in a lab.
> > > >
> > > > Comments?
> > > >
> > > > Kind regards,
> > > > Marc-Aurèle
> > > >
> > >
> > >
> > >
> > > --
> > > With best regards, Ivan Kudryavtsev
> > > Bitworks Software, Ltd.
> > > Cell: +7-923-414-1515
> > > WWW: http://bitworks.software/ <http://bw-sw.com/>
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ <http://bw-sw.com/>
>


[Discuss] Management cluster / Zookeeper holding locks

2017-12-18 Thread Marc-Aurèle Brothier
Hi everyone,

I was wondering how many of you are running CloudStack with a cluster of
management servers. I would think most of you, but it would be nice to hear
everyone's voices. And do you get hosts going over their capacity limits?

We discovered that during the VM allocation, if you get a lot of parallel
requests to create new VMs, most notably with large profiles, the capacity
increase is done too far after the host capacity checks and results in
hosts going over their capacity limits. To detail the steps: the deployment
planner checks for cluster/host capacity and picks up one deployment plan
(zone, cluster, host). The plan is stored in the database under a VMwork
job and another thread picks that entry and starts the deployment,
increasing the host capacity and sending the commands. Here there's a time
gap between the host being picked up and the capacity increase for that
host of a couple of seconds, which is well enough to go over the capacity
on one or more hosts. A few VMwork jobs can be added to the DB queue
targeting the same host before one gets picked up.

To fix this issue, we're using Zookeeper to act as the multi JVM lock
manager thanks to their curator library (
https://curator.apache.org/curator-recipes/shared-lock.html). We also
changed the time when the capacity is increased, which occurs now pretty
much after the deployment plan is found and inside the zookeeper lock. This
ensures we don't go over the capacity of any host, and it has been proven
efficient for a month in our management server cluster.

This adds another potential requirement which should be discussed before
proposing a PR. Today the code works seamlessly without ZK too, to ensure
it's not a hard requirement, for example in a lab.

Comments?

Kind regards,
Marc-Aurèle


Re: [ANNOUNCE] Syed Mushtaq Ahmed has joined the PMC

2017-10-10 Thread Marc-Aurèle Brothier
Congrats Syed 

On Mon, Oct 9, 2017 at 7:49 PM, Syed Ahmed  wrote:

> Thank you all for the kind words. It's been a pleasure working with you
> guys. I hope we continue the good work!
>
> -Syed
>
> On Mon, Oct 9, 2017 at 12:32 PM, Gabriel Beims Bräscher <
> gabrasc...@gmail.com> wrote:
>
> > Congrats, Syed!!!
> > Well deserved!
> >
> > 2017-10-09 13:26 GMT-03:00 Nitin Kumar Maharana <
> > nitinkumar.mahar...@accelerite.com>:
> >
> > > Congratulations, Syed!!!
> > > On 09-Oct-2017, at 4:56 PM, Paul Angus wrote:
> > >
> > > Fellow CloudStackers,
> > >
> > > It gives me great pleasure to say that Syed has been invited to join the
> > PMC
> > > and has gracefully accepted.
> > > Please join me in congratulating Syed!
> > >
> > >
> > > Kind regards,
> > >
> > > Paul Angus
> > >
> > >
> > > paul.an...@shapeblue.com
> > > www.shapeblue.com
> > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > @shapeblue
> > >
> > >
> > >
> > >
> > > DISCLAIMER
> > > ==
> > > This e-mail may contain privileged and confidential information which
> is
> > > the property of Accelerite, a Persistent Systems business. It is
> intended
> > > only for the use of the individual or entity to which it is addressed.
> If
> > > you are not the intended recipient, you are not authorized to read,
> > retain,
> > > copy, print, distribute or use this message. If you have received this
> > > communication in error, please notify the sender and delete all copies
> of
> > > this message. Accelerite, a Persistent Systems business does not accept
> > any
> > > liability for virus infected mails.
> > >
> >
>


Re: Free capacity calculation within ACS

2017-09-28 Thread Marc-Aurèle Brothier
What was the value you had for host.reservation.release.period before
changing it? Did you restart the management server after changing the value?
I can only see this option creating an issue. When you tried to create a new
VM, check in the logs the capacity values that the mgmt server holds, or
the values stored in the DB in "op_host_capacity".
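
(A quick sketch of that second check, straight against the DB; the column
names assume the 4.x schema and the host id is made up:

    select capacity_type, used_capacity, reserved_capacity, total_capacity
      from op_host_capacity where host_id = 5;

reserved_capacity should be where a stopped VM's reservation shows up.)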

On Wed, Sep 27, 2017 at 11:34 PM, Jochim, Ingo <ingo.joc...@bitgroup.de>
wrote:

> It's not about deleted VMs. It's about VMs in shutdown mode.
> We do have customers who keep some demo or testing machines in off mode
> over a long period.
> Powered-off machines will not free up capacity. In the end I cannot build new
> machines even if there is enough space on the hypervisors.
>
> Regards,
> Ingo
> ________
> Von: Marc-Aurèle Brothier [ma...@exoscale.ch]
> Gesendet: Mittwoch, 27. September 2017 14:22
> An: users@cloudstack.apache.org
> Betreff: Re: Free capacity calculation within ACS
>
> What do you see in the table vm_instance for the VMs you were expecting a
> release? Do they stay in Destroyed state only and are not moving to
> "Expunging" state? What do you see in the logs related to the thread named
> "UserVm-Scavenger". This is the one which should do the VM cleanup. What do
> you have in the logs related to CapacityManagerImpl class?
>
> Kind regards,
> Marc-Aurèle
>
>
>
> > On 26 Sep 2017, at 22:44, Jochim, Ingo <ingo.joc...@bitgroup.de> wrote:
> >
> > Hello Marc-Aurèle,
> >
> > we tested with the parameter host.reservation.release.period.
> > We didn't see any change in the free capacity after shutdown of
> machines. Not even after this period.
> > Any idea why?
> >
> > Regards,
> > Ingo
> >
> >
> >
> >
> > -Ursprüngliche Nachricht-
> > Von: Jochim, Ingo [mailto:ingo.joc...@bitgroup.de]
> > Gesendet: Mittwoch, 6. September 2017 09:56
> > An: 'users@cloudstack.apache.org' <users@cloudstack.apache.org>
> > Betreff: AW: Free capacity calculation within ACS
> >
> > Many thanks. We will check.
> >
> > Regards,
> > Ingo
> >
> > -Ursprüngliche Nachricht-
> > Von: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> > Gesendet: Mittwoch, 6. September 2017 09:51
> > An: users@cloudstack.apache.org
> > Betreff: Re: Free capacity calculation within ACS
> >
> > Apparently in 4.2.0
> > https://github.com/apache/cloudstack/blob/f0dd5994b447a6097c52f405c7c7c5
> 4c76da9c16/setup/db/db/schema-410to420.sql
> >
> > On Wed, Sep 6, 2017 at 9:42 AM, Jochim, Ingo <ingo.joc...@bitgroup.de>
> > wrote:
> >
> >> Hello Marc-Aurèle,
> >>
> >> great. Didn't know that this parameter exists.
> >> Do you know in which ACS version this got introduced?
> >>
> >> Many thanks.
> >> Ingo
> >>
> >> -Ursprüngliche Nachricht-
> >> Von: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> >> Gesendet: Mittwoch, 6. September 2017 09:15
> >> An: users@cloudstack.apache.org
> >> Betreff: Re: Free capacity calculation within ACS
> >>
> >> Hi Ingo,
> >>
> >> You might want to look at the release period set in your installation:
> >> host.reservation.release.period. This release window time is there to
> >> keep the capacity of stopped machines on a host for a certain time,
> >> before releasing it, in case the machine has to start again soon
> >> after. And most likely for other reason maybe in the advance
> >> networking mode. So try to decrease this time window and check your
> capacity after that.
> >>
> >> Marc-Aurèle
> >>
> >> On Tue, Sep 5, 2017 at 9:48 AM, Jochim, Ingo <ingo.joc...@bitgroup.de>
> >> wrote:
> >>
> >>> Hello all,
> >>>
> >>> within our CloudStack environment we'd like to park a couple of large
> >>> machines in powered off state.
> >>> Those machines are demo machines which are needed only sometimes.
> >>> Those machines will get included in the capacity. That means we
> >>> cannot build new machines even if there are free resources on the
> hypervisors.
> >>> We don't want to solve it with overcommitment.
> >>> Is there a possibility to calculate free capacity without all
> >>> powered off machines?
> >>>
> >>> Currently we have a dirty workaround. We created an offering with 1
> >>> core and 1MB RAM and used that for all parked machines.
> >>> But this is not very nice.
> >>>
> >>> Any ideas or comments are welcome.
> >>> Thank you.
> >>> Ingo
> >>>
> >>
>
>


Re: Free capacity calculation within ACS

2017-09-27 Thread Marc-Aurèle Brothier
What do you see in the table vm_instance for the VMs you were expecting a 
release? Do they stay in Destroyed state only and are not moving to "Expunging" 
state? What do you see in the logs related to the thread named 
"UserVm-Scavenger". This is the one which should do the VM cleanup. What do you 
have in the logs related to CapacityManagerImpl class?

Kind regards,
Marc-Aurèle



> On 26 Sep 2017, at 22:44, Jochim, Ingo <ingo.joc...@bitgroup.de> wrote:
> 
> Hello Marc-Aurèle,
> 
> we tested with the parameter host.reservation.release.period.
> We didn't see any change in the free capacity after shutdown of machines. Not 
> even after this period.
> Any idea why?
> 
> Regards,
> Ingo
> 
> 
> 
> 
> -Ursprüngliche Nachricht-
> Von: Jochim, Ingo [mailto:ingo.joc...@bitgroup.de] 
> Gesendet: Mittwoch, 6. September 2017 09:56
> An: 'users@cloudstack.apache.org' <users@cloudstack.apache.org>
> Betreff: AW: Free capacity calculation within ACS
> 
> Many thanks. We will check.
> 
> Regards,
> Ingo
> 
> -Ursprüngliche Nachricht-
> Von: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> Gesendet: Mittwoch, 6. September 2017 09:51
> An: users@cloudstack.apache.org
> Betreff: Re: Free capacity calculation within ACS
> 
> Apparently in 4.2.0
> https://github.com/apache/cloudstack/blob/f0dd5994b447a6097c52f405c7c7c54c76da9c16/setup/db/db/schema-410to420.sql
> 
> On Wed, Sep 6, 2017 at 9:42 AM, Jochim, Ingo <ingo.joc...@bitgroup.de>
> wrote:
> 
>> Hello Marc-Aurèle,
>> 
>> great. Didn't know that this parameter exists.
>> Do you know in which ACS version this got introduced?
>> 
>> Many thanks.
>> Ingo
>> 
>> -Ursprüngliche Nachricht-
>> Von: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
>> Gesendet: Mittwoch, 6. September 2017 09:15
>> An: users@cloudstack.apache.org
>> Betreff: Re: Free capacity calculation within ACS
>> 
>> Hi Ingo,
>> 
>> You might want to look at the release period set in your installation:
>> host.reservation.release.period. This release window time is there to 
>> keep the capacity of stopped machines on a host for a certain time, 
>> before releasing it, in case the machine has to start again soon 
>> after. And most likely for other reason maybe in the advance 
>> networking mode. So try to decrease this time window and check your capacity 
>> after that.
>> 
>> Marc-Aurèle
>> 
>> On Tue, Sep 5, 2017 at 9:48 AM, Jochim, Ingo <ingo.joc...@bitgroup.de>
>> wrote:
>> 
>>> Hello all,
>>> 
>>> within our CloudStack environment we'd like to park a couple of large
>>> machines in powered off state.
>>> Those machines are demo machines which are needed only sometimes.
>>> Those machines will get included in the capacity. That means we 
>>> cannot build new machines even if there are free resources on the
>>> hypervisors.
>>> We don't want to solve it with overcommitment.
>>> Is there a possibility to calculate free capacity without all 
>>> powered off machines?
>>> 
>>> Currently we have a dirty workaround. We created an offering with 1 
>>> core and 1MB RAM and used that for all parked machines.
>>> But this is not very nice.
>>> 
>>> Any ideas or comments are welcome.
>>> Thank you.
>>> Ingo
>>> 
>> 



Re: Quick 1 Question Survey

2017-09-20 Thread Marc-Aurèle Brothier
ACS Management & KVM = Ubuntu 16.04

On Thu, Sep 14, 2017 at 7:19 AM, Milamber  wrote:

>
> Cloud 1:
> Cloudstack Management / Nodes = Centos 6.9 (CS 4.9)
> KVM/XEN =  KVM+CLVM
>
> Cloud 2 / 3 / 4:
> Cloudstack Management / Nodes = Ubuntu 14.04 (CS 4.9)
> KVM/XEN =  KVM+NFS+Local Storage (cloud 2/4)
>
>
>
>
> On 12/09/2017 13:12, Rene Moser wrote:
>
>> What Linux OS and release are you running below your:
>>
>> * CloudStack/Cloudplatform Management
>> * KVM/XEN Hypvervisor Host
>>
>> Possible answer example
>>
>> Cloudstack Management = centos6
>> KVM/XEN = None, No KVM/XEN
>>
>> Thanks in advance
>>
>> Regards
>> René
>>
>>
>>
>


Re: Free capacity calculation within ACS

2017-09-06 Thread Marc-Aurèle Brothier
Apparently in 4.2.0
https://github.com/apache/cloudstack/blob/f0dd5994b447a6097c52f405c7c7c54c76da9c16/setup/db/db/schema-410to420.sql

On Wed, Sep 6, 2017 at 9:42 AM, Jochim, Ingo <ingo.joc...@bitgroup.de>
wrote:

> Hello Marc-Aurèle,
>
> great. Didn't know that this parameter exists.
> Do you know in which ACS version this got introduced?
>
> Many thanks.
> Ingo
>
> -Ursprüngliche Nachricht-
> Von: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> Gesendet: Mittwoch, 6. September 2017 09:15
> An: users@cloudstack.apache.org
> Betreff: Re: Free capacity calculation within ACS
>
> Hi Ingo,
>
> You might want to look at the release period set in your installation:
> host.reservation.release.period. This release window time is there to
> keep the capacity of stopped machines on a host for a certain time, before
> releasing it, in case the machine has to start again soon after. And most
> likely for other reasons, maybe in the advanced networking mode. So try to
> decrease this time window and check your capacity after that.
>
> Marc-Aurèle
>
> On Tue, Sep 5, 2017 at 9:48 AM, Jochim, Ingo <ingo.joc...@bitgroup.de>
> wrote:
>
> > Hello all,
> >
> > within our CloudStack environment we'd like to park a couple of large
> > machines in powered off state.
> > Those machines are demo machines which are needed only sometimes.
> > Those machines will get included in the capacity. That means we cannot
> > build new machines even if there are free resources on the hypervisors.
> > We don't want to solve it with overcommitment.
> > Is there a possibility to calculate free capacity without all powered
> > off machines?
> >
> > Currently we have a dirty workaround. We created an offering with 1
> > core and 1MB RAM and used that for all parked machines.
> > But this is not very nice.
> >
> > Any ideas or comments are welcome.
> > Thank you.
> > Ingo
> >
>


Re: Free capacity calculation within ACS

2017-09-06 Thread Marc-Aurèle Brothier
Hi Ingo,

You might want to look at the release period set in your installation:
host.reservation.release.period. This release window keeps the capacity of
stopped machines reserved on a host for a certain time before releasing it,
in case the machine has to start again soon after (and most likely for
other reasons as well, e.g. in advanced networking mode). So try to
decrease this time window and check your capacity after that.
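
In case it's useful, here is a minimal, untested sketch of reading and
lowering that global setting through the API (using the exoscale "cs"
Python client; the endpoint, keys and new value are placeholders to adapt,
and you should check the expected units in the setting's description):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# Read the current release period.
result = api.listConfigurations(name='host.reservation.release.period')
print(result['configuration'][0]['value'])

# Lower the window; some global settings only take effect after a
# management server restart.
api.updateConfiguration(name='host.reservation.release.period',
                        value='300')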

Marc-Aurèle

On Tue, Sep 5, 2017 at 9:48 AM, Jochim, Ingo 
wrote:

> Hello all,
>
> within our CloudStack environment we would like to park a couple of large
> machines in a powered-off state.
> Those machines are demo machines which are needed only sometimes.
> Those machines still get included in the capacity. That means we cannot
> build new machines even if there are free resources on the hypervisors.
> We don't want to solve it with overcommitment.
> Is there a way to calculate free capacity without counting all powered-off
> machines?
>
> Currently we have a dirty workaround. We created an offering with 1 core
> and 1MB RAM and used that for all parked machines.
> But this is not very nice.
>
> Any ideas or comments are welcome.
> Thank you.
> Ingo
>


Re: Moving VMs to particular Hosts

2017-02-15 Thread Marc-Aurèle Brothier
Erik is correct: the preferred guest OS setting is the way to stick Windows
VMs to a set of hosts. We're not using HA ourselves, so I cannot guarantee
that the setting is honored during an HA restart, but it should be.
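
A minimal, untested sketch of setting that preference through the API
(using the exoscale "cs" Python client; the endpoint, keys, host UUIDs
and the "Windows" category name are assumptions to adapt):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# Look up the OS category id used for Windows guests.
categories = api.listOsCategories(name='Windows')
windows_id = categories['oscategory'][0]['id']

# Pin the preferred guest OS category on the licensed hosts only.
for host_id in ('host1-uuid', 'host2-uuid', 'host3-uuid', 'host4-uuid'):
    api.updateHost(id=host_id, oscategoryid=windows_id)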

On Wed, Feb 15, 2017 at 9:02 AM, Erik Weber  wrote:

> If I recall correctly, CloudStack has a setting for the preferred guest
> OS that should run on a host? It should be in the host settings.
>
>
> Erik
>
> ons. 15. feb. 2017 kl. 06.13 skrev Makrand :
>
> > Hi,
> >
> > Let's say I have a XenServer resource pool containing 8 hosts in my
> > CloudStack setup. I have only a handful of Windows VMs running, let's
> > say on host1. In case host1 goes down abruptly, is there any way I can
> > restrict these VMs to move (via HA motion) only to hosts 2, 3 or 4 and
> > not to the remaining hosts in the cluster? Is this doable from
> > CloudStack?
> >
> >
> > The thing is, we only have a handful of Windows VMs (2008 R2/2012 R2).
> > The way Microsoft licenses them: if you expect your VM to reside on a
> > particular host, you should purchase a license for that host. This is
> > what the Microsoft rep told my manager. So we are planning to license
> > only 4 hosts. Kind of an odd licensing policy from MS, I must say.
> >
> >
> >
> > --
> > Makrand
> >
>


Re: KVM Live VM Snapshots

2016-12-19 Thread Marc-Aurèle Brothier
Hi Asai,

In my opinion, taking VM snapshots is a step in the wrong direction.
The applications/systems running inside your VMs should be designed to
handle an OS crash. A freshly installed new VM should then be able to
rejoin your application setup so that you again have an appropriate
number of healthy nodes.

Marco

On Mon, Dec 19, 2016 at 4:34 AM, Asai  wrote:

> Greetings,
>
> Is it correct that there is currently no support in CloudStack for KVM
> live VM snapshots? I see that volume snapshots are available for running
> VMs, but that makes me wonder what everyone is doing to get a disaster
> recovery backup of a KVM-based VM? I did ask this question a few weeks
> back, but only one person responded with one solution, and I am really
> trying to figure out what the best solutions are here.
>
> Has anybody seen this script? https://gist.github.com/ringe/
> 334ee88ba5451c8f5732
>
> What is the community's opinion of scripts like this?  And also, big
> question, if this script is good, why isn't it integrated into Cloudstack?
>
> Thanks,
> Asai
>
>


Re: API migrateVirtualMachine does not respect affinity group assignment

2016-11-08 Thread Marc-Aurèle Brothier
IMHO it's desirable: in case of emergency, it's better to migrate a VM to a
host that violates the anti-affinity group than to leave the VM on a host
that must be shut down, for example, and lose the VM. It's up to the admin
to keep this transgression as short as possible.
Those migration API calls are always made by an admin, who should therefore
handle such cases, which is not very complicated. I have a Python script
that does the job (
https://gist.github.com/marcaurele/dc1774b1ea13d81be702faf235bf2afe) for
live migration, for example.
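
For illustration, a condensed, untested sketch of such a check before
migrating (using the exoscale "cs" Python client; the endpoint, keys and
UUIDs are placeholders, and the gist linked above does more than this):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

def affinity_group_ids(vm_id):
    """Return the set of affinity group ids the VM belongs to."""
    groups = api.listAffinityGroups(virtualmachineid=vm_id)
    return {g['id'] for g in groups.get('affinitygroup', [])}

def migrate_respecting_affinity(vm_id, target_host_id):
    vm_groups = affinity_group_ids(vm_id)
    # Refuse the migration if any VM already on the target host shares
    # an (anti-)affinity group with the VM being moved.
    on_host = api.listVirtualMachines(hostid=target_host_id, listall=True)
    for other in on_host.get('virtualmachine', []):
        if vm_groups & affinity_group_ids(other['id']):
            raise RuntimeError('migration would break an affinity group')
    api.migrateVirtualMachine(virtualmachineid=vm_id,
                              hostid=target_host_id)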

On Wed, Nov 9, 2016 at 2:47 AM, Simon Weller  wrote:

> Can you open a jira issue on this?
>
> Simon Weller/ENA
> (615) 312-6068
>
> -Original Message-
> From: Yiping Zhang [yzh...@marketo.com]
> Received: Tuesday, 08 Nov 2016, 8:03PM
> To: users@cloudstack.apache.org [users@cloudstack.apache.org]
> Subject: API migrateVirtualMachine does not respect affinity group
> assignment
>
> Hi,
>
> It seems that the API migrateVirtualMachine does not respect instance’s
> affinity group assignment.  Is this intentional?
>
> To reproduce:
>
> Assign two VM instances running on different hosts, say v1 running on
> h1 and v2 running on h2, to the same affinity group. In the GUI, it won’t
> let you migrate v1 and v2 to the same host, but if you use cloudmonkey,
> you are able to move both instances to h1 or h2 with the
> migrateVirtualMachine API call.
>
> IMHO, the API call should return with an error message that the migration
> is prohibited by affinity group assignment. However, if the current
> behavior is desirable in some situations, then a parameter like
> ignore-affinity-group=true should be passed to the API call (or vice versa,
> depending on which behavior is chosen as the default)
>
> Yiping
>


Re: CloudStack Anti-Affinity

2016-10-04 Thread Marc-Aurèle Brothier
Hi,
It would be denied if you're a normal user. As an admin, you can overrule
an anti-affinity rule by specifying the destination host.
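
To make the difference concrete, an untested sketch (exoscale "cs" Python
client; ids and endpoint are placeholders):

from cs import CloudStack

admin = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                   key='ADMIN_KEY', secret='ADMIN_SECRET')

# For a normal user the rule is strict: deploying into an anti-affinity
# group fails when no host can satisfy it, e.g.
# user.deployVirtualMachine(zoneid='...', templateid='...',
#                           serviceofferingid='...',
#                           affinitygroupnames='my-anti-affinity-group')

# An admin naming the destination host explicitly overrules the rule:
admin.migrateVirtualMachine(virtualmachineid='vm-uuid', hostid='host-uuid')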

On Wed, Oct 5, 2016 at 1:58 AM, Sergey Levitskiy <
sergey.levits...@autodesk.com> wrote:

> Hi Ilya,
> We tried it once. If I am not mistaken, it is strict: the job will be
> accepted but end up in an error.
> Thanks,
> Sergey
>
>
>
> On 10/4/16, 3:43 PM, "ilya"  wrote:
>
> I'm curious if anyone uses anti-affinity and how restrictive it is.
> Meaning, what happens if there are no nodes to satisfy the anti-affinity
> rule? Would it still place the VM, or deny and throw a 530?
>
> Any help is appreciated
>
> Thanks,
> ilya
>
>
>


Re: CLOUDSTACK-9238: Template download URL length really fixed ?

2016-05-26 Thread Marc-Aurèle Brothier - Exoscale
Hi Aurélien,

Check the database schema, as the URL is stored there. My guess is that the
column size is 255.
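
If it helps, a quick, untested way to check the actual column sizes (this
assumes a MySQL schema named "cloud" and uses the pymysql package; adapt
the credentials):

import pymysql

conn = pymysql.connect(host='localhost', user='cloud',
                       password='secret', database='information_schema')
with conn.cursor() as cur:
    # Show the declared length of every url-like column in the cloud schema.
    cur.execute(
        "SELECT table_name, column_name, character_maximum_length "
        "FROM columns "
        "WHERE table_schema = 'cloud' AND column_name LIKE '%url%'")
    for table, column, length in cur.fetchall():
        print(table, column, length)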

Marco


> On 25 May 2016, at 15:08, Aurélien  wrote:
> 
> Hello,
> 
> I’m investigating an issue on CS 4.8.0 when registering an ISO with a
> long URL (>255 chars). The only issue related to it that I could find
> is https://issues.apache.org/jira/browse/CLOUDSTACK-9238.
> 
> Basically, the problem can be reproduced by entering a URL with
> more than 255 chars in the ISO URL field. The API fails with a parameter
> error.
> 
> Since the database schema seems fine according to CLOUDSTACK-9238 - my
> current database matches that, at least - I tried changing the
> Parameter annotation to length 2048 (see first patch below). The
> API request is then accepted, but the URL appears truncated in the SSVM
> logs:
> 
> URL was: 
> https://github-cloud.s3.amazonaws.com/releases/28796010/774188bc-18fc-11e6-8140-4cf8a5632e15.iso?X-Amz-Algorithm=AWS4-HMAC-SHA256=AKIAISTNZFOVBIJMK3TQ%2F20160525%2Fus-east-1%2Fs3%2Faws4_request=20160525T125103Z=300=d40de63959eed194d0bbb7c702cf71d65a7aa91f427e33714fde64b76aebf6d8=host_id=0=attachment%3B%20filename%3Drancheros.iso=application%2Foctet-stream
> 
> The SSVM gets a URL truncated to 255 chars (the management server log
> shows that it also sends only 255 chars).
> 
> 2016-05-25 12:51:16,961 DEBUG [cloud.agent.Agent]
> (agentRequest-Handler-1:null) Request:Seq 57-4400016835940974604:  {
> Cmd , MgmtId: 345049980494, via: 57, Ver: v1, Flags: 100011,
> [{"org.apache.cloudstack.storage.command.DownloadCommand":{"hvm":true,"description":"ros","maxDownloadSizeInBytes":53687091200,"id":353,"resourceType":"TEMPLATE","installPath":"template/tmpl/2/353","_store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.87.80.250/cloudstack_secondary_z1","_role":"Image"}},"url":"https://github-cloud.s3.amazonaws.com/releases/28796010/774188bc-18fc-11e6-8140-4cf8a5632e15.iso?X-Amz-Algorithm=AWS4-HMAC-SHA256=AKIAISTNZFOVBIJMK3TQ%2F20160525%2Fus-east-1%2Fs3%2Faws4_request=20160525T125103Z=30","format":"ISO","accountId":2,"name":"353-2-34ec5c80-b6ff-3d3d-8c09-7ac1f3755d3d","secUrl":"nfs://10.87.80.250/cloudstack_secondary_z1","wait":0}}]
> }
> 
> 
> I also tried setting the length of the fields in the *VO objects (see
> second patch below), but I’m not exactly sure of what I’m doing there.
> 
> Does anyone have a hint as to where to look to find where the string is
> truncated, and why?
> 
> 
> — Patch for API
> 
> diff -ru 
> a/apache-cloudstack-4.8.0-src/api/src/org/apache/cloudstack/api/command/user/iso/RegisterIsoCmd.java
> b/apache-cloudstack-4.8.0-src/api/src/org/apache/cloudstack/api/command/user/iso/RegisterIsoCmd.java
> --- 
> a/apache-cloudstack-4.8.0-src/api/src/org/apache/cloudstack/api/command/user/iso/RegisterIsoCmd.java
> 2016-01-20 23:43:35.0 +0100
> +++ 
> b/apache-cloudstack-4.8.0-src/api/src/org/apache/cloudstack/api/command/user/iso/RegisterIsoCmd.java
> 2016-05-25 09:40:25.0 +0200
> @@ -78,7 +78,11 @@
>description = "the ID of the OS type that best
> represents the OS of this ISO. If the ISO is bootable this parameter
> needs to be passed")
> private Long osTypeId;
> 
> -@Parameter(name = ApiConstants.URL, type = CommandType.STRING,
> required = true, description = "the URL to where the ISO is currently
> being hosted")
> +@Parameter(name = ApiConstants.URL,
> +   type = CommandType.STRING,
> +   required = true,
> +   description = "the URL to where the ISO is currently
> being hosted",
> +   length = 2048)
> private String url;
> 
> @Parameter(name=ApiConstants.ZONE_ID, type=CommandType.UUID,
> entityType = ZoneResponse.class,
> 
> 
> — Patch for columns
> 
> diff -ru 
> a/apache-cloudstack-4.8.0-src/engine/schema/src/org/apache/cloudstack/storage/datastore/db/TemplateDataStoreVO.java
> b/apache-cloudstack-4.8.0-src/engine/schema/src/org/apache/cloudstack/storage/datastore/db/TemplateDataStoreVO.java
> --- 
> a/apache-cloudstack-4.8.0-src/engine/schema/src/org/apache/cloudstack/storage/datastore/db/TemplateDataStoreVO.java
> 2016-01-20 23:43:35.0 +0100
> +++ 
> b/apache-cloudstack-4.8.0-src/engine/schema/src/org/apache/cloudstack/storage/datastore/db/TemplateDataStoreVO.java
> 2016-05-25 14:12:18.0 +0200
> @@ -95,7 +95,7 @@
> @Column(name = "install_path")
> private String installPath;
> 
> -@Column(name = "url")
> +@Column(name = "url", length = 2048)
> private String downloadUrl;
> 
> @Column(name = "download_url")
> 
> Thanks,
> Best regards,
> -- 
> Aurélien Guillaume
>