Re: Marvin Install Issue

2017-07-27 Thread Tutkowski, Mike
Thanks, Jayapal!

> On Jul 26, 2017, at 11:22 PM, Jayapal Uradi  
> wrote:
> 
> Hi Mike,
> 
> A while back I hit an issue related to mysql-connector. I followed the steps
> below; see if they help with installing mysql-connector.
> 
> #mysql-connector-python
> http://stackoverflow.com/questions/31748278/how-do-you-install-mysql-connector-python-development-version-through-pip
> $  git clone https://github.com/mysql/mysql-connector-python.git
> $  cd mysql-connector-python
> $  python ./setup.py build
> $  sudo python ./setup.py install
> ...
> >>> import mysql.connector as msc
> >>> msc.__version__
> '2.1.3'
> 
> Thanks,
> Jayapal
> 
> On Jul 27, 2017, at 7:46 AM, Tutkowski, Mike 
> > wrote:
> 
> Hi everyone,
> 
> I am having trouble installing Marvin on Ubuntu 14.04 from master.
> 
> It’s complaining that it’s having trouble with mysql-connector-python.
> 
> mtutkowski@mike-ubuntu:~/cloudstack/cloudstack$ sudo pip install --upgrade 
> tools/marvin/dist/Marvin-*.tar.gz
> Unpacking ./tools/marvin/dist/Marvin-4.11.0.0-SNAPSHOT.tar.gz
> Running setup.py (path:/tmp/pip-5URDXT-build/setup.py) egg_info for package 
> from 
> file:///home/mtutkowski/cloudstack/cloudstack/tools/marvin/dist/Marvin-4.11.0.0-SNAPSHOT.tar.gz
>   /usr/local/lib/python2.7/dist-packages/setuptools/dist.py:340: UserWarning: 
> The version specified ('4.11.0.0-SNAPSHOT') is an invalid version, this may 
> not work as expected with newer versions of setuptools, pip, and PyPI. Please 
> see PEP 440 for more details.
> "details." % self.metadata.version
> 
>   warning: no files found matching '*.txt' under directory 'docs'
> Could not find any downloads that satisfy the requirement 
> mysql-connector-python>=1.1.6 in /usr/lib/python2.7/dist-packages (from 
> Marvin==4.11.0.0-SNAPSHOT)
> Downloading/unpacking mysql-connector-python>=1.1.6 (from 
> Marvin==4.11.0.0-SNAPSHOT)
> Cleaning up...
> No distributions at all found for mysql-connector-python>=1.1.6 in 
> /usr/lib/python2.7/dist-packages (from Marvin==4.11.0.0-SNAPSHOT)
> Storing debug log for failure in /home/mtutkowski/.pip/pip.log
> 
> But it seems to be installed:
> 
> python-mysql.connector/trusty,now 1.1.6-1 all [installed]
> 
> Thoughts?
> 
> Thanks!
> Mike
> 


Re: Marvin Install Issue

2017-07-27 Thread Dmytro Shevchenko

I hit the same issue. I'm using virtualenv, and here is my requirements.txt:

https://dev.mysql.com/get/Downloads/Connector-Python/mysql-connector-python-2.1.6.tar.gz
  Marvin
  nose-timer

It's working fine.
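For reference, pip's requirement check is essentially a version comparison. The sketch below is a simplified illustration of why the 2.1.6 tarball pinned above satisfies `mysql-connector-python>=1.1.6` (real pip uses full PEP 440 parsing, which is exactly what the `4.11.0.0-SNAPSHOT` version string violates):

```python
def parse_version(v):
    # Simplified parser: handles plain dotted versions like "1.1.6".
    # Real pip implements full PEP 440; a suffix like "-SNAPSHOT" would
    # not parse, which is why setuptools warns about Marvin's version.
    return tuple(int(part) for part in v.split("."))

def satisfies_minimum(installed, required):
    # True when the installed version meets or exceeds the required floor.
    return parse_version(installed) >= parse_version(required)

print(satisfies_minimum("2.1.6", "1.1.6"))  # the pinned tarball is new enough
print(satisfies_minimum("1.0.0", "1.1.6"))  # an older build would be rejected
```

Note that the apt package (`python-mysql.connector 1.1.6-1`) meets the version floor, but pip cannot "see" it as a distribution, which is why the install still fails without the tarball URL.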

On 27/07/17 05:16, Tutkowski, Mike wrote:

Hi everyone,

I am having trouble installing Marvin on Ubuntu 14.04 from master.

It’s complaining that it’s having trouble with mysql-connector-python.

mtutkowski@mike-ubuntu:~/cloudstack/cloudstack$ sudo pip install --upgrade 
tools/marvin/dist/Marvin-*.tar.gz
Unpacking ./tools/marvin/dist/Marvin-4.11.0.0-SNAPSHOT.tar.gz
   Running setup.py (path:/tmp/pip-5URDXT-build/setup.py) egg_info for package 
from 
file:///home/mtutkowski/cloudstack/cloudstack/tools/marvin/dist/Marvin-4.11.0.0-SNAPSHOT.tar.gz
 /usr/local/lib/python2.7/dist-packages/setuptools/dist.py:340: 
UserWarning: The version specified ('4.11.0.0-SNAPSHOT') is an invalid version, 
this may not work as expected with newer versions of setuptools, pip, and PyPI. 
Please see PEP 440 for more details.
   "details." % self.metadata.version

 warning: no files found matching '*.txt' under directory 'docs'
Could not find any downloads that satisfy the requirement 
mysql-connector-python>=1.1.6 in /usr/lib/python2.7/dist-packages (from 
Marvin==4.11.0.0-SNAPSHOT)
Downloading/unpacking mysql-connector-python>=1.1.6 (from 
Marvin==4.11.0.0-SNAPSHOT)
Cleaning up...
No distributions at all found for mysql-connector-python>=1.1.6 in 
/usr/lib/python2.7/dist-packages (from Marvin==4.11.0.0-SNAPSHOT)
Storing debug log for failure in /home/mtutkowski/.pip/pip.log

But it seems to be installed:

python-mysql.connector/trusty,now 1.1.6-1 all [installed]

Thoughts?

Thanks!
Mike


--
Best regards
Dmytro Shevchenko
dshevchenko.m...@gmail.com
skype: demonsh_mk
+380(66)2426648



Re: [DISCUSS] Closing old Pull Requests on Github

2017-07-27 Thread Wido den Hollander

> Op 27 juli 2017 om 17:13 schreef Syed Ahmed :
> 
> 
> I would start by adding a comment to the open PRs to see if the author is
> responsive. If that's the case, then it means that review is needed and we
> can add the "waiting-for-review" tag. There are a few PRs that are in that
> state but there are far more out there which need to have this tag added.
> 

Seems like a good suggestion: a new label which we add, and a message to all
PRs.

See if somebody responds and then take action later on?

Wido

> On Mon, Jul 24, 2017 at 7:55 AM, Wido den Hollander  wrote:
> 
> >
> > > Op 24 juli 2017 om 10:47 schreef Marc-Aurèle Brothier  > >:
> > >
> > >
> > > Hi Wido,
> > >
> > > I have one comment on this topic. Some of those PRs are lying there
> > because
> > > no one took the time to merge them (I have a couple like that) since they
> > > were not very important (I think it's the reason), fixing only a small
> > > glitch or improving an output. If we start to close the PRs because there
> > > isn't activity on them, we should be sure to treat all PRs equally in
> > term
> > > on timeline when they arrive. Using the labels to sort them and make
> > > filtering easier would also be something important IMO. Today there are
> > > 200+ PRs but we cannot filter them and have not much idea on their
> > status,
> > > except by checking if they are "mergeable". This should not conflict with
> > > the Jira tickets & discussion that happened previously.
> >
> > Understood! But that's a matter of resources the community has. Each PR
> > needs to be looked at by a volunteer, a committer who all have limited
> > resources.
> >
> > It's not good that PR's didn't get the attention they needed, but it's a
> > fact that it happened.
> >
> > I don't think we have the resources to manually check and label 200 PRs
> > and see which one can be merged.
> >
> > If an author thinks the PR is still valid, he/she can open it again. It's
> > not a hard-close as I put in the message, but a way to filter what we need
> > to put attention on.
> >
> > They can be labeled and handled then.
> >
> > Wido
> >
> > >
> > > Marco
> > >
> > > On Mon, Jul 24, 2017 at 10:22 AM, Wido den Hollander 
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > While writing this e-mail we have 191 Open Pull requests [0] on Github
> > and
> > > > that number keeps hovering around ~200.
> > > >
> > > > We have a great number of PRs being merged, but a lot of code is old
> > and
> > > > doesn't even merge anymore.
> > > >
> > > > My proposal would be that we close all PRs which didn't see any
> > activity
> > > > in the last 3 months (Jun, July and May 2017) with the following
> > message:
> > > >
> > > > "This Pull Request is being closed for not seeing any activity since
> > May
> > > > 2017.
> > > >
> > > > The CloudStack project is in a transition from the Apache Foundation's
> > Git
> > infrastructure to Github and due to that not all PRs were able to be
> > tested
> > > > and/or merged.
> > > >
> > > > It's not our intention to say that we don't value the PR, but it's a
> > way
> > > > to get a better overview of what needs to be merged.
> > > >
> > > > If you think closing this PR is a mistake, please add a comment and
> > > > re-open the PR! If you do that, could you please make sure that the PR
> > > > merges against the branch it was submitted against?
> > > >
> > > > Thank you very much for your understanding and cooperation!"
> > > >
> > > > How does that sound?
> > > >
> > > > Wido
> > > >
> > > >
> > > > [0]: https://github.com/apache/cloudstack/pulls
> > > >
> >


[DISCUSS] IOPS/GB and Highest Min and Max IOPS for disk offering

2017-07-27 Thread Syed Ahmed
Hi All,

I am planning to add 4 new parameters to the disk offering. The use case
for this is as follows:

We want to provide a provisioned-IOPS-style offering to our customers with
managed storage like SolidFire. The model is similar to GCE, where IOPS
scale with volume size based on a predefined ratio. For this I want to add
two options: minIOPSPerGB and maxIOPSPerGB. Depending on what storage you
have, there are limits on the highest values for your min and max IOPS,
beyond which you don't want to scale your IOPS (SolidFire, for example,
can do 10k min IOPS and 20k max IOPS). To support this, I have to add two
more parameters: highestMinIOPS and highestMaxIOPS. This should work with
existing disk offerings without problems. I am looking for comments on
this approach and would really appreciate your reviews.

Thanks,
-Syed
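To make the ratio-plus-cap scheme concrete, here is a hedged sketch of how the four proposed offering parameters could translate a volume size into the standard min/max IOPS values. The function and parameter names (`translate_iops`, `min_iops_per_gb`, etc.) are illustrative, not the actual CloudStack fields:

```python
def translate_iops(size_gb, min_iops_per_gb, max_iops_per_gb,
                   highest_min_iops, highest_max_iops):
    """Scale IOPS with volume size, capped at the storage backend's limits."""
    min_iops = min(size_gb * min_iops_per_gb, highest_min_iops)
    max_iops = min(size_gb * max_iops_per_gb, highest_max_iops)
    return min_iops, max_iops

# SolidFire-style caps from the example above: 10k min IOPS, 20k max IOPS.
print(translate_iops(100, 10, 20, 10000, 20000))   # (1000, 2000): scales with size
print(translate_iops(5000, 10, 20, 10000, 20000))  # (10000, 20000): capped at limits
```

Under this reading, small volumes scale linearly with the per-GB ratios, and large volumes plateau at the backend's highest supported min/max IOPS.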


Re: [DISCUSS] Closing old Pull Requests on Github

2017-07-27 Thread Syed Ahmed
I would start by adding a comment to the open PRs to see if the author is
responsive. If that's the case, then it means that review is needed and we
can add the "waiting-for-review" tag. There are a few PRs that are in that
state but there are far more out there which need to have this tag added.

On Mon, Jul 24, 2017 at 7:55 AM, Wido den Hollander  wrote:

>
> > Op 24 juli 2017 om 10:47 schreef Marc-Aurèle Brothier  >:
> >
> >
> > Hi Wido,
> >
> > I have one comment on this topic. Some of those PRs are lying there
> because
> > no one took the time to merge them (I have a couple like that) since they
> > were not very important (I think it's the reason), fixing only a small
> > glitch or improving an output. If we start to close the PRs because there
> > isn't activity on them, we should be sure to treat all PRs equally in
> term
> > on timeline when they arrive. Using the labels to sort them and make
> > filtering easier would also be something important IMO. Today there are
> > 200+ PRs but we cannot filter them and have not much idea on their
> status,
> > except by checking if they are "mergeable". This should not conflict with
> > the Jira tickets & discussion that happened previously.
>
> Understood! But that's a matter of resources the community has. Each PR
> needs to be looked at by a volunteer, a committer who all have limited
> resources.
>
> It's not good that PR's didn't get the attention they needed, but it's a
> fact that it happened.
>
> I don't think we have the resources to manually check and label 200 PRs
> and see which one can be merged.
>
> If an author thinks the PR is still valid, he/she can open it again. It's
> not a hard-close as I put in the message, but a way to filter what we need
> to put attention on.
>
> They can be labeled and handled then.
>
> Wido
>
> >
> > Marco
> >
> > On Mon, Jul 24, 2017 at 10:22 AM, Wido den Hollander 
> wrote:
> >
> > > Hi,
> > >
> > > While writing this e-mail we have 191 Open Pull requests [0] on Github
> and
> > > that number keeps hovering around ~200.
> > >
> > > We have a great number of PRs being merged, but a lot of code is old
> and
> > > doesn't even merge anymore.
> > >
> > > My proposal would be that we close all PRs which didn't see any
> activity
> > > in the last 3 months (Jun, July and May 2017) with the following
> message:
> > >
> > > "This Pull Request is being closed for not seeing any activity since
> May
> > > 2017.
> > >
> > > The CloudStack project is in a transition from the Apache Foundation's
> Git
> > > infrastructure to Github and due to that not all PRs were able to be
> tested
> > > and/or merged.
> > >
> > > It's not our intention to say that we don't value the PR, but it's a
> way
> > > to get a better overview of what needs to be merged.
> > >
> > > If you think closing this PR is a mistake, please add a comment and
> > > re-open the PR! If you do that, could you please make sure that the PR
> > > merges against the branch it was submitted against?
> > >
> > > Thank you very much for your understanding and cooperation!"
> > >
> > > How does that sound?
> > >
> > > Wido
> > >
> > >
> > > [0]: https://github.com/apache/cloudstack/pulls
> > >
>


Re: [DISCUSS] Move to Debian9 systemvmtemplate

2017-07-27 Thread Syed Ahmed
-1 on Arch as well. Moving to Debian 9 seems the wiser choice IMO. I've
used Packer before and I really like it; the only downside I see is that
Packer lacks support for XenServer VHD images. There is some work on a
XenServer plugin, but I haven't tested it. If the community decides to use
Packer, I can do some initial validation of it on XenServer.

Thanks,
-Syed

On Tue, Jul 25, 2017 at 3:19 AM, Wido den Hollander  wrote:

>
> > Op 24 juli 2017 om 19:07 schreef Rene Moser :
> >
> >
> > Hi Rohit
> >
> >
> > On 07/23/2017 06:08 PM, Rohit Yadav wrote:
> > > All,
> > >
> > >
> > > Just want to kick an initial discussion around migration to Debian9
> based systemvmtemplate, and get your feedback on the same.
> > >
> > > Here's a work-in-progress PR: https://github.com/apache/
> cloudstack/pull/2198
> >
> > Have you considered to replace veewee by packer?
> >
>
> Packer is really nice indeed. We use it to build our templates [0] which
> we use on CloudStack.
>
> Building the SSVM using Packer should be rather easy I think.
>
> [0]: https://github.com/pcextreme/packer-templates
>
> > Our friends from schubergphilis have already done some work here
> > https://github.com/MissionCriticalCloud/systemvm-packer.
> >
> > However, there would also be an official way to convert the definitions
> > https://www.packer.io/guides/veewee-to-packer.html
> >
> > Regards René
>


Re: [DISCUSS] Metadata server IP improvement

2017-07-27 Thread Syed Ahmed
I think we had a little bit of discussion around this at CCC. Config Drive
really does solve a lot of problems with the existing implementation of
using the CloudStack metadata provider for cloud-init. Overall it is a much
superior solution, as pointed out by Wido. However, we don't want to
completely remove the VR-based approach, as things like bare metal still
require it.

Thanks,
-Syed

On Wed, Jul 26, 2017 at 4:36 AM, Rene Moser  wrote:

> On 07/26/2017 09:00 AM, Wido den Hollander wrote:
> > This has been discussed before and right now there is a PR for using
> Config Drive: https://github.com/apache/cloudstack/pull/2116
> >
> > The problem with 169.254.169.254 is:
> >
> > - It doesn't work with IPv6
> > - It doesn't work with Basic Networking
> > - You need to do iptables intercepting on the VR
> >
> > Config Drive is a IP-protocol independent solution for getting metadata
> into the Instance without the need for IP connectivity.
> >
> > Imho that's a much better solution.
>
> Perfect, makes sense! Thanks for the quick reply.
>
> René
>


Re: Marvin Install Issue

2017-07-27 Thread Tutkowski, Mike
Thanks, Dmytro!

> On Jul 27, 2017, at 5:18 AM, Dmytro Shevchenko  
> wrote:
> 
> I hit the same issue. I'm using virtualenv, and here is my requirements.txt:
> 
> https://dev.mysql.com/get/Downloads/Connector-Python/mysql-connector-python-2.1.6.tar.gz
>  Marvin
>  nose-timer
> 
> working fine.
> 
>> On 27/07/17 05:16, Tutkowski, Mike wrote:
>> Hi everyone,
>> 
>> I am having trouble installing Marvin on Ubuntu 14.04 from master.
>> 
>> It’s complaining that it’s having trouble with mysql-connector-python.
>> 
>> mtutkowski@mike-ubuntu:~/cloudstack/cloudstack$ sudo pip install --upgrade 
>> tools/marvin/dist/Marvin-*.tar.gz
>> Unpacking ./tools/marvin/dist/Marvin-4.11.0.0-SNAPSHOT.tar.gz
>>   Running setup.py (path:/tmp/pip-5URDXT-build/setup.py) egg_info for 
>> package from 
>> file:///home/mtutkowski/cloudstack/cloudstack/tools/marvin/dist/Marvin-4.11.0.0-SNAPSHOT.tar.gz
>> /usr/local/lib/python2.7/dist-packages/setuptools/dist.py:340: 
>> UserWarning: The version specified ('4.11.0.0-SNAPSHOT') is an invalid 
>> version, this may not work as expected with newer versions of setuptools, 
>> pip, and PyPI. Please see PEP 440 for more details.
>>   "details." % self.metadata.version
>> 
>> warning: no files found matching '*.txt' under directory 'docs'
>> Could not find any downloads that satisfy the requirement 
>> mysql-connector-python>=1.1.6 in /usr/lib/python2.7/dist-packages (from 
>> Marvin==4.11.0.0-SNAPSHOT)
>> Downloading/unpacking mysql-connector-python>=1.1.6 (from 
>> Marvin==4.11.0.0-SNAPSHOT)
>> Cleaning up...
>> No distributions at all found for mysql-connector-python>=1.1.6 in 
>> /usr/lib/python2.7/dist-packages (from Marvin==4.11.0.0-SNAPSHOT)
>> Storing debug log for failure in /home/mtutkowski/.pip/pip.log
>> 
>> But it seems to be installed:
>> 
>> python-mysql.connector/trusty,now 1.1.6-1 all [installed]
>> 
>> Thoughts?
>> 
>> Thanks!
>> Mike
> 
> -- 
> Best regards
> Dmytro Shevchenko
> dshevchenko.m...@gmail.com
> skype: demonsh_mk
> +380(66)2426648
> 


Re: [DISCUSS] IOPS/GB and Highest Min and Max IOPS for disk offering

2017-07-27 Thread Tutkowski, Mike
That sounds good, Syed.

On Jul 27, 2017, at 2:14 PM, Syed Ahmed 
> wrote:

Mike, you are absolutely right. I have added 4 new fields in the disk_offering 
table. The driver code won't need to change as I would pass the min and max 
IOPS after translating them. I am not using a fifth parameter, since it is an
either-or situation: if you pass IOPS/GB in your API call and also pass an IOPS
value, I will error out, saying that you can only use one of those.

Thanks,
-Syed

On Thu, Jul 27, 2017 at 2:53 PM, Tutkowski, Mike 
> wrote:
So then, based on the use case you mentioned, you are saying you don't really 
care about minimum limits, right?

Are the values you specify for the disk offering going to be translated into 
the standard min and max values that get stored in the volumes table? If that 
is the case, then the storage driver code won't need to change. You would 
perform the translation and then pass in the min and max values to the driver 
as is done today.

In that situation, you would only need four new fields in the 
cloud.disk_offering table. Perhaps a fifth column saying whether you were using 
IOPS/GB or the standard way.

On Jul 27, 2017, at 12:45 PM, Syed Ahmed wrote:

Hi Mike,

In case of min and max values of IOPS for a specific offering, there is another 
use case. We want to offer tiered storage. Right now if we have a disk 
offering, there is no way for us to limit the IOPS that the customer can set. 
We want to have, say, an offering which scales up to 10k IOPS; if they want more
IOPS, they must switch to a higher-tiered offering which has its values set to
a higher limit.

As for compatibility with existing offerings: you are right, the existing
offerings will still work as expected. An IOPS/GB setting will be used
independently of the current method (fixed or custom).

Thanks,
-Syed

On Thu, Jul 27, 2017 at 2:34 PM, Tutkowski, Mike wrote:
Hi Syed,

I have a couple questions.

What about the minimum number of IOPS a storage provider can support?

For example, with SolidFire, in some releases we can go down as low as 100 IOPS 
per volume and in newer releases as low as 50 IOPS per volume.

Perhaps you should just leave it to the storage driver to confine itself to its 
minimum and maximum values. This would not require such parameters to be passed 
to the disk offering.

Another question I have is how compatibility will work between this proposed 
feature and the existing way this works. I assume it will be an either-or
situation.

Thanks!
Mike

> On Jul 27, 2017, at 9:34 AM, Syed Ahmed wrote:
>
> Hi All,
>
> I am planning to add 4 new parameters to the disk offering. The use case for 
> this is as follows:
>
> We want to provide a provisioned IOPS style offering to our customers with 
> managed storage like SolidFire. The model is similar to GCE where we have 
> IOPS scale with the size based on a predefined ratio. So for this I want to 
> add two options. minIOPSPerGB and maxIOPSPerGB. Now, based on what storage 
> you have, you have limits on the highest values for your min and max IOPS and 
> after which you don't want to scale your IOPS (SolidFire for example can do 
> 10k minIOPS and 20k max IOPS). To support this, I have to add two more 
> parameters, highestMinIOPS, highestMaxIOPS. This should work with existing 
> disk offerings without problem. I am looking for comments on this approach. 
> Would really appreciate your reviews.
>
> Thanks,
> -Syed




Re: [DISCUSS] IOPS/GB and Highest Min and Max IOPS for disk offering

2017-07-27 Thread Tutkowski, Mike
Hi Syed,

I have a couple questions.

What about the minimum number of IOPS a storage provider can support?

For example, with SolidFire, in some releases we can go down as low as 100 IOPS 
per volume and in newer releases as low as 50 IOPS per volume.

Perhaps you should just leave it to the storage driver to confine itself to its 
minimum and maximum values. This would not require such parameters to be passed 
to the disk offering.

Another question I have is how compatibility will work between this proposed 
feature and the existing way this works. I assume it will be an either-or
situation.

Thanks!
Mike

> On Jul 27, 2017, at 9:34 AM, Syed Ahmed  wrote:
> 
> Hi All, 
> 
> I am planning to add 4 new parameters to the disk offering. The use case for 
> this is as follows:
> 
> We want to provide a provisioned IOPS style offering to our customers with 
> managed storage like SolidFire. The model is similar to GCE where we have 
> IOPS scale with the size based on a predefined ratio. So for this I want to 
> add two options. minIOPSPerGB and maxIOPSPerGB. Now, based on what storage 
> you have, you have limits on the highest values for your min and max IOPS and 
> after which you don't want to scale your IOPS (SolidFire for example can do 
> 10k minIOPS and 20k max IOPS). To support this, I have to add two more 
> parameters, highestMinIOPS, highestMaxIOPS. This should work with existing 
> disk offerings without problem. I am looking for comments on this approach. 
> Would really appreciate your reviews. 
> 
> Thanks,
> -Syed


Re: [DISCUSS] IOPS/GB and Highest Min and Max IOPS for disk offering

2017-07-27 Thread Tutkowski, Mike
So then, based on the use case you mentioned, you are saying you don't really 
care about minimum limits, right?

Are the values you specify for the disk offering going to be translated into 
the standard min and max values that get stored in the volumes table? If that 
is the case, then the storage driver code won't need to change. You would 
perform the translation and then pass in the min and max values to the driver 
as is done today.

In that situation, you would only need four new fields in the 
cloud.disk_offering table. Perhaps a fifth column saying whether you were using 
IOPS/GB or the standard way.

On Jul 27, 2017, at 12:45 PM, Syed Ahmed wrote:

Hi Mike,

In case of min and max values of IOPS for a specific offering, there is another 
use case. We want to offer tiered storage. Right now if we have a disk 
offering, there is no way for us to limit the IOPS that the customer can set. 
We want to have, say, an offering which scales up to 10k IOPS; if they want more
IOPS, they must switch to a higher-tiered offering which has its values set to
a higher limit.

As for compatibility with existing offerings: you are right, the existing
offerings will still work as expected. An IOPS/GB setting will be used
independently of the current method (fixed or custom).

Thanks,
-Syed

On Thu, Jul 27, 2017 at 2:34 PM, Tutkowski, Mike 
> wrote:
Hi Syed,

I have a couple questions.

What about the minimum number of IOPS a storage provider can support?

For example, with SolidFire, in some releases we can go down as low as 100 IOPS 
per volume and in newer releases as low as 50 IOPS per volume.

Perhaps you should just leave it to the storage driver to confine itself to its 
minimum and maximum values. This would not require such parameters to be passed 
to the disk offering.

Another question I have is how compatibility will work between this proposed 
feature and the existing way this works. I assume it will be an either-or
situation.

Thanks!
Mike

> On Jul 27, 2017, at 9:34 AM, Syed Ahmed 
> > wrote:
>
> Hi All,
>
> I am planning to add 4 new parameters to the disk offering. The use case for 
> this is as follows:
>
> We want to provide a provisioned IOPS style offering to our customers with 
> managed storage like SolidFire. The model is similar to GCE where we have 
> IOPS scale with the size based on a predefined ratio. So for this I want to 
> add two options. minIOPSPerGB and maxIOPSPerGB. Now, based on what storage 
> you have, you have limits on the highest values for your min and max IOPS and 
> after which you don't want to scale your IOPS (SolidFire for example can do 
> 10k minIOPS and 20k max IOPS). To support this, I have to add two more 
> parameters, highestMinIOPS, highestMaxIOPS. This should work with existing 
> disk offerings without problem. I am looking for comments on this approach. 
> Would really appreciate your reviews.
>
> Thanks,
> -Syed



Re: [DISCUSS] Closing old Pull Requests on Github

2017-07-27 Thread Rohit Yadav
That's a good idea to use labels to tag PRs. Does it make sense to add an 
explicit label such as 'closeable' or something more appropriate on PRs that 
are not getting any traction either from reviewers or from the author 
themselves?


For the 4.9.3.0 effort, I'm trying to go through several PRs and have closed a
few PRs that are no longer relevant (for example, duplicates, fixes done in a
different way in master, already fixed in master, or not applicable at all,
etc.).


- Rohit


From: Wido den Hollander 
Sent: Thursday, July 27, 2017 5:47:39 PM
To: Syed Ahmed; dev@cloudstack.apache.org
Cc: Marc-Aurèle Brothier
Subject: Re: [DISCUSS] Closing old Pull Requests on Github


> Op 27 juli 2017 om 17:13 schreef Syed Ahmed :
>
>
> I would start by adding a comment to the open PRs to see if the author is
> responsive. If that's the case, then it means that review is need and we
> can add the "waiting-for-review" tag. There are a few PRs that are in that
> state but there are far more out there which need to have this tag added.
>

Seems like a good suggestion: a new label which we add, and a message to all
PRs.

See if somebody responds and then take action later on?

Wido


rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue

> On Mon, Jul 24, 2017 at 7:55 AM, Wido den Hollander  wrote:
>
> >
> > > Op 24 juli 2017 om 10:47 schreef Marc-Aurèle Brothier  > >:
> > >
> > >
> > > Hi Wido,
> > >
> > > I have one comment on this topic. Some of those PRs are lying there
> > because
> > > no one took the time to merge them (I have a couple like that) since they
> > > were not very important (I think it's the reason), fixing only a small
> > > glitch or improving an output. If we start to close the PRs because there
> > > isn't activity on them, we should be sure to treat all PRs equally in
> > term
> > > on timeline when they arrive. Using the labels to sort them and make
> > > filtering easier would also be something important IMO. Today there are
> > > 200+ PRs but we cannot filter them and have not much idea on their
> > status,
> > > except by checking if they are "mergeable". This should not conflict with
> > > the Jira tickets & discussion that happened previously.
> >
> > Understood! But that's a matter of resources the community has. Each PR
> > needs to be looked at by a volunteer, a committer who all have limited
> > resources.
> >
> > It's not good that PR's didn't get the attention they needed, but it's a
> > fact that it happened.
> >
> > I don't think we have the resources to manually check and label 200 PRs
> > and see which one can be merged.
> >
> > If an author thinks the PR is still valid, he/she can open it again. It's
> > not a hard-close as I put in the message, but a way to filter what we need
> > to put attention on.
> >
> > They can be labeled and handled then.
> >
> > Wido
> >
> > >
> > > Marco
> > >
> > > On Mon, Jul 24, 2017 at 10:22 AM, Wido den Hollander 
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > While writing this e-mail we have 191 Open Pull requests [0] on Github
> > and
> > > > that number keeps hovering around ~200.
> > > >
> > > > We have a great number of PRs being merged, but a lot of code is old
> > and
> > > > doesn't even merge anymore.
> > > >
> > > > My proposal would be that we close all PRs which didn't see any
> > activity
> > > > in the last 3 months (Jun, July and May 2017) with the following
> > message:
> > > >
> > > > "This Pull Request is being closed for not seeing any activity since
> > May
> > > > 2017.
> > > >
> > > > The CloudStack project is in a transition from the Apache Foundation's
> > Git
> > > > infrastructure to Github and due to that not all PRs were able to be
> > tested
> > > > and/or merged.
> > > >
> > > > It's not our intention to say that we don't value the PR, but it's a
> > way
> > > > to get a better overview of what needs to be merged.
> > > >
> > > > If you think closing this PR is a mistake, please add a comment and
> > > > re-open the PR! If you do that, could you please make sure that the PR
> > > > merges against the branch it was submitted against?
> > > >
> > > > Thank you very much for your understanding and cooperation!"
> > > >
> > > > How does that sound?
> > > >
> > > > Wido
> > > >
> > > >
> > > > [0]: https://github.com/apache/cloudstack/pulls
> > > >
> >


Re: [DISCUSS] IOPS/GB and Highest Min and Max IOPS for disk offering

2017-07-27 Thread Syed Ahmed
Hi Mike,

In the case of min and max IOPS values for a specific offering, there is
another use case: we want to offer tiered storage. Right now, if we have a
disk offering, there is no way for us to limit the IOPS that the customer
can set. We want to have, say, an offering which scales up to 10k IOPS; if
they want more IOPS, they must switch to a higher-tiered offering which has
its values set to a higher limit.

As for compatibility with existing offerings: you are right, the existing
offerings will still work as expected. An IOPS/GB setting will be used
independently of the current method (fixed or custom).

Thanks,
-Syed

On Thu, Jul 27, 2017 at 2:34 PM, Tutkowski, Mike 
wrote:

> Hi Syed,
>
> I have a couple questions.
>
> What about the minimum number of IOPS a storage provider can support?
>
> For example, with SolidFire, in some releases we can go down as low as 100
> IOPS per volume and in newer releases as low as 50 IOPS per volume.
>
> Perhaps you should just leave it to the storage driver to confine itself
> to its minimum and maximum values. This would not require such parameters
> to be passed to the disk offering.
>
> Another question I have is how compatibility will work between this
> proposed feature and the existing way this works. I assume it will be an
> either-or situation.
>
> Thanks!
> Mike
>
> > On Jul 27, 2017, at 9:34 AM, Syed Ahmed  wrote:
> >
> > Hi All,
> >
> > I am planning to add 4 new parameters to the disk offering. The use case
> for this is as follows:
> >
> > We want to provide a provisioned-IOPS-style offering to our customers
> with managed storage like SolidFire. The model is similar to GCE, where
> IOPS scale with size based on a predefined ratio. For this, I want to add
> two options: minIOPSPerGB and maxIOPSPerGB. Now, depending on what
> storage you have, there are limits on the highest values for your min and
> max IOPS, beyond which you don't want to scale your IOPS (SolidFire, for
> example, can do 10k min IOPS and 20k max IOPS). To support this, I have to
> add two more parameters: highestMinIOPS and highestMaxIOPS. This should
> work with existing disk offerings without problems. I am looking for
> comments on this approach. Would really appreciate your reviews.
> >
> > Thanks,
> > -Syed
>
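The scaling described in the proposal above could be sketched roughly as
follows. This is a hypothetical helper, not CloudStack code; the parameter
names mirror the proposed minIOPSPerGB/maxIOPSPerGB and
highestMinIOPS/highestMaxIOPS options, and the cap-at-the-highest-value
behavior is my reading of the thread:

```python
def compute_provisioned_iops(size_gb, min_iops_per_gb, max_iops_per_gb,
                             highest_min_iops, highest_max_iops):
    """Scale min/max IOPS with volume size, capped at the offering's
    highest allowed values (illustrative sketch only)."""
    min_iops = min(size_gb * min_iops_per_gb, highest_min_iops)
    max_iops = min(size_gb * max_iops_per_gb, highest_max_iops)
    return min_iops, max_iops

# A 100 GB volume at 100/200 IOPS per GB, capped at 10k min / 20k max,
# hits both caps; a 50 GB volume scales linearly below them.
print(compute_provisioned_iops(100, 100, 200, 10_000, 20_000))  # (10000, 20000)
print(compute_provisioned_iops(50, 100, 200, 10_000, 20_000))   # (5000, 10000)
```

Under this reading, the driver only ever sees the translated min/max pair,
which matches Mike's point later in the thread that the driver code would
not need to change.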


Re: [DISCUSS] IOPS/GB and Highest Min and Max IOPS for disk offering

2017-07-27 Thread Syed Ahmed
Mike, you are absolutely right. I have added 4 new fields in the
disk_offering table. The driver code won't need to change, as I would pass
the min and max IOPS after translating them. I am not adding a fifth
parameter, since it is an either-or situation: if you pass IOPS/GB in your
API call and also pass an IOPS value, I will error out, saying that you can
only use one of those.

Thanks,
-Syed

On Thu, Jul 27, 2017 at 2:53 PM, Tutkowski, Mike 
wrote:

> So then, based on the use case you mentioned, you are saying you don't
> really care about minimum limits, right?
>
> Are the values you specify for the disk offering going to be translated
> into the standard min and max values that get stored in the volumes table?
> If that is the case, then the storage driver code won't need to change. You
> would perform the translation and then pass in the min and max values to
> the driver as is done today.
>
> In that situation, you would only need four new fields in the
> cloud.disk_offering table. Perhaps a fifth column saying whether you were
> using IOPS/GB or the standard way.
>
> On Jul 27, 2017, at 12:45 PM, Syed Ahmed  h...@cloudops.com>> wrote:
>
> Hi Mike,
>
> In the case of min and max IOPS values for a specific offering, there is
> another use case: we want to offer tiered storage. Right now, if we have a
> disk offering, there is no way for us to limit the IOPS that the customer
> can set. We want to have, say, an offering that scales up to 10k IOPS; if
> they want more IOPS, they must switch to a higher-tiered offering that has
> its values set to a higher limit.
>
> As for compatibility with existing offerings: you are right, the existing
> offerings will still work as expected. An IOPS/GB setting will be used
> independently of the current method (fixed or custom).
>
> Thanks,
> -Syed
>
> On Thu, Jul 27, 2017 at 2:34 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com> wrote:
> Hi Syed,
>
> I have a couple questions.
>
> What about the minimum number of IOPS a storage provider can support?
>
> For example, with SolidFire, in some releases we can go down as low as 100
> IOPS per volume and in newer releases as low as 50 IOPS per volume.
>
> Perhaps you should just leave it to the storage driver to confine itself
> to its minimum and maximum values. This would not require such parameters
> to be passed to the disk offering.
>
> Another question I have is how compatibility will work between this
> proposed feature and the existing way this works. I assume it will be an
> either-or situation.
>
> Thanks!
> Mike
>
> > On Jul 27, 2017, at 9:34 AM, Syed Ahmed  h...@cloudops.com>> wrote:
> >
> > Hi All,
> >
> > I am planning to add 4 new parameters to the disk offering. The use case
> for this is as follows:
> >
> > We want to provide a provisioned-IOPS-style offering to our customers
> with managed storage like SolidFire. The model is similar to GCE, where
> IOPS scale with size based on a predefined ratio. For this, I want to add
> two options: minIOPSPerGB and maxIOPSPerGB. Now, depending on what
> storage you have, there are limits on the highest values for your min and
> max IOPS, beyond which you don't want to scale your IOPS (SolidFire, for
> example, can do 10k min IOPS and 20k max IOPS). To support this, I have to
> add two more parameters: highestMinIOPS and highestMaxIOPS. This should
> work with existing disk offerings without problems. I am looking for
> comments on this approach. Would really appreciate your reviews.
> >
> > Thanks,
> > -Syed
>
>
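The either-or validation Syed describes could look roughly like the sketch
below. This is a hypothetical Python illustration only: the real check would
live in CloudStack's API layer, and the function and parameter names here
are invented for clarity, not taken from the codebase:

```python
def validate_iops_params(min_iops=None, max_iops=None,
                         min_iops_per_gb=None, max_iops_per_gb=None):
    """Reject calls that mix fixed IOPS values with IOPS/GB settings
    (illustrative sketch of the either-or rule from the thread)."""
    fixed = min_iops is not None or max_iops is not None
    per_gb = min_iops_per_gb is not None or max_iops_per_gb is not None
    if fixed and per_gb:
        raise ValueError("Specify either fixed IOPS or IOPS/GB, not both")
```

Either style alone passes; combining them raises, which matches the
"error out saying that you can only use one of those" behavior above.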


Re: [DISCUSS] Move to Debian9 systemvmtemplate

2017-07-27 Thread Tim Mackey
Syed,

I did a bunch of work on XenServer with Packer [1] before leaving Citrix.
My stuff works rather well and was tested with XS 6.2, 6.5, and 7. It
shouldn't be hard to validate with the newest XS and an updated Packer; I
just lack the infra to do the testing.

[1] https://github.com/xenserverarmy/packer

-tim

On Thu, Jul 27, 2017 at 11:19 AM, Syed Ahmed  wrote:

> -1 on Arch as well. Moving to Debian 9 seems the wiser choice, IMO. I've
> used Packer before and I really like it; the only downside I see is that
> Packer lacks support for XenServer VHD images. There is some work on a
> XenServer plugin, but I haven't tested it. If the community decides to use
> Packer, I can do some initial validation of it on XenServer.
>
> Thanks,
> -Syed
>
> On Tue, Jul 25, 2017 at 3:19 AM, Wido den Hollander 
> wrote:
>
> >
> > > On 24 July 2017 at 19:07, Rene Moser wrote:
> > >
> > >
> > > Hi Rohit
> > >
> > >
> > > On 07/23/2017 06:08 PM, Rohit Yadav wrote:
> > > > All,
> > > >
> > > >
> > > > Just want to kick an initial discussion around migration to Debian9
> > based systemvmtemplate, and get your feedback on the same.
> > > >
> > > > Here's a work-in-progress PR: https://github.com/apache/
> > cloudstack/pull/2198
> > >
> > > Have you considered replacing Veewee with Packer?
> > >
> >
> > Packer is really nice indeed. We use it to build our templates [0],
> > which we use on CloudStack.
> >
> > Building the SSVM using Packer should be rather easy, I think.
> >
> > [0]: https://github.com/pcextreme/packer-templates
> >
> > > Our friends from schubergphilis have already done some work here
> > > https://github.com/MissionCriticalCloud/systemvm-packer.
> > >
> > > However there would be also an official way to convert the definitions
> > > https://www.packer.io/guides/veewee-to-packer.html
> > >
> > > Regards René
> >
>
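For readers unfamiliar with Packer, a systemvm template build with its QEMU
builder might look broadly like the sketch below. This is illustrative
only: the ISO URL, checksum, credentials, and script path are placeholders
I invented, not values from the PR or from any of the linked repositories.

```json
{
  "builders": [{
    "type": "qemu",
    "iso_url": "<debian-9-netinst-iso-url>",
    "iso_checksum": "<sha256-checksum>",
    "disk_size": 2500,
    "format": "qcow2",
    "ssh_username": "<build-user>",
    "ssh_password": "<build-password>",
    "shutdown_command": "sudo shutdown -P now"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "scripts/configure-systemvm.sh"
  }]
}
```

Note that the QEMU builder emits qcow2/raw output, which echoes Syed's
point above: producing a XenServer VHD would still need a separate plugin
or a post-build conversion step.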