Re: [openstack-dev] [heat] Software Config progress

2014-01-20 Thread Prasad Vellanki
Steve & Clint

That should work. We will look at implementing a resource that spins up a
short-lived VM for bootstrapping a service VM and informing the configuration
server for further configuration.

thanks
prasadv


On Wed, Jan 15, 2014 at 7:53 PM, Steven Dake  wrote:

> On 01/14/2014 09:27 PM, Clint Byrum wrote:
>
>> Excerpts from Prasad Vellanki's message of 2014-01-14 18:41:46 -0800:
>>
>>> Steve
>>>
>>> I did not mean to have a custom solution at all. In fact that would be
>>> terrible.  I think Heat's model of software config and deployment is really
>>> good. That allows configurators such as Chef, Puppet, Salt or Ansible to be
>>> plugged into it, and all users need to write are modules for those.
>>>
>>> What I was thinking is whether there is a way to use software
>>> config/deployment to do initial configuration of the appliance by using
>>> an agentless system such as Ansible or Salt, thus requiring no cfminit.
>>> I am not sure this will work either, since it might require ssh keys to
>>> be installed for getting ssh to work without password prompting. But I
>>> do see that Ansible and Salt support a username/password option.
>>> If this would not work, I agree that the best option is to make them
>>> support cfminit...
>>>
>> Ansible is not agent-less. It just makes use of an extremely flexible
>> agent: sshd. :) AFAIK, salt does use an agent though maybe they've added
>> SSH support.
>>
>> Anyway, the point is, Heat's engine should not be reaching into your
>> machines. It talks to API's, but that is about it.
>>
>> What you really want is just a VM that spins up and does the work for
>> you and then goes away once it is done.
>>
> Good thinking.  This model might work well without introducing the "groan
> another daemon" problems pointed out elsewhere in this thread that were
> snipped.  Then the "modules" could simply be heat templates available to
> the Heat engine to do the custom config setup.
>
> The custom config setup might still be a problem with the original
> constraints (not modifying images to inject SSH keys).
>
> That model wfm.
>
> Regards
> -steve
>
>
>  ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-20 Thread Yair Fried
I seem to be unable to convey my point using generalization, so I will give a 
specific example:
I would like to have "update dns server" as an additional network scenario. 
Currently I could add it to the existing module:

1. tests connectivity
2. re-associate floating ip
3. update dns server

In which case, a failure to re-associate the IP will prevent my test from
running, even though these are completely unrelated scenarios, and (IMO) we
would like to get feedback on both of them.

Another way is to copy the entire network_basic_ops module, remove
"re-associate floating ip" and add "update dns server". For obvious reasons,
this also seems like the wrong way to go.

I am looking for an elegant way to share the code of these scenarios.
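One shape that would give #1 -> #2 and #1 -> #3 independently is a shared
base class, for example (a minimal sketch using plain unittest; real Tempest
scenarios derive from the scenario manager classes, and every name below is
illustrative, not existing Tempest code):

import unittest


class NetworkOpsBase(unittest.TestCase):
    """Shared setup: each subclass becomes an independent scenario."""

    def setUp(self):
        super(NetworkOpsBase, self).setUp()
        # Stand-ins for creating the network/subnet/router/server and
        # for the connectivity check that is the common step #1.
        self.topology = self._build_topology()
        self._check_connectivity()

    def _build_topology(self):
        return {'server': 'vm-1', 'floating_ip': '172.24.4.10'}

    def _check_connectivity(self):
        pass  # the ping/SSH check would go here


class TestReassociateFloatingIp(NetworkOpsBase):
    def test_reassociate_floating_ip(self):
        # Scenario #2: runs even if the DNS scenario below fails.
        self.assertIn('floating_ip', self.topology)


class TestUpdateDnsServer(NetworkOpsBase):
    def test_update_dns_server(self):
        # Scenario #3: independent of floating-ip re-association.
        self.assertIn('server', self.topology)


if __name__ == '__main__':
    unittest.main()

The trade-off is the one Sean raised: the shared setup then runs once per
scenario instead of once per module, so total runtime goes up.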

Yair


- Original Message -
From: "Salvatore Orlando" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, January 20, 2014 7:22:22 PM
Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down 
NetworkBasicOps to smaller test cases



Yair is probably referring to statistically independent tests, or any case
for which the following is true (P(x) is the probability that a test succeeds):


P(4|3|2|1) = P(4|1) * P(3|1) * P(2|1) 



This might apply to the tests we are adding to network_basic_ops scenario; 
however it is worth noting that: 


- in some cases the above relationship does not hold. For instance, a public
network connectivity test can hardly succeed if the private connectivity test
failed (is that correct? I'm not sure of anything anymore these days!)
- Sean correctly pointed out that splitting tests will cause repeated activities
which will just make the test run longer without any additional benefit.


On the other hand, I understand and share the feeling that we are adding too 
much to the same workflow. Would it make sense to identify a few conceptually 
independent workflows, identify one or more advanced network scenarios, and 
keep only internal + public connectivity checks in basic_ops? 


Salvatore 



On 20 January 2014 09:23, Jay Pipes < jaypi...@gmail.com > wrote: 



On Sun, 2014-01-19 at 07:17 -0500, Yair Fried wrote: 
> OK, 
> but considering my pending patch (#3 and #4) 
> what about: 
> 
> #1 -> #2 
> #1 -> #3 
> #1 -> #4 
> 
> instead of 
> 
> #1 -> #2 -> #3 -> #4 
> 
> a failure in #2 will prevent #3 and #4 from running even though they are 
> completely unrelated 

Seems to me, that the above is a logical fault. If a failure in #2 
prevents #3 or #4 from running, then by nature they are related to #2. 

-jay 




___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler sub-group agenda 1/21

2014-01-20 Thread Dugger, Donald D
1) Memcached based scheduler updates
2) Scheduler code forklift
3) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Enable to set DHCP port attributes

2014-01-20 Thread Itsuro ODA
Hi Neutron developers,

I added a detailed specification,
https://wiki.openstack.org/wiki/Neutron/enable-to-set-dhcp-port-attributes
in order to reply to comments on the code
(https://review.openstack.org/#/c/61026/).

Comments are welcome. I hope this helps things advance.

Thanks.

On Tue, 17 Dec 2013 09:01:24 +0900
Itsuro ODA  wrote:

> Hi Neutron developers,
> 
> I submitted the following blue print.
> https://blueprints.launchpad.net/neutron/+spec/enable-to-set-dhcp-port-attributes
> 
> It is a proposal to enable a user to control DHCP port attributes
> (especially the IP address).
> 
> This is based on a real requirement from our customer.
> I don't know whether there is a consensus that DHCP port attributes
> should not be settable by a user. Comments are welcome.
> 
> Thanks.
> -- 
> Itsuro ODA 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Itsuro ODA 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Which patchset triggered the test

2014-01-20 Thread Mohammad Banikazemi

Thanks a lot for answering my question and for updating the
multi-node/3rd party testing Google doc as well.

Mohammad




From:   Roey Chen 
To: "OpenStack Development Mailing List (not for usage questions)"
,
Date:   01/20/2014 04:25 PM
Subject:Re: [openstack-dev] [neutron] [third-party-testing] Which
patchset triggered the test



Mohammad,

You can get the information you want from the environment variables that
the Gerrit plugin sets.

Just like it was mentioned here:
https://etherpad.openstack.org/p/multi-node-neutron-tempest
if you set NEUTRON_BRANCH=$GERRIT_REFSPEC in the localrc,
then Devstack will pull the change which triggered the build.

Try the env shell command to find out what the rest of the environment
variables are.

Best,
---
Roey



From: Mohammad Banikazemi [m...@us.ibm.com]
Sent: Monday, January 20, 2014 9:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] [third-party-testing] Which patchset
triggered the test



I have a question regarding the Jenkins/Gerrit setup for third party testing
setups.

When Jenkins gets triggered by a patchset through the Gerrit trigger
plug-in, you can execute a set of shell scripts. How do you get the
information about the patchset that triggered the test? In particular, in
your scripts, how do you figure out which patchset triggered the test? Here
is why I am asking this question:
During our earlier IRC calls we said, one approach for testing would be
using devstack to install OpenStack and run appropriate tests. The devstack
stack.sh brings in the master branch without the patchset which triggered
the test. How do I access the patchset I want to test? Am I missing
something here?

Thanks,

Mohammad
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-20 Thread Devananda van der Veen
On Sun, Jan 19, 2014 at 9:30 PM, Robert Collins
wrote:

> On 20 January 2014 18:10, Alan Kavanagh 
> wrote:
> > +1, that is another point Rob. When I started this thread my main
> interest was disk and then firmware. It is clear we really need to have a
> clear discussion on this, as IMHO I would not be supportive of leasing
> baremetal to tenants if I cannot guarantee the service; otherwise the cost
> of exposing tenants to adverse attacks and data screening is far greater
> than the revenue generated from the service. When it comes to the tenants
> in our DC we consider that all tenants need to be provided a guarantee of the
> baremetal service on the disk, loaders etc., otherwise it's difficult to
> assure your customer.
>
> I think LXC/openVZ/Docker make pretty good compromises in this space
> BTW - low overhead, bare metal performance, no root access to the
> hardware.
>
>
++

Eg, when sized for single-instance-per-host, you'll get very similar
performance without the disk/firmware/etc security issues.

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder unit test failure

2014-01-20 Thread iKhan
I am wondering which one is better in terms of performance: iniparse or
ConfigParser?

I am aware iniparse will do a better job of maintaining the INI file's
structure, but I am more interested in performance.

Can anyone shed some light on which parser should be used to perform INI
parsing?
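For context, a minimal stdlib-only sketch of the kind of parsing I need (the
file name is illustrative); as far as I can tell, iniparse's main advantage is
preserving comments and layout when writing the file back, rather than raw
parsing speed:

try:
    import configparser                      # Python 3
except ImportError:
    import ConfigParser as configparser      # Python 2

parser = configparser.ConfigParser()
parser.read('backend.ini')                   # illustrative file name

# Dump every section as a dict; the same calls work in both versions.
for section in parser.sections():
    print(section, dict(parser.items(section)))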


On Tue, Jan 21, 2014 at 1:11 AM, iKhan  wrote:

> Continued with ConfigParser; not much difference apart from the clean way
> of maintaining the INI file in iniparse. I think unless a better solution is
> found, it's good to go with this.
>
> Thanks again John.
>
>
> On Mon, Jan 20, 2014 at 11:47 PM, John Griffith <
> john.griff...@solidfire.com> wrote:
>
>> On Mon, Jan 20, 2014 at 11:15 AM, John Griffith
>>  wrote:
>> > On Mon, Jan 20, 2014 at 10:30 AM, iKhan  wrote:
>> >> Thanks John,
>> >>
>> >> It worked earlier while executing because iniparse was installed, tho
>> this
>> >> wasn't present in virtual environment. Installing iniparse via pip did
>> work.
>> >> Since I didn't install iniparse specifically, I was under impression
>> it was
>> >> there by default. Probably now I have to take care of this in
>> >> test-requirement.txt as you mentioned.
>> >>
>> >> I wonder if there is an alternative to iniparse by default.
>> >>
>> >> Regards
>> >>
>> >>
>> >> On Mon, Jan 20, 2014 at 10:47 PM, John Griffith
>> >>  wrote:
>> >>>
>> >>> On Mon, Jan 20, 2014 at 10:07 AM, iKhan 
>> wrote:
>> >>> > Hi,
>> >>> >
>> >>> > I have imported iniparse to my cinder code, it works fine when I
>> perform
>> >>> > execution. But when I run the unit test, it fails while importing
>> >>> > iniparse.
>> >>> > It says "No module named iniparse". Do I have to take care of
>> something
>> >>> > here?
>> >>> >
>> >>> > --
>> >>> > Thanks,
>> >>> > Ibad Khan
>> >>> > 9686594607
>> >>> >
>> >>> > ___
>> >>> > OpenStack-dev mailing list
>> >>> > OpenStack-dev@lists.openstack.org
>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>> >
>> >>>
>> >>> It sounds like it's not installed on your system.  You'd need to do a
>> >>> "pip install iniparse", but if you're adding this to your unit tests
>> >>> you'll need to have a look at the common test-requires file.  Also
>> >>> keep in mind if your driver is going to rely on it you'll need it in
>> >>> requirements.  We can work through the details via IRC if you like.
>> >>>
>> >>> John
>> >>>
>> >>> ___
>> >>> OpenStack-dev mailing list
>> >>> OpenStack-dev@lists.openstack.org
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Thanks,
>> >> Ibad Khan
>> >> 9686594607
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> > there is check out openstack.common.iniparser, not sure if it'll fit
>> > your needs or not.
>> DOH!!  Disregard that
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Thanks,
> Ibad Khan
> 9686594607
>



-- 
Thanks,
Ibad Khan
9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Validation of IP

2014-01-20 Thread iKhan
Hi,

I am planning to validate an IP address which is accepted as part of user
input in Cinder. I need to verify that the IP address is valid and that it is
up in the network. The dirty way, or the only way that I know of as of now, is
to create a socket object and perform inet_aton() and gethostbyaddr().

Just worried if these operations are safe to perform.
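For reference, a minimal sketch of the two checks, assuming IPv4 (inet_aton()
is purely local and safe; gethostbyaddr() does a blocking reverse-DNS lookup,
so it can stall on a slow resolver and says nothing definite about whether the
host is actually up):

import socket


def is_valid_ipv4(address):
    """Syntactic check only; never touches the network."""
    try:
        socket.inet_aton(address)
    except socket.error:
        return False
    # inet_aton also accepts short forms like '127.1'; require a dotted quad.
    return address.count('.') == 3


def has_reverse_dns(address):
    """Best-effort reverse lookup; may block, and may fail for live hosts."""
    try:
        socket.gethostbyaddr(address)
        return True
    except (socket.herror, socket.gaierror):
        return False


print(is_valid_ipv4('192.168.1.10'))  # True
print(is_valid_ipv4('999.1.1.1'))     # False

Note that many hosts are up but have no reverse DNS, and many deployments
block ICMP, so "syntactically valid" is usually the most you can verify
safely at input time.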

-- 
Thanks,
Ibad Khan
9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disabling file injection *by default*

2014-01-20 Thread Robert Collins
I was reminded of this while I cleaned up failed file injection nbd
devices on ci-overcloud.tripleo.org :/ - what needs to happen for us
to change the defaults around file injection so that it's disabled?

I'm not talking deprecation or removal, though both of those things
are super appealing :). I'm presuming the steps needed to change the
default are:

 - a blueprint, as it's changelog worthy for the release
 - a DocImpact patch that changes the default for drivers that
currently default it on

Anything else?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-dev] [Nova] Updated Feature in Next Havana release 2013.2.2

2014-01-20 Thread cosmos cosmos
Hello, I am Rusia from Samsung SDS.

Now I am developing on Havana OpenStack.

I am wondering about the start/stop and shelve/unshelve functions,
because the boot-from-image (creates a new volume) function is not
working.

So I tested the master version; there, this function works well.

So I have a question.

I know that the next release, 2013.2.2, will be on Feb 06 2014.

Have these bugs been fixed in 2013.2.2?

I would like to know your answer.


Thanks.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Resetting issues that need attention

2014-01-20 Thread Jay Pipes
I wasn't saying it was frequent :) Just that it was happening on one of
the patches that Eugene said needed to go through :)

-jay

On Mon, 2014-01-20 at 19:04 -0500, Joe Gordon wrote:
> 
> 
> 
> On Mon, Jan 20, 2014 at 5:51 PM, Jay Pipes  wrote:
> On Mon, 2014-01-20 at 18:43 +0400, Eugene Nikanorov wrote:
> > Hi Sean,
> 
> > I think the following 2 commits in neutron are essential for bringing
> > neutron jobs back to an acceptable level of failure rate:
> > https://review.openstack.org/#/c/67537/
> > https://review.openstack.org/#/c/66670/
> 
> 
> That second patch has the gate-tempest-dsvm-neutron-isolated
> job failing
> trying to run keystone-manage pki-setup:
> 
> ImportError: No module named passlib.hash
> 
> 
> message:"ImportError: No module named passlib.hash" has only two hits
> in logstash both for patch 66670
> 
> 
> 
> http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSW1wb3J0RXJyb3I6IE5vIG1vZHVsZSBuYW1lZCBwYXNzbGliLmhhc2hcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDI2MjQwNjMzOH0=
> 
>  
> 
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Resetting issues that need attention

2014-01-20 Thread Joe Gordon
On Mon, Jan 20, 2014 at 5:51 PM, Jay Pipes  wrote:

> On Mon, 2014-01-20 at 18:43 +0400, Eugene Nikanorov wrote:
> > Hi Sean,
>
> > I think the following 2 commits in neutron are essential for bringing
> > neutron jobs back to an acceptable level of failure rate:
> > https://review.openstack.org/#/c/67537/
> > https://review.openstack.org/#/c/66670/
>
> That second patch has the gate-tempest-dsvm-neutron-isolated job failing
> trying to run keystone-manage pki-setup:
>
> ImportError: No module named passlib.hash
>

message:"ImportError: No module named passlib.hash" has only two hits in
logstash both for patch 66670

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSW1wb3J0RXJyb3I6IE5vIG1vZHVsZSBuYW1lZCBwYXNzbGliLmhhc2hcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDI2MjQwNjMzOH0=


>
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Resetting issues that need attention

2014-01-20 Thread Robert Collins
Joe Gordon points at

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSW1wb3J0RXJyb3I6IE5vIG1vZHVsZSBuYW1lZCBwYXNzbGliLmhhc2hcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDI2MjQwNjMzOH0=

e.g. the passlib thing is not a frequent issue.

On 21 January 2014 11:51, Jay Pipes  wrote:
> On Mon, 2014-01-20 at 18:43 +0400, Eugene Nikanorov wrote:
>> Hi Sean,
>
>> I think the following 2 commits in neutron are essential for bringing
>> neutron jobs back to an acceptable level of failure rate:
>> https://review.openstack.org/#/c/67537/
>> https://review.openstack.org/#/c/66670/
>
> That second patch has the gate-tempest-dsvm-neutron-isolated job failing
> trying to run keystone-manage pki-setup:
>
> ImportError: No module named passlib.hash
>
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Resetting issues that need attention

2014-01-20 Thread Jay Pipes
On Mon, 2014-01-20 at 18:43 +0400, Eugene Nikanorov wrote:
> Hi Sean,

> I think the following 2 commits in neutron are essential for bringing
> neutron jobs back to an acceptable level of failure rate:
> https://review.openstack.org/#/c/67537/
> https://review.openstack.org/#/c/66670/

That second patch has the gate-tempest-dsvm-neutron-isolated job failing
trying to run keystone-manage pki-setup:

ImportError: No module named passlib.hash

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Which patchset triggered the test

2014-01-20 Thread Jay Pipes
On Mon, 2014-01-20 at 14:26 -0500, Mohammad Banikazemi wrote:
> I have a question regarding the Jenkins/Gerrit setup for third party
> testing setups.
> 
> When Jenkins gets triggered by a patchset through the Gerrit trigger
> plug-in, you can execute a set of shell scripts. How do you get the
> information about the patchset that triggered the test? In particular,
> in your scripts, how do you figure out which patchset triggered the
> test? Here is why I am asking this question:
> During our earlier IRC calls we said, one approach for testing would
> be using devstack to install OpenStack and run appropriate tests. The
> devstack stack.sh brings in the master branch without the patchset
> which triggered the test. How do I access the patchset I want to test?
> Am I missing something here?

As mentioned by a previous poster, the Gerrit plugin populates some
environment variables that you may use in your scripts in order to fetch
and check out the appropriate git branch and SHA1 that corresponds to
the changeset and patch number.

For a great example of how this is done in the upstream CI system for
the gate, check out the devstack-gate project and the setup_workspace()
[1] Bash function. This function calls the setup_project() [2] Bash
function for each project that is registered for devstack to construct.
The devstack-vm-gate-wrap.sh script is responsible for enumerating all of
the projects that devstack will install [3].

setup_project() is responsible for setting the git checkouts for all of
the OpenStack projects involved in the devstack installation --
including the project for which the triggering changeset is for. The
function calls git clone on the project's upstream cgit repository URI
[4], and then calls git fetch on the branch and SHA1 (ref) that
represents the proposed changeset in Gerrit.

In the setup_project() function, you will notice the use of the
environment variables $ZUUL_BRANCH and $ZUUL_REF. The upstream CI system
uses a Python service called Zuul [6] to manage the graph of in-progress
changesets that are currently going through the gate testing process.
While your in-house testing platform won't likely be using Zuul, the
Gerrit plugin [7] to Jenkins *will* have similar $GERRIT_BRANCH and
$GERRIT_REFSPEC environment variables that contain the git ref, which you
can use in your in-house testing scripts in the same way that devstack-gate
uses $ZUUL_BRANCH and $ZUUL_REF.
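
In other words, the core of what your script needs to do for the triggering
project is roughly the following (a sketch only: the repository URI and
workspace path are illustrative, and GERRIT_BRANCH/GERRIT_REFSPEC come from
the Gerrit Trigger plugin):

import os
import subprocess

repo = 'https://git.openstack.org/openstack/neutron'   # illustrative
workdir = '/opt/stack/new/neutron'                     # illustrative

branch = os.environ.get('GERRIT_BRANCH', 'master')
ref = os.environ.get('GERRIT_REFSPEC')   # e.g. refs/changes/70/66670/3

subprocess.check_call(['git', 'clone', '-b', branch, repo, workdir])
if ref:
    # Fetch the proposed change and check out its SHA1, as setup_project() does.
    subprocess.check_call(['git', 'fetch', repo, ref], cwd=workdir)
    subprocess.check_call(['git', 'checkout', 'FETCH_HEAD'], cwd=workdir)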

Finally, once your in-house setup script has constructed the devstack
environment -- including all of the git checkout'd code trees, then to
run the Neutron testing suite, simply do the following:

cd /opt/stack/new/devstack
sudo -H -u stack ./tools/configure_tempest.sh
cd /opt/stack/new/tempest
sudo -H -u tempest tox -esmoke-serial

How did I know to run the above commands? Well, because that's what the
check-tempest-dsvm-neutron-isolated (configured in the
openstack-infra/config project here: [8]) test runs in the
devstack-gate.sh script here: [9] :)

All the best,
-jay

p.s. I'm putting together some documentation walking through how these
many CI systems and gate projects all work together to configure and run
tests against a devstack environment in the gate. I should be done by
Wednesday or Thursday and will publish a link to the ML. Hopefully, the
instructions will be helpful for Cinder, Neutron, and other contributors
looking to set up 3rd party driver verification testing.

[1]
https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L208
[2]
https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L164
[3]
https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate-wrap.sh#L27
[4]
https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L96
[5]
https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L28
[6] http://ci.openstack.org/zuul.html
[7] https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger
[8] 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/devstack-gate.yaml#L184
and
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/devstack-gate.yaml#L203
[9]
https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L299
 
and 
https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L344



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] firewall_driver and ML2 and vif_security discussion

2014-01-20 Thread Jay Pipes
On Mon, 2014-01-20 at 20:43 +0100, Ian Wells wrote:
> To my mind, it would make that much more sense if Neutron created,
> networked and firewalled a tap and returned it completely set up
> (versus now, where the VM can start with a half-configured set of
> separation and firewall rules that get patched up asynchronously).

Amen.

-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Which patchset triggered the test

2014-01-20 Thread Roey Chen
Mohammad,

You can get the information you want from the environment variables that the 
Gerrit plugin sets.

Just like it was mentioned here:
https://etherpad.openstack.org/p/multi-node-neutron-tempest
if you set NEUTRON_BRANCH=$GERRIT_REFSPEC in the localrc,
then Devstack will pull the change which triggered the build.

Try the env shell command to find out what the rest of the environment
variables are.
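
For example, a quick sketch of a build step that lists everything the plugin
exported (only GERRIT_REFSPEC and GERRIT_BRANCH are relied on above; treat
anything else you find as plugin-version specific):

import os

for name in sorted(os.environ):
    if name.startswith('GERRIT_'):
        print('%s=%s' % (name, os.environ[name]))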

Best,
---
Roey



From: Mohammad Banikazemi [m...@us.ibm.com]
Sent: Monday, January 20, 2014 9:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] [third-party-testing] Which patchset 
triggered the test


I have a question regarding the Jenkins/Gerrit setup for third party testing
setups.

When Jenkins gets triggered by a patchset through the Gerrit trigger plug-in,
you can execute a set of shell scripts. How do you get the information about
the patchset that triggered the test? In particular, in your scripts, how do you
figure out which patchset triggered the test? Here is why I am asking this
question:
During our earlier IRC calls we said, one approach for testing would be using 
devstack to install OpenStack and run appropriate tests. The devstack stack.sh 
brings in the master branch without the patchset which triggered the test. How 
do I access the patchset I want to test? Am I missing something here?

Thanks,

Mohammad
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can somebody help me to determine if a URL validation in python-glanceclient & horizon projects is safe

2014-01-20 Thread Gabriel Hurley
Adding this to glanceclient is probably acceptable since the worst abuse of it 
would be to disrupt a user's local machine until they terminated the process, 
but adding this to Horizon is a no-go.

Django removed the "verify_exists" option from URLField in Django 1.5 for very 
good reasons. Here's the release notes summary:

"django.db.models.fields.URLField.verify_exists will be removed. The feature 
was deprecated in 1.3.1 due to intractable security and performance issues and 
will follow a slightly accelerated deprecation timeframe."

Note that "intractable security issues" bit. Doing this type of validation 
server-side opens you up to some nasty DoS attacks and simply shouldn't be done.
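
If the goal is just to catch obvious garbage before submitting, a purely
client-side check of the URL's shape avoids the problem entirely. A minimal
sketch using six.moves, as in the patches under review (the accepted scheme
list here is an assumption):

from six.moves.urllib import parse as urlparse


def looks_like_url(value, schemes=('http', 'https')):
    # Syntactic check only: scheme and host present, nothing is fetched.
    parts = urlparse.urlparse(value)
    return parts.scheme in schemes and bool(parts.netloc)


print(looks_like_url('https://example.com/image.qcow2'))  # True
print(looks_like_url('not a url'))                        # False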

If you have further questions, I recommend talking to Paul McMillan, who was 
the original reporter of the security issues with "verify_exists" in Django.

All the best,


-  Gabriel

From: Victor Joel Morales Ruvalcaba [mailto:chipah...@hotmail.com]
Sent: Monday, January 20, 2014 9:44 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Can somebody help me to determine if a URL validation
in python-glanceclient & horizon projects is safe

I'm implementing a URL validation that checks if the external location value
provided exists and if it's reachable. To achieve that I'm using the urlopen
method of the six.moves.urllib.request module, which seems similar to
Django's deprecated verify_exists option. I'm wondering if I can proceed
with the current implementation or if there's another way to implement those
validations:

https://review.openstack.org/#/c/64295/
https://review.openstack.org/#/c/64312/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds

2014-01-20 Thread Sylvain Bauza
2014/1/20 Jay Pipes 

>
>
> But I believe that the two concerns can be tackled separately.
>
>
Indeed. I fully agree that isolation can be provided by Nova, and
Climate would, by the way, be happy to leverage it for providing capacity
planning on top of it.
I'm also personally convinced that there are many Nova features that can
be shared with other projects: the scheduler (that's why I'm following
Gantt's progress), the extensible resource tracker, and tenancy isolation
for objects.
The scheduler went its own way with its own service, but maybe Oslo is the
best path for the others (Climate is trying to follow what happens with
model_query() and all the connected concerns, like for example the deleted
flag, which makes no sense IMHO).

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Subnet mode - API extension or change to core API?

2014-01-20 Thread Collins, Sean
On Mon, Jan 13, 2014 at 07:32:29PM +0100, Ian Wells wrote:
> To fill others in, we've had discussions on the rest of the patch and
> Shixiong is working on it now, the current plan is:
> 
> New subnet attribute ipv6_address_auto_config (not catchy, but because of

Hi, will this patch replace https://review.openstack.org/#/c/52983/ or
be based on it? Let me know since I need to update that review to
address reviewer suggestions.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] firewall_driver and ML2 and vif_security discussion

2014-01-20 Thread Ian Wells
On 20 January 2014 10:13, Mathieu Rohon  wrote:

> With such an architecture, we wouldn't have to tell neutron about
> vif_security or vif_type when it creates a port. When Neutron gets
> called with port_create, it should only return the tap created.
>

Not entirely true.  Not every libvirt port is a tap; if you're doing things
with PCI passthrough attachment you want different libvirt configuration
(and, in this instance, also different Xen and everything else
configuration), and you still need vif_type to distinguish.  You just don't
need 101 values for 'this is a *special and unique* sort of software
bridge'.

> I don't know if such a proposal is reasonable since I can't find good
> information about the ability of libvirt to use an already created
> tap, when it creates a VM. It seems to be usable with KVM.
> But I would love to have feedback from the community on this
> architecture. Maybe it has already been discussed on the ML, so
> please give me the pointer.

libvirt will attach to many things, but I'm damned if I can work out if it
will attach to a tap, either.

To my mind, it would make that much more sense if Neutron created,
networked and firewalled a tap and returned it completely set up (versus
now, where the VM can start with a half-configured set of separation and
firewall rules that get patched up asynchronously).
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder unit test failure

2014-01-20 Thread iKhan
Continued with ConfigParser; not much difference apart from the clean way
of maintaining the INI file in iniparse. I think unless a better solution is
found, it's good to go with this.

Thanks again John.


On Mon, Jan 20, 2014 at 11:47 PM, John Griffith  wrote:

> On Mon, Jan 20, 2014 at 11:15 AM, John Griffith
>  wrote:
> > On Mon, Jan 20, 2014 at 10:30 AM, iKhan  wrote:
> >> Thanks John,
> >>
> >> It worked earlier while executing because iniparse was installed, tho
> this
> >> wasn't present in virtual environment. Installing iniparse via pip did
> work.
> >> Since I didn't install iniparse specifically, I was under impression it
> was
> >> there by default. Probably now I have to take care of this in
> >> test-requirement.txt as you mentioned.
> >>
> >> I wonder if there is an alternative to iniparse by default.
> >>
> >> Regards
> >>
> >>
> >> On Mon, Jan 20, 2014 at 10:47 PM, John Griffith
> >>  wrote:
> >>>
> >>> On Mon, Jan 20, 2014 at 10:07 AM, iKhan  wrote:
> >>> > Hi,
> >>> >
> >>> > I have imported iniparse to my cinder code, it works fine when I
> perform
> >>> > execution. But when I run the unit test, it fails while importing
> >>> > iniparse.
> >>> > It says "No module named iniparse". Do I have to take care of
> something
> >>> > here?
> >>> >
> >>> > --
> >>> > Thanks,
> >>> > Ibad Khan
> >>> > 9686594607
> >>> >
> >>> > ___
> >>> > OpenStack-dev mailing list
> >>> > OpenStack-dev@lists.openstack.org
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>>
> >>> It sounds like it's not installed on your system.  You'd need to do a
> >>> "pip install iniparse", but if you're adding this to your unit tests
> >>> you'll need to have a look at the common test-requires file.  Also
> >>> keep in mind if your driver is going to rely on it you'll need it in
> >>> requirements.  We can work through the details via IRC if you like.
> >>>
> >>> John
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> >> --
> >> Thanks,
> >> Ibad Khan
> >> 9686594607
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > there is check out openstack.common.iniparser, not sure if it'll fit
> > your needs or not.
> DOH!!  Disregard that
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,
Ibad Khan
9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [third-party-testing] Which patchset triggered the test

2014-01-20 Thread Mohammad Banikazemi

I have a question regarding the Jenkins/Gerrit setup for third party testing
setups.

When Jenkins gets triggered by a patchset through the Gerrit trigger
plug-in, you can execute a set of shell scripts. How do you get the
information about the patchset that triggered the test? In particular, in
your scripts, how do you figure out which patchset triggered the test? Here
is why I am asking this question:
During our earlier IRC calls we said, one approach for testing would be
using devstack to install OpenStack and run appropriate tests. The devstack
stack.sh brings in the master branch without the patchset which triggered
the test. How do I access the patchset I want to test? Am I missing
something here?

Thanks,

Mohammad
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday January 21st at 19:00 UTC

2014-01-20 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday January 21st, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Meetup Schedule Posted!

2014-01-20 Thread Mark Washenberger
On Mon, Jan 20, 2014 at 7:44 AM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi Mark,
>
> Happy Martin Luther King Jr. Day!
>
> Will Google hangout or skype meeting available for remote participants? I
> know few engineers who will not be able to attend this mini-summit in
> person but they will be happy to join remotely.
>

We're going to try to do our best. The discussions will definitely be
recorded and published. In addition I've been trying to figure out a way to
broadcast the video live in a way that an international audience can
access. I'm not sure if Google Hangouts fits that bill, but perhaps the
"Hangouts on Air" feature would be a good way to go. Are there some folks
out there who can help me test this out? Or has anyone had good experiences
with some alternative means? I've also been considering justin.tv.

If we do manage to get the broadcasting setup, I think remote participants
are going to have to provide their feedback through text-based means (i.e.
etherpad chat or IRC).


>
> Thanks,
> Georgy
>
>
> On Mon, Jan 20, 2014 at 1:22 AM, Mark Washenberger <
> mark.washenber...@markwash.net> wrote:
>
>> Hi folks,
>>
>> First things first: Happy Martin Luther King Jr. Day!
>>
>> Our mini summit / meetup for the Icehouse cycle will take place in one
>> week's time. To ensure we are all ready and know what to expect, I have
>> started a wiki page tracking the event details and a tentative schedule.
>> Please have a look if you plan to attend.
>>
>> https://wiki.openstack.org/wiki/Glance/IcehouseCycleMeetup
>>
>> I have taken the liberty of scheduling several of the topics we have
>> already discussed. Let me know if anything in the existing schedule creates
>> a conflict for you. There are also presently 4 unclaimed slots in the
>> schedule. If your topic is not yet scheduled, please tell me the time you
>> want and I will update accordingly.
>>
>> EXTRA IMPORTANT: If you plan to attend the meetup but have not spoken
>> with me, please respond as soon as possible to let me know your plans. We
>> have a limited number of seats remaining.
>>
>> Cheers,
>> markwash
>> 
>>
>> "Our only hope today lies in our ability to recapture the revolutionary
>> spirit and go out into a sometimes hostile world declaring eternal
>> hostility to poverty, racism, and militarism."
>>
>> "I knew that I could never again raise my voice against the violence of
>> the oppressed in the ghettos without having first spoken clearly to the
>> greatest purveyor of violence in the world today, my own government."
>>
>>  - Martin Luther King, Jr.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder unit test failure

2014-01-20 Thread John Griffith
On Mon, Jan 20, 2014 at 11:15 AM, John Griffith
 wrote:
> On Mon, Jan 20, 2014 at 10:30 AM, iKhan  wrote:
>> Thanks John,
>>
>> It worked earlier while executing because iniparse was installed, tho this
>> wasn't present in virtual environment. Installing iniparse via pip did work.
>> Since I didn't install iniparse specifically, I was under impression it was
>> there by default. Probably now I have to take care of this in
>> test-requirement.txt as you mentioned.
>>
>> I wonder if there is an alternative to iniparse by default.
>>
>> Regards
>>
>>
>> On Mon, Jan 20, 2014 at 10:47 PM, John Griffith
>>  wrote:
>>>
>>> On Mon, Jan 20, 2014 at 10:07 AM, iKhan  wrote:
>>> > Hi,
>>> >
>>> > I have imported iniparse to my cinder code, it works fine when I perform
>>> > execution. But when I run the unit test, it fails while importing
>>> > iniparse.
>>> > It says "No module named iniparse". Do I have to take care of something
>>> > here?
>>> >
>>> > --
>>> > Thanks,
>>> > Ibad Khan
>>> > 9686594607
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>> It sounds like it's not installed on your system.  You'd need to do a
>>> "pip install iniparse", but if you're adding this to your unit tests
>>> you'll need to have a look at the common test-requires file.  Also
>>> keep in mind if your driver is going to rely on it you'll need it in
>>> requirements.  We can work through the details via IRC if you like.
>>>
>>> John
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Thanks,
>> Ibad Khan
>> 9686594607
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> there is also openstack.common.iniparser to check out, not sure if it'll fit
> your needs or not.
DOH!!  Disregard that

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder unit test failure

2014-01-20 Thread John Griffith
On Mon, Jan 20, 2014 at 10:30 AM, iKhan  wrote:
> Thanks John,
>
> It worked earlier while executing because iniparse was installed, though it
> wasn't present in the virtual environment. Installing iniparse via pip did work.
> Since I didn't install iniparse specifically, I was under the impression it was
> there by default. Probably now I have to take care of this in
> test-requirements.txt as you mentioned.
>
> I wonder if there is an alternative to iniparse by default.
>
> Regards
>
>
> On Mon, Jan 20, 2014 at 10:47 PM, John Griffith
>  wrote:
>>
>> On Mon, Jan 20, 2014 at 10:07 AM, iKhan  wrote:
>> > Hi,
>> >
>> > I have imported iniparse to my cinder code, it works fine when I perform
>> > execution. But when I run the unit test, it fails while importing
>> > iniparse.
>> > It says "No module named iniparse". Do I have to take care of something
>> > here?
>> >
>> > --
>> > Thanks,
>> > Ibad Khan
>> > 9686594607
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> It sounds like it's not installed on your system.  You'd need to do a
>> "pip install iniparse", but if you're adding this to your unit tests
>> you'll need to have a look at the common test-requires file.  Also
>> keep in mind if your driver is going to rely on it you'll need it in
>> requirements.  We can work through the details via IRC if you like.
>>
>> John
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Thanks,
> Ibad Khan
> 9686594607
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

there is also openstack.common.iniparser to check out, not sure if it'll fit
your needs or not.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savannaclient v2 api

2014-01-20 Thread Matthew Farrellee

On 01/20/2014 12:50 PM, Andrey Lazarev wrote:

Inlined.


On Mon, Jan 20, 2014 at 8:15 AM, Matthew Farrellee  wrote:

(inline, trying to make this readable by a text-only mail client
that doesn't use tabs to indicate quoting)

On 01/20/2014 02:50 AM, Andrey Lazarev wrote:

--
FIX - @rest.get('/jobs/config-hints/') - should move to
GET /plugins/<plugin_name>/<plugin_version>, similar to
get_node_processes and get_required_image_tags
--
Not sure if it should be plugin specific right now. EDP uses it
to show some configs to users in the dashboard; it's just a cosmetic
thing. Also, when a user starts defining some configs for a job, he
might not have defined a cluster yet, and thus no plugin to run this
job. I think we should leave it as is and leave only abstract configs
like Mapper/Reducer class and allow users to apply any key/value
configs if needed.

FYI, the code contains comments suggesting it should be plugin specific.

https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179

IMHO, the EDP should have no plugin specific dependencies.

If it currently does, we should look into why and see if we can't
eliminate this entirely.

[AL] EDP uses plugins in two ways:
1. for HDFS user
2. for config hints
I think both items should not be plugin specific on the EDP API level. But
the implementation should go to the plugin and call the plugin API for the
result.


In fact, they are both plugin specific. The user is forced to click
through a plugin selection (when launching a job on a transient
cluster), or the plugin selection has already occurred (when
launching a job on an existing cluster).

Since the config is something that is plugin specific, you might not
have hbase hints from vanilla but you would from hdp, and you
already have plugin information whenever you ask for a hint, so my view
that this should be under the /plugins namespace is growing stronger.


[AL] Disagree. They are plugin specific, but EDP itself could have
additional plugin-independent logic inside. Now config hints return EDP
properties (like mapred.input.dir) as well as plugin-specific
properties. Placing it under the /plugins namespace will give the
impression that it is fully plugin specific.

I would like to see the EDP API fully plugin independent and in one
workspace. If the core side needs some information internally, it can
easily go into the plugin.


I'm not sure if we're disagreeing. We may, in fact, be in violent agreement.

The EDP API is fully plugin independent, and should stay that way as a 
project goal. config-hints is extra data that the horizon app can use to 
help give users suggestions about what config they may want to 
optionally add to their job. Those config options are independent of the 
job and specific to the cluster where the job will run, which is the 
purview of the plugin.


Moving config-hints out of the EDP API will make this even more clear.

Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds

2014-01-20 Thread Jay Pipes
On Mon, 2014-01-20 at 17:49 +0100, Sylvain Bauza wrote:
> Hi Jay,
> 
> On 20/01/2014 17:34, Jay Pipes wrote:
> 
> > On Mon, Jan 20, 2014 at 11:18 AM, Sylvain Bauza
> >  wrote:
> > Jay, please be aware of the existence of Climate, which is a
> > Stackforge project for managing dedicated resources (like
> > AWS reserved instances). This is not another API extension,
> > but another API endpoint for creating what we call "leases"
> > which can be started now or in the future and last for a
> > certain amount of time. We personally think there is a
> > space for Reservations in Openstack, and this needs to be
> > done as a service.
> > 
> > 
> > Hi Sylvain! Hope all is well with you :)
> 
> Thanks, doing well but a bit under pressure, as Climate has its 0.1
> milestone this week...

Understood :)

> > So, I actually don't think the two concepts (reservations and
> > "isolated instances") are competing ideas. Isolated instances are
> > actually not reserved. They are simply instances that have a
> > condition placed on their assignment to a particular compute node
> > that the node must only be hosting other instances of one or more
> > specified projects (tenants).
> 
> I got your idea. This filter [1] already does most of the work,
> although it relies on aggregates and requires admin management. The
> main issue with isolated instances is that it requires a kind of
> capacity planning to make sure you can cope with the load; that's
> why we raised the idea of having such a placement scheduler.
> 
> [1] :
> https://github.com/openstack/nova/blob/master/nova/scheduler/filters/aggregate_multitenancy_isolation.py
>  

Right, the difference between that and my proposed solution would be
there would be no dependency on any aggregate at all.

I do understand your point about capacity planning in light of such
scheduling functionality -- due to the higher likelihood that compute
nodes would be unable to service a more general workload from other
tenants.

But I believe that the two concerns can be tackled separately.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savannaclient v2 api

2014-01-20 Thread Andrey Lazarev
Inlined.


On Mon, Jan 20, 2014 at 8:15 AM, Matthew Farrellee  wrote:

> (inline, trying to make this readable by a text-only mail client that
> doesn't use tabs to indicate quoting)
>
> On 01/20/2014 02:50 AM, Andrey Lazarev wrote:
>
>> --
>> FIX - @rest.get('/jobs/config-hints/') - should move to
>> GET /plugins/<plugin_name>/<plugin_version>, similar to
>> get_node_processes and get_required_image_tags
>> --
>> Not sure if it should be plugin specific right now. EDP uses it
>> to show some configs to users in the dashboard; it's just a cosmetic
>> thing. Also, when a user starts defining some configs for a job, he
>> might not have defined a cluster yet, and thus no plugin to run this
>> job. I think we should leave it as is and leave only abstract configs
>> like Mapper/Reducer class and allow users to apply any key/value
>> configs if needed.
>>
>>
>> FYI, the code contains comments suggesting it should be plugin
>> specific.
>>
>> https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179
>>
>> IMHO, the EDP should have no plugin specific dependencies.
>>
>> If it currently does, we should look into why and see if we can't
>> eliminate this entirely.
>>
>> [AL] EDP uses plugins in two ways:
>> 1. for HDFS user
>> 2. for config hints
>> I think both items should not be plugin specific on the EDP API level. But
>> the implementation should go to the plugin and call the plugin API for the
>> result.
>>
>
> In fact, they are both plugin specific. The user is forced to click through
> a plugin selection (when launching a job on a transient cluster), or the
> plugin selection has already occurred (when launching a job on an existing
> cluster).
>
> Since the config is something that is plugin specific, you might not have
> hbase hints from vanilla but you would from hdp, and you already have
> plugin information whenever you ask for a hint, so my view that this should
> be under the /plugins namespace is growing stronger.
>

[AL] Disagree. They are plugin specific, but EDP itself could have
additional plugin-independent logic inside. Now config hints return EDP
properties (like mapred.input.dir) as well as plugin-specific properties.
Placing it under the /plugins namespace will give the impression that it is
fully plugin specific.

I would like to see the EDP API fully plugin independent and in one
workspace. If the core side needs some information internally, it can
easily go into the plugin.


> Best,
>
>
> matt
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Can somebody help me to determine if a URL validation in python-glanceclient & horizon projects is safe

2014-01-20 Thread Victor Joel Morales Ruvalcaba
I'm implementing a URL validation that checks if the external location value
provided exists and if it's reachable. To achieve that I'm using the urlopen
method of the six.moves.urllib.request module, which seems similar to
Django's deprecated verify_exists option. I'm wondering if I can proceed
with the current implementation or if there's another way to implement those
validations:

https://review.openstack.org/#/c/64295/
https://review.openstack.org/#/c/64312/
  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder unit test failure

2014-01-20 Thread iKhan
Thanks John,

It worked earlier while executing because iniparse was installed, though it
wasn't present in the virtual environment. Installing iniparse via pip did
work. Since I didn't install iniparse specifically, I was under the impression
it was there by default. Probably now I have to take care of this in
test-requirements.txt as you mentioned.

I wonder if there is an alternative to iniparse by default.

Regards


On Mon, Jan 20, 2014 at 10:47 PM, John Griffith  wrote:

> On Mon, Jan 20, 2014 at 10:07 AM, iKhan  wrote:
> > Hi,
> >
> > I have imported iniparse to my cinder code, it works fine when I perform
> > execution. But when I run the unit test, it fails while importing
> iniparse.
> > It says "No module named iniparse". Do I have to take care of something
> > here?
> >
> > --
> > Thanks,
> > Ibad Khan
> > 9686594607
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> It sounds like it's not installed on your system.  You'd need to do a
> "pip install iniparse", but if you're adding this to your unit tests
> you'll need to have a look at the common test-requires file.  Also
> keep in mind if your driver is going to rely on it you'll need it in
> requirements.  We can work through the details via IRC if you like.
>
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,
Ibad Khan
9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-20 Thread Salvatore Orlando
Yair is probably referring to statistically independent tests, or any case
for which the following is true (P(x) is the probability that a test
succeeds):

P(4|3|2|1) = P(4|1) * P(3|1) * P(2|1)
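
(Spelled out, on one plausible reading of the shorthand, with T_i the event
"test #i passes" -- i.e. conditional independence given the common step #1:)

P(T_2 \cap T_3 \cap T_4 \mid T_1) = P(T_2 \mid T_1)\,P(T_3 \mid T_1)\,P(T_4 \mid T_1)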

This might apply to the tests we are adding to network_basic_ops scenario;
however it is worth noting that:

- in some cases the above relationship does not hold. For instance, a public
network connectivity test can hardly succeed if the private connectivity
test failed (is that correct? I'm not sure of anything anymore these days!)
- Sean correctly pointed out that splitting tests will cause repeated
activities which will just make the test run longer without any additional
benefit.

On the other hand, I understand and share the feeling that we are adding
too much to the same workflow. Would it make sense to identify a few
conceptually independent workflows, identify one or more advanced network
scenarios, and keep only internal + public connectivity checks in basic_ops?
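
To make the trade-off concrete, a toy calculation with invented numbers
(purely illustrative):

  # Per-test success probabilities, picked arbitrarily.
  p1, p2, p3 = 0.99, 0.95, 0.95

  # Chained (#1 -> #2 -> #3): #3 only gives feedback when #2 passed,
  # even if the two checks are functionally unrelated.
  chained = p1 * p2 * p3   # ~0.89

  # Split (#1 -> #2 and #1 -> #3): feedback on #3 depends only on #1,
  # at the cost of running the #1 setup twice.
  split = p1 * p3          # ~0.94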

Salvatore


On 20 January 2014 09:23, Jay Pipes  wrote:

> On Sun, 2014-01-19 at 07:17 -0500, Yair Fried wrote:
> > OK,
> > but considering my pending patch (#3 and #4)
> > what about:
> >
> > #1 -> #2
> > #1 -> #3
> > #1 -> #4
> >
> > instead of
> >
> > #1 -> #2 -> #3 -> #4
> >
> > a failure in #2 will prevent #3 and #4 from running even though they are
> completely unrelated
>
> Seems to me that the above is a logical fault. If a failure in #2
> prevents #3 or #4 from running, then by nature they are related to #2.
>
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder unit test failure

2014-01-20 Thread John Griffith
On Mon, Jan 20, 2014 at 10:07 AM, iKhan  wrote:
> Hi,
>
> I have imported iniparse into my cinder code, and it works fine when I run
> it. But when I run the unit tests, it fails while importing iniparse.
> It says "No module named iniparse". Do I have to take care of something
> here?
>
> --
> Thanks,
> Ibad Khan
> 9686594607
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

It sounds like it's not installed on your system.  You'd need to do a
"pip install iniparse", but if you're adding this to your unit tests
you'll need to have a look at the common test-requires file.  Also
keep in mind if your driver is going to rely on it you'll need it in
requirements.  We can work through the details via IRC if you like.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes - 01/20/2014

2014-01-20 Thread Renat Akhmerov
Hi,

Thank you for joining us today at #openstack-meeting. Here are the links to 
meeting minutes and logs:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-01-20-16.00.html
Logs: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-01-20-16.00.log.html

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cinder unit test failure

2014-01-20 Thread iKhan
Hi,

I have imported iniparse into my cinder code, and it works fine when I run
it. But when I run the unit tests, it fails while importing iniparse.
It says "No module named iniparse". Do I have to take care of something
here?

-- 
Thanks,
Ibad Khan
9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest - Stress test] : cleanup() removing resources for all tenants with an admin_manager

2014-01-20 Thread Boris Pavlovic
Julien,

You should probably try Rally for benchmarking.
https://wiki.openstack.org/wiki/Rally

It already has a working generic cleanup...

It also implements a framework that allows parametrized benchmarks:
https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L32-L39

There is a simple way to configure load using JSON (the load will be created
by real users, not the admin, pre-created for each benchmark):
https://github.com/stackforge/rally/blob/master/doc/samples/tasks/nova/boot-and-delete.json


And a simple CLI interface (we are now working on a Web UI).


Best regards,
Boris Pavlovic


On Mon, Jan 20, 2014 at 8:32 PM, LELOUP Julien wrote:

>  Hi everyone,
>
>
>
> I’m forwarding my own email previously posted on the QA list.
>
>
>
> I would like to discuss the cleanup() process used right after a
> stress test run in Tempest.
>
>
>
> From what I have seen by using it and reading the code, the cleanup()
> seems a bit rough since it uses an "admin_manager" in order to get all
> kinds of test resources actually available: servers, key pairs, volumes,
> etc.
>
> More precisely, when it comes to clean servers, it is searching for
> servers on all tenants. I find this behavior a little rough since it will
> blow away all objects on the target OpenStack, even objects unrelated to the
> stress tests that just ran.
>
>
>
> Actually before reading the cleanup() I had a problem when one of my
> stress tests erased all the servers and volumes on another tenant, which
> impaired other people working on our OpenStack.
>
>
>
> I can imagine that for some scenarios, using an admin user to deeply clean
> an OpenStack is required, but I believe that most of the time the cleanup()
> process should focus only on the tenant used during the stress test and
> leave the other tenants alone.
>
>
>
> Am I doing something wrong? Is there a way to restrain the cleanup()
> process?
>
>
>
> If no parameter or configuration option allows me to do so, should I improve the
> cleanup() code in order to allow it to remove only the test resources
> created for the test?
>
> I do not wish to write this kind of code if the OpenStack community believes
> that the present behavior is intended and should not be modified.
>
>
>
>
>
> Best Regards,
>
>
>
> Julien LELOUP
>
> julien.lel...@3ds.com
>
> This email and any attachments are intended solely for the use of the
> individual or entity to whom it is addressed and may be confidential and/or
> privileged.
>
> If you are not one of the named recipients or have received this email in
> error,
>
> (i) you should not read, disclose, or copy it,
>
> (ii) please notify sender of your receipt by reply email and delete this
> email and all attachments,
>
> (iii) Dassault Systemes does not accept or assume any liability or
> responsibility for any use of or reliance on this email.
>
>  For other languages, go to http://www.3ds.com/terms/email-disclaimer
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds

2014-01-20 Thread Sylvain Bauza

Hi Jay,

On 20/01/2014 17:34, Jay Pipes wrote:
On Mon, Jan 20, 2014 at 11:18 AM, Sylvain Bauza wrote:


Jay, please be aware of the existence of Climate, which is a
Stackforge project for managing dedicated resources (like AWS
reserved instances). This is not another API extension, but
another API endpoint for creating what we call "leases" which can
be started now or in the future and last for a certain amount of
time. We personnally think there is a space for Reservations in
Openstack, and this needs to be done as a service.


Hi Sylvain! Hope all is well with you :)


Thanks, doing well but a bit under pressure, as Climate has its 0.1 
milestone this week...




So, I actually don't think the two concepts (reservations and 
"isolated instances") are competing ideas. Isolated instances are 
actually not reserved. They are simply instances that have a condition 
placed on their assignment to a particular compute node that the node 
must only be hosting other instances of one or more specified projects 
(tenants).




I get your idea. This filter [1] already does most of the work, although 
it relies on aggregates and requires admin management. The main issue 
with isolated instances is that they require some capacity planning to 
make sure you can cope with the load; that's why we raised the 
idea of having such a placement scheduler.


[1] : 
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/aggregate_multitenancy_isolation.py 


Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds

2014-01-20 Thread Jay Pipes
On Mon, Jan 20, 2014 at 11:18 AM, Sylvain Bauza wrote:

> Jay, please be aware of the existence of Climate, which is a Stackforge
> project for managing dedicated resources (like AWS reserved instances).
> This is not another API extension, but another API endpoint for creating
> what we call "leases" which can be started now or in the future and last
> for a certain amount of time. We personnally think there is a space for
> Reservations in Openstack, and this needs to be done as a service.
>

Hi Sylvain! Hope all is well with you :)

So, I actually don't think the two concepts (reservations and "isolated
instances") are competing ideas. Isolated instances are actually not
reserved. They are simply instances that have a condition placed on their
assignment to a particular compute node that the node must only be hosting
other instances of one or more specified projects (tenants).

Best,
-jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tempest - Stress test] : cleanup() removing resources for all tenants with an admin_manager

2014-01-20 Thread LELOUP Julien
Hi everyone,

I'm forwarding my own email previously posted on the QA list.

I would like to discuss the cleanup() process used right after a stress 
test run in Tempest.

From what I have seen by using it and reading the code, the cleanup() seems a 
bit rough since it uses an "admin_manager" in order to get all kinds of test 
resources actually available: servers, key pairs, volumes, etc.
More precisely, when it comes to cleaning servers, it searches for servers on 
all tenants. I find this behavior a little rough since it will blow away all 
objects on the target OpenStack, even objects unrelated to the stress tests 
that just ran.

Actually, before reading the cleanup() I had a problem when one of my stress 
tests erased all the servers and volumes on another tenant, which impaired other 
people working on our OpenStack.

I can imagine that for some scenarios, using an admin user to deeply clean an 
OpenStack is required, but I believe that most of the time the cleanup() 
process should focus only on the tenant used during the stress test and leave 
the other tenants alone.

Am I doing something wrong? Is there a way to restrain the cleanup() process?

If no parameter or configuration option allows me to do so, should I improve the 
cleanup() code in order to allow it to remove only the test resources created 
for the test?
I do not wish to write this kind of code if the OpenStack community believes 
that the present behavior is intended and should not be modified.
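
For what it's worth, a minimal sketch of a tenant-scoped cleanup using
python-novaclient (illustrative only: credentials and endpoint are
placeholders, and a real cleanup would also cover key pairs, volumes, etc.):

  from novaclient.v1_1 import client

  # Credentials of the tenant used for the stress run, not an admin.
  nova = client.Client('stress_user', 'secret', 'stress_tenant',
                       'http://keystone:5000/v2.0/')

  # servers.list() is scoped to the authenticated tenant by default,
  # so other tenants' instances are never touched.
  for server in nova.servers.list():
      server.delete()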


Best Regards,

Julien LELOUP
julien.lel...@3ds.com

This email and any attachments are intended solely for the use of the 
individual or entity to whom it is addressed and may be confidential and/or 
privileged.

If you are not one of the named recipients or have received this email in error,

(i) you should not read, disclose, or copy it,

(ii) please notify sender of your receipt by reply email and delete this email 
and all attachments,

(iii) Dassault Systemes does not accept or assume any liability or 
responsibility for any use of or reliance on this email.

For other languages, go to http://www.3ds.com/terms/email-disclaimer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Reseting issues that need attention

2014-01-20 Thread Salvatore Orlando
I gave a -2 yesterday to all my Neutron patches. I did that because I
thought something was wrong with them, but then I started to realize it's a
general problem.
It makes sense to give some priority to the patches Eugene linked, even if
it would be better to have some people root causing the issues that are
plaguing the neutron gate. I saw at least 4 different failure modes and I'm
not even sure whether it's due to Neutron or the tempest changes we merged
during the sprint.

Salvatore

PS: I can't look at those issues today, as I have some backlog from last
week to deal with.


On 20 January 2014 09:43, Eugene Nikanorov  wrote:

> Hi Sean,
>
> I think the following 2 commits in neutron are essential for bringing
> neutron jobs back to an acceptable level of failure rate:
> https://review.openstack.org/#/c/67537/
> https://review.openstack.org/#/c/66670/
>
> Thanks,
> Eugene.
>
>
> On Mon, Jan 20, 2014 at 6:33 PM, Sean Dague  wrote:
>
>> Anyone that's looked at the gate this morning... knows things aren't
>> good. It turns out that a few new races got into OpenStack last week,
>> which are causing a ton of pain, and have put us dramatically over the
>> edge.
>>
>> We've not tracked down all of them, but 2 that are quite important to
>> address are:
>>
>>  - Bug 1270680 - v3 extensions api inherently racey wrt instances
>>  - Bug 1270608 - n-cpu 'iSCSI device not found' log causes
>> gate-tempest-dsvm-*-full to fail
>>
>> Both can be seen as very new issues here -
>> http://status.openstack.org/elastic-recheck/
>>
>> We've got a short term work around on 1270680 which we're going to take
>> into the gate now (and fix it better later).
>>
>> 1270608 is still in desperate need of fixing.
>>
>>
>> Neutron is in a whole other level of pain. Over the weekend I found the
>> isolated jobs are in a 70% fail state, which means the overall chance
>> for success for Neutron / Neutron client patches is < 5%. As such I'd
>> suggest a moratorium on them going into the gate at this point, as they
>> are basically guaranteed to fail.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> Samsung Research America
>> s...@dague.net / sean.da...@samsung.com
>> http://dague.net
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Third-party drivers and testing

2014-01-20 Thread Jay Pipes
On Sun, Jan 19, 2014 at 2:47 PM, Devananda van der Veen <
devananda@gmail.com> wrote:

> Hi all,
>
> I've been thinking about how we should treat third-party drivers in Ironic
> for a while, and had several discussions at the Hong Kong summit and last
> week at LCA. Cinder, Nova, Neutron, and TripleO are all having similar
> discussions, too. What follows is a summary of my thoughts and a proposal
> for our project's guidelines to vendors.
>

I applaud the effort. I'm actually currently in the process of writing up
instructions for Cinder and Neutron vendors interested in constructing a
3rd party testing platform that uses the openstack-infra tooling as much as
possible. (Yes, I know there is existing documentation on ci.openstack.org,
but based on discussions this past week with the Neutron vendors
implementing these test platforms, there are a number of areas that are
poorly understood and some more detail is clearly needed).

I would hope the docs I'm putting together for Cinder and Neutron will
require little, if any, changes for similar instructions for Ironic 3rd
party testers.


> Before requiring that degree of testing, I would like to be able to direct
> vendors at a working test suite which they can copy. I expect us to have
> functional testing for the PXE and SSH drivers within Tempest and devstack
> / devstack-gate either late in this cycle or early next cycle. Around the
> same time, I believe TripleO will switch to using Ironic in their test
> suite, so we'll have coverage of the IPMI driver on real hardware as well
> (this may be periodic coverage rather than per-test feedback initially).
>

I think using Tempest as that working test suite would be the best way to
go. Cinder 3rd party testing is going in this direction (the cinder_cert/
directory in devstack simply sets up Tempest, sets the appropriate Cinder
driver properly in the cinder.conf and then runs the Tempest Volume API
tests). A similar approach would work for Ironic, I believe, once the Ironic
API tests are complete for Tempest.
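
(For reference, the run in that flow boils down to the Tempest volume suite;
assuming a standard devstack layout, something like

  cd /opt/stack/tempest
  testr run tempest.api.volume

with the exact invocation possibly varying by branch.)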


> I am proposing that we provisionally allow in vendor drivers this cycle
> with the following requirements, and that we draw the line at the J3
> milestone to give everyone ample time to get testing up on real hardware --
> without blocking innovation now. At that time, we may kick out third-party
> drivers if these criteria haven't been met.
>
> 1. each driver must adhere to the existing driver interfaces.
> 2. each driver must have comprehensive unit test coverage and sufficient
> inline documentation.
> 3. vendors are responsible for fixing bugs in their driver in a timely
> fashion.
> 4. vendors commit to have third-party testing on a supported hardware
> platform implemented by the J3 milestone.
> 5. vendors contribute a portion of at least one developer's time to
> upstream participation.
>

All good things. However, specificity is critical here. What does
"sufficient inline documentation" entail? Who is the arbiter? What does
"comprehensive unit test coverage" mean? 90%? 100%? What does "timely
fashion" mean? Within 2 days? By X milestone?

The more specificity, the less miscommunication will occur.

And, BTW, the above goes for all the driver verification programs currently
being fleshed out... not just Ironic, of course! :)

Best,
-jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Apparently weird timeout issue

2014-01-20 Thread Salvatore Orlando
I think you're right Darragh.

It was actually Montreal's snow and cold freezing my brain: I investigated
the same issue a while ago and tried to change cirrOS to send a DHCPDISCOVER
every 10 seconds instead of 60, but then I moved on to something else as I
wasn't even sure a new cirros base image could have been brought into the
gate tests.

I think I also sent a related email to the mailing list, suggesting to
increase timeouts to a value that would ensure at least a second
DHCPDISCOVER is sent by the VM. Anyway, we have a few patches which should
make this failure mode less frequent. They're all -2 currently as they're
always failing the gate (and I don't know why). However, from another email
Sean recently sent, it seems it's a general Neutron issue.

Salvatore



On 20 January 2014 10:51, Darragh O'Reilly wrote:

>
> On Monday, 20 January 2014, 15:33, Jay Pipes  wrote:
>
> >Sorry for top-posting -- using web mail client.
> no worries - it doesn't bother me.
> >
> >Is it possible to change the retry interval in Cirros (or cloud-init?) so
> that the backoff is less than 60 seconds?
> I think the udhcpc command line parameters are baked into the image. It's
> part of BusyBox, and I'm not even sure if it's configurable from a
> script/text file.
> >
> >Best,
> >
> -jay
> >
> >
> >
> >
> >On Mon, Jan 20, 2014 at 10:23 AM, Darragh O'Reilly <
> dara2002-openst...@yahoo.com> wrote:
> >
> >
> >>I did a test to see what the dhcp client on cirros does. I killed the
> dhcp agent and started an instance. The instance sent the first dhcp discover
> after about 35 sec. Then another 60 sec later, and a final one after
> another 60 sec.
> >>
> >>
> >>So a revised theory for what happened is this:
> >>
> >>t=0 tempest starts vm and starts polling for ACTIVE status
> >>t=20 instance-->ACTIVE and tempest starts polling the floating ip for 60
> sec
> >>t=40 instance does a dhcp discover - no response - so sets a timer for
> 60 sec
> >>t=45 ovs-agent sets the port vlan
> >>t=80 tempest gives up and kills vm
> >>t=100 instance would have sent another dhcp discover now if it had been
> let live
> >>
> >>I think it would be worth trying to change that test to poll for 120
> seconds instead of 60.
> >>
> >>
> >>
> >>On Monday, 20 January 2014, 11:23, Darragh O'Reilly <
> dara2002-openst...@yahoo.com> wrote:
> >>
> >>Hi Salvatore,
> >>>
> >>>
> >>>I presume it's this one?
> >>>
> http://logs.openstack.org/38/65838/4/check/check-tempest-dsvm-neutron-isolated/d108e4a/logs/tempest.txt.gz?#_2014-01-19_20_50_14_604
> >>>
> >>>
> >>>Is it true that the cirros image just fires off a few dhcp discovers
> and then gives up? If so, then maybe it did so before the tagging happened.
> Do we have the instance console log? It took about 45 seconds from when the
> port was created to when it was tagged.
> >>>
> >>>
> >>>2014-01-19 20:48:57.412 8142 DEBUG neutron.agent.linux.ovsdb_monitor
> [-] Output
> received from ovsdb monitor:
>
> {"data":[["3602a7b2-b559-4709-9bf0-53ae2af68d06","insert","tap496b808c-b5"]],"headings":["row","action","name"]}
> >>>
> >>>2014-01-19 20:49:41.925 8142 DEBUG neutron.agent.linux.utils [-]
> >>>Command:
> ['sudo', '/usr/local/bin/neutron-rootwrap',
> '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'set',
> 'Port', 'tap496b808c-b5', 'tag=64']
> >>>Exit code: 0
> >>>
> >>>
> >>>Darragh.
> >>>
> >>>
> >>>
> I have been seeing in the past 2 days timeout failures on gate jobs
> which I
> am struggling to explain. An example is
> available in [1]
> These are the usual failure that we associate with bug 1253896, but
> this
> time I can verify that:
> - The floating IP is correctly wired (IP and NAT rules)
> - The DHCP port is correctly wired, as well as the VM port and the
> router
> port
> - The DHCP agent is correctly started for the network
> 
> However, no DHCP DISCOVER request is sent. Only the DHCP RELEASE
> message is
> seen.
> Any help at interpreting the logs will be appreciated.
> 
> 
> Salvatore
> 
> [1] http://logs.openstack.org/38/65838
> >>>
> >>>
> >>>
> >>___
> >>OpenStack-dev mailing list
> >>OpenStack-dev@lists.openstack.org
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> >
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savannaclient v2 api

2014-01-20 Thread Matthew Farrellee

(inline-ish)

On 01/20/2014 02:36 AM, Andrey Lazarev wrote:




On Sun, Jan 19, 2014 at 7:53 AM, Matthew Farrellee wrote:

On 01/16/2014 09:19 PM, Andrey Lazarev wrote:




REMOVE -
@rest.get('/job-executions/<job_execution_id>/refresh-status')
- refresh
and return status - GET should not side-effect, status is part of
details and
updated periodically, currently unused

This call goes to Oozie directly to ask it about job status. It
allows
not to wait
too long when periodic task will update status JobExecution
object in
Savanna.
The current GET asks status of JobExecution from savanna-db. I
think we can
leave this call, it might be useful for external clients.

[AL] Agree that GET shouldn't have side effect (or at least
documented
side effect). I think it could be generic PUT on
'/job-executions/' which can refresh status
or cancel
job on hadoop side.


 From what I can tell, this endpoint is not exposed by the
savannaclient or used directly from the horizon plugin.

I imagine that having a "savanna-api, please go faster" call is
enticing, but if we're not using it yet, let's make sure we have a
well defined need before adding/keeping it.

[AL] I like to disable 'periodic' in dev environment. And this is the
only way to update job status without periodic.
So, I vote on adding it to savannaclient and to horizon.


IMHO, we should not be adding calls to the client or horizon app that 
would use this command. Instead we should have a well tuned periodic 
value that meets user expectations.


I propose we not expose this as part of the official Savanna API, and we 
look into other options for developer environments that allow for 
triggering a refresh of oozie information. Possibly when savanna-api 
gets a SIGUSR1 it should re-run all periodic tasks?
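
A minimal sketch of that idea (illustrative only; run_all_periodic_tasks
stands in for whatever entry point savanna's periodic machinery actually
exposes):

  import signal

  def run_all_periodic_tasks():
      # Stand-in for savanna's real periodic-task runner.
      pass

  def _on_sigusr1(signum, frame):
      # Dev-only hook: force an immediate run of all periodic tasks
      # instead of waiting for the next scheduled tick.
      run_all_periodic_tasks()

  signal.signal(signal.SIGUSR1, _on_sigusr1)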






REMOVE -
@rest.get('/job-executions/<job_execution_id>/cancel') - cancel
job-execution - GET should not side-effect, currently unused,
use DELETE /job/executions/

Disagree. We have to leave this call. This methods stops job
executing
on the
Hadoop cluster but doesn't remove all its related info from
savanna-db.
DELETE removes it completely.

[AL] We need 'cancel'. Vote on generic PUT (see previous item).


AFAICT, this is also not used. Where is the need?


[AL] I can easily imagine a scenario where canceling is useful.

Both features give some benefit, but are not strictly needed. So, it is a
question of priorities. My vote is on leaving both of them.


I don't disagree that we could come up with scenarios, but we should not 
add these to the Savanna API until we have concrete scenarios to 
implement in the horizon app or CLI.


Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds

2014-01-20 Thread Sylvain Bauza

On 20/01/2014 16:57, Jay Pipes wrote:
On Mon, Jan 20, 2014 at 10:18 AM, Day, Phil wrote:


HI Folks,

The original (and fairly simple) driver behind
whole-host-allocation
(https://wiki.openstack.org/wiki/WholeHostAllocation) was to
enable users to get guaranteed isolation for their instances. 
This then grew somewhat along the lines of "If they have in effect
a dedicated host then wouldn't it be great if the user could also
control some aspect of the scheduling, access for other users,
etc.". The Proof of Concept I presented at the Icehouse Design
summit provided this via API extensions that in effect
manipulate an aggregate and the scheduler filters used with that
aggregate.
https://etherpad.openstack.org/p/NovaIcehousePclouds   Based on the
discussion and feedback from the design summit session it became
clear that this approach was kind of headed into a difficult
middle ground between a very simple approach for users who just
wanted the isolation for their instances, and a fully delegated
admin model which would allow any admin operation to be scoped to
a specific set of servers/flavours/instances


My advice would be to steer as clear as you can from any concept based 
on legacy/traditional managed/dedicated hosting. This means staying 
away from *any concept* that would give the impression to the user 
that they own or control some bare-metal resource. This is, after all, 
a cloud. It isn't dedicated hosting where the customer owns or co-owns 
the hardware. The cloud is all about on-demand, shared resources. In 
this case, the "shared resource" is only shared among the one tenant's 
users, but it's not owned by the tenant. Furthermore, once no longer 
in use by the tenant, the resource may be re-used by other tenants.


Implementing the concept of EC2 dedicated instances is easy in Nova: 
simply attach to the request a list of project identifiers in a 
"limit_nodes_hosting_projects" attribute on the allocation request 
object. The scheduler would see a non-empty value as an indication 
that it must only schedule the instance(s) on compute nodes that are 
only hosting instances owned by one of the projects in that list.


And for the love of all that is holy in this world, please do not 
implement this as yet another API extension.


Best,
-jay



Hi Phil and Jay,

Phil, maybe you remember I discussed with you about the possibility of 
using pclouds with Climate, but we finally ended up using Nova 
aggregates and a dedicated filter. That works pretty fine. We don't use 
instance_properties but rather aggregate metadata but the idea remains 
the same for isolation.


Jay, please be aware of the existence of Climate, which is a Stackforge 
project for managing dedicated resources (like AWS reserved instances). 
This is not another API extension, but another API endpoint for creating 
what we call "leases" which can be started now or in the future and last 
for a certain amount of time. We personally think there is a space for 
Reservations in Openstack, and this needs to be done as a service.


-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savannaclient v2 api

2014-01-20 Thread Matthew Farrellee
(inline, trying to make this readable by a text-only mail client that 
doesn't use tabs to indicate quoting)


On 01/20/2014 02:50 AM, Andrey Lazarev wrote:


--
FIX - @rest.get('/jobs/config-hints/<job_type>') - should move to
GET /plugins/<plugin_name>/<plugin_version>, similar to
get_node_processes
and get_required_image_tags
--
Not sure if it should be plugin specific right now. EDP uses it
to show some
configs to users in the dashboard. It's just a cosmetic thing.
Also, when a user
starts defining some configs for a job he might not have defined a
cluster yet, and
thus no plugin to run this job. I think we should leave it as is
and leave only
abstract configs like Mapper/Reducer class and allow users to
apply any
key/value configs if needed.


FYI, the code contains comments suggesting it should be plugin specific.


https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179



IMHO, the EDP should have no plugin specific dependencies.

If it currently does, we should look into why and see if we can't
eliminate this entirely.

[AL] EDP uses plugins in two ways:
1. for HDFS user
2. for config hints
I think both items should not be plugin specific on EDP API level. But
implementation should go to plugin and call plugin API for result.


In fact they are both plugin specific. The user is forced to click 
through a plugin selection (when launching a job on a transient cluster) 
or the plugin selection has already occurred (when launching a job on an 
existing cluster).


Since the config is plugin specific (you might not have hbase hints from 
vanilla but you would from hdp), and you already have plugin information 
whenever you ask for a hint, my view that this should live under the 
/plugins namespace is growing stronger.


Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds

2014-01-20 Thread Jay Pipes
On Mon, Jan 20, 2014 at 10:18 AM, Day, Phil  wrote:

>  HI Folks,
>
>
>
> The original (and fairly simple) driver behind whole-host-allocation (
> https://wiki.openstack.org/wiki/WholeHostAllocation) was to enable users
> to get guaranteed isolation for their instances.  This then grew somewhat
> along the lines of "If they have in effect a dedicated host then wouldn't
> it be great if the user could also control some aspect of the scheduling,
> access for other users, etc.". The Proof of Concept I presented at the
> Icehouse Design summit provided this via API extensions that in effect
> manipulate an aggregate and the scheduler filters used with that
> aggregate.   https://etherpad.openstack.org/p/NovaIcehousePclouds   Based on
> the discussion and feedback from the design summit session it became clear
> that this approach was kind of headed into a difficult middle ground
> between a very simple approach for users who just wanted the isolation for
> their instances, and a fully delegated admin model which would allow any
> admin operation to be scoped to a specific set of servers/flavours/instances
>

My advice would be to steer as clear as you can from any concept based on
legacy/traditional managed/dedicated hosting. This means staying away from
*any concept* that would give the impression to the user that they own or
control some bare-metal resource. This is, after all, a cloud. It isn't
dedicated hosting where the customer owns or co-owns the hardware. The
cloud is all about on-demand, shared resources. In this case, the "shared
resource" is only shared among the one tenant's users, but it's not owned
by the tenant. Furthermore, once no longer in use by the tenant, the
resource may be re-used by other tenants.

Implementing the concept of EC2 dedicated instances is easy in Nova: simply
attach to the request a list of project identifiers in a
"limit_nodes_hosting_projects" attribute on the allocation request object.
The scheduler would see a non-empty value as an indication that it must
only schedule the instance(s) on compute nodes that are only hosting
instances owned by one of the projects in that list.

And for the love of all that is holy in this world, please do not implement
this as yet another API extension.

Best,
-jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Apparently weird timeout issue

2014-01-20 Thread Darragh O'Reilly

On Monday, 20 January 2014, 15:33, Jay Pipes  wrote:

>Sorry for top-posting -- using web mail client.
no worries - it doesn't bother me.
>
>Is it possible to change the retry interval in Cirros (or cloud-init?) so that 
>the backoff is less than 60 seconds?
I think the udhcpc command line parameters are baked into the image. It's part 
of BusyBox, and I'm not even sure if it's configurable from a script/text file.
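(For reference, if rebuilding the image were an option: BusyBox udhcpc does
take timing flags: -t N sets the number of discover packets, -T SEC the pause
between them, and -A SEC the wait before retrying after failure. So something
like "udhcpc -t 6 -T 10 -A 10" would keep retrying well past the 60-second
window. Flag meanings are per the BusyBox docs; defaults vary by build.)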
>
>Best,
>
-jay
>
>
>
>
>On Mon, Jan 20, 2014 at 10:23 AM, Darragh O'Reilly 
> wrote:
>
>
>>I did a test to see what the dhcp client on cirros does. I killed the dhcp 
>>agent and started an instance. The instance sent the first dhcp discover after 
>>about 35 sec. Then another 60 sec later, and a final one after another 60 sec.
>>
>>
>>So a revised theory for what happened is this:  
>>
>>t=0 tempest starts vm and starts polling for ACTIVE status
>>t=20 instance-->ACTIVE and tempest starts polling the floating ip for 60 sec
>>t=40 instance does a dhcp discover - no response - so sets a timer for 60 sec
>>t=45 ovs-agent sets the port vlan
>>t=80 tempest gives up and kills vm
>>t=100 instance would have sent another dhcp discover now if it had been let 
>>live
>>
>>I think it would be worth trying to change that test to poll for 120 seconds 
>>instead of 60.
>>
>>
>>
>>On Monday, 20 January 2014, 11:23, Darragh O'Reilly 
>> wrote:
>> 
>>Hi Salvatore,
>>>
>>>
>>>I presume it's this one? 
>>>http://logs.openstack.org/38/65838/4/check/check-tempest-dsvm-neutron-isolated/d108e4a/logs/tempest.txt.gz?#_2014-01-19_20_50_14_604
>>>
>>>
>>>Is it true that the cirros image just fires off a few dhcp discovers and 
>>>then gives up? If so, then maybe it did so before the tagging happened. Do 
>>>we have the instance console log? It took about 45 seconds from when the 
>>>port was created to when it was tagged.
>>>
>>>
>>>2014-01-19 20:48:57.412 8142 DEBUG neutron.agent.linux.ovsdb_monitor [-] 
>>>Output 
received from ovsdb monitor: 
{"data":[["3602a7b2-b559-4709-9bf0-53ae2af68d06","insert","tap496b808c-b5"]],"headings":["row","action","name"]}
>>>
>>>2014-01-19 20:49:41.925 8142 DEBUG neutron.agent.linux.utils [-] 
>>>Command:
['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'set', 
'Port', 'tap496b808c-b5', 'tag=64']
>>>Exit code: 0
>>>
>>>
>>>Darragh.
>>>
>>>
>>>
I have been seeing in the past 2 days timeout failures on gate jobs which I
am struggling to explain. An example is
available in [1]
These are the usual failure that we associate with bug 1253896, but this
time I can verify that:
- The floating IP is correctly wired (IP and NAT rules)
- The DHCP port is correctly wired, as well as the VM port and the router
port
- The DHCP agent is correctly started for the network

However, no DHCP DISCOVER request is sent. Only the DHCP RELEASE message is
seen.
Any help at interpreting the logs will be appreciated.


Salvatore

[1] http://logs.openstack.org/38/65838
>>>
>>>
>>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Meetup Schedule Posted!

2014-01-20 Thread Georgy Okrokvertskhov
Hi Mark,

Happy Martin Luther King Jr. Day!

Will a Google Hangout or Skype meeting be available for remote participants?
I know a few engineers who will not be able to attend this mini-summit in
person but would be happy to join remotely.

Thanks,
Georgy


On Mon, Jan 20, 2014 at 1:22 AM, Mark Washenberger <
mark.washenber...@markwash.net> wrote:

> Hi folks,
>
> First things first: Happy Martin Luther King Jr. Day!
>
> Our mini summit / meetup for the Icehouse cycle will take place in one
> week's time. To ensure we are all ready and know what to expect, I have
> started a wiki page tracking the event details and a tentative schedule.
> Please have a look if you plan to attend.
>
> https://wiki.openstack.org/wiki/Glance/IcehouseCycleMeetup
>
> I have taken the liberty of scheduling several of the topics we have
> already discussed. Let me know if anything in the existing schedule creates
> a conflict for you. There are also presently 4 unclaimed slots in the
> schedule. If your topic is not yet scheduled, please tell me the time you
> want and I will update accordingly.
>
> EXTRA IMPORTANT: If you plan to attend the meetup but have not spoken with
> me, please respond as soon as possible to let me know your plans. We have a
> limited number of seats remaining.
>
> Cheers,
> markwash
> 
>
> "Our only hope today lies in our ability to recapture the revolutionary
> spirit and go out into a sometimes hostile world declaring eternal
> hostility to poverty, racism, and militarism."
>
> "I knew that I could never again raise my voice against the violence of
> the oppressed in the ghettos without having first spoken clearly to the
> greatest purveyor of violence in the world today, my own government."
>
>  - Martin Luther King, Jr.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Apparently weird timeout issue

2014-01-20 Thread Jay Pipes
Sorry for top-posting -- using web mail client.

Is it possible to change the retry interval in Cirros (or cloud-init?) so
that the backoff is less than 60 seconds?

Best,
-jay


On Mon, Jan 20, 2014 at 10:23 AM, Darragh O'Reilly <
dara2002-openst...@yahoo.com> wrote:

>
> I did a test to see what the dhcp client on cirros does. I killed the dhcp
> agent and started an instance. The instance sent the first dhcp discover after
> about 35 sec. Then another 60 sec later, and a final one after another 60
> sec.
>
> So a revised theory for what happened is this:
>
> t=0 tempest starts vm and starts polling for ACTIVE status
> t=20 instance-->ACTIVE and tempest starts polling the floating ip for 60
> sec
> t=40 instance does a dhcp discover - no response - so sets a timer for 60
> sec
> t=45 ovs-agent sets the port vlan
> t=80 tempest gives up and kills vm
> t=100 instance would have sent another dhcp discover now if it had been
> let live
>
> I think it would be worth trying to change that test to poll for 120
> seconds instead of 60.
>
>
>   On Monday, 20 January 2014, 11:23, Darragh O'Reilly <
> dara2002-openst...@yahoo.com> wrote:
>
> Hi Salvatore,
>
> I presume it's this one?
>
> http://logs.openstack.org/38/65838/4/check/check-tempest-dsvm-neutron-isolated/d108e4a/logs/tempest.txt.gz?#_2014-01-19_20_50_14_604
>
> Is it true that the cirros image just fires off a few dhcp discovers and
> then gives up? If so, then maybe it did so before the tagging happened. Do
> we have the instance console log? It took about 45 seconds from when the
> port was created to when it was tagged.
>
> 2014-01-19 20:48:57.412 8142 DEBUG neutron.agent.linux.ovsdb_monitor [-]
> Output received from ovsdb monitor:
> {"data":[["3602a7b2-b559-4709-9bf0-53ae2af68d06","insert","tap496b808c-b5"]],"headings":["row","action","name"]}
> 
> 2014-01-19 20:49:41.925 8142 DEBUG neutron.agent.linux.utils [-]
> Command: ['sudo', '/usr/local/bin/neutron-rootwrap',
> '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'set', 'Port',
> 'tap496b808c-b5', 'tag=64']
> Exit code: 0
>
> Darragh.
>
> >I have been seeing in the past 2 days timeout failures on gate jobs which
> I
> >am struggling to explain. An example is available in [1]
> >These are the usual failure that we associate with bug 1253896, but this
> >time I can verify that:
> >- The floating IP is correctly wired (IP and NAT rules)
> >- The DHCP port is correctly wired, as well as the VM port and the router
> >port
> >- The DHCP agent is correctly started for the network
> >
> >However, no DHCP DISCOVER request is sent. Only the DHCP RELEASE message
> is
> >seen.
> >Any help at interpreting the logs will be appreciated.
> >
> >
> >Salvatore
> >
> >[1] http://logs.openstack.org/38/65838
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Apparently weird timeout issue

2014-01-20 Thread Darragh O'Reilly

I did a test to see what the dhcp client on cirros does. I killed the dhcp 
agent and started an instance. The instance sent the first dhcp discover after 
about 35 sec. Then another 60 sec later, and a final one after another 60 sec.

So a revised theory for what happened is this:  


t=0 tempest starts vm and starts polling for ACTIVE status
t=20 instance-->ACTIVE and tempest starts polling the floating ip for 60 sec
t=40 instance does a dhcp discover - no response - so sets a timer for 60 sec
t=45 ovs-agent sets the port vlan
t=80 tempest gives up and kills vm
t=100 instance would have sent another dhcp discover now if it had been let live

I think it would be worth trying to change that test to poll for 120 seconds 
instead of 60.
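
The suggested change is essentially bumping the timeout on the polling loop;
schematically it is something like this (illustrative sketch, not the actual
Tempest code, where the value would come from configuration):

  import time

  def ping_until_timeout(ping_fn, timeout=120, interval=1):
      # Poll the floating IP until it answers or 'timeout' seconds
      # elapse; with timeout=120 the VM gets to send at least one
      # more DHCP discover before the test gives up.
      deadline = time.time() + timeout
      while time.time() < deadline:
          if ping_fn():
              return True
          time.sleep(interval)
      return False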


On Monday, 20 January 2014, 11:23, Darragh O'Reilly 
 wrote:
 
Hi Salvatore,
>
>
>I presume it's this one? 
>http://logs.openstack.org/38/65838/4/check/check-tempest-dsvm-neutron-isolated/d108e4a/logs/tempest.txt.gz?#_2014-01-19_20_50_14_604
>
>
>Is it true that the cirros image just fires off a few dhcp discovers and then 
>gives up? If so, then maybe it did so before the tagging happened. Do we have 
>the instance console log? It took about 45 seconds from when the port was 
>created to when it was tagged.
>
>
>2014-01-19 20:48:57.412 8142 DEBUG neutron.agent.linux.ovsdb_monitor [-] 
>Output 
received from ovsdb monitor: 
{"data":[["3602a7b2-b559-4709-9bf0-53ae2af68d06","insert","tap496b808c-b5"]],"headings":["row","action","name"]}
>
>2014-01-19 20:49:41.925 8142 DEBUG neutron.agent.linux.utils [-] 
>Command:
 ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'set', 
'Port', 'tap496b808c-b5', 'tag=64']
>Exit code: 0
>
>
>Darragh.
>
>
>
>>I have been seeing in the past 2 days timeout failures on gate jobs which I
>>am struggling to explain. An example is
 available in [1]
>>These are the usual failure that we associate with bug 1253896, but this
>>time I can verify that:
>>- The floating IP is correctly wired (IP and NAT rules)
>>- The DHCP port is correctly wired, as well as the VM port and the router
>>port
>>- The DHCP agent is correctly started for the network
>>
>>However, no DHCP DISCOVER request is sent. Only the DHCP RELEASE message is
>>seen.
>>Any help at interpreting the logs will be appreciated.
>>
>>
>>Salvatore
>>
>>[1] http://logs.openstack.org/38/65838
>
>
>___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Next steps for Whole Host allocation / Pclouds

2014-01-20 Thread Day, Phil
HI Folks,

The original (and fairly simple) driver behind whole-host-allocation 
(https://wiki.openstack.org/wiki/WholeHostAllocation) was to enable users to 
get guaranteed isolation for their instances.  This then grew somewhat along 
the lines of "If they have in effect a dedicated host then wouldn't it be 
great if the user could also control some aspect of the scheduling, access for 
other users, etc.". The Proof of Concept I presented at the Icehouse Design 
summit provided this via API extensions that in effect manipulate an 
aggregate and the scheduler filters used with that aggregate.   
https://etherpad.openstack.org/p/NovaIcehousePclouds

Based on the discussion and feedback from the design summit session it became 
clear that this approach was kind of headed into a difficult middle ground 
between a very simple approach for users who just wanted the isolation for 
their instances, and a fully delegated admin model which would allow any admin 
operation to be scoped to a specific set of servers/flavours/instances

I've spent some time since mulling over what it would take to add some kind of 
"scoped admin" capability into Nova, and my current thinking is that it would 
be a pretty big change because there isn't really a concept of "ownership" once 
you get beyond instances and a few related objects.   Also with TripleO it's 
becoming easier to set up new copies of a Nova stack to control a specific set 
of hosts, and that in effect provides the same degree of scoped admin in a much 
more direct way.  The sort of model I'm thinking of here is a system where 
services such as Glance/Cinder and maybe Neutron are shared by a number of Nova 
services.There are still a couple of things needed to make this work, such 
as limiting tenant access to regions on Keystone, but that feels like a better 
layer to try and address this kind of issue.

In terms of the original driver of just guaranteeing instance isolation, we 
could (as suggested by Alex Gilkson and others) implement this just as a new 
instance property with an appropriate scheduler filter (i.e. for this type of 
instance, only allow scheduling to hosts that are either empty or running only 
instances for the same tenant).    The attribute would then be passed through 
in notification messages, etc. for the billing system to process.
This would be pretty much the peer of AWS dedicated instances.

The host_state object already carries the num_instances_by_project data 
required by the scheduler filter, and the stats field in the compute manager's 
resource tracker also has this information - so both the new filter and the 
additional limits check on the compute manager look like they would be fairly 
straightforward to implement.
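
Schematically, such a filter might look like this (a sketch only, assuming
num_instances_by_project maps project id to instance count as described
above; the class name and the 'dedicated' property are illustrative, not
existing Nova code):

  class TenantIsolationFilter(object):
      """Pass a host only if it is empty or already dedicated to the
      requesting tenant."""

      def host_passes(self, host_state, filter_properties):
          props = filter_properties['request_spec']['instance_properties']
          if not props.get('dedicated'):
              # No isolation requested: any host will do.
              return True
          project_id = props['project_id']
          counts = host_state.num_instances_by_project or {}
          # Fail if any other project has instances on this host.
          return all(proj == project_id or num == 0
                     for proj, num in counts.items())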

It's kind of beyond the scope of Nova, but the resulting billing model in this 
case is more complex - as the user isn't telling you explicitly how many 
dedicated hosts they are going to consume.  AWS just charges a flat rate per 
region for having any number of dedicated instances - if you wanted to charge 
per dedicated host then it'd be difficult to warn the user before they create a 
new instance that they are about to branch onto a new host.

Would welcome thoughts on the above,
Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Naming of a deployment

2014-01-20 Thread Boris Pavlovic
Hi,


Nice work Oleg!

Soon we should also have Tempest verification of a cloud,
so we should add it to the API as well.

Something like deployments/{id}/verify?{type}



Best regards,
Boris Pavlovic


On Mon, Jan 20, 2014 at 6:39 PM, Oleg Gelbukh  wrote:

> I've finished the v0.1 spec of Rally API: http://docs.rallyapi.apiary.io/
>
> The only thing the spec is missing at the moment is a resource for
> Workloads (/deployments/workloads). I will add this resource shortly.
>
> Please, send your comments and suggestions.
>
> --
> Best regards,
> Oleg Gelbukh
>
>
> On Sun, Jan 19, 2014 at 11:28 AM, Oleg Gelbukh wrote:
>
>> Yuriy, the idea is to choose something more or less general. 'Overcloud'
>> would be very specific to my taste. It could also create confusion for
>> users who want to deploy test targets with other tools, like Fuel or
>> Devstack.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>>
>> On Sun, Jan 19, 2014 at 1:17 AM, Yuriy Taraday wrote:
>>
>>> Hi all.
>>>
>>> I might be a little out of context, but isn't that thing deployed on
>>> some kind of cloud?
>>>
>>>
 * "cluster" -- is too generic, but also has connotations in HPC and
 various other technologies (databases, MQs, etc).

 * "installation" -- reminds me of a piece of performance art ;)

 * "instance" -- too much cross-terminology with server instance in Nova
 and Ironic
>>>
>>>
>>> In which case I'd suggest borrowing another option from TripleO:
>>> "overcloud".
>>>
>>> --
>>>
>>> Kind regards, Yuriy.
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Reseting issues that need attention

2014-01-20 Thread Eugene Nikanorov
Hi Sean,

I think the following 2 commits in neutron are essential for bringing
neutron jobs back to an acceptable level of failure rate:
https://review.openstack.org/#/c/67537/
https://review.openstack.org/#/c/66670/

Thanks,
Eugene.


On Mon, Jan 20, 2014 at 6:33 PM, Sean Dague  wrote:

> Anyone that's looked at the gate this morning... knows things aren't
> good. It turns out that a few new races got into OpenStack last week,
> which are causing a ton of pain, and have put us dramatically over the
> edge.
>
> We've not tracked down all of them, but 2 that are quite important to
> address are:
>
>  - Bug 1270680 - v3 extensions api inherently racey wrt instances
>  - Bug 1270608 - n-cpu 'iSCSI device not found' log causes
> gate-tempest-dsvm-*-full to fail
>
> Both can be seen as very new issues here -
> http://status.openstack.org/elastic-recheck/
>
> We've got a short term work around on 1270680 which we're going to take
> into the gate now (and fix it better later).
>
> 1270608 is still in desperate need of fixing.
>
>
> Neutron is in a whole other level of pain. Over the weekend I found the
> isolated jobs are in a 70% fail state, which means the overall chance
> for success for Neutron / Neutron client patches is < 5%. As such I'd
> suggest a moratorium on them going into the gate at this point, as they
> are basically guaranteed to fail.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Naming of a deployment

2014-01-20 Thread Oleg Gelbukh
I've finished the v0.1 spec of Rally API: http://docs.rallyapi.apiary.io/

The only thing the spec is missing at the moment is a resource for Workloads
(/deployments/workloads). I will add this resource shortly.

Please, send your comments and suggestions.

--
Best regards,
Oleg Gelbukh


On Sun, Jan 19, 2014 at 11:28 AM, Oleg Gelbukh wrote:

> Yuriy, the idea is to choose something more or less general. 'Overcloud'
> would be very specific to my taste. It could also create confusion for
> users who want to deploy test targets with other tools, like Fuel or
> Devstack.
>
> --
> Best regards,
> Oleg Gelbukh
>
>
> On Sun, Jan 19, 2014 at 1:17 AM, Yuriy Taraday wrote:
>
>> Hi all.
>>
>> I might be a little out of context, but isn't that thing deployed on some
>> kind of cloud?
>>
>>
>>> * "cluster" -- is too generic, but also has connotations in HPC and
>>> various other technologies (databases, MQs, etc).
>>>
>>> * "installation" -- reminds me of a piece of performance art ;)
>>>
>>> * "instance" -- too much cross-terminology with server instance in Nova
>>> and Ironic
>>
>>
>> In which case I'd suggest borrowing another option from TripleO:
>> "overcloud".
>>
>> --
>>
>> Kind regards, Yuriy.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Top Gate Reseting issues that need attention

2014-01-20 Thread Sean Dague
Anyone that's looked at the gate this morning... knows things aren't
good. It turns out that a few new races got into OpenStack last week,
which are causing a ton of pain, and have put us dramatically over the edge.

We've not tracked down all of them, but 2 that are quite important to
address are:

 - Bug 1270680 - v3 extensions api inherently racey wrt instances
 - Bug 1270608 - n-cpu 'iSCSI device not found' log causes
gate-tempest-dsvm-*-full to fail

Both can be seen as very new issues here -
http://status.openstack.org/elastic-recheck/

We've got a short term work around on 1270680 which we're going to take
into the gate now (and fix it better later).

1270608 is still in desperate need of fixing.


Neutron is in a whole other level of pain. Over the weekend I found the
isolated jobs are in a 70% fail state, which means the overall chance
for success for Neutron / Neutron client patches is < 5%. As such I'd
suggest a moratorium on them going into the gate at this point, as they
are basically guaranteed to fail.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Can someone from the Nova core team please take a look at patch 40467

2014-01-20 Thread Genin, Daniel I.
Hello Nova core team,

I have three small patches implementing ephemeral storage encryption for 
LVM-backed instances.

https://review.openstack.org/#/c/40467/
https://review.openstack.org/#/c/60621/
https://review.openstack.org/#/c/61544/

The patches have been under review for a couple months now and have gone 
through several rounds of revisions. There are already a number of +1's and the 
patches are only awaiting +2's, and in one case a +1, from the core team.

Thank you for your help,
Dan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introduction: Rich Megginson - Designate project

2014-01-20 Thread Mac Innes, Kiall
Hi Rich - Welcome!

We're mostly all on the #openstack-dns IRC channel, drop by and say
hello ;)

Thanks,
Kiall

On Wed, 2014-01-15 at 18:24 -0700, Rich Megginson wrote:
> Hello.  My name is Rich Megginson.  I am a Red Hat employee interested 
> in working on Designate (DNSaaS), primarily in the areas of integration 
> with IPA DNS, DNSSEC, and authentication (Keystone).
> 
> I've signed up for the openstack/launchpad/gerrit accounts.
> 
> Be seeing you (online).
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: This is a digitally signed message part
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-20 Thread Jay Pipes
On Sun, 2014-01-19 at 07:17 -0500, Yair Fried wrote:
> OK,
> but considering my pending patch (#3 and #4)
> what about:
> 
> #1 -> #2
> #1 -> #3
> #1 -> #4
> 
> instead of 
> 
> #1 -> #2 -> #3 -> #4
> 
> a failure in #2 will prevent #3 and #4 from running even though they are 
> completely unrelated

Seems to me that the above is a logical fault. If a failure in #2
prevents #3 or #4 from running, then by nature they are related to #2.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Designate (DNSaaS) design workshop in Austin, TX

2014-01-20 Thread Joe Mcbride
Greetings,
The Designate project (DNS as a service designed for OpenStack) is holding a 
design workshop in Austin, TX. 

REGISTER AT:
https://www.eventbrite.com/e/designate-development-workshop-january-2014-tickets-10180041779

WHO SHOULD ATTEND?
- Developers interested in contributing code
- Operators concerned with providing feedback and input into the design process

DATES & TIMES:
All times are Central Standard Time (CST).
- Monday, January 27 from 9:30AM to 5PM
- Tuesday, January 28 from 9:30AM to 5PM
- Wednesday, January 29 from 9:30AM to 2PM

LOCATION:
- The Capital Factory - 701 Brazos St #1601, Austin, TX 78701
- A Google Hangout will also be provided (please register)

AGENDA:
Actual agenda details will be posted later this week. Our workshop priorities 
are:
- Optimize the project software development process
- Blueprint review
- Design sessions
- Team building

MORE WORKSHOP DETAILS:
https://etherpad.openstack.org/p/DesignateAustinWorkshop2014-01


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Property protections not being enforced?

2014-01-20 Thread Tom Leaman
I'm looking at a possible bug here but I just want to confirm
that I'm not missing something obvious.

I'm currently working with Devstack on Ubuntu 12.04 LTS

Once Devstack is up and running, I'm creating a file 
/etc/glance/property-protections.conf as follows:

[^foo_property$]
create = @
read = @
update = admin
delete = admin

[.*]
create = @
read = @
update = @
delete = @

I'm then referencing this in my glance-api.conf and restarting the glance api 
service.

My understanding is that, as the demo user (who does not have the admin 
role), I should be able to set foo_property='some_value', but once it is set 
I should not be able to modify or delete it. Currently, however, I am able 
to do both.
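
Roughly, the reproduction looks like this (the image id is a placeholder, and 
the exact client flags are from memory, so double-check them):

$ glance image-update <image-id> --property foo_property=some_value
  # as demo: allowed, matches create = @
$ glance image-update <image-id> --property foo_property=another_value
  # as demo: expected 403 (update = admin), but currently succeeds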

I have tried changing the various operations to '!' and confirmed that those 
will prevent me from
executing those operations (returning 403 as expected). I've also double 
checked that the demo user
has not somehow acquired the admin role.

Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Filtering and limiting ready for final reviews

2014-01-20 Thread Henry Nash
Hi

Both Filtering and List Limiting are ready for final review (both were pretty 
heavily reviewed on the run-up to Havana, if you remember, but we decided to 
pull them):

https://review.openstack.org/#/c/43257/
https://review.openstack.org/#/c/44836/

The only debate on list limiting is whether we indicate to the client that the 
list has been truncated by using the return status code (e.g. 203), which is 
what we decided at the Hackathon last week, or switch to using the 'next' 
pointer in the collection return instead (e.g. 'next': 'truncated', or 
something like that).  The former is what has been implemented in the patch 
above, but it would be trivial to switch to the pointer style.
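
To make the two styles concrete: the status-code style would return HTTP 203 
with the usual collection body, while the pointer style would look roughly 
like this (illustrative JSON, not the final wire format):

    {"users": [...],
     "links": {"self": "http://identity:5000/v3/users",
               "previous": null,
               "next": "truncated"}}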

I think we want to get both these in for I-2, so any review help today from 
cores would be appreciated.

Henry


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [OSTF][Ceilometer] ceilometer meters and samples delete

2014-01-20 Thread Bogdan Dobrelya
On 01/20/2014 01:29 PM, Dmitry Iakunchikov wrote:
> David,
> 
> You're completely right,
> 
> The main problem is that Ceilometer can create samples, but not delete
> them. Because of that, there is no way to remove the data OSTF created.
1) What is the current DB backend for Ceilometer? Can we use a separate
database for OSTF and just drop it during teardown?
2) IIRC from the Ceilometer PoC performance results, the main Galera DB
must never be used as the Ceilometer backend because of performance issues?
> 
> Actually another way is to use time_to_live, but as you said "As an
> operator, I’d expect that my data is retained even for items that have
> been removed"(c)
> 
> 
> 2014/1/17 David Easter <deas...@mirantis.com>
> 
> I’d like to make sure I understand the question.  Is this the scenario?
> 
>   * A user installs Mirantis OpenStack
>   * The user runs the Mirantis OpenStack Health Check (OSTF) against
> Ceilometer
>   * The Health Check creates a VM against which ceilometer can
> collect data
>   * Ceilometer collects the data from this VM for an amount of time
> and stores the data in mySQL
>   * The Health Check then ends the test, removing the VM
>   * The data collected about this sample VM is retained in mySQL and
> is not removed.
> 
> Is this basically correct?
> 
> If so, I’d ask if Ceilometer removes data from VM’s or nodes that
> have been deleted from OpenStack during normal operation or if the
> data is retained in the run-time scenarios as well?  If so, wouldn’t
> this be a general requirement to remove data about entities that no
> longer exist in the environment vs. an issue specific to Health
> Check (OSTF)?
> 
> As an operator, I’d expect that my data is retained even for items
> that have been removed, but I agree that there should be a way for
> an operator to make a decision to remove stale data – either based
> on time or as a manually executed operation.  Removing data
> automatically right away could lead to a loss of historical
> information that could be used for longer term analysis and billing.
> 
> Or am I misinterpreting the situation and Ceilometer already allows
> for deletion of data – and the question is just whether we should
> remove the data collected during the test?  If that is the only
> question, then yes – we should remove the data after the test is done.
> 
> Thanks,
> 
> -Dave Easter
> 
> From: Dmitry Iakunchikov <diakunchi...@mirantis.com>
> Date: Friday, January 17, 2014 at 5:10 AM
> To: Nadya Privalova, "OpenStack Development Mailing List (not for usage
> questions)", Dmitry Iakunchikov <diakunchi...@mirantis.com>, Mike
> Scherbakov <mscherba...@mirantis.com>, Vladimir Kuklin
> <vkuk...@mirantis.com>, "fuel-...@lists.launchpad.net"
> <fuel-...@lists.launchpad.net>
> Subject: Re: [Fuel-dev] [openstack-dev] [OSTF][Ceilometer]
> ceilometer meters and samples delete
> 
> For now in Fuel we keep samples forever
> 
> In case if we will use time_to_live, how long we should keep this data?
> 
> 
> 2014/1/17 Julien Danjou <jul...@danjou.info>
> 
> On Fri, Jan 17 2014, Nadya Privalova wrote:
> 
> > I would ask in another way.
> > Ceilometer has a mechanism to add a sample through POST. So it 
> looks not
> > consistent not to allow user to delete a sample.
> > IMHO, insertion and deletion through REST looks a little bit hacky: 
> user
> > always has an ability to fake data collected from OpenStack 
> services. But
> > maybe I don't see any valuable usecases.
> > Anyway, it seems reasonable to have both add_sample and 
> delete_sample in
> > API or not to have neither.
> 
> From the user PoV, that totally makes sense, agreed.
> 
> --
> Julien Danjou
> # Free Software hacker # independent consultant
> # http://julien.danjou.info
> 
> 
> 
> 
> -- 
> With Best Regards
> QA engineer Dmitry Iakunchikov
> -- Mailing list: https://launchpad.net/~fuel-dev Post to :
> fuel-...@lists.launchpad.net 
> Unsubscribe : https://launchpad.net/~fuel-dev More help :
> https://help.launchpad.net/ListHelp
> 
> 
> 
> 
> -- 
> With Best Regards
> QA engineer Dmitry Iakunchikov
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Researcher TechLead, Mirantis, Inc.
+38 (066) 051 07 53
Skype bogdando_at_yahoo.com
Irc #bogdando
38, Lenina ave.
Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru
bdobre...@mirantis.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-20 Thread Ian Wells
On 20 January 2014 09:28, Irena Berezovsky  wrote:

> Hi,
> Having post PCI meeting discussion with Ian based on his proposal
> https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit?pli=1#
> ,
> I am  not sure that the case that quite usable for SR-IOV based networking
> is covered well by this proposal. The understanding I got is that VM can
> land on the Host that will lack suitable PCI resource.
>

The issue we have is if we have multiple underlying networks in the system
and only some Neutron networks are trunked on the network that the PCI
device is attached to.  This can specifically happen in the case of
provider versus trunk networks, though it's very dependent on the setup of
your system.

The issue is that, in the design we have, Neutron at present has no input
into scheduling, and also that all devices in a flavor are precisely
equivalent.  So if I say 'I want a 10G card attached to network X' I will
get one of the cases in the 10G flavor with no regard as to whether it can
actually attach to network X.

I can see two options here:

1. What I'd do right now is I would make it so that a VM that is given an
unsuitable network card fails to run in nova-compute when Neutron discovers
it can't attach the PCI device to the network.  This will get us a lot of
use cases and a Neutron driver without solving the problem elegantly.
You'd need to choose e.g. a provider or tenant network flavor, mindful of
the network you're connecting to, so that Neutron can actually succeed,
which is more visibility into the workings of Neutron than the user really
ought to need.

2. When Nova checks that all the networks exist - which, conveniently, is
in nova-api - it also gets attributes from the networks that can be used by
the scheduler to choose a device.  So the scheduler chooses from a flavor
*and*, within that flavor, from a subset of those devices with appropriate
connectivity.  If we do this then the Neutron connection code doesn't
change - it should still fail if the connection can't be made - but it
becomes an internal error, since it's now an issue of consistency of
setup.

To do this, I think we would tell Neutron 'PCI extra-info X should be set
to Y for this provider network and Z for tenant networks' - the precise
implementation would be somewhat up to the driver - and then add the
additional check in the scheduler.  The scheduling attributes list would
have to include that attribute.

Can you please provide an example for the required cloud admin PCI related
> configurations on nova-compute and controller node with regards to the
> following simplified scenario:
>  -- There are 2 provider networks (phy1, phy2), each one has associated
> range on vlan-ids
>  -- Each compute node has 2 vendor adapters with SR-IOV  enabled feature,
> exposing xx Virtual Functions.
>  -- Every VM vnic on virtual network on provider network  phy1 or phy2
>  should be pci pass-through vnic.
>

So, we would configure Neutron to check the 'e.physical_network' attribute
on connection and to return it as a requirement on networks.  Any PCI on
provider network 'phy1' would be tagged e.physical_network => 'phy1'.  When
returning the network, an extra attribute would be supplied (perhaps
something like 'pci_requirements => { e.physical_network => 'phy1'}'.  And
nova-api would know that, in the case of macvtap and PCI directmap, it
would need to pass this additional information to the scheduler which would
need to make use of it in finding a device, over and above the flavor
requirements.
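
As a purely illustrative sketch of the admin side of this (the
e.physical_network attribute comes from the proposal, and the whitelist
syntax shown is only one possible encoding, nothing settled):

    # nova.conf on a compute node whose SR-IOV ports are trunked on phy1
    pci_passthrough_whitelist = [{"vendor_id": "8086", "product_id": "10fb",
                                  "e.physical_network": "phy1"}]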

Neutron, when mapping a PCI port, would similarly work out from the Neutron
network the trunk it needs to connect to, and would reject any mapping that
didn't conform. If it did, it would work out how to encapsulate the traffic
from the PCI device and set that up on the PF of the port.

I'm not saying this is the only or best solution, but it does have the
advantage that it keeps all of the networking behaviour in Neutron -
hopefully Nova remains almost completely ignorant of what the network setup
is, since the only thing we have to do is pass on PCI requirements, and we
already have a convenient call flow we can use that's there for the network
existence check.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a "common" client library

2014-01-20 Thread Sean Dague
On 01/19/2014 11:50 PM, Jesse Noller wrote:
> 
> On Jan 19, 2014, at 5:37 PM, Jamie Lennox wrote:
> 
>> On Sat, 2014-01-18 at 09:13 -0500, Doug Hellmann wrote:
>>> I like the idea of a fresh start, but I don't think that's
>>> incompatible with the other work to clean up the existing clients.
>>> That cleanup work could help with creating the backwards compatibility
>>> layer, if a new library needs to include one, for example.
>>>
>>>
>>> As far as namespace packages and separate client libraries, I'm torn.
>>> It makes sense, and I originally assumed we would want to take that
>>> approach. The more I think about it, though, the more I like the
>>> approach Dean took with the CLI, creating a single repository with a
>>> team responsible for managing consistency in the UI.
>>>
>>>
>>> Doug
>>
>> This *is* the approach Dean took with the CLI. Have a package that
>> provides the CLI but then have the actual work handed off to the
>> individual clients (with quite a lot of glue).
> 
> And I think many of us are making the argument (or trying to) that the
> “a lot of glue” approach is wrong and unsustainable for both a unified
> CLI long term *and especially* for application developers.

100% agree. At some point take a look at the tempest rest client, and
you can see how entirely crazy different the APIs are between services -
https://github.com/openstack/tempest/blob/master/tempest/common/rest_client.py#L506

(The Tempest client is in no way a paragon of virtues, but by writing
our own client we've really discovered how lumpy this API is).

So I'm highly supportive of taking all the clients into a single
separate program which would produce the official python SDK, as well as
a unified CLI for OpenStack. The server programs should just be
producing server stacks, that end with the API. I think that would
empower a set of people that were most concerned with operator and
developer UX to be able to look at OpenStack as a whole.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [OSTF][Ceilometer] ceilometer meters and samples delete

2014-01-20 Thread Dmitry Iakunchikov
David,

You're completely right.

The main problem is that Ceilometer can create samples, but not delete
them. Because of that, there is no way to remove the data OSTF created.

Actually another way is to use time_to_live, but as you said "As an
operator, I’d expect that my data is retained even for items that have been
removed"(c)
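
For reference, the retention knob mentioned above lives in ceilometer.conf,
roughly like this (option name and section as I recall them for the Havana
release, so worth double-checking):

[database]
# number of seconds to keep samples; -1 keeps them forever
time_to_live = 604800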


2014/1/17 David Easter 

> I’d like to make sure I understand the question.  Is this the scenario?
>
>- A user installs Mirantis OpenStack
>- The user runs the Mirantis OpenStack Health Check (OSTF) against
>Ceilometer
>- The Health Check creates a VM against which ceilometer can collect
>data
>- Ceilometer collects the data from this VM for an amount of time and
>stores the data in mySQL
>- The Health Check then ends the test, removing the VM
>- The data collected about this sample VM is retained in mySQL and is
>not removed.
>
> Is this basically correct?
>
> If so, I’d ask if Ceilometer removes data from VM’s or nodes that have
> been deleted from OpenStack during normal operation or if the data is
> retained in the run-time scenarios as well?  If so, wouldn’t this be a
> general requirement to remove data about entities that no longer exist in
> the environment vs. an issue specific to Health Check (OSTF)?
>
> As an operator, I’d expect that my data is retained even for items that
> have been removed, but I agree that there should be a way for an operator
> to make a decision to remove stale data – either based on time or as a
> manually executed operation.  Removing data automatically right away could
> lead to a loss of historical information that could be used for longer term
> analysis and billing.
>
> Or am I misinterpreting the situation and Ceilometer already allows for
> deletion of data – and the question is just whether we should remove the
> data collected during the test?  If that is the only question, then yes –
> we should remove the data after the test is done.
>
> Thanks,
>
> -Dave Easter
>
> From: Dmitry Iakunchikov 
> Date: Friday, January 17, 2014 at 5:10 AM
> To: Nadya Privalova , "OpenStack Development
> Mailing List (not for usage questions)" ,
> Dmitry Iakunchikov , Mike Scherbakov <
> mscherba...@mirantis.com>, Vladimir Kuklin , "
> fuel-...@lists.launchpad.net" 
> Subject: Re: [Fuel-dev] [openstack-dev] [OSTF][Ceilometer] ceilometer
> meters and samples delete
>
> For now in Fuel we keep samples forever
>
> In case if we will use time_to_live, how long we should keep this data?
>
>
> 2014/1/17 Julien Danjou 
>
>> On Fri, Jan 17 2014, Nadya Privalova wrote:
>>
>> > I would ask in another way.
>> > Ceilometer has a mechanism to add a sample through POST. So it looks not
>> > consistent not to allow user to delete a sample.
>> > IMHO, insertion and deletion through REST looks a little bit hacky: user
>> > always has an ability to fake data collected from OpenStack services.
>> But
>> > maybe I don't see any valuable usecases.
>> > Anyway, it seems reasonable to have both add_sample and delete_sample in
>> > API or not to have neither.
>>
>> From the user PoV, that totally makes sense, agreed.
>>
>> --
>> Julien Danjou
>> # Free Software hacker # independent consultant
>> # http://julien.danjou.info
>>
>
>
>
> --
> With Best Regards
> QA engineer Dmitry Iakunchikov
>  -- Mailing list: https://launchpad.net/~fuel-dev Post to :
> fuel-...@lists.launchpad.net Unsubscribe : 
> https://launchpad.net/~fuel-devMore help :
> https://help.launchpad.net/ListHelp
>



-- 
With Best Regards
QA engineer Dmitry Iakunchikov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] code review

2014-01-20 Thread Flavio Percoco


Please, don't send review requests to the list. if it's an urgent
matter, please ping directly on IRC.

Also, if you really have to send it to the list, tag the email subject
with the projects!

Thanks :)
FF

On 20/01/14 10:07 +0800, 黎林果 wrote:

Hi all,

I'd like you to examine a change.  Please visit

[neutron]
https://review.openstack.org/#/c/63981/
‘ipt_mgr.ipv6 written in the wrong ipt_mgr.ipv4’

[nova]
https://review.openstack.org/#/c/64241/
'Add API schema for v3 multinic API'

[python-keystoneclient]
https://review.openstack.org/#/c/63679/
'Modify the backtrace of invalid token'

Regards!

Lee Li

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


pgpIWWXjnAzh_.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Apparently weird timeout issue

2014-01-20 Thread Darragh O'Reilly
Hi Salvatore,

I presume it's this one? 
http://logs.openstack.org/38/65838/4/check/check-tempest-dsvm-neutron-isolated/d108e4a/logs/tempest.txt.gz?#_2014-01-19_20_50_14_604

Is it true that the cirros image just fires off a few dhcp discovers and then 
gives up? If so, then maybe it did so before the tagging happened. Do we have 
the instance console log? It took about 45 seconds from when the port was 
created to when it was tagged.

2014-01-19 20:48:57.412 8142 DEBUG neutron.agent.linux.ovsdb_monitor [-] Output 
received from ovsdb monitor: 
{"data":[["3602a7b2-b559-4709-9bf0-53ae2af68d06","insert","tap496b808c-b5"]],"headings":["row","action","name"]}

2014-01-19 20:49:41.925 8142 DEBUG neutron.agent.linux.utils [-] 
Command:
 ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'set', 
'Port', 'tap496b808c-b5', 'tag=64']
Exit code: 0

Darragh.


>I have been seeing in the past 2 days timeout failures on gate jobs which I
>am struggling to explain. An example is available in [1]
>These are the usual failure that we associate with bug 1253896, but this
>time I can verify that:
>- The floating IP is correctly wired (IP and NAT rules)
>- The DHCP port is correctly wired, as well as the VM port and the router
>port
>- The DHCP agent is correctly started for the network
>
>However, no DHCP DISCOVER request is sent. Only the DHCP RELEASE message is
>seen.
>Any help at interpreting the logs will be appreciated.
>
>
>Salvatore
>
>[1] http://logs.openstack.org/38/65838
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Havana Release V3 Extensions and new features to quota

2014-01-20 Thread Vinod Kumar Boppanna
Hi,

My name is Vinod Kumar Boppanna and I was testing the quota part of the
OpenStack Havana release. I had installed the Havana release in a single
VM through the RDO process. During testing, I used the AUTH_URL as

OS_AUTH_URL=http://:35357/v2.0/

Because of this, nova is using the following v2 policy attributes for the quotas

"compute_extension:quotas:show": "",
"compute_extension:quotas:update": "rule:admin_api",
"compute_extension:quotas:delete": "rule:admin_api",

But there are other quota attributes available for v3 and they are

"compute_extension:v3:os-quota-sets:discoverable": "",
"compute_extension:v3:os-quota-sets:show": "",
"compute_extension:v3:os-quota-sets:update": "rule:admin_api",
"compute_extension:v3:os-quota-sets:delete": "rule:admin_api",
"compute_extension:v3:os-quota-sets:detail": "rule:admin_api",

My question is: how can I use the v3 extensions? I mean, can I use them by
changing the AUTH_URL to

OS_AUTH_URL=http://:35357/v3.0/ (but this didn't work).

I also wonder whether the RDO process installed the Havana setup with v3
extensions or just v2 extensions.

I could test all the existing quota features with respect to a tenant and the
users in a tenant. During this, I observed the following things:

1. Weak notifications - Let’s say that a user is added as a member of a
project and creates an instance in that project. When he logs in to the
dashboard, he can see the instance he created. Now the administrator removes
his membership from the project. When the user logs in again, he will not be
able to see the instance he created earlier, but the instance still exists
and the user can log onto it. If the administrator adds him back to the
project, the user can see the same instance again.

2. By default, the policy.json file allows any user in a project to destroy
an instance created by another user in the same project.

3. I couldn't find a link or page in the dashboard where I can set the
quota limits of a user in a project. I could do it for a project, but not
for a user. I did set the quota limits for the user using nova commands.

4. When I view the instances created by users in a project, it does not
show who created each instance. For example, if a project has 2 users and
each user created 1 VM instance, then in the "Instances" link the dashboard
shows both instances with their names and details, but it does not show who
created which VM.

5. When a VM is created, it normally allows SSH login using the key pair
generated by the user. But the "console" link provided in the dashboard
only allows login with a password. So I have to log in to the VM at least
once through the command line using the key and set the root password
(because during VM creation I am not asked to enter a root password), and
only then can I use the console provided in the dashboard.

We also had a short discussion here (at CERN) about taking the quota
features further. Among these features, the first one we would like to
have is:

Define roles like "Admin" (which is already there), "Domain Admin" and
"Project Admin".  The "Admin" can define different domains in the cloud
and also assign a person as "Domain Admin" to each domain respectively.
Also, the "Admin" will define quotas for each "Domain".

The "Domain Admin" role for a person in a Domain allows him/her to define
the "Projects/Tenants" in that domain and also define a person as "Project
Admin" to each project in that domain respectively.  This person will also
define "Quota" for each project with the condition that "the sum of quota
limits of all projects should be less than or equal to its domain quota
limits".

The "Project Admin" can add users to each project and also define "quota"
for each user respectively.

We are thinking of first having this sort of tree hierarchy where the
parent can manage all the things beneath them.

I think for this, we need to have the following things in OpenStack:
1. Allow roles to be defined (this is already there)
2. Define the meaning of these roles in the policy.json file of "nova" (a
sketch follows below)
3. Add a little bit of code to understand this hierarchy and allow the
functionalities explained above.
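
As a purely illustrative sketch of step 2 (the rule names are invented for
the example and are not existing nova rules), policy.json could encode the
hierarchy like this:

    "domain_admin": "is_admin:True or role:domain_admin",
    "project_admin": "rule:domain_admin or role:project_admin",
    "compute_extension:quotas:update": "rule:project_admin",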

Once we have this, we can then think of "quota delegation".

Any comments, please let me know...

Regards,
Vinod Kumar Boppanna
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [Tempest - Stress Test] : implement a full SSH connection on "ssh_floating.py" and improve it

2014-01-20 Thread LELOUP Julien
Hello Marc,

Thanks again for your help, your blog post is helpful.

So I will start writing a new scenario test to get this full SSH stress test on 
newly created VMs.
I will put more details about it in the blueprint I created for this : 
https://blueprints.launchpad.net/tempest/+spec/stress-test-ssh-floating-ip

Best Regards,

Julien LELOUP
julien.lel...@3ds.com


-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Saturday, January 18, 2014 10:11 AM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: RE: [qa] [Tempest - Stress Test] : implement a full SSH connection on 
"ssh_floating.py" and improve it

Hello Julien,

maybe my blog post helps you with some more details:

http://telekomcloud.github.io/2013/09/11/new-ways-of-tempest-stress-testing.html

You can run a single test if you add a new JSON file with the test function
you want to test, like:
https://github.com/openstack/tempest/blob/master/tempest/stress/etc/sample-unit-test.json
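
For reference, such a file looks roughly like this (from memory, so see the
linked sample for the authoritative field names; the test_method here is
just an example target):

[{"action": "tempest.stress.actions.unit_test.UnitTest",
  "threads": 4,
  "use_admin": false,
  "use_isolated_tenants": false,
  "kwargs": {"test_method": "tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops",
             "class_setup_per": "process"}}]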

With that you can launch them with the parameters you already described.

Regards,
Marc


From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Friday, January 17, 2014 3:49 PM
To: Koderer, Marc
Cc: openstack-dev@lists.openstack.org
Subject: RE: [Tempest - Stress Test] : implement a full SSH connection on 
"ssh_floating.py" and improve it

Hi Marc,

The Etherpad you provided was helpful for understanding the current state of 
the stress tests.

I admit that I have some difficulty understanding how I can run a single test 
built with the @stresstest decorator (while not a beginner in Python, I still 
have things to learn about this technology, and a lot more about 
OpenStack/Tempest :) ).
I used to run my test using "./run_stress.py -t <...> -d <...>", which 
allowed me to run only one test with a dedicated configuration (number of 
threads, ...)

From what I understand now in Tempest, I have only managed to run all tests, 
using "./run_tests.sh", and the only configuration I found related to stress 
tests was the [stress] section in tempest.conf.

For example: let's say I ported my SSH stress test as a scenario test with the 
@stresstest decorator.
How can I launch this test (and only this one) and use a dedicated 
configuration file like the ones found in "tempest/stress/etc"?

Another question I have now: in the case that I have to use "run_tests.sh" and 
not "run_stress.py" anymore, how do I get the test run statistics I used to 
have, and where should I put some code to improve them?

Once I have cleared up all these practical details, maybe I should add some 
content to the Wiki about stress tests in Tempest.

Best Regards,

Julien LELOUP
julien.lel...@3ds.com

-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Friday, January 17, 2014 3:07 PM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: RE: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection 
on "ssh_floating.py" and improve it

Hi Julien,

most of the cases in tempest/stress are already covered by existing tests in 
/api or /scenario. The only thing that is missing is the decorator on them.
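
If I remember the mechanics correctly, marking an existing scenario test for
stress discovery looks roughly like this (class and method names are
illustrative, and the decorator signature is from memory):

from tempest.scenario import manager
from tempest import test


class TestSshFloatingIp(manager.OfficialClientTest):

    @test.stresstest(class_setup_per='process')
    def test_ssh_via_floating_ip(self):
        # normal scenario body; the stress framework discovers this
        # method through the attributes the decorator sets on it
        pass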

BTW here is the Etherpad from the summit talk that we had:
https://etherpad.openstack.org/p/icehouse-summit-qa-stress-tests

It may help to understand the state. I didn't manage to work on the action 
items that are left.

Your suggestions sound good, so I'd be happy to see some patches :)

Regards
Marc

From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Friday, January 17, 2014 11:52 AM
To: Koderer, Marc
Cc: openstack-dev@lists.openstack.org
Subject: RE: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection 
on "ssh_floating.py" and improve it

Hello Marc,

Thanks for your answer.

At the moment I'm willing to spend some time on this kind of scenario, so I 
will see how to use the stress decorator inside a scenario test.
Does this mean that all stress tests available in "tempest/stress" should be 
ported as scenario tests with this decorator?

I do have some ideas about stress test features that I find useful for my own 
use case, like adding more statistics on stress test runs in order to use 
them as benchmarks.
I don't know if this kind of feature was already discussed in the OpenStack 
community, but since stress tests are a bit deprecated now, maybe there is 
some room for this kind of improvement on "fresh" stress tests.

Best Regards,

Julien LELOUP

-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Friday, January 17, 2014 9:45 AM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on 
"ssh_floating.py" and improve it

Hello Julien,

I forwarded your mail to the correct mailing list. Please do not use the qa 
list any longer.

I am happy that you are interested in stress tests. All the tests in 
tempest/stress

Re: [openstack-dev] Uninstalling openstack

2014-01-20 Thread Marco Fornaro
Hi Lalitha,

About uninstalling: please accept some suggestions:
1) Follow exactly the reverse path of the installation, respecting the 
sequence, meaning:
If you installed:
Package one
Package two
Package three
You have to:
Uninstall package three
Uninstall package two
Uninstall package one
2) Always use the --purge option:
apt-get remove --purge my-package
x) Optional: I also take a look at leftover directories in /etc, /var/log and 
/var/lib, and delete them manually if necessary :-)
3) After EACH batch of uninstalls (like apt-get remove --purge 
nova-something-packages) always run:
apt-get --purge autoremove
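
For example, for a compute node's nova packages this would look like (the
exact package names depend on what you installed):
apt-get remove --purge nova-api nova-scheduler nova-compute
apt-get --purge autoremove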

Then, starting from mseknibilel's work, I wrote a VERY similar guide for 
Havana and carefully tested it:
https://github.com/fornyx/OpenStack-Havana-Install-Guide/blob/master/OpenStack-Havana-Install-Guide.rst

Please note that openstack-dev is not, in my opinion, the right list; perhaps 
openstack or openstack-doc would be better :-)

BR

Marco


-Original Message-
From: Lalitha Maruthachalam [mailto:lalitha.maruthacha...@aricent.com] 
Sent: den 20 januari 2014 10:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Uninstalling openstack

Hi,

I had installed Grizzly release of openstack on my machine using the following 
link.

https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_SingleNode/OpenStack_Grizzly_Install_Guide.rst

I no longer need the grizzly release. I want to install the Havana release. Can 
someone please let me know how to uninstall the Grizzly release.

If this is not the forum for me to post this query, can you please let me know 
to which mailing list should I send this query.

Thanks,
Lalitha.M




===
Please refer to http://www.aricent.com/legal/email_disclaimer.html
for important disclosures regarding this electronic communication.
===

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Uninstalling openstack

2014-01-20 Thread Lalitha Maruthachalam
Hi,

I had installed Grizzly release of openstack on my machine using the following 
link.

https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_SingleNode/OpenStack_Grizzly_Install_Guide.rst

I no longer need the grizzly release. I want to install the Havana release. Can 
someone please let me know how to uninstall the Grizzly release.

If this is not the forum for me to post this query, can you please let me know 
to which mailing list should I send this query.

Thanks,
Lalitha.M




===
Please refer to http://www.aricent.com/legal/email_disclaimer.html
for important disclosures regarding this electronic communication.
===

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]can someone help me? when I use cmd "nova migration-list" error.

2014-01-20 Thread sahid


Perhaps a bug maintainer should update the status; 
the bug is not related to python-novaclient and it has not been triaged yet. 

Thanks a lot, 

s. 
- Original Message -

From: "li zheming" <lizhemin...@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Sent: Monday, January 20, 2014 4:52:27 AM 
Subject: Re: [openstack-dev] [nova]can someone help me? when I use cmd "nova 
migration-list" error. 

OK, thanks Jay. 
I thought it was an error in novaclient before; 
it was my misunderstanding. Thank you very much! 
lizheming 

2014/1/20 Jay Lau <jay.lau@gmail.com> 



It is being fixed https://review.openstack.org/#/c/61717/ 

Thanks, 

Jay 


2014/1/20 li zheming <lizhemin...@gmail.com> 



hi all: 
when I use the command "nova migration-list", it returns an error like this: 
openstack@devstack:/home$ nova migration-list 
ERROR: 'unicode' object has no attribute 'iteritems' 
I stepped through the code and found that it has a bug. 


python-novaclient/novaclient/base.py 

class Manager(utils.HookableMixin): 
    .. 
    def _list(self, url, response_key, obj_class=None, body=None): 
        if body: 
            _resp, body = self.api.client.post(url, body=body) 
        else: 
            _resp, body = self.api.client.get(url) 

        if obj_class is None: 
            obj_class = self.resource_class 

        data = body[response_key] 
        # NOTE(ja): keystone returns values as list as {'values': [ ... ]} 
        # unlike other services which just return the list... 
        if isinstance(data, dict): 
            try: 
                data = data['values'] 
            except KeyError: 
                pass 

        with self.completion_cache('human_id', obj_class, mode="w"): 
            with self.completion_cache('uuid', obj_class, mode="w"): 
                return [obj_class(self, res, loaded=True) 
                        for res in data if res] 

I set a breakpoint at "data = data['values']" and found that data is 
{u'objects': []}; it has no key named 'values', so the KeyError is caught 
and ignored. Then, in "for res in data if res", res is the unicode string 
u'objects', and this causes the error in the next function. 
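
That is just standard dict iteration: iterating over a dict in Python yields 
its keys, which is why res ends up as a unicode string: 

>>> data = {u'objects': []}
>>> [res for res in data if res]
[u'objects']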

Have you met this issue? And does anyone know why the comment says "keystone 
returns values as list as {'values': [ ... ]}"? 

I think this is not relevant to keystone. Maybe I misunderstand this code; 
please give me more information about it. 

Thank you very much! 








___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 






___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 






___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Meetup Schedule Posted!

2014-01-20 Thread Mark Washenberger
Hi folks,

First things first: Happy Martin Luther King Jr. Day!

Our mini summit / meetup for the Icehouse cycle will take place in one
week's time. To ensure we are all ready and know what to expect, I have
started a wiki page tracking the event details and a tentative schedule.
Please have a look if you plan to attend.

https://wiki.openstack.org/wiki/Glance/IcehouseCycleMeetup

I have taken the liberty of scheduling several of the topics we have
already discussed. Let me know if anything in the existing schedule creates
a conflict for you. There are also presently 4 unclaimed slots in the
schedule. If your topic is not yet scheduled, please tell me the time you
want and I will update accordingly.

EXTRA IMPORTANT: If you plan to attend the meetup but have not spoken with
me, please respond as soon as possible to let me know your plans. We have a
limited number of seats remaining.

Cheers,
markwash


"Our only hope today lies in our ability to recapture the revolutionary
spirit and go out into a sometimes hostile world declaring eternal
hostility to poverty, racism, and militarism."

"I knew that I could never again raise my voice against the violence of the
oppressed in the ghettos without having first spoken clearly to the
greatest purveyor of violence in the world today, my own government."

 - Martin Luther King, Jr.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-20 Thread Mathieu Rohon
Hi

On Thu, Jan 16, 2014 at 11:27 PM, Nachi Ueno  wrote:
> Hi Bob, Kyle
>
> I pushed (A) https://review.openstack.org/#/c/67281/.
> so could you review it?
>
> 2014/1/16 Robert Kukura :
>> On 01/16/2014 03:13 PM, Kyle Mestery wrote:
>>>
>>> On Jan 16, 2014, at 1:37 PM, Nachi Ueno  wrote:
>>>
 Hi Amir

 2014/1/16 Amir Sadoughi :
> Hi all,
>
> I just want to make sure I understand the plan and its consequences. I’m 
> on board with the YAGNI principle of hardwiring mechanism drivers to 
> return their firewall_driver types for now.
>
> However, after (A), (B), and (C) are completed, to allow for Open 
> vSwitch-based security groups (blueprint ovs-firewall-driver) is it 
> correct to say: we’ll need to implement a method such that the ML2 
> mechanism driver is aware of its agents and each of the agents' 
> configured firewall_driver? i.e. additional RPC communication?
>
> From yesterday’s meeting: 
> 
>
> 16:44:17  I've suggested that the L2 agent could get the 
> vif_security info from its firewall_driver, and include this in its 
> agents_db info
> 16:44:39  then the bound MD would return this as the 
> vif_security for the port
> 16:45:47  existing agents_db RPC would send it from agent to 
> server and store it in the agents_db table
>
> Does the above suggestion change with the plan as-is now? From Nachi’s 
> response, it seemed like maybe we should support concurrent 
> firewall_driver instances in a single agent. i.e. don’t statically 
> configure firewall_driver in the agent, but let the MD choose the 
> firewall_driver for the port based on what firewall_drivers the agent 
> supports.
>>
>> I don't see the need for anything that complex, although it could
>> certainly be done in any MD+agent that needed it.
>>
>> I personally feel statically configuring a firewall driver for an L2
>> agent is sufficient right now, and all ports handled by that agent will
>> use that firewall driver.
>>
>> Clearly, different kinds of L2 agents that coexist within a deployment
>> may use different firewall drivers. For example, linuxbridge-agent might
>> use iptables-firewall-driver, openvswitch-agent might use
>> ovs-firewall-driver, and hyperv-agent might use something else.
>>
>> I can also imagine cases where different instances of the same kind of
>> L2 agent on different nodes might use different firewall drivers. Just
>> as a hypothetical example, lets say that the ovs-firewall-driver
>> requires new OVS features (maybe connection tracking). A deployment
>> might have this new OVS feature available on some if its nodes, but not
>> on others. It could be useful to configure openvswitch-agent on the
>> nodes with the new OVS version to use ovs-firewall-driver, and configure
>> openvswitch-agent on the nodes without the new OVS version to use
>> iptables-firewall-driver. That kind of flexibility seems best supported
>> by simply configuring the firewall driver in /ovs_neutron_plugin.ini on
>> each node, which is what we currently do.
>>

 Let's say we have OpenFlowBasedFirewallDriver and
 IptablesBasedFirewallDriver in future.
 I believe there is no usecase to let user to select such
 implementation detail by host.
>>
>> I suggest a hypothetical use case above. Not sure how important it is,
>> but I'm hesitant to rule it out without good reason.
>
> Our community resource is limited, so we should focus on some usecase and
> functionalities.
> If there is no strong supporter for this usecase, we shouldn't do it.
> We should take simplest implementation for our focused usecase.
>
 so it is enough if we have a config security_group_mode=(openflow or
 iptables) in OVS MD configuration, then update vif_security based on
 this value.
>>
>> This is certainly one way the MD+agent combination could do it. It would
>> require some RPC to transmit the choice of driver or mode to the agent.
>> But I really don't think the MD and server have any business worrying
>> about which firewall driver class runs in the L2 agent. Theoretically,
>> the agent could be written in java;-). And don't forget that users may
>> want to plug in a custom firewall driver class instead.
>>
>> I think these are the options, in my descending or of current preference:
>>
>> 1) Configure firewall_driver only in the agent and pass vif_security
>> from the agent to the server. Each L2 agent gets the vif_security value
>> from its configured driver and includes it in the agents_db RPC data.
>> The MD copies the vif_security value from the agents_db to the port
>> dictionary.
>>
>> 2) Configure firewall_driver only in the agent but the hardwire
>> vif_security value for each MD. This is a reasonable short term solution
>> until we actually have multiple firewall drivers that can work with
>> singl

Re: [openstack-dev] [neutron] ML2 vlan type driver does not honor network_vlan_ranges

2014-01-20 Thread Xuhan Peng
In my opinion the provider network extension can also be used for mapping
the tenant network directly to the physical network. For example, as shown
in the official admin guide openvswitch scenario1 [1], we can configure
tenant network to use segmentation id 101 to connect to VLAN 101 of
physical switch.

$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101

For this kind of use case, I think it makes sense to enforce the
segmentation id in the range of network_vlan_range in ml2_conf.ini

[1]
http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html#under_the_hood_openvswitch_scenario1




On Fri, Jan 17, 2014 at 5:31 AM, Henry Gessau  wrote:

> network_vlan_ranges is a 'pool' of vlans from which to pick vlans for
> tenant networks. Provider networks are not confined to this pool. In fact,
> I
> believe it is a more common use-case that provider vlans are outside the
> pool so that they do not conflict with tenant vlan allocation.
>
> -- Henry
>
> On Thu, Jan 16, at 3:45 pm, Paul Ward  wrote:
>
> > In testing some new function I've written, I've surfaced the problem
> > that
> > the ML2 vlan type driver does not enforce the vlan range specified in the
> > network_vlan_ranges option in ml2_conf.ini file.  It is properly
> enforcing
> > the physical network name, and even checking to be sure the
> segmentation_id
> > is valid in the sense that it's not outside the range of ALL valid vlan
> ids.
> >  But it does not actually enforce that segmentation_id is within the vlan
> > range specified for the given physical network in network_vlan_ranges.
> >
> > The fix I propose is simple.  Add the following check to
> > /neutron/plugins/ml2/drivers/type_vlan.py
> > (TypeVlanDriver.validate_provider_segment()):
> >
> > range_min, range_max = self.network_vlan_ranges[physical_network][0]
> > # use an inclusive comparison: range() excludes its upper bound,
> > # which would wrongly reject segmentation_id == range_max
> > if not (range_min <= segmentation_id <= range_max):
> >     msg = (_("segmentation_id out of range (%(min)s through "
> >              "%(max)s)") %
> >            {'min': range_min,
> >             'max': range_max})
> >     raise exc.InvalidInput(error_message=msg)
> >
> > This would go near line 182 in
> >
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_vlan.py
> .
> >
> > One question I have is whether self.network_vlan_ranges[physical_network]
> > could actually be an empty list rather than a tuple representing the vlan
> > range.  I believe that should always exist, but the documentation is not
> > clear on this.  For reference, the corresponding line in ml2_conf.ini is
> this:
> >
> > [ml2_type_vlan]
> > network_vlan_ranges = default:1:4093
> >
> > Thanks in advance to any that choose to provide some insight here!
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-20 Thread Irena Berezovsky
Hi,
Having had a post-PCI-meeting discussion with Ian based on his proposal 
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit?pli=1#,
I am not sure that the case most relevant for SR-IOV based networking is 
covered well by this proposal. The understanding I got is that a VM can land 
on a host that lacks a suitable PCI resource.
Can you please provide an example of the required cloud-admin PCI-related 
configuration on the nova-compute and controller nodes with regard to the 
following simplified scenario:
 -- There are 2 provider networks (phy1, phy2), each with an associated range 
of vlan-ids
 -- Each compute node has 2 vendor adapters with the SR-IOV feature enabled, 
exposing xx Virtual Functions.
 -- Every VM vnic on a virtual network on provider network phy1 or phy2 
should be a PCI pass-through vnic. 

Thanks a lot,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Saturday, January 18, 2014 12:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Yunhong,

I'm hoping that these comments can be directly addressed:
  a practical deployment scenario that requires arbitrary attributes.
  detailed design on the following (that also take into account the 
introduction of predefined attributes):
* PCI stats report since the scheduler is stats based
* the scheduler in support of PCI flavors with arbitrary attributes and 
potential overlapping.
  networking requirements to support multiple provider nets/physical nets

I guess that the above will become clear as the discussion goes on. And we 
also need to define the deliverables.
 
Thanks,
Robert

On 1/17/14 2:02 PM, "Jiang, Yunhong"  wrote:

>Robert, thanks for your long reply. Personally I'd prefer option 2/3 as 
>it keep Nova the only entity for PCI management.
>
>Glad you are ok with Ian's proposal and we have solution to resolve the 
>libvirt network scenario in that framework.
>
>Thanks
>--jyh
>
>> -Original Message-
>> From: Robert Li (baoli) [mailto:ba...@cisco.com]
>> Sent: Friday, January 17, 2014 7:08 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through 
>> network support
>> 
>> Yunhong,
>> 
>> Thank you for bringing that up on the live migration support. In 
>>addition  to the two solutions you mentioned, Irena has a different 
>>solution. Let me  put all the them here again:
>> 1. network xml/group based solution.
>>In this solution, each host that supports a provider 
>>net/physical  net can define a SRIOV group (it's hard to avoid the 
>>term as you can see  from the suggestion you made based on the PCI 
>>flavor proposal). For each  SRIOV group supported on a compute node, A 
>>network XML will be  created the  first time the nova compute service 
>>is running on that node.
>> * nova will conduct scheduling, but not PCI device allocation
>> * it's a simple and clean solution, documented in libvirt as 
>>the  way to support live migration with SRIOV. In addition, a network 
>>xml is  nicely mapped into a provider net.
>> 2. network xml per PCI device based solution
>>This is the solution you brought up in this email, and Ian  
>>mentioned this to me as well. In this solution, a network xml is 
>>created  when A VM is created. the network xml needs to be removed 
>>once the  VM is  removed. This hasn't been tried out as far as I  
>>know.
>> 3. interface xml/interface rename based solution
>>Irena brought this up. In this solution, the ethernet 
>>interface  name corresponding to the PCI device attached to the VM 
>>needs to be  renamed. One way to do so without requiring system reboot 
>>is to change  the  udev rule's file for interface renaming, followed 
>>by a udev reload.
>> 
>> Now, with the first solution, Nova doesn't seem to have control over 
>>or  visibility of the PCI device allocated for the VM before the VM is  
>>launched. This needs to be confirmed with the libvirt support and see 
>>if  such capability can be provided. This may be a potential drawback 
>>if a  neutron plugin requires detailed PCI device information for operation.
>> Irena may provide more insight into this. Ideally, neutron shouldn't 
>>need  this information because the device configuration can be done by 
>>libvirt  invoking the PCI device driver.
>> 
>> The other two solutions are similar. For example, you can view the 
>>second  solution as one way to rename an interface, or camouflage an 
>>interface  under a network name. They all require additional works 
>>before the VM is  created and after the VM is removed.
>> 
>> I also agree with you that we should take a look at XenAPI on this.
>> 
>> 
>> With regard to your suggestion on how to implement the first solution 
>>with  some predefined group attribute, I think it defi

Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-01-20 Thread Jaromir Coufal

Hello everybody,

based on the feedback I received last week, I am sending updated 
wireframes. They are still not completely final; more use cases and 
smaller updates will come, but I believe that we are moving forward 
pretty well.


http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-20_tripleo-ui-icehouse.pdf

What has changed?
* 'Architecture' dropdown was added for all node descriptions
* New views for Deployed and Free nodes
* Removed Configuration part from Deployment Overview page (will be 
happening under Configuration tab (under construction))

* Added progressing page of overcloud being deployed + Deployment Log
* Added Overcloud Horizon UI link to Deployment Overview page
* Added view for down-scaling (need more work)
* Added Implementation guide for developers

New versions of the wireframes, supporting other use cases, will appear in 
time, but I hope without huge changes.


Cheers
-- Jarda

On 2014/16/01 01:50, Jaromir Coufal wrote:

Hi folks,

thanks everybody for feedback. Based on that I updated wireframes and
tried to provide a minimum scope for Icehouse timeframe.

http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-16_tripleo-ui-icehouse.pdf


Hopefully we are able to deliver described set of features. But if you
find something what is missing which is critical for the first release
(or that we are implementing a feature which should not have such high
priority), please speak up now.

The wireframes are very close to implementation. In time, there will
appear more views and we will see if we can get them in as well.

Thanks all for participation
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev