Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-09 Thread Daniel Kuffner
Hi Swapnil,

Looks like the docker-registry image is broken, since it cannot find
run.sh inside the container.

 2014/01/09 06:36:15 Unable to locate ./docker-registry/run.sh

Maybe you could try to remove and re-import the image


docker rmi docker-registry

and then execute

./tools/docker/install_docker.sh

again.
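If it still fails after that, a quick sanity check (just a sketch, assuming
the image is Ubuntu-based and ships find) is to look for run.sh inside the
re-imported image:

# confirm the image was re-imported
sudo docker images | grep docker-registry
# search the container filesystem for run.sh, since the expected path is unclear
sudo docker run docker-registry find / -name run.sh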



On Thu, Jan 9, 2014 at 7:42 AM, Swapnil Kulkarni
swapnilkulkarni2...@gmail.com wrote:
 Hi Eric,

 I tried running the 'docker run' command without -d and it gets the
 following error:

 $ sudo docker run -d=false -p 5042:5000 -e SETTINGS_FLAVOR=openstack -e
 OS_USERNAME=admin -e OS_PASSWORD=password -e OS_TENANT_NAME=admin -e
 OS_GLANCE_URL=http://127.0.0.1:9292 -e
 OS_AUTH_URL=http://127.0.0.1:35357/v2.0 docker-registry
 ./docker-registry/run.sh
 lxc-start: No such file or directory - stat(/proc/16438/root/dev//console)
 2014/01/09 06:36:15 Unable to locate ./docker-registry/run.sh

 On the other hand,

 If I run the failing command with -d just after stack.sh fails, it works
 fine:

 sudo docker run -d -p 5042:5000 -e SETTINGS_FLAVOR=openstack -e
 OS_USERNAME=admin -e OS_PASSWORD=password -e OS_TENANT_NAME=admin -e
 OS_GLANCE_URL=http://127.0.0.1:9292 -e
 OS_AUTH_URL=http://127.0.0.1:35357/v2.0 docker-registry
 ./docker-registry/run.sh
 5b737f8d2282114c1a0cfc4f25bc7c9ef8c5da7e0d8fa7ed9ccee0be81cddafc

 Best Regards,
 Swapnil


 On Wed, Jan 8, 2014 at 8:29 PM, Eric Windisch e...@windisch.us wrote:

 On Tue, Jan 7, 2014 at 11:13 PM, Swapnil Kulkarni
 swapnilkulkarni2...@gmail.com wrote:

 Let me know in case I can be of any help getting this resolved.


 Please try running the failing 'docker run' command manually and without
 the '-d' argument. I've been able to reproduce  an error myself, but wish to
 confirm that this matches the error you're seeing.

 Regards,
 Eric Windisch

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Intermittent failure of tempest test test_network_basic_ops

2014-01-09 Thread Salvatore Orlando
I am afraid I need to correct you Jay!

This actually appears to be bug 1253896 [1]

Technically, what we call 'bug' here is actually a failure manifestation.
So far, we have removed several bugs causing this failure. The last patch
was pushed to devstack around Christmas.
Nevertheless, if you look at recent comments and Joe's email, we still have
a non-negligible failure rate on the gate.

It is also worth mentioning that if you are running your tests with
parallelism enabled (i.e. you're running tempest with tox -esmoke rather
than tox -esmokeserial) you will end up with a higher occurrence of this
failure, because more bugs can trigger it. These bugs are due to some
weaknesses in the OVS agent that we are addressing with patches for the
blueprint neutron-tempest-parallel [2].
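For a quick local comparison, both invocations mentioned above are run from
the tempest tree (a rough sketch, assuming a devstack checkout under
/opt/stack):

cd /opt/stack/tempest
# parallel smoke run -- more likely to trip over the OVS agent races
tox -esmoke
# serial smoke run -- the failure shows up less often here
tox -esmokeserial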

Regards,
Salvatore


[1] https://bugs.launchpad.net/neutron/+bug/1253896
[2] https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel


On 9 January 2014 05:38, Jay Pipes jaypi...@gmail.com wrote:

 On Wed, 2014-01-08 at 18:46 -0800, Sukhdev Kapur wrote:
  Dear fellow developers,

  I am running a few Neutron tempest tests and noticing an intermittent
  failure of tempest.scenario.test_network_basic_ops.

  I ran this test 50+ times and am getting intermittent failures. The
  pass rate is approx. 70%. The other 30% of the time it fails mostly in
  _check_public_network_connectivity.

  Has anybody seen this?
  If there is a fix or work around for this, please share your wisdom.

 Unfortunately, I believe you are running into this bug:

 https://bugs.launchpad.net/nova/+bug/1254890

 The bug is Triaged in Nova (meaning, there is a suggested fix in the bug
 report). It's currently affecting the gate negatively and is certainly
 on the radar of the various PTLs affected.

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][documentation][devstack] Confused about how to set up a Nova development environment

2014-01-09 Thread Mike Spreitzer
I am trying to become a bit less of a newbie, and having a bit of 
difficulty with basics.  Following are some questions, and reviews of the 
relevant documentation that I have been able to find (I am trying to 
contribute to documentation as well as solve my setup problems).  My 
driving question is: how do I set up a development environment?  Included 
is this question: what sort(s) of testing should I plan on doing in this 
environment?  And: what sorts of testing does OpenStack define?

I started at http://www.openstack.org/ and looked at the links in the 
Documentation section of the footer and also followed the Documentation 
link in the header.  Those lead to 
https://wiki.openstack.org/wiki/HowToContribute and 
http://www.openstack.org/software/start/ --- which give different answers 
to my question.  The nova project itself has a top level file named 
HACKING.rst that gives yet another answer regarding testing.  I have found 
mentions of unit, functional, and operational testing, but have not 
found any page that sets out to list/define all the categories of testing; 
should there be?


=
https://wiki.openstack.org/wiki/HowToContribute
=

The HowToContribute page's most relevant links are to 
https://wiki.openstack.org/wiki/GerritWorkflow and 
https://wiki.openstack.org/wiki/GettingTheCode .  GerritWorkflow says I 
should run unit tests, but does not describe how; it tells me I can get 
the code by doing a `git clone` but says nothing more about setting up my 
environment for running and testing.

Getting_The_Code is more expansive.  In addition to telling me how to `git 
clone` a project, it also takes a stab at telling me how to get the 
dependencies for Nova and for Swift; what are people working on other 
projects to do?  I looked into the Nova remarks.  There are two links: one 
for Ubuntu (https://wiki.openstack.org/wiki/DependsOnUbuntu) and one for 
MacOS X (https://wiki.openstack.org/wiki/DependsOnOSX).  Those two are 
surprisingly different.  The instructions for the Mac are quite simple: 
`python tools/install_venv.py` and `./run_tests.sh`, plus one more command 
to install RabbitMQ if I want to support running more than the unit tests. 
 The instructions for Ubuntu, on the other hand, are much lengthier and a 
bit less ambitious (they seem to end with the unit tests, no discussion of 
more general running).  Why the difference?  Why should 
tools/install_venv.py not be used on Ubuntu?
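For what it is worth, the Mac instructions reduce to roughly the following 
(a sketch only, assuming the git.openstack.org mirror mentioned below): 

git clone https://git.openstack.org/openstack/nova
cd nova
# build a virtualenv containing nova's dependencies
python tools/install_venv.py
# run the unit tests inside that virtualenv
./run_tests.sh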

Are the instructions for MacOS X for real?  Looking in the Install 
OpenStack section of http://docs.openstack.org/ I see no hint that 
OpenStack can run on MacOS X.  Perhaps those instructions make sense 
because the unit tests should actually work on MacOS X?  (I tested that, 
and got 8 failures in the nova unit tests on my Mac.)

https://wiki.openstack.org/wiki/HowToContribute also refers to a video (
http://www.youtube.com/watch?v=mT2yC6ll5Qk&feature=youtu.be), and the 
video content shows several URLs:

http://wiki.openstack.org/GitCommitMessages
http://wiki.openstack.org/HowToContribute
http://wiki.openstack.org/DevQuickStart
http://www.scribd.com/doc/117185088/Introduction-to-OpenStack
http://docs.openstack.org/
http://devstack.org/

The most interesting-looking one of those that does not appear elsewhere, 
http://wiki.openstack.org/DevQuickStart, is now content-free.

I myself would feel more comfortable if I could run more than unit tests 
in my development environment.  I would like to go all the way to some 
full system tests.


=
http://www.openstack.org/software/start/
=

http://www.openstack.org/software/start/ leads directly to 
http://devstack.org/ --- which is surprising because it skips over the 
OpenStack wiki page (https://wiki.openstack.org/wiki/DevStack) that 
introduces DevStack.  The wiki page gives a mission statement and 
description for DevStack.  The mission statement is "provide and maintain 
tools used for the installation of the central OpenStack services from 
source suitable for development and operational testing"; the description 
is "an opinionated script to quickly create an OpenStack development 
environment".  Reading the text literally, it does not define what sort(s) 
of testing are included in development.  Since GerritWorkflow told me 
that the normal workflow includes unit tests, I could reasonably conclude 
that DevStack sets me up to run unit tests.  In particular, one could take 
these remarks to mean that DevStack will install the dependencies listed 
in test-requirements.txt.  But that is not actually true.
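Concretely, the DevStack route looks roughly like this (a sketch; the last 
step is the part DevStack does not do for you): 

# grab and run DevStack (an expendable VM is strongly recommended)
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
./stack.sh
# unit test dependencies still come from the project's own tooling
cd /opt/stack/nova
pip install -r test-requirements.txt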

I tested a DevStack install, and found it gave me a nova/.git connected to 
the nova project at git.openstack.org (one could take this as implied by 
the remarks that DevStack sets up a development environment).  I presume I 
can enhance this with the more personal 

Re: [openstack-dev] [neutron] Implement NAPT in neutron (https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)

2014-01-09 Thread Martinx - ジェームズ
Hi!

From an operator point of view, I think it would be nice to give the
FWaaS (IPv4 flavor) the ability to manage the tenant's NAT table, not only
the filter table as it does today.

In fact, I don't know if it is out of the scope of FWaaS or not; it is just
an idea I had. Right now, I need to create the so-called NAT Instance,
with a Floating IPv4 attached to it and a DNAT rule for each internal
service that I need to open to the Internet... It is terrible, BTW, but
that is IPv4 thinking... (Can't wait for IPv6 in Icehouse to kiss NAT
goodbye!) Today, each tenant must have at least two valid IPv4 addresses:
one for the router's gateway and another for the NAT Instance
(because FWaaS (or something else) doesn't handle the Tenant
Router/Namespace NAT table).

So, if the tenant can manage its own Firewall-IPv4-NAT table, right there in
its own namespace router, then each tenant will require only one valid
Floating IPv4: the one that comes when the router is connected to the
External Network (from the allocation pool anyway)... Less waste of valid
IPv4 addresses.
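For illustration, this is the kind of NAT entry involved; today it has to be
crafted by hand (inside the NAT Instance, or by an operator in the router
namespace), and all names and addresses below are placeholders:

# forward port 80 on the floating IPv4 to an internal web server
sudo ip netns exec qrouter-<router-id> \
    iptables -t nat -A PREROUTING -d <floating-ipv4> -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.0.5:80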

Regards,
Thiago


On 8 January 2014 13:36, Dong Liu willowd...@gmail.com wrote:


 On Jan 8, 2014, at 20:24, Nir Yechiel nyech...@redhat.com wrote:

 Hi Dong,

  Can you please clarify this blueprint? Currently in Neutron, if an
  instance has a floating IP, then that will be used for both inbound and
  outbound traffic. If an instance does not have a floating IP, it can make
  connections out using the gateway IP (SNAT using PAT/NAT overload). Is
  the idea in this blueprint to implement PAT in both directions using
  only the gateway IP? Also, did you see this one [1]?

 Thanks,
 Nir

 [1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding



  I think my idea is a duplicate of this one.
 https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping

 Sorry for missing this.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Spark plugin status

2014-01-09 Thread Sergey Lukjanov
Hi,

I'm really glad to hear that!

Answers inlined.

Thanks.


On Thu, Jan 9, 2014 at 11:33 AM, Daniele Venzano daniele.venz...@eurecom.fr
 wrote:

 Hello,

 we are finishing up the development of the Spark plugin for Savanna.
 In the next few days we will deploy it on an OpenStack cluster with real
 users to iron out the last few things. Hopefully next week we will put the
 code on a public github repository in beta status.

[SL] Awesome! Could you please share some info about this installation if
possible? Like OpenStack cluster version and size, Savanna version,
expected Spark cluster sizes and lifecycle, etc.


 You can find the blueprint here:
 https://blueprints.launchpad.net/savanna/+spec/spark-plugin

  There are two things we need to release: the VM image and the code itself.
  For the image we created one ourselves, and for the code we used the
  Vanilla plugin as a base.

[SL] You can use diskimage-builder [0] to prepare such images, we're
already using it for building images for vanilla plugin [1].
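For example, building an image is roughly the following (a sketch only; the
'spark' element named below does not exist yet and is purely illustrative):

git clone https://github.com/openstack/diskimage-builder
git clone https://github.com/openstack/savanna-image-elements
# make the savanna elements visible to diskimage-builder
export ELEMENTS_PATH=savanna-image-elements/elements
# build an Ubuntu VM image with the (hypothetical) spark element baked in
diskimage-builder/bin/disk-image-create ubuntu vm spark -o spark-image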


 We feel that our work could be interesting for others and we would like to
 see it integrated in Savanna. What is the best way to proceed?

[SL] Absolutely, it's a very interesting tool for data processing. IMO the
best way is to create a change request to savanna for code review and
discussion in gerrit; it'll really be the most effective way to
collaborate. As for the best way of integrating with Savanna, we're
expecting to see it in the openstack/savanna repo like the vanilla, HDP and
IDH (which will land soon) plugins.


 We did not follow the Gerrit workflow until now because development
 happened internally.
 I will prepare the repo on github with git-review and reference the
 blueprint in the commit. After that, do you prefer that I send immediately
 the code for review or should I send a link here on the mailing list first
 for some feedback/discussion?

[SL] It'll be better to immediately send the code for review.
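The mechanics are the usual gerrit flow, roughly (the branch name below is
just an example):

pip install git-review
cd savanna
git review -s              # set up the gerrit remote
git checkout -b spark-plugin
# commit with the blueprint referenced in the commit message, then submit:
git review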


 Thank you,
 Daniele Venzano, Hoang Do and Vo Thanh Phuc

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[0] https://github.com/openstack/diskimage-builder
[1] https://github.com/openstack/savanna-image-elements

Please, feel free to ping me if some help needed with gerrit or savanna
internals stuff.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Implement NAPT in neutron (https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)

2014-01-09 Thread Nir Yechiel


- Original Message -

From: Dong Liu willowd...@gmail.com 
To: Nir Yechiel nyech...@redhat.com 
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Sent: Wednesday, January 8, 2014 5:36:14 PM 
Subject: Re: [neutron] Implement NAPT in neutron 
(https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api) 


On Jan 8, 2014, at 20:24, Nir Yechiel  nyech...@redhat.com  wrote: 




Hi Dong, 

Can you please clarify this blueprint? Currently in Neutron, if an instance has 
a floating IP, then that will be used for both inbound and outbound traffic. If 
an instance does not have a floating IP, it can make connections out using the 
gateway IP (SNAT using PAT/NAT overload). Is the idea in this blueprint to 
implement PAT in both directions using only the gateway IP? Also, did you see 
this one [1]? 

Thanks, 
Nir 

[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding 





I think my idea is a duplicate of this one. 
https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping 

Sorry for missing this. 

[Nir] Thanks, I wasn't familiar with this one. So is there a difference between 
those three? 

https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding 
https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping 
https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api 

Looks like all of them are trying to solve the same challenge using the public 
gateway IP and PAT. 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discuss the option delete_on_termination

2014-01-09 Thread 黎林果
Oh, I see. Thank you very much.
It's just hard-coded for attaching volumes and swapping volumes.

How should we deal with the bp:
https://blueprints.launchpad.net/nova/+spec/add-delete-on-termination-option
?

2014/1/9 Christopher Yeoh cbky...@gmail.com:
 On Thu, Jan 9, 2014 at 2:35 PM, 黎林果 lilinguo8...@gmail.com wrote:

 Hi Chris,
 Thanks for you reply.

  It's not only hard-coded for swap volumes. In the function
  '_create_instance' of nova/compute/api.py, which creates instances,
  the '_prepare_image_mapping' function will be called, and it hard-codes
  the value to True, too.

 values = block_device.BlockDeviceDict({
 'device_name': bdm['device'],
 'source_type': 'blank',
 'destination_type': 'local',
 'device_type': 'disk',
 'guest_format': guest_format,
 'delete_on_termination': True,
 'boot_index': -1})


 Just before that in _prepare_image_mapping is:

 if virtual_name == 'ami' or virtual_name == 'root':
 continue

 if not block_device.is_swap_or_ephemeral(virtual_name):
 continue


 Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discuss the option delete_on_termination

2014-01-09 Thread Flavio Percoco

Just a gentle reminder. Please, remember to the subject of the emails.

Cheers,
FF

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-09 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)


From: ext Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
Sent: Wednesday, January 08, 2014 10:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer



On Wed, Jan 8, 2014 at 3:09 PM, Kodam, Vijayakumar (EXT-Tata Consultancy Ser - 
FI/Espoo) vijayakumar.kodam@nsn.com 
wrote:

 
 
 From: ext Doug Hellmann 
  [doug.hellm...@dreamhost.com]
 Sent: Wednesday, January 08, 2014 8:26 PM

 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 
 
 On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa 
  ildiko.van...@ericsson.com wrote:
 
 Hi Doug,
 
 Answers inline again.
 
 Best Regards,
 
 Ildiko
 
 
 On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
  ildiko.van...@ericsson.com wrote:
 
 Hi,
 
 I've started to work on the idea of supporting a kind of tenant/project
  based configuration for Ceilometer. Unfortunately I haven't reached
  the point of having a blueprint that could be registered until now.
  I do not have a deep knowledge about the collector and compute agent
  services, but this feature would require some deep changes for sure.
  Currently there are pipelines for data collection and transformation,
  where the counters can be specified, about which data should be
  collected and also the time interval for data collection and so on.
  These pipelines can be configured now globally in the pipeline.yaml file,
  which is stored right next to the Ceilometer configuration files.
 
 Yes, the data collection was designed to be configured and controlled by
  the deployer, not the tenant. What benefits do we gain by giving that
  control to the tenant?
 
 ildikov: Sorry, my explanation was not clear. I meant there the configuration
  of data collection for projects, what was mentioned by Tim Bell in a
  previous email. This would mean that the project administrator is able to
  create a data collection configuration for his/her own project, which will
  not affect the other project's configuration. The tenant would be able to
  specify meters (enabled/disable based on which ones are needed) for the given
  project also with project specific time intervals, etc.
 
 OK, I think some of the confusion is terminology.
 Who is a project administrator? Is that someone with access to change
  ceilometer's configuration file directly? Someone with a particular role
  using the API? Or something else?
 
 ildikov: As project administrator I meant a user with particular role,
  a user assigned to a tenant.
 
 
 OK, so like I said, we did not design the system with the idea that a
  user of the cloud (rather than the deployer of the cloud) would have
  any control over what data was collected. They can ask questions about
  only some of the data, but they can't tell ceilometer what to collect.
 There's a certain amount of danger in giving the cloud user
  (no matter their role) an off switch for the data collection.
 
  As Julien pointed out, it can have a negative effect on billing
  -- if they tell the cloud not to collect data about what instances
  are created, then the deployer can't bill for those instances.
  Differentiating between the values that always must be collected and
  the ones the user can control makes providing an API to manage data
  collection more complex.
 
 Is there some underlying use case behind all of this that someone could
  describe in more detail, so we might be able to find an alternative, or
  explain how to use the existing features to achieve the goal?
 
  For example, it is already possible to change the pipeline config file
  to control which data is collected and stored.
  If we make the pipeline code in ceilometer watch for changes to that file,
  and rebuild the pipelines when the config is updated,
  would that satisfy the requirements?
 

Yes, that's exactly the requirement for our blueprint: to avoid a ceilometer 
restart for changes to take effect when the config file changes.
API support was added later based on the request in this mail chain. We 
actually don't need the APIs, and they can be removed.

So as you mentioned above, whenever the config file is changed, ceilometer 
should update the meters accordingly.

OK, I think that's something reasonable to implement, although I would have to 
look at the collector to make sure we could rebuild the pipelines safely 
without losing any data as more messages come in. But it should be possible, if 
not easy. :-)

The blueprint should be updated to reflect this approach.

Doug

Thanks Doug.
I shall update the blueprint accordingly.

VijayKumar


 
 
 In my view, we could keep the dynamic meter configuration bp with considering
  to extend it to dynamic 

Re: [openstack-dev] [Neutron] Allow multiple subnets on gateway port for router

2014-01-09 Thread Nir Yechiel
Hi Randy, 

I don't have a specific use case. I just wanted to understand the scope here as 
the name of this blueprint (allow multiple subnets on gateway port for 
router) could be a bit misleading. 

Two questions I have though: 

1. Is this talking specifically about the gateway port to the provider's 
next-hop router or relevant for all ports in virtual routers as well? 
2. There is a fundamental difference between v4 and v6 address assignment. With 
IPv4 I agree that one IP address per port is usually enough (there is the 
concept of secondary IP, but I am not sure it's really common). With IPv6 
however you can certainly have more than one (global) IPv6 address on an interface. 
Shouldn't we support this? 


Thanks, 
Nir 

- Original Message -

From: Randy Tuttle randy.m.tut...@gmail.com 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Cc: rantu...@cisco.com 
Sent: Tuesday, December 31, 2013 6:43:50 PM 
Subject: Re: [openstack-dev] [Neutron] Allow multiple subnets on gateway port 
for router 

Hi Nir 

Good question. There's absolutely no reason not to allow more than 2 subnets, 
or even 2 of the same IP versions on the gateway port. In fact, in our POC we 
allowed this (or, more specifically, we did not disallow it). However, for the 
gateway port to the provider's next-hop router, we did not have a specific use 
case beyond an IPv4 and an IPv6. Moreover, in Neutron today, only a single 
subnet is allowed per interface (either v4 or v6). So all we are doing is 
opening up the gateway port to support what it does today (i.e., v4 or v6) plus 
allow IPv4 and IPv6 subnets to co-exist on the gateway port (and same 
network/vlan). Our principle use case is to enable IPv6 in an existing IPv4 
environment. 

Do you have a specific use case requiring 2 or more of the same IP-versioned 
subnets on a gateway port? 

Thanks 
Randy 


On Tue, Dec 31, 2013 at 4:59 AM, Nir Yechiel  nyech...@redhat.com  wrote: 



Hi, 

With regards to 
https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port,
 can you please clarify this statement: "We will disallow more than two 
subnets, and exclude allowing 2 IPv4 or 2 IPv6 subnets". 
The use case for dual-stack with one IPv4 and one IPv6 address associated to 
the same port is clear, but what is the reason to disallow more than two 
IPv4/IPv6 subnets to a port? 

Thanks and happy holidays! 
Nir 



___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 






___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Flavio Percoco

On 08/01/14 17:13 -0800, Nachi Ueno wrote:

Hi folks

OpenStack processes tend to have many config options, and many hosts.
It is a pain to manage these tons of config options.
Centralizing this management helps operation.

We can use Chef or Puppet kinds of tools; however,
sometimes each process depends on the other processes' configuration.
For example, nova depends on neutron configuration, etc.

My idea is to have a config server in oslo.config, and let cfg.CONF get
its config from the server.
This approach has several benefits.

- We can get centralized management without modification to each
project (nova, neutron, etc)
- We can provide a Horizon UI for configuration

This is bp for this proposal.
https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

I would really appreciate any comments on this.



I've thought about this as well. I like the overall idea of having a
config server. However, I don't like the idea of having it within
oslo.config. I'd prefer oslo.config to remain a library.

Also, I think it would be more complex than just having a server that
provides the configs. It'll need authentication like all other
services in OpenStack and perhaps even support of encryption.

I like the idea of a config registry but, as mentioned above, IMHO it
should live under its own project.

That's all I've got for now,
FF

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][3rd Party Testing]Please be familiar with our development process

2014-01-09 Thread Anita Kuno
If you are in a position where you want or need to provide 3rd party
testing, you do yourself (and the rest of us) a great service if you
take the time to learn the OpenStack development process.

One of the best ways to learn the OpenStack development process is to
submit a patch. Advertising: We are looking for patches to tempest all
the time.

There is a wikipage which outlines our gerrit workflow. [0]

If you don't know how to submit a patch to gerrit as a developer, you
will not understand how to correctly interface your 3rd party testing
system with gerrit. So take some time and learn the OpenStack
development process, please.

As you are getting started learning how to contribute, asking questions
in #openstack-101 and #openstack-dev on the freenode server on irc is a
great idea.

This will ensure you have logged into your gerrit account and have ssh
keys in the correct place.
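A quick way to check both (replace <username> with your gerrit username):

# prints the gerrit version if your account and ssh key are set up correctly
ssh -p 29418 <username>@review.openstack.org gerrit version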

Thank you and I look forward to your contributions,
Anita.

[0] https://wiki.openstack.org/wiki/Gerrit_Workflow

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discuss the option delete_on_termination

2014-01-09 Thread Flavio Percoco

On 09/01/14 10:03 +0100, Flavio Percoco wrote:

Just a gentle reminder. Please, remember to the subject of the emails.


Erm, I obviously meant 'to tag the subject'*

Sorry, too sleepy.


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Discuss the option delete_on_termination

2014-01-09 Thread Christopher Yeoh
On Thu, Jan 9, 2014 at 5:00 PM, 黎林果 lilinguo8...@gmail.com wrote:

 Oh, I see. Thank you very much.
  It's just hard-coded for attaching volumes and swapping volumes.

  How should we deal with the bp:

 https://blueprints.launchpad.net/nova/+spec/add-delete-on-termination-option
 ?


So I think the only thing left in your bp would be adding a delete on
terminate option when attaching a volume to an existing server, plus the
novaclient changes. So I'd clean up the blueprint and then set the milestone
target to icehouse-3, which will trigger it to get reviewed. Perhaps
consider whether it's reasonable to just apply this to the V3 API rather
than doing an enhancement for both the V2 and V3 APIs.

Regards,

Chris


 2014/1/9 Christopher Yeoh cbky...@gmail.com:
  On Thu, Jan 9, 2014 at 2:35 PM, 黎林果 lilinguo8...@gmail.com wrote:
 
  Hi Chris,
  Thanks for you reply.
 
   It's not only hard-coded for swap volumes. In the function
   '_create_instance' of nova/compute/api.py, which creates instances,
   the '_prepare_image_mapping' function will be called, and it hard-codes
   the value to True, too.
 
  values = block_device.BlockDeviceDict({
  'device_name': bdm['device'],
  'source_type': 'blank',
  'destination_type': 'local',
  'device_type': 'disk',
  'guest_format': guest_format,
  'delete_on_termination': True,
  'boot_index': -1})
 
 
  Just before that in _prepare_image_mapping is:
 
  if virtual_name == 'ami' or virtual_name == 'root':
  continue
 
  if not block_device.is_swap_or_ephemeral(virtual_name):
  continue
 
 
  Chris
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Implement NAPT in neutron (https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)

2014-01-09 Thread shihanzhang


I think that these two BPs achieve the same function; it is very necessary to 
implement this function!
https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api


At 2014-01-09 16:56:20,Nir Yechiel nyech...@redhat.com wrote:





From: Dong Liu willowd...@gmail.com
To: Nir Yechiel nyech...@redhat.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, January 8, 2014 5:36:14 PM
Subject: Re: [neutron] Implement NAPT in neutron 
(https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)





On Jan 8, 2014, at 20:24, Nir Yechiel nyech...@redhat.com wrote:


Hi Dong,



Can you please clarify this blueprint? Currently in Neutron, if an instance has 
a floating IP, then that will be used for both inbound and outbound traffic. If 
an instance does not have a floating IP, it can make connections out using the 
gateway IP (SNAT using PAT/NAT overload). Is the idea in this blueprint to 
implement PAT in both directions using only the gateway IP? Also, did you see 
this one [1]?



Thanks,

Nir



[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding





I think my idea is a duplicate of this one. 
https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping



Sorry for missing this.


[Nir] Thanks, I wasn't familiar with this one. So is there a difference between 
those three?

https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding

https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping

https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api


Looks like all of them are trying to solve the same challenge using the public 
gateway IP and PAT.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Windows Support

2014-01-09 Thread Steven Hardy
Hi Winson,

On Wed, Jan 08, 2014 at 08:41:16PM +, Chan, Winson C wrote:
 Does anybody know if this blueprint is being actively worked on?  
 https://blueprints.launchpad.net/heat/+spec/windows-instances  If this is not 
 active, can I take ownership of this blueprint?  My team wants to add support 
 for Windows in Heat for our internal deployment.

Ha, that BP has been unassigned for nearly a year, then two folks want to
take it on the same day, what are the chances! :)

Alex Pilotti pinged me on IRC yesterday asking about it, and offered to
take ownership of the BP, so I assigned it to him.

That said, I'm pretty sure there is scope for breaking down the work so you
can take on some tasks - we just need to evaluate what needs to be done and
raise some child blueprints so the effort can be distributed.

The steps I can think of, unless they have already been done by folks:
- Evaluate bootstrap agent (I'd assumed cloudbase-init would work, which
  Alex indicated was the case yesterday) with Heat generated userdata.
- Figure out if we have path issues in userdata/part-handler which need
  resolving
- Work out what we do with heat-cfntools:
- Add support for windows?
- Figure out a way to work with a fork of cfnbootstrap (which already
  works on windows I think (ref 
https://bugs.launchpad.net/heat/+bug/1103811)
- Support some other method for secondary post-boot customization (e.g
  just use cloudbase-init, or integrate with some other existing agent)
- Document preparation of a Heat-enabled Windows image
- Windows example templates and user documentation

There's probably more stuff I haven't considered - hopefully you can
connect with Alex, work out a way to divide the effort and raise some new
BPs?

To me the biggest unknown is the in-instance agent thing, but tbh I've not
really looked at it in much detail, so I'd be happy to hear people's thoughts
and experiences.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Implementing VPNaas in Openstack Grizzly release

2014-01-09 Thread Ashwini Babureddy
Hi,

I am trying to implement VPNaaS in the OpenStack Grizzly release (2013.1) by taking 
the Havana release as a reference. This is basically a single-node setup, 
following the link below:
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/master/OpenStack_Grizzly_Install_Guide.rst


Currently, all the VPN-related files from Havana have been moved to Grizzly as follows:

* /quantum/services/vpn/*

* /quantum/db/vpn/*

* /quantum/extensions/vpnaas.py

* /etc/quantum/vpn_agent.ini

* /etc/quantum/quantum.conf - service_plugins = 
quantum.services.vpn.plugin.VPNPlugin

* /quantumclient/quantum/v2_0/vpn/*

* Installed Openswan

* Made changes in /quantumclient/shell.py

* /usr/bin/quantum-vpn-agent

* /etc/init.d/quantum-plugin-vpn-agent

* /etc/init/quantum-plugin-vpn-agent.conf

Current status:

* Commands running successfully

o   Vpn-ikepolicy-create/list/delete

o   Vpn-ipsecpolicy-create/list/delete

o   Vpn-service-create/list/delete

* Ipsec-site-connection-create command is failing with an HTTP Error. 
[Request Failed: internal server error while processing your request.]

* /var/log/quantum/vpn-agent.log has logs as follows:

2014-01-09 23:32:30ERROR [quantum.agent.l3_agent] Failed synchronizing 
routers : _sync_routers_task

Traceback (most recent call last):

  File /usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py, line 694, 
in _sync_routers_task

self._process_routers(routers, all_routers=True)

  File /usr/lib/python2.7/dist-packages/quantum/services/vpn/agent.py, line 
150, in _process_routers

device.sync(self.context, routers)

  File 
/usr/lib/python2.7/dist-packages/quantum/openstack/common/lockutils.py, line 
242, in inner

retval = f(*args, **kwargs)

  File 
/usr/lib/python2.7/dist-packages/quantum/services/vpn/device_drivers/ipsec.py,
 line 652, in sync

context, self.host)

  File 
/usr/lib/python2.7/dist-packages/quantum/services/vpn/device_drivers/ipsec.py,
 line 453, in get_vpn_services_on_host

topic=self.topic)

  File 
/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/proxy.py, line 
80, in call

return rpc.call(context, self._get_topic(topic), msg, timeout)

  File 
/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/__init__.py, 
line 140, in call

return _get_impl().call(CONF, context, topic, msg, timeout)

  File 
/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/impl_kombu.py, 
line 798, in call

rpc_amqp.get_connection_pool(conf, Connection))

  File /usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py, 
line 613, in call

rv = list(rv)

  File /usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py, 
line 555, in __iter__

self.done()

  File /usr/lib/python2.7/contextlib.py, line 24, in __exit__

self.gen.next()

  File /usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py, 
line 552, in __iter__

self._iterator.next()

  File 
/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/impl_kombu.py, 
line 648, in iterconsume

yield self.ensure(_error_callback, _consume)

File 
/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/impl_kombu.py, 
line 566, in ensure

error_callback(e)

  File 
/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/impl_kombu.py, 
line 629, in _error_callback

raise rpc_common.Timeout()

Timeout: Timeout while waiting on RPC response.

2014-01-09 23:32:30  WARNING [quantum.openstack.common.loopingcall] task run 
outlasted interval by 21.531911 sec

Can anyone please help with this issue? Could this issue be due to an incomplete 
quantum-plugin-vpn-agent (as we have no such standard package)?
What else can be done to make this work?
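For anyone trying to reproduce, the failing call looks roughly like this (all
IDs, addresses and the PSK are placeholders; the syntax mirrors the Havana
client code that was ported):

quantum ipsec-site-connection-create \
    --vpnservice-id <vpnservice-id> \
    --ikepolicy-id <ikepolicy-id> \
    --ipsecpolicy-id <ipsecpolicy-id> \
    --peer-address <peer-public-ip> --peer-id <peer-public-ip> \
    --peer-cidr 10.2.0.0/24 --psk <shared-secret>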

Thanks,
Ashwini







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Windows Support

2014-01-09 Thread Alexander Tivelkov
Hi!

This is great news indeed: I am glad to hear that Windows support is coming
to Heat at last!

Meanwhile, you may always take a look at what was done in this area in
Murano: it started as a Windows-deployment tool built on top of Heat, so
we have covered lots of Windows-related issues and have tons of expertise
in the subject. This expertise may not be 100% relevant for pure Heat usage
(as Murano uses its own workflow DSL and VM-side agent), but some of
our steps can definitely be repeated. Feel free to check Murano's docs (
https://wiki.openstack.org/wiki/Murano) and ask any questions on the mailing
list or IRC (#murano).


--
Regards,
Alexander Tivelkov


2014/1/9 Steven Hardy sha...@redhat.com

 Hi Winson,

 On Wed, Jan 08, 2014 at 08:41:16PM +, Chan, Winson C wrote:
  Does anybody know if this blueprint is being actively worked on?
 https://blueprints.launchpad.net/heat/+spec/windows-instances  If this is
 not active, can I take ownership of this blueprint?  My team wants to add
 support for Windows in Heat for our internal deployment.

 Ha, that BP has been unassigned for nearly a year, then two folks want to
 take it on the same day, what are the chances! :)

 Alex Pilotti pinged me on IRC yesterday asking about it, and offered to
 take ownership of the BP, so I assigned it to him.

 That said, I'm pretty sure there is scope for breaking down the work so you
 can take on some tasks - we just need to evaluate what needs to be done and
 raise some child blueprints so the effort can be distributed.

 The steps I can think of, unless they have already been done by folks:
 - Evaluate bootstrap agent (I'd assumed cloudbase-init would work, which
   Alex indicated was the case yesterday) with Heat generated userdata.
 - Figure out if we have path issues in userdata/part-handler which need
   resolving
 - Work out what we do with heat-cfntools:
 - Add support for windows?
 - Figure out a way to work with a fork of cfnbootstrap (which already
   works on windows I think (ref
 https://bugs.launchpad.net/heat/+bug/1103811)
 - Support some other method for secondary post-boot customization (e.g
   just use cloudbase-init, or integrate with some other existing agent)
 - Document preparation of a Heat-enabled Windows image
 - Windows example templates and user documentation

 There's probably more stuff I haven't considered - hopefully you can
 connect with Alex, work out a way to divide the effort and raise some new
 BPs?

 To me the biggest unknown is the in-instance agent thing, but tbh I've not
 really looked at it in much detail so I'd be happy to hear peoples thoughts
 and experiences.

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Spark plugin status

2014-01-09 Thread Daniele Venzano

On 01/09/14 09:41, Sergey Lukjanov wrote:


On Thu, Jan 9, 2014 at 11:33 AM, Daniele Venzano
daniele.venz...@eurecom.fr mailto:daniele.venz...@eurecom.fr wrote:

Hello,

we are finishing up the development of the Spark plugin for Savanna.
In the next few days we will deploy it on an OpenStack cluster with
real users to iron out the last few things. Hopefully next week we
will put the code on a public github repository in beta status.

[SL] Awesome! Could you please share some info about this installation if
possible? Like OpenStack cluster version and size, Savanna version,
expected Spark cluster sizes and lifecycle, etc.


As part of the Bigfoot project that is funding us 
(http://bigfootproject.eu/) we have a research OpenStack cluster with 6 
compute nodes, hopefully with more coming. The machines have 16 CPUs, 32 
with hyperthreading, and 128GB of RAM.


OpenStack is the Ubuntu cloud version (Grizzly 2013.1.4), but Horizon 
and Keystone are on the latest Havana branch versions. It uses KVM and 
the openvswitch plugin for networking.


For Savanna, we stayed with a version from git that was working for us, 
after 0.3, but now a couple of months old. Part of the work I need to do 
is merging with the current Savanna master branch.


We have five users that are interested in running Spark jobs and at 
least one has already been doing so on the Bigfoot platform with a 
cluster created by hand.
We will start with two of them and then let in the others. One will use 
a small cluster with 3 nodes, the other with about ten nodes.
We also plan to run a few tests with various sizes of clusters, mainly 
to measure performance in various conditions.



[SL] You can use diskimage-builder [0] to prepare such images, we're
already using it for building images for vanilla plugin [1].


Yes, I had a quick look and from what I understand we will need to 
modify the scripts that build the images. We will make a separate change 
request for that.



[SL] Absolutely, it's a very interesting tool for data processing. IMO
the best way is to create a change request to savanna for code review
and discussion in gerrit; it'll really be the most effective way to
collaborate. As for the best way of integrating with Savanna, we're
expecting to see it in the openstack/savanna repo like the vanilla, HDP and
IDH (which will land soon) plugins.


Nice! I will contact you when I am ready to create the github repo, so 
that I do it right for the review process.


Thanks,
Daniele

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bogus -1 scores from turbo hipster

2014-01-09 Thread Roman Bogorodskiy
Hi,

Is it the same reason my change fails with turbo-hipster over and over:
https://review.openstack.org/#/c/54492/ ?


On Mon, Dec 30, 2013 at 1:21 PM, Michael Still mi...@stillhq.com wrote:

 Hi.

  The purpose of this email is to apologise for some incorrect -1 review
  scores which turbo hipster sent out today. I think it's important when
  a third-party testing tool is new not to have flaky results as people
 learn to trust the tool, so I want to explain what happened here.

 Turbo hipster is a system which takes nova code reviews, and runs
 database upgrades against them to ensure that we can still upgrade for
 users in the wild. It uses real user datasets, and also times
 migrations and warns when they are too slow for large deployments. It
 started voting on gerrit in the last week.

 Turbo hipster uses zuul to learn about reviews in gerrit that it
 should test. We run our own zuul instance, which talks to the
 openstack.org zuul instance. This then hands out work to our pool of
 testing workers. Another thing zuul does is it handles maintaining a
 git repository for the workers to clone from.

 This is where things went wrong today. For reasons I can't currently
 explain, the git repo on our zuul instance ended up in a bad state (it
 had a patch merged to master which wasn't in fact merged upstream
 yet). As this code is stock zuul from openstack-infra, I have a
 concern this might be a bug that other zuul users will see as well.

 I've corrected the problem for now, and kicked off a recheck of any
 patch with a -1 review score from turbo hipster in the last 24 hours.
 I'll talk to the zuul maintainers tomorrow about the git problem and
 see what we can learn.

 Thanks heaps for your patience.

 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][turbo hipster] unable to rebase

2014-01-09 Thread John Garbutt
On 8 January 2014 12:07, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 Patch sets that are cascaded have the following errors:

 Database migration testing failed either due to migrations unable to be
 applied correctly or taking too long.

 This change was unable to be automatically merged with the current state of
 the repository. Please rebase your change and upload a new patch set.

 What do you suggest?

If you rebase your code, would that not fix everything?

It seems that for the tests to pass, the change needs to be able to rebase
on trunk these days.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-09 Thread John Garbutt
On 8 January 2014 15:29, Jay Lau jay.lau@gmail.com wrote:
 2014/1/8 John Garbutt j...@johngarbutt.com

 On 8 January 2014 10:02, David Xie david.script...@gmail.com wrote:
   In nova/compute/api.py#2289, function resize, there's a parameter named
   flavor_id; if it is None, it is considered a cold migration. Thus, nova
   should skip resize verification. However, it doesn't.
 
  Like Jay said, we should skip this step during cold migration, does it
  make
  sense?

 Not sure.

  On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com wrote:
 
  Greetings,
 
  I have a question related to cold migration.
 
  Now in OpenStack nova, we support live migration, cold migration and
  resize.
 
  For live migration, we do not need to confirm after live migration
  finished.
 
  For resize, we need to confirm, as we want to give end user an
  opportunity
  to rollback.
 
   The problem is cold migration: because cold migration and resize share
   the same code path, once I submit a cold migration request and the cold
   migration finishes, the VM goes to the verify_resize state, and I need to
   confirm the resize. I feel a bit confused by this: why do I need to verify
   a resize for a cold migration operation? Why not reset the VM to its
   original state directly after cold migration?

  I think the idea was to allow users/admins to check everything went OK,
  and only delete the original VM when they have confirmed the move went
 OK.

 I thought there was an auto_confirm setting. Maybe you want
 auto_confirm cold migrate, but not auto_confirm resize?

  [Jay] John, yes, that can also achieve my goal. Now we only have
  resize_confirm_window to handle auto confirm, without considering whether
  it is a resize or a cold migration.
 # Automatically confirm resizes after N seconds. Set to 0 to
 # disable. (integer value)
 #resize_confirm_window=0

  Perhaps we can add another parameter, say cold_migrate_confirm_window, to
  handle confirmation for cold migration.

I like Russell's suggestion, but maybe implement it as always doing
auto_confirm for cold migrate in v3 API, and leaving it as is for
resize.

See if people like that, I should check with our ops guys.

   Also, I think that probably we need to split compute.api.resize() into two
   APIs: one for resize and the other for cold migration.
 
  1) The VM state can be either ACTIVE and STOPPED for a resize operation
  2) The VM state must be STOPPED for a cold migrate operation.

  We just stop the VM, then perform the migration.
  I don't think we need to require that it's stopped first.
 Am I missing something?

  [Jay] Yes, but just curious why someone would want to cold migrate an ACTIVE
  VM? They can use live migration instead, and this also makes sure the VM
  migrates seamlessly.

If a disk is failing, people like to turn off the VMs to reduce load
on that host while performing the migrations.

And live-migrate (sadly) does not yet work in all configurations,
so it's useful where live-migrate is not possible.

Also live-migrate with block_migration can use quite a lot more
network bandwidth than cold migration, at least in the XenServer case.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Sean Dague
I think we are all agreed that the current state of Gate Resets isn't 
good. Unfortunately some basic functionality is really not working 
reliably, like being able to boot a guest to a point where you can ssh 
into it.


These are common bugs, but they aren't easy ones. We've had a few folks 
digging deep on these, but we, as a community, are not keeping up with them.


So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 20th. 
On that day I'd ask all core reviewers (and anyone else) on all projects 
to set aside that day to *only* work on gate blocking bugs. We'd like to 
quiet the queues to not include any other changes that day so that only 
fixes related to gate blocking bugs would be in the system.


This will have multiple goals:
 #1 - fix some of the top issues
 #2 - ensure we classify (ER fingerprint) and register everything we're 
seeing in the gate fails

 #3 - ensure all gate bugs are triaged appropriately

I'm hopeful that if we can get everyone looking at this on a single 
day, we can start to dislodge the log jam that exists.


Specifically I'd like to get commitments from as many PTLs as possible 
that they'll both directly participate in the day, as well as encourage 
the rest of their project to do the same.


-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Russell Bryant
On 01/09/2014 07:46 AM, Sean Dague wrote:
 I think we are all agreed that the current state of Gate Resets isn't
 good. Unfortunately some basic functionality is really not working
 reliably, like being able to boot a guest to a point where you can ssh
 into it.
 
 These are common bugs, but they aren't easy ones. We've had a few folks
 digging deep on these, but we, as a community, are not keeping up with
 them.
 
 So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 20th.
 On that day I'd ask all core reviewers (and anyone else) on all projects
 to set aside that day to *only* work on gate blocking bugs. We'd like to
 quiet the queues to not include any other changes that day so that only
 fixes related to gate blocking bugs would be in the system.
 
 This will have multiple goals:
  #1 - fix some of the top issues
  #2 - ensure we classify (ER fingerprint) and register everything we're
 seeing in the gate fails
  #3 - ensure all gate bugs are triaged appropriately
 
  I'm hopeful that if we can get everyone looking at this on a single
 day, we can start to dislodge the log jam that exists.
 
 Specifically I'd like to get commitments from as many PTLs as possible
 that they'll both directly participate in the day, as well as encourage
 the rest of their project to do the same.

I'm in!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Thierry Carrez
Sean Dague wrote:
 [...]
 So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 20th.
 On that day I'd ask all core reviewers (and anyone else) on all projects
 to set aside that day to *only* work on gate blocking bugs. We'd like to
 quiet the queues to not include any other changes that day so that only
 fixes related to gate blocking bugs would be in the system.
 [...]

Great idea, and Mondays are ideal for this (smaller queues to get the
patch in).

However Monday Jan 20th is the day before the icehouse-2 branch cut, so
I fear a lot of people will be busy getting their patches in rather than
looking after gate-blocking bugs.

How about doing it Monday, Jan 13th ? Too early ? That way icehouse-2
can benefit from any successful outcome of this day.. and you might get
more people to participate.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] os-*-config in tripleo repositories

2014-01-09 Thread Derek Higgins
It looks like we have some duplication and inconsistencies on the 3
os-*-config elements in the tripleo repositories

os-apply-config (duplication) :
   We have two elements that install this
 diskimage-builder/elements/config-applier/
 tripleo-image-elements/elements/os-apply-config/

   As far as I can tell the version in diskimage-builder isn't used by
tripleo and the upstart file is broken:
./dmesg:[   13.336184] init: Failed to spawn config-applier main
process: unable to execute: No such file or directory

   To avoid confusion I propose we remove
diskimage-builder/elements/config-applier/ (or deprecate it if we have a
suitable process), but I would like to call it out here first to see if
anybody is using it or thinks it's a bad idea.

inconsistencies
  os-collect-config, os-refresh-config : these are both installed from
git into the global site-packages
  os-apply-config : installed from a released tarball into its own venv

   To be consistent with the other elements, I think all 3 should be
installed from git into their own venvs. Thoughts?

If there are no objections I'll go ahead and do this next week,

thanks,
Derek.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2014-01-09 Thread Thierry Carrez
Jay Pipes wrote:
 On Wed, 2014-01-08 at 14:26 +0100, Thierry Carrez wrote:
 Tim Bell wrote:
 +1 from me too UpgradeImpact is a much better term.

 So this one is already documented[1], but I don't know if it actually
 triggers anything yet.

 Should we configure it to post to openstack-operators, the same way as
 SecurityImpact posts to openstack-security ?
 
 Huge +1 from me here.

OK, this should do it:
https://review.openstack.org/#/c/65685

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2014-01-09 Thread Tim Bell

I'll ask on the operators list if there are any objections. If there are strong 
ones, we may need a new list created.

Tim

 -Original Message-
 From: Thierry Carrez [mailto:thie...@openstack.org]
 Sent: 09 January 2014 14:16
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] minimum review period for functional 
 changes that break backwards compatibility
 
 Jay Pipes wrote:
  On Wed, 2014-01-08 at 14:26 +0100, Thierry Carrez wrote:
  Tim Bell wrote:
  +1 from me too UpgradeImpact is a much better term.
 
  So this one is already documented[1], but I don't know if it actually
  triggers anything yet.
 
  Should we configure it to post to openstack-operators, the same way
  as SecurityImpact posts to openstack-security ?
 
  Huge +1 from me here.
 
 OK, this should do it:
 https://review.openstack.org/#/c/65685
 
 --
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Flavio Percoco

On 09/01/14 07:46 -0500, Sean Dague wrote:
I think we are all agreed that the current state of Gate Resets isn't 
good. Unfortunately some basic functionality is really not working 
reliably, like being able to boot a guest to a point where you can ssh 
into it.


These are common bugs, but they aren't easy ones. We've had a few 
folks digging deep on these, but we, as a community, are not keeping 
up with them.


So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 
20th. On that day I'd ask all core reviewers (and anyone else) on all 
projects to set aside that day to *only* work on gate blocking bugs. 
We'd like to quiet the queues to not include any other changes that 
day so that only fixes related to gate blocking bugs would be in the 
system.


This will have multiple goals:
#1 - fix some of the top issues
#2 - ensure we classify (ER fingerprint) and register everything 
we're seeing in the gate fails

#3 - ensure all gate bugs are triaged appropriately

I'm hopeful that if we can get everyone looking at this on a single 
day, we can start to dislodge the log jam that exists.


Specifically I'd like to get commitments from as many PTLs as possible 
that they'll both directly participate in the day, as well as 
encourage the rest of their project to do the same.




Count me in!

--
@flaper87
Flavio Percoco


pgpWDB5w_BxEu.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] new (docs) requirement for third party CI

2014-01-09 Thread Kurt Taylor

Joe Gordon joe.gord...@gmail.com wrote on 01/08/2014 12:40:47 PM:

 Re: [openstack-dev] [nova] new (docs) requirement for third party CI


 On Jan 8, 2014 7:12 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:
 

  If no one is against this or has something to add, I'll update the
wiki.

 -1 to putting this in the wiki. This isn't a nova only issue. We are
 trying to collect the requirements here:
 https://review.openstack.org/#/c/63478/

 
  [1] https://wiki.openstack.org/wiki/HypervisorSupportMatrix/
 DeprecationPlan#Specific_Requirements
 

Once this is more solid, is the eventual plan to put this out on the wiki?

There are several pockets of organization around 3rd party CI. It makes
tracking all of them across all the projects difficult. I would like to see
this organized into a global set of requirements, then maybe additional per
project specifics for nova, neutron, etc.

Kurt Taylor (krtaylor)
OpenStack Development - PowerKVM CI
IBM Linux Technology Center
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Julien Danjou
On Thu, Jan 09 2014, Sean Dague wrote:

 I'm hopeful that if we can get everyone looking at this on a single day,
 we can start to dislodge the log jam that exists.

I will help you bear this burden, Sean Dague, for as long as it is
yours to bear. You have my sword.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Sofware Config progress

2014-01-09 Thread Steven Dake

On 01/08/2014 06:57 PM, Prasad Vellanki wrote:

Clint  Steve
One scenario we are trying to see is whether and how Heat 
software-config enables  deployment of  images available from third 
party as virtual appliances,  providing network, security or 
acceleration capabilities. The vendor in some cases might not allow 
rebuilding and/or may not have the cloud-init capability. Sometimes 
changes to the image could run into issues with licensing. 
Bootstrapping in such situations is generally done via rest api or ssh 
once the appliance boots up where one can bootstrap it further.


We are looking at how to automate deployment of such service functions 
using new configuration and deployment  model in Heat which we really 
like.


One option is that software-config can provide an option in Heat to 
trigger bootstrapping that can be done from outside rather than 
inside,  as done by  cloud-init, and does bootstrapping of appliances 
using ssh and/or rest.


Another option is there could be an agent outside that recognizes this 
kind of service coming up and then inform Heat  to go to next state to 
configure the deployed resource. This is more like a proxy model.


thanks
prasadv



Prasad,

Just to clarify, you want Heat to facilitate bootstrapping a black box 
(third-party virtual appliance) that has no consistent bootstrapping 
interface (such as cloud-init)?  The solution you propose I think goes 
along the lines of having Heat notify an out-of-vm bootstrapping system 
(such as SSH) to connect to the black box and execute the bootstrapping.


If so, I see a problem with this approach:
Heat requires the running of commands inside the virtual machine to know 
when the virtual machine is done bootstrapping, for whatever definition 
of bootstrapping you use (OS booted, vs OS loaded and ready to provide 
service).


This could be handled by modifying the init scripts to signal the end of 
booting, but one constraint you mentioned was that images may not be 
modified.


Another approach that could be used today is to constantly connect to 
the SSH port of the VM until you receive a connection.  The problem with 
this approach is who loads the ssh keys into the image?  SSH key 
injection is currently handled by the bootstrapping process.  This is a 
chicken-egg problem and a fundamental reason why bootstrapping should be 
done internally to the virtual machine driven by Heat.  Assuming this 
model were used, a notification that the booting process has completed 
is only an optimization to indicate when SSH harassment should begin :)
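
For reference, a minimal sketch of that "keep trying the SSH port" approach
(not part of Heat; the function name, timings and error handling below are
illustrative assumptions):

    # Illustrative only: retry the instance's SSH port until it accepts a
    # TCP connection or the overall timeout expires.
    import socket
    import time

    def wait_for_ssh(host, port=22, timeout=300, interval=5):
        """Return True once the SSH port accepts connections, False on timeout."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                sock = socket.create_connection((host, port), timeout=interval)
                sock.close()
                return True
            except socket.error:
                time.sleep(interval)
        return False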


One possible workaround, mentioned in your use case is that the virtual 
appliance contacts a REST server (to obtain bootstrapping information, 
including possibly SSH keys).  Since I assume these virtual appliances 
come from different vendors, this would result in REST bootstrapping 
server proliferation which is bad for operators as each server has to be 
secure-ified, scale-ified, HA-ifed, and documented.


The path of least resistance in this case seems to be to influence the 
appliance vendors to adopt cloud-init rather than do unnatural acts 
inside infrastructure to support appliance vendors who are unwilling to 
conform to Open Source choices made by a broad community of technology 
experts (in this case, I mean not just the OpenStack community, but 
rather nearly every cloud vendor has made cloudinit central to their 
solutions).  Since the appliance vendors will add cloud-init to their 
image sooner or later due to operator / customer pressure, it is also 
the right choice today.


It is as simple as adding one package to the built image.  In exchange, 
from a bootstrapping perspective, their customers get a simple secure 
reliable scalable highly available experience on OpenStack and likely 
other IAAS platforms.


Regards
-steve


On Tue, Jan 7, 2014 at 11:40 AM, Clint Byrum cl...@fewbar.com 
mailto:cl...@fewbar.com wrote:


I'd say it isn't so much cloud-init that you need, but some kind
of bootstrapper. The point of hot-software-config is to help with
in-instance orchestration. That's not going to happen without some way
to push the desired configuration into the instance.

Excerpts from Susaant Kondapaneni's message of 2014-01-07 11:16:16
-0800:
 We work with images provided by vendors over which we do not
always have
 control. So we are considering the cases where vendor image does
not come
 installed with cloud-init. Is there a way to support heat
software config
 in such scenarios?

 Thanks
 Susaant

 On Mon, Jan 6, 2014 at 4:47 PM, Steve Baker sba...@redhat.com
mailto:sba...@redhat.com wrote:

   On 07/01/14 06:25, Susaant Kondapaneni wrote:
 
   Hi Steve,
 
   I am trying to understand the software config implementation.
Can you
  clarify the following:
 
   i. To use Software config and deploy in a template, instance
resource
  MUST 

Re: [openstack-dev] os-*-config in tripleo repositories

2014-01-09 Thread Tomas Sedovic

On 09/01/14 14:13, Derek Higgins wrote:

It looks like we have some duplication and inconsistencies on the 3
os-*-config elements in the tripleo repositories

os-apply-config (duplication) :
We have two elements that install this
  diskimage-builder/elements/config-applier/
  tripleo-image-elements/elements/os-apply-config/

As far as I can tell the version in diskimage-builder isn't used by
tripleo and the upstart file is broken
./dmesg:[   13.336184] init: Failed to spawn config-applier main
process: unable to execute: No such file or directory

To avoid confusion I propose we remove
diskimage-builder/elements/config-applier/ (or deprecate it if we have a
suitable process) but would like to call it out here first to see if
anybody is using it or thinks it's a bad idea?

inconsistencies
   os-collect-config, os-refresh-config : these are both installed from
git into the global site-packages
   os-apply-config : installed from a released tarball into its own venv

   To be consistent with the other elements, I think all 3 should be
installed from git into their own venvs. Thoughts?


I've no insight into why things are the way they are, but your 
suggestions make sense to me.




If there are no objections I'll go ahead and do this next week,

thanks,
Derek.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-09 Thread Swapnil Kulkarni
Hi Daniel,

I think it was some proxy issue preventing the download. I downloaded
the correct docker-registry.tar.gz and stack.sh completed, but some
issues remain related to image availability. The docker push is getting an *HTTP
code 500 while uploading metadata: invalid character '' looking for
beginning of value* error. I did a search for the error and found [1] on
ask.openstack, which states a firewall could be an issue, which I guess is not
the case in my situation. Any ideas?

[1] https://ask.openstack.org/en/question/8320/docker-push-error/

Best Regards,
Swapnil


On Thu, Jan 9, 2014 at 7:37 PM, Daniel Kuffner daniel.kuff...@gmail.comwrote:

  The tar file seems to be corrupt; the tools script downloads it to:

 ./files/docker-registry.tar.gz

 On Thu, Jan 9, 2014 at 11:17 AM, Swapnil Kulkarni
 swapnilkulkarni2...@gmail.com wrote:
  Hi Daniel,
 
  I removed the existing images and executed
 ./tools/docker/install_docker.sh.
   I am facing a new issue related to docker-registry,
 
  Error: exit status 2: tar: This does not look like a tar archive
 
  is the size 945 bytes correct for docker-registry image?
 
 
  Best Regards,
  Swapnil
 
  On Thu, Jan 9, 2014 at 1:35 PM, Daniel Kuffner daniel.kuff...@gmail.com
 
  wrote:
 
  Hi Swapnil,
 
  Looks like the docker-registry image is broken, since it cannot find
  run.sh inside the container.
 
   2014/01/09 06:36:15 Unable to locate ./docker-registry/run.sh
 
  Maybe you could try to remove and re-import image
 
 
  docker rmi docker-registry
 
  and then execute
 
  ./tools/docker/install_docker.sh
 
  again.
 
 
 
  On Thu, Jan 9, 2014 at 7:42 AM, Swapnil Kulkarni
  swapnilkulkarni2...@gmail.com wrote:
   Hi Eric,
  
   I tried running the 'docker run' command without -d and it gets
   following
   error
  
   $ sudo docker run -d=false -p 5042:5000 -e SETTINGS_FLAVOR=openstack
 -e
   OS_USERNAME=admin -e OS_PASSWORD=password -e OS_TENANT_NAME=admin -e
   OS_GLANCE_URL=http://127.0.0.1:9292 -e
   OS_AUTH_URL=http://127.0.0.1:35357/v2.0 docker-registry
   ./docker-registry/run.sh
   lxc-start: No such file or directory -
   stat(/proc/16438/root/dev//console)
   2014/01/09 06:36:15 Unable to locate ./docker-registry/run.sh
  
   On the other hand,
  
   If I run the failing command just after stack.sh fails with -d,  it
   works
   fine,
  
   sudo docker run -d -p 5042:5000 -e SETTINGS_FLAVOR=openstack -e
   OS_USERNAME=admin -e OS_PASSWORD=password -e OS_TENANT_NAME=admin -e
   OS_GLANCE_URL=http://127.0.0.1:9292 -e
   OS_AUTH_URL=http://127.0.0.1:35357/v2.0 docker-registry
   ./docker-registry/run.sh
   5b737f8d2282114c1a0cfc4f25bc7c9ef8c5da7e0d8fa7ed9ccee0be81cddafc
  
   Best Regards,
   Swapnil
  
  
   On Wed, Jan 8, 2014 at 8:29 PM, Eric Windisch e...@windisch.us
 wrote:
  
   On Tue, Jan 7, 2014 at 11:13 PM, Swapnil Kulkarni
   swapnilkulkarni2...@gmail.com wrote:
  
   Let me know in case I can be of any help getting this resolved.
  
  
   Please try running the failing 'docker run' command manually and
   without
   the '-d' argument. I've been able to reproduce  an error myself, but
   wish to
   confirm that this matches the error you're seeing.
  
   Regards,
   Eric Windisch
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
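
As a quick sanity check on the 945-byte docker-registry.tar.gz discussed above,
something along these lines will tell you whether the download is a usable
gzipped tar at all or just a saved proxy/error page (the path is the one the
install script uses; everything else is an illustrative sketch):

    # Illustrative check: a valid docker-registry export should be a gzipped
    # tar of non-trivial size; a few hundred bytes usually means an error page.
    import os
    import tarfile

    path = "./files/docker-registry.tar.gz"
    print("size: %d bytes" % os.path.getsize(path))
    try:
        with tarfile.open(path, "r:gz") as tar:
            print("looks like a valid archive (%d members)" % len(tar.getnames()))
    except (tarfile.ReadError, EOFError, IOError) as exc:
        print("not a valid tar.gz, re-download it: %s" % exc)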


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Chmouel Boudjnah
On Thu, Jan 9, 2014 at 1:46 PM, Sean Dague s...@dague.net wrote:

 Specifically I'd like to get commitments from as many PTLs as possible
 that they'll both directly participate in the day, as well as encourage the
 rest of their project to do the same


I'll be more than happy to participate (or at least on EU time).

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Icehouse mid-cycle meetup

2014-01-09 Thread Matt Riedemann



On 11/25/2013 6:30 PM, Mike Wilson wrote:

Hotel information has been posted. Look forward to seeing you all in
February :-).

-Mike


On Mon, Nov 25, 2013 at 8:14 AM, Russell Bryant rbry...@redhat.com
mailto:rbry...@redhat.com wrote:

Greetings,

Other groups have started doing mid-cycle meetups with success.  I've
received significant interest in having one for Nova.  I'm now excited
to announce some details.

We will be holding a mid-cycle meetup for the compute program from
February 10-12, 2014, in Orem, UT.  Huge thanks to Bluehost for
hosting us!

Details are being posted to the event wiki page [1].  If you plan to
attend, please register.  Hotel recommendations with booking links will
be posted soon.

Please let me know if you have any questions.

Thanks,

[1] https://wiki.openstack.org/wiki/Nova/IcehouseCycleMeetup
--
Russell Bryant




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I've started an etherpad [1] for gathering ideas for morning 
unconference topics.  Feel free to post anything you're interested in 
discussing.


[1] https://etherpad.openstack.org/p/nova-icehouse-mid-cycle-meetup-items

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-*-config in tripleo repositories

2014-01-09 Thread Clint Byrum
Excerpts from Derek Higgins's message of 2014-01-09 06:13:53 -0700:
 It looks like we have some duplication and inconsistencies on the 3
 os-*-config elements in the tripleo repositories
 
 os-apply-config (duplication) :
We have two elements that install this
  diskimage-builder/elements/config-applier/
  tripleo-image-elements/elements/os-apply-config/
 
As far as I can tell the version in diskimage-builder isn't used by
 tripleo and the upstart file is broken
 ./dmesg:[   13.336184] init: Failed to spawn config-applier main
 process: unable to execute: No such file or directory
 
To avoid confusion I propose we remove
 diskimage-builder/elements/config-applier/ (or deprecate it if we have a
 suitable process) but would like to call it out here first to see if
 anybody is using it or thinks it's a bad idea?
 
 inconsistencies
   os-collect-config, os-refresh-config : these are both installed from
 git into the global site-packages
   os-apply-config : installed from a released tarball into its own venv
 
   To be consistent with the other elements, I think all 3 should be
  installed from git into their own venvs. Thoughts?
 
 If there are no objections I'll go ahead and do this next week,
 

+1 to all of your solutions. Thanks for cleaning up the mess we made. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Allow multiple subnets on gateway port for router

2014-01-09 Thread Randy Tuttle
-- (rebroadcast to dev community from prior unicast discussion) --

Hi Nir

Sorry if the description is misleading. Didn't want a large title, and
hoped that the description would provide those additional details to
clarify the real goal of what's included and what's not included.

#1. Yes, it's only the gateway port. With that said, there are a series of
BP that are being worked to support the dual-stack use case (although not
necessarily dependent on each other) across Neutron, including internal
ports facing the tenant.
https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-mode-keyword
https://blueprints.launchpad.net/neutron/+spec/neutronclient-support-dnsmasq-mode-keyword
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-slaac
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-relay-agent
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-stateful
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-stateless

#2. Surely it's possible to have multiple v4 and v6 [global] addresses on
the interface, but for the gateway port, I don't have a specific use case.
To remain consistent with current feature capability (single v4 IP), I
continue to restrict it to a single IP from each flavor. With that said, there's
nothing technically preventing this. It can be done; however, the CLI and
Horizon would likely need significant changes. Right now, the code is
written such that it explicitly prevents it. As I mentioned before, I
actually had to add code in to disallow multiple addresses of the same
flavor and send back an error to the user. Of course, we can evolve it in
the future if a use-case warrants it.

Thanks
Randy



On Thu, Jan 9, 2014 at 4:16 AM, Nir Yechiel nyech...@redhat.com wrote:

 Hi Randy,

 I don't have a specific use case. I just wanted to understand the scope
 here as the name of this blueprint (allow multiple subnets on gateway port
 for router) could be a bit misleading.

 Two questions I have though:

 1. Is this talking specifically about the gateway port to the provider's
 next-hop router or relevant for all ports in virtual routers as well?
 2. There is a fundamental difference between v4 and v6 address assignment.
 With IPv4 I agree that one IP address per port is usually enough (there is
 the concept of secondary IP, but I am not sure it's really common). With
 IPv6 however you can sure have more then one (global) IPv6 on an interface.
 Shouldn't we support this?


 Thanks,
 Nir

 --
 *From: *Randy Tuttle randy.m.tut...@gmail.com
 *To: *OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 *Cc: *rantu...@cisco.com
 *Sent: *Tuesday, December 31, 2013 6:43:50 PM
 *Subject: *Re: [openstack-dev] [Neutron] Allow multiple subnets on
 gateway port for router


 Hi Nir

 Good question. There's absolutely no reason not to allow more than 2
 subnets, or even 2 of the same IP versions on the gateway port. In fact, in
 our POC we allowed this (or, more specifically, we did not disallow it).
 However, for the gateway port to the provider's next-hop router, we did not
 have a specific use case beyond an IPv4 and an IPv6. Moreover, in Neutron
 today, only a single subnet is allowed per interface (either v4 or v6). So
 all we are doing is opening up the gateway port to support what it does
 today (i.e., v4 or v6) plus allow IPv4 and IPv6 subnets to co-exist on the
 gateway port (and same network/vlan). Our principle use case is to enable
 IPv6 in an existing IPv4 environment.

 Do you have a specific use case requiring 2 or more of the same
 IP-versioned subnets on a gateway port?

 Thanks
 Randy


 On Tue, Dec 31, 2013 at 4:59 AM, Nir Yechiel nyech...@redhat.com wrote:

 Hi,

 With regards to
  https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port, can
  you please clarify this statement: We will disallow more that two
 subnets, and exclude allowing 2 IPv4 or 2 IPv6 subnets.
 The use case for dual-stack with one IPv4 and one IPv6 address associated
 to the same port is clear, but what is the reason to disallow more than two
 IPv4/IPv6 subnets to a port?

 Thanks and happy holidays!
 Nir



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev 

Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Ken'ichi Ohmichi
Hi Sean,

That is a good idea, I am also in.
BTW, what time will this work start?
I tried to join this kind of work, but I could not find anyone on IRC
in my timezone.


Thanks
Ken'ichi Ohmichi


2014/1/9 Sean Dague s...@dague.net:
 On 01/09/2014 08:01 AM, Thierry Carrez wrote:

 Sean Dague wrote:

 [...]
 So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 20th.
 On that day I'd ask all core reviewers (and anyone else) on all projects
 to set aside that day to *only* work on gate blocking bugs. We'd like to
 quiet the queues to not include any other changes that day so that only
 fixes related to gate blocking bugs would be in the system.
 [...]


 Great idea, and Mondays are ideal for this (smaller queues to get the
 patch in).

 However Monday Jan 20th is the day before the icehouse-2 branch cut, so
 I fear a lot of people will be busy getting their patches in rather than
 looking after gate-blocking bugs.

 How about doing it Monday, Jan 13th ? Too early ? That way icehouse-2
 can benefit from any successful outcome of this day.. and you might get
 more people to participate.


 So I guess I had the icehouse dates wrong in my head, and I thought this was
 the week after.

 My suggestion is that we back this up to Jan 27th.


 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Allow multiple subnets on gateway port for router

2014-01-09 Thread Veiga, Anthony

-- (rebroadcast to dev community from prior unicast discussion) --

Hi Nir

Sorry if the description is misleading. Didn't want a large title, and hoped 
that the description would provide those additional details to clarify the real 
goal of what's included and what's not included.

#1. Yes, it's only the gateway port. With that said, there are a series of BP 
that are being worked to support the dual-stack use case (although not 
necessarily dependent on each other) across Neutron, including internal ports 
facing the tenant.
https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-mode-keyword
https://blueprints.launchpad.net/neutron/+spec/neutronclient-support-dnsmasq-mode-keyword
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-slaac
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-relay-agent
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-stateful
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-stateless

I'd suggest popping into the ipv6-subteam's meetings [1] and having further 
discussions about this as well.  We've been working on address allocation for 
the most part, but routing and service integration will need to be the next 
step.



#2. Surely it's possible to have multiple v4 and v6 [global] addresses on the 
interface, but for the gateway port, I don't have a specific use case. To 
remain consistent with current feature capability (single v4 IP), I continue to 
restrict a single IP from each flavor. With that said, there's nothing 
technically preventing this. It can be done; however, the CLI and Horizon would 
likely need significant changes. Right now, the code is written such that it 
explicitly prevents it. As I mentioned before, I actually had to add code in to 
disallow multiple addresses of the same flavor and send back an error to the 
user. Of course, we can evolve it in the future if a use-case warrants it.

The use case is for networks that rely on IP allocations for security.  You may 
want a pair of separate routed blocks on the same network for, say, a public 
network for the web server to get through a policy to the Internet, but a 
separate address to get to an internal-only database cluster somewhere.  I'm 
not saying it's the greatest way to do things, but I am sure there are people 
running networks this way.  The alternative would be to spin up another port on 
another network and configure another gateway port as well.



Thanks
Randy



On Thu, Jan 9, 2014 at 4:16 AM, Nir Yechiel 
nyech...@redhat.com wrote:
Hi Randy,

I don't have a specific use case. I just wanted to understand the scope here as 
the name of this blueprint (allow multiple subnets on gateway port for 
router) could be a bit misleading.

Two questions I have though:

1. Is this talking specifically about the gateway port to the provider's 
next-hop router or relevant for all ports in virtual routers as well?
2. There is a fundamental difference between v4 and v6 address assignment. With 
IPv4 I agree that one IP address per port is usually enough (there is the 
concept of secondary IP, but I am not sure it's really common). With IPv6 
however you can surely have more than one (global) IPv6 on an interface. 
Shouldn't we support this?


Thanks,
Nir


From: Randy Tuttle randy.m.tut...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: rantu...@cisco.com
Sent: Tuesday, December 31, 2013 6:43:50 PM
Subject: Re: [openstack-dev] [Neutron] Allow multiple subnets on gateway port 
for router


Hi Nir

Good question. There's absolutely no reason not to allow more than 2 subnets, 
or even 2 of the same IP versions on the gateway port. In fact, in our POC we 
allowed this (or, more specifically, we did not disallow it). However, for the 
gateway port to the provider's next-hop router, we did not have a specific use 
case beyond an IPv4 and an IPv6. Moreover, in Neutron today, only a single 
subnet is allowed per interface (either v4 or v6). So all we are doing is 
opening up the gateway port to support what it does today (i.e., v4 or v6) plus 
allow IPv4 and IPv6 subnets to co-exist on the gateway port (and same 
network/vlan). Our principle use case is to enable IPv6 in an existing IPv4 
environment.

Do you have a specific use case requiring 2 or more of the same IP-versioned 
subnets on a gateway port?

Thanks
Randy


On Tue, Dec 31, 2013 at 4:59 AM, Nir Yechiel 
nyech...@redhat.com wrote:
Hi,

With regards to 
https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port,
 can you please clarify this statement: We 

Re: [openstack-dev] [nova][documentation][devstack] Confused about how to set up a Nova development environment

2014-01-09 Thread Brant Knudson
Mike -

When I was starting out, I ran devstack ( http://devstack.org/ ) on an
Ubuntu VM. You wind up with a system where you've got a basic running
OpenStack so you can try things out with the command-line utilities, and
also do development because it checks out all the repos. I learned a lot,
and it's how I still do development.

- Brant

On Thu, Jan 9, 2014 at 2:19 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

 I am trying to become a bit less of a newbie, and having a bit of
 difficulty with basics.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Mark McClain

On Jan 9, 2014, at 7:46 AM, Sean Dague s...@dague.net wrote:

 I think we are all agreed that the current state of Gate Resets isn't good. 
 Unfortunately some basic functionality is really not working reliably, like 
 being able to boot a guest to a point where you can ssh into it.
 
 These are common bugs, but they aren't easy ones. We've had a few folks 
 digging deep on these, but we, as a community, are not keeping up with them.
 
 So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 20th. On 
 that day I'd ask all core reviewers (and anyone else) on all projects to set 
 aside that day to *only* work on gate blocking bugs. We'd like to quiet the 
 queues to not include any other changes that day so that only fixes related 
 to gate blocking bugs would be in the system.
 
 This will have multiple goals:
 #1 - fix some of the top issues
 #2 - ensure we classify (ER fingerprint) and register everything we're seeing 
 in the gate fails
 #3 - ensure all gate bugs are triaged appropriately
 
 I'm hopeful that if we can get everyone looking at this on a single day, 
 we can start to dislodge the log jam that exists.
 
 Specifically I'd like to get commitments from as many PTLs as possible that 
 they'll both directly participate in the day, as well as encourage the rest 
 of their project to do the same.
 
   -Sean

I’m in.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] bug for Jenkins slave agent init failures

2014-01-09 Thread Sergey Lukjanov
Hi folks,

sometimes Jenkins slaves fail due to an agent initialization error.

There are several possible traces that you can see in such case:

* Caused by: java.lang.NoClassDefFoundError: Could not initialize class
jenkins.model.Jenkins$MasterComputer (full stack trace:
http://paste.openstack.org/show/60883/)
* Caused by: java.lang.NoClassDefFoundError: Could not initialize class
hudson.Util (full stack trace: http://paste.openstack.org/show/60885/)

Bug 1267364 [0] can be used to perform recheck/reverify for changes
with failed jobs. You can find more details and traces in the bug
description. This bug won't be closed each time a slave is
fixed/disabled; new traces and/or broken slaves will be added to it.

Thanks.

[0] https://bugs.launchpad.net/openstack-ci/+bug/1267364

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day - Mon Jan 27th

2014-01-09 Thread Sean Dague
Minor correction, we're going to do this on Jan 27th, to be after the i2 
push, as I don't think there is time to organize this prior.


It will also be a good way to start the i3 push by trying to get the 
gate back in shape so that we can actually land what people need to land 
for i3.


-Sean

On 01/09/2014 07:46 AM, Sean Dague wrote:

I think we are all agreed that the current state of Gate Resets isn't
good. Unfortunately some basic functionality is really not working
reliably, like being able to boot a guest to a point where you can ssh
into it.

These are common bugs, but they aren't easy ones. We've had a few folks
digging deep on these, but we, as a community, are not keeping up with
them.

So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 20th.
On that day I'd ask all core reviewers (and anyone else) on all projects
to set aside that day to *only* work on gate blocking bugs. We'd like to
quiet the queues to not include any other changes that day so that only
fixes related to gate blocking bugs would be in the system.

This will have multiple goals:
  #1 - fix some of the top issues
  #2 - ensure we classify (ER fingerprint) and register everything we're
seeing in the gate fails
  #3 - ensure all gate bugs are triaged appropriately

I'm hopeful that if we can get everyone looking at this on a single
day, we can start to dislodge the log jam that exists.

Specifically I'd like to get commitments from as many PTLs as possible
that they'll both directly participate in the day, as well as encourage
the rest of their project to do the same.

 -Sean




--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] top gate bugs: a plea for help

2014-01-09 Thread Salvatore Orlando
I think I have found another fault triggering bug 1253896 when neutron is
enabled.

I've added a comment to https://bugs.launchpad.net/bugs/1253896
On another note, I'm also seeing occurrences of this bug with nova-network.
Is there anybody from the nova side looking at it? (I can give it a try, but
I don't know a lot about nova-network.)

Salvatore


On 8 January 2014 23:53, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 As you know the gate has been in particularly bad shape (gate queue over
 100!) this week due to a number of factors. One factor is how many major
 outstanding bugs we have in the gate.  Below is a list of the top 4 open
 gate bugs.

 Here are some fun facts about this list:
 * All bugs have been open for over a month
 * All are nova bugs
 * These 4 bugs alone were hit 588 times which averages to 42 hits per day
 (data is over two weeks)!

 If we want the gate queue to drop and not have to continuously run
 'recheck bug x' we need to fix these bugs.  So I'm looking for volunteers
 to help debug and fix these bugs.


 best,
 Joe

 Bug: https://bugs.launchpad.net/bugs/1253896 = message:SSHTimeout:
 Connection to the AND message:via SSH timed out. AND
 filename:console.html
 Filed: 2013-11-21
 Title: Attempts to verify guests are running via SSH fails. SSH connection
 to guest does not work.
 Project: Status
   neutron: In Progress
   nova: Triaged
   tempest: Confirmed
 Hits
   FAILURE: 243
 Percentage of Gate Queue Job failures triggered by this bug
   gate-tempest-dsvm-postgres-full: 0.35%
   gate-grenade-dsvm: 0.68%
   gate-tempest-dsvm-neutron: 0.39%
   gate-tempest-dsvm-neutron-isolated: 4.76%
   gate-tempest-dsvm-full: 0.19%

 Bug: https://bugs.launchpad.net/bugs/1254890
 Fingerprint: message:Details: Timed out waiting for thing AND
 message:to become AND  (message:ACTIVE OR message:in-use OR
 message:available)
 Filed: 2013-11-25
 Title: Timed out waiting for thing causes tempest-dsvm-neutron-* failures
 Project: Status
   neutron: Invalid
   nova: Triaged
   tempest: Confirmed
 Hits
   FAILURE: 173
 Percentage of Gate Queue Job failures triggered by this bug
   gate-tempest-dsvm-neutron-isolated: 4.76%
   gate-tempest-dsvm-postgres-full: 0.35%
   gate-tempest-dsvm-large-ops: 0.68%
   gate-tempest-dsvm-neutron-large-ops: 0.70%
   gate-tempest-dsvm-full: 0.19%
   gate-tempest-dsvm-neutron-pg: 3.57%

 Bug: https://bugs.launchpad.net/bugs/1257626
 Fingerprint: message:nova.compute.manager Timeout: Timeout while waiting
 on RPC response - topic: \network\, RPC method:
 \allocate_for_instance\ AND filename:logs/screen-n-cpu.txt
 Filed: 2013-12-04
 Title: Timeout while waiting on RPC response - topic: network, RPC
 method: allocate_for_instance info: unknown
 Project: Status
   nova: Triaged
 Hits
   FAILURE: 118
 Percentage of Gate Queue Job failures triggered by this bug
   gate-tempest-dsvm-large-ops: 0.68%

 Bug: https://bugs.launchpad.net/bugs/1254872
 Fingerprint: message:libvirtError: Timed out during operation: cannot
 acquire state change lock AND filename:logs/screen-n-cpu.txt
 Filed: 2013-11-25
 Title: libvirtError: Timed out during operation: cannot acquire state
 change lock
 Project: Status
   nova: Triaged
 Hits
   FAILURE: 54
   SUCCESS: 3
 Percentage of Gate Queue Job failures triggered by this bug
   gate-tempest-dsvm-postgres-full: 0.35%
   gate-tempest-dsvm-full: 0.19%


 Generated with: elastic-recheck-success

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day - Mon Jan 27th

2014-01-09 Thread Christopher Yeoh
On Thu, Jan 9, 2014 at 11:30 PM, Sean Dague s...@dague.net wrote:

 Minor correction, we're going to do this on Jan 27th, to be after the i2
 push, as I don't think there is time organize this prior.


So FYI Jan 27th is a public holiday in Australia (Australia Day!), but
given the timezone difference I think those of us in Australia can still
participate on the 28th and it will still be the 27th in the US :-)



 It will also be a good way to start the i3 push by trying to get the gate
 back in shape so that we can actually land what people need to land for i3.

 -Sean

 On 01/09/2014 07:46 AM, Sean Dague wrote:

 I think we are all agreed that the current state of Gate Resets isn't
 good. Unfortunately some basic functionality is really not working
 reliably, like being able to boot a guest to a point where you can ssh
 into it.

 These are common bugs, but they aren't easy ones. We've had a few folks
 digging deep on these, but we, as a community, are not keeping up with
 them.

 So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 20th.
 On that day I'd ask all core reviewers (and anyone else) on all projects
 to set aside that day to *only* work on gate blocking bugs. We'd like to
 quiet the queues to not include any other changes that day so that only
 fixes related to gate blocking bugs would be in the system.

 This will have multiple goals:
   #1 - fix some of the top issues
   #2 - ensure we classify (ER fingerprint) and register everything we're
 seeing in the gate fails
   #3 - ensure all gate bugs are triaged appropriately

  I'm hopeful that if we can get everyone looking at this on a single
 day, we can start to dislodge the log jam that exists.

 Specifically I'd like to get commitments from as many PTLs as possible
 that they'll both directly participate in the day, as well as encourage
 the rest of their project to do the same.

  -Sean



 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savanna-ci usage

2014-01-09 Thread Jeremy Stanley
On 2014-01-09 14:16:09 +0400 (+0400), Sergey Lukjanov wrote:
 we've finally updated our CI to use Zuul
[...]

Awesome! I can't wait to see your improvements.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-09 Thread Robert Li (baoli)
Hi Folks,

With John joining the IRC, so far, we had a couple of productive meetings in an 
effort to come to consensus and move forward. Thanks John for doing that, and I 
appreciate everyone's effort to make it to the daily meeting. Let's reconvene 
on Monday.

But before that, and based on our conversation today on IRC, I'd like to say 
a few things. I think that first of all, we need to get agreement on the 
terminologies that we are using so far. With the current nova PCI passthrough

PCI whitelist: defines all the available PCI passthrough devices on a 
compute node. pci_passthrough_whitelist=[{ 
vendor_id:,product_id:}]
PCI Alias: criteria defined on the controller node with which requested 
PCI passthrough devices can be selected from all the PCI passthrough devices 
available in a cloud.
Currently it has the following format: 
pci_alias={vendor_id:, product_id:, name:str}

nova flavor extra_specs: request for PCI passthrough devices can be 
specified with extra_specs in the format for 
example: pci_passthrough:alias=name:count

As you can see, currently a PCI alias has a name and is defined on the 
controller. The implications for it is that when matching it against the PCI 
devices, it has to match the vendor_id and product_id against all the available 
PCI devices until one is found. The name is only used for reference in the 
extra_specs. On the other hand, the whitelist is basically the same as the 
alias without a name.

What we have discussed so far is based on something called PCI groups (or PCI 
flavors as Yongli puts it). Without introducing other complexities, and with a 
little change of the above representation, we will have something like:

pci_passthrough_whitelist=[{ vendor_id:,product_id:, 
name:str}]

By doing so, we eliminated the PCI alias. And we call the name in above as a 
PCI group name. You can think of it as combining the definitions of the 
existing whitelist and PCI alias. And believe it or not, a PCI group is 
actually a PCI alias. However, with that change of thinking, a lot of benefits 
can be harvested:

 * the implementation is significantly simplified
 * provisioning is simplified by eliminating the PCI alias
 * a compute node only needs to report stats with something like: PCI 
group name:count. A compute node processes all the PCI passthrough devices 
against the whitelist, and assign a PCI group based on the whitelist definition.
 * on the controller, we may only need to define the PCI group names. 
If we use a nova api to define PCI groups (could be private or public, for 
example), one potential benefit, among other things (validation, etc), is that they 
can be owned by the tenant that creates them. And thus wholesaling of PCI 
passthrough devices is also possible.
 * scheduler only works with PCI group names.
 * request for PCI passthrough device is based on PCI-group
 * deployers can provision the cloud based on the PCI groups
 * Particularly for SRIOV, deployers can design SRIOV PCI groups based 
on network connectivities.

Further, to support SRIOV, we are saying that PCI group names can not only be 
used in the extra specs, but also in the --nic option and the neutron 
commands. This allows the most flexibility and functionality afforded by 
SRIOV.

Further, we are saying that we can define default PCI groups based on the PCI 
device's class.

For vnic-type (or nic-type), we are saying that it defines the link 
characteristics of the nic that is attached to a VM: a nic that's connected to 
a virtual switch, a nic that is connected to a physical switch, or a nic that 
is connected to a physical switch, but has a host macvtap device in between. 
The actual names of the choices are not important here, and can be debated.

I'm hoping that we can go over the above on Monday. But any comments are 
welcome by email.

Thanks,
Robert

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
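
To make the proposal above concrete, here is a small illustrative sketch of how
a compute node could assign group names from the combined whitelist and report
per-group counts; the vendor/product IDs, group names and device list are made
up for the example and are not from the thread:

    # Sketch of the combined whitelist / PCI group idea: each whitelist entry
    # carries a group name, discovered devices are matched against it, and the
    # node reports {group name: count}.
    from collections import Counter

    pci_passthrough_whitelist = [
        {"vendor_id": "8086", "product_id": "10fb", "name": "sriov-physnet1"},
        {"vendor_id": "8086", "product_id": "1520", "name": "sriov-physnet2"},
    ]

    def assign_group(device):
        """Return the PCI group name for a discovered device, or None."""
        for entry in pci_passthrough_whitelist:
            if (device["vendor_id"] == entry["vendor_id"] and
                    device["product_id"] == entry["product_id"]):
                return entry["name"]
        return None

    discovered = [
        {"address": "0000:05:00.1", "vendor_id": "8086", "product_id": "10fb"},
        {"address": "0000:05:00.2", "vendor_id": "8086", "product_id": "10fb"},
    ]
    stats = Counter(g for g in map(assign_group, discovered) if g)
    print(dict(stats))  # e.g. {'sriov-physnet1': 2}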


Re: [openstack-dev] [Ceilometer] RFC: blueprint monitoring-network

2014-01-09 Thread Stein, Manuel (Manuel)
Yuuichi,

since you introduce switches that are currently not reflected in the Neutron 
entities, am I correct that a switch.port is always unknown to Neutron? Can a 
switch.port ever be a VM port?

I'd be happy if you could help me understand this better.

Best, Manuel

From: Yuuichi Fujioka [fujioka-yuui...@zx.mxh.nes.nec.co.jp]
Sent: Wednesday, December 11, 2013 8:25 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ceilometer] RFC: blueprint monitoring-network

Hi, Ceilometer team.

We have posted 2 blueprints.[1][2]

This feature collects the network information (device, link status, statistics) 
via the NorthBound API from an SDN Controller.
The purpose of this feature is testing network routes, resource optimization 
based on network proximity, etc.

In particular, the feature collects statistics and information about ports, 
flows and tables in switches.

We feel ceilometer shouldn't talk to switches directly.
If we made a pollster that talks to switches directly via the SouthBound API, 
pollsters would have to be created for each switch vendor.
That would carry a large maintenance cost.
In general, the NorthBound API abstracts differences between hardware. Thus this 
feature collects via the NorthBound API.

We define some meters in this feature on the blueprints.
The meters are based on the OpenFlow Switch Specification.
But we have no intention of limiting this to OpenFlow switches.
We think the OpenFlow Switch Specification covers the general network information.
If you know other necessary meters, please let me know.

Details are written in wiki.[3]

We hope for feedback from you.

[1]https://blueprints.launchpad.net/ceilometer/+spec/monitoring-network
[2]https://blueprints.launchpad.net/ceilometer/+spec/monitoring-network-from-opendaylight
[3]https://wiki.openstack.org/wiki/Ceilometer/blueprints/monitoring-network

Thanks.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Jay Dobies
I'm trying to hash out where data will live for Tuskar (both long term 
and for its Icehouse deliverables). Based on the expectations for 
Icehouse (a combination of the wireframes and what's in Tuskar client's 
api.py), we have the following concepts:



= Nodes =
A node is a baremetal machine on which the overcloud resources will be 
deployed. The ownership of this information lies with Ironic. The Tuskar 
UI will accept the needed information to create them and pass it to 
Ironic. Ironic is consulted directly when information on a specific node 
or the list of available nodes is needed.



= Resource Categories =
A specific type of thing that will be deployed into the overcloud. 
These are static definitions that describe the entities the user will 
want to add to the overcloud and are owned by Tuskar. For Icehouse, the 
categories themselves are added during installation for the four types 
listed in the wireframes.


Since this is a new model (as compared to other things that live in 
Ironic or Heat), I'll go into some more detail. Each Resource Category 
has the following information:


== Metadata ==
My intention here is that we do things in such a way that if we change 
one of the original 4 categories, or more importantly add more or allow 
users to add more, the information about the category is centralized and 
not reliant on the UI to provide the user information on what it is.


ID - Unique ID for the Resource Category.
Display Name - User-friendly name to display.
Description - Equally self-explanatory.

== Count ==
In the Tuskar UI, the user selects how many of each category is desired. 
This is stored in Tuskar's domain model for the category and is used when 
generating the template to pass to Heat to make it happen.


These counts are what is displayed to the user in the Tuskar UI for each 
category. The staging concept has been removed for Icehouse. In other 
words, the wireframes that cover the waiting to be deployed aren't 
relevant for now.


== Image ==
For Icehouse, each category will have one image associated with it. Last 
I remember, there was discussion on whether or not we need to support 
multiple images for a category, but for Icehouse we'll limit it to 1 and 
deal with it later.


Metadata for each Resource Category is owned by the Tuskar API. The 
images themselves are managed by Glance, with each Resource Category 
keeping track of just the UUID for its image.



= Stack =
There is a single stack in Tuskar, the overcloud. The Heat template 
for the stack is generated by the Tuskar API based on the Resource 
Category data (image, count, etc.). The template is handed to Heat to 
execute.


Heat owns information about running instances and is queried directly 
when the Tuskar UI needs to access that information.


--

Next steps for me are to start to work on the Tuskar APIs around 
Resource Category CRUD and their conversion into a Heat template. 
There's some discussion to be had there as well, but I don't want to put 
too much into one e-mail.



Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
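
As a rough illustration of the per-category data described above (field names
mirror the e-mail; the class and values are a sketch, not the actual Tuskar API
model):

    # Sketch only: the data Tuskar would own for each Resource Category, with
    # Ironic, Glance and Heat remaining the owners of nodes, images and the
    # running overcloud stack respectively.
    class ResourceCategory(object):
        def __init__(self, category_id, display_name, description,
                     count=0, image_uuid=None):
            self.id = category_id              # unique ID for the category
            self.display_name = display_name   # user-friendly name to display
            self.description = description
            self.count = count                 # how many the user asked for
            self.image_uuid = image_uuid       # Glance UUID; one image for Icehouse

    controller = ResourceCategory("controller", "Controller",
                                  "Overcloud control plane services",
                                  count=1, image_uuid="<glance-image-uuid>")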


[openstack-dev] [heat] python-heatclient installation failing on Windows

2014-01-09 Thread Vijendar Komalla
Hi Heat developers,
On the Windows platform (Windows 2012 and Windows 2008R2), I am seeing the error given 
below when installing python-heatclient. Did anyone see this problem earlier?

c:\users\adminpip -v install python-heatclient
Downloading/unpacking python-heatclient
  Using version 0.2.6 (newest of versions: 0.2.6, 0.2.5, 0.2.4, 0.2.3, 0.2.2, 
0.2.1, 0.1.0)
  Downloading python-heatclient-0.2.6.tar.gz (68kB):
  Downloading from URL 
https://pypi.python.org/packages/source/p/python-heatclient/python-heatclient-0.2.6.tar.gz#md5=20ee4f01f41820fa879f92
5aa5cf127e (from https://pypi.python.org/simple/python-heatclient/)
...Downloading python-heatclient-0.2.6.tar.gz (68kB): 65kB downloaded
  Hash of the package 
https://pypi.python.org/packages/source/p/python-heatclient/python-heatclient-0.2.6.tar.gz#md5=20ee4f01f41820fa879f925
aa5cf127e (from https://pypi.python.org/simple/python-heatclient/) 
(8c6b0ae8c7fb58c08053897432c8f15d) doesn't match the expected hash 20ee4f
01f41820fa879f925aa5cf127e!
Cleaning up...
  Removing temporary dir 
c:\users\admini~1\appdata\local\temp\2\pip_build_Administrator...
Bad md5 hash for package 
https://pypi.python.org/packages/source/p/python-heatclient/python-heatclient-0.2.6.tar.gz#md5=20ee4f01f41820fa879f
925aa5cf127e (from https://pypi.python.org/simple/python-heatclient/)
Exception information:
Traceback (most recent call last):
  File C:\Python27\lib\site-packages\pip\basecommand.py, line 134, in main
status = self.run(options, args)
  File C:\Python27\lib\site-packages\pip\commands\install.py, line 236, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, 
bundle=self.bundle)
  File C:\Python27\lib\site-packages\pip\req.py, line 1092, in prepare_files
self.unpack_url(url, location, self.is_download)
  File C:\Python27\lib\site-packages\pip\req.py, line 1238, in unpack_url
retval = unpack_http_url(link, location, self.download_cache, 
self.download_dir)
  File C:\Python27\lib\site-packages\pip\download.py, line 624, in 
unpack_http_url
_check_hash(download_hash, link)
  File C:\Python27\lib\site-packages\pip\download.py, line 448, in _check_hash
raise HashMismatch('Bad %s hash for package %s' % (link.hash_name, link))
HashMismatch: Bad md5 hash for package 
https://pypi.python.org/packages/source/p/python-heatclient/python-heatclient-0.2.6.tar.gz#md5=20ee4f
01f41820fa879f925aa5cf127e (from 
https://pypi.python.org/simple/python-heatclient/)


Before installing python-heatclient, I installed Python 2.7 from 
http://www.activestate.com/activepython/downloads
Please let me know if you have any workaround for the above error.

Thanks,
Vijendar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
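
One way to narrow this down is to hash the downloaded tarball yourself; the
expected value below is copied from the PyPI URL in the traceback, while the
local path is an assumption (use wherever pip left the file). If the local hash
also differs, something between the host and PyPI (proxy, AV scanner, truncated
download) is rewriting the file rather than pip misbehaving:

    # Sketch: compare the md5 of the locally downloaded python-heatclient
    # tarball against the hash PyPI advertises (value taken from the error above).
    import hashlib

    expected = "20ee4f01f41820fa879f925aa5cf127e"
    path = "python-heatclient-0.2.6.tar.gz"  # assumed local download location

    with open(path, "rb") as f:
        actual = hashlib.md5(f.read()).hexdigest()

    print("ok" if actual == expected else "mismatch: got %s" % actual)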


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-09 Thread Ian Wells
I think I'm in agreement with all of this.  Nice summary, Robert.

It may not be where the work ends, but if we could get this done the rest
is just refinement.


On 9 January 2014 17:49, Robert Li (baoli) ba...@cisco.com wrote:

Hi Folks,

  With John joining the IRC, so far, we had a couple of productive
 meetings in an effort to come to consensus and move forward. Thanks John
 for doing that, and I appreciate everyone's effort to make it to the daily
 meeting. Let's reconvene on Monday.

  But before that, and based on our today's conversation on IRC, I'd like
 to say a few things. I think that first of all, we need to get agreement on
 the terminologies that we are using so far. With the current nova PCI
 passthrough

  PCI whitelist: defines all the available PCI passthrough devices
 on a compute node. pci_passthrough_whitelist=[{
 vendor_id:,product_id:}]
 PCI Alias: criteria defined on the controller node with which
 requested PCI passthrough devices can be selected from all the PCI
 passthrough devices available in a cloud.
 Currently it has the following format: 
 pci_alias={vendor_id:,
 product_id:, name:str}

 nova flavor extra_specs: request for PCI passthrough devices can
 be specified with extra_specs in the format for example:
 pci_passthrough:alias=name:count

  As you can see, currently a PCI alias has a name and is defined on the
 controller. The implications for it is that when matching it against the
 PCI devices, it has to match the vendor_id and product_id against all the
 available PCI devices until one is found. The name is only used for
 reference in the extra_specs. On the other hand, the whitelist is basically
 the same as the alias without a name.

  What we have discussed so far is based on something called PCI groups
 (or PCI flavors as Yongli puts it). Without introducing other complexities,
 and with a little change of the above representation, we will have
 something like:

 pci_passthrough_whitelist=[{ vendor_id:,product_id:,
 name:str}]

  By doing so, we eliminated the PCI alias. And we call the name in
 above as a PCI group name. You can think of it as combining the definitions
 of the existing whitelist and PCI alias. And believe it or not, a PCI group
 is actually a PCI alias. However, with that change of thinking, a lot of
 benefits can be harvested:

   * the implementation is significantly simplified
  * provisioning is simplified by eliminating the PCI alias
  * a compute node only needs to report stats with something like:
 PCI group name:count. A compute node processes all the PCI passthrough
 devices against the whitelist, and assign a PCI group based on the
 whitelist definition.
  * on the controller, we may only need to define the PCI group
 names. if we use a nova api to define PCI groups (could be private or
 public, for example), one potential benefit, among other things
 (validation, etc),  they can be owned by the tenant that creates them. And
 thus a wholesale of PCI passthrough devices is also possible.
  * scheduler only works with PCI group names.
  * request for PCI passthrough device is based on PCI-group
  * deployers can provision the cloud based on the PCI groups
  * Particularly for SRIOV, deployers can design SRIOV PCI groups
 based on network connectivities.

  Further, to support SRIOV, we are saying that PCI group names not only
 can be used in the extra specs, it can also be used in the —nic option and
 the neutron commands. This allows the most flexibilities and
 functionalities afforded by SRIOV.

  Further, we are saying that we can define default PCI groups based on
 the PCI device's class.

  For vnic-type (or nic-type), we are saying that it defines the link
 characteristics of the nic that is attached to a VM: a nic that's connected
 to a virtual switch, a nic that is connected to a physical switch, or a nic
 that is connected to a physical switch, but has a host macvtap device in
 between. The actual names of the choices are not important here, and can be
 debated.

  I'm hoping that we can go over the above on Monday. But any comments are
 welcome by email.

  Thanks,
 Robert


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Flavio

Thank you for your input.
I agree with you. oslo.config isn't the right place to have server-side code.

How about oslo.configserver?
For authentication, we can reuse Keystone auth and oslo.rpc.

Best
Nachi


2014/1/9 Flavio Percoco fla...@redhat.com:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:

 Hi folks

 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.

 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc

 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.

 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration

 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

 I'm very appreciate any comments on this.



 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 That's all I've got for now,
 FF

 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] new (docs) requirement for third party CI

2014-01-09 Thread Joe Gordon
On Thu, Jan 9, 2014 at 5:52 AM, Kurt Taylor krtay...@us.ibm.com wrote:

 Joe Gordon joe.gord...@gmail.com wrote on 01/08/2014 12:40:47 PM:

  Re: [openstack-dev] [nova] new (docs) requirement for third party CI

 
 
  On Jan 8, 2014 7:12 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
 wrote:
  

   If no one is against this or has something to add, I'll update the
 wiki.

  -1 to putting this in the wiki. This isn't a nova only issue. We are
  trying to collect the requirements here:
  https://review.openstack.org/#/c/63478/

  
   [1] https://wiki.openstack.org/wiki/HypervisorSupportMatrix/
  DeprecationPlan#Specific_Requirements
  

 Once this is more solid, is the eventual plan to put this out on the wiki?

 There are several pockets of organization around 3rd party CI. It makes
 tracking all of them across all the projects difficult. I would like to
 see
 this organized into a global set of requirements, then maybe additional
 per
 project specifics for nova, neutron, etc.


The very aim of that patchset is to have one global set of requirements
that would live at ci.openstack.org/third_party.html



 Kurt Taylor (krtaylor)
 OpenStack Development - PowerKVM CI
 IBM Linux Technology Center


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Intermittent failure of tempest test test_network_basic_ops

2014-01-09 Thread Jay Pipes
On Thu, 2014-01-09 at 09:09 +0100, Salvatore Orlando wrote:
 I am afraid I need to correct you Jay!

I always welcome corrections to things I've gotten wrong, so no worries
at all!

 This actually appears to be bug 1253896 [1]

Ah, the infamous SSH bug :) Yeah, so last night I spent a few hours
digging through log files and running a variety of e-r queries trying to
find some patterns for the bugs that Joe G had sent an ML post about.

I went round in circles, unfortunately :( When I thought I'd found a
pattern, invariably I would doubt my initial findings and wander into
new areas in a wild goose chase.

At various times, I thought something was up with the DHCP agent, as
there were lots of No DHCP Agent found errors in the q-dhcp screen
logs. But I could not correlate any relationship with the failures in
the 4 bugs.

Then I started thinking that there was a timing/race condition where a
security group was being added to the Nova-side servers cache before it
had actually been constructed fully on the Neutron-side. But I was not
able to fully track down the many, many debug messages that are involved
in the full sequence of VM launch :( At around 4am, I gave up and went
to bed...

 Technically, what we call 'bug' here is actually a failure
 manifestation.
 So far, we have removed several bugs causing this failure. The last
 patch was pushed to devstack around Christmas.
 Nevertheless, if you look at recent comments and Joe's email, we still
 have a non-negligible failure rate on the gate.

Understood. I suspect actually that some of the various performance
improvements from Phil Day and others around optimizing certain server
and secgroup list calls have made the underlying race conditions show up
more often -- since the list calls are completing much faster, which
ironically gives Neutron less time to complete setup operations!

So, a performance patch on the Nova side ends up putting more pressure
on the Neutron side, which causes the rate of occurrence for these
sticky bugs (with potentially many root causes) to spike.

Such is life I guess :)

 It is also worth mentioning that if you are running your tests with
 parallelism enabled (ie: you're running tempest with tox -esmoke
 rather than tox -esmokeserial) you will end up with a higher
 occurrence of this failure due to more bugs causing it. These bugs are
 due to some weakness in the OVS agent that we are addressing with
 patches for blueprint neutron-tempest-parallel [2].

Interesting. If you wouldn't mind, what makes you think this is a
weakness in the OVS agent? I would certainly appreciate your expertise
in this area, since it would help me in my own bug-searching endeavors.

All the best,
-jay

 Regards,
 Salvatore
 
 
 
 
 [1] https://bugs.launchpad.net/neutron/+bug/1253896
 [2] https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
 
 
 On 9 January 2014 05:38, Jay Pipes jaypi...@gmail.com wrote:
 On Wed, 2014-01-08 at 18:46 -0800, Sukhdev Kapur wrote:
  Dear fellow developers,
 
  I am running few Neutron tempest tests and noticing an
 intermittent
  failure of tempest.scenario.test_network_basic_ops.
 
  I ran this test 50+ times and am getting intermittent
 failure. The
  pass rate is apps. 70%. The 30% of the time it fails mostly
 in
  _check_public_network_connectivity.
 
  Has anybody seen this?
  If there is a fix or work around for this, please share your
 wisdom.
 
 
 Unfortunately, I believe you are running into this bug:
 
 https://bugs.launchpad.net/nova/+bug/1254890
 
 The bug is Triaged in Nova (meaning, there is a suggested fix
 in the bug
 report). It's currently affecting the gate negatively and is
 certainly
 on the radar of the various PTLs affected.
 
 Best,
 -jay
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Doug Hellmann
What capabilities would this new service give us that existing, proven,
configuration management tools like chef and puppet don't have?


On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Flavio

 Thank you for your input.
 I agree with you. oslo.config isn't right place to have server side code.

 How about oslo.configserver ?
 For authentication, we can reuse keystone auth and oslo.rpc.

 Best
 Nachi


 2014/1/9 Flavio Percoco fla...@redhat.com:
  On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 
  Hi folks
 
  OpenStack process tend to have many config options, and many hosts.
  It is a pain to manage this tons of config options.
  To centralize this management helps operation.
 
  We can use chef or puppet kind of tools, however
  sometimes each process depends on the other processes configuration.
  For example, nova depends on neutron configuration etc
 
  My idea is to have config server in oslo.config, and let cfg.CONF get
  config from the server.
  This way has several benefits.
 
  - We can get centralized management without modification on each
  projects ( nova, neutron, etc)
  - We can provide horizon for configuration
 
  This is bp for this proposal.
  https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
  I'm very appreciate any comments on this.
 
 
 
  I've thought about this as well. I like the overall idea of having a
  config server. However, I don't like the idea of having it within
  oslo.config. I'd prefer oslo.config to remain a library.
 
  Also, I think it would be more complex than just having a server that
  provides the configs. It'll need authentication like all other
  services in OpenStack and perhaps even support of encryption.
 
  I like the idea of a config registry but as mentioned above, IMHO it's
  to live under its own project.
 
  That's all I've got for now,
  FF
 
  --
  @flaper87
  Flavio Percoco
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][documentation][devstack] Confused about how to set up a Nova development environment

2014-01-09 Thread Mike Spreitzer
Brant Knudson b...@acm.org wrote on 01/09/2014 10:07:27 AM:

 When I was starting out, I ran devstack ( http://devstack.org/ ) on 
 an Ubuntu VM. You wind up with a system where you've got a basic 
 running OpenStack so you can try things out with the command-line 
 utilities, and also do development because it checks out all the 
 repos. I learned a lot, and it's how I still do development.

What sort(s) of testing do you do in that environment, and how?  Does your 
code editing interfere with the running DevStack?  Can you run the unit 
tests without interference from/to the running DevStack?  How do you do 
bigger tests?  What is the process for switching from running the merged 
code to running your modified code?  Are the answers documented someplace 
I have not found?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Oleg Gelbukh
Nachi,

Thanks for bringing this up. We've been thinking a lot about handling of
configurations while working on Rubick.

In my understanding, oslo.config could provide an interface to different
back-ends to store configuration parameters. It could be a simple centralized
alternative to configuration files, like a k-v store or SQL database. It could
also be something more complicated, like a service of its own
(Configuration-as-a-Service), with cross-service validation capabilities, etc.

By the way, configuration as a service was mentioned in the Solum session at
the last summit, which implies that such a service could have more than one
application.

The first step towards this could be abstracting the back-end in oslo.config and
implementing some simplistic driver, SQL or k-v storage. This could help to
outline the requirements for a future configuration service.

--
Best regards,
Oleg Gelbukh


On Thu, Jan 9, 2014 at 1:23 PM, Flavio Percoco fla...@redhat.com wrote:

 On 08/01/14 17:13 -0800, Nachi Ueno wrote:

 Hi folks

 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.

 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc

 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.

 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration

 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

 I'm very appreciate any comments on this.



 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 That's all I've got for now,
 FF

 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Jay Pipes
On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I'm very appreciate any comments on this.
 
 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.
 
 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.
 
 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

Hi Nati and Flavio!

So, I'm -1 on this idea, just because I think it belongs in the realm of
configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
tools are built to manage multiple configuration files and changes in
them. Adding a config server would dramatically change the way that
configuration management tools would interface with OpenStack services.
Instead of managing the config file templates as all of the tools
currently do, the tools would essentially need to forego the
tried-and-true INI files and instead write a bunch of code in order to
deal with REST API set/get operations for changing configuration data.

In summary, while I agree that OpenStack services have an absolute TON
of configurability -- for good and bad -- there are ways to improve the
usability of configuration without changing the paradigm that most
configuration management tools expect. One such example is having
include.d/ support -- similar to the existing oslo.cfg module's support
for a --config-dir, but more flexible and more like what other open
source programs (like Apache) have done for years.
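
As a point of reference, here is a minimal sketch of what oslo.config's
existing --config-dir support already gives you today (the paths and the
option name are just illustrative, and this reflects my understanding of the
current behaviour):

from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opt(cfg.StrOpt('firewall_driver'))

# *.conf files found in the --config-dir directory are parsed after the main
# config file, so their values override it; this is the hook an include.d/
# style layout would build on.
CONF(['--config-file', '/etc/nova/nova.conf',
      '--config-dir', '/etc/nova/nova.conf.d'])

print(CONF.firewall_driver)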

All the best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Joshua Harlow
And my ring, my precious.

Count me in!

On 1/9/14, 6:06 AM, Julien Danjou jul...@danjou.info wrote:

On Thu, Jan 09 2014, Sean Dague wrote:

 I'm hopefully that if we can get everyone looking at this one a single
day,
 we can start to dislodge the log jam that exists.

I will help you bear this burden, Sean Dague, for as long as it is
yours to bear. You have my sword.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Morgan Fainberg
I agree with Doug’s question, but would also extend the train of thought to ask 
why not help make Chef or Puppet better and cover more OpenStack 
use-cases rather than add yet another competing system?

Cheers,
Morgan
On January 9, 2014 at 10:24:06, Doug Hellmann (doug.hellm...@dreamhost.com) 
wrote:

What capabilities would this new service give us that existing, proven, 
configuration management tools like chef and puppet don't have?


On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
Hi Flavio

Thank you for your input.
I agree with you. oslo.config isn't right place to have server side code.

How about oslo.configserver ?
For authentication, we can reuse keystone auth and oslo.rpc.

Best
Nachi


2014/1/9 Flavio Percoco fla...@redhat.com:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:

 Hi folks

 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.

 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc

 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.

 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration

 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

 I'm very appreciate any comments on this.



 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 That's all I've got for now,
 FF

 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Intermittent failure of tempest test test_network_basic_ops

2014-01-09 Thread Salvatore Orlando
Hi Jay,

replies inline.
I have probably have found one more cause for this issue in the logs, and I
have added a comment to the bug report.

Salvatore


On 9 January 2014 19:10, Jay Pipes jaypi...@gmail.com wrote:

 On Thu, 2014-01-09 at 09:09 +0100, Salvatore Orlando wrote:
  I am afraid I need to correct you Jay!

 I always welcome corrections to things I've gotten wrong, so no worries
 at all!

  This actually appears to be bug 1253896 [1]

 Ah, the infamous SSH bug :) Yeah, so last night I spent a few hours
 digging through log files and running a variety of e-r queries trying to
 find some patterns for the bugs that Joe G had sent an ML post about.

 I went round in circles, unfortunately :( When I thought I'd found a
 pattern, invariably I would doubt my initial findings and wander into
 new areas in a wild goose chase.


that's pretty much what I do all the time.


 At various times, I thought something was up with the DHCP agent, as
 there were lots of No DHCP Agent found errors in the q-dhcp screen
 logs. But I could not correlate any relationship with the failures in
 the 4 bugs.


I've seen those warnings as well. They are pretty common, and I think they
are actually benign: as the DHCP for the network is configured
asynchronously, it is probably normal to see that message.


 Then I started thinking that there was a timing/race condition where a
 security group was being added to the Nova-side servers cache before it
 had actually been constructed fully on the Neutron-side. But I was not
 able to fully track down the many, many debug messages that are involved
 in the full sequence of VM launch :( At around 4am, I gave up and went
 to bed...


I have not investigated how this could impact connectivity. However, one
thing that is not ok in my opinion is that we have no way to know whether
a security group is enforced or not; I think it needs an 'operational
status'.
Note: we're working on a patch for the nicira plugin to add this concept;
it's currently being developed as a plugin-specific extension, but if there
is interest to support the concept also in the ml2 plugin I think we can
just make it part of the 'core' security group API.


  Technically, what we call 'bug' here is actually a failure
  manifestation.
  So far, we have removed several bugs causing this failure. The last
  patch was pushed to devstack around Christmas.
  Nevertheless, if you look at recent comments and Joe's email, we still
  have a non-negligible failure rate on the gate.

 Understood. I suspect actually that some of the various performance
 improvements from Phil Day and others around optimizing certain server
 and secgroup list calls have made the underlying race conditions show up
 more often -- since the list calls are completing much faster, which
 ironically gives Neutron less time to complete setup operations!


That might be one explanation. The other might be the fact that we added
another scenario test for neutron which creates more vms with floating ips
and stuff, thus increasing the chances of hitting the timeout failure.


 So, a performance patch on the Nova side ends up putting more pressure
 on the Neutron side, which causes the rate of occurrence for these
 sticky bugs (with potentially many root causes) to spike.

 Such is life I guess :)

  It is also worth mentioning that if you are running your tests with
  parallelism enabled (ie: you're running tempest with tox -esmoke
  rather than tox -esmokeserial) you will end up with a higher
  occurrence of this failure due to more bugs causing it. These bugs are
  due to some weakness in the OVS agent that we are addressing with
  patches for blueprint neutron-tempest-parallel [2].

 Interesting. If you wouldn't mind, what makes you think this is a
 weakness in the OVS agent? I would certainly appreciate your expertise
 in this area, since it would help me in my own bug-searching endeavors.


Basically those are all the patches addressing the linked blueprint; I have
added more info in the commit messages for the patches.
Also some of those patches target this bug as well:
https://bugs.launchpad.net/neutron/+bug/1253993


 All the best,
 -jay

  Regards,
  Salvatore
 
 
 
 
  [1] https://bugs.launchpad.net/neutron/+bug/1253896
  [2]
 https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
 
 
  On 9 January 2014 05:38, Jay Pipes jaypi...@gmail.com wrote:
  On Wed, 2014-01-08 at 18:46 -0800, Sukhdev Kapur wrote:
   Dear fellow developers,
 
   I am running few Neutron tempest tests and noticing an
  intermittent
   failure of tempest.scenario.test_network_basic_ops.
 
   I ran this test 50+ times and am getting intermittent
  failure. The
   pass rate is apps. 70%. The 30% of the time it fails mostly
  in
   _check_public_network_connectivity.
 
   Has anybody seen this?
   If there is a fix or work around for this, please share your
 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Jeremy Hanmer
+1 to Jay.  Existing tools are both better suited to the job and work
quite well in their current state.  To address Nachi's first example,
there's nothing preventing a Nova node in Chef from reading Neutron's
configuration (either by using a (partial) search or storing the
necessary information in the environment rather than in roles).  I
assume Puppet offers the same.  Please don't re-invent this hugely
complicated wheel.

On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I'm very appreciate any comments on this.

 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 Hi Nati and Flavio!

 So, I'm -1 on this idea, just because I think it belongs in the realm of
 configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
 tools are built to manage multiple configuration files and changes in
 them. Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 Instead of managing the config file templates as all of the tools
 currently do, the tools would need to essentially need to forego the
 tried-and-true INI files and instead write a bunch of code in order to
 deal with REST API set/get operations for changing configuration data.

 In summary, while I agree that OpenStack services have an absolute TON
 of configurability -- for good and bad -- there are ways to improve the
 usability of configuration without changing the paradigm that most
 configuration management tools expect. One such example is having
 include.d/ support -- similar to the existing oslo.cfg module's support
 for a --config-dir, but more flexible and more like what other open
 source programs (like Apache) have done for years.

 All the best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-09 Thread Dan Prince


- Original Message -
 From: Michael Still mi...@stillhq.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, January 7, 2014 5:53:01 PM
 Subject: Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster
 
 Hi. Thanks for reaching out about this.
 
 It seems this patch has now passed turbo hipster, so I am going to
 treat this as a more theoretical question than perhaps you intended. I
 should note though that Joshua Hesketh and I have been trying to read
 / triage every turbo hipster failure, but that has been hard this week
 because we're both at a conference.
 
 The problem this patch faced is that we are having trouble defining
 what is a reasonable amount of time for a database migration to run
 for. Specifically:
 
 2014-01-07 14:59:32,012 [output] 205 - 206...
 2014-01-07 14:59:32,848 [heartbeat]
 2014-01-07 15:00:02,848 [heartbeat]
 2014-01-07 15:00:32,849 [heartbeat]
 2014-01-07 15:00:39,197 [output] done
 
 So applying migration 206 took slightly over a minute (67 seconds).
 Our historical data (mean + 2 standard deviations) says that this
 migration should take no more than 63 seconds. So this only just
 failed the test.

FWIW migration 206 is a dead man walking: 
https://review.openstack.org/#/c/64893/

Regarding Turbo hipster in general though...

 
 However, we know there are issues with our methodology -- we've tried
 normalizing for disk IO bandwidth and it hasn't worked out as well as
 we'd hoped. This week's plan is to try to use mysql performance schema
 instead, but we have to learn more about how it works first.

What if you adjusted your algorithm so that any given run has to fail two or 
three times? Perhaps even take it a step further and make sure it fails in the 
same spots. When I've done performance testing in the past I had a baseline 
set of results which I then compared to a set of results for any given 
branch/tag. How large your set is depends on how many resources you've got... 
but the more the better. It sounds like you've got a good set of baseline data 
(which presumably you regenerate periodically). But having a larger set of data 
for each merge proposal could help here...
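
To make that concrete, here is a rough sketch of the kind of check being
discussed -- the mean + 2 standard deviations limit comes from Michael's
description above, while the "must fail on every repeat" rule and the sample
numbers are purely illustrative:

import math

def allowed_seconds(history):
    # Historical timings for one migration -> mean + 2 standard deviations.
    mean = sum(history) / float(len(history))
    var = sum((s - mean) ** 2 for s in history) / float(len(history))
    return mean + 2 * math.sqrt(var)

def migration_too_slow(history, observed_runs):
    # Only report a failure if the migration is over the limit on every run
    # (two or three repeats), rather than on a single marginal result.
    limit = allowed_seconds(history)
    return all(run > limit for run in observed_runs)

# e.g. a ~63s limit, one 67s run and one 59s re-run -> not flagged
print(migration_too_slow([55, 58, 61, 60], [67, 59]))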

I really appreciate this sort of DB migration performance testing BTW. I've not 
had any trouble with it on my branches thus far... but if I do the information 
you provided in this thread is encouraging.

Also, what happens if you go on vacation? Is there a turbo hipster kill switch 
somewhere? For example, will it quit giving out -1's if you aren't around and a 
trunk baseline run fails or something?

 
 I apologise for this mis-vote.
 
 Michael
 
 On Wed, Jan 8, 2014 at 1:44 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 
 
  On 12/30/2013 6:21 AM, Michael Still wrote:
 
  Hi.
 
  The purpose of this email to is apologise for some incorrect -1 review
  scores which turbo hipster sent out today. I think its important when
  a third party testing tool is new to not have flakey results as people
  learn to trust the tool, so I want to explain what happened here.
 
  Turbo hipster is a system which takes nova code reviews, and runs
  database upgrades against them to ensure that we can still upgrade for
  users in the wild. It uses real user datasets, and also times
  migrations and warns when they are too slow for large deployments. It
  started voting on gerrit in the last week.
 
  Turbo hipster uses zuul to learn about reviews in gerrit that it
  should test. We run our own zuul instance, which talks to the
  openstack.org zuul instance. This then hands out work to our pool of
  testing workers. Another thing zuul does is it handles maintaining a
  git repository for the workers to clone from.
 
  This is where things went wrong today. For reasons I can't currently
  explain, the git repo on our zuul instance ended up in a bad state (it
  had a patch merged to master which wasn't in fact merged upstream
  yet). As this code is stock zuul from openstack-infra, I have a
  concern this might be a bug that other zuul users will see as well.
 
  I've corrected the problem for now, and kicked off a recheck of any
  patch with a -1 review score from turbo hipster in the last 24 hours.
  I'll talk to the zuul maintainers tomorrow about the git problem and
  see what we can learn.
 
  Thanks heaps for your patience.
 
  Michael
 
 
  How do I interpret the warning and -1 from turbo-hipster on my patch here
  [1] with the logs here [2]?
 
  I'm inclined to just do 'recheck migrations' on this since this patch
  doesn't have anything to do with this -1 as far as I can tell.
 
  [1] https://review.openstack.org/#/c/64725/4/
  [2]
  https://ssl.rcbops.com/turbo_hipster/logviewer/?q=/turbo_hipster/results/64/64725/4/check/gate-real-db-upgrade_nova_mysql_user_001/5186e53/user_001.log
 
  --
 
  Thanks,
 
  Matt Riedemann
 
 
  ___
  OpenStack-dev mailing list
  

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi folks

Thank you for your input.

The key difference from external configuration systems (Chef, Puppet,
etc.) is integration with OpenStack services.
There are cases where a process needs to know the config values of other hosts.
If we had a centralized config storage API, we could solve this issue.

One example of such a case is Neutron + Nova VIF parameter configuration
with regard to security groups.
The workflow is something like this.

Nova asks the Neutron server for VIF configuration information.
The Neutron server asks for the configuration of the Neutron L2 agent on
the same host as nova-compute.

host1
  neutron server
  nova-api

host2
  neturon l2-agent
  nova-compute

In this case, a process needs to know a config value of another process on another host.

Replying some questions

 Adding a config server would dramatically change the way that
configuration management tools would interface with OpenStack services. [Jay]

Since this bp is just adding a new mode, we can still use the existing config files.

 why not help to make Chef or Puppet better and cover the more OpenStack 
 use-cases rather than add yet another competing system [Doug, Morgan]

I believe this system is not a competing system.
The key point is that we should have some standard API to access such services.
As Oleg suggested, we can use an SQL server, a k-v store, or Chef or Puppet
as a backend system.

Best
Nachi


2014/1/9 Morgan Fainberg m...@metacloud.com:
 I agree with Doug’s question, but also would extend the train of thought to
 ask why not help to make Chef or Puppet better and cover the more OpenStack
 use-cases rather than add yet another competing system?

 Cheers,
 Morgan

 On January 9, 2014 at 10:24:06, Doug Hellmann (doug.hellm...@dreamhost.com)
 wrote:

 What capabilities would this new service give us that existing, proven,
 configuration management tools like chef and puppet don't have?


 On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Flavio

 Thank you for your input.
 I agree with you. oslo.config isn't right place to have server side code.

 How about oslo.configserver ?
 For authentication, we can reuse keystone auth and oslo.rpc.

 Best
 Nachi


 2014/1/9 Flavio Percoco fla...@redhat.com:
  On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 
  Hi folks
 
  OpenStack process tend to have many config options, and many hosts.
  It is a pain to manage this tons of config options.
  To centralize this management helps operation.
 
  We can use chef or puppet kind of tools, however
  sometimes each process depends on the other processes configuration.
  For example, nova depends on neutron configuration etc
 
  My idea is to have config server in oslo.config, and let cfg.CONF get
  config from the server.
  This way has several benefits.
 
  - We can get centralized management without modification on each
  projects ( nova, neutron, etc)
  - We can provide horizon for configuration
 
  This is bp for this proposal.
  https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
  I'm very appreciate any comments on this.
 
 
 
  I've thought about this as well. I like the overall idea of having a
  config server. However, I don't like the idea of having it within
  oslo.config. I'd prefer oslo.config to remain a library.
 
  Also, I think it would be more complex than just having a server that
  provides the configs. It'll need authentication like all other
  services in OpenStack and perhaps even support of encryption.
 
  I like the idea of a config registry but as mentioned above, IMHO it's
  to live under its own project.
 
  That's all I've got for now,
  FF
 
  --
  @flaper87
  Flavio Percoco
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Jay Pipes
On Thu, 2014-01-09 at 07:46 -0500, Sean Dague wrote:
 I think we are all agreed that the current state of Gate Resets isn't 
 good. Unfortunately some basic functionality is really not working 
 reliably, like being able to boot a guest to a point where you can ssh 
 into it.
 
 These are common bugs, but they aren't easy ones. We've had a few folks 
 digging deep on these, but we, as a community, are not keeping up with them.
 
 So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 20th. 
 On that day I'd ask all core reviewers (and anyone else) on all projects 
 to set aside that day to *only* work on gate blocking bugs. We'd like to 
 quiet the queues to not include any other changes that day so that only 
 fixes related to gate blocking bugs would be in the system.
 
 This will have multiple goals:
   #1 - fix some of the top issues
   #2 - ensure we classify (ER fingerprint) and register everything we're 
 seeing in the gate fails
   #3 - ensure all gate bugs are triaged appropriately
 
 I'm hopefully that if we can get everyone looking at this one a single 
 day, we can start to dislodge the log jam that exists.
 
 Specifically I'd like to get commitments from as many PTLs as possible 
 that they'll both directly participate in the day, as well as encourage 
 the rest of their project to do the same.

I'm in.

Due to what ttx mentioned about I-2, I think the 13th Jan or 27th Jan
Mondays would be better.

Personally, I think sooner is better. The severity of the disruption is
quite high, and action is needed ASAP.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-*-config in tripleo repositories

2014-01-09 Thread Dan Prince


- Original Message -
 From: Derek Higgins der...@redhat.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Thursday, January 9, 2014 8:13:53 AM
 Subject: [openstack-dev] os-*-config in tripleo repositories
 
 It looks like we have some duplication and inconsistencies on the 3
 os-*-config elements in the tripleo repositories
 
 os-apply-config (duplication) :
We have two elements that install this
  diskimage-builder/elements/config-applier/
  tripleo-image-elements/elements/os-apply-config/
 
As far as I can tell the version in diskimage-builder isn't used by
 tripleo and the upstart file is broke
 ./dmesg:[   13.336184] init: Failed to spawn config-applier main
 process: unable to execute: No such file or directory
 
To avoid confusion I propose we remove
 diskimage-builder/elements/config-applier/ (or deprecated if we have a
 suitable process) but would like to call it out here first to see if
 anybody is using it or thinks its a bad idea?
 
 inconsistencies
   os-collect-config, os-refresh-config : these are both installed from
 git into the global site-packages
   os-apply-config : installed from a released tarball into its own venv
 
   To be consistent with the other elements all 3 I think should be
 installed from git into its own venv, thoughts?

These all sound good to me and I've got no issues with cleaning these up.

I'm not the biggest fan of having multiple venvs for each component though, 
especially now that we have a global requirements.txt file where we can target 
a common baseline. Multiple venvs cause lots of duplicated libraries and 
increased image build time. Is anyone planning on making consolidated venvs an 
option? Or perhaps even just using a consolidated venv as the default where 
possible.

 
 If no objections I'll go ahead an do this next week,
 
 thanks,
 Derek.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Jeremy

Don't you think it is a burden for operators if they have to choose the correct
combination of configs for multiple nodes, even with Chef and
Puppet?

If there is some constraint or dependency between configurations, such logic
should live in the OpenStack source code.
We can solve this issue if we have a standard way to know the config
value of another process on another host.

Something like this:
self.conf.host('host1').firewall_driver

Then we can have a Chef- or file-based config backend for this, for example.
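
To make that idea a bit more concrete, here is a very rough sketch -- the class
names, the host() accessor and the per-host INI file layout are all made up for
illustration, none of this is an existing oslo.config API:

import ConfigParser


class FileConfigBackend(object):
    # Hypothetical backend that reads per-host INI files from a shared directory.
    def __init__(self, config_root):
        self.config_root = config_root

    def get(self, host, group, option):
        parser = ConfigParser.SafeConfigParser()
        parser.read('%s/%s.conf' % (self.config_root, host))
        return parser.get(group, option)


class RemoteConf(object):
    # Hypothetical wrapper giving the self.conf.host('host1').option style access.
    def __init__(self, backend):
        self._backend = backend

    def host(self, hostname, group='DEFAULT'):
        backend = self._backend

        class _HostView(object):
            def __getattr__(self, option):
                return backend.get(hostname, group, option)

        return _HostView()


# Example: ask what firewall_driver the l2-agent on host2 is configured with.
conf = RemoteConf(FileConfigBackend('/srv/openstack-config'))
print(conf.host('host2', 'securitygroup').firewall_driver)

The same interface could equally be backed by Chef/Puppet data or a k-v store,
which is really the point: the backend is pluggable, the lookup API is standard.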

Best
Nachi


2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 +1 to Jay.  Existing tools are both better suited to the job and work
 quite well in their current state.  To address Nachi's first example,
 there's nothing preventing a Nova node in Chef from reading Neutron's
 configuration (either by using a (partial) search or storing the
 necessary information in the environment rather than in roles).  I
 assume Puppet offers the same.  Please don't re-invent this hugely
 complicated wheel.

 On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I'm very appreciate any comments on this.

 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 Hi Nati and Flavio!

 So, I'm -1 on this idea, just because I think it belongs in the realm of
 configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
 tools are built to manage multiple configuration files and changes in
 them. Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 Instead of managing the config file templates as all of the tools
 currently do, the tools would need to essentially need to forego the
 tried-and-true INI files and instead write a bunch of code in order to
 deal with REST API set/get operations for changing configuration data.

 In summary, while I agree that OpenStack services have an absolute TON
 of configurability -- for good and bad -- there are ways to improve the
 usability of configuration without changing the paradigm that most
 configuration management tools expect. One such example is having
 include.d/ support -- similar to the existing oslo.cfg module's support
 for a --config-dir, but more flexible and more like what other open
 source programs (like Apache) have done for years.

 All the best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Devstack on Fedora 20

2014-01-09 Thread Adam Young
So I finally tried running a devstack instance on Fedora 20:  rootwrap 
failed on the Cinder stage of the install, so I scaled back to a 
Keystone-only install.


[fedora@ayoung-f20 devstack]$ cat localrc
FORCE=yes
ENABLED_SERVICES=key,mysql,qpid



This failed to start the Keystone server with two module dependencies 
missing:  the first was dogpile.cache, and the second was lxml.  I installed 
both with Yum and devstack completed with Keystone up and running:  I 
was able to fetch a token.


Dogpile is in the requirements.txt file, but not in the list of RPMs to 
install for devstack.  I tried adding it to devstack/files/rpms; lxml 
was already in there.  Neither was installed.


 (requirements.txt states that Keystone needs dogpile.cache >= 0.5.0, 
which is what F20 has in Yum)



What am I missing here?


[fedora@ayoung-f20 devstack]$ git diff
diff --git a/files/rpms/keystone b/files/rpms/keystone
index 52dbf47..deed296 100644
--- a/files/rpms/keystone
+++ b/files/rpms/keystone
@@ -1,5 +1,6 @@
+python-dogpile-cache #dist:f20
 python-greenlet
-python-lxml #dist:f16,f17,f18,f19
+python-lxml #dist:f16,f17,f18,f19,f20
 python-paste#dist:f16,f17,f18,f19
 python-paste-deploy #dist:f16,f17,f18,f19
 python-paste-script #dist:f16,f17,f18,f19


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen
Thanks!  This is very informative.  From a high-level perspective, this
maps well with my understanding of how Tuskar will interact with various
OpenStack services.  A question or two inline:

- Original Message -
 I'm trying to hash out where data will live for Tuskar (both long term
 and for its Icehouse deliverables). Based on the expectations for
 Icehouse (a combination of the wireframes and what's in Tuskar client's
 api.py), we have the following concepts:
 
 
 = Nodes =
 A node is a baremetal machine on which the overcloud resources will be
 deployed. The ownership of this information lies with Ironic. The Tuskar
 UI will accept the needed information to create them and pass it to
 Ironic. Ironic is consulted directly when information on a specific node
 or the list of available nodes is needed.
 
 
 = Resource Categories =
 A specific type of thing that will be deployed into the overcloud.
 These are static definitions that describe the entities the user will
 want to add to the overcloud and are owned by Tuskar. For Icehouse, the
 categories themselves are added during installation for the four types
 listed in the wireframes.
 
 Since this is a new model (as compared to other things that live in
 Ironic or Heat), I'll go into some more detail. Each Resource Category
 has the following information:
 
 == Metadata ==
 My intention here is that we do things in such a way that if we change
 one of the original 4 categories, or more importantly add more or allow
 users to add more, the information about the category is centralized and
 not reliant on the UI to provide the user information on what it is.
 
 ID - Unique ID for the Resource Category.
 Display Name - User-friendly name to display.
 Description - Equally self-explanatory.
 
 == Count ==
 In the Tuskar UI, the user selects how many of each category is desired.
 This stored in Tuskar's domain model for the category and is used when
 generating the template to pass to Heat to make it happen.
 
 These counts are what is displayed to the user in the Tuskar UI for each
 category. The staging concept has been removed for Icehouse. In other
 words, the wireframes that cover the waiting to be deployed aren't
 relevant for now.
 
 == Image ==
 For Icehouse, each category will have one image associated with it. Last
 I remember, there was discussion on whether or not we need to support
 multiple images for a category, but for Icehouse we'll limit it to 1 and
 deal with it later.
 
 Metadata for each Resource Category is owned by the Tuskar API. The
 images themselves are managed by Glance, with each Resource Category
 keeping track of just the UUID for its image.
 
 
 = Stack =
 There is a single stack in Tuskar, the overcloud. The Heat template
 for the stack is generated by the Tuskar API based on the Resource
 Category data (image, count, etc.). The template is handed to Heat to
 execute.
 
 Heat owns information about running instances and is queried directly
 when the Tuskar UI needs to access that information.

The UI will also need to be able to look at the Heat resources running
within the overcloud stack and classify them according to a resource
category.  How do you envision that working?
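
One possible direction, purely as a sketch (the resource type, the metadata key
and the function below are my own assumptions, not the actual Tuskar design):
the generated template could tag each instance with its category, so the UI can
classify the running Heat resources by reading that tag back.

def overcloud_template(categories):
    # `categories` is assumed to be a list of objects carrying the id,
    # image_id and count attributes described in the model above.
    resources = {}
    for cat in categories:
        for i in range(cat.count):
            resources['%s-%d' % (cat.id, i)] = {
                'type': 'OS::Nova::Server',
                'properties': {
                    'image': cat.image_id,
                    # The UI would group stack resources by this metadata value.
                    'metadata': {'tuskar_resource_category': cat.id},
                },
            }
    return {
        'heat_template_version': '2013-05-23',
        'description': 'Tuskar-generated overcloud',
        'resources': resources,
    }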

 --
 
 Next steps for me are to start to work on the Tuskar APIs around
 Resource Category CRUD and their conversion into a Heat template.
 There's some discussion to be had there as well, but I don't want to put
 too much into one e-mail.
 

I'm looking forward to seeing the API specification, as Resource Category
CRUD is currently a big unknown in the tuskar-ui api.py file.


Mainn


 
 Thoughts?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-09 Thread Brian Schott
Ian,

The idea of PCI flavors is a great one, and using vendor_id and product_id makes 
sense, but I could see a case for adding the class name, such as 'VGA compatible 
controller'. Otherwise, slightly different generations of hardware will mean 
custom whitelist setups on each compute node.  

01:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900 GTX] 
(rev a1)

On the flip side, vendor_id and product_id might not be sufficient.  Suppose I 
have two identical NICs, one for nova internal use and the second for guest 
tenants?  So, bus numbering may be required.  

01:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900 GTX] 
(rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900 GTX] 
(rev a1)

Some possible combinations:

# take 2 gpus
pci_passthrough_whitelist=[
{ vendor_id:NVIDIA Corporation G71,product_id:GeForce 7900 GTX, 
name:GPU},
]

# only take the GPU on PCI 2
pci_passthrough_whitelist=[
{ vendor_id:NVIDIA Corporation G71,product_id:GeForce 7900 GTX, 
'bus_id': '02:', name:GPU},
]
pci_passthrough_whitelist=[
{bus_id: 01:00.0, name: GPU},
{bus_id: 02:00.0, name: GPU},
]

pci_passthrough_whitelist=[
{class: VGA compatible controller, name: GPU},
]

pci_passthrough_whitelist=[
{ product_id:GeForce 7900 GTX, name:GPU},
]

I know you guys are thinking of PCI devices, but any thought of mapping to 
something like udev rather than PCI? Supporting udev rules might be easier and 
more robust than making something up.
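To make that concrete, matching on PCI class (or a specific bus address) is
cheap to do straight from sysfs. A rough sketch of how a compute node could
classify its devices against such a whitelist - assuming numeric vendor/product
IDs, and treating the "class" and "address" keys as hypothetical additions
rather than existing nova options:

import glob
import os

# Illustrative whitelist only -- "class" and "address" are the suggestions
# from this thread, not options nova supports today.
WHITELIST = [
    {"vendor_id": "10de", "class": "0300", "name": "GPU"},   # any NVIDIA display device
    {"address": "0000:02:00.0", "name": "GPU"},              # pin one device by bus id
]

def _read(dev_path, attr):
    with open(os.path.join(dev_path, attr)) as f:
        return f.read().strip()

def matching_devices(whitelist):
    """Yield (pci_address, group_name) for every device matching an entry."""
    for dev_path in glob.glob('/sys/bus/pci/devices/*'):
        address = os.path.basename(dev_path)          # e.g. 0000:01:00.0
        vendor = _read(dev_path, 'vendor')[2:]        # "0x10de" -> "10de"
        product = _read(dev_path, 'device')[2:]
        pci_class = _read(dev_path, 'class')[2:6]     # base class + subclass, e.g. "0300"
        for entry in whitelist:
            # entry.get(key, <current value>) makes a missing key match anything
            if entry.get('address', address) != address:
                continue
            if entry.get('vendor_id', vendor) != vendor:
                continue
            if entry.get('product_id', product) != product:
                continue
            if entry.get('class', pci_class) != pci_class:
                continue
            yield address, entry['name']
            break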

Brian

-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060



On Jan 9, 2014, at 12:47 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 I think I'm in agreement with all of this.  Nice summary, Robert.
 
 It may not be where the work ends, but if we could get this done the rest is 
 just refinement.
 
 
 On 9 January 2014 17:49, Robert Li (baoli) ba...@cisco.com wrote:
 Hi Folks,
 
 
 With John joining the IRC, so far, we had a couple of productive meetings in 
 an effort to come to consensus and move forward. Thanks John for doing that, 
 and I appreciate everyone's effort to make it to the daily meeting. Let's 
 reconvene on Monday. 
 
 But before that, and based on today's conversation on IRC, I'd like to 
 say a few things. I think that, first of all, we need to get agreement on the 
 terminology that we have been using so far. With the current nova PCI passthrough:
 
 PCI whitelist: defines all the available PCI passthrough devices on a 
 compute node. pci_passthrough_whitelist=[{
  vendor_id:,product_id:}] 
 PCI Alias: criteria defined on the controller node with which 
 requested PCI passthrough devices can be selected from all the PCI 
 passthrough devices available in a cloud. 
 Currently it has the following format: 
 pci_alias={vendor_id:,
  product_id:, name:str}
 
 nova flavor extra_specs: a request for PCI passthrough devices can be 
 specified with extra_specs in the format, for 
 example: pci_passthrough:alias=name:count
 
 As you can see, currently a PCI alias has a name and is defined on the 
 controller. The implication is that when matching it against the PCI 
 devices, it has to match the vendor_id and product_id against all the 
 available PCI devices until one is found. The name is only used for reference 
 in the extra_specs. On the other hand, the whitelist is basically the same as 
 the alias without a name.
 
 What we have discussed so far is based on something called PCI groups (or PCI 
 flavors as Yongli puts it). Without introducing other complexities, and with 
 a little change of the above representation, we will have something like:
 
 pci_passthrough_whitelist=[{ vendor_id:,product_id:,
  name:str}] 
 
 By doing so, we eliminate the PCI alias. And we call the name above a 
 PCI group name. You can think of it as combining the definitions of the 
 existing whitelist and PCI alias. And believe it or not, a PCI group is 
 actually a PCI alias. However, with that change of thinking, a lot of 
 benefits can be harvested:
 
  * the implementation is significantly simplified
  * provisioning is simplified by eliminating the PCI alias
  * a compute node only needs to report stats with something like: PCI 
 group name:count. A compute node processes all the PCI passthrough devices 
 against the whitelist, and assigns a PCI group based on the whitelist 
 definition.
  * on the controller, we may only need to define the PCI group names. 
 If we use a nova API to define PCI groups (which could be private or public, for 
 example), one potential benefit, among other things (validation, etc.), is that they 
 can be owned by the tenant that creates them. And thus wholesaling of PCI 
 passthrough devices is also possible.
  * scheduler only works with 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Doug Hellmann
On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from an external configuration system (Chef, Puppet,
 etc.) is integration with openstack services.
 There are cases where a process should know the config value on other hosts.
 If we had a centralized config storage API, we could solve this issue.

 One example of such a case is the neutron + nova VIF parameter configuration
 regarding security groups.
 The workflow is something like this.

 nova asks the neutron server for VIF configuration information.
 The neutron server asks for the configuration of the neutron l2-agent on the
 same host as nova-compute.


That extra round trip does sound like a potential performance bottleneck,
but sharing the configuration data directly is not the right solution. If
the configuration setting names are shared, they become part of the
integration API between the two services. Nova should ask neutron how to
connect the VIF, and it shouldn't care how neutron decides to answer that
question. The configuration setting is an implementation detail of neutron
that shouldn't be exposed directly to nova.

Running a configuration service also introduces what could be a single
point of failure for all of the other distributed services in OpenStack. An
out-of-band tool like chef or puppet doesn't result in the same sort of
situation, because the tool does not have to be online in order for the
cloud to be online.

Doug




 host1
   neutron server
   nova-api

 host2
   neutron l2-agent
   nova-compute

 In this case, a process should know the config value in the other hosts.

 Replying some questions

  Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 [Jay]

 Since this bp just adds a new mode, we can still use the existing config
 files.

  why not help to make Chef or Puppet better and cover the more OpenStack
 use-cases rather than add yet another competing system [Doug, Morgan]

 I believe this system is not a competing system.
 The key point is that we should have some standard API to access such services.
 As Oleg suggested, we can use an SQL server, a KV store, or chef or puppet
 as a backend system.

 Best
 Nachi


 2014/1/9 Morgan Fainberg m...@metacloud.com:
  I agree with Doug’s question, but also would extend the train of thought
 to
  ask why not help to make Chef or Puppet better and cover the more
 OpenStack
  use-cases rather than add yet another competing system?
 
  Cheers,
  Morgan
 
  On January 9, 2014 at 10:24:06, Doug Hellmann (
 doug.hellm...@dreamhost.com)
  wrote:
 
  What capabilities would this new service give us that existing, proven,
  configuration management tools like chef and puppet don't have?
 
 
  On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi Flavio
 
  Thank you for your input.
  I agree with you. oslo.config isn't right place to have server side
 code.
 
  How about oslo.configserver ?
  For authentication, we can reuse keystone auth and oslo.rpc.
 
  Best
  Nachi
 
 
  2014/1/9 Flavio Percoco fla...@redhat.com:
   On 08/01/14 17:13 -0800, Nachi Ueno wrote:
  
   Hi folks
  
   OpenStack process tend to have many config options, and many hosts.
   It is a pain to manage this tons of config options.
   To centralize this management helps operation.
  
   We can use chef or puppet kind of tools, however
   sometimes each process depends on the other processes configuration.
   For example, nova depends on neutron configuration etc
  
   My idea is to have config server in oslo.config, and let cfg.CONF get
   config from the server.
   This way has several benefits.
  
   - We can get centralized management without modification on each
   projects ( nova, neutron, etc)
   - We can provide horizon for configuration
  
   This is bp for this proposal.
   https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
  
   I'm very appreciate any comments on this.
  
  
  
   I've thought about this as well. I like the overall idea of having a
   config server. However, I don't like the idea of having it within
   oslo.config. I'd prefer oslo.config to remain a library.
  
   Also, I think it would be more complex than just having a server that
   provides the configs. It'll need authentication like all other
   services in OpenStack and perhaps even support of encryption.
  
   I like the idea of a config registry but as mentioned above, IMHO it's
   to live under its own project.
  
   That's all I've got for now,
   FF
  
   --
   @flaper87
   Flavio Percoco
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  

Re: [openstack-dev] Devstack on Fedora 20

2014-01-09 Thread Sean Dague
You need a working version of this patch to land -
https://review.openstack.org/#/c/63647/

Because of the rhel6 support in devstack, every new version of fc needs
manual support, because there are tons of packages needed in fc* that
don't exist in rhel.

-Sean

On 01/09/2014 02:15 PM, Adam Young wrote:
 So finally tried running a devstack instance on Fedora 20:  rootwrap
 failed on the cinder stage of the install.  So I scaled back to a
 Keystone only install.
 
 [fedora@ayoung-f20 devstack]$ cat localrc
 FORCE=yes
 ENABLED_SERVICES=key,mysql,qpid
 
 
 
 This failed starting the Keystone server with two module dependencies
 missing:  first was dogpile.cache, and second was lxml.  I installed
 both with Yum and devstack completed with Keystone up and running:  I
 was able to fetch a token.
 
 Dogpile is in the requirements.txt file, but not in the list of RPMS to
 install for devstack.  I tried adding it to devstack/files/rpms. lxml
 was already in there:  Neither was installed.
 
  (requirements.txt state that Keystone needs dogpile.cache = 0.5.0
 which is what F20 has in Yum)
 
 
 What am I missing here?
 
 
 [fedora@ayoung-f20 devstack]$ git diff
 diff --git a/files/rpms/keystone b/files/rpms/keystone
 index 52dbf47..deed296 100644
 --- a/files/rpms/keystone
 +++ b/files/rpms/keystone
 @@ -1,5 +1,6 @@
 +python-dogpile-cache #dist:f20
  python-greenlet
 -python-lxml #dist:f16,f17,f18,f19
 +python-lxml #dist:f16,f17,f18,f19,f20
  python-paste#dist:f16,f17,f18,f19
  python-paste-deploy #dist:f16,f17,f18,f19
  python-paste-script #dist:f16,f17,f18,f19
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Doug

2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:



 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration system (Chef, puppet
 etc) is integration with
 openstack services.
 There are cases a process should know the config value in the other hosts.
 If we could have centralized config storage api, we can solve this issue.

 One example of such case is neuron + nova vif parameter configuration
 regarding to security group.
 The workflow is something like this.

 nova asks vif configuration information for neutron server.
 Neutron server ask configuration in neutron l2-agent on the same host
 of nova-compute.


 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.

I agree for the nova - neutron interface.
However, the neutron server and neutron l2 agent configurations depend on
each other.

 Running a configuration service also introduces what could be a single point
 of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.

We can choose the same implementation (copy the information into a local cache, etc.).

Thank you for your input; it helped me organize my thoughts.
My proposal can be split into two bps.

[BP1] conf API for the other process
Provide a standard way to know the config value of another process on the
same host or on another host.

- API Example:
conf.host('host1').firewall_driver

- Conf-file-based implementation (rough sketch below):
the config for each host will be placed here:
 /etc/project/conf.d/{hostname}/agent.conf
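A minimal sketch of what the BP1 lookup could look like on top of that conf.d
layout - plain ConfigParser is standing in for whatever oslo.config ends up
providing, and the paths and option handling are simplified:

import os
import ConfigParser   # configparser on Python 3

CONF_D = '/etc/neutron/conf.d'   # assumed layout: conf.d/<hostname>/agent.conf

class _HostConf(object):
    def __init__(self, hostname):
        self._parser = ConfigParser.SafeConfigParser()
        self._parser.read(os.path.join(CONF_D, hostname, 'agent.conf'))

    def __getattr__(self, option):
        # Only [DEFAULT] options, no types or defaults -- a real version
        # would reuse oslo.config's option definitions.
        return self._parser.get('DEFAULT', option)

class Conf(object):
    def host(self, hostname):
        return _HostConf(hostname)

conf = Conf()
# assuming conf.d/host1/agent.conf defines firewall_driver in [DEFAULT]
print(conf.host('host1').firewall_driver)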

[BP2] Multiple backends for storing config files

Currently, we have only file-based configuration.
In this bp, we are extending the supported config storage back ends:
- KVS
- SQL
- Chef - Ohai

Best
Nachi

 Doug




 host1
   neutron server
   nova-api

 host2
   neturon l2-agent
   nova-compute

 In this case, a process should know the config value in the other hosts.

 Replying some questions

  Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 [Jay]

 Since this bp is just adding new mode, we can still use existing config
 files.

  why not help to make Chef or Puppet better and cover the more OpenStack
  use-cases rather than add yet another competing system [Doug, Morgan]

 I believe this system is not competing system.
 The key point is we should have some standard api to access such services.
 As Oleg suggested, we can use sql-server, kv-store, or chef or puppet
 as a backend system.

 Best
 Nachi


 2014/1/9 Morgan Fainberg m...@metacloud.com:
  I agree with Doug’s question, but also would extend the train of thought
  to
  ask why not help to make Chef or Puppet better and cover the more
  OpenStack
  use-cases rather than add yet another competing system?
 
  Cheers,
  Morgan
 
  On January 9, 2014 at 10:24:06, Doug Hellmann
  (doug.hellm...@dreamhost.com)
  wrote:
 
  What capabilities would this new service give us that existing, proven,
  configuration management tools like chef and puppet don't have?
 
 
  On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi Flavio
 
  Thank you for your input.
  I agree with you. oslo.config isn't right place to have server side
  code.
 
  How about oslo.configserver ?
  For authentication, we can reuse keystone auth and oslo.rpc.
 
  Best
  Nachi
 
 
  2014/1/9 Flavio Percoco fla...@redhat.com:
   On 08/01/14 17:13 -0800, Nachi Ueno wrote:
  
   Hi folks
  
   OpenStack process tend to have many config options, and many hosts.
   It is a pain to manage this tons of config options.
   To centralize this management helps operation.
  
   We can use chef or puppet kind of tools, however
   sometimes each process depends on the other processes configuration.
   For example, nova depends on neutron configuration etc
  
   My idea is to have config server in oslo.config, and let cfg.CONF
   get
   config from the server.
   This way has several benefits.
  
   - We can get centralized management without modification on each
   projects ( nova, neutron, etc)
   - We can provide horizon for configuration
  
   This is bp for this proposal.
   https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
  
   I'm very appreciate any comments on this.
  
  
  
   I've thought about this as well. I like the overall idea of having a
   config server. However, I don't like 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Chmouel Boudjnah
On Thu, Jan 9, 2014 at 7:53 PM, Nachi Ueno na...@ntti3.com wrote:

 One example of such case is neuron + nova vif parameter configuration
 regarding to security group.
 The workflow is something like this.

 nova asks vif configuration information for neutron server.
 Neutron server ask configuration in neutron l2-agent on the same host
 of nova-compute.


What about using something like the discoverability middleware that was
added in Swift [1] for that, and extending it so it can be integrated into oslo?
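For reference, that Swift capability discovery boils down to a GET /info on the
proxy (unauthenticated by default) - roughly:

import json
import urllib2   # urllib.request on Python 3

# proxy.example.com is a placeholder for your Swift proxy endpoint
caps = json.load(urllib2.urlopen('http://proxy.example.com:8080/info'))
print(sorted(caps.keys()))   # one key per discoverable middleware/feature
print(caps.get('swift'))     # core capabilities and limits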

Chmouel.

[1] http://techs.enovance.com/6509/swift-discoverable-capabilities
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Intermittent failure of tempest test test_network_basic_ops

2014-01-09 Thread Sukhdev Kapur
Thanks Salvatore and Jay for sharing your experiences on this issue.

I will look through the references you have provided to understand further
as well.
If I  latch onto something, I will share back.

BTW, before posting the question here, I did suspect some race conditions
and tried to play around with the timings of some of the events - nothing
really helped :-(


regards..
-Sukhdev



On Thu, Jan 9, 2014 at 10:38 AM, Salvatore Orlando sorla...@nicira.comwrote:

 Hi Jay,

 replies inline.
 I have probably have found one more cause for this issue in the logs, and
 I have added a comment to the bug report.

 Salvatore


 On 9 January 2014 19:10, Jay Pipes jaypi...@gmail.com wrote:

 On Thu, 2014-01-09 at 09:09 +0100, Salvatore Orlando wrote:
  I am afraid I need to correct you Jay!

 I always welcome corrections to things I've gotten wrong, so no worries
 at all!

  This actually appears to be bug 1253896 [1]

 Ah, the infamous SSH bug :) Yeah, so last night I spent a few hours
 digging through log files and running a variety of e-r queries trying to
 find some patterns for the bugs that Joe G had sent an ML post about.

 I went round in circles, unfortunately :( When I thought I'd found a
 pattern, invariably I would doubt my initial findings and wander into
 new areas in a wild goose chase.


 that's pretty much what I do all the time.


 At various times, I thought something was up with the DHCP agent, as
 there were lots of No DHCP Agent found errors in the q-dhcp screen
 logs. But I could not correlate any relationship with the failures in
 the 4 bugs.


 I've seen those warnings as well. They are pretty common, and I think they
 are actually benign; since the DHCP for the network is configured
 asynchronously, it is probably normal to see that message.


 Then I started thinking that there was a timing/race condition where a
 security group was being added to the Nova-side servers cache before it
 had actually been constructed fully on the Neutron-side. But I was not
 able to fully track down the many, many debug messages that are involved
 in the full sequence of VM launch :( At around 4am, I gave up and went
 to bed...


 I have not investigated how this could impact connectivity. However, one
 thing that is not OK in my opinion is that we have no way to know whether
 a security group is enforced or not; I think it needs an 'operational
 status'.
 Note: we're working on a patch for the nicira plugin to add this concept;
 it's currently being developed as a plugin-specific extension, but if there
 is interest to support the concept also in the ml2 plugin I think we can
 just make it part of the 'core' security group API.


  Technically, what we call 'bug' here is actually a failure
  manifestation.
  So far, we have removed several bugs causing this failure. The last
  patch was pushed to devstack around Christmas.
  Nevertheless, if you look at recent comments and Joe's email, we still
  have a non-negligible failure rate on the gate.

 Understood. I suspect actually that some of the various performance
 improvements from Phil Day and others around optimizing certain server
 and secgroup list calls have made the underlying race conditions show up
 more often -- since the list calls are completing much faster, which
 ironically gives Neutron less time to complete setup operations!


 That might be one explanation. The other might be the fact that we added
 another scenario test for neutron which creates more vms with floating ips
 and stuff, thus increasing the chances of hitting the timeout failure.


 So, a performance patch on the Nova side ends up putting more pressure
 on the Neutron side, which causes the rate of occurrence for these
 sticky bugs (with potentially many root causes) to spike.

 Such is life I guess :)

  It is also worth mentioning that if you are running your tests with
  parallelism enabled (ie: you're running tempest with tox -esmoke
  rather than tox -esmokeserial) you will end up with a higher
  occurrence of this failure due to more bugs causing it. These bugs are
  due to some weakness in the OVS agent that we are addressing with
  patches for blueprint neutron-tempest-parallel [2].

 Interesting. If you wouldn't mind, what makes you think this is a
 weakness in the OVS agent? I would certainly appreciate your expertise
 in this area, since it would help me in my own bug-searching endeavors.


 Basically those are all the patches addressing the linked blueprint; I
 have added more info in the commit messages for the patches.
 Also some of those patches target this bug as well:
 https://bugs.launchpad.net/neutron/+bug/1253993


 All the best,
 -jay

  Regards,
  Salvatore
 
 
 
 
  [1] https://bugs.launchpad.net/neutron/+bug/1253896
  [2]
 https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
 
 
  On 9 January 2014 05:38, Jay Pipes jaypi...@gmail.com wrote:
  On Wed, 2014-01-08 at 18:46 -0800, Sukhdev Kapur wrote:
   Dear fellow 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Doug Hellmann
On Thu, Jan 9, 2014 at 2:34 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Doug

 2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:
 
 
 
  On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi folks
 
  Thank you for your input.
 
  The key difference from external configuration system (Chef, puppet
  etc) is integration with
  openstack services.
  There are cases a process should know the config value in the other
 hosts.
  If we could have centralized config storage api, we can solve this
 issue.
 
  One example of such case is neuron + nova vif parameter configuration
  regarding to security group.
  The workflow is something like this.
 
  nova asks vif configuration information for neutron server.
  Neutron server ask configuration in neutron l2-agent on the same host
  of nova-compute.
 
 
  That extra round trip does sound like a potential performance bottleneck,
  but sharing the configuration data directly is not the right solution. If
  the configuration setting names are shared, they become part of the
  integration API between the two services. Nova should ask neutron how to
  connect the VIF, and it shouldn't care how neutron decides to answer that
  question. The configuration setting is an implementation detail of
 neutron
  that shouldn't be exposed directly to nova.

 I agree for nova - neutron if.
 However, neutron server and neutron l2 agent configuration depends on
 each other.


  Running a configuration service also introduces what could be a single
 point
  of failure for all of the other distributed services in OpenStack. An
  out-of-band tool like chef or puppet doesn't result in the same sort of
  situation, because the tool does not have to be online in order for the
  cloud to be online.

 We can choose same implementation. ( Copy information in local cache etc)

 Thank you for your input, I could organize my thought.
 My proposal can be split for the two bps.

 [BP1] conf api for the other process
 Provide standard way to know the config value in the other process in
 same host or the other host.


Please don't do this. It's just a bad idea to expose the configuration
settings between apps this way, because it couples the applications tightly
at a low level, instead of letting the applications have APIs for sharing
logical information at a high level. It's the difference between asking
what is the value of a specific configuration setting on a particular
hypervisor and asking how do I connect a VIF for this instance. The
latter lets you provide different answers based on context. The former
doesn't.

Doug




 - API Example:
 conf.host('host1').firewall_driver

 - Conf file baed implementaion:
 config for each host will be placed in here.
  /etc/project/conf.d/{hostname}/agent.conf

 [BP2] Multiple backend for storing config files

 Currently, we have only file based configration.
 In this bp, we are extending support for config storage.
 - KVS
 - SQL
 - Chef - Ohai


 Best
 Nachi

  Doug
 
 
 
 
  host1
neutron server
nova-api
 
  host2
neturon l2-agent
nova-compute
 
  In this case, a process should know the config value in the other hosts.
 
  Replying some questions
 
   Adding a config server would dramatically change the way that
  configuration management tools would interface with OpenStack services.
  [Jay]
 
  Since this bp is just adding new mode, we can still use existing
 config
  files.
 
   why not help to make Chef or Puppet better and cover the more
 OpenStack
   use-cases rather than add yet another competing system [Doug, Morgan]
 
  I believe this system is not competing system.
  The key point is we should have some standard api to access such
 services.
  As Oleg suggested, we can use sql-server, kv-store, or chef or puppet
  as a backend system.
 
  Best
  Nachi
 
 
  2014/1/9 Morgan Fainberg m...@metacloud.com:
   I agree with Doug’s question, but also would extend the train of
 thought
   to
   ask why not help to make Chef or Puppet better and cover the more
   OpenStack
   use-cases rather than add yet another competing system?
  
   Cheers,
   Morgan
  
   On January 9, 2014 at 10:24:06, Doug Hellmann
   (doug.hellm...@dreamhost.com)
   wrote:
  
   What capabilities would this new service give us that existing,
 proven,
   configuration management tools like chef and puppet don't have?
  
  
   On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
  
   Hi Flavio
  
   Thank you for your input.
   I agree with you. oslo.config isn't right place to have server side
   code.
  
   How about oslo.configserver ?
   For authentication, we can reuse keystone auth and oslo.rpc.
  
   Best
   Nachi
  
  
   2014/1/9 Flavio Percoco fla...@redhat.com:
On 08/01/14 17:13 -0800, Nachi Ueno wrote:
   
Hi folks
   
OpenStack process tend to have many config options, and many
 hosts.
It is a pain to manage this tons of config options.
To centralize this management helps operation.
   
We can 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Jeremy Hanmer
Having run openstack clusters for ~2 years, I can't say that I've ever
desired such functionality.

How do you see these interactions defined?  For instance, if I deploy
a custom driver for Neutron, does that mean I also have to patch
everything that will be talking to it (Nova, for instance) so they can
agree on compatibility?

Also, I know that I run what is probably a more complicated cluster
than most production clusters, but I can't think of very many
configuration options that are globally in sync across the cluster.
Hypervisors, network drivers, mysql servers, API endpoints...they all
might vary between hosts/racks/etc.

On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
 Hi Jeremy

 Don't you think it is a burden for operators to have to choose the correct
 combination of configs for multiple nodes, even if we have chef and
 puppet?

 If we have some constraint or dependency between configurations, such logic
 should be in the openstack source code.
 We can solve this issue if we have a standard way to know the config
 value of another process on another host.

 Something like this.
 self.conf.host('host1').firewall_driver

 Then we can have a chef- or file-based config backend for this, for example.

 Best
 Nachi


 2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 +1 to Jay.  Existing tools are both better suited to the job and work
 quite well in their current state.  To address Nachi's first example,
 there's nothing preventing a Nova node in Chef from reading Neutron's
 configuration (either by using a (partial) search or storing the
 necessary information in the environment rather than in roles).  I
 assume Puppet offers the same.  Please don't re-invent this hugely
 complicated wheel.

 On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I'm very appreciate any comments on this.

 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 Hi Nati and Flavio!

 So, I'm -1 on this idea, just because I think it belongs in the realm of
 configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
 tools are built to manage multiple configuration files and changes in
 them. Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 Instead of managing the config file templates as all of the tools
 currently do, the tools would need to essentially need to forego the
 tried-and-true INI files and instead write a bunch of code in order to
 deal with REST API set/get operations for changing configuration data.

 In summary, while I agree that OpenStack services have an absolute TON
 of configurability -- for good and bad -- there are ways to improve the
 usability of configuration without changing the paradigm that most
 configuration management tools expect. One such example is having
 include.d/ support -- similar to the existing oslo.cfg module's support
 for a --config-dir, but more flexible and more like what other open
 source programs (like Apache) have done for years.
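 For example, oslo.config's existing --config-dir already gives you the basic
 shape (paths below are illustrative):

  /etc/nova/nova.conf
  /etc/nova/conf.d/10-database.conf
  /etc/nova/conf.d/20-rabbit.conf

  nova-api --config-file /etc/nova/nova.conf --config-dir /etc/nova/conf.d

 where the conf.d snippets are ordinary INI fragments read alongside the main
 file, so a config management tool just drops or removes fragments rather than
 rewriting one big template.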

 All the best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Doug

Thank you for your input.

2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:



 On Thu, Jan 9, 2014 at 2:34 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Doug

 2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:
 
 
 
  On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi folks
 
  Thank you for your input.
 
  The key difference from external configuration system (Chef, puppet
  etc) is integration with
  openstack services.
  There are cases a process should know the config value in the other
  hosts.
  If we could have centralized config storage api, we can solve this
  issue.
 
  One example of such case is neuron + nova vif parameter configuration
  regarding to security group.
  The workflow is something like this.
 
  nova asks vif configuration information for neutron server.
  Neutron server ask configuration in neutron l2-agent on the same host
  of nova-compute.
 
 
  That extra round trip does sound like a potential performance
  bottleneck,
  but sharing the configuration data directly is not the right solution.
  If
  the configuration setting names are shared, they become part of the
  integration API between the two services. Nova should ask neutron how to
  connect the VIF, and it shouldn't care how neutron decides to answer
  that
  question. The configuration setting is an implementation detail of
  neutron
  that shouldn't be exposed directly to nova.

 I agree for nova - neutron if.
 However, neutron server and neutron l2 agent configuration depends on
 each other.


  Running a configuration service also introduces what could be a single
  point
  of failure for all of the other distributed services in OpenStack. An
  out-of-band tool like chef or puppet doesn't result in the same sort of
  situation, because the tool does not have to be online in order for the
  cloud to be online.

 We can choose same implementation. ( Copy information in local cache etc)

 Thank you for your input, I could organize my thought.
 My proposal can be split for the two bps.

 [BP1] conf api for the other process
 Provide standard way to know the config value in the other process in
 same host or the other host.


 Please don't do this. It's just a bad idea to expose the configuration
 settings between apps this way, because it couples the applications tightly
 at a low level, instead of letting the applications have APIs for sharing
 logical information at a high level. It's the difference between asking
 what is the value of a specific configuration setting on a particular
 hypervisor and asking how do I connect a VIF for this instance. The
 latter lets you provide different answers based on context. The former
 doesn't.

Essentially, a configuration is an API.
I don't think every configuration option is a kind of low-level
configuration (timeouts, etc.).
Some configuration should tell how do I connect a VIF for this instance,
and we should select such high-level design configuration parameters.

 Doug




 - API Example:
 conf.host('host1').firewall_driver

 - Conf file baed implementaion:
 config for each host will be placed in here.
  /etc/project/conf.d/{hostname}/agent.conf

 [BP2] Multiple backend for storing config files

 Currently, we have only file based configration.
 In this bp, we are extending support for config storage.
 - KVS
 - SQL
 - Chef - Ohai


 Best
 Nachi

  Doug
 
 
 
 
  host1
neutron server
nova-api
 
  host2
neturon l2-agent
nova-compute
 
  In this case, a process should know the config value in the other
  hosts.
 
  Replying some questions
 
   Adding a config server would dramatically change the way that
  configuration management tools would interface with OpenStack services.
  [Jay]
 
  Since this bp is just adding new mode, we can still use existing
  config
  files.
 
   why not help to make Chef or Puppet better and cover the more
   OpenStack
   use-cases rather than add yet another competing system [Doug, Morgan]
 
  I believe this system is not competing system.
  The key point is we should have some standard api to access such
  services.
  As Oleg suggested, we can use sql-server, kv-store, or chef or puppet
  as a backend system.
 
  Best
  Nachi
 
 
  2014/1/9 Morgan Fainberg m...@metacloud.com:
   I agree with Doug’s question, but also would extend the train of
   thought
   to
   ask why not help to make Chef or Puppet better and cover the more
   OpenStack
   use-cases rather than add yet another competing system?
  
   Cheers,
   Morgan
  
   On January 9, 2014 at 10:24:06, Doug Hellmann
   (doug.hellm...@dreamhost.com)
   wrote:
  
   What capabilities would this new service give us that existing,
   proven,
   configuration management tools like chef and puppet don't have?
  
  
   On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
  
   Hi Flavio
  
   Thank you for your input.
   I agree with you. oslo.config isn't right place to have server side
   code.
  
   How about oslo.configserver ?
   For 

Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-09 Thread Georgy Okrokvertskhov
Hi Adam,

This looks very interesting. When do you expect to have this code available
in oslo? Do you have a development guide which describes best practices for
using this authorization approach?

I think that for Pecan it will be possible to get rid of the @protected wrapper
and use the SecureController class as a parent. It has a method which will be
called before each controller method call. I saw Pecan was moved to
stackforge, so it is probably a good idea to talk with the Pecan developers and
discuss how this part of keystone can be integrated with / supported by the Pecan
framework.
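To make the hook idea concrete, here is a minimal sketch - the rule naming
scheme, the 'solum.context' environ key and the _is_allowed stub are
placeholders for whatever the project settles on, not a proposal for the
actual policy keys:

from pecan import abort
from pecan.hooks import PecanHook


def _is_allowed(rule, context):
    # Placeholder for a real check against policy.json via the policy
    # engine being extracted from keystone; always allow in this sketch.
    return True


class PolicyHook(PecanHook):
    """Resolve each request to a policy rule before the controller runs."""

    def before(self, state):
        # state.request is the WebOb request Pecan has already parsed.
        rule = '%s:%s' % (state.request.method.lower(), state.request.path)
        # assume an earlier auth hook stored the request context here
        context = state.request.environ.get('solum.context')
        if not _is_allowed(rule, context):
            abort(403)

The hook would then just be passed in the hooks= list when the Pecan app is
created, so individual controllers stay free of authorization code.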


On Wed, Jan 8, 2014 at 8:34 PM, Adam Young ayo...@redhat.com wrote:

  We are working on cleaning up the Keystone code with an eye to Oslo and
 reuse:

 https://review.openstack.org/#/c/56333/


 On 01/08/2014 02:47 PM, Georgy Okrokvertskhov wrote:

 Hi,

  Keeping policy control in one place is a good idea. We can use the standard
 policy approach and keep the access control configuration in a json file as is
 done in Nova and other projects.
  Keystone uses a wrapper function for methods. Here is the wrapper code:
 https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L111.
 Each controller method has a @protected() wrapper, so the method information is
 available through python f.__name__ instead of URL parsing. It means that
 some RBAC parts are scattered among the code anyway.

  If we want to avoid RBAC being scattered among the code, we can use the
 URL-parsing approach and have all the logic inside a hook. In a pecan hook the
 WSGI environment is already created and there is full access to the request
 parameters/content. We can map the URL to a policy key.

  So we have two options:
 1. Add a wrapper to each API method like all the other projects did
 2. Add a hook with URL parsing which maps the path to a policy key.


  Thanks
 Georgy



 On Wed, Jan 8, 2014 at 9:05 AM, Kurt Griffiths 
 kurt.griffi...@rackspace.com wrote:

  Yeah, that could work. The main thing is to try and keep policy control
 in one place if you can rather than sprinkling it all over the place.

   From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Wednesday, January 8, 2014 at 10:41 AM

 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
 SecureController vs. Nova policy

   Hi Kurt,

  As for WSGI middleware I think about Pecan hooks which can be added
 before actual controller call. Here is an example how we added a hook for
 keystone information collection:
 https://review.openstack.org/#/c/64458/4/solum/api/auth.py

  What do you think, will this approach with Pecan hooks work?

  Thanks
 Georgy


 On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths 
 kurt.griffi...@rackspace.com wrote:

  You might also consider doing this in WSGI middleware:

  Pros:

- Consolidates policy code in once place, making it easier to audit
and maintain
- Simple to turn policy on/off – just don’t insert the middleware
when off!
- Does not preclude the use of oslo.policy for rule checking
- Blocks unauthorized requests before they have a chance to touch
the web framework or app. This reduces your attack surface and can 
 improve
performance   (since the web framework has yet to parse the request).

 Cons:

- Doesn't work for policies that require knowledge that isn’t
available this early in the pipeline (without having to duplicate a lot 
 of
code)
- You have to parse the WSGI environ dict yourself (this may not be
a big deal, depending on how much knowledge you need to glean in order to
enforce the policy).
- You have to keep your HTTP path matching in sync with with your
route definitions in the code. If you have full test coverage, you will
know when you get out of sync. That being said, API routes tend to be 
 quite
stable in relation to to other parts of the code implementation once you
have settled on your API spec.

 I’m sure there are other pros and cons I missed, but you can make your
 own best judgement whether this option makes sense in Solum’s case.
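A bare-bones illustration of that middleware option (the rule table, the role
check and the '/v1/assemblies' path are stand-ins for a real policy engine):

# Illustrative rules only: required role keyed by (method, path).
RULES = {
    ('POST', '/v1/assemblies'): 'admin',
}


class PolicyMiddleware(object):
    """Reject requests whose (method, path) needs a role the caller lacks."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        required = RULES.get((environ['REQUEST_METHOD'], environ['PATH_INFO']))
        # keystone's auth_token middleware exposes the roles as X-Roles
        roles = environ.get('HTTP_X_ROLES', '').split(',')
        if required and required not in roles:
            start_response('403 Forbidden', [('Content-Type', 'text/plain')])
            return ['Forbidden']
        return self.app(environ, start_response)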

   From: Doug Hellmann doug.hellm...@dreamhost.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Tuesday, January 7, 2014 at 6:54 AM
 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
 SecureController vs. Nova policy




 On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:

 Hi Dough,

  Thank you for pointing to this code. As I see you use OpenStack
 policy framework but not Pecan security features. How do you implement fine
 grain access control like user allowed to read only, writers and admins.
 Can you block part of API methods for specific user like access to create
 methods for specific user role?


  The policy enforcement isn't simple on/off switching in ceilometer, so
 we're using the policy framework calls in a couple of 

Re: [openstack-dev] Devstack on Fedora 20

2014-01-09 Thread Adam Young
That didn't seem to make a difference; still no dogpile.cache. The RPMs are not 
getting installed, even if I deliberately add a line for 
python-dogpile-cache.

Shouldn't it get installed via pip even without the rpm line?




On 01/09/2014 02:27 PM, Sean Dague wrote:

You need a working version of this patch to land -
https://review.openstack.org/#/c/63647/

Because of the rhel6 support in devstack, every new version of fc needs
manual support, because there are tons of packages needed in fc* that
don't exist in rhel.

-Sean

On 01/09/2014 02:15 PM, Adam Young wrote:

So finally tried running a devstack instance on Fedora 20:  rootwrap
failed on the cinder stage of the install.  So I scaled back to a
Keystone only install.

[fedora@ayoung-f20 devstack]$ cat localrc
FORCE=yes
ENABLED_SERVICES=key,mysql,qpid



This failed starting the Keystone server with two module dependencies
missing:  first was dogpile.cache, and second was lxml.  I installed
both with Yum and devstack completed with Keystone up and running:  I
was able to fetch a token.

Dogpile is in the requirements.txt file, but not in the list of RPMS to
install for devstack.  I tried adding it to devstack/files/rpms. lxml
was already in there:  Neither was installed.

  (requirements.txt state that Keystone needs dogpile.cache = 0.5.0
which is what F20 has in Yum)


What am I missing here?


[fedora@ayoung-f20 devstack]$ git diff
diff --git a/files/rpms/keystone b/files/rpms/keystone
index 52dbf47..deed296 100644
--- a/files/rpms/keystone
+++ b/files/rpms/keystone
@@ -1,5 +1,6 @@
+python-dogpile-cache #dist:f20
  python-greenlet
-python-lxml #dist:f16,f17,f18,f19
+python-lxml #dist:f16,f17,f18,f19,f20
  python-paste#dist:f16,f17,f18,f19
  python-paste-deploy #dist:f16,f17,f18,f19
  python-paste-script #dist:f16,f17,f18,f19


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 Having run openstack clusters for ~2 years, I can't say that I've ever
 desired such functionality.

My proposal adds functionality, it doesn't remove any.
So if you are satisfied with file-based configuration managed with chef or puppet,
this change won't affect you.

 How do you see these interactions defined?  For instance, if I deploy
 a custom driver for Neutron, does that mean I also have to patch
 everything that will be talking to it (Nova, for instance) so they can
 agree on compatibility?

Nova talks with Neutron via the neutron API, so it should be OK because we
are taking care of
backward compatibility in the REST API.

The point in my example is neutron server + neutron l2 agent sync.

 Also, I know that I run what is probably a more complicated cluster
 than most production clusters, but I can't think of very many
 configuration options that are globally in sync across the cluster.
 Hypervisors, network drivers, mysql servers, API endpoints...they all
 might vary between hosts/racks/etc.

Supporting such heterogeneous environments is a purpose of this bp.
Configuration dependencies are a pain point for me, and they get even worse
if the env is heterogeneous.

I also have some experience running openstack clusters, but it is still
a pain for me.

My experience is something like this:
# Wow, new release! Ohh, this chef repo doesn't support it yet..
# Hmm, I should modify the chef recipe.. hmm debug.. debug..


 On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
 Hi Jeremy

 Don't you think it is burden for operators if we should choose correct
 combination of config for multiple nodes even if we have chef and
 puppet?

 If we have some constraint or dependency in configurations, such logic
 should be in openstack source code.
 We can solve this issue if we have a standard way to know the config
 value of other process in the other host.

 Something like this.
 self.conf.host('host1').firewall_driver

 Then we can have a chef/or file baed config backend code for this for 
 example.

 Best
 Nachi


 2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 +1 to Jay.  Existing tools are both better suited to the job and work
 quite well in their current state.  To address Nachi's first example,
 there's nothing preventing a Nova node in Chef from reading Neutron's
 configuration (either by using a (partial) search or storing the
 necessary information in the environment rather than in roles).  I
 assume Puppet offers the same.  Please don't re-invent this hugely
 complicated wheel.

 On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I'm very appreciate any comments on this.

 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 Hi Nati and Flavio!

 So, I'm -1 on this idea, just because I think it belongs in the realm of
 configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
 tools are built to manage multiple configuration files and changes in
 them. Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 Instead of managing the config file templates as all of the tools
 currently do, the tools would need to essentially need to forego the
 tried-and-true INI files and instead write a bunch of code in order to
 deal with REST API set/get operations for changing configuration data.

 In summary, while I agree that OpenStack services have an absolute TON
 of configurability -- for good and bad -- there are ways to improve the
 usability of configuration without changing the paradigm that most
 configuration management tools expect. One such example is having
 include.d/ support -- similar to the existing 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Robert Kukura
On 01/09/2014 02:34 PM, Nachi Ueno wrote:
 Hi Doug
 
 2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:



 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration system (Chef, puppet
 etc) is integration with
 openstack services.
 There are cases a process should know the config value in the other hosts.
 If we could have centralized config storage api, we can solve this issue.

 One example of such case is neuron + nova vif parameter configuration
 regarding to security group.
 The workflow is something like this.

 nova asks vif configuration information for neutron server.
 Neutron server ask configuration in neutron l2-agent on the same host
 of nova-compute.


 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.
 
 I agree for nova - neutron if.
 However, neutron server and neutron l2 agent configuration depends on
 each other.
 
 Running a configuration service also introduces what could be a single point
 of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.
 
 We can choose same implementation. ( Copy information in local cache etc)
 
 Thank you for your input, I could organize my thought.
 My proposal can be split for the two bps.
 
 [BP1] conf api for the other process
 Provide standard way to know the config value in the other process in
 same host or the other host.
 
 - API Example:
 conf.host('host1').firewall_driver
 
 - Conf file baed implementaion:
 config for each host will be placed in here.
  /etc/project/conf.d/{hostname}/agent.conf
 
 [BP2] Multiple backend for storing config files
 
 Currently, we have only file based configration.
 In this bp, we are extending support for config storage.
 - KVS
 - SQL
 - Chef - Ohai

I'm not opposed to making oslo.config support pluggable back ends, but I
don't think BP2 could be depended upon to satisfy a requirement for a
global view of arbitrary config information, since this wouldn't be
available if a file-based backend were selected.

As far as the neutron server getting info it needs about running L2
agents, this is currently done via the agents_db RPC, where each agent
periodically sends certain info to the server and the server stores it
in the DB for subsequent use. The same mechanism is also used for L3 and
DHCP agents, and probably for *aaS agents. Some agent config information
is included, as well as some stats, etc.. This mechanism does the job,
but could be generalized and improved a bit. But I think this flow of
information is really for specialized purposes - only a small subset of
the config info is passed, and other info is passed that doesn't come
from config.

My only real concern with using this current mechanism is that some of
the information (stats and liveness) is very dynamic, while other
information (config) is relatively static. It's a bit wasteful to send
all of it every couple of seconds, but at least liveness (heartbeat) info
does need to be sent frequently. BP1 sounds like it could address the
static part, but I'm still not sure config file info is the only
relatively static info that might need to be shared. I think neutron can
stick with its agents_db RPC, DB, and API extension for now, and improve
it as needed.

-Bob

 
 Best
 Nachi
 
 Doug




 host1
   neutron server
   nova-api

 host2
   neturon l2-agent
   nova-compute

 In this case, a process should know the config value in the other hosts.

 Replying some questions

 Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 [Jay]

 Since this bp is just adding new mode, we can still use existing config
 files.

 why not help to make Chef or Puppet better and cover the more OpenStack
 use-cases rather than add yet another competing system [Doug, Morgan]

 I believe this system is not competing system.
 The key point is we should have some standard api to access such services.
 As Oleg suggested, we can use sql-server, kv-store, or chef or puppet
 as a backend system.

 Best
 Nachi


 2014/1/9 Morgan Fainberg m...@metacloud.com:
 I agree with Doug’s question, but also would extend the train of thought
 to
 ask why not help to make Chef or Puppet better and cover the more
 OpenStack
 use-cases rather than add yet another competing system?

 Cheers,
 Morgan

 On January 9, 2014 at 

Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Jay Dobies

The UI will also need to be able to look at the Heat resources running
within the overcloud stack and classify them according to a resource
category.  How do you envision that working?


There's a way in a Heat template to specify arbitrary metadata on a 
resource. We can add flags in there and key off of those.
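As a rough illustration (HOT-style syntax; the tuskar_resource_category key and 
the parameter names are made up here purely to show the idea):

  compute0:
    type: OS::Nova::Server
    metadata:
      tuskar_resource_category: compute
    properties:
      image: { get_param: compute_image_id }
      flavor: { get_param: compute_flavor }

The UI (or the Tuskar API on its behalf) could then read the metadata back off 
each resource in the overcloud stack and bucket the running resources by that key.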



Next steps for me are to start to work on the Tuskar APIs around
Resource Category CRUD and their conversion into a Heat template.
There's some discussion to be had there as well, but I don't want to put
too much into one e-mail.



I'm looking forward to seeing the API specification, as Resource Category
CRUD is currently a big unknown in the tuskar-ui api.py file.


Mainn




Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Oleg Gelbukh
On Thu, Jan 9, 2014 at 10:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration system (Chef, puppet
 etc) is integration with
 openstack services.
 There are cases a process should know the config value in the other hosts.
 If we could have centralized config storage api, we can solve this issue.


Technically, that is already implemented in TripleO: configuration params
are stored in Heat template metadata, and the os-*-config scripts apply
changes to those parameters on the nodes. I'm not sure if that could help
solve the use case you describe, as overcloud nodes probably won't have
access to the undercloud Heat server. But that counts as centralized storage
of configuration information, from my standpoint.

--
Best regards,
Oleg Gelbukh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][3rd Party Testing]Remove voting until your testing structure works.

2014-01-09 Thread Sukhdev Kapur
Hi Anita, et. all,

I understand that if the -1 is being cast because of testing framework
failures, it is a real pain.

Assuming the framework is working fine, I have noticed that there can still be
genuine failures, either because devstack fails to fully stack up or because of
test failures. In my specific case, I have observed that
test_network_basic_ops() has a 70% success rate and devstack fails once in a
while as well - and these failures are not because of the patch being
reviewed.
Therefore, being sensitive to patch submitters and not wanting to cast -1 votes,
I am not running the basic ops test. However, I would eventually like to start
running this test. This means we will start to vote -1 once in a while (or
a lot of the time, depending upon the test results).

My question is: as long as a -1 vote is cast with logs of the results, is
it OK - even if the failures are because of known bugs/issues?


regards..
-Sukhdev







On Thu, Jan 2, 2014 at 6:11 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/30/2013 09:32 PM, Anita Kuno wrote:
  Please.
 
  When your third party testing structure votes on patches and your
  testing structure is not stable, it will vote with a -1 on patches.
 
  This results in three consequences:
  1. The patch it votes on starts a countdown for abandonment, this is
  frustrating for developers.
  2. Reviewers who use -Verified-1 as a filter criteria will not review a
  patch with a -1 in the verification column in the Gerrit dashboard. This
  prevents developers from progressing in their work, and this also
  prevents reviewers from reviewing patches that need to be assessed.
  3. Third party testing that does not provide publicly accessible logs
  leave developers with no way to diagnose the issue, which makes it very
  difficult for a developer to fix, leaving the patch in a state of limbo.
 
  You can post messages to the patches, including a stable working url to
  the logs for your tests, as you continue to work on your third party
 tests.
 
  You are also welcome to post a success of failure message, just please
  refrain from allowing your testing infrastructure to vote on patches
  until your testing infrastructure is working, reliably, and has logs
  that developers can use to fix problems.
 
  The list of third party testing accounts are found here.[0]
 
  Right now there are three plugins that need to remove voting until they
  are stable.
 
  Please be active in #openstack-neutron, #openstack-qa, and the mailing
  list so that if there is an issue with your testing structure, people
  can come and talk to you.
 
  Thank you,
  Anita.
 
  [0]https://review.openstack.org/#/admin/groups/91,members
 
 Keep in mind that the email provided in this list is expected to be an
 email people can use to contact you with questions and concerns about
 your testing interactions. [0]

 The point of the exercise is to provide useful and helpful information
 to developers and reviewers so that we all improve our code and create a
 better, more integrated product than we have now.

 Please check the email inboxes of the email addresses you have provided
 and please respond to inquires in a timely fashion.

 Please also remember people will look for you on irc, so having a
 representative available on irc for discussion will give you some useful
 feedback on ensuring your third party testing structure is behaving as
 efficiently as possible.

 We have a certain level of tolerance for unproductive noise while you
 are getting the bugs knocked out of your system. If developers are
 trying to contact you for more information and there is no response,
 third party testing structures that fail to comply with the expectation
 that they will respond to questions, will be addressed on a case by case
 basis.

 Thank you in advance for your kind attention,
 Anita.

 [0] https://review.openstack.org/#/admin/groups/91,members

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Bob

2014/1/9 Robert Kukura rkuk...@redhat.com:
 On 01/09/2014 02:34 PM, Nachi Ueno wrote:
 Hi Doug

 2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:



 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration system (Chef, puppet
 etc) is integration with
 openstack services.
 There are cases a process should know the config value in the other hosts.
 If we could have centralized config storage api, we can solve this issue.

 One example of such case is neuron + nova vif parameter configuration
 regarding to security group.
 The workflow is something like this.

 nova asks vif configuration information for neutron server.
 Neutron server ask configuration in neutron l2-agent on the same host
 of nova-compute.


 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.

 I agree for nova - neutron if.
 However, neutron server and neutron l2 agent configuration depends on
 each other.

 Running a configuration service also introduces what could be a single point
 of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.

 We can choose same implementation. ( Copy information in local cache etc)

 Thank you for your input, I could organize my thought.
 My proposal can be split for the two bps.

 [BP1] conf api for the other process
 Provide standard way to know the config value in the other process in
 same host or the other host.

 - API Example:
 conf.host('host1').firewall_driver

 - Conf file based implementation:
 config for each host will be placed in here.
  /etc/project/conf.d/{hostname}/agent.conf

 [BP2] Multiple backend for storing config files

 Currently, we have only file-based configuration.
 In this bp, we are extending support for config storage.
 - KVS
 - SQL
 - Chef - Ohai

 I'm not opposed to making oslo.config support pluggable back ends, but I
 don't think BP2 could be depended upon to satisfy a requirement for a
 global view of arbitrary config information, since this wouldn't be
 available if a file-based backend were selected.

We can do it even if it's a file-based backend.
Chef or puppet would place a copy of the configuration on both the server side and the agent side.
The server then reads the agent configuration from the copy stored on the server.
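
A rough sketch (illustrative only; paths and names invented) of what such a
file-based backend for the proposed conf API could look like:

  # Rough sketch of the proposed conf.host() API with a file-based backend.
  # It assumes chef/puppet (or anything else) has dropped a copy of each
  # host's agent.conf under /etc/neutron/conf.d/<hostname>/.
  import configparser

  class HostConf(object):
      def __init__(self, path):
          self._parser = configparser.ConfigParser()
          self._parser.read(path)

      def __getattr__(self, name):
          # Real code would handle sections, types and defaults like oslo.config.
          return self._parser.get('DEFAULT', name)

  class Conf(object):
      def host(self, hostname):
          return HostConf('/etc/neutron/conf.d/%s/agent.conf' % hostname)

  # usage: Conf().host('host1').firewall_driver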

 As far as the neutron server getting info it needs about running L2
 agents, this is currently done via the agents_db RPC, where each agent
 periodically sends certain info to the server and the server stores it
 in the DB for subsequent use. The same mechanism is also used for L3 and
 DHCP agents, and probably for *aaS agents. Some agent config information
 is included, as well as some stats, etc.. This mechanism does the job,
 but could be generalized and improved a bit. But I think this flow of
 information is really for specialized purposes - only a small subset of
 the config info is passed, and other info is passed that doesn't come
 from config.

I agree here.
We need a generic framework for:

- static config with server and agent
- dynamic resource information and update
- stats or liveness updates

Today, we are re-inventing these frameworks in the different processes.

 My only real concern with using this current mechanism is that some of
 the information (stats and liveness) is very dynamic, while other
 information (config) is relatively static. Its a bit wasteful to send
 all of it every couple seconds, but at least liveness (heartbeat) info
 does need to be sent frequently. BP1 sounds like it could address the
 static part, but I'm still not sure config file info is the only
 relatively static info that might need to be shared. I think neutron can
 stick with its agents_db RPC, DB, and API extension for now, and improve
 it as needed.

I got it.
It looks like the community tends not to like this idea, so it's not
good timing
to do this in a generic way.
Let's work on this in Neutron for now.

Doug, Jeremy, Jay, Greg:
Thank you for your inputs! I'll obsolete this bp.

Nachi

 -Bob


 Best
 Nachi

 Doug




 host1
   neutron server
   nova-api

 host2
   neutron l2-agent
   nova-compute

 In this case, a process should know the config value in the other hosts.

 Replying some questions

 Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 [Jay]

 Since this bp is just adding 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Jay Pipes
Hope you don't mind, I'll jump in here :)

On Thu, 2014-01-09 at 11:08 -0800, Nachi Ueno wrote:
 Hi Jeremy
 
 Don't you think it is burden for operators if we should choose correct
 combination of config for multiple nodes even if we have chef and
 puppet?

It's more of a burden for operators to have to configure OpenStack in
multiple ways.

 If we have some constraint or dependency in configurations, such logic
 should be in openstack source code.

Could you explain this a bit more? I generally view packages and things
like requirements.txt and setup.py [extra] sections as the canonical way
of resolving dependencies. An example here would be great.

 We can solve this issue if we have a standard way to know the config
 value of other process in the other host.
 
 Something like this.
 self.conf.host('host1').firewall_driver

This is already in every configuration management system I can think of.

In Chef, a cookbook can call out to search (or partial search) for the
node in question and retrieve such information (called attributes in
Chef-world).

In Puppet, one would use Hiera to look up another node's configuration.

In Ansible, one would use a Dynamic Inventory.

In Salt, you'd use Salt Mine.

 Then we can have a chef- or file-based config backend code for this, for example.

I actually think you're thinking about this in the reverse way to the
way operators think about things. Operators want all configuration data
managed by a singular system -- their configuration management system.
Adding a different configuration data manager into the mix is the
opposite of what most operators would like, at least, that's just in my
experience.

All the best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Oleg Gelbukh
On Fri, Jan 10, 2014 at 12:18 AM, Nachi Ueno na...@ntti3.com wrote:

 2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
  How do you see these interactions defined?  For instance, if I deploy
  a custom driver for Neutron, does that mean I also have to patch
  everything that will be talking to it (Nova, for instance) so they can
  agree on compatibility?

 Nova / Neutron talk via the neutron API, so it should be OK because we
 are taking care of
 backward compatibility in the REST API.

 The point in my example is neutron server + neutron l2 agent sync.


What about doing it the other way round, i.e. allow one server to query
certain configuration parameter(s) from the other via RPC? I believe I've
seen such a proposal quite some time ago in Nova blueprints, but with no
actual implementation.
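
Something along those lines could look roughly like this with oslo.messaging
(a sketch only; the topic and method name are made up, and the agent would have
to expose a matching endpoint):

  # Hypothetical: the server asks the agent on a given host for one of its
  # config values over RPC. Caching and error handling are omitted.
  from oslo_config import cfg
  import oslo_messaging as messaging

  def get_remote_option(host, option_name):
      transport = messaging.get_transport(cfg.CONF)
      target = messaging.Target(topic='agent-config', server=host)
      client = messaging.RPCClient(transport, target)
      return client.call({}, 'get_config_option', name=option_name)

  # e.g. get_remote_option('host2', 'firewall_driver')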

--
Best regards,
Oleg Gelbukh



  Also, I know that I run what is probably a more complicated cluster
  than most production clusters, but I can't think of very many
  configuration options that are globally in sync across the cluster.
  Hypervisors, network drivers, mysql servers, API endpoints...they all
  might vary between hosts/racks/etc.

 To support such heterogeneous environment is a purpose of this bp.
 Configuration dependency is pain point for me, and it's get more worse
 if the env is heterogeneous.

 I have also some experience to run openstack clusters, but it is still
 pain for me..

 My experience is something like this
 # Wow, new release! ohh this chef repo didn't supported..
 # hmm i should modify chef recipe.. hmm debug.. debug..


  On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
  Hi Jeremy
 
  Don't you think it is burden for operators if we should choose correct
  combination of config for multiple nodes even if we have chef and
  puppet?
 
  If we have some constraint or dependency in configurations, such logic
  should be in openstack source code.
  We can solve this issue if we have a standard way to know the config
  value of other process in the other host.
 
  Something like this.
  self.conf.host('host1').firewall_driver
 
  Then we can have a chef/or file baed config backend code for this for
 example.
 
  Best
  Nachi
 
 
  2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
  +1 to Jay.  Existing tools are both better suited to the job and work
  quite well in their current state.  To address Nachi's first example,
  there's nothing preventing a Nova node in Chef from reading Neutron's
  configuration (either by using a (partial) search or storing the
  necessary information in the environment rather than in roles).  I
  assume Puppet offers the same.  Please don't re-invent this hugely
  complicated wheel.
 
  On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
  On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
  On 08/01/14 17:13 -0800, Nachi Ueno wrote:
  Hi folks
  
  OpenStack process tend to have many config options, and many hosts.
  It is a pain to manage this tons of config options.
  To centralize this management helps operation.
  
  We can use chef or puppet kind of tools, however
  sometimes each process depends on the other processes configuration.
  For example, nova depends on neutron configuration etc
  
  My idea is to have config server in oslo.config, and let cfg.CONF
 get
  config from the server.
  This way has several benefits.
  
  - We can get centralized management without modification on each
  projects ( nova, neutron, etc)
  - We can provide horizon for configuration
  
  This is bp for this proposal.
  https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
  
  I'm very appreciate any comments on this.
 
  I've thought about this as well. I like the overall idea of having a
  config server. However, I don't like the idea of having it within
  oslo.config. I'd prefer oslo.config to remain a library.
 
  Also, I think it would be more complex than just having a server that
  provides the configs. It'll need authentication like all other
  services in OpenStack and perhaps even support of encryption.
 
  I like the idea of a config registry but as mentioned above, IMHO
 it's
  to live under its own project.
 
  Hi Nati and Flavio!
 
  So, I'm -1 on this idea, just because I think it belongs in the realm
 of
  configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
  tools are built to manage multiple configuration files and changes in
  them. Adding a config server would dramatically change the way that
  configuration management tools would interface with OpenStack
 services.
  Instead of managing the config file templates as all of the tools
  currently do, the tools would need to essentially need to forego the
  tried-and-true INI files and instead write a bunch of code in order to
  deal with REST API set/get operations for changing configuration data.
 
  In summary, while I agree that OpenStack services have an absolute TON
  of configurability -- for good and bad -- there are ways to improve
 

Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Dougal Matthews
I'm glad we are hashing this out as I think there is still some debate 
around whether Tuskar will need a database at all.


One thing to bear in mind, I think we need to make sure the terminology 
matches that described in the previous thread. I think it mostly does 
here but I'm not sure the Tuskar models do.


A few comments below.

On 09/01/14 17:22, Jay Dobies wrote:

= Nodes =
A node is a baremetal machine on which the overcloud resources will be
deployed. The ownership of this information lies with Ironic. The Tuskar
UI will accept the needed information to create them and pass it to
Ironic. Ironic is consulted directly when information on a specific node
or the list of available nodes is needed.


= Resource Categories =
A specific type of thing that will be deployed into the overcloud.


nit - Won't they be deployed into the undercloud to form the overcloud?



These are static definitions that describe the entities the user will
want to add to the overcloud and are owned by Tuskar. For Icehouse, the
categories themselves are added during installation for the four types
listed in the wireframes.

Since this is a new model (as compared to other things that live in
Ironic or Heat), I'll go into some more detail. Each Resource Category
has the following information:

== Metadata ==
My intention here is that we do things in such a way that if we change
one of the original 4 categories, or more importantly add more or allow
users to add more, the information about the category is centralized and
not reliant on the UI to provide the user information on what it is.

ID - Unique ID for the Resource Category.
Display Name - User-friendly name to display.
Description - Equally self-explanatory.

== Count ==
In the Tuskar UI, the user selects how many of each category is desired.
This is stored in Tuskar's domain model for the category and is used when
generating the template to pass to Heat to make it happen.

These counts are what is displayed to the user in the Tuskar UI for each
category. The staging concept has been removed for Icehouse. In other
words, the wireframes that cover the waiting to be deployed aren't
relevant for now.

== Image ==
For Icehouse, each category will have one image associated with it. Last
I remember, there was discussion on whether or not we need to support
multiple images for a category, but for Icehouse we'll limit it to 1 and
deal with it later.


+1, that matches my recollection.



Metadata for each Resource Category is owned by the Tuskar API. The
images themselves are managed by Glance, with each Resource Category
keeping track of just the UUID for its image.


= Stack =
There is a single stack in Tuskar, the overcloud. The Heat template
for the stack is generated by the Tuskar API based on the Resource
Category data (image, count, etc.). The template is handed to Heat to
execute.

Heat owns information about running instances and is queried directly
when the Tuskar UI needs to access that information.

--

Next steps for me are to start to work on the Tuskar APIs around
Resource Category CRUD and their conversion into a Heat template.
There's some discussion to be had there as well, but I don't want to put
too much into one e-mail.


Thoughts?


There are a number of other models in the tuskar code[1], do we need to 
consider these now too?


[1]: 
https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Jay

2014/1/9 Jay Pipes jaypi...@gmail.com:
 Hope you don't mind, I'll jump in here :)
I'll never mind discussing with you :)

 On Thu, 2014-01-09 at 11:08 -0800, Nachi Ueno wrote:
 Hi Jeremy

 Don't you think it is burden for operators if we should choose correct
 combination of config for multiple nodes even if we have chef and
 puppet?

 It's more of a burden for operators to have to configure OpenStack in
 multiple ways.

This is a separate discussion from the pain of dependent configuration across
multiple nodes.

 If we have some constraint or dependency in configurations, such logic
 should be in openstack source code.

 Could you explain this a bit more? I generally view packages and things
 like requirements.txt and setup.py [extra] sections as the canonical way
 of resolving dependencies. An example here would be great.

Those are package dependencies; I'm talking about configuration
dependencies or constraints.
For example, if we want to use VLAN with neutron,
we need matching configuration in the neutron server, nova-compute,
and the l2-agent.

We got input that this is a burden for operators.

The neutron team then started working on the port binding extension to reduce this burden.
That extension lets nova ask neutron for the vif configuration, so we can remove
the redundant network configuration from nova.conf.
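
Another concrete example of the same kind of coupling (illustrative snippets
only): the security group setup has to be kept consistent by hand between nova
and the l2-agent on each compute host:

  # nova.conf on the compute host
  security_group_api = neutron
  firewall_driver = nova.virt.firewall.NoopFirewallDriver

  # neutron l2-agent config on the same host
  [securitygroup]
  firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

If either side is changed without the other, security group enforcement can
break.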


 We can solve this issue if we have a standard way to know the config
 value of other process in the other host.

 Something like this.
 self.conf.host('host1').firewall_driver

 This is already in every configuration management system I can think of.

Yes, I agree. But we can't access it from inside the OpenStack code.

 In Chef, a cookbook can call out to search (or partial search) for the
 node in question and retrieve such information (called attributes in
 Chef-world).

 In Puppet, one would use Hiera to look up another node's configuration.

 In Ansible, one would use a Dynamic Inventory.

 In Salt, you'd use Salt Mine.

 Then we can have a chef/or file baed config backend code for this for 
 example.

 I actually think you're thinking about this in the reverse way to the
 way operators think about things. Operators want all configuration data
 managed by a singular system -- their configuration management system.
 Adding a different configuration data manager into the mix is the
 opposite of what most operators would like, at least, that's just in my
 experience.

My point is to let OpenStack access that single configuration management system.
Also, I want to reduce redundant configuration between multiple
nodes and, hopefully,
we could have some generic framework to do this.

Nachi

 All the best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen


- Original Message -
 I'm glad we are hashing this out as I think there is still some debate
 around if Tuskar will need a database at all.
 
 One thing to bear in mind, I think we need to make sure the terminology
 matches that described in the previous thread. I think it mostly does
 here but I'm not sure the Tuskar models do.
 
 A few comments below.
 
 On 09/01/14 17:22, Jay Dobies wrote:
  = Nodes =
  A node is a baremetal machine on which the overcloud resources will be
  deployed. The ownership of this information lies with Ironic. The Tuskar
  UI will accept the needed information to create them and pass it to
  Ironic. Ironic is consulted directly when information on a specific node
  or the list of available nodes is needed.
 
 
  = Resource Categories =
  A specific type of thing that will be deployed into the overcloud.
 
 nit - Wont they be deployed into undercloud to form the overcloud?
 
 
  These are static definitions that describe the entities the user will
  want to add to the overcloud and are owned by Tuskar. For Icehouse, the
  categories themselves are added during installation for the four types
  listed in the wireframes.
 
  Since this is a new model (as compared to other things that live in
  Ironic or Heat), I'll go into some more detail. Each Resource Category
  has the following information:
 
  == Metadata ==
  My intention here is that we do things in such a way that if we change
  one of the original 4 categories, or more importantly add more or allow
  users to add more, the information about the category is centralized and
  not reliant on the UI to provide the user information on what it is.
 
  ID - Unique ID for the Resource Category.
  Display Name - User-friendly name to display.
  Description - Equally self-explanatory.
 
  == Count ==
  In the Tuskar UI, the user selects how many of each category is desired.
  This is stored in Tuskar's domain model for the category and is used when
  generating the template to pass to Heat to make it happen.
 
  These counts are what is displayed to the user in the Tuskar UI for each
  category. The staging concept has been removed for Icehouse. In other
  words, the wireframes that cover the waiting to be deployed aren't
  relevant for now.
 
  == Image ==
  For Icehouse, each category will have one image associated with it. Last
  I remember, there was discussion on whether or not we need to support
  multiple images for a category, but for Icehouse we'll limit it to 1 and
  deal with it later.
 
 +1, that matches my recollection.
 
 
  Metadata for each Resource Category is owned by the Tuskar API. The
  images themselves are managed by Glance, with each Resource Category
  keeping track of just the UUID for its image.
 
 
  = Stack =
  There is a single stack in Tuskar, the overcloud. The Heat template
  for the stack is generated by the Tuskar API based on the Resource
  Category data (image, count, etc.). The template is handed to Heat to
  execute.
 
  Heat owns information about running instances and is queried directly
  when the Tuskar UI needs to access that information.
 
  --
 
  Next steps for me are to start to work on the Tuskar APIs around
  Resource Category CRUD and their conversion into a Heat template.
  There's some discussion to be had there as well, but I don't want to put
  too much into one e-mail.
 
 
  Thoughts?
 
 There are a number of other models in the tuskar code[1], do we need to
 consider these now too?
 
 [1]:
 https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py

Nope, these are gone now, in favor of Tuskar interacting directly with Ironic, 
Heat, etc.

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Oleg

2014/1/9 Oleg Gelbukh ogelb...@mirantis.com:
 On Fri, Jan 10, 2014 at 12:18 AM, Nachi Ueno na...@ntti3.com wrote:

 2014/1/9 Jeremy Hanmer jer...@dreamhost.com:

  How do you see these interactions defined?  For instance, if I deploy
  a custom driver for Neutron, does that mean I also have to patch
  everything that will be talking to it (Nova, for instance) so they can
  agree on compatibility?

 Nova / Neutron talk via the neutron API, so it should be OK because we
 are taking care of
 backward compatibility in the REST API.

 The point in my example is neutron server + neutron l2 agent sync.


 What about doing it the other way round, i.e. allow one server to query
 certain configuration parameter(s) from the other via RPC? I believe I've
 seen such proposal quite some time ago in Nova blueprints, but with no
 actual implementation.

I agree. This is my current choice.

 --
 Best regards,
 Oleg Gelbukh



  Also, I know that I run what is probably a more complicated cluster
  than most production clusters, but I can't think of very many
  configuration options that are globally in sync across the cluster.
  Hypervisors, network drivers, mysql servers, API endpoints...they all
  might vary between hosts/racks/etc.

 To support such heterogeneous environment is a purpose of this bp.
 Configuration dependency is pain point for me, and it's get more worse
 if the env is heterogeneous.

 I have also some experience to run openstack clusters, but it is still
 pain for me..

 My experience is something like this
 # Wow, new release! ohh this chef repo didn't supported..
 # hmm i should modify chef recipe.. hmm debug.. debug..


  On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
  Hi Jeremy
 
  Don't you think it is burden for operators if we should choose correct
  combination of config for multiple nodes even if we have chef and
  puppet?
 
  If we have some constraint or dependency in configurations, such logic
  should be in openstack source code.
  We can solve this issue if we have a standard way to know the config
  value of other process in the other host.
 
  Something like this.
  self.conf.host('host1').firewall_driver
 
  Then we can have a chef/or file baed config backend code for this for
  example.
 
  Best
  Nachi
 
 
  2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
  +1 to Jay.  Existing tools are both better suited to the job and work
  quite well in their current state.  To address Nachi's first example,
  there's nothing preventing a Nova node in Chef from reading Neutron's
  configuration (either by using a (partial) search or storing the
  necessary information in the environment rather than in roles).  I
  assume Puppet offers the same.  Please don't re-invent this hugely
  complicated wheel.
 
  On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
  On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
  On 08/01/14 17:13 -0800, Nachi Ueno wrote:
  Hi folks
  
  OpenStack process tend to have many config options, and many hosts.
  It is a pain to manage this tons of config options.
  To centralize this management helps operation.
  
  We can use chef or puppet kind of tools, however
  sometimes each process depends on the other processes
   configuration.
  For example, nova depends on neutron configuration etc
  
  My idea is to have config server in oslo.config, and let cfg.CONF
   get
  config from the server.
  This way has several benefits.
  
  - We can get centralized management without modification on each
  projects ( nova, neutron, etc)
  - We can provide horizon for configuration
  
  This is bp for this proposal.
  https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
  
  I'm very appreciate any comments on this.
 
  I've thought about this as well. I like the overall idea of having a
  config server. However, I don't like the idea of having it within
  oslo.config. I'd prefer oslo.config to remain a library.
 
  Also, I think it would be more complex than just having a server
  that
  provides the configs. It'll need authentication like all other
  services in OpenStack and perhaps even support of encryption.
 
  I like the idea of a config registry but as mentioned above, IMHO
  it's
  to live under its own project.
 
  Hi Nati and Flavio!
 
  So, I'm -1 on this idea, just because I think it belongs in the realm
  of
  configuration management tooling (Chef/Puppet/Salt/Ansible/etc).
  Those
  tools are built to manage multiple configuration files and changes in
  them. Adding a config server would dramatically change the way that
  configuration management tools would interface with OpenStack
  services.
  Instead of managing the config file templates as all of the tools
  currently do, the tools would need to essentially need to forego the
  tried-and-true INI files and instead write a bunch of code in order
  to
  deal with REST API set/get operations for changing configuration
  data.
 
  In summary, while I 

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-09 Thread Sandhya Dasu (sadasu)
Hi,
 One use case was brought up in today's meeting that I think is not valid.

It is the use case where all 3 vnic types - virtio, direct and macvtap (the
terms used in the meeting were slow, fast, faster/foobar) - could be attached to
the same VM. The main difference between a direct and a macvtap interface is
that the former does not support live migration. So, attaching both direct and
macvtap pci-passthrough interfaces to the same VM would mean that it cannot
support live migration; in that case, assigning the macvtap interface is in
essence a waste.

So, it would be ideal to disallow such an assignment, or at least warn the user
that the VM will no longer be able to support live migration. We can, however,
still combine direct or macvtap pci-passthrough interfaces with virtio vnic
types without issue.

Thanks,
Sandhya

From: Ian Wells ijw.ubu...@cack.org.uk
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, January 9, 2014 12:47 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

I think I'm in agreement with all of this.  Nice summary, Robert.

It may not be where the work ends, but if we could get this done the rest is 
just refinement.


On 9 January 2014 17:49, Robert Li (baoli) ba...@cisco.com wrote:
Hi Folks,

With John joining the IRC, so far, we had a couple of productive meetings in an 
effort to come to consensus and move forward. Thanks John for doing that, and I 
appreciate everyone's effort to make it to the daily meeting. Let's reconvene 
on Monday.

But before that, and based on today's conversation on IRC, I'd like to say
a few things. I think that, first of all, we need to get agreement on the
terminology that we have been using so far. With the current nova PCI passthrough:

PCI whitelist: defines all the available PCI passthrough devices on a
compute node, e.g.:
pci_passthrough_whitelist = [{"vendor_id": "<id>", "product_id": "<id>"}]

PCI alias: criteria defined on the controller node with which requested
PCI passthrough devices can be selected from all the PCI passthrough devices
available in a cloud. Currently it has the following format:
pci_alias = {"vendor_id": "<id>", "product_id": "<id>", "name": "<alias name>"}

nova flavor extra_specs: a request for PCI passthrough devices can be
specified with extra_specs in a format such as:
pci_passthrough:alias = "<alias name>:<count>"

As you can see, currently a PCI alias has a name and is defined on the
controller. The implication is that when matching it against the PCI
devices, the vendor_id and product_id have to be matched against all the available
PCI devices until one is found. The name is only used for reference in the
extra_specs. On the other hand, the whitelist is basically the same as the
alias, just without a name.

What we have discussed so far is based on something called PCI groups (or PCI
flavors as Yongli puts it). Without introducing other complexities, and with a
little change to the above representation, we will have something like:

pci_passthrough_whitelist = [{"vendor_id": "<id>", "product_id": "<id>", "name": "<group name>"}]

By doing so, we have eliminated the PCI alias, and we call the name above a
PCI group name. You can think of it as combining the definitions of the
existing whitelist and PCI alias. And believe it or not, a PCI group is
actually a PCI alias. However, with that change of thinking, a lot of benefits
can be harvested:

 * the implementation is significantly simplified
 * provisioning is simplified by eliminating the PCI alias
 * a compute node only needs to report stats with something like: PCI 
group name:count. A compute node processes all the PCI passthrough devices 
against the whitelist, and assign a PCI group based on the whitelist definition.
 * on the controller, we may only need to define the PCI group names.
If we use a nova API to define PCI groups (which could be private or public, for
example), one potential benefit, among other things (validation, etc.), is that they
can be owned by the tenant that creates them, and thus a wholesale of PCI
passthrough devices is also possible.
 * scheduler only works with PCI group names.
 * request for PCI passthrough device is based on PCI-group
 * deployers can provision the cloud based on the PCI groups
 * Particularly for SRIOV, deployers can design SRIOV PCI groups based 
on network connectivities.
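
To make that concrete, a hypothetical example (the IDs and the group name are
made up, and the existing extra_specs key is reused just for illustration):

 # on each compute node: the whitelist entry doubles as the group definition
 pci_passthrough_whitelist = [{"vendor_id": "8086", "product_id": "10ed", "name": "sriov_net_a"}]

 # on the controller: a flavor requests one device from that group
 nova flavor-key m1.large set "pci_passthrough:alias"="sriov_net_a:1"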

Further, to support SRIOV, we are saying that PCI group names can be used not only
in the extra specs, but also in the --nic option and the neutron
commands. This allows the most flexibility and functionality afforded by
SRIOV.

Further, we are saying that we 

Re: [openstack-dev] Devstack on Fedora 20

2014-01-09 Thread Dean Troyer
On Thu, Jan 9, 2014 at 2:16 PM, Adam Young ayo...@redhat.com wrote:

  That didn't seem to make a difference, still no cache.  The RPMS are not
 getting installed, even if I deliberately add a line for
 python-dogpile-cache
 Shouldn't it get installed via pip without the rpm line?


Yes pip should install it based on requirements.txt.  I just tried this and
see the install in /opt/stack/logs/stack.sh.log and then see the import
fail later.  God I love pip.  And there it is...xslt-config isn't present
so a whole batch of installs fails.

Add this to files/rpms/keystone:

libxslt-devel   # dist:f20

There are some additional tweaks that I'll ask Flavio to add to
https://review.openstack.org/63647 as it needs at least one more patch set
anyway.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] top gate bugs: a plea for help

2014-01-09 Thread Russell Bryant
On 01/08/2014 05:53 PM, Joe Gordon wrote:
 Hi All, 
 
 As you know the gate has been in particularly bad shape (gate queue over
 100!) this week due to a number of factors. One factor is how many major
 outstanding bugs we have in the gate.  Below is a list of the top 4 open
 gate bugs.
 
 Here are some fun facts about this list:
 * All bugs have been open for over a month
 * All are nova bugs
 * These 4 bugs alone were hit 588 times which averages to 42 hits per
 day (data is over two weeks)!
 
 If we want the gate queue to drop and not have to continuously run
 'recheck bug x' we need to fix these bugs.  So I'm looking for
 volunteers to help debug and fix these bugs.

I created the following etherpad to help track the most important Nova
gate bugs. who is actively working on them, and any patches that we have
in flight to help address them:

  https://etherpad.openstack.org/p/nova-gate-issue-tracking

Please jump in if you can.  We shouldn't wait for the gate bug day to
move on these.  Even if others are already looking at a bug, feel free
to do the same.  We need multiple sets of eyes on each of these issues.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Anita Kuno
On 01/10/2014 02:53 AM, Jay Pipes wrote:
 On Thu, 2014-01-09 at 07:46 -0500, Sean Dague wrote:
 I think we are all agreed that the current state of Gate Resets isn't 
 good. Unfortunately some basic functionality is really not working 
 reliably, like being able to boot a guest to a point where you can ssh 
 into it.

 These are common bugs, but they aren't easy ones. We've had a few folks 
 digging deep on these, but we, as a community, are not keeping up with them.

 So I'd like to propose Gate Blocking Bug Fix day, to be Monday Jan 20th. 
 On that day I'd ask all core reviewers (and anyone else) on all projects 
 to set aside that day to *only* work on gate blocking bugs. We'd like to 
 quiet the queues to not include any other changes that day so that only 
 fixes related to gate blocking bugs would be in the system.

 This will have multiple goals:
   #1 - fix some of the top issues
   #2 - ensure we classify (ER fingerprint) and register everything we're 
 seeing in the gate fails
   #3 - ensure all gate bugs are triaged appropriately

 I'm hopeful that if we can get everyone looking at this on a single 
 day, we can start to dislodge the log jam that exists.

 Specifically I'd like to get commitments from as many PTLs as possible 
 that they'll both directly participate in the day, as well as encourage 
 the rest of their project to do the same.
 
 I'm in.
 
 Due to what ttx mentioned about I-2, I think the 13th Jan or 27th Jan
 Mondays would be better.
 
 Personally, I think sooner is better. The severity of the disruption is
 quite high, and action is needed ASAP.
 
 Best,
 -jay
 
 
I'm in.

Jan. 13th is a transportation day for me as I wend my way to the Neutron
Tempest code sprint in Montreal.

I am operating on the belief that since other Neutron Tempest folks
might also be having a transportation day, Sean has steered away from
this date as an option.

Thanks,
Anita.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Jay Pipes
On Thu, 2014-01-09 at 16:02 -0500, Tzu-Mainn Chen wrote:
 There are a number of other models in the tuskar code[1], do we need to
 consider these now too?
  
  [1]:
  https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py
 
 Nope, these are gone now, in favor of Tuskar interacting directly with 
 Ironic, Heat, etc.

Hmm, not quite.

If you compare the models in Ironic [1] to Tuskar's (link above), there are
some dramatic differences. Notably:

* No Rack model in Ironic. Closest model seems to be the Chassis model
[2], but the Ironic Chassis model doesn't have nearly the entity
specificity that Tuskar's Rack model has. For example, the following
(important) attributes are missing from Ironic's Chassis model:
 - slots (how does Ironic know how many RU are in a chassis?)
 - location (very important for integration with operations inventory
management systems, trust me)
 - subnet (based on my experience, I've seen deployers use a
rack-by-rack or paired-rack control and data plane network static IP
assignment. While Tuskar's single subnet attribute is not really
adequate for describing production deployments that typically have 3+
management, data and overlay network routing rules for each rack, at
least Tuskar has the concept of networking rules in its Rack model,
while Ironic does not)
 - state (how does Ironic know whether a rack is provisioned fully or
not? Must it query each Node's power_state field that has a
chassis_id matching the Chassis' id field?)
* The Tuskar Rack model has a field chassis_id. I have no idea what
this is... or its relation to the Ironic Chassis model.
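
Purely for illustration (this is not code from either project), a model carrying
the union of those attributes might look something like:

  # Hypothetical SQLAlchemy sketch combining the Rack attributes listed above.
  from sqlalchemy import Column, Integer, String
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class Rack(Base):
      __tablename__ = 'racks'
      id = Column(Integer, primary_key=True)
      slots = Column(Integer)          # how many RU the chassis holds
      location = Column(String(255))   # for ops inventory management systems
      subnet = Column(String(64))      # management subnet for the rack
      state = Column(String(16))       # e.g. 'unprovisioned' / 'provisioned'
      chassis_id = Column(Integer)     # link to an Ironic Chassis, if any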

As much as the Ironic Chassis model is lacking compared to the Tuskar
Rack model, the opposite problem exists for each project's model of a
Node. In Tuskar, the Node model is pretty bare and useless, whereas
Ironic's Node model is much richer.

So, it's not as simple as it may initially seem :)

Best,
-jay

[1]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py
[2]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py#L83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-09 Thread Ian Wells
On 9 January 2014 20:19, Brian Schott brian.sch...@nimbisservices.com wrote:

 Ian,

 The idea of pci flavors is a great one, and using vendor_id and product_id makes
 sense, but I could see a case for adding the class name such as 'VGA
 compatible controller'. Otherwise, slightly different generations of
 hardware will mean custom whitelist setups on each compute node.


Personally, I think the important thing is to have a matching expression.
The more flexible the matching language, the better.

On the flip side, vendor_id and product_id might not be sufficient.
  Suppose I have two identical NICs, one for nova internal use and the
 second for guest tenants?  So, bus numbering may be required.

 01:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900
 GTX] (rev a1)
 02:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900
 GTX] (rev a1)


I totally concur on this - with network devices in particular the PCI path
is important because you don't want to accidentally grab the OpenStack
control network device ;)


 I know you guys are thinking of PCI devices, but any thought of mapping to
 something like udev rather than pci?  Supporting udev rules might be easier
 and more robust rather than making something up.


Past experience has told me that udev rules are not actually terribly good,
which you soon discover when you have to write expressions like:

 SUBSYSTEM=="net", KERNELS=="0000:83:00.0", ACTION=="add", NAME="eth8"

which took me a long time to figure out and is self-documenting only in
that it has a recognisable PCI path in there, 'KERNELS' not being a
meaningful name to me.  And self-documenting is key to udev rules, because
there's not much information on the tag meanings otherwise.

I'm comfortable with having a match format that covers what we know and
copes with extension for when we find we're short a feature, and what we
have now is close to that.  Yes, it needs the class adding, we all agree,
and you should be able to match on PCI path, which you can't now, but it's
close.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-09 Thread Ian Wells
On 9 January 2014 22:50, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 9 January 2014 20:19, Brian Schott brian.sch...@nimbisservices.com wrote:
 On the flip side, vendor_id and product_id might not be sufficient.
  Suppose I have two identical NICs, one for nova internal use and the
 second for guest tenants?  So, bus numbering may be required.


 01:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900
 GTX] (rev a1)
 02:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900
 GTX] (rev a1)


 I totally concur on this - with network devices in particular the PCI path
 is important because you don't accidentally want to grab the Openstack
 control network device ;)


Redundant statement is redundant.  Sorry, yes, this has been a pet bugbear
of mine.  It applies equally to provider networks on the networking side of
thing, and, where Neutron is not your network device manager for a PCI
device, you may want several device groups bridged to different segments.
Network devices are one case of a category of device where there's
something about the device that you can't detect that means it's not
necessarily interchangeable with its peers.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

