[openstack-dev] [nova] APIImpact flag for nova specs

2014-10-15 Thread Christopher Yeoh
Hi,

I was wondering what people thought of having a convention of adding
an APIImpact flag to the commit messages of proposed nova specs where the
Nova API will change? It would make it much easier to find proposed
specs which affect the API, as it's not always clear from the gerrit
summary listing.
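
For example (purely illustrative), a spec's commit message could carry the
flag as a bare line, similar to how DocImpact is used today:

    Add widget-frobbing support to the servers API

    This spec proposes a new API extension for frobbing widgets.

    APIImpact
    Implements: blueprint example-blueprint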

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Get keystone auth token via Horizon URL

2014-10-15 Thread Manickam, Kanagaraj
From Horizon, you won't be able to do the Keystone way of authentication.

From: Ed Lima [mailto:e...@stackerz.com]
Sent: Wednesday, October 15, 2014 8:30 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Get keystone auth token via Horizon URL


I'm in the very early stages of developing an app for android to manage
openstack services and would like to get the user credentials/tokens on
keystone to get data and execute commands via the horizon URL. I'm using
IceHouse on Ubuntu 14.04.

In my particular use case I have keystone running on my internal server
http://localhost:5000/v3/auth/tokens, which would allow me to use my app fine
with JSON to get information from other services and execute commands;
however, I'd have to be on the same network as my server for it to work.

On the other hand I have my horizon URL published externally on the internet
at the address https://openstack.domain.com/horizon, which is available from
anywhere and gives me access to my OpenStack services fine via browser on a
desktop. I'd like to do the same on android; would it be possible? Is there a
way for my app to send JSON requests to horizon at
https://openstack.domain.com/horizon and get the authentication tokens from
keystone indirectly?

I should mention I'm not a very experienced developer and any help would be 
amazing! Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Jastrzebski, Michal
I tend to agree that this shouldn't be placed in nova. As it happens, I'm
working on the very same thing (hello Russell :)). My current candidate is
heat. Convergence will, in my opinion, be a great place to do it
(https://review.openstack.org/#/c/95907/). It's still in the planning stage,
but we'll talk about that more in Paris. I even have a working demo of
automatic evacuation :) (come to the Intel booth in Paris if you'd like to
see it).

Thing is, nova currently isn't ready for that. For example:
https://bugs.launchpad.net/nova/+bug/1379292
We are working on a bp to enable nova to check actual host health, not only
nova service health (bp coming soon, but in short it's enabling the zookeeper
servicegroup api to monitor, for example, libvirt, or something else which,
if down, means the vms are dead).
That won't replace actual fencing, but it's something, and even if we would
like to have fencing in nova, it's a requirement.

Maybe it's worth a design session? I've seen this or a similar idea in
several places already, and demand for it is strong.

Regards,
Michał

 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Tuesday, October 14, 2014 8:55 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Automatic evacuate
 
 On 10/14/2014 01:01 PM, Jay Pipes wrote:
  2) Looking forward, there is a lot of demand for doing this on a per
  instance basis.  We should decide on a best practice for allowing end
  users to indicate whether they would like their VMs automatically
  rescued by the infrastructure, or just left down in the case of a
  failure.  It could be as simple as a special tag set on an instance [2].
 
  Please note that server instance tagging (thanks for the shout-out,
  BTW) is intended for only user-defined tags, not system-defined
  metadata which is what this sounds like...
 
 I was envisioning the tag being set by the end user to say "please keep my
 VM running until I say otherwise", or something like "auto-recover"
 for short.
 
 So, it's specified by the end user, but potentially acted upon by the system
 (as you say below).
 
  Of course, one might implement some external polling/monitoring system
  using server instance tags, which might do a nova list --tag $TAG
  --host $FAILING_HOST, and initiate a migrate for each returned server
 instance...
 
 Yeah, that's what I was thinking.  Whatever system you use to react to a
 failing host could use the tag as part of the criteria to figure out which
 instances to evacuate and which to leave as dead.
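 
 As a rough illustration only (the --tag filter doesn't exist yet, so instance
 metadata stands in for it here, and the credentials and host names are just
 placeholders), such an external monitor could do something like:
 
   from novaclient import client
 
   nova = client.Client('2', 'admin', 'password', 'admin',
                        'http://keystone.example.com:5000/v2.0')
 
   failing_host = 'compute-3.example.com'   # reported dead by your monitoring
   target_host = 'compute-4.example.com'    # spare capacity to evacuate to
 
   # Admin-only search option: list every instance on the failing host.
   for server in nova.servers.list(search_opts={'host': failing_host,
                                                'all_tenants': 1}):
       # Only rescue instances whose owner opted in to auto-recovery.
       if server.metadata.get('auto-recover') == 'true':
           nova.servers.evacuate(server, host=target_host,
                                 on_shared_storage=True)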
 
 --
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread James Bottomley
On Tue, 2014-10-14 at 19:52 -0400, David Vossel wrote:
 
 - Original Message -
  Ok, why are you so down on running systemd in a container?
 
 It goes against the grain.
 
 From a distributed systems view, we gain quite a bit of control by maintaining
 one service per container. Containers can be re-organised and re-purposed 
 dynamically.
 If we have systemd trying to manage an entire stack of resources within a 
 container,
 we lose this control.
 
 From my perspective a containerized application stack needs to be managed 
 externally
 by whatever is orchestrating the containers to begin with. When we take a 
 step back
 and look at how we actually want to deploy containers, systemd doesn't make 
 much sense.
 It actually limits us in the long run.
 
 Also... recovery. Using systemd to manage a stack of resources within a 
 single container
 makes it difficult for whatever is externally enforcing the availability of 
 that container
 to detect the health of the container.  As it is now, the actual service is 
 pid 1 of a
 container. If that service dies, the container dies. If systemd is pid 1, 
 there can
 be all kinds of chaos occurring within the container, but the external 
 distributed
 orchestration system won't have a clue (unless it invokes some custom health 
 monitoring
 tools within the container itself, which will likely be the case someday.)

I don't really think this is a good argument.  If you're using docker,
docker is the management and orchestration system for the containers.
There's no dogmatic answer to the question "should you run init in the
container?".

The reason for not running init inside a container managed by docker is
that you want the template to be thin for ease of orchestration and
transfer, so you want to share as much as possible with the host.  The
more junk you put into the container, the fatter and less agile it
becomes, so you should probably share the init system with the host in
this paradigm.
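
To make the single-service pattern concrete, here is a minimal Dockerfile
sketch (image, package and binary names are assumptions, not anything from
kolla itself). The service ends up as PID 1, with no init system inside the
container:

  FROM fedora:20
  # Install only the one service this container exists to run.
  RUN yum install -y openstack-glance && yum clean all
  # exec form: glance-api becomes PID 1; if it dies, the container dies,
  # which is exactly the signal the external orchestrator watches for.
  CMD ["/usr/bin/glance-api"]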

Conversely, containers can be used to virtualize full operating systems.
This isn't the standard way of doing docker, but LXC and OpenVZ by
default do containers this way.  For this type of container, because you
have a full OS running inside the container, you have to also have
systemd (assuming it's the init system) running within the container.

James



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] image requirements for Heat software config

2014-10-15 Thread Thomas Spatzier
Excerpts from Clint Byrum's message on 14/10/2014 23:38:46:
snip
 
  Regarding the process of building base images, the currently documented way
  [1] of using diskimage-builder turns out to be a bit unstable sometimes.
  Not because diskimage-builder is unstable, but probably because it pulls in
  components from a couple of sources:
  #1 we have a dependency on the implementation of the Heat engine of course
  (so this is not pulled into the image building process, but the dependency
  is there)
  #2 we depend on features in python-heatclient (and other python-* clients)
  #3 we pull in implementation from the heat-templates repo
  #4 we depend on tripleo-image-elements
  #5 we depend on os-collect-config, os-refresh-config and os-apply-config
  #6 we depend on diskimage-builder itself
 
  Heat itself and python-heatclient are reasonably well in synch because
  there is a release process for both, so we can tell users with some
  certainty that a feature will work with release X of OpenStack and Heat and
  version x.y.z of python-heatclient. For the other 4 sources, success
  sometimes depends on the time of day when you try to build an image
  (depending on what changes are currently included in each repo). So
  basically there does not seem to be a consolidated release process across
  all that is currently needed for software config.
 

 I don't really understand why a consolidated release process across
 all would be desired or needed.

Well, all pieces have to fit together so everything works. I ran into many
situations where I used the currently up-to-date version of each piece but
something just did not work. Then I found that some patch was still in review
on one of those, so trying again a few days later worked.
It would be good for users to have one verified package of everything
instead of going through a trial-and-error process.
Maybe this is going to improve in the future, since until recently a lot of
software config was still work in progress. But up to now, the image building
has been a challenge at times.


 #3 is pretty odd. You're pulling in templates from the examples repo?

We have to pull in the image elements and hooks for software config from
there.


 For #4-#6, those are all on pypi and released on a regular basis. Build
 yourself a bandersnatch mirror and you'll have locally controlled access
 to them which should eliminate any reliability issues.

So switching from git-repo-based installs as described in [1] to pypi-based
installs, where I can specify a version number, would help?
Then what we would still need is a set of versions for each package that are
verified to work together (my previous point).
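
For example, something along these lines, where the pins would have to come
from such a verified set (the version numbers below are placeholders, not a
tested combination):

  pip install \
      diskimage-builder==X.Y.Z \
      tripleo-image-elements==X.Y.Z \
      os-collect-config==X.Y.Z \
      os-refresh-config==X.Y.Z \
      os-apply-config==X.Y.Z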


  The ideal solution would be to have one self-contained package that is easy
  to install on various distributions (an rpm, deb, MSI ...).
  Secondly, it would be ideal to not have to bake additional things into the
  image but to do bootstrapping during instance creation based on an existing
  cloud-init enabled image. For that we would have to strip requirements down
  to a bare minimum required for software config. One thing that comes to my
  mind is the cirros software config example [2] that Steven Hardy created.
  It is admittedly not up to what one could do with an image built according
  to [1] but on the other hand is really slick, whereas [1] installs a whole
  set of things into the image (some of which do not really seem to be needed
  for software config).

 The agent problem is one reason I've been drifting away from Heat
 for software configuration, and toward Ansible. Mind you, I wrote
 os-collect-config to have as few dependencies as possible as one attempt
 around this problem. Still it isn't capable enough to do the job on its
 own, so you end up needing os-apply-config and then os-refresh-config
 to tie the two together.

 Ansible requires sshd, and python, with a strong recommendation for
 sudo. These are all things that pretty much every Linux distribution is
 going to have available.

Interesting, I have to investigate this. Thanks for the hint.


 
  Another issue that comes to mind: what about operating systems not
  supported by diskimage-builder (Windows), or other hypervisor
platforms?
 

 There is a windows-diskimage-builder:

 https://git.openstack.org/cgit/stackforge/windows-diskimage-builder

Good to know; I wasn't aware of it. Thanks!


 diskimage-builder can produce raw images, so that should be convertible
 to pretty much any other hypervisor's preferred disk format.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] image requirements for Heat software config

2014-10-15 Thread Thomas Spatzier
Excerpts from Steve Baker's message on 14/10/2014 23:52:41:

snip
 Regarding the process of building base images, the currently documented way
 [1] of using diskimage-builder turns out to be a bit unstable sometimes.
 Not because diskimage-builder is unstable, but probably because it pulls in
 components from a couple of sources:
 #1 we have a dependency on the implementation of the Heat engine of course
 (so this is not pulled into the image building process, but the dependency
 is there)
 #2 we depend on features in python-heatclient (and other python-* clients)
 #3 we pull in implementation from the heat-templates repo
 #4 we depend on tripleo-image-elements
 #5 we depend on os-collect-config, os-refresh-config and os-apply-config
 #6 we depend on diskimage-builder itself

 Heat itself and python-heatclient are reasonably well in synch because
 there is a release process for both, so we can tell users with some
 certainty that a feature will work with release X of OpenStack and Heat and
 version x.y.z of python-heatclient. For the other 4 sources, success
 sometimes depends on the time of day when you try to build an image
 (depending on what changes are currently included in each repo). So
 basically there does not seem to be a consolidated release process across
 all that is currently needed for software config.

 The ideal solution would be to have one self-contained package that is easy
 to install on various distributions (an rpm, deb, MSI ...).
 Secondly, it would be ideal to not have to bake additional things into the
 image but to do bootstrapping during instance creation based on an existing
 cloud-init enabled image. For that we would have to strip requirements down
 to a bare minimum required for software config. One thing that comes to my
 mind is the cirros software config example [2] that Steven Hardy created.
 It is admittedly not up to what one could do with an image built according
 to [1] but on the other hand is really slick, whereas [1] installs a whole
 set of things into the image (some of which do not really seem to be needed
 for software config).


 Building an image from git repos was the best chance of having a
 single set of instructions which works for most cases, since the
 tools were not packaged for debian derived distros. This seems to be
 improving though; the whole build stack is now packaged for Debian
 Unstable, Testing and also Ubuntu Utopic (which isn't released yet).
 Another option is switching the default instructions to installing
 from pip rather than git, but that still gets into distro-specific
 quirks which complicate the instructions. Until these packages are
 on the recent releases of common distros then we'll be stuck in this
 slightly awkward situation.

Yeah, I understand that the current situation is probably there because we
are so close to the point where the features get developed. So hopefully
this will improve and stabilize in the future.


 I wrote a cloud-init boot script to install the agents from packages
 from a pristine Fedora 20 [3] and it seems like a reasonable
 approach for when building a custom image isn't practical. Somebody
 submitting the equivalent for Debian and Ubuntu would be most
 welcome. We need to decide whether *everything* should be packaged
 or if some things can be delivered by cloud-init on boot (os-
 collect-config.conf template, 55-heat-config, the actual desired
 config hook...)

Thanks for the pointer. I'll have a look. I think if we can put as few
requirements on the base image as possible and do as much as possible at
boot, that would be good. If help is needed for getting this done for other
distros (and for Windows) we can certainly work on this. We just have to
agree and be convinced that this is the right path.
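
Just to make the boot-time idea concrete, a cloud-config sketch along the
lines of [3] might look roughly like this (package and service names are
assumptions and would differ per distro):

  #cloud-config
  packages:
    - os-collect-config
    - os-apply-config
    - os-refresh-config
  runcmd:
    - [ systemctl, enable, os-collect-config ]
    - [ systemctl, start, os-collect-config ]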


 I'm all for there being documentation for the different ways of
 getting the agent and hooks onto a running server for a given
 distro. I think the hot-guide would be the best place to do that,
 and I've been making a start on that recently [4][5] (help
 welcome!). The README in [1] should eventually refer to the hot-
 guide once it is published so we're not maintaining multiple build
 instructions.

I'll have a look at all the pointers. Agree that this is extremely useful.

BTW: the unit testing work you started on the software config hooks will
definitely help as well!


 Another issue that comes to mind: what about operating systems not
 supported by diskimage-builder (Windows), or other hypervisor platforms?

 The Cloudbase folk have contributed some useful cloudbase-init
 templates this cycle [6], so that is a start.  I think there is
 interest in porting os-*-config to Windows as the way of enabling
 deployment resources (help welcome!).

Yes, I've seen those templates. As long as there is an image that works with
them, this is great. I have to look more closely into the Windows things.


 Anyway, not really suggestions from my side but more observations and
 thoughts. I wanted to share those and raise some 

Re: [openstack-dev] Travels tips for the Paris summit

2014-10-15 Thread Thierry Carrez
Anita Kuno wrote:
 On 10/14/2014 12:40 PM, Sylvain Bauza wrote:
 On 14/10/2014 18:29, Sylvain Bauza wrote:
 I have a request. Is there any way to expand the Food section to
 include how to find vegetarian restaurants? Any help here appreciated.

 Well, this is a tough question. We usually make use of TripAdvisor or
 other French rating websites for finding good places to eat, but some
 small restaurants don't provide this kind of information. There is no
 official requirement to provide these details, for example.

 What I can suggest is to look at the menu (restaurants are required to
 post it outside) and check for the word 'Végétarien'.

It's also always risky to ask for a vegetarian plate in a traditional French
restaurant. At best you get a bowl of rice. At worst you get Cantonese rice
with bacon and shrimp.

 Thanks Sylvain, I appreciate the pointers. Will wander around and look
 at menus outside restaurants. Not hard to do since I love wandering
 around the streets of Paris, so easy to walk, nice wide sidewalks.
 
 I'll also check back on the wikipage after you have edited.

I found this:
http://www.topito.com/top-meilleurs-restaurants-vegetariens-paris

The Creperies (Brittany pancakes) are also great choices for
vegetarians since you can easily pick a vegetarian filling.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Compute][Nova] Mid-cycle meetup details for kilo

2014-10-15 Thread Thierry Carrez
Michael Still wrote:
 I am pleased to announce details for the Kilo Compute mid-cycle
 meetup, but first some background about how we got here.
 
 Two companies actively involved in OpenStack came forward with offers
 to host the Compute meetup. However, one of those companies has
 gracefully decided to wait until the L release because of the cold
 conditions at their proposed location (think several feet of snow).

What's wrong with snow :)

 So instead, we're left with California!
 
 The mid-cycle meetup will be from 26 to 28 January 2015, at the VMware
 offices in Palo Alto, California.

Added to https://wiki.openstack.org/wiki/Sprints
Next time, please update that wiki page directly so it's easy to find :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cinder/Neutron plugins on UI

2014-10-15 Thread Evgeniy L
Hi Mike,

Dmitry S. started to work on checkbox auto-generation in nailgun
for the UI.
In parallel I've written a simple script which just updates the release
model with checkboxes and plugin fields. This simple script will be used
only for the POC, and then we will replace it with Dmitry's implementation
when it's ready.

Thanks,

On Tue, Oct 14, 2014 at 10:50 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 +1 for doing now:
  we are going to implement something really simple, like updating plugin
 attributes directly via api.
 Then we can have discussions in parallel how we plan to evolve it.

 Please confirm that we went this path.

 Thanks,


 On Mon, Oct 13, 2014 at 7:31 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 We've discussed what we will be able to do for the current release and
 what we will not be able to implement.
 We have not only technical problems, but also not a lot of time for
 implementation. We were trying to find a solution which will work well
 enough with all of the constraints.
 For the current release we want to implement the approach which was
 suggested by Mike.
 We are going to generate a checkbox for the UI which defines whether a
 plugin is set for deployment. In nailgun we'll be able to parse the
 generated checkboxes and remove or add the relation between the plugin and
 cluster models.
 With this relation we'll be able to identify whether a plugin is used, which
 will allow us to remove it if it's unused (in the future), or to decide
 whether we need to pass its tasks to the orchestrator. Also in the POC, we
 are going to implement something really simple, like updating plugin
 attributes directly via the API.

 Thanks,

 On Thu, Oct 9, 2014 at 8:13 PM, Dmitry Borodaenko 
 dborodae...@mirantis.com wrote:

 Notes from the architecture review meeting on plugins UX:

   - separate page for plugins management
   - user installs the plugin on the master
   - global master node configuration across all environments:
     - user can see a list of plugins on the Plugins tab (plugin descriptions)
     - Enable/Disable plugin
       - should we enable/disable plugins globally, or only per environment?
         - yes, we need a global plugins management page; it will later be
           extended to upload or remove plugins
       - if a plugin is used in a deployed environment, options to globally
         disable or remove that plugin are blocked
     - show which environments (or a number of environments) have a specific
       plugin enabled
     - global plugins page is a Should in 6.0 (but easy to add)
     - future: a plugin like ostf should have a deployable flag set to false,
       so that it doesn't show up as an option per env
   - user creates a new environment
     - in the setup wizard on the releases page (1st step), a list of
       checkboxes for all plugins is offered (same page as releases?)
       - all globally enabled plugins are checked (enabled) by default
       - changes in the selection of plugins will trigger regeneration of
         subsequent setup wizard steps
     - a plugin may include a yaml mixin for settings page options in
       openstack.yaml format
       - in future releases, it will support describing setup wizard options
         (disk configuration, network settings etc.) in the same way
       - what is the simplest case? does the plugin writer have to define the
         plugin enable/disable checkbox, or is it autogenerated?
         - if a plugin does not define any configuration options: a checkbox
           is automatically added into the Additional Services section of the
           settings page (disabled by default)
         - *problem:* if a plugin is enabled by default, but the option to
           deploy it is disabled by default, such an environment would count
           against the plugin (and won't allow removing this plugin globally)
           even though it actually wasn't deployed
       - manifest of plugins enabled/used for an environment?


 We ended the discussion on the problem highlighted in bold above: what's
 the best way to detect which plugins are actually used in an environment?


 On Thu, Oct 9, 2014 at 6:42 AM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:

 Evgeniy,

 Yes, the plugin management page should be a separate page. As for
 dependency on releases, I meant that some plugin can work only on Ubuntu
 for example, so for different releases different plugins could be 
 available.

 And please confirm that you also agree with the flow: the user installs
 a plugin, then he enables it on the plugin management page, and then he
 creates an environment and on the first step he can uncheck some plugins
 which he doesn't want to use in that particular environment.

 2014-10-09 20:11 GMT+07:00 Evgeniy L e...@mirantis.com:

 Hi,

 Vitaly, I like the idea of having a separate page, but I'm not sure if
 it should be on the releases page.
 Usually a plugin is not release-specific, usually it's 

Re: [openstack-dev] Quota management and enforcement across projects

2014-10-15 Thread Valeriy Ponomaryov
The Manila project does use the policy common code from the incubator.

Our small wrapper for it:
https://github.com/openstack/manila/blob/8203c51081680a7a9dba30ae02d7c43d6e18a124/manila/policy.py


On Wed, Oct 15, 2014 at 2:41 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 Doug,

 I totally agree with your findings on the policy module.
 Neutron already has some customizations there and we already have a few
 contributors working on syncing it back with oslo-incubator during the Kilo
 release cycle.

 However, my query was about the quota module.
 From what I gather it seems not a lot of projects use it:

 $ find . -name openstack-common.conf | xargs grep quota
 $

 Salvatore

 On 15 October 2014 00:34, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 14, 2014, at 12:31 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 Hi Doug,

 do you know if the existing quota oslo-incubator module has already some
 active consumers?
 In the meanwhile I've pushed a spec to neutron-specs for improving quota
 management there [1]


 It looks like a lot of projects are syncing the module:

 $ grep policy */openstack-common.conf

 barbican/openstack-common.conf:modules=gettextutils,jsonutils,log,local,timeutils,importutils,policy
 ceilometer/openstack-common.conf:module=policy
 cinder/openstack-common.conf:module=policy
 designate/openstack-common.conf:module=policy
 gantt/openstack-common.conf:module=policy
 glance/openstack-common.conf:module=policy
 heat/openstack-common.conf:module=policy
 horizon/openstack-common.conf:module=policy
 ironic/openstack-common.conf:module=policy
 keystone/openstack-common.conf:module=policy
 manila/openstack-common.conf:module=policy
 neutron/openstack-common.conf:module=policy
 nova/openstack-common.conf:module=policy
 trove/openstack-common.conf:module=policy
 tuskar/openstack-common.conf:module=policy

 I’m not sure how many are actively using it, but I wouldn’t expect them
 to copy it in if they weren’t using it at all.


 Now, I can either work on the oslo-incubator module and leverage it in
 Neutron, or develop the quota module in Neutron, and move it to
 oslo-incubator once we validate it with Neutron. The latter approach seems
 easier from a workflow perspective, as it avoids the intermediate step of
 moving code from oslo-incubator to neutron. On the other hand, it will delay
 adoption in oslo-incubator.


 The policy module is up for graduation this cycle. It may end up in its
 own library, to allow us to build a review team for the code more easily
 than if we put it in with some of the other semi-related modules like the
 server code. We’re still working that out [1], and if you expect to make a
 lot of incompatible changes we should delay graduation to make that simpler.

 Either way, since we have so many consumers, I think it would be easier
 to have the work happen in Oslo somewhere so we can ensure those changes
 are useful to and usable by all of the existing consumers.

 Doug

 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals


 What's your opinion?

 Regards,
 Salvatore

 [1] https://review.openstack.org/#/c/128318/

 On 8 October 2014 18:52, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 8, 2014, at 7:03 AM, Davanum Srinivas dava...@gmail.com wrote:

  Salvatore, Joe,
 
  We do have this at the moment:
 
 
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/quota.py
 
  — dims

 If someone wants to drive creating a useful library during kilo, please
 consider adding the topic to the etherpad we’re using to plan summit
 sessions and then come participate in the Oslo meeting this Friday 16:00
 UTC.

 https://etherpad.openstack.org/p/kilo-oslo-summit-topics

 Doug

 
  On Wed, Oct 8, 2014 at 2:29 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
 
  On 8 October 2014 04:13, Joe Gordon joe.gord...@gmail.com wrote:
 
 
  On Fri, Oct 3, 2014 at 10:47 AM, Morgan Fainberg
  morgan.fainb...@gmail.com wrote:
 
   Keeping the enforcement local (same way policy works today) helps limit
   the fragility, big +1 there.
  
   I also agree with Vish, we need a uniform way to talk about quota
   enforcement similar to how we have a uniform policy language / enforcement
   model (yes I know it's not perfect, but it's far closer to uniform than
   quota management is).
  
  
   It sounds like maybe we should have an oslo library for quotas? Somewhere
   where we can share the code, but keep the operations local to each service.
  
  
   This is what I had in mind as well. A simple library for quota enforcement
   which can be used regardless of where and how you do it, which might depend
   on the application business logic, the WSGI framework in use, or other
   factors.
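  
   To make that concrete, here is a minimal sketch of the kind of interface
   such a library might expose (purely illustrative, not the oslo-incubator
   quota module's actual API):
  
     class OverQuota(Exception):
         pass
  
     class QuotaEngine(object):
         # Holds per-resource limits and checks proposed usage against them.
         def __init__(self):
             self._limits = {}
  
         def register_resource(self, name, default_limit):
             self._limits[name] = default_limit
  
         def limit_check(self, project_id, count_usage, **deltas):
             # count_usage(project_id, resource) is supplied by the consuming
             # service, so counting stays local to nova, neutron, etc.
             for resource, delta in deltas.items():
                 limit = self._limits.get(resource)
                 if limit is None:
                     continue
                 if count_usage(project_id, resource) + delta > limit:
                     raise OverQuota(resource)
  
     # Example use in a service, before creating two ports:
     engine = QuotaEngine()
     engine.register_resource('port', 50)
     try:
         engine.limit_check('tenant-1', lambda proj, res: 49, port=2)
     except OverQuota:
         pass  # reject the API request here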
 
 
 
 
   If there is still interest in placing quota in keystone, let's talk about
   how that will work and what will be needed from Keystone. The previous
   attempt didn't get much traction and stalled out early in implementation.
   If we want to revisit this let's make sure we have 

Re: [openstack-dev] [nova] APIImpact flag for nova specs

2014-10-15 Thread Alex Xu

On 15/10/2014 14:20, Christopher Yeoh wrote:

Hi,

I was wondering what people thought of having a convention of adding
an APIImpact flag to the commit messages of proposed nova specs where the
Nova API will change? It would make it much easier to find proposed
specs which affect the API, as it's not always clear from the gerrit
summary listing.

+1, and is there any tool that can be used to search for the flag?



Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Trove] Juno RC3 available

2014-10-15 Thread Thierry Carrez
Hello everyone,

Due to last-minute issues discovered in testing of the published Glance
and Trove 2014.2 RC2, we generated new Juno release candidates for these
projects. You can find the list of bugfixes in these RC3 and a link to a
source tarball at:

https://launchpad.net/glance/juno/juno-rc3
https://launchpad.net/trove/juno/juno-rc3

At this point, only show-stoppers would warrant a release candidate
respin, so these RC3 are very likely to be formally released as the
final Glance and Trove 2014.2 on Thursday. You are therefore strongly
encouraged to give these tarballs a last-minute test ride!

Alternatively, you can directly test the proposed/juno branch at:
https://github.com/openstack/glance/tree/proposed/juno
https://github.com/openstack/trove/tree/proposed/juno

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/glance/+filebug
https://bugs.launchpad.net/trove/+filebug

and tag it *juno-rc-potential* to bring it to the release crew's attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Hiera implementation

2014-10-15 Thread Aleksandr Didenko
Hi,

there is a blueprint about the Hiera implementation in Fuel [1] and some
additional info plus notes [2]. So as the first step I suggest simply
configuring Hiera on the Fuel master node and OS nodes before the first
puppet run and merging this into the master branch. This will allow manifest
developers to start using Hiera in their modules/classes.

But we need to come up with solutions for some key questions before we can
proceed:

1) Where to ship main hiera.yaml config file?
2) What Hiera :hierarchy: should we use?
3) Where to ship hieradata yamls for fuel-library modules/classes?
4) Anything else I forgot to mention?

We should also take into account the MOS versioning scheme we currently use
for puppet manifests and modules.

My suggestions:

1) We should ship hiera.yaml with our osnailyfacter module under the
fuel-library project, like we do with the main site.pp. So we can put the
config in the deployment/puppet/osnailyfacter/examples/hiera.yaml file and
symlink /etc/puppet/hiera.yaml to it [3]. This solution will support MOS
versioning, because our modules are (will be) stored in version-based
directories.

2) We could use something like [4]:

   -  /etc/astute.yaml - is used as default if nothing was found
   -  /etc/puppet/hieradata/default/ - is shipped with fuel-library
   -  /etc/puppet/hieradata/override/ - allows ops team to override needed
   settings on per fact, class or module basis

In this case we'll also have to add /etc/puppet/hieradata/ into our puppet
manifests/modules versioning scheme. Maybe an easier solution would be
putting hieradata under some directory which is already versioned, for
example under /etc/fuel/6.0? But in this case we need to either introduce a
new fact like $fuel_version and use it in the Hiera :hierarchy:, or set up a
symlink like /etc/fuel/current -> /etc/fuel/6.0
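
For illustration only, a hiera.yaml along the lines of [4] could look roughly
like this (the paths and hierarchy entries here are just an example, not a
final proposal):

  ---
  :backends:
    - yaml
  :yaml:
    :datadir: /etc
  :hierarchy:
    # ops overrides win over the defaults shipped with fuel-library
    - "puppet/hieradata/override/%{::fqdn}"
    - "puppet/hieradata/override/common"
    - "puppet/hieradata/default/common"
    # /etc/astute.yaml remains the final fallback
    - astute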

3) Add a new directory in fuel-library (for example hieradata) that will be
extracted into /etc/puppet/hieradata/default. This will allow puppet
manifest developers to add the needed hiera settings along with their modules:


   - fuel-library/deployment/puppet/*my_new_module/* - puppet module
   - fuel-library/deployment/hieradata/module/*my_new_module.yaml* - hiera
   data for the module

Your input/comments are welcome and appreciated :)

[1] https://blueprints.launchpad.net/fuel/+spec/replace-parseyaml-with-hiera
[2] https://etherpad.openstack.org/p/fuel_hiera
[3] https://review.openstack.org/#/c/126559/
[4] http://pastebin.com/HH0bUtYc

Regards,
Aleksandr Didenko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cinder/Neutron plugins on UI

2014-10-15 Thread Mike Scherbakov
Thanks, good.

On Wed, Oct 15, 2014 at 12:41 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Mike,

  Dmitry S. started to work on checkbox auto-generation in nailgun
  for the UI.
  In parallel I've written a simple script which just updates the release
  model with checkboxes and plugin fields. This simple script will be used
  only for the POC, and then we will replace it with Dmitry's implementation
  when it's ready.

 Thanks,

 On Tue, Oct 14, 2014 at 10:50 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 +1 for doing now:
  we are going to implement something really simple, like updating
 plugin attributes directly via api.
 Then we can have discussions in parallel how we plan to evolve it.

 Please confirm that we went this path.

 Thanks,


 On Mon, Oct 13, 2014 at 7:31 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

  We've discussed what we will be able to do for the current release and
  what we will not be able to implement.
  We have not only technical problems, but also not a lot of time for
  implementation. We were trying to find a solution which will work well
  enough with all of the constraints.
  For the current release we want to implement the approach which was
  suggested by Mike.
  We are going to generate a checkbox for the UI which defines whether a
  plugin is set for deployment. In nailgun we'll be able to parse the
  generated checkboxes and remove or add the relation between the plugin and
  cluster models.
  With this relation we'll be able to identify whether a plugin is used,
  which will allow us to remove it if it's unused (in the future), or to
  decide whether we need to pass its tasks to the orchestrator. Also in the
  POC, we are going to implement something really simple, like updating
  plugin attributes directly via the API.

 Thanks,

 On Thu, Oct 9, 2014 at 8:13 PM, Dmitry Borodaenko 
 dborodae...@mirantis.com wrote:

 Notes from the architecture review meeting on plugins UX:

   - separate page for plugins management
   - user installs the plugin on the master
   - global master node configuration across all environments:
     - user can see a list of plugins on the Plugins tab (plugin descriptions)
     - Enable/Disable plugin
       - should we enable/disable plugins globally, or only per environment?
         - yes, we need a global plugins management page; it will later be
           extended to upload or remove plugins
       - if a plugin is used in a deployed environment, options to globally
         disable or remove that plugin are blocked
     - show which environments (or a number of environments) have a specific
       plugin enabled
     - global plugins page is a Should in 6.0 (but easy to add)
     - future: a plugin like ostf should have a deployable flag set to false,
       so that it doesn't show up as an option per env
   - user creates a new environment
     - in the setup wizard on the releases page (1st step), a list of
       checkboxes for all plugins is offered (same page as releases?)
       - all globally enabled plugins are checked (enabled) by default
       - changes in the selection of plugins will trigger regeneration of
         subsequent setup wizard steps
     - a plugin may include a yaml mixin for settings page options in
       openstack.yaml format
       - in future releases, it will support describing setup wizard options
         (disk configuration, network settings etc.) in the same way
       - what is the simplest case? does the plugin writer have to define the
         plugin enable/disable checkbox, or is it autogenerated?
         - if a plugin does not define any configuration options: a checkbox
           is automatically added into the Additional Services section of the
           settings page (disabled by default)
         - *problem:* if a plugin is enabled by default, but the option to
           deploy it is disabled by default, such an environment would count
           against the plugin (and won't allow removing this plugin globally)
           even though it actually wasn't deployed
       - manifest of plugins enabled/used for an environment?


 We ended the discussion on the problem highlighted in bold above:
 what's the best way to detect which plugins are actually used in an
 environment?


 On Thu, Oct 9, 2014 at 6:42 AM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:

 Evgeniy,

 Yes, the plugin management page should be a separate page. As for
 dependency on releases, I meant that some plugin can work only on Ubuntu
 for example, so for different releases different plugins could be 
 available.

  And please confirm that you also agree with the flow: the user installs
  a plugin, then he enables it on the plugin management page, and then he
  creates an environment and on the first step he can uncheck some plugins
  which he doesn't want to use in that particular environment.

 2014-10-09 20:11 GMT+07:00 Evgeniy L e...@mirantis.com:

 Hi,

  Vitaly, I like the idea of having a separate page, but 

Re: [openstack-dev] Quota management and enforcement across projects

2014-10-15 Thread Valeriy Ponomaryov
But why is policy being discussed on the quota thread?

On Wed, Oct 15, 2014 at 11:55 AM, Valeriy Ponomaryov 
vponomar...@mirantis.com wrote:

 Manila project does use policy common code from incubator.

 Our small wrapper for it:
 https://github.com/openstack/manila/blob/8203c51081680a7a9dba30ae02d7c43d6e18a124/manila/policy.py


 On Wed, Oct 15, 2014 at 2:41 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 Doug,

 I totally agree with your findings on the policy module.
 Neutron already has some customizations there and we already have a few
 contributors working on syncing it back with oslo-incubator during the Kilo
 release cycle.

 However, my query was about the quota module.
 From what I gather it seems not a lot of projects use it:

 $ find . -name openstack-common.conf | xargs grep quota
 $

 Salvatore

 On 15 October 2014 00:34, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 14, 2014, at 12:31 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 Hi Doug,

 do you know if the existing quota oslo-incubator module has already some
 active consumers?
 In the meanwhile I've pushed a spec to neutron-specs for improving quota
 management there [1]


 It looks like a lot of projects are syncing the module:

 $ grep policy */openstack-common.conf

 barbican/openstack-common.conf:modules=gettextutils,jsonutils,log,local,timeutils,importutils,policy
 ceilometer/openstack-common.conf:module=policy
 cinder/openstack-common.conf:module=policy
 designate/openstack-common.conf:module=policy
 gantt/openstack-common.conf:module=policy
 glance/openstack-common.conf:module=policy
 heat/openstack-common.conf:module=policy
 horizon/openstack-common.conf:module=policy
 ironic/openstack-common.conf:module=policy
 keystone/openstack-common.conf:module=policy
 manila/openstack-common.conf:module=policy
 neutron/openstack-common.conf:module=policy
 nova/openstack-common.conf:module=policy
 trove/openstack-common.conf:module=policy
 tuskar/openstack-common.conf:module=policy

 I’m not sure how many are actively using it, but I wouldn’t expect them
 to copy it in if they weren’t using it at all.


  Now, I can either work on the oslo-incubator module and leverage it in
  Neutron, or develop the quota module in Neutron, and move it to
  oslo-incubator once we validate it with Neutron. The latter approach seems
  easier from a workflow perspective, as it avoids the intermediate step of
  moving code from oslo-incubator to neutron. On the other hand, it will delay
  adoption in oslo-incubator.


 The policy module is up for graduation this cycle. It may end up in its
 own library, to allow us to build a review team for the code more easily
 than if we put it in with some of the other semi-related modules like the
 server code. We’re still working that out [1], and if you expect to make a
 lot of incompatible changes we should delay graduation to make that simpler.

 Either way, since we have so many consumers, I think it would be easier
 to have the work happen in Oslo somewhere so we can ensure those changes
 are useful to and usable by all of the existing consumers.

 Doug

 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals


 What's your opinion?

 Regards,
 Salvatore

 [1] https://review.openstack.org/#/c/128318/

 On 8 October 2014 18:52, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 8, 2014, at 7:03 AM, Davanum Srinivas dava...@gmail.com wrote:

  Salvatore, Joe,
 
  We do have this at the moment:
 
 
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/quota.py
 
  — dims

 If someone wants to drive creating a useful library during kilo, please
 consider adding the topic to the etherpad we’re using to plan summit
 sessions and then come participate in the Oslo meeting this Friday 16:00
 UTC.

 https://etherpad.openstack.org/p/kilo-oslo-summit-topics

 Doug

 
  On Wed, Oct 8, 2014 at 2:29 AM, Salvatore Orlando 
 sorla...@nicira.com wrote:
 
  On 8 October 2014 04:13, Joe Gordon joe.gord...@gmail.com wrote:
 
 
  On Fri, Oct 3, 2014 at 10:47 AM, Morgan Fainberg
  morgan.fainb...@gmail.com wrote:
 
  Keeping the enforcement local (same way policy works today) helps
 limit
  the fragility, big +1 there.
 
  I also agree with Vish, we need a uniform way to talk about quota
  enforcement similar to how we have a uniform policy language /
 enforcement
  model (yes I know it's not perfect, but it's far closer to uniform
 than
  quota management is).
 
 
  It sounds like maybe we should have an oslo library for quotas?
 Somewhere
  where we can share the code,but keep the operations local to each
 service.
 
 
  This is what I had in mind as well. A simple library for quota
 enforcement
  which can be used regardless of where and how you do it, which might
 depend
  on the application business logic, the WSGI framework in use, or
 other
  factors.
 
 
 
 
   If there is still interest in placing quota in keystone, let's talk about
   how that will work and what will be needed from 

Re: [openstack-dev] [nova] APIImpact flag for nova specs

2014-10-15 Thread Christopher Yeoh
On Wed, Oct 15, 2014 at 7:31 PM, Alex Xu x...@linux.vnet.ibm.com wrote:

 On 15/10/2014 14:20, Christopher Yeoh wrote:

 Hi,

  I was wondering what people thought of having a convention of adding
  an APIImpact flag to the commit messages of proposed nova specs where the
  Nova API will change? It would make it much easier to find proposed
  specs which affect the API, as it's not always clear from the gerrit
  summary listing.

 +1, and is there any tool that can be used to search for the flag?



Can use the message: filter in the gerrit web search interface to search in
commit messages, or
alternatively use gerritlib to write something custom.
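
For example (assuming the flag is added verbatim to commit messages), a query
like the following in the Gerrit search box, or the equivalent over the
gerrit ssh interface, should list the open specs that touch the API:

  message:"APIImpact" project:openstack/nova-specs status:open

  ssh -p 29418 review.openstack.org gerrit query \
      'message:APIImpact project:openstack/nova-specs status:open'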

Regards,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Get keystone auth token via Horizon URL

2014-10-15 Thread Igor Milovanovic
Your android device must be able to access the Identity Service URL.
So if you're trying to access localhost:5000, your device (emulated in this
case, I would assume) must run on the same machine (your dev laptop) as your
OpenStack Identity Service.

On Wed, Oct 15, 2014 at 7:30 AM, Manickam, Kanagaraj 
kanagaraj.manic...@hp.com wrote:

  From Horizon, you won’t be able to do keystone way of authentication.



  From: Ed Lima [mailto:e...@stackerz.com]
  Sent: Wednesday, October 15, 2014 8:30 AM
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] Get keystone auth token via Horizon URL



  I'm in the very early stages of developing an app for android to manage
  openstack services and would like to get the user credentials/tokens on
  keystone to get data and execute commands via the horizon URL. I'm using
  IceHouse on Ubuntu 14.04.

  In my particular use case I have keystone running on my internal server
  http://localhost:5000/v3/auth/tokens, which would allow me to use my app
  fine with JSON to get information from other services and execute commands;
  however, I'd have to be on the same network as my server for it to work.

  On the other hand I have my horizon URL published externally on the
  internet at the address https://openstack.domain.com/horizon, which is
  available from anywhere and gives me access to my OpenStack services fine
  via browser on a desktop. I'd like to do the same on android; would it be
  possible? Is there a way for my app to send JSON requests to horizon at
  https://openstack.domain.com/horizon and get the authentication tokens
  from keystone indirectly?

 I should mention I'm not a very experienced developer and any help would
 be amazing! Thanks

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Igor,
thin*KING*
*solving real problems*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/glance failed

2014-10-15 Thread Alan Pevec
2014-10-15 11:24 GMT+02:00 Ihar Hrachyshka ihrac...@redhat.com:
 I've reported a bug for the failure [1], marked it as Critical and
 nominated it for Juno and Icehouse. I guess that's all we need to do to
 bring the failure to the attention of the glance developers, right?

Thanks Ihar, we have a few Glance developers on stable-maint, but since
this is not just a stable issue AFAICT, I'm adding openstack-dev for
the wider audience.

Cheers,
Alan

 [1]: https://bugs.launchpad.net/glance/+bug/1381419

 On 15/10/14 08:28, jenk...@openstack.org wrote:
 Build failed.

  - periodic-glance-docs-icehouse
    http://logs.openstack.org/periodic-stable/periodic-glance-docs-icehouse/16541e4
    : SUCCESS in 1m 46s
  - periodic-glance-python26-icehouse
    http://logs.openstack.org/periodic-stable/periodic-glance-python26-icehouse/7c14d20
    : FAILURE in 19m 40s
  - periodic-glance-python27-icehouse
    http://logs.openstack.org/periodic-stable/periodic-glance-python27-icehouse/880455f
    : SUCCESS in 15m 39s

 ___
 Openstack-stable-maint mailing list
 openstack-stable-ma...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] APIImpact flag for nova specs

2014-10-15 Thread Sylvain Bauza


On 15/10/2014 11:56, Christopher Yeoh wrote:


On Wed, Oct 15, 2014 at 7:31 PM, Alex Xu x...@linux.vnet.ibm.com wrote:


On 15/10/2014 14:20, Christopher Yeoh wrote:

Hi,

I was wondering what people thought of having a convention of adding
an APIImpact flag to the commit messages of proposed nova specs where the
Nova API will change? It would make it much easier to find proposed
specs which affect the API, as it's not always clear from the gerrit
summary listing.

+1, and is there any tool that can be used to search for the flag?



Can use the message: filter in the gerrit web search interface to
search in commit messages, or
alternatively use gerritlib to write something custom.



IMHO, asking people to put a tag on a commit msg is good but error-prone
because there could be some misses.
Considering that API changes require new templates, why not ask people to
provide the changes they want in a separate tpl file, and make use of the
Gerrit file pattern search, like

specs/kilo/approved/*.tpl ?


-Sylvain



Regards,

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Change in fuel-library CI syntax check script

2014-10-15 Thread Aleksandr Didenko
Hi,

we've merged [1] into master, so when you create a new gerrit review for
fuel-library, our CI job [2] will run a syntax check (depending on file type)
for all the files found under the */files/ocf/* path.

So if you want to ship an OCF script as a file in your module, please put it
in the MODULE/files/ocf/ directory, like in [3] and [4] for example.


[1] https://review.openstack.org/#/c/126841/5
[2] https://fuel-jenkins.mirantis.com/job/fuellib_review_syntax_check/
[3]
https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/galera/files/ocf
[4]
https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/galera/manifests/init.pp#L206-L212

Regards,
Aleksandr Didenko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Supporting Javascript clients calling OpenStack APIs

2014-10-15 Thread Martin Geisler
Richard Jones r1chardj0...@gmail.com writes:

Hi,

I'm working on the ZeroVM project at Rackspace and as part of that I'm
writing a JavaScript based file manager for Swift which I've called
Swift Browser:

  https://github.com/zerovm/swift-browser

When writing this, I of course ran into exactly the problems you
describe below.

 2. add CORS support to all the OpenStack APIs though a new WSGI
middleware (for example oslo.middleware.cors) and configured into
each of the API services individually since they all exist on
different origin host:port combinations, or

This was the solution I picked first and it was not difficult to get
working. I used this middleware:

  http://blog.yunak.eu/2013/07/24/keystone_cors/

Since I only care about Swift, another solution that I've been using is
to use swauth (or really https://github.com/zerovm/liteauth) which lets
you authenticate to Swift directly. There is thus only a single origin
to consider and the same-origin problems disappear.

 3. a new web service that proxies all the APIs and serves the static
Javascript (etc) content from the one origin (host).

That sounds like another good alternative. I'm not that familiar with
how people normally deploy Swift, but I would imagine that people set up
some proxying anyway to rewrite the URLs to a nicer format.

 I have implemented options 2 and 3 as an exercise to see how horrid
 each one is.


 == CORS Middleware ==

 The middleware option results in a reasonably nice bit of middleware.
 It's short and relatively easy to test. The big problem with it comes
 in configuring it in all the APIs. The configuration for the
 middleware takes two forms:

 1. hooking oslo.middleware.cors into the WSGI pipeline (there's more
than one in each API),

If this became a standard part of OpenStack, could one imagine that the
default WSGI pipelines would already contain the CORS middleware?

 2. adding the CORS configuration itself for the middleware in the
API's main configuration file (eg. keystone.conf or nova.conf).

 So for each service, that's two configuration files *and* the kicker
 is that the paste configuration file is non-trivially different in
 almost every case.

I'm unsure why it would have to be different for each service? Would the
services not be configured mostly the same, as in:

  [filter:cors]
  allow_origins = https://static.company.com
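
For what it's worth, the kind of filter such a [filter:cors] section would
point at can stay quite small. A rough sketch (this is not the proposed
oslo.middleware.cors, just an illustration of the idea):

  class CORSMiddleware(object):
      def __init__(self, app, allow_origins):
          self.app = app
          self.allow_origins = allow_origins

      def __call__(self, environ, start_response):
          origin = environ.get('HTTP_ORIGIN')

          def cors_start_response(status, headers, exc_info=None):
              # Only add CORS headers for whitelisted origins.
              if origin and origin in self.allow_origins:
                  headers = list(headers) + [
                      ('Access-Control-Allow-Origin', origin),
                      ('Access-Control-Allow-Headers',
                       'Content-Type, X-Auth-Token'),
                      ('Access-Control-Allow-Methods',
                       'GET, POST, PUT, DELETE, OPTIONS'),
                  ]
              return start_response(status, headers, exc_info)

          # Answer CORS preflight requests directly.
          if environ['REQUEST_METHOD'] == 'OPTIONS':
              cors_start_response('200 OK', [('Content-Length', '0')])
              return [b'']

          return self.app(environ, cors_start_response)

  def filter_factory(global_conf, **local_conf):
      # paste.deploy entry point; allow_origins comes from the ini section.
      allow_origins = local_conf.get('allow_origins', '').split()
      return lambda app: CORSMiddleware(app, allow_origins)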

 == New Single-Point API Service ==

 Actually, this is not horrid in any way - unless that publicURL
 rewriting gives you the heebie-jeebies.

 It works, and offers us some nice additional features like being able
 to host the service behind SSL without needing to get a bazillion
 certificates. And maybe load balancing. And maybe API access
 filtering.

I like this option for the flexibility it gives you.

-- 
Martin Geisler

http://google.com/+MartinGeisler


pgpbntQlgOsos.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Juno RC3 available

2014-10-15 Thread Thierry Carrez
Hello everyone,

Due to three critical regressions discovered in testing of the published
Cinder 2014.2 RC2, we generated a new Juno release candidate. You can
find the list of bugfixes in this RC and a link to a source tarball at:

https://launchpad.net/cinder/juno/juno-rc3

At this point, only show-stoppers would warrant a release candidate
respin, so this RC3 is very likely to be formally released as the final
Cinder 2014.2 tomorrow. You are therefore strongly encouraged to
give this tarball a last-minute test round and validate it!

Alternatively, you can directly test the proposed/juno branch at:
https://github.com/openstack/cinder/tree/proposed/juno

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/cinder/+filebug

and tag it *juno-rc-potential* to bring it to the release crew's attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-15 Thread Chmouel Boudjnah
On Wed, Oct 15, 2014 at 10:25 AM, Thierry Carrez thie...@openstack.org
wrote:

 I found this:
 http://www.topito.com/top-meilleurs-restaurants-vegetariens-paris

 The Creperies (Brittany pancakes) are also great choices for
 vegetarians since you can easily pick a vegetarian filling.



my vegetarian friends are saying that this is the best place in Paris for
veggie burgers[1]:

http://www.eastsideburgers.fr/en

Chmouel

[1] caveat: may be borderline Parisian hipsterish
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Change in fuel-library CI syntax check script

2014-10-15 Thread Mike Scherbakov
Excellent!
I'm also looking forward to seeing -1 on all shell code which exceeds 50 lines
at some point in the future... :)

On Wed, Oct 15, 2014 at 2:32 PM, Aleksandr Didenko adide...@mirantis.com
wrote:

 Hi,

 we've merged [1] into master, so when you create new gerrit review for
 fuel-library, our CI job [2] will run syntax check (depending on file type)
 for all the files found under */files/ocf/* path.

 So if you want to ship OCF script as a file in your module, please put it
 in MODULE/files/ocf/ directory, like in [3] and [4] for example.


 [1] https://review.openstack.org/#/c/126841/5
 [2] https://fuel-jenkins.mirantis.com/job/fuellib_review_syntax_check/
 [3]
 https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/galera/files/ocf
 [4]
 https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/galera/manifests/init.pp#L206-L212

 Regards,
 Aleksandr Didenko

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Get keystone auth token via Horizon URL

2014-10-15 Thread Dolph Mathews
The :5000 port of Keystone is designed to be exposed publicly via HTTPS. In
the /v2.0/ API, there's only a handful of calls exposed on that port. In
/v3/ the entire API is exposed, but wrapped by RBAC. If you're using
HTTPS, it should be safe to expose the public interfaces of all
the services to the Internet.

Remember that UUID and PKI tokens are both bearer tokens, and that it takes
minimal effort for an attacker to compromise your cloud if you're exposing
tokens over HTTP.

On Tuesday, October 14, 2014, Ed Lima e...@stackerz.com wrote:

 I'm on the very early stages of developing an app for android to manage
 openstack services and would like to get the user credentials/tokens on
 keystone to get data and execute commands via the horizon URL. I'm using
 IceHouse on Ubuntu 14.04.

 In my particular use case I have keystone running on my internal server 
 *http://localhost:5000/v3/auth/tokens
 http://localhost:5000/v3/auth/tokens* which would allow me to use my
 app fine with JSON to get information from other services and execute
 commands however I'd have to be on the same network as my server for it to
 work.

 On the other hand I have my horizon URL published externally on the
 internet at the address *https://openstack.domain.com/horizon
 https://openstack.domain.com/horizon* which is available from anywhere
 and gives me access to my OpenStack services fine via browser on a desktop.
 I'd like to do the same on android, would it be possible? Is there a way
 for my app to send JSON requests to horizon at 
 *https://openstack.domain.com/horizon
 https://openstack.domain.com/horizon* and get the authentication tokens
 from keystone indirectly?

 I should mention I'm not a very experienced developer and any help would
 be amazing! Thanks

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-10-15 Thread Doug Hellmann
Sigh. Because I typed the wrong command. Thanks for pointing that out.

I don’t see any instances of “quota” in openstack-common.conf files:

$ grep quota */openstack-common.conf

or in any projects under "openstack/":

$ ls */*/openstack/common/quota.py
ls: cannot access */*/openstack/common/quota.py: No such file or directory

I don’t know where manila’s copy came from, but if it has been copied from the 
incubator by hand and then changed we should fix that up.

Doug

On Oct 15, 2014, at 5:28 AM, Valeriy Ponomaryov vponomar...@mirantis.com 
wrote:

 But why policy is being discussed on quota thread?
 
 On Wed, Oct 15, 2014 at 11:55 AM, Valeriy Ponomaryov 
 vponomar...@mirantis.com wrote:
 Manila project does use policy common code from incubator.
 
 Our small wrapper for it: 
 https://github.com/openstack/manila/blob/8203c51081680a7a9dba30ae02d7c43d6e18a124/manila/policy.py
 
 
 On Wed, Oct 15, 2014 at 2:41 AM, Salvatore Orlando sorla...@nicira.com 
 wrote:
 Doug,
 
 I totally agree with your findings on the policy module.
 Neutron already has some customizations there and we already have a few 
 contributors working on syncing it back with oslo-incubator during the Kilo 
 release cycle.
 
 However, my query was about the quota module.
 From what I gather it seems not a lot of projects use it:
 
 $ find . -name openstack-common.conf | xargs grep quota
 $
 
 Salvatore
 
 On 15 October 2014 00:34, Doug Hellmann d...@doughellmann.com wrote:
 
 On Oct 14, 2014, at 12:31 PM, Salvatore Orlando sorla...@nicira.com wrote:
 
 Hi Doug,
 
 do you know if the existing quota oslo-incubator module has already some 
 active consumers?
 In the meanwhile I've pushed a spec to neutron-specs for improving quota 
 management there [1]
 
 It looks like a lot of projects are syncing the module:
 
 $ grep policy */openstack-common.conf
 barbican/openstack-common.conf:modules=gettextutils,jsonutils,log,local,timeutils,importutils,policy
 ceilometer/openstack-common.conf:module=policy
 cinder/openstack-common.conf:module=policy
 designate/openstack-common.conf:module=policy
 gantt/openstack-common.conf:module=policy
 glance/openstack-common.conf:module=policy
 heat/openstack-common.conf:module=policy
 horizon/openstack-common.conf:module=policy
 ironic/openstack-common.conf:module=policy
 keystone/openstack-common.conf:module=policy
 manila/openstack-common.conf:module=policy
 neutron/openstack-common.conf:module=policy
 nova/openstack-common.conf:module=policy
 trove/openstack-common.conf:module=policy
 tuskar/openstack-common.conf:module=policy
 
 I’m not sure how many are actively using it, but I wouldn’t expect them to 
 copy it in if they weren’t using it at all.
 
 
 Now, I can either work on the oslo-incubator module and leverage it in 
 Neutron, or develop the quota module in Neutron, and move it to 
 oslo-incubator once we validate it with Neutron. The latter approach seems 
 easier from a workflow perspective - as it avoid the intermediate steps of 
 moving code from oslo-incubator to neutron. On the other hand it will delay 
 adoption in oslo-incubator.
 
 The policy module is up for graduation this cycle. It may end up in its own 
 library, to allow us to build a review team for the code more easily than if 
 we put it in with some of the other semi-related modules like the server 
 code. We’re still working that out [1], and if you expect to make a lot of 
 incompatible changes we should delay graduation to make that simpler.
 
 Either way, since we have so many consumers, I think it would be easier to 
 have the work happen in Oslo somewhere so we can ensure those changes are 
 useful to and usable by all of the existing consumers.
 
 Doug
 
 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
 
 
 What's your opinion?
 
 Regards,
 Salvatore
 
 [1] https://review.openstack.org/#/c/128318/
 
 On 8 October 2014 18:52, Doug Hellmann d...@doughellmann.com wrote:
 
 On Oct 8, 2014, at 7:03 AM, Davanum Srinivas dava...@gmail.com wrote:
 
  Salvatore, Joe,
 
  We do have this at the moment:
 
  https://github.com/openstack/oslo-incubator/blob/master/openstack/common/quota.py
 
  — dims
 
 If someone wants to drive creating a useful library during kilo, please 
 consider adding the topic to the etherpad we’re using to plan summit 
 sessions and then come participate in the Oslo meeting this Friday 16:00 UTC.
 
 https://etherpad.openstack.org/p/kilo-oslo-summit-topics
 
 Doug
 
 
  On Wed, Oct 8, 2014 at 2:29 AM, Salvatore Orlando sorla...@nicira.com 
  wrote:
 
  On 8 October 2014 04:13, Joe Gordon joe.gord...@gmail.com wrote:
 
 
  On Fri, Oct 3, 2014 at 10:47 AM, Morgan Fainberg
  morgan.fainb...@gmail.com wrote:
 
  Keeping the enforcement local (same way policy works today) helps limit
  the fragility, big +1 there.
 
  I also agree with Vish, we need a uniform way to talk about quota
  enforcement similar to how we have a uniform policy language / 
  enforcement
  model (yes I know it's not perfect, but 

[openstack-dev] [QA] Meeting Thursday October 16th at 22:00 UTC

2014-10-15 Thread Matthew Treinish

Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, October 16th at 22:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

The meeting tomorrow will be primarily dedicated to summit planning. So please
remember to post any ideas or topics that you think should be discussed at
summit on the brainstorming etherpad:

https://etherpad.openstack.org/p/kilo-qa-summit-topics

before the meeting tomorrow.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
07:30 ACST
0:00 CEST
17:00 CDT
15:00 PDT

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-15 Thread Anita Kuno
On 10/15/2014 07:29 AM, Chmouel Boudjnah wrote:
 On Wed, Oct 15, 2014 at 10:25 AM, Thierry Carrez thie...@openstack.org
 wrote:
 
 I found this:
 http://www.topito.com/top-meilleurs-restaurants-vegetariens-paris

 The Creperies (Brittany pancakes) are also great choices for
 vegetarians since you can easily pick a vegetarian filling.

 
 
 my vegetarians friends are saying that this is the best place in paris for
 veggie burgers[1]:
 
 http://www.eastsideburgers.fr/en
Thanks Chmouel! And please thank your friends for the suggestion.
 
 Chmouel
 
 [1] caveat may be bordeline parisian hippsterish
Ha ha ha. I would be surprised if it wasn't. :D
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Question about the OVS_PHYSICAL_BRIDGE attribute defined in localrc

2014-10-15 Thread Danny Choi (dannchoi)
Hi,

When I have OVS_PHYSICAL_BRIDGE="br-p1p1" defined in localrc, devstack creates 
the OVS bridge br-p1p1.

localadmin@qa4:~/devstack$ sudo ovs-vsctl show
5f845d2e-9647-47f2-b92d-139f6faaf39e
Bridge br-p1p1 
Port phy-br-p1p1
Interface phy-br-p1p1
type: patch
options: {peer=int-br-p1p1}
Port br-p1p1
Interface br-p1p1
type: internal

However, no physical port is added to it.  I have to manually do it.

localadmin@qa4:~/devstack$ sudo ovs-vsctl add-port br-p1p1 p1p1
localadmin@qa4:~/devstack$ sudo ovs-vsctl show
5f845d2e-9647-47f2-b92d-139f6faaf39e
Bridge br-p1p1
Port phy-br-p1p1
Interface phy-br-p1p1
type: patch
options: {peer=int-br-p1p1}
Port br-p1p1
Interface br-p1p1
type: internal
Port "p1p1"
Interface "p1p1"


Is this expected behavior?

Thanks,
Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Steven Dake

On 10/14/2014 01:12 PM, Clint Byrum wrote:

Excerpts from Lars Kellogg-Stedman's message of 2014-10-14 12:50:48 -0700:

On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:

I think the above strategy is spot on. Unfortunately, that's not how the
Docker ecosystem works.

I'm not sure I agree here, but again nobody is forcing you to use this
tool.


operating system that the image is built for. I see you didn't respond to my
point that in your openstack-containers environment, you end up with Debian
*and* Fedora images, since you use the official MySQL dockerhub image. And
therefore you will end up needing to know sysadmin specifics (such as how
network interfaces are set up) on multiple operating system distributions.

I missed that part, but ideally you don't *care* about the
distribution in use.  All you care about is the application.  Your
container environment (docker itself, or maybe a higher level
abstraction) sets up networking for you, and away you go.

If you have to perform system administration tasks inside your
containers, my general feeling is that something is wrong.


Speaking as a curmudgeon ops guy from back in the day.. the reason
I choose the OS I do is precisely because it helps me _when something
is wrong_. And the best way an OS can help me is to provide excellent
debugging tools, and otherwise move out of the way.

When something _is_ wrong and I want to attach GDB to mysqld in said
container, I could build a new container with debugging tools installed,
but that may lose the very system state that I'm debugging. So I need to
run things inside the container like apt-get or yum to install GDB.. and
at some point you start to realize that having a whole OS is actually a
good thing even if it means needing to think about a few more things up
front, such as which OS will I use? and what tools do I need installed
in my containers?

What I mean to say is, just grabbing off the shelf has unstated
consequences.
The biggest gain of containers is that they are hermetically sealed. They 
turn hundreds of packages (the dependencies and OS files) into 1 
interface with one operation:

Start with a defined variatic environment.

This idea rocks if you can put up with the pain that debugging something 
that is busted is very difficult.   Accessing logs is not tidy and 
debugging in the ways that true experienced folk know how to (with gdb 
attach for example) just isn't possible.


It also requires that you rely on a completely stateless model (until 
persistent storage is implemented in k8s I guess) and completely 
idempotent model.  I really like the idea of “time to upgrade, let's roll 
new images across the cluster”.  This model is very powerful but comes 
with some pain.


Regards
-steve



Sure, Docker isn't any more limiting than using a VM or bare hardware, but
if you use the official Docker images, it is more limiting, no?

No more so than grabbing a virtual appliance rather than building a
system yourself.

In other words: sure, it's less flexible, but possibly it's faster to
get started, which is especially useful if your primary goal is not to
be a database administrator but is actually to write an application
that uses a database backend.

I think there are uses cases for both official and customized
images.


In the case of Kolla, we're deploying OpenStack, not just some new
application that uses a database backend. I think the bar is a bit
higher for operations than end-user applications, since it sits below
the abstractions, much closer to the metal.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Vijay Venkatachalam
I felt guilty after reading Vijay B.'s reply ☺.
My apologies for having replied in brief; here are my thoughts in detail.

Currently, LB configuration exposed via a floating IP is a 2-step operation.
The user has to first “create a VIP with a private IP” and then “create a FLIP
and assign the FLIP to the private VIP”, which results in a DNAT in the gateway.
The proposal is to combine these 2 steps, i.e. make the DNAT operation part of
the VIP creation process.
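As a rough CLI sketch (illustrative names only; the exact commands depend on
the LBaaS API version in use), the two steps today look something like:

  # step 1: create the VIP on a private subnet
  neutron lb-vip-create --name web-vip --protocol HTTP \
      --protocol-port 80 --subnet-id <private-subnet-id> <pool-id>

  # step 2: allocate a FLIP and DNAT it to the VIP's neutron port
  neutron floatingip-create <external-net-id>
  neutron floatingip-associate <floatingip-id> <vip-port-id>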

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning the FLIP would be hosted directly by the LB appliance.

In essence, the LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.   Implement via the traditional route. The driver creates a private IP, 
implements the VIP as a private IP in the LB appliance and calls Neutron to 
implement the DNAT (FLIP to private VIP).

2.   Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API; there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having step (1) 
implemented in the abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Heres some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand theres other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sharing

-diagrams are draw.io (http://draw.io) based and can be opened from within 
Drive by selecting the appropriate application.

On 10/7/14 2:25 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:

I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then return to the
user the vip info.  This means the user will know the VIP before the
driver even finishes creating the load balancer.

So if Octavia is just going to create a floating IP and then associate
that floating IP to the neutron port, there is the problem of the user
not ever seeing the correct VIP (which would be the floating iP).

So really, we need to have a very detailed discussion on what the
options are for us to get this to work for those of us intending to use
floating ips as VIPs while also working for those only requiring a
neutron port.  I'm pretty sure this will require changing the way V2
behaves, but there's more discussion points needed on that.  Luckily, V2
is in a feature branch and not merged into Neutron master, so we can
change it pretty easily.  Phil and I will bring this up in the meeting
tomorrow, which may lead to a meeting topic in the neutron lbaas
meeting.

Thanks,
Brandon


On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
 Hello All,

 I wanted to start a discussion on floating IP management and ultimately
 decide how the LBaaS group wants to handle the association.

 There is a need to utilize floating IPs (FLIPs) and their API calls to
 associate a FLIP to the neutron port that we currently spin up.

 See DOCS here:

 
 http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html

 Currently, LBaaS will make internal service calls (clean interface :/)
to create and attach a Neutron port.
 The VIP from this port is added to the Loadbalancer object of the Load
balancer 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Vishvananda Ishaya

On Oct 14, 2014, at 1:21 PM, Lars Kellogg-Stedman l...@redhat.com wrote:

 On Tue, Oct 14, 2014 at 04:06:22PM -0400, Jay Pipes wrote:
 I understand that general feeling, but system administration tasks like
 debugging networking issues or determining and grepping log file locations
 or diagnosing packaging issues for OpenStack services or performing database
 logfile maintenance and backups don't just go away because you're using
 containers, right?
 
 They don't go away, but they're not necessarily things that you would
 do inside your container.
 
 Any state (e.g., database tables) that has a lifetime different from
 that of your container should be stored outside of the container
 proper.  In docker, this would be a volume (in a cloud environment,
 this would be something like EBS or a Cinder volume).
 
 Ideally, your container-optimized applications logs to stdout/stderr.
 If you have multiple processes, they each run in a separate container.
 
 Backups take advantage of the data volumes you've associated with your
 container.  E.g., spawning a new container using the docker
 --volumes-from option to access that data for backup purposes.
 
 If you really need to get inside a container for diagnostic purposes,
 then you use something like nsenter, nsinit, or the forthcoming
 docker exec”.

“something like” isn’t good enough here. There must be a standard way
to do this stuff or people will continue to build fat containers with
all of their pet tools inside. This means containers will just be
another incarnation of virtualization.

Vish

 
 
 they very much seem to be developed from the point of view of application
 developers, and not so much from the point of view of operators who need to
 maintain and support those applications.
 
 I think it's entirely accurate to say that they are
 application-centric, much like services such as Heroku, OpenShift,
 etc.
 
 -- 
 Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
 Cloud Engineering / OpenStack  | http://blog.oddbit.com/
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Vishvananda Ishaya

On Oct 14, 2014, at 1:12 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Lars Kellogg-Stedman's message of 2014-10-14 12:50:48 -0700:
 On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:
 I think the above strategy is spot on. Unfortunately, that's not how the
 Docker ecosystem works.
 
 I'm not sure I agree here, but again nobody is forcing you to use this
 tool.
 
 operating system that the image is built for. I see you didn't respond to my
 point that in your openstack-containers environment, you end up with Debian
 *and* Fedora images, since you use the official MySQL dockerhub image. And
 therefore you will end up needing to know sysadmin specifics (such as how
 network interfaces are set up) on multiple operating system distributions.
 
 I missed that part, but ideally you don't *care* about the
 distribution in use.  All you care about is the application.  Your
 container environment (docker itself, or maybe a higher level
 abstraction) sets up networking for you, and away you go.
 
 If you have to perform system administration tasks inside your
 containers, my general feeling is that something is wrong.
 
 
 Speaking as a curmudgeon ops guy from back in the day.. the reason
 I choose the OS I do is precisely because it helps me _when something
 is wrong_. And the best way an OS can help me is to provide excellent
 debugging tools, and otherwise move out of the way.
 
 When something _is_ wrong and I want to attach GDB to mysqld in said
 container, I could build a new container with debugging tools installed,
 but that may lose the very system state that I'm debugging. So I need to
 run things inside the container like apt-get or yum to install GDB.. and
 at some point you start to realize that having a whole OS is actually a
 good thing even if it means needing to think about a few more things up
 front, such as which OS will I use? and what tools do I need installed
 in my containers?
 
 What I mean to say is, just grabbing off the shelf has unstated
 consequences.

If this is how people are going to use and think about containers, I would
submit they are a huge waste of time. The performance value they offer is
dramatically outweighed by the flexibility and existing tooling that exists
for virtual machines. As I state in my blog post[1] if we really want to
get value from containers, we must convert to the single application per
container view. This means having standard ways of doing the above either
on the host machine or in a debugging container that is as easy (or easier)
than the workflow you mention. There are not good ways to do this yet, and
the community hand-waves it away, saying things like, “well you could …”.
“You could” isn’t good enough. The result is that a lot of people that are
using containers today are doing fat containers with a full os.

Vish

[1] 
https://medium.com/@vishvananda/standard-components-not-standard-containers-c30567f23da6
 
 Sure, Docker isn't any more limiting than using a VM or bare hardware, but
 if you use the official Docker images, it is more limiting, no?
 
 No more so than grabbing a virtual appliance rather than building a
 system yourself.  
 
 In other words: sure, it's less flexible, but possibly it's faster to
 get started, which is especially useful if your primary goal is not to
 be a database administrator but is actually to write an application
 that uses a database backend.
 
 I think there are uses cases for both official and customized
 images.
 
 
 In the case of Kolla, we're deploying OpenStack, not just some new
 application that uses a database backend. I think the bar is a bit
 higher for operations than end-user applications, since it sits below
 the abstractions, much closer to the metal.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] nova get-password does not seem to work

2014-10-15 Thread Danny Choi (dannchoi)
Hi,

I used devstack to deploy Juno OpenStack.

I spin up an instance with cirros-0.3.2-x86_64-uec.

By default, the username/password is cirros/cubswin:)

When I execute the command “nova get-password”, nothing is returned.


localadmin@qa4:/etc/nova$ nova show vm1

+--++

| Property | Value  
|

+--++

| OS-DCF:diskConfig| MANUAL 
|

| OS-EXT-AZ:availability_zone  | nova   
|

| OS-EXT-STS:power_state   | 1  
|

| OS-EXT-STS:task_state| -  
|

| OS-EXT-STS:vm_state  | active 
|

| OS-SRV-USG:launched_at   | 2014-10-15T14:48:04.00 
|

| OS-SRV-USG:terminated_at | -  
|

| accessIPv4   |
|

| accessIPv6   |
|

| config_drive |
|

| created  | 2014-10-15T14:47:56Z   
|

| flavor   | m1.tiny (1)
|

| hostId   | 
ea715752b11cf96b95f9742513a351d2d6571c4fdb76f497d64ecddb   |

| id   | 1a3c487e-c3a3-4783-bd0b-e3c87bf22c3f   
|

| image| cirros-0.3.2-x86_64-uec 
(1dda953b-9319-4c43-bd20-1ef75b491553) |

| key_name | cirros-key 
|

| metadata | {} 
|

| name | vm1
|

| os-extended-volumes:volumes_attached | [] 
|

| private network  | 10.0.0.11  
|

| progress | 0  
|

| security_groups  | default
|

| status   | ACTIVE 
|

| tenant_id| c8daf9bd6dda40a982b074322c08da7d   
|

| updated  | 2014-10-15T14:48:04Z   
|

| user_id  | 2cbbafae01404d4ebeb6e6fbacfa6546   
|

+--++

localadmin@qa4:/etc/nova$ nova help get-password

usage: nova get-password server [private-key]


Get password for a server.


Positional arguments:

  server   Name or ID of server.

  private-key  Private key (used locally to decrypt password) (Optional).

 When specified, the command displays the clear (decrypted) VM

 password. When not specified, the ciphered VM password is

 displayed.

localadmin@qa4:/etc/nova$ nova get-password vm1

  [NOTHING RETURNED]

localadmin@qa4:/etc/nova$


Am I missing something?


Thanks,

Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] Naming convention for unused variables

2014-10-15 Thread Vishvananda Ishaya

On Oct 13, 2014, at 6:28 PM, Angus Lees g...@inodes.org wrote:

 (Context: https://review.openstack.org/#/c/117418/)
 
 I'm looking for some rough consensus on what naming conventions we want for 
 unused variables in Neutron, and across the larger OpenStack python codebase 
 since there's no reason for Neutron to innovate here.
 
 As far as I can see, there are two cases:
 
 
 1.  The I just don't care variable
 
 Eg:   _, _, filename = path.rpartition('/')
 
 In python this is very commonly '_', but this conflicts with the gettext 
 builtin so we should avoid it in OpenStack.
 
 Possible candidates include:
 
 a.  'x'
 b. '__'  (double-underscore) 
 c. No convention
 
 
 2.  I know it is unused, but the name still serves as documentation
 
 Note this turns up as two cases: as a local, and as a function parameter.
 
 Eg:   out, _err = execute('df', path)
 
 Eg:   def makefile(self, _mode, _other):
return self._buffer
 
 I deliberately chose that second example to highlight that the leading-
 underscore convention collides with its use for private properties.
 
 Possible candidates include:
 
 a. _foo   (leading-underscore, note collides with private properties)
 b. unused_foo   (suggested in the Google python styleguide)
 c. NOQA_foo   (as suggested in c/117418)
 d. No convention  (including not indicating that variables are known-unused)

I prefer (a). Private properties are explicitly prefixed with self., so it
doesn’t seem to be a conflict to me.
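A short, purely illustrative Python sketch of why the two uses don't collide
(the names here are made up, not from any real module):

  import subprocess

  class ResponseBuffer(object):
      def __init__(self):
          self._buffer = []                  # private attribute: always accessed as self._buffer

      def makefile(self, _mode, _other):     # parameters known-unused, kept for the interface
          return self._buffer

  # local that is intentionally ignored, but the name documents what was discarded
  out, _err = subprocess.Popen(['df', '/'], stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE).communicate()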

Vish

 
 
 As with all style discussions, everyone feels irrationally attached to their 
 favourite, but the important bit is to be consistent to aid readability  (and 
 in this case, also to help the mechanical code checkers).
 
 Vote / Discuss / Suggest additional alternatives.
 
 -- 
 - Gus
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread David Vossel


- Original Message -
 I'm not arguing that everything should be managed by one systemd, I'm just
 saying, for certain types of containers, a single docker container with
 systemd in it might be preferable to trying to slice it unnaturally into
 several containers.
 
 Systemd has invested a lot of time/effort to be able to relaunch failed
 services, support spawning and maintaining unix sockets and services across
 them, etc, that you'd have to push out of and across docker containers. All
 of that can be done, but why reinvent the wheel? Like you said, pacemaker
 can be made to make it all work, but I have yet to see a way to deploy
 pacemaker services anywhere near as easy as systemd+yum makes it. (Thanks be
 to redhat. :)
 
 The answer seems to be, it's not dockerish. That's ok. I just wanted to
 understand the issue for what it is. If there is a really good reason for
 not wanting to do it, or that it's just not the way things are done. I've
 had kind of the opposite feeling regarding docker containers. Docker used to
 do very bad things when killing the container. Nasty if you wanted your
 database not to get corrupted. Killing pid 1 is a bit sketchy, then forcing the
 container down after 10 seconds was particularly bad. Having something like
 systemd in place allows the database to be notified, then shut down properly.
 Sure you can script up enough shell to make this work, but you have to do
 some difficult code, over and over again... Docker has gotten better more
 recently but it still makes me a bit nervous using it for stateful things.
 
 As for recovery, systemd can do the recovery too. I'd argue at this point in
 time, I'd expect systemd recovery to probably work better than some custom

yes, systemd can do recovery and that is part of the problem. From my 
perspective
there should be one resource management system. Whether that be pacemaker, 
kubernetes,
or some other distributed system, it doesn't matter.  If you are mixing systemd
with these other external distributed orchestration/management tools you have 
containers
that are silently failing/recovering without the management layer having any 
clue.

Centralized recovery means there is one tool responsible for detecting and
invoking recovery. Everything else in the system is designed to make that possible.

If we want to put a process in the container to manage multiple services, we'd 
need
the ability to escalate failures to the distributed management tool.  Systemd 
could
work if it was given the ability to act more as a watchdog after starting
services, rather than invoking recovery. If systemd could be configured to die
(or potentially gracefully clean up the container's resources before dying)
whenever a failure is detected, then systemd might make sense.

I'm approaching this from a system management point of view. Running systemd in 
your
one off container that you're managing manually does not have the same 
drawbacks.
I don't have a vendetta against systemd or anything, I just think it's a step 
backwards
to put systemd in containers. I see little value in having containers become 
lightweight
virtual machines. Containers have much more to offer.

-- Vossel



 shell scripts when it comes to doing the right thing recovering at bring-up.
 The other thing is, recovery is not just about pid 1 going away. Often it
 sticks around and other badness is going on. It's a way to know things are
 bad, but you can't necessarily rely on it to know the container is healthy.
 You need more robust checks for that.
 
 Thanks,
 Kevin
 
 
 From: David Vossel [dvos...@redhat.com]
 Sent: Tuesday, October 14, 2014 4:52 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns
 
 - Original Message -
  Ok, why are you so down on running systemd in a container?
 
 It goes against the grain.
 
 From a distributed systems view, we gain quite a bit of control by
 maintaining
 one service per container. Containers can be re-organised and re-purposed
 dynamically.
 If we have systemd trying to manage an entire stack of resources within a
 container,
 we lose this control.
 
 From my perspective a containerized application stack needs to be managed
 externally
 by whatever is orchestrating the containers to begin with. When we take a
 step back
 and look at how we actually want to deploy containers, systemd doesn't make
 much sense.
 It actually limits us in the long run.
 
 Also... recovery. Using systemd to manage a stack of resources within a
 single container
 makes it difficult for whatever is externally enforcing the availability of
 that container
 to detect the health of the container.  As it is now, the actual service is
 pid 1 of a
 container. If that service dies, the container dies. If systemd is pid 1,
 there can
 be all kinds of chaos occurring within the container, but the external
 distributed
 orchestration system won't have a clue 

Re: [openstack-dev] [qa] Cannot start the VM console when VM is launched at Compute node

2014-10-15 Thread Vishvananda Ishaya
No this is not expected and may represent a misconfiguration or a bug. Something
is returning a 404 when it shouldn’t. You might have more luck running the nova
command with --debug to see what specifically is 404ing. You could also see if
anything is reporting NotFound in the nova-consoleauth, nova-api or nova-compute logs.

Vish

On Oct 14, 2014, at 10:45 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:

 Hi,
 
 I used devstack to deploy multi-node OpenStack, with Controller + 
 nova-compute + Network on one physical node (qa4),
 and Compute on a separate physical node (qa5).
 
 When I launch a VM which spun up on the Compute node (qa5), I cannot launch 
 the VM console, in both CLI and Horizon.
 
 localadmin@qa4:~/devstack$ nova hypervisor-servers q
 +--+---+---+-+
 | ID   | Name  | Hypervisor ID | 
 Hypervisor Hostname |
 +--+---+---+-+
 | 48b16e7c-0a17-42f8-9439-3146f26b4cd8 | instance-000e | 1 | 
 qa4 |
 | 3eadf190-465b-4e90-ba49-7bc8ce7f12b9 | instance-000f | 1 | 
 qa4 |
 | 056d4ad2-e081-4706-b7d1-84ee281e65fc | instance-0010 | 2 | 
 qa5 |
 +--+---+---+-+
 localadmin@qa4:~/devstack$ nova list
 +--+--+++-+-+
 | ID   | Name | Status | Task State | Power 
 State | Networks|
 +--+--+++-+-+
 | 3eadf190-465b-4e90-ba49-7bc8ce7f12b9 | vm1  | ACTIVE | -  | Running 
 | private=10.0.0.17   |
 | 48b16e7c-0a17-42f8-9439-3146f26b4cd8 | vm2  | ACTIVE | -  | Running 
 | private=10.0.0.16, 172.29.173.4 |
 | 056d4ad2-e081-4706-b7d1-84ee281e65fc | vm3  | ACTIVE | -  | Running 
 | private=10.0.0.18, 172.29.173.5 |
 +--+--+++-+-+
 localadmin@qa4:~/devstack$ nova get-vnc-console vm3 novnc
 ERROR (CommandError): No server with a name or ID of 'vm3' exists.  
 [ERROR]
 
 
 This does not happen if the VM resides on the Controller (qa4).
 
 localadmin@qa4:~/devstack$ nova get-vnc-console vm2 novnc
 +---+-+
 | Type  | Url 
 |
 +---+-+
 | novnc | 
 http://172.29.172.161:6080/vnc_auto.html?token=f556dea2-125d-49ed-bfb7-55a9a7714b2e
  |
 +---+-+
 
 Is this expected behavior?
 
 Thanks,
 Danny
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] nova get-password does not seem to work

2014-10-15 Thread Vishvananda Ishaya
Get password only works if you have something in the guest generating the
encrypted password and posting it to the metadata server. Cloud-init for
windows (the primary use case) will do this for you. You can do something
similar for ubuntu using this script:

https://gist.github.com/vishvananda/4008762

If cirros has usermod and openssl installed it may work there as well. Note
that you can pass the script in as userdata (see the comments at the end).
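
For example (assuming the gist above is saved locally as set-password.sh and
the keypair's private key is ~/.ssh/id_rsa):

  nova boot --image cirros-0.3.2-x86_64-uec --flavor m1.tiny \
      --key-name cirros-key --user-data set-password.sh vm1
  # once the script inside the guest has posted the encrypted password:
  nova get-password vm1 ~/.ssh/id_rsa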

Vish

On Oct 15, 2014, at 8:02 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:

 Hi,
 
 I used devstack to deploy Juno OpenStack.
 
 I spin up an instance with cirros-0.3.2-x86_64-uec.
 
 By default, the username/password is cirros/cubswin:)
 
 When I execute the command “nova get-password”, nothing is returned.
 
 localadmin@qa4:/etc/nova$ nova show vm1
 +--++
 | Property | Value
   |
 +--++
 | OS-DCF:diskConfig| MANUAL   
   |
 | OS-EXT-AZ:availability_zone  | nova 
   |
 | OS-EXT-STS:power_state   | 1
   |
 | OS-EXT-STS:task_state| -
   |
 | OS-EXT-STS:vm_state  | active   
   |
 | OS-SRV-USG:launched_at   | 2014-10-15T14:48:04.00   
   |
 | OS-SRV-USG:terminated_at | -
   |
 | accessIPv4   |  
   |
 | accessIPv6   |  
   |
 | config_drive |  
   |
 | created  | 2014-10-15T14:47:56Z 
   |
 | flavor   | m1.tiny (1)  
   |
 | hostId   | 
 ea715752b11cf96b95f9742513a351d2d6571c4fdb76f497d64ecddb   |
 | id   | 1a3c487e-c3a3-4783-bd0b-e3c87bf22c3f 
   |
 | image| cirros-0.3.2-x86_64-uec 
 (1dda953b-9319-4c43-bd20-1ef75b491553) |
 | key_name | cirros-key   
   |
 | metadata | {}   
   |
 | name | vm1  
   |
 | os-extended-volumes:volumes_attached | []   
   |
 | private network  | 10.0.0.11
   |
 | progress | 0
   |
 | security_groups  | default  
   |
 | status   | ACTIVE   
   |
 | tenant_id| c8daf9bd6dda40a982b074322c08da7d 
   |
 | updated  | 2014-10-15T14:48:04Z 
   |
 | user_id  | 2cbbafae01404d4ebeb6e6fbacfa6546 
   |
 +--++
 localadmin@qa4:/etc/nova$ nova help get-password
 usage: nova get-password server [private-key]
 
 Get password for a server.
 
 Positional arguments:
   server   Name or ID of server.
   private-key  Private key (used locally to decrypt password) (Optional).
  When specified, the command displays the clear (decrypted) VM
  password. When not specified, the ciphered VM password is
  displayed.
 localadmin@qa4:/etc/nova$ nova get-password vm1 
   [NOTHING RETURNED]
 localadmin@qa4:/etc/nova$ 
 
 Am I missing something?
 
 Thanks,
 Danny
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread David Vossel


- Original Message -
 On Tue, 2014-10-14 at 19:52 -0400, David Vossel wrote:
  
  - Original Message -
   Ok, why are you so down on running systemd in a container?
  
  It goes against the grain.
  
  From a distributed systems view, we gain quite a bit of control by
  maintaining
  one service per container. Containers can be re-organised and re-purposed
  dynamically.
  If we have systemd trying to manage an entire stack of resources within a
  container,
  we lose this control.
  
  From my perspective a containerized application stack needs to be managed
  externally
  by whatever is orchestrating the containers to begin with. When we take a
  step back
  and look at how we actually want to deploy containers, systemd doesn't make
  much sense.
  It actually limits us in the long run.
  
  Also... recovery. Using systemd to manage a stack of resources within a
  single container
  makes it difficult for whatever is externally enforcing the availability of
  that container
  to detect the health of the container.  As it is now, the actual service is
  pid 1 of a
  container. If that service dies, the container dies. If systemd is pid 1,
  there can
  be all kinds of chaos occurring within the container, but the external
  distributed
  orchestration system won't have a clue (unless it invokes some custom
  health monitoring
  tools within the container itself, which will likely be the case someday.)
 
 I don't really think this is a good argument.  If you're using docker,
 docker is the management and orchestration system for the containers.

no, docker is a local tool for pulling images and launching containers.
Docker is not the distributed resource manager in charge of overseeing
what machines launch what containers and how those containers are linked
together.

 There's no dogmatic answer to the question should you run init in the
 container.

an init daemon might make sense to put in some containers where we have
a tightly coupled resource stack. There could be a use case where it would
make more sense to put these resources in a single container.

I don't think systemd is a good solution for the init daemon though. Systemd
attempts to handle recovery itself as if it has the entire view of the 
system. With containers, the system view exists outside of the containers.
If we put an internal init daemon within the containers, that daemon needs
to escalate internal failures. The easiest way to do this is to
have init die if it encounters a resource failure (init is pid 1, pid 1 exiting
causes container to exit, container exiting gets the attention of whatever
is managing the containers)

 The reason for not running init inside a container managed by docker is
 that you want the template to be thin for ease of orchestration and
 transfer, so you want to share as much as possible with the host.  The
 more junk you put into the container, the fatter and less agile it
 becomes, so you should probably share the init system with the host in
 this paradigm.

I don't think the local init system and containers should have anything
to do with one another.  I said this in a previous reply, I'm approaching
this problem from a distributed management perspective. The host's
init daemon only has a local view of the world. 

 
 Conversely, containers can be used to virtualize full operating systems.
 This isn't the standard way of doing docker, but LXC and OpenVZ by
 default do containers this way.  For this type of container, because you
 have a full OS running inside the container, you have to also have
 systemd (assuming it's the init system) running within the container.

Sure, if you want to do this, use systemd. I don't understand the use case
where this makes any sense though. For me this falls in the “yeah you can do it,
but why?” category.

-- Vossel

 
 James
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] nova get-password does not seem to work

2014-10-15 Thread Alessandro Pilotti
AFAIK cloud-init is not handling it ATM, while Cloudbase-Init supports it out 
of the box on Windows (and soon FreeBSD).

You need to deploy your instance with an SSH keypair and use HTTP metadata, 
required for POSTing back the encrypted password.
It does not work with ConfigDrive.

Alessandro


On 15 Oct 2014, at 18:17, Vishvananda Ishaya 
vishvana...@gmail.com wrote:

Get password only works if you have something in the guest generating the
encrypted password and posting it to the metadata server. Cloud-init for
windows (the primary use case) will do this for you. You can do something
similar for ubuntu using this script:

https://gist.github.com/vishvananda/4008762

If cirros has usermod and openssl installed it may work there as well. Note
that you can pass the script in as userdata (see the comments at the end).

Vish

On Oct 15, 2014, at 8:02 AM, Danny Choi (dannchoi) 
dannc...@cisco.com wrote:

Hi,

I used devstack to deploy Juno OpenStack.

I spin up an instance with cirros-0.3.2-x86_64-uec.

By default, the username/password is cirros/cubswin:)

When I execute the command “nova get-password”, nothing is returned.

localadmin@qa4:/etc/nova$ nova show vm1
+--++
| Property | Value  
|
+--++
| OS-DCF:diskConfig| MANUAL 
|
| OS-EXT-AZ:availability_zone  | nova   
|
| OS-EXT-STS:power_state   | 1  
|
| OS-EXT-STS:task_state| -  
|
| OS-EXT-STS:vm_state  | active 
|
| OS-SRV-USG:launched_at   | 2014-10-15T14:48:04.00 
|
| OS-SRV-USG:terminated_at | -  
|
| accessIPv4   |
|
| accessIPv6   |
|
| config_drive |
|
| created  | 2014-10-15T14:47:56Z   
|
| flavor   | m1.tiny (1)
|
| hostId   | 
ea715752b11cf96b95f9742513a351d2d6571c4fdb76f497d64ecddb   |
| id   | 1a3c487e-c3a3-4783-bd0b-e3c87bf22c3f   
|
| image| cirros-0.3.2-x86_64-uec 
(1dda953b-9319-4c43-bd20-1ef75b491553) |
| key_name | cirros-key 
|
| metadata | {} 
|
| name | vm1
|
| os-extended-volumes:volumes_attached | [] 
|
| private network  | 10.0.0.11  
|
| progress | 0  
|
| security_groups  | default
|
| status   | ACTIVE 
|
| tenant_id| c8daf9bd6dda40a982b074322c08da7d   
|
| updated  | 2014-10-15T14:48:04Z   
|
| user_id  | 2cbbafae01404d4ebeb6e6fbacfa6546   
|
+--++
localadmin@qa4:/etc/nova$ nova help get-password
usage: nova get-password server [private-key]

Get password for a server.

Positional arguments:
  server   Name or ID of server.
  private-key  Private key (used locally to decrypt password) (Optional).
 When specified, the command displays the clear (decrypted) VM
 password. When not specified, the ciphered VM password is
 displayed.
localadmin@qa4:/etc/nova$ nova get-password vm1
  [NOTHING RETURNED]

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Lars Kellogg-Stedman
On Wed, Oct 15, 2014 at 07:52:56AM -0700, Vishvananda Ishaya wrote:
 There must be a standard way
 to do this stuff or people will continue to build fat containers with
 all of their pet tools inside. This means containers will just be
 another incarnation of virtualization.

I wouldn't spend time worrying about that.  docker exec will be the
standard way as soon as it lands in a release version, which I think
will be happening imminently with 1.3.
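For the gdb-on-mysqld case discussed earlier in the thread, that would look
something like this (the container name and package manager are assumptions):

  docker exec -it mysql01 /bin/bash
  # then, inside the container:
  apt-get update && apt-get install -y gdb
  gdb -p <pid-of-mysqld>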

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Sahara testing for Juno

2014-10-15 Thread Yaroslav Lobankov
Hello everyone,

I have been testing Sahara for two weeks and would like to share some
results with you [1].
While testing I used new images built for the Juno release of Sahara.

Images for Vanilla plugin are available here [2].
Images for HDP plugin are available here [3].
Image for Spark plugin is available here [4].
Unfortunately, for now there are no links in the Sahara documentation to
download images for the CDH plugin.

A bunch of integration tests for Sahara you can find here [5].

[1]
https://docs.google.com/a/mirantis.com/spreadsheets/d/1F_Rti8UP5Avv8w_W8cf-5vkzosBdSzS8pn6Si_sjifw/edit#gid=0
[2]
http://docs.openstack.org/developer/sahara/userdoc/vanilla_plugin.html#vanilla-plugin
[3]
http://docs.openstack.org/developer/sahara/userdoc/hdp_plugin.html#images
[4]
http://docs.openstack.org/developer/sahara/userdoc/spark_plugin.html#images
[5] https://github.com/ylobankov/sahara_integration_tests

Regards,
Yaroslav Lobankov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Any ideas on why nova-novncproxy is failing to start on devstack?

2014-10-15 Thread Solly Ross
For future reference, it looks like you had an old version of websockify
installed that wasn't getting updated, for some reason.  Nova recently accepted
a patch that removed support for the old version of websockify.
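If it happens again, upgrading the stale copy in place should also do the
trick, e.g. (assuming that path is managed by the system pip):

  sudo pip install --upgrade 'websockify>=0.6.0,<0.7'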

Best Regards,
Solly Ross

- Original Message -
 From: Paul Michali (pcm) p...@cisco.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, October 10, 2014 9:40:54 PM
 Subject: Re: [openstack-dev] Any ideas on why nova-novncproxy is failing to 
 start on devstack?
 
 Well, I deleted /usr/local/lib/python2.7/dist-packages/websockify* and then
 stacked and it worked!
 
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 On Oct 10, 2014, at 8:46 PM, Paul Michali (pcm) p...@cisco.com wrote:
 
  I had a system with devstack, which I restacked with reclone, with the
  intention of then patching in my review diffs to update and test. Well,
  the stack is failing in n-novnc, with this message:
  
  openstack@devstack-33:~/devstack$ /usr/local/bin/nova-novncproxy
  --config-file /etc/nova/nova.conf --web /opt/stack/noVNC & echo $!
  >/opt/stack/status/stack/n-novnc.pid; fg || echo n-novnc failed to start
  | tee /opt/stack/status/stack/n-novnc.failure
  [1] 826
  /usr/local/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web
  /opt/stack/noVNC
  Traceback (most recent call last):
   File /usr/local/bin/nova-novncproxy, line 6, in module
 from nova.cmd.novncproxy import main
   File /opt/stack/nova/nova/cmd/novncproxy.py, line 29, in module
 from nova.console import websocketproxy
   File /opt/stack/nova/nova/console/websocketproxy.py, line 110, in
   module
 websockify.ProxyRequestHandler):
  AttributeError: 'module' object has no attribute 'ProxyRequestHandler'
  n-novnc failed to start
  
  The websockify package is installed:
  
  openstack@devstack-33:~/devstack$ pip show websockify
  ---
  Name: websockify
  Version: 0.5.1
  Location: /usr/local/lib/python2.7/dist-packages
  Requires: numpy
  
  However, the version required is:
  
  openstack@devstack-33:/opt/stack/nova$ grep websockify requirements.txt
   websockify>=0.6.0,<0.7
  
  Any ideas why is does not have the right version and how to best correct?
  
  Thanks!
  
  
  PCM (Paul Michali)
  
  MAIL …..…. p...@cisco.com
  IRC ……..… pcm_ (irc.freenode.com)
  TW ………... @pmichali
  GPG Key … 4525ECC253E31A83
  Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
  
  
  
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-10-15 Thread Ajaya Agrawal
Hi,

Would the new library support quotas at the domain level also? As it stands in
oslo-incubator, it does quota enforcement at the project level only. The
use case for this is quota enforcement across multiple projects. For example,
as a cloud provider, I would like my customer to create only X volumes
across all of their projects.

-Ajaya

Cheers,
Ajaya

On Wed, Oct 15, 2014 at 7:04 PM, Doug Hellmann d...@doughellmann.com
wrote:

 Sigh. Because I typed the wrong command. Thanks for pointing that out.

 I don’t see any instances of “quota” in openstack-common.conf files:

 $ grep quota */openstack-common.conf

 or in any projects under "openstack/":

 $ ls */*/openstack/common/quota.py
 ls: cannot access */*/openstack/common/quota.py: No such file or directory

 I don’t know where manila’s copy came from, but if it has been copied from
 the incubator by hand and then changed we should fix that up.

 Doug

 On Oct 15, 2014, at 5:28 AM, Valeriy Ponomaryov vponomar...@mirantis.com
 wrote:

 But why policy is being discussed on quota thread?

 On Wed, Oct 15, 2014 at 11:55 AM, Valeriy Ponomaryov 
 vponomar...@mirantis.com wrote:

 Manila project does use policy common code from incubator.

 Our small wrapper for it:
 https://github.com/openstack/manila/blob/8203c51081680a7a9dba30ae02d7c43d6e18a124/manila/policy.py


 On Wed, Oct 15, 2014 at 2:41 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 Doug,

 I totally agree with your findings on the policy module.
 Neutron already has some customizations there and we already have a
 few contributors working on syncing it back with oslo-incubator during the
 Kilo release cycle.

 However, my query was about the quota module.
 From what I gather it seems not a lot of projects use it:

 $ find . -name openstack-common.conf | xargs grep quota
 $

 Salvatore

 On 15 October 2014 00:34, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 14, 2014, at 12:31 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 Hi Doug,

 do you know if the existing quota oslo-incubator module has already
 some active consumers?
 In the meanwhile I've pushed a spec to neutron-specs for improving
 quota management there [1]


 It looks like a lot of projects are syncing the module:

 $ grep policy */openstack-common.conf

 barbican/openstack-common.conf:modules=gettextutils,jsonutils,log,local,timeutils,importutils,policy
 ceilometer/openstack-common.conf:module=policy
 cinder/openstack-common.conf:module=policy
 designate/openstack-common.conf:module=policy
 gantt/openstack-common.conf:module=policy
 glance/openstack-common.conf:module=policy
 heat/openstack-common.conf:module=policy
 horizon/openstack-common.conf:module=policy
 ironic/openstack-common.conf:module=policy
 keystone/openstack-common.conf:module=policy
 manila/openstack-common.conf:module=policy
 neutron/openstack-common.conf:module=policy
 nova/openstack-common.conf:module=policy
 trove/openstack-common.conf:module=policy
 tuskar/openstack-common.conf:module=policy

 I’m not sure how many are actively using it, but I wouldn’t expect them
 to copy it in if they weren’t using it at all.


 Now, I can either work on the oslo-incubator module and leverage it in
 Neutron, or develop the quota module in Neutron, and move it to
 oslo-incubator once we validate it with Neutron. The latter approach seems
 easier from a workflow perspective - as it avoids the intermediate steps of
 moving code from oslo-incubator to neutron. On the other hand it will delay
 adoption in oslo-incubator.


 The policy module is up for graduation this cycle. It may end up in its
 own library, to allow us to build a review team for the code more easily
 than if we put it in with some of the other semi-related modules like the
 server code. We’re still working that out [1], and if you expect to make a
 lot of incompatible changes we should delay graduation to make that 
 simpler.

 Either way, since we have so many consumers, I think it would be easier
 to have the work happen in Oslo somewhere so we can ensure those changes
 are useful to and usable by all of the existing consumers.

 Doug

 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals


 What's your opinion?

 Regards,
 Salvatore

 [1] https://review.openstack.org/#/c/128318/

 On 8 October 2014 18:52, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 8, 2014, at 7:03 AM, Davanum Srinivas dava...@gmail.com
 wrote:

  Salvatore, Joe,
 
  We do have this at the moment:
 
 
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/quota.py
 
  — dims

 If someone wants to drive creating a useful library during kilo,
 please consider adding the topic to the etherpad we’re using to plan 
 summit
 sessions and then come participate in the Oslo meeting this Friday 16:00
 UTC.

 https://etherpad.openstack.org/p/kilo-oslo-summit-topics

 Doug

 
  On Wed, Oct 8, 2014 at 2:29 AM, Salvatore Orlando 
 sorla...@nicira.com wrote:
 
  On 8 October 2014 04:13, Joe Gordon joe.gord...@gmail.com 

[openstack-dev] [Octavia] Octavia Meeting Agenda 10-15-2014

2014-10-15 Thread Brandon Logan
I've added an Agenda for the meeting today.

https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda

May not get to all of it but won't be a big deal.

Thanks,
Brandon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API recommendation

2014-10-15 Thread Andrew Laski


On 10/15/2014 11:49 AM, Kevin L. Mitchell wrote:

Now that we have an API working group forming, I'd like to kick off some
discussion over one point I'd really like to see our APIs using (and
I'll probably drop it in to the repo once that gets fully set up): the
difference between synchronous and asynchronous operations.  Using nova
as an example—right now, if you kick off a long-running operation, such
as a server create or a reboot, you watch the resource itself to
determine the status of the operation.  What I'd like to propose is that
future APIs use a separate operation resource to track status
information on the particular operation.  For instance, if we were to
rebuild the nova API with this idea in mind, booting a new server would
give you a server handle and an operation handle; querying the server
resource would give you summary information about the state of the
server (running, not running) and pending operations, while querying the
operation would give you detailed information about the status of the
operation.  As another example, issuing a reboot would give you the
operation handle; you'd see the operation in a queue on the server
resource, but the actual state of the operation itself would be listed
on that operation.  As a side effect, this would allow us (not require,
though) to queue up operations on a resource, and allow us to cancel an
operation that has not yet been started.

Thoughts?


Something like https://review.openstack.org/#/c/86938/ ?

I know that Jay has proposed a similar thing before as well.  I would 
love to get some feedback from others on this as it's something I'm 
going to propose for Nova in Kilo.
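
To make that concrete, a client talking to an API shaped this way might look
something like the sketch below; the URLs, field names and states are invented
for illustration and are not an agreed design.

    # Boot a server, then poll the separate operation resource for status.
    import time
    import requests

    resp = requests.post('https://compute.example.com/v3/servers',
                         json={'server': {'name': 'web-1', 'flavorRef': '2'}})
    body = resp.json()
    server_url = body['server']['links'][0]['href']        # summary state
    operation_url = body['operation']['links'][0]['href']  # detailed status

    # Progress, errors and cancellation would all hang off the operation,
    # not the server resource itself.
    while True:
        op = requests.get(operation_url).json()['operation']
        if op['state'] in ('completed', 'failed', 'cancelled'):
            break
        time.sleep(2)
    print('create finished with state: %s' % op['state'])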


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Clint Byrum
Excerpts from Vishvananda Ishaya's message of 2014-10-15 07:52:34 -0700:
 
 On Oct 14, 2014, at 1:12 PM, Clint Byrum cl...@fewbar.com wrote:
 
  Excerpts from Lars Kellogg-Stedman's message of 2014-10-14 12:50:48 -0700:
  On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:
  I think the above strategy is spot on. Unfortunately, that's not how the
  Docker ecosystem works.
  
  I'm not sure I agree here, but again nobody is forcing you to use this
  tool.
  
  operating system that the image is built for. I see you didn't respond to 
  my
  point that in your openstack-containers environment, you end up with 
  Debian
  *and* Fedora images, since you use the official MySQL dockerhub image. 
  And
  therefore you will end up needing to know sysadmin specifics (such as how
  network interfaces are set up) on multiple operating system distributions.
  
  I missed that part, but ideally you don't *care* about the
  distribution in use.  All you care about is the application.  Your
  container environment (docker itself, or maybe a higher level
  abstraction) sets up networking for you, and away you go.
  
  If you have to perform system administration tasks inside your
  containers, my general feeling is that something is wrong.
  
  
  Speaking as a curmudgeon ops guy from back in the day.. the reason
  I choose the OS I do is precisely because it helps me _when something
  is wrong_. And the best way an OS can help me is to provide excellent
  debugging tools, and otherwise move out of the way.
  
  When something _is_ wrong and I want to attach GDB to mysqld in said
  container, I could build a new container with debugging tools installed,
  but that may lose the very system state that I'm debugging. So I need to
  run things inside the container like apt-get or yum to install GDB.. and
  at some point you start to realize that having a whole OS is actually a
  good thing even if it means needing to think about a few more things up
  front, such as which OS will I use? and what tools do I need installed
  in my containers?
  
  What I mean to say is, just grabbing off the shelf has unstated
  consequences.
 
 If this is how people are going to use and think about containers, I would
 submit they are a huge waste of time. The performance value they offer is
 dramatically outweighed by the flexibility and existing tooling that exists
 for virtual machines. As I state in my blog post[1] if we really want to
 get value from containers, we must convert to the single application per
 container view. This means having standard ways of doing the above either
 on the host machine or in a debugging container that is as easy (or easier)
 than the workflow you mention. There are not good ways to do this yet, and
 the community hand-waves it away, saying things like, “well you could …”.
 “You could” isn’t good enough. The result is that a lot of people that are
 using containers today are doing fat containers with a full os.
 

I think we really agree.

What the container universe hasn't worked out is all the stuff that the
distros have worked out for a long time now: consistency.

I think it would be a good idea for containers' filesystem contents to
be a whole distro. What's at question in this thread is what should be
running. If we can just chroot into the container's FS and run apt-get/yum
install our tools, and then nsenter and attach to the running process,
then huzzah: I think we have best of both worlds.

To the container makers: consider that things can and will go wrong,
and the answer may already exist as a traditional tool, and not be
“restart the container”.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API recommendation

2014-10-15 Thread Michael McCune


- Original Message -
 Thoughts?


I like this idea. From my experience with the Sahara project I think there is 
definite opportunity for this mechanic especially with regards to cluster 
creation and job executions.

regards,
mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API recommendation

2014-10-15 Thread Sam Harwell
Hi Kevin,

In an asynchronous environment that may have multiple clients sending commands 
to the same resource in a service, an operation-type resource is a 
fundamental prerequisite to creating client applications which report the 
status of ongoing operations. Without this resource, there is no way to tell a 
user whether an operation they attempted succeeded or failed.

Due to the importance of allowing users to see the results of individual 
operations on resources, I would treat the other features potentially provided 
by this type of resource, such as queuing or canceling operations, separately 
from the fundamental status reporting behavior.

Thank you,
Sam Harwell

-Original Message-
From: Kevin L. Mitchell [mailto:kevin.mitch...@rackspace.com] 
Sent: Wednesday, October 15, 2014 10:49 AM
To: openstack-dev
Subject: [openstack-dev] [api] API recommendation

Now that we have an API working group forming, I'd like to kick off some 
discussion over one point I'd really like to see our APIs using (and I'll 
probably drop it in to the repo once that gets fully set up): the difference 
between synchronous and asynchronous operations.  Using nova as an 
example—right now, if you kick off a long-running operation, such as a server 
create or a reboot, you watch the resource itself to determine the status of 
the operation.  What I'd like to propose is that future APIs use a separate 
operation resource to track status information on the particular operation.  
For instance, if we were to rebuild the nova API with this idea in mind, 
booting a new server would give you a server handle and an operation handle; 
querying the server resource would give you summary information about the state 
of the server (running, not running) and pending operations, while querying the 
operation would give you detailed information about the status of the 
operation.  As another example, issuing a reboot would give you the operation 
handle; you'd see the operation in a queue on the server resource, but the 
actual state of the operation itself would be listed on that operation.  As a 
side effect, this would allow us (not require,
though) to queue up operations on a resource, and allow us to cancel an 
operation that has not yet been started.

Thoughts?
--
Kevin L. Mitchell kevin.mitch...@rackspace.com Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Russell Bryant
On 10/13/2014 05:59 PM, Russell Bryant wrote:
 Nice timing.  I was working on a blog post on this topic.

which is now here:

http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Allow hostname for nodes in Ring

2014-10-15 Thread Pete Zaitcev
On Fri, 10 Oct 2014 04:56:55 +
Osanai, Hisashi osanai.hisa...@jp.fujitsu.com wrote:

 Today the following patch was abandoned and I contacted with the author, 
 so I would like to take it over if nobody else is chafing to take it.
 Is it OK?
 
 https://review.openstack.org/#/c/80421/
 
 If it is OK, I will proceed it with following procedure.
 (1) Open new bug report (there is no bug report for this)
 I'm not sure that I should write a BP instead of a bug report.
 (2) Make a patch based on the current patch on gerrit

If the author agrees or is ambivalent about it, you are free to re-use
the old Change ID.

And you're always free to post your patch anew.

I don't know if the bug report is all that necessary or useful.
The scope of the problem is well defined without, IMHO.

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Phillip Toohill
No worries :)

Could you possibly clarify what you mean by the FLIP hosted directly on the LB?

I'm currently assuming the FLIP pools are designated in Neutron at this point 
and we would simply be associating with the VIP port; I'm unsure of the meaning of 
hosting the FLIP directly on the LB.

Thank you for the responses! There is definitely a more thought-out discussion to 
be had, and glad these ideas are being brought up now rather than later.
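
For reference, the existing two-step flow discussed in the quoted messages below
(a private VIP first, then a FLIP DNATed onto it) is roughly the following with
python-neutronclient; the IDs and credentials are placeholders and this is only a
sketch of today's behaviour, not the proposed API:

    # Associate a floating IP from an external network (the FLIP pool)
    # with the neutron port that carries the VIP's private IP. Neutron
    # then programs the DNAT on the router/gateway.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    vip_port_id = 'PORT-ID-OF-THE-VIP'          # placeholder
    external_net_id = 'EXTERNAL-NETWORK-ID'     # placeholder FLIP pool

    flip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': external_net_id,
                        'port_id': vip_port_id}})
    print(flip['floatingip']['floating_ip_address'])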

From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Wednesday, October 15, 2014 9:38 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

I felt guilty after reading Vijay B.’s reply :).
My apologies to have replied in brief, here are my thoughts in detail.

Currently LB configuration exposed via floating IP is a 2 step operation.
User has to first “create a VIP with a private IP” and then “creates a FLIP and 
assigns FLIP to private VIP” which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make DNATing operation as 
part of VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.   Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.   Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Here's some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand there's other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sharing

-diagrams are draw.io based and can be opened from within 
Drive by
selecting the appropriate application.

On 10/7/14 2:25 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:

I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then return to the
user the vip info.  This means the user will know the VIP before the
driver even finishes creating the load balancer.

So if Octavia is just going to create a floating IP and then associate
that floating IP to the neutron port, there is the problem of the user
not ever seeing the correct VIP (which would be the floating iP).

So really, we need to have a very detailed discussion on what the
options are for us to get this to work for those of us intending to use
floating ips as VIPs while also working for those only requiring a
neutron port.  I'm pretty sure this will require changing the way 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread David Vossel


- Original Message -
 Excerpts from Vishvananda Ishaya's message of 2014-10-15 07:52:34 -0700:
  
  On Oct 14, 2014, at 1:12 PM, Clint Byrum cl...@fewbar.com wrote:
  
   Excerpts from Lars Kellogg-Stedman's message of 2014-10-14 12:50:48
   -0700:
   On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:
   I think the above strategy is spot on. Unfortunately, that's not how
   the
   Docker ecosystem works.
   
   I'm not sure I agree here, but again nobody is forcing you to use this
   tool.
   
   operating system that the image is built for. I see you didn't respond
   to my
   point that in your openstack-containers environment, you end up with
   Debian
   *and* Fedora images, since you use the official MySQL dockerhub
   image. And
   therefore you will end up needing to know sysadmin specifics (such as
   how
   network interfaces are set up) on multiple operating system
   distributions.
   
   I missed that part, but ideally you don't *care* about the
   distribution in use.  All you care about is the application.  Your
   container environment (docker itself, or maybe a higher level
   abstraction) sets up networking for you, and away you go.
   
   If you have to perform system administration tasks inside your
   containers, my general feeling is that something is wrong.
   
   
   Speaking as a curmudgeon ops guy from back in the day.. the reason
   I choose the OS I do is precisely because it helps me _when something
   is wrong_. And the best way an OS can help me is to provide excellent
   debugging tools, and otherwise move out of the way.
   
   When something _is_ wrong and I want to attach GDB to mysqld in said
   container, I could build a new container with debugging tools installed,
   but that may lose the very system state that I'm debugging. So I need to
   run things inside the container like apt-get or yum to install GDB.. and
   at some point you start to realize that having a whole OS is actually a
   good thing even if it means needing to think about a few more things up
   front, such as which OS will I use? and what tools do I need installed
   in my containers?
   
   What I mean to say is, just grabbing off the shelf has unstated
   consequences.
  
  If this is how people are going to use and think about containers, I would
  submit they are a huge waste of time. The performance value they offer is
  dramatically outweighed by the flexibility and existing tooling that exists
  for virtual machines. As I state in my blog post[1] if we really want to
  get value from containers, we must convert to the single application per
  container view. This means having standard ways of doing the above either
  on the host machine or in a debugging container that is as easy (or easier)
  than the workflow you mention. There are not good ways to do this yet, and
  the community hand-waves it away, saying things like, “well you could …”.
  “You could” isn’t good enough. The result is that a lot of people that are
  using containers today are doing fat containers with a full os.
  
 
 I think we really agree.
 
 What the container universe hasn't worked out is all the stuff that the
 distros have worked out for a long time now: consistency.

I agree we need consistency. I have an idea. What if we developed an entrypoint
script standard...

Something like LSB init scripts except tailored towards the container use case.
The primary difference would be that the 'start' action of this new standard
wouldn't fork. Instead 'start' would be pid 1. The 'status' could be checked
externally by calling the exact same entry point script to invoke the 'status'
function.

This standard would lock us into the 'one service per container' concept while
giving us the ability to standardize on how the container is launched and 
monitored.

If we all conformed to something like this, docker could even extend the 
standard
so health checks could be performed using the docker cli tool.

docker status <container id>

Internally docker would just be doing a nsenter into the container and calling
the internal status function in our init script standard.

We already have docker start <container> and docker stop <container>. Being able
to generically call something like docker status <container> and have that 
translate
into some service specific command on the inside of the container would be kind 
of
neat.

Tools like kubernetes could use this functionality to poll a container's health 
and
be able to detect issues occurring within the container that don't necessarily
involve the container's service failing.

Does anyone else have any interest in this? I have quite a bit of init 
script type
standard experience. It would be trivial for me to define something like this 
for us
to begin discussing.
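
As a strawman, the convention could be as small as the sketch below, written in
Python purely for illustration; the service command and health check are examples,
not part of any existing standard. 'start' exec()s the service so it stays pid 1
(no forking), and 'status' reports health through its exit code so it can be called
externally.

    #!/usr/bin/env python
    # Illustrative container entrypoint: start (pid 1, no fork) and status.
    import os
    import socket
    import sys

    SERVICE_CMD = ['/usr/sbin/mysqld']       # example service, assumption
    CHECK_ADDR = ('127.0.0.1', 3306)         # example health check, assumption

    def start():
        # Replace this process with the service so the service is pid 1.
        os.execvp(SERVICE_CMD[0], SERVICE_CMD)

    def status():
        try:
            socket.create_connection(CHECK_ADDR, timeout=2).close()
            return 0                         # healthy
        except socket.error:
            return 1                         # unhealthy

    if __name__ == '__main__':
        action = sys.argv[1] if len(sys.argv) > 1 else 'start'
        if action == 'start':
            start()
        elif action == 'status':
            sys.exit(status())
        else:
            sys.stderr.write('usage: entrypoint [start|status]\n')
            sys.exit(2)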

-- Vossel

 I think it would be a good idea for containers' filesystem contents to
 be a whole distro. What's at question in this thread is what should be
 running. If we can just chroot into the container's FS 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Lars Kellogg-Stedman
On Wed, Oct 15, 2014 at 01:50:08PM -0400, David Vossel wrote:
 Something like LSB init scripts except tailored towards the container use 
 case.
 The primary difference would be that the 'start' action of this new standard
 wouldn't fork. Instead 'start' would be pid 1. The 'status' could be checked
 externally by calling the exact same entry point script to invoke the 'status'
 function.

With the 1.3 release, which introduces docker exec, you could just
about get there.  Rather than attempting to introspect the container
to find the entrypoint script -- which might not even exist -- I would
say standardize on some top level paths (e.g., '/status') that can be
used to run a status check, and leave the implementation of those
paths up to the image (maybe they're scripts, maybe they're binaries,
just as long as they are executable).

Then your check would boil down to:

  docker exec <container id> /status

The reason why I am trying to avoid assuming some specially
constructed entrypoint script is that many images will simply not have
one -- they simply provide an initial command via CMD. Adding a
/status script or similar in this case is very simple:

   FROM original_image
   ADD status /status

Doing this via an entrypoint script would be a bit more complicated:

- You would have to determine whether or not the original image had an
  existing entrypoint script.
- If so you would need to wrap it or replicate the functionality.
- Some images may have entrypoint scripts that already provide
  subcommand like functionality (docker run someimage keyword,
  where keyword is parsed by the entrypoint script) and might not be
  compatible with an entrypoint-based status check.

Otherwise, I think establishing a best practice mechanism for executing
in-container checks is a great idea.
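
The external side of the check then stays tiny as well; something like this
(sketch only, the container name is a placeholder) could sit in whatever is
supervising the containers:

    # Poll a container's /status via docker exec (docker >= 1.3).
    import subprocess
    import time

    CONTAINER = 'mariadb'    # placeholder container name

    while True:
        rc = subprocess.call(['docker', 'exec', CONTAINER, '/status'])
        if rc != 0:
            print('container %s unhealthy (rc=%d)' % (CONTAINER, rc))
            # escalate here: restart, reschedule, alert, ...
        time.sleep(10)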

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



pgpD1P2Vb3KMF.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Elections] Vote Vote Vote in the TC election!

2014-10-15 Thread Anita Kuno
We are coming down the the last day plus hours for voting in the TC
election.

Search your gerrit preferred email address[2] for the following subject:
Poll: OpenStack Technical Committee (TC) Election - October 2014

That is your ballot and links you to the voting application. Please
vote. If you have voted, please encourage your colleagues to vote.

Candidate statements are linked to the names of all confirmed
candidates:
https://wiki.openstack.org/wiki/TC_Elections_October_2014#Confirmed_Candidates

To help people compare candidates on an even basis, this time we
introduced questions, which all candidates have taken the time to
answer. Please read the responses to help you select your 6 favourite
candidates:
https://wiki.openstack.org/wiki/TC_Elections_October_2014#Responses_to_TC_Election_Questions

What to do if you don't see the email and have a commit in at least one
of the official programs projects[1]:
 * check the trash of your gerrit Preferred Email address[2], in
case it went into trash or spam
 * wait a bit and check again, in case your email server is a bit slow
 * find the sha of at least one commit from the program project
repos[1] and email me and Tristan[0]. If we can confirm that you are
entitled to vote, we will add you to the voters list and you will be
emailed a ballot.

Please vote!

Thank you,
Anita

[0] Anita: anteaya at anteaya dot info
 Tristan: tristan dot cacqueray at enovance dot com
[1]
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=sept-2014-elections
[2] Sign into review.openstack.org: Go to Settings  Contact
Information. Look at the email listed as your Preferred Email. That is
where the ballot has been sent.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Steven Dake

On 10/14/2014 04:52 PM, David Vossel wrote:


- Original Message -

Ok, why are you so down on running systemd in a container?

It goes against the grain.

 From a distributed systems view, we gain quite a bit of control by maintaining
one service per container. Containers can be re-organised and re-purposed 
dynamically.
If we have systemd trying to manage an entire stack of resources within a 
container,
we lose this control.

 From my perspective a containerized application stack needs to be managed 
externally
by whatever is orchestrating the containers to begin with. When we take a step 
back
and look at how we actually want to deploy containers, systemd doesn't make 
much sense.
It actually limits us in the long run.

Also... recovery. Using systemd to manage a stack of resources within a single 
container
makes it difficult for whatever is externally enforcing the availability of 
that container
to detect the health of the container.  As it is now, the actual service is pid 
1 of a
container. If that service dies, the container dies. If systemd is pid 1, there 
can
be all kinds of chaos occurring within the container, but the external 
distributed
orchestration system won't have a clue (unless it invokes some custom health 
monitoring
tools within the container itself, which will likely be the case someday.)
I tend to agree systemd makes healthchecking and recovery escalation 
more difficult/impossible.  At a minimum to do escalation with systemd, 
the external orch system (k8s) needs to run code in the container to 
healthcheck the services.  This can be done today with k8s, but I fail 
to see why it is necessary to involve the complexity of systemd.  The 
systemd system is pretty sweet for multiple processes in one container, 
but systemd doesn't meld well with the one application per container 
model that appears to be best practice at this time.


Regards,
-steve


-- Vossel



Pacemaker works, but it's kind of a pain to set up compared to just yum installing
a few packages and setting init to systemd. There are some benefits for
sure, but if you have to force all the docker components onto the same
physical machine anyway, why bother with the extra complexity?

Thanks,
Kevin


From: David Vossel [dvos...@redhat.com]
Sent: Tuesday, October 14, 2014 3:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns

- Original Message -

Same thing works with cloud init too...


I've been waiting on systemd working inside a container for a while. It seems
to work now.

oh no...


The idea being it's hard to write a shell script to get everything up and
running with all the interactions that may need to happen. The init
system's
already designed for that. Take a nova-compute docker container for
example,
you probably need nova-compute, libvirt, neutron-openvswitch-agent, and the
ceilometer-agent all baked in. Writing a shell script to get it all
started
and shut down properly would be really ugly.

You could split it up into 4 containers and try and ensure they are
coscheduled and all the pieces are able to talk to each other, but why?
Putting them all in one container with systemd starting the subprocesses is
much easier and shouldn't have many drawbacks. The components code is
designed and tested assuming the pieces are all together.

What you need is a dependency model that is enforced outside of the
containers. Something
that manages the order containers are started/stopped/recovered in. This
allows
you to isolate your containers with 1 service per container, yet still
express that
container with service A needs to start before container with service B.

Pacemaker does this easily. There's even a docker resource-agent for
Pacemaker now.
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/docker

-- Vossel

ps. don't run systemd in a container... If you think you should, talk to me
first.


You can even add a ssh server in there easily too and then ansible in to do
whatever other stuff you want to do to the container like add other
monitoring and such

Ansible or puppet or whatever should work better in this arrangement too
since existing code assumes you can just systemctl start foo;

Kevin

From: Lars Kellogg-Stedman [l...@redhat.com]
Sent: Tuesday, October 14, 2014 12:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns

On Tue, Oct 14, 2014 at 02:45:30PM -0400, Jay Pipes wrote:

With Docker, you are limited to the operating system of whatever the
image
uses.

See, that's the part I disagree with.  What I was saying about ansible
and puppet in my email is that I think the right thing to do is take
advantage of those tools:

   FROM ubuntu

   RUN apt-get install ansible
   COPY my_ansible_config.yaml /my_ansible_config.yaml
   RUN ansible 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Steven Dake

On 10/14/2014 05:44 PM, Angus Lees wrote:

On Tue, 14 Oct 2014 07:51:54 AM Steven Dake wrote:

Angus,

On 10/13/2014 08:51 PM, Angus Lees wrote:

I've been reading a bunch of the existing Dockerfiles, and I have two
humble requests:


1. It would be good if the interesting code came from python
sdist/bdists
rather than rpms.

This will make it possible to rebuild the containers using code from a
private branch or even unsubmitted code, without having to go through a
redhat/rpm release process first.

I care much less about where the python dependencies come from. Pulling
them from rpms rather than pip/pypi seems like a very good idea, given
the relative difficulty of caching pypi content and we also pull in the
required C, etc libraries for free.


With this in place, I think I could drop my own containers and switch to
reusing kolla's for building virtual testing environments.  This would
make me happy.

I've captured this requirement here:
https://blueprints.launchpad.net/kolla/+spec/run-from-master

I also believe it would be interesting to run from master or a stable
branch for CD.  Unfortunately I'm still working on the nova-compute
docker code, but if someone comes along and picks up that blueprint, I
expect it will get implemented :)  Maybe that could be you.

Yeah I've already got a bunch of working containers that pull from master[1],
but I've been thinking I should change that to use an externally supplied
bdist.  The downside is you quickly end up wanting a docker container to build
your deployment docker container.  I gather this is quite a common thing to
do, but I haven't found the time to script it up yet.

[1] https://github.com/anguslees/kube-openstack/tree/master/docker

I could indeed work on this, and I guess I was gauging the level of enthusiasm
within kolla for such a change.  I don't want to take time away from the
alternative I have that already does what I need only to push uphill to get it
integrated :/
There would be no uphill push.  For milestone #2, I am already going to 
reorganize the docker directory to support centos+rdo as an alternative 
to fedora+rdo.  Fedora+master is just another directory in this model 
(or Ubuntu + master if you want that choice as well). IMO the more 
choice about deployment platforms the better, especially a master model 
(or more likely a stable branch model).


Regards
-steve


2. I think we should separate out “run the server” from “do once-off
setup”.

Currently the containers run a start.sh that typically sets up the
database, runs the servers, creates keystone users and sets up the
keystone catalog.  In something like k8s, the container will almost
certainly be run multiple times in parallel and restarted numerous times,
so all those other steps go against the service-oriented k8s ideal and
are at-best wasted.

I suggest making the container contain the deployed code and offer a few
thin scripts/commands for entrypoints.  The main
replicationController/pod _just_ starts the server, and then we have
separate pods (or perhaps even non-k8s container invocations) that do
initial database setup/migrate, and post- install keystone setup.

The server may not start before the configuration of the server is
complete.  I guess I don't quite understand what you indicate here when
you say we have separate pods that do initial database setup/migrate.
Do you mean have dependencies in some way, or for eg:

glance-registry-setup-pod.yaml - the glance registry pod descriptor
which sets up the db and keystone
glance-registry-pod.yaml - the glance registry pod descriptor which
starts the application and waits for db/keystone setup

and start these two pods as part of the same selector (glance-registry)?

That idea sounds pretty appealing although probably won't be ready to go
for milestone #1.

So the way I do it now, I have a replicationController that starts/manages
(eg) nova-api pods[2].  I separately have a nova-db-sync pod[3] that basically
just runs nova-manage db sync.

I then have a simple shell script[4] that starts them all at the same time.
The nova-api pods crash and get restarted a few times until the database has
been appropriately configured by the nova-db-sync pod, and then they're fine and
start serving.

When nova-db-sync exits successfully, the pod just sits in state terminated
thanks to restartPolicy: onFailure.  Sometime later I can delete the
terminated nova-db-sync pod, but it's also harmless if I just leave it or even
if it gets occasionally re-run as part of some sort of update.


[2] 
https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-api-repcon.yaml
[3] 
https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-db-sync-pod.yaml
[4] https://github.com/anguslees/kube-openstack/blob/master/kubecfg-create.sh



I'm open to whether we want to make these as lightweight/independent as
possible (every daemon in an individual container), or limit it to one per
project (eg: run nova-api, nova-conductor, 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Steven Dake

On 10/14/2014 06:10 PM, Clint Byrum wrote:

Excerpts from Fox, Kevin M's message of 2014-10-14 17:40:16 -0700:

I'm not arguing that everything should be managed by one systemd, I'm
just saying, for certain types of containers, a single docker container
with systemd in it might be preferable to trying to slice it unnaturally
into several containers.


Can you be more concrete? Most of the time things that need to be in
the same machine tend to have some kind of controller already. Meanwhile
it is worth noting that you can have one _image_, but several containers
running from that one image. So if you're trying to run a few pieces of
Neutron, for instance, you can have multiple containers each from that
one neutron image.


Systemd has invested a lot of time/effort to be able to relaunch failed
services, support spawning and maintaining unix sockets and services
across them, etc, that you'd have to push out of and across docker
containers. All of that can be done, but why reinvent the wheel? Like you
said, pacemaker can be made to make it all work, but I have yet to see
a way to deploy pacemaker services anywhere near as easy as systemd+yum
makes it. (Thanks be to redhat. :)


There are some of us who are rather annoyed that systemd tries to do
this in such a naive way and assumes everyone will want that kind of
management. It's the same naiveté that leads people to think if they
make their app server systemd service depend on their mysql systemd
service that this will eliminate startup problems. Once you have more
than one server, it doesn't work.

Kubernetes adds a distributed awareness of the containers that makes it
uniquely positioned to do most of those jobs much better than systemd
can.


The answer seems to be, it's not dockerish. That's ok. I just wanted to
understand the issue for what it is. If there is a really good reason for
not wanting to do it, or that it's just not the way things are done. I've
had kind of the opposite feeling regarding docker containers. Docker used
to do very bad things when killing the container: nasty if you wanted
your database not to go corrupt. Killing pid 1 is a bit sketchy, then
forcing the container down after 10 seconds was particularly bad. Having
something like systemd in place allows the database to be notified, then
shut down properly. Sure you can script up enough shell to make this work,
but you have to do some difficult code, over and over again... Docker
has gotten better more recently but it still makes me a bit nervous
using it for stateful things.


What I think David was saying was that the process you want to run under
systemd is the pid 1 of the container. So if killing that would be bad,
it would also be bad to stop the systemd service, which would do the
same thing: send it SIGTERM. If that causes all hell to break loose, the
stateful thing isn't worth a dime, because it isn't crash safe.


As for recovery, systemd can do the recovery too. I'd argue at this
point in time, I'd expect systemd recovery to probably work better
than some custom shell scripts when it comes to doing the right thing
recovering at bring-up. The other thing is, recovery is not just about
pid 1 going away. Often it sticks around and other badness is going
on. It's a way to know things are bad, but you can't necessarily rely on
it to know the container's healthy. You need more robust checks for that.

I think one thing people like about Kubernetes is that when a container
crashes, and needs to be brought back up, it may actually be brought
up on a different, less busy, more healthy host. I could be wrong, or
that might be in the FUTURE section. But the point is, recovery and
start-up are not things that always want to happen on the same box.
Yes, FUTURE, as this doesn't exist today.  There is a significant runway
for enhancing the availability management of kubernetes natively.


Regards
-steve





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Vijay Venkatachalam
 I'm unsure of the meaning of hosting the FLIP directly on the LB.

There can be LB appliances (usually physical appliances) that sit at the edge 
and are connected to receive floating IP traffic.

In such a case, the VIP/Virtual Server with FLIP  can be configured in the LB 
appliance.
Meaning, LB appliance is now the “owner” of the FLIP and will be responding to 
ARPs.


Thanks,
Vijay V.

From: Phillip Toohill [mailto:phillip.tooh...@rackspace.com]
Sent: 15 October 2014 23:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

No worries :)

Could you possibly clarify what you mean by the FLIP hosted directly on the LB?

I'm currently assuming the FLIP pools are designated in Neutron at this point 
and we would simply be associating with the VIP port; I'm unsure of the meaning of 
hosting the FLIP directly on the LB.

Thank you for the responses! There is definitely a more thought-out discussion to 
be had, and glad these ideas are being brought up now rather than later.

From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Wednesday, October 15, 2014 9:38 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

I felt guilty after reading Vijay B.'s reply :).
My apologies to have replied in brief, here are my thoughts in detail.

Currently LB configuration exposed via floating IP is a 2 step operation.
User has to first create a VIP with a private IP and then creates a FLIP and 
assigns FLIP to private VIP which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make DNATing operation as 
part of VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.  Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.  Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Here's some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand there's other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sharing

-diagrams are draw.io based and can be opened from within 
Drive by
selecting the appropriate application.

On 10/7/14 2:25 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:

I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then 

Re: [openstack-dev] [all] Resolving a possible meeting conflict

2014-10-15 Thread Kyle Mestery
Thanks Matt, we've cleaned it up and removed it now.

On Wed, Oct 15, 2014 at 9:24 AM, Matthew Farina m...@mattfarina.com wrote:
 Kyle, you can assume the PHP meeting is no longer going on. Feel free to
 clean up the meetings page.

 On Mon, Oct 13, 2014 at 9:33 AM, Kyle Mestery mest...@mestery.com wrote:

 Hi all:

 I've setup a weekly meeting for the neutron-drivers team on IRC at
 1500UTC [1], but I noticed it conflicts with the PHP SDK IRC meeting
 at 1530UTC [2]. However, I see the PHP SDK IRC meeting hasn't happened
 since August 6 [3]. Can I assume this meeting is no longer going on?
 If so, can I clean it up from the meetings page?

 Thanks!
 Kyle

 [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
 [2] https://wiki.openstack.org/wiki/Meetings#PHP_SDK_Team_Meeting
 [3] http://eavesdrop.openstack.org/meetings/openstack_sdk_php/2014/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Phillip Toohill
Ah, this makes sense. Guess I'm wondering more how that’s configured and whether 
it utilizes Neutron at all, and if it does, how it configures that.

I have some more research to do, it seems ;)

Thanks for the clarification

From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Wednesday, October 15, 2014 1:33 PM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

 I'm unsure of the meaning of hosting the FLIP directly on the LB.

There can be LB appliances (usually physical appliances) that sit at the edge 
and are connected to receive floating IP traffic.

In such a case, the VIP/Virtual Server with FLIP  can be configured in the LB 
appliance.
Meaning, LB appliance is now the “owner” of the FLIP and will be responding to 
ARPs.


Thanks,
Vijay V.

From: Phillip Toohill [mailto:phillip.tooh...@rackspace.com]
Sent: 15 October 2014 23:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

No worries :)

Could you possibly clarify what you mean by the FLIP hosted directly on the LB?

I'm currently assuming the FLIP pools are designated in Neutron at this point 
and we would simply be associating with the VIP port; I'm unsure of the meaning of 
hosting the FLIP directly on the LB.

Thank you for the responses! There is definitely a more thought-out discussion to 
be had, and glad these ideas are being brought up now rather than later.

From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Wednesday, October 15, 2014 9:38 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

I felt guilty after reading Vijay B.’s reply :).
My apologies to have replied in brief, here are my thoughts in detail.

Currently LB configuration exposed via floating IP is a 2 step operation.
User has to first “create a VIP with a private IP” and then “creates a FLIP and 
assigns FLIP to private VIP” which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make DNATing operation as 
part of VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.  Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.  Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Here's some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand there's other use cases not shown 

Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Florian Haas
On Wed, Oct 15, 2014 at 7:20 PM, Russell Bryant rbry...@redhat.com wrote:
 On 10/13/2014 05:59 PM, Russell Bryant wrote:
 Nice timing.  I was working on a blog post on this topic.

 which is now here:

 http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/

I am absolutely loving the fact that we are finally having a
discussion in earnest about this. i think this deserves a Design
Summit session.

If I may weigh in here, let me share what I've seen users do and what
can currently be done, and what may be supported in the future.

Problem: automatically ensure that a Nova guest continues to run, even
if its host fails.

(That's the general problem description and I don't need to go into
further details explaining the problem, because Russell has done that
beautifully in his blog post.)

Now, what are the options?

(1) Punt and leave it to the hypervisor.

This essentially means that you must use a hypervisor that already has
HA built in, such as VMware with the VCenter driver. In that scenario,
Nova itself neither deals with HA, nor exposes any HA switches to the
user. Obvious downside: not generic, doesn't work with all
hypervisors, most importantly doesn't work with the most popular one
(libvirt/KVM).

(2) Deploy Nova nodes in pairs/groups, and pretend that they are one node.

You can already do that by overriding host in nova-compute.conf,
setting resume_guests_state_on_host_boot, and using VIPs with
Corosync/Pacemaker. You can then group these hosts in host aggregates,
and the user's scheduler hint to point a newly scheduled guest to such
a host aggregate becomes, effectively, the “keep this guest running at
all times” flag. Upside: no changes to Nova at all, monitoring,
fencing and recovery for free from Corosync/Pacemaker. Downsides:
requires vendors to automate Pacemaker configuration in deployment
tools (because you really don't want to do those things manually).
Additional downside: you either have some idle hardware, or you might
be overcommitting resources in case of failover.

(3) Automatic host evacuation.

Not supported in Nova right now, as Adam pointed out at the top of the
thread, and repeatedly shot down. If someone were to implement this,
it would *still* require that Corosync/Pacemaker be used for
monitoring and fencing of nodes, because re-implementing this from
scratch would be the reinvention of a wheel while painting a bikeshed.

(4) Per-guest HA.

This is the idea of just doing “nova boot --keep-this-running”, i.e.
setting a per-guest flag that still means the machine is to be kept up
at all times. Again, not supported in Nova right now, and probably
even more complex to implement generically than (3), at the same or
greater cost.

I have a suggestion to tackle this that I *think* is reasonably
user-friendly while still bearable in terms of Nova development
effort:

(a) Define a well-known metadata key for a host aggregate, say ha.
Define that any host aggregate that represents a highly available
group of compute nodes should have this metadata key set.

(b) Then define a flavor that sets extra_specs ha=true.

Granted, this places an additional burden on distro vendors to
integrate highly-available compute nodes into their deployment
infrastructure. But since practically all of them already include
Pacemaker, the additional scaffolding required is actually rather
limited.
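
For what it's worth, wiring up (a) and (b) needs nothing new in Nova itself.
Roughly, with python-novaclient (hostnames, flavor sizing and the 'ha' key are
just the convention sketched above; depending on the release, the scoped key
aggregate_instance_extra_specs:ha may be needed for the
AggregateInstanceExtraSpecsFilter to match):

    # Sketch of the aggregate-metadata + flavor-extra_specs convention.
    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://controller:5000/v2.0')

    agg = nova.aggregates.create('ha-hosts', None)
    nova.aggregates.add_host(agg, 'compute-ha-1')    # placeholder hostnames
    nova.aggregates.add_host(agg, 'compute-ha-2')
    nova.aggregates.set_metadata(agg, {'ha': 'true'})

    flavor = nova.flavors.create('m1.small.ha', ram=2048, vcpus=1, disk=20)
    flavor.set_keys({'ha': 'true'})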

Am I making sense?

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Jay Pipes

On 10/15/2014 03:16 PM, Florian Haas wrote:

On Wed, Oct 15, 2014 at 7:20 PM, Russell Bryant rbry...@redhat.com wrote:

On 10/13/2014 05:59 PM, Russell Bryant wrote:

Nice timing.  I was working on a blog post on this topic.


which is now here:

http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/


I am absolutely loving the fact that we are finally having a
discussion in earnest about this. i think this deserves a Design
Summit session.

If I may weigh in here, let me share what I've seen users do and what
can currently be done, and what may be supported in the future.

Problem: automatically ensure that a Nova guest continues to run, even
if its host fails.

(That's the general problem description and I don't need to go into
further details explaining the problem, because Russell has done that
beautifully in his blog post.)

Now, what are the options?

(1) Punt and leave it to the hypervisor.

This essentially means that you must use a hypervisor that already has
HA built in, such as VMware with the VCenter driver. In that scenario,
Nova itself neither deals with HA, nor exposes any HA switches to the
user. Obvious downside: not generic, doesn't work with all
hypervisors, most importantly doesn't work with the most popular one
(libvirt/KVM).

(2) Deploy Nova nodes in pairs/groups, and pretend that they are one node.

You can already do that by overriding host in nova-compute.conf,
setting resume_guests_state_on_host_boot, and using VIPs with
Corosync/Pacemaker. You can then group these hosts in host aggregates,
and the user's scheduler hint to point a newly scheduled guest to such
a host aggregate becomes, effectively, the keep this guest running at
all times flag. Upside: no changes to Nova at all, monitoring,
fencing and recovery for free from Corosync/Pacemaker. Downsides:
requires vendors to automate Pacemaker configuration in deployment
tools (because you really don't want to do those things manually).
Additional downside: you either have some idle hardware, or you might
be overcommitting resources in case of failover.

(3) Automatic host evacuation.

Not supported in Nova right now, as Adam pointed out at the top of the
thread, and repeatedly shot down. If someone were to implement this,
it would *still* require that Corosync/Pacemaker be used for
monitoring and fencing of nodes, because re-implementing this from
scratch would be the reinvention of a wheel while painting a bikeshed.

(4) Per-guest HA.

This is the idea of just doing nova boot --keep-this running, i.e.
setting a per-guest flag that still means the machine is to be kept up
at all times. Again, not supported in Nova right now, and probably
even more complex to implement generically than (3), at the same or
greater cost.

I have a suggestion to tackle this that I *think* is reasonably
user-friendly while still bearable in terms of Nova development
effort:

(a) Define a well-known metadata key for a host aggregate, say ha.
Define that any host aggregate that represents a highly available
group of compute nodes should have this metadata key set.

(b) Then define a flavor that sets extra_specs ha=true.

Granted, this places an additional burden on distro vendors to
integrate highly-available compute nodes into their deployment
infrastructure. But since practically all of them already include
Pacemaker, the additional scaffolding required is actually rather
limited.


Or:

(5) Let monitoring and orchestration services deal with these use cases 
and have Nova simply provide the primitive API calls that it already 
does (i.e. host evacuate).


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Oct 15 2014

2014-10-15 Thread Anne Gentle
Docs are as ready as possible for the release tomorrow! The Install Guides
and Configuration Reference are the released titles. We have removed the
grizzly docs and added juno docs. For a few weeks while the packages become
finalized, the Juno Install Guides will be identical to trunk Install
Guides.

Many thanks to all the collaborators who made this release happen. Here are
the highlights for the docs in this release. Let me know if I missed
anything! Training team, Security team, feel free to add your highlights to
the Documentation section of the Release notes on the wiki.


   - This release, the OpenStack Foundation funded a five-day book sprint
   to write the new OpenStack Architecture Design Guide
   
http://docs.openstack.org/arch-design/content/arch-guide-how-this-book-is-organized.html.
   It offers architectures for general purpose, compute-focused,
   storage-focused, network-focused, multi-site, hybrid, massively scalable,
   and specialized clouds.
   - The High Availability Guide
   http://docs.openstack.org/high-availability-guide/content/index.html now
   has a separate review team and has moved into a separate repository.
   - The Security Guide http://docs.openstack.org/security-guide/content/ now
   has a specialized review team and has moved into a separate repository.
   - The long-form API reference documents have been re-purposed to focus
   on the API Complete Reference
   http://developer.openstack.org/api-ref.html.
   - The User Guide now contains Database Service for OpenStack information.
   - The Command-Line Reference has been updated with new client releases
   and now contains additional chapters for the common OpenStack client, the
   trove-manage client, and the Data processing client (sahara).
   - The OpenStack Cloud Administrator Guide
   http://docs.openstack.org/admin-guide-cloud/content/ now contains
   information about Telemetry (ceilometer).


This past week I got graphical mockups from the web designer for
docs.openstack.org and am iterating on them with the Foundation team.

I don't have much more to report -- go work on release notes if you haven't
already! https://wiki.openstack.org/wiki/ReleaseNotes/Juno

Thanks,
Anne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Florian Haas
On Wed, Oct 15, 2014 at 9:58 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 10/15/2014 03:16 PM, Florian Haas wrote:

 On Wed, Oct 15, 2014 at 7:20 PM, Russell Bryant rbry...@redhat.com
 wrote:

 On 10/13/2014 05:59 PM, Russell Bryant wrote:

 Nice timing.  I was working on a blog post on this topic.


 which is now here:

 http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/


 I am absolutely loving the fact that we are finally having a
 discussion in earnest about this. i think this deserves a Design
 Summit session.

 If I may weigh in here, let me share what I've seen users do and what
 can currently be done, and what may be supported in the future.

 Problem: automatically ensure that a Nova guest continues to run, even
 if its host fails.

 (That's the general problem description and I don't need to go into
 further details explaining the problem, because Russell has done that
 beautifully in his blog post.)

 Now, what are the options?

 (1) Punt and leave it to the hypervisor.

 This essentially means that you must use a hypervisor that already has
 HA built in, such as VMware with the VCenter driver. In that scenario,
 Nova itself neither deals with HA, nor exposes any HA switches to the
 user. Obvious downside: not generic, doesn't work with all
 hypervisors, most importantly doesn't work with the most popular one
 (libvirt/KVM).

 (2) Deploy Nova nodes in pairs/groups, and pretend that they are one node.

 You can already do that by overriding host in nova-compute.conf,
 setting resume_guests_state_on_host_boot, and using VIPs with
 Corosync/Pacemaker. You can then group these hosts in host aggregates,
 and the user's scheduler hint to point a newly scheduled guest to such
 a host aggregate becomes, effectively, the keep this guest running at
 all times flag. Upside: no changes to Nova at all, monitoring,
 fencing and recovery for free from Corosync/Pacemaker. Downsides:
 requires vendors to automate Pacemaker configuration in deployment
 tools (because you really don't want to do those things manually).
 Additional downside: you either have some idle hardware, or you might
 be overcommitting resources in case of failover.

 (3) Automatic host evacuation.

 Not supported in Nova right now, as Adam pointed out at the top of the
 thread, and repeatedly shot down. If someone were to implement this,
 it would *still* require that Corosync/Pacemaker be used for
 monitoring and fencing of nodes, because re-implementing this from
 scratch would be the reinvention of a wheel while painting a bikeshed.

 (4) Per-guest HA.

 This is the idea of just doing nova boot --keep-this running, i.e.
 setting a per-guest flag that still means the machine is to be kept up
 at all times. Again, not supported in Nova right now, and probably
 even more complex to implement generically than (3), at the same or
 greater cost.

 I have a suggestion to tackle this that I *think* is reasonably
 user-friendly while still bearable in terms of Nova development
 effort:

 (a) Define a well-known metadata key for a host aggregate, say ha.
 Define that any host aggregate that represents a highly available
 group of compute nodes should have this metadata key set.

 (b) Then define a flavor that sets extra_specs ha=true.

 Granted, this places an additional burden on distro vendors to
 integrate highly-available compute nodes into their deployment
 infrastructure. But since practically all of them already include
 Pacemaker, the additional scaffolding required is actually rather
 limited.


 Or:

 (5) Let monitoring and orchestration services deal with these use cases and
 have Nova simply provide the primitive API calls that it already does (i.e.
 host evacuate).

That would arguably lead to an incredible amount of wheel reinvention
for node failure detection, service failure detection, etc. etc.

Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] How to attach multiple NICs to an instance VM?

2014-10-15 Thread Danny Choi (dannchoi)
Hi,

“nova help boot” shows the following:


  --nic 
net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid

Create a NIC on the server. Specify option

multiple times to create multiple NICs. net-

id: attach NIC to network with this UUID

(either port-id or net-id must be provided),

v4-fixed-ip: IPv4 fixed address for NIC

(optional), v6-fixed-ip: IPv6 fixed address

for NIC (optional), port-id: attach NIC to

port with this UUID (either port-id or net-id

must be provided).


NOTE:  Specify option multiple times to create multiple NICs. 


I have two private networks and one public network (for floating IPs) 
configured.


localadmin@qa4:~/devstack$ nova net-list

+--+---+--+

| ID   | Label | CIDR |

+--+---+--+

| 6905cf7d-74d7-455b-b9d0-8cea972ec522 | private   | None |

| 8c25e33b-47be-47eb-a945-e0ac2ad6756a | Private_net20 | None |

| faa138e6-4774-41ad-8b5f-9795788eca43 | public| None |

+--+---+--+

When I launch an instance, I specify the “--nic” option twice.


localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 
--nic net-id=6905cf7d-74d7-455b-b9d0-8cea972ec522 --nic 
net-id=8c25e33b-47be-47eb-a945-e0ac2ad6756a vm10


And then I associate a floating IP to the instance.


localadmin@qa4:~/devstack$ nova list

+--+--+++-+--+

| ID   | Name | Status | Task State | Power 
State | Networks |

+--+--+++-+--+

| e6a13d2e-756b-4b96-bf0c-438c2c875675 | vm10 | ACTIVE | -  | Running   
  | Private_net20=20.0.0.10; private=10.0.0.7, 172.29.173.13 |


localadmin@qa4:~/devstack$ nova show vm10

+--++

| Property | Value  
|

+--++

| OS-DCF:diskConfig| MANUAL 
|

| OS-EXT-AZ:availability_zone  | nova   
|

| OS-EXT-STS:power_state   | 1  
|

| OS-EXT-STS:task_state| -  
|

| OS-EXT-STS:vm_state  | active 
|

| OS-SRV-USG:launched_at   | 2014-10-15T20:22:50.00 
|

| OS-SRV-USG:terminated_at | -  
|

| Private_net20 network| 20.0.0.10  
|

| accessIPv4   |
|

| accessIPv6   |
|

| config_drive |
|

| created  | 2014-10-15T20:21:54Z   
|

| flavor   | m1.tiny (1)
|

| hostId   | 
4660a679d319992f764bcb245b71048212fe8cd67b769400d82382b7   |

| id   | e6a13d2e-756b-4b96-bf0c-438c2c875675   
|

| image| cirros-0.3.2-x86_64-uec 
(feaec710-c1cc-4071-aefa-c3dc2b915ab1) |

| key_name | -  
|

| metadata | {} 
|

| name | vm10   
|

| os-extended-volumes:volumes_attached | [] 
|

| private network  | 10.0.0.7, 172.29.173.13
|

| 

Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Florian Haas
On Wed, Oct 15, 2014 at 10:03 PM, Russell Bryant rbry...@redhat.com wrote:
 Am I making sense?

 Yep, the downside is just that you need to provide a new set of flavors
 for ha vs non-ha.  A benefit though is that it's a way to support it
 today without *any* changes to OpenStack.

Users are already very used to defining new flavors. Nova itself
wouldn't even need to define those; if the vendor's deployment tools
defined them it would be just fine.

 This seems like the kind of thing we should also figure out how to offer
 on a per-guest basis without needing a new set of flavors.  That's why I
 also listed the server tagging functionality as another possible solution.

This still doesn't do away with the requirement to reliably detect
node failure, and to fence misbehaving nodes. Detecting that a node
has failed, and fencing it if unsure, is a prerequisite for any
recovery action. So you need Corosync/Pacemaker anyway.
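
As an aside, the fencing piece itself is small once Pacemaker is in
place; a rough per-node crm sketch, with made-up IPMI details, is all
it takes:

    primitive fence-node1 stonith:fence_ipmilan \
        params pcmk_host_list="node1" ipaddr="10.1.1.11" \
               login="admin" passwd="secret" \
        op monitor interval="60s"

The point is simply that this machinery already exists; there is no
need to reinvent it inside Nova.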

Note also that when using an approach where you have physically
clustered nodes, but you are also running non-HA VMs on those, then
the user must understand that the following applies:

(1) If your guest is marked HA, then it will automatically recover on
node failure, but
(2) if your guest is *not* marked HA, then it will go down with the
node not only if it fails, but also if it is fenced.

So a non-HA guest on an HA node group actually has a slightly
*greater* chance of going down than a non-HA guest on a non-HA host.
(And let's not get into "don't use fencing then"; we all know why
that's a bad idea.)

Which is why I think it makes sense to just distinguish between
HA-capable and non-HA-capable hosts, and have the user decide whether
they want HA or non-HA guests simply by assigning them to the
appropriate host aggregates.

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Adam Lawson
It would seem to me that if guest HA is highly-desired, and it is,
requiring multiple flavors for multiple SLA requirements (and that's what
we're really talking about) introduces a trade-off that conceivably isn't
needed - double the flavor requirement for the same spec (512/1/10 and
another for HA). I'd like to explore this a little further to define other
possibilities.

I like the idea of instance HA; I like the idea of host HA way better
because it protects every instance on it. And hosts with HA logic would
obviously not be allowed to only host instances that use shared storage.

What are our options to continue discussing in Paris?


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Wed, Oct 15, 2014 at 1:50 PM, Florian Haas flor...@hastexo.com wrote:

 On Wed, Oct 15, 2014 at 9:58 PM, Jay Pipes jaypi...@gmail.com wrote:
  On 10/15/2014 03:16 PM, Florian Haas wrote:
 
  On Wed, Oct 15, 2014 at 7:20 PM, Russell Bryant rbry...@redhat.com
  wrote:
 
  On 10/13/2014 05:59 PM, Russell Bryant wrote:
 
  Nice timing.  I was working on a blog post on this topic.
 
 
  which is now here:
 
 
 http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/
 
 
  I am absolutely loving the fact that we are finally having a
  discussion in earnest about this. i think this deserves a Design
  Summit session.
 
  If I may weigh in here, let me share what I've seen users do and what
  can currently be done, and what may be supported in the future.
 
  Problem: automatically ensure that a Nova guest continues to run, even
  if its host fails.
 
  (That's the general problem description and I don't need to go into
  further details explaining the problem, because Russell has done that
  beautifully in his blog post.)
 
  Now, what are the options?
 
  (1) Punt and leave it to the hypervisor.
 
  This essentially means that you must use a hypervisor that already has
  HA built in, such as VMware with the VCenter driver. In that scenario,
  Nova itself neither deals with HA, nor exposes any HA switches to the
  user. Obvious downside: not generic, doesn't work with all
  hypervisors, most importantly doesn't work with the most popular one
  (libvirt/KVM).
 
  (2) Deploy Nova nodes in pairs/groups, and pretend that they are one
 node.
 
  You can already do that by overriding host in nova-compute.conf,
  setting resume_guests_state_on_host_boot, and using VIPs with
  Corosync/Pacemaker. You can then group these hosts in host aggregates,
  and the user's scheduler hint to point a newly scheduled guest to such
  a host aggregate becomes, effectively, the keep this guest running at
  all times flag. Upside: no changes to Nova at all, monitoring,
  fencing and recovery for free from Corosync/Pacemaker. Downsides:
  requires vendors to automate Pacemaker configuration in deployment
  tools (because you really don't want to do those things manually).
  Additional downside: you either have some idle hardware, or you might
  be overcommitting resources in case of failover.
 
  (3) Automatic host evacuation.
 
  Not supported in Nova right now, as Adam pointed out at the top of the
  thread, and repeatedly shot down. If someone were to implement this,
  it would *still* require that Corosync/Pacemaker be used for
  monitoring and fencing of nodes, because re-implementing this from
  scratch would be the reinvention of a wheel while painting a bikeshed.
 
  (4) Per-guest HA.
 
  This is the idea of just doing nova boot --keep-this running, i.e.
  setting a per-guest flag that still means the machine is to be kept up
  at all times. Again, not supported in Nova right now, and probably
  even more complex to implement generically than (3), at the same or
  greater cost.
 
  I have a suggestion to tackle this that I *think* is reasonably
  user-friendly while still bearable in terms of Nova development
  effort:
 
  (a) Define a well-known metadata key for a host aggregate, say ha.
  Define that any host aggregate that represents a highly available
  group of compute nodes should have this metadata key set.
 
  (b) Then define a flavor that sets extra_specs ha=true.
 
  Granted, this places an additional burden on distro vendors to
  integrate highly-available compute nodes into their deployment
  infrastructure. But since practically all of them already include
  Pacemaker, the additional scaffolding required is actually rather
  limited.
 
 
  Or:
 
  (5) Let monitoring and orchestration services deal with these use cases
 and
  have Nova simply provide the primitive API calls that it already does
 (i.e.
  host evacuate).

 That would arguably lead to an incredible amount of wheel reinvention
 for node failure detection, service failure detection, etc. etc.

 Florian

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] [Glance] Summit Topics

2014-10-15 Thread Nikhil Komawar
The summit planning etherpad [0] is up and available for discussing topics 
specifically for Glance during the design sessions.

The agenda for the contributors' meetup will be kept open. If you've more 
suggestions or need input on a topic which you would like to include as a part 
of the sessions, please ping me (nikhil_k) on IRC or attend the upcoming Glance 
meetings [1].

For more information on sessions related to other projects including 
cross-project sessions please visit the summit planning wiki page [2].

[0] https://etherpad.openstack.org/p/kilo-glance-summit-topics
[1] https://wiki.openstack.org/wiki/Meetings/Glance
[2] https://wiki.openstack.org/wiki/Summit/Planning

Thanks,
Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] How to attach multiple NICs to an instance VM?

2014-10-15 Thread Salvatore Orlando
I think you did everything right.

Are you sure cirros images by default are configured to bootstrap interfaces
other than eth0?
Perhaps all you need to do is just ifup the interface... have you already
tried that?

Salvatore

On 15 October 2014 23:07, Danny Choi (dannchoi) dannc...@cisco.com wrote:

  Hi,

  “nova help boot” shows the following:

--nic
 net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid

 Create a NIC on the server. Specify option

 multiple times to create multiple NICs.
 net-

 id: attach NIC to network with this UUID

 (either port-id or net-id must be
 provided),

 v4-fixed-ip: IPv4 fixed address for NIC

 (optional), v6-fixed-ip: IPv6 fixed address

 for NIC (optional), port-id: attach NIC to

 port with this UUID (either port-id or
 net-id

 must be provided).


  NOTE:  Specify option multiple times to create multiple NICs. 


  I have two private networks and one public network (for floating IPs)
 configured.


  localadmin@qa4:~/devstack$ nova net-list

 +--+---+--+

 | ID   | Label | CIDR |

 +--+---+--+

 | 6905cf7d-74d7-455b-b9d0-8cea972ec522 | private   | None |

 | 8c25e33b-47be-47eb-a945-e0ac2ad6756a | Private_net20 | None |

 | faa138e6-4774-41ad-8b5f-9795788eca43 | public| None |

 +--+---+--+

  When I launch an instance, I specify the “—nic” option twice.

  localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec
 --flavor 1 --nic net-id=6905cf7d-74d7-455b-b9d0-8cea972ec522 --nic
 net-id=8c25e33b-47be-47eb-a945-e0ac2ad6756a vm10


  And then I associate a floating IP to the instance.


  localadmin@qa4:~/devstack$ nova list


 +--+--+++-+--+

 | ID   | Name | Status | Task State |
 Power State | Networks |


 +--+--+++-+--+

 | e6a13d2e-756b-4b96-bf0c-438c2c875675 | vm10 | ACTIVE | -  |
 Running | Private_net20=20.0.0.10; private=10.0.0.7, 172.29.173.13 |

  localadmin@qa4:~/devstack$ nova show vm10


 +--++

 | Property | Value
   |


 +--++

 | OS-DCF:diskConfig| MANUAL
   |

 | OS-EXT-AZ:availability_zone  | nova
   |

 | OS-EXT-STS:power_state   | 1
   |

 | OS-EXT-STS:task_state| -
   |

 | OS-EXT-STS:vm_state  | active
   |

 | OS-SRV-USG:launched_at   | 2014-10-15T20:22:50.00
   |

 | OS-SRV-USG:terminated_at | -
   |

 | Private_net20 network| 20.0.0.10
   |

 | accessIPv4   |
   |

 | accessIPv6   |
   |

 | config_drive |
   |

 | created  | 2014-10-15T20:21:54Z
   |

 | flavor   | m1.tiny (1)
   |

 | hostId   |
 4660a679d319992f764bcb245b71048212fe8cd67b769400d82382b7   |

 | id   |
 e6a13d2e-756b-4b96-bf0c-438c2c875675   |

 | image| cirros-0.3.2-x86_64-uec
 (feaec710-c1cc-4071-aefa-c3dc2b915ab1) |

 | key_name | -
   |

 | metadata | {}
   |

 | name | vm10
   |

 | os-extended-volumes:volumes_attached | []
   |

 | private network  | 10.0.0.7, 172.29.173.13
   |

 | progress | 0
   |

 | security_groups

[openstack-dev] [Fuel] First green builds of Fuel with Juno support, HA mode too!

2014-10-15 Thread Mike Scherbakov
Hi all,
I'm excited to share that, thanks to hard collaborative work, we have the first
build which passed BVT tests.
You can download the nightly build via torrent [1]. In short, the following tests
passed:

   - Neutron GRE in simple mode (non-HA), CentOS
   - HA, Neutron VLAN, CentOS
   - HA, nova-network VLAN manager, Ubuntu

[1] https://wiki.openstack.org/wiki/Fuel#Nightly_builds
-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Jay Pipes



On 10/15/2014 04:50 PM, Florian Haas wrote:

On Wed, Oct 15, 2014 at 9:58 PM, Jay Pipes jaypi...@gmail.com wrote:

On 10/15/2014 03:16 PM, Florian Haas wrote:


On Wed, Oct 15, 2014 at 7:20 PM, Russell Bryant rbry...@redhat.com
wrote:


On 10/13/2014 05:59 PM, Russell Bryant wrote:


Nice timing.  I was working on a blog post on this topic.



which is now here:

http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/



I am absolutely loving the fact that we are finally having a
discussion in earnest about this. i think this deserves a Design
Summit session.

If I may weigh in here, let me share what I've seen users do and what
can currently be done, and what may be supported in the future.

Problem: automatically ensure that a Nova guest continues to run, even
if its host fails.

(That's the general problem description and I don't need to go into
further details explaining the problem, because Russell has done that
beautifully in his blog post.)

Now, what are the options?

(1) Punt and leave it to the hypervisor.

This essentially means that you must use a hypervisor that already has
HA built in, such as VMware with the VCenter driver. In that scenario,
Nova itself neither deals with HA, nor exposes any HA switches to the
user. Obvious downside: not generic, doesn't work with all
hypervisors, most importantly doesn't work with the most popular one
(libvirt/KVM).

(2) Deploy Nova nodes in pairs/groups, and pretend that they are one node.

You can already do that by overriding host in nova-compute.conf,
setting resume_guests_state_on_host_boot, and using VIPs with
Corosync/Pacemaker. You can then group these hosts in host aggregates,
and the user's scheduler hint to point a newly scheduled guest to such
a host aggregate becomes, effectively, the keep this guest running at
all times flag. Upside: no changes to Nova at all, monitoring,
fencing and recovery for free from Corosync/Pacemaker. Downsides:
requires vendors to automate Pacemaker configuration in deployment
tools (because you really don't want to do those things manually).
Additional downside: you either have some idle hardware, or you might
be overcommitting resources in case of failover.

(3) Automatic host evacuation.

Not supported in Nova right now, as Adam pointed out at the top of the
thread, and repeatedly shot down. If someone were to implement this,
it would *still* require that Corosync/Pacemaker be used for
monitoring and fencing of nodes, because re-implementing this from
scratch would be the reinvention of a wheel while painting a bikeshed.

(4) Per-guest HA.

This is the idea of just doing nova boot --keep-this running, i.e.
setting a per-guest flag that still means the machine is to be kept up
at all times. Again, not supported in Nova right now, and probably
even more complex to implement generically than (3), at the same or
greater cost.

I have a suggestion to tackle this that I *think* is reasonably
user-friendly while still bearable in terms of Nova development
effort:

(a) Define a well-known metadata key for a host aggregate, say ha.
Define that any host aggregate that represents a highly available
group of compute nodes should have this metadata key set.

(b) Then define a flavor that sets extra_specs ha=true.

Granted, this places an additional burden on distro vendors to
integrate highly-available compute nodes into their deployment
infrastructure. But since practically all of them already include
Pacemaker, the additional scaffolding required is actually rather
limited.



Or:

(5) Let monitoring and orchestration services deal with these use cases and
have Nova simply provide the primitive API calls that it already does (i.e.
host evacuate).


That would arguably lead to an incredible amount of wheel reinvention
for node failure detection, service failure detection, etc. etc.


How so? (5) would use existing wheels for monitoring and orchestration 
instead of writing all new code paths inside Nova to do the same thing.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread James Bottomley
On Wed, 2014-10-15 at 11:24 -0400, David Vossel wrote:
 
 - Original Message -
  On Tue, 2014-10-14 at 19:52 -0400, David Vossel wrote:
   
   - Original Message -
Ok, why are you so down on running systemd in a container?
   
   It goes against the grain.
   
   From a distributed systems view, we gain quite a bit of control by
   maintaining
   one service per container. Containers can be re-organised and 
   re-purposed
   dynamically.
   If we have systemd trying to manage an entire stack of resources within a
   container,
   we lose this control.
   
   From my perspective a containerized application stack needs to be managed
   externally
   by whatever is orchestrating the containers to begin with. When we take a
   step back
   and look at how we actually want to deploy containers, systemd doesn't 
   make
   much sense.
   It actually limits us in the long run.
   
   Also... recovery. Using systemd to manage a stack of resources within a
   single container
   makes it difficult for whatever is externally enforcing the availability 
   of
   that container
   to detect the health of the container.  As it is now, the actual service 
   is
   pid 1 of a
   container. If that service dies, the container dies. If systemd is pid 1,
   there can
   be all kinds of chaos occurring within the container, but the external
   distributed
   orchestration system won't have a clue (unless it invokes some custom
   health monitoring
   tools within the container itself, which will likely be the case someday.)
  
  I don't really think this is a good argument.  If you're using docker,
  docker is the management and orchestration system for the containers.
 
 no, docker is a local tool for pulling images and launching containers.
 Docker is not the distributed resource manager in charge of overseeing
 what machines launch what containers and how those containers are linked
 together.

Well, neither is systemd: fleet management has a variety of solutions.

  There's no dogmatic answer to the question should you run init in the
  container.
 
 an init daemon might make sense to put in some containers where we have
 a tightly coupled resource stack. There could be a use case where it would
 make more sense to put these resources in a single container.
 
 I don't think systemd is a good solution for the init daemon though. Systemd
 attempts to handle recovery itself as if it has the entire view of the 
 system. With containers, the system view exists outside of the containers.
 If we put an internal init daemon within the containers, that daemon needs
 to escalate internal failures. The easiest way to do this is to
 have init die if it encounters a resource failure (init is pid 1, pid 1 
 exiting
 causes container to exit, container exiting gets the attention of whatever
 is managing the containers)

I won't comment on what init should be.  However, init should not be
running in application containers, as I have said, because it complicates
the situation.  Application containers are more compelling the simpler
they are constructed because they're easier to describe in xml +
templates.

  The reason for not running init inside a container managed by docker is
  that you want the template to be thin for ease of orchestration and
  transfer, so you want to share as much as possible with the host.  The
  more junk you put into the container, the fatter and less agile it
  becomes, so you should probably share the init system with the host in
  this paradigm.
 
 I don't think the local init system and containers should have anything
 to do with one another.  I said this in a previous reply, I'm approaching
 this problem from a distributed management perspective. The host's
 init daemon only has a local view of the world. 

If the container is an OS container, what you run inside is a full OS
stack; the only sharing is the kernel, so you get whatever the distro is
using as init and for some of them, that's systemd.  You have no choice
for OS containers.

  
  Conversely, containers can be used to virtualize full operating systems.
  This isn't the standard way of doing docker, but LXC and OpenVZ by
  default do containers this way.  For this type of container, because you
  have a full OS running inside the container, you have to also have
  systemd (assuming it's the init system) running within the container.
 
 sure, if you want to do this use systemd. I don't understand the use case
 where this makes any sense though. For me this falls in the yeah you can do 
 it,
 but why? category.

It's the standard Service Provider use case: containers are used as
dense, lightweight Virtual Environments for Virtual Private Servers.
The customer can be provisioned with whatever OS they like, but you
still get 3x the density and the huge elasticity improvements of
containers.

James



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

[openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-15 Thread Jorge Miramontes
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs.

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on average concurrent connections. This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate than the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup issues. Auditing will
also be easier with raw logs.

4) Enabling logs for all load balancers will help mitigate uncertainty in
terms of capacity planning. Imagine if every customer suddenly enabled
logs without it ever being turned on. This could produce a spike in
resource utilization that will be hard to manage. Enabling logs from the
start means we are certain as to what to plan for other than the nature of
the customer's traffic pattern.
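
To make that concrete: with HAProxy (see con #2 below), turning this on
is roughly the fragment that follows; the frontend name and syslog
target are made up, and splitting header vs. body bytes would
additionally need a custom log-format:

    global
        log /dev/log local0
    defaults
        mode    http
        log     global
        option  httplog      # per-request lines: timers, status, bytes
        option  dontlognull
    frontend lb-vip-1
        bind 10.0.0.5:80
        default_backend pool-1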

Some Cons I can think of (please add more as I think the pros outweigh the
cons):

1) If we ever add UDP-based protocols then this model won't work. 1% of
our load balancers at Rackspace are UDP based, so we are not looking at
using this protocol for Octavia. I'm more of a fan of building a really
good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
a different problem. For me different problem == different product.

2) I'm assuming HA Proxy. Thus, if we choose another technology for the
amphora then this model may break.


Also, and more generally speaking, I have categorized usage into three
categories:

1) Tracking usage - this is usage that will be used by operators and
support teams to gain insight into what load balancers are doing in an
attempt to monitor potential issues.
2) Billable usage - this is usage that is a subset of tracking usage used
to bill customers.
3) Real-time usage - this is usage that should be exposed via the API so
that customers can make decisions that affect their configuration (e.g.,
based on the number of connections my web heads can handle, when
should I add another node to my pool?).

These are my preliminary thoughts, and I'd love to gain insight into what
the community thinks. I have built about 3 usage collection systems thus
far (1 with Brandon) and have learned a lot. Some basic rules I have
discovered with collecting usage are:

1) Always collect granular usage as it paints a picture of what actually
happened. Massaged/un-granular usage == lost information.
2) Never imply, always be explicit. Implications usually stem from bad
assumptions.


Last but not least, we need to store every user and system load balancer
event such as creation, updates, suspension and deletion so that we may
bill on things like uptime and serve our customers better by knowing what
happened and when.


Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] How to attach multiple NICs to an instance VM?

2014-10-15 Thread Danny Choi (dannchoi)
Hi Salvatore,

eth1 is not configured in /etc/network/interfaces.

After I manually added eth1 and bounced it, it came up with the 2nd private 
address.

$ sudo vi /etc/network/interfaces


# Configure Loopback

auto lo

iface lo inet loopback


auto eth0

iface eth0 inet dhcp



auto eth1

iface eth1 inet dhcp

~

~

~

$ sudo ifdown eht1 && sudo ifup eth1

ifdown: interface eht1 not configured

udhcpc (v1.20.1) started

Sending discover...

Sending select for 20.0.0.10...

Lease of 20.0.0.10 obtained, lease time 86400

deleting routers

adding dns 8.8.4.4

adding dns 8.8.8.8

$ ifconfig -a

eth0  Link encap:Ethernet  HWaddr FA:16:3E:7A:49:1E

  inet addr:10.0.0.7  Bcast:10.0.0.255  Mask:255.255.255.0

  inet6 addr: fe80::f816:3eff:fe7a:491e/64 Scope:Link

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  RX packets:707 errors:0 dropped:0 overruns:0 frame:0

  TX packets:446 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:1000

  RX bytes:66680 (65.1 KiB)  TX bytes:57968 (56.6 KiB)


eth1  Link encap:Ethernet  HWaddr FA:16:3E:73:C7:F0

  inet addr:20.0.0.10  Bcast:20.0.0.255  Mask:255.255.255.0

  inet6 addr: fe80::f816:3eff:fe73:c7f0/64 Scope:Link

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  RX packets:39 errors:0 dropped:0 overruns:0 frame:0

  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:1000

  RX bytes:3354 (3.2 KiB)  TX bytes:1098 (1.0 KiB)


loLink encap:Local Loopback

  inet addr:127.0.0.1  Mask:255.0.0.0

  inet6 addr: ::1/128 Scope:Host

  UP LOOPBACK RUNNING  MTU:16436  Metric:1

  RX packets:4 errors:0 dropped:0 overruns:0 frame:0

  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:0

  RX bytes:336 (336.0 B)  TX bytes:336 (336.0 B)


$ ping 10.0.0.7

PING 10.0.0.7 (10.0.0.7): 56 data bytes

64 bytes from 10.0.0.7: seq=0 ttl=64 time=0.138 ms

64 bytes from 10.0.0.7: seq=1 ttl=64 time=0.041 ms

64 bytes from 10.0.0.7: seq=2 ttl=64 time=0.066 ms

^C

--- 10.0.0.7 ping statistics ---

3 packets transmitted, 3 packets received, 0% packet loss

round-trip min/avg/max = 0.041/0.081/0.138 ms

$ ping 20.0.0.10

PING 20.0.0.10 (20.0.0.10): 56 data bytes

64 bytes from 20.0.0.10: seq=0 ttl=64 time=0.078 ms

64 bytes from 20.0.0.10: seq=1 ttl=64 time=0.041 ms

^C

--- 20.0.0.10 ping statistics ---

2 packets transmitted, 2 packets received, 0% packet loss

round-trip min/avg/max = 0.041/0.059/0.078 ms

$

Thanks,
Danny

===

Date: Thu, 16 Oct 2014 00:10:20 +0200
From: Salvatore Orlando sorla...@nicira.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [QA] How to attach multiple NICs to an
instance VM?
Message-ID:
CAGR=i3jeuz6-peghjze-hnh2yvn8ykmbn4ies4dtqc3b2xl...@mail.gmail.com
Content-Type: text/plain; charset=utf-8

I think you did everything right.

Are you sure cirros images by default are configured to boostrap interfaces
different from eth0?
Perhaps all you need to do is just ifup the interface... have you already
tried that?

Salvatore

On 15 October 2014 23:07, Danny Choi (dannchoi)
dannc...@cisco.com wrote:

  Hi,

  “nova help boot” shows the following:

--nic
net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid

 Create a NIC on the server. Specify option

 multiple times to create multiple NICs.
net-

 id: attach NIC to network with this UUID

 (either port-id or net-id must be
provided),

 v4-fixed-ip: IPv4 fixed address for NIC

 (optional), v6-fixed-ip: IPv6 fixed address

 for NIC (optional), port-id: attach NIC to

 port with this UUID (either port-id or
net-id

 must be provided).


  NOTE:  Specify option multiple times to create multiple NICs. 


  I have two private networks and one public network (for floating IPs)
configured.


  localadmin@qa4:~/devstack$ nova net-list

+--+---+--+

| ID   | Label | CIDR |

+--+---+--+

| 6905cf7d-74d7-455b-b9d0-8cea972ec522 | private   | None |

| 8c25e33b-47be-47eb-a945-e0ac2ad6756a | Private_net20 | None |

| faa138e6-4774-41ad-8b5f-9795788eca43 | public| None |


Re: [openstack-dev] [QA] How to attach multiple NICs to an instance VM?

2014-10-15 Thread Adam Lawson
May I ask a question about approach? Why don't you use aliases, i.e. eth0:0,
eth0:1, instead of creating multiple NICs?
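
By aliases I mean something like the following in
/etc/network/interfaces (addresses made up):

    auto eth0:0
    iface eth0:0 inet static
        address 20.0.0.10
        netmask 255.255.255.0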


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Wed, Oct 15, 2014 at 4:09 PM, Danny Choi (dannchoi) dannc...@cisco.com
wrote:

  Hi Salvatore,

  eth1 is not configured in /etc/network/interfaces.

  After I manually added eth1 and bounced it, it came up with the 2nd
 private address.

 $ sudo vi /etc/network/interfaces


  # Configure Loopback

 auto lo

 iface lo inet loopback


  auto eth0

 iface eth0 inet dhcp



 auto eth1

 iface eth1 inet dhcp

 ~

 ~

 ~

 $ sudo ifdown eht1  sudo ifup eth1

 ifdown: interface eht1 not configured

 udhcpc (v1.20.1) started

 Sending discover...

 Sending select for 20.0.0.10...

 Lease of 20.0.0.10 obtained, lease time 86400

 deleting routers

 adding dns 8.8.4.4

 adding dns 8.8.8.8

 $ ifconfig -a

 eth0  Link encap:Ethernet  HWaddr FA:16:3E:7A:49:1E

   inet addr:10.0.0.7  Bcast:10.0.0.255  Mask:255.255.255.0

   inet6 addr: fe80::f816:3eff:fe7a:491e/64 Scope:Link

   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

   RX packets:707 errors:0 dropped:0 overruns:0 frame:0

   TX packets:446 errors:0 dropped:0 overruns:0 carrier:0

   collisions:0 txqueuelen:1000

   RX bytes:66680 (65.1 KiB)  TX bytes:57968 (56.6 KiB)


  eth1  Link encap:Ethernet  HWaddr FA:16:3E:73:C7:F0

   inet addr:20.0.0.10  Bcast:20.0.0.255  Mask:255.255.255.0

   inet6 addr: fe80::f816:3eff:fe73:c7f0/64 Scope:Link

   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

   RX packets:39 errors:0 dropped:0 overruns:0 frame:0

   TX packets:8 errors:0 dropped:0 overruns:0 carrier:0

   collisions:0 txqueuelen:1000

   RX bytes:3354 (3.2 KiB)  TX bytes:1098 (1.0 KiB)


  loLink encap:Local Loopback

   inet addr:127.0.0.1  Mask:255.0.0.0

   inet6 addr: ::1/128 Scope:Host

   UP LOOPBACK RUNNING  MTU:16436  Metric:1

   RX packets:4 errors:0 dropped:0 overruns:0 frame:0

   TX packets:4 errors:0 dropped:0 overruns:0 carrier:0

   collisions:0 txqueuelen:0

   RX bytes:336 (336.0 B)  TX bytes:336 (336.0 B)


  $ ping 10.0.0.7

 PING 10.0.0.7 (10.0.0.7): 56 data bytes

 64 bytes from 10.0.0.7: seq=0 ttl=64 time=0.138 ms

 64 bytes from 10.0.0.7: seq=1 ttl=64 time=0.041 ms

 64 bytes from 10.0.0.7: seq=2 ttl=64 time=0.066 ms

 ^C

 --- 10.0.0.7 ping statistics ---

 3 packets transmitted, 3 packets received, 0% packet loss

 round-trip min/avg/max = 0.041/0.081/0.138 ms

 $ ping 20.0.0.10

 PING 20.0.0.10 (20.0.0.10): 56 data bytes

 64 bytes from 20.0.0.10: seq=0 ttl=64 time=0.078 ms

 64 bytes from 20.0.0.10: seq=1 ttl=64 time=0.041 ms

 ^C

 --- 20.0.0.10 ping statistics ---

 2 packets transmitted, 2 packets received, 0% packet loss

 round-trip min/avg/max = 0.041/0.059/0.078 ms

 $

  Thanks,
 Danny

  ===

  Date: Thu, 16 Oct 2014 00:10:20 +0200
 From: Salvatore Orlando sorla...@nicira.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [QA] How to attach multiple NICs to an
 instance VM?
 Message-ID:
 CAGR=i3jeuz6-peghjze-hnh2yvn8ykmbn4ies4dtqc3b2xl...@mail.gmail.com
 Content-Type: text/plain; charset=utf-8

  I think you did everything right.

  Are you sure cirros images by default are configured to boostrap
 interfaces
 different from eth0?
 Perhaps all you need to do is just ifup the interface... have you already
 tried that?

  Salvatore

  On 15 October 2014 23:07, Danny Choi (dannchoi) dannc...@cisco.com
 wrote:

Hi,

    “nova help boot” shows the following:

  --nic
 net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid

   Create a NIC on the server. Specify
 option

   multiple times to create multiple NICs.
 net-

   id: attach NIC to network with this UUID

   (either port-id or net-id must be
 provided),

   v4-fixed-ip: IPv4 fixed address for NIC

   (optional), v6-fixed-ip: IPv6 fixed
 address

   for NIC (optional), port-id: attach NIC
 to

   port with this UUID (either port-id or
 net-id

   must be provided).


NOTE:  Specify option multiple times to create multiple NICs.
 


I have two private networks and one public network (for floating IPs)
 configured.


localadmin@qa4:~/devstack$ nova net-list

  +--+---+--+

  | ID   

Re: [openstack-dev] [qa] Cannot start the VM console when VM is launched at Compute node

2014-10-15 Thread Danny Choi (dannchoi)
I did a fresh re-install of devstack.

Now I got the URL for the console.

localadmin@qa4:~/devstack$ nova get-vnc-console vm1 novnc
+---+-+
| Type | Url |
+---+-+
| novnc | 
http://172.29.172.161:6080/vnc_auto.html?token=9ced0dd0-f146-42eb-9b26-c64a29443936
 |
+---+-+

However, when I attempt to connect to the URL, the error "Failed to connect to server
(code: 1006)" is returned on the web page.

The following traceback is logged in the Controller's screen-x-n-novnc.log:

10.131.67.144 - - [15/Oct/2014 15:11:06] GET /include/webutil.js HTTP/1.1 200 
-
2014-10-15 15:11:06.029 DEBUG nova.console.websocketproxy [-] 10.131.67.144: 
new handler Process from (pid=21242) vmsg 
/usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
10.131.67.144 - - [15/Oct/2014 15:11:06] GET /include/des.js HTTP/1.1 200 -
10.131.67.144 - - [15/Oct/2014 15:11:06] GET /include/keyboard.js HTTP/1.1 
200 -
10.131.67.144 - - [15/Oct/2014 15:11:06] GET /include/input.js HTTP/1.1 200 -
10.131.67.144 - - [15/Oct/2014 15:11:06] GET /include/display.js HTTP/1.1 200 
-
10.131.67.144 - - [15/Oct/2014 15:11:06] GET /include/jsunzip.js HTTP/1.1 200 
-
10.131.67.144 - - [15/Oct/2014 15:11:06] GET /include/rfb.js HTTP/1.1 200 -
2014-10-15 15:11:06.590 DEBUG nova.console.websocketproxy [-] 10.131.67.144: 
new handler Process from (pid=21242) vmsg 
/usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
10.131.67.144 - - [15/Oct/2014 15:11:06] GET /websockify HTTP/1.1 101 -
10.131.67.144 - - [15/Oct/2014 15:11:06] 10.131.67.144: Plain non-SSL (ws://) 
WebSocket connection
10.131.67.144 - - [15/Oct/2014 15:11:06] 10.131.67.144: Version hybi-13, 
base64: 'False'
10.131.67.144 - - [15/Oct/2014 15:11:06] 10.131.67.144: Path: '/websockify'
2014-10-15 15:11:06.605 INFO oslo.messaging._drivers.impl_rabbit 
[req-f5c8828b-f111-4a12-8812-d25f56e47b01 None None] Connecting to AMQP server 
on 172.29.172.161:5672
2014-10-15 15:11:06.616 DEBUG nova.console.websocketproxy [-] 10.131.67.144: 
new handler Process from (pid=21242) vmsg 
/usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
10.131.67.144 - - [15/Oct/2014 15:11:06] GET /favicon.ico HTTP/1.1 200 -
2014-10-15 15:11:06.622 INFO oslo.messaging._drivers.impl_rabbit 
[req-f5c8828b-f111-4a12-8812-d25f56e47b01 None None] Connected to AMQP server 
on 172.29.172.161:5672
2014-10-15 15:11:06.629 INFO oslo.messaging._drivers.impl_rabbit 
[req-f5c8828b-f111-4a12-8812-d25f56e47b01 None None] Connecting to AMQP server 
on 172.29.172.161:5672
2014-10-15 15:11:06.641 INFO oslo.messaging._drivers.impl_rabbit 
[req-f5c8828b-f111-4a12-8812-d25f56e47b01 None None] Connected to AMQP server 
on 172.29.172.161:5672
2014-10-15 15:11:06.652 INFO nova.console.websocketproxy 
[req-f5c8828b-f111-4a12-8812-d25f56e47b01 None None] handler exception: The 
token '9ced0dd0-f146-42eb-9b26-c64a29443936' is invalid or has expired
2014-10-15 15:11:06.652 DEBUG nova.console.websocketproxy 
[req-f5c8828b-f111-4a12-8812-d25f56e47b01 None None] exception from (pid=13509) 
vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy Traceback (most 
recent call last):
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy File 
/usr/local/lib/python2.7/dist-packages/websockify/websocket.py, line 874, in 
top_new_client
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy client = 
self.do_handshake(startsock, address)
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy File 
/usr/local/lib/python2.7/dist-packages/websockify/websocket.py, line 809, in 
do_handshake
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy 
self.RequestHandlerClass(retsock, address, self)
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy File 
/opt/stack/nova/nova/console/websocketproxy.py, line 112, in __init__
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy 
websockify.ProxyRequestHandler.__init__(self, *args, **kwargs)
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy File 
/usr/local/lib/python2.7/dist-packages/websockify/websocket.py, line 112, in 
__init__
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy 
SimpleHTTPRequestHandler.__init__(self, req, addr, server)
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy File 
/usr/lib/python2.7/SocketServer.py, line 649, in __init__
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy self.handle()
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy File 
/usr/local/lib/python2.7/dist-packages/websockify/websocket.py, line 540, in 
handle
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy 
SimpleHTTPRequestHandler.handle(self)
2014-10-15 15:11:06.652 TRACE 

[openstack-dev] [Neutron][LBaaS][SSL] Interim SSL API implementation for LBaaS

2014-10-15 Thread Vijay Bhamidipati
Hi,


A few months ago, as part of moving from legacy systems to Openstack, there
arose a requirement to support SSL APIs in our Openstack cloud
infrastructure at Ebay/Paypal. While the new v2 LBaaS API with its
considerable design improvements is in the process of addressing the SSL
requirements of LBaaS deployments, it is still under development, and we had
to deploy a solution to address our immediate needs more quickly.


There was a previous effort upstream [1] towards this, but that was
abandoned. Consequently, we came up with a different design for the LBaaS
SSL API that best suited our current requirements, and developed an interim
implementation that we currently have deployed on havana, but which can be
ported to later releases (icehouse/juno) with minimal changes, since it's
designed to be modularly independent and intersects existing code paths at
relatively few points.


We think that this API will be useful to the Openstack community and to
companies that are currently running Openstack clouds with LBaaS and need
SSL API support until LBaaS v2 comes out in Kilo or later, hence this mail
containing pointers to the code and instructions.


We have put up the code on github at:


Neutron:

——

https://github.com/vijayendrabvs/ssl-python-neutronclient.git

branch: stable/havana


LBaaS Driver:

——

https://github.com/vijayendrabvs/ssl-f5-neutron-lbaas.git

branch: havana


CLI:

——

https://github.com/vijayendrabvs/ssl-neutron.git

branch: master



The CLI and API documentation is at:


https://github.com/vijayendrabvs/ssl-neutron/blob/stable/havana/SSL-API-README



We worked with the F5 Openstack team who provided their F5 LBaaS driver to
work with our deployment of F5 LBs. We added the necessary modules in their
driver to plumb SSL entities on the LB, in the F5 plugin and agent driver.


F5 has currently released its drivers under the Mozilla license, and is in
the process of releasing the same under Apache License to align with the
rest of Openstack code.


We do not currently intend to commit this code to upstream stable havana,
unless the community thinks that doing so can be useful and pushes for it.


At the time we developed this solution, HAProxy hadn’t come out with
version 1.5 yet and thus didn’t support SSL, and lack of cycles meant we
weren’t able to implement a reference implementation for HAProxy as well.
That said, doing so would build on the same approach we use with F5, in
reconfiguring HAProxy from the HAProxy driver to setup SSL termination on
VIPs.


A point to note is that we have relied on using the neutron db to store our
certs/cert chains/cert keys. While this meets our current requirements, we
wish to emphasize that this may not suit all deployments. The new LBaaS v2
API is designed to integrate with Barbican and thus address such
requirements.


Finally, going forward, we will need to write migration scripts once the
LBaaS v2 API is ready, and deploying v1’s SSL API will get us started
towards that goal.


Please let us know if you have any questions regarding the code or
deploying it - we would be happy to help!


Thanks,

Regards,

Vijay B


[1] https://review.openstack.org/#/c/74031/5
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Angus Lees
On Wed, 15 Oct 2014 09:51:03 AM Clint Byrum wrote:
 Excerpts from Vishvananda Ishaya's message of 2014-10-15 07:52:34 -0700:
  On Oct 14, 2014, at 1:12 PM, Clint Byrum cl...@fewbar.com wrote:
   Excerpts from Lars Kellogg-Stedman's message of 2014-10-14 12:50:48 
-0700:
   On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:
   I think the above strategy is spot on. Unfortunately, that's not how
   the
   Docker ecosystem works.
   
   I'm not sure I agree here, but again nobody is forcing you to use this
   tool.
   
   operating system that the image is built for. I see you didn't respond
   to my point that in your openstack-containers environment, you end up
   with Debian *and* Fedora images, since you use the official MySQL
   dockerhub image. And therefore you will end up needing to know
   sysadmin specifics (such as how network interfaces are set up) on
   multiple operating system distributions.  
   I missed that part, but ideally you don't *care* about the
   distribution in use.  All you care about is the application.  Your
   container environment (docker itself, or maybe a higher level
   abstraction) sets up networking for you, and away you go.
   
   If you have to perform system administration tasks inside your
   containers, my general feeling is that something is wrong.
   
   Speaking as a curmudgeon ops guy from back in the day.. the reason
   I choose the OS I do is precisely because it helps me _when something
   is wrong_. And the best way an OS can help me is to provide excellent
   debugging tools, and otherwise move out of the way.
   
   When something _is_ wrong and I want to attach GDB to mysqld in said
   container, I could build a new container with debugging tools installed,
   but that may lose the very system state that I'm debugging. So I need to
   run things inside the container like apt-get or yum to install GDB.. and
   at some point you start to realize that having a whole OS is actually a
   good thing even if it means needing to think about a few more things up
   front, such as which OS will I use? and what tools do I need
   installed
   in my containers?
   
   What I mean to say is, just grabbing off the shelf has unstated
   consequences.
  
  If this is how people are going to use and think about containers, I would
  submit they are a huge waste of time. The performance value they offer is
  dramatically outweighed by the flexibility and existing tooling that exists
  for virtual machines. As I state in my blog post[1], if we really want to
  get value from containers, we must convert to the single application per
  container view. This means having standard ways of doing the above either
  on the host machine or in a debugging container that is as easy (or easier)
  than the workflow you mention. There are not good ways to do this yet, and
  the community hand-waves it away, saying things like, “well you could …”.
  “You could” isn’t good enough. The result is that a lot of people that are
  using containers today are doing fat containers with a full OS.
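
  (For illustration only, a single-application image can be this small -- a
  hypothetical memcached Dockerfile, assuming the Ubuntu 14.04 base image and
  the memcache user that Ubuntu's memcached package creates:

      FROM ubuntu:14.04
      RUN apt-get update && apt-get install -y memcached
      USER memcache
      EXPOSE 11211
      ENTRYPOINT ["/usr/bin/memcached", "-v"]
  )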
 
 I think we really agree.
 
 What the container universe hasn't worked out is all the stuff that the
 distros have worked out for a long time now: consistency.
 
 I think it would be a good idea for containers' filesystem contents to
 be a whole distro. What's at question in this thread is what should be
 running. If we can just chroot into the container's FS and run apt-get/yum
 install our tools, and then nsenter and attach to the running process,
 then huzzah: I think we have best of both worlds.

Erm, yes that's exactly what you can do with containers (docker, lxc, and 
presumably any other use of containers with a private/ephemeral filesystem).
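
For example, with docker (the container name 'mysql' and the use of apt-get
below are just assumptions for illustration):

  # attach to the running container's namespaces from the host
  PID=$(docker inspect --format '{{.State.Pid}}' mysql)
  sudo nsenter --target "$PID" --mount --uts --ipc --net --pid

  # once "inside", the container's own distro tooling is available
  apt-get update && apt-get install -y gdb
  gdb -p 1    # mysqld is PID 1 in a single-process container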

To others here:

There are a lot of strongly held opinions being expressed on this thread, and
a number of them appear to be based on misinformation or a lack of
understanding of the technology and goals.  I'm happy to talk over
IRC/VC/whatever with anyone about why I think this sort of stuff is worth
pursuing (and I assume there are plenty of others too).  I'd also suggest that
reading the docs, or talking in person at your local docker/devops meetup,
would be a more efficient way of learning than this to-and-fro on the os-dev
mailing list...

 To the container makers: consider that things can and will go wrong,
 and the answer may already exist as a traditional tool, and not be
 “restart the container”.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Allow hostname for nodes in Ring

2014-10-15 Thread Osanai, Hisashi

Thanks for your advice.

On Thursday, October 16, 2014 2:25 AM, Pete Zaitcev wrote:
 I don't know if the bug report is all that necessary or useful.
 The scope of the problem is well defined without, IMHO.

I would really like to have clear rules for this, but your suggestion strikes
a good balance, so I will follow it.

Thanks again!
Hisashi Osanai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Angus Lees
On Wed, 15 Oct 2014 12:40:16 AM Fox, Kevin M wrote:
 Systemd has invested a lot of time/effort to be able to relaunch failed
 services, support spawning and maintaining unix sockets and services across
 them, etc, that you'd have to push out of and across docker containers. All
 of that can be done, but why reinvent the wheel? Like you said, pacemaker
 can be made to make it all work, but I have yet to see a way to deploy
 pacemaker services anywhere near as easy as systemd+yum makes it. (Thanks
 be to redhat. :)

You should also consider fleet if you want a systemd approach to containers.
It's basically a cluster-wide systemd that typically (but doesn't have to)
starts/restarts docker containers on various hosts.

I tried it for a short while and it isn't bad.  The verbosity and 
repetitiveness of the systemd files was a bit annoying, but that would be easy 
to script away.  I did like how simple it was, and the ability to express 
dependencies between systemd entities.
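
For reference, a fleet unit is just a systemd unit with an optional [X-Fleet]
section. A hypothetical template unit memcached@.service might look roughly
like this (the service and image names are only illustrative):

  [Unit]
  Description=memcached container %i
  After=docker.service
  Requires=docker.service

  [Service]
  ExecStartPre=-/usr/bin/docker rm -f memcached-%i
  ExecStart=/usr/bin/docker run --name memcached-%i -p 11211:11211 memcached
  ExecStop=/usr/bin/docker stop memcached-%i

  [X-Fleet]
  Conflicts=memcached@*.service

You then schedule instances across the cluster with something like
fleetctl start memcached@1.service memcached@2.service; the Conflicts line
keeps them on separate machines.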

Note that fleet is essentially systemd managing containers from the outside - 
not running systemd inside the container.  So in many ways it's a 
repeat/reinforcement of the same conversation we're already having.

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] My notes and experiences about OSv on OpenStack

2014-10-15 Thread Gareth
I'd like to introduce OSv to all OpenStack developers. The OSv team focuses
on the performance of KVM-based guest OSes for cloud applications, and I'm
interested in it because of their careful work on optimizing all the details.
I have also been working on deploying OSv in an OpenStack environment.
However, since this is a personal interest that I pursue outside of work, my
progress is pretty slow, so I want to share my experience and hope other
engineers will join in:

# OSv highlights in my mind

1. Super fast boot times mean nearly zero-downtime services, an alternative
to changing flavors dynamically, and shorter deployment times for instances
on a KVM-based PaaS platform.

2. Great work on performance in general; cloud engineers can borrow from
their experience tuning the guest OS.

3. Better JVM performance. There is plenty of overhead and redundancy across
the host OS / guest OS / JVM stack, and reducing it could help Java
applications perform closer to bare metal.

# Enabling OSv on OpenStack

Actually, there should not be any big problems. The steps are to build an
OSv qcow2 image first and then boot it via Nova. You may hit some issues
because the OSv image needs fairly new Qemu features, such as
virtio-rng-io/vhost, and the enable-kvm flag is necessary.
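
For anyone who wants to try it, the workflow is roughly the following (the
image file name, flavor and network ID are placeholders, and the
hw_rng_model property is one way to ask Nova for a virtio RNG device if your
deployment supports it):

  glance image-create --name osv-memcached --disk-format qcow2 \
      --container-format bare --file osv-memcached.qcow2 \
      --property hw_rng_model=virtio
  nova boot --image osv-memcached --flavor m1.small \
      --nic net-id=<private-net-uuid> osv-test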

Fortunately, I didn't hit any problems with networking and Neutron (I had
actually expected OpenStack networking might hold me up for a long time).
OSv needs a tap device, Neutron handles that well, and I could then reach
the OSv services without trouble.

# OSv based demo

The work I have finished so far is only a memcached cluster, and the result
is clear: the memcached throughput of an OSv-based instance is about 3 times
that of a traditional virtual machine, and roughly 90% of the performance on
the host OS[0][1]. Since their memcached work is quite mature, consider OSv
if you need to build memcached instances.

Another valuable demo cluster would be Hadoop. When talking about Hadoop on
OpenStack, the most frequently asked question is about performance on
virtual machines. It is known that a newer Qemu version helps with disk I/O
performance[2]. But how much overhead is there in the JVM/guest OS overlap?
I would love to find out, but I don't have that much time.

Above all, the purpose of this thread is to raise an interesting topic
about cloud performance, in the hope of seeing more and more efficient
clusters based on OpenStack in production use. I can't spend much time on
OSv because it is just a personal interest, but I believe it is a valuable
approach and a topic worth exploring.

[0] http://paste.openstack.org/show/121382/
[1] https://github.com/cloudius-systems/osv/wiki/OSv-Case-Study:-Memcached
[2]
https://www.openstack.org/summit/openstack-summit-atlanta-2014/session-videos/presentation/performance-of-hadoop-on-openstack

-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Russell Bryant
On 10/15/2014 06:30 PM, Jay Pipes wrote:
 
 
 On 10/15/2014 04:50 PM, Florian Haas wrote:
 On Wed, Oct 15, 2014 at 9:58 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 10/15/2014 03:16 PM, Florian Haas wrote:

 On Wed, Oct 15, 2014 at 7:20 PM, Russell Bryant rbry...@redhat.com
 wrote:

 On 10/13/2014 05:59 PM, Russell Bryant wrote:

 Nice timing.  I was working on a blog post on this topic.


 which is now here:

 http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/



 I am absolutely loving the fact that we are finally having a
 discussion in earnest about this. i think this deserves a Design
 Summit session.

 If I may weigh in here, let me share what I've seen users do and what
 can currently be done, and what may be supported in the future.

 Problem: automatically ensure that a Nova guest continues to run, even
 if its host fails.

 (That's the general problem description and I don't need to go into
 further details explaining the problem, because Russell has done that
 beautifully in his blog post.)

 Now, what are the options?

 (1) Punt and leave it to the hypervisor.

 This essentially means that you must use a hypervisor that already has
 HA built in, such as VMware with the VCenter driver. In that scenario,
 Nova itself neither deals with HA, nor exposes any HA switches to the
 user. Obvious downside: not generic, doesn't work with all
 hypervisors, most importantly doesn't work with the most popular one
 (libvirt/KVM).

 (2) Deploy Nova nodes in pairs/groups, and pretend that they are one
 node.

 You can already do that by overriding host in nova-compute.conf,
 setting resume_guests_state_on_host_boot, and using VIPs with
 Corosync/Pacemaker. You can then group these hosts in host aggregates,
 and the user's scheduler hint to point a newly scheduled guest to such
 a host aggregate becomes, effectively, the keep this guest running at
 all times flag. Upside: no changes to Nova at all, monitoring,
 fencing and recovery for free from Corosync/Pacemaker. Downsides:
 requires vendors to automate Pacemaker configuration in deployment
 tools (because you really don't want to do those things manually).
 Additional downside: you either have some idle hardware, or you might
 be overcommitting resources in case of failover.

 (3) Automatic host evacuation.

 Not supported in Nova right now, as Adam pointed out at the top of the
 thread, and repeatedly shot down. If someone were to implement this,
 it would *still* require that Corosync/Pacemaker be used for
 monitoring and fencing of nodes, because re-implementing this from
 scratch would be the reinvention of a wheel while painting a bikeshed.

 (4) Per-guest HA.

 This is the idea of just doing nova boot --keep-this running, i.e.
 setting a per-guest flag that still means the machine is to be kept up
 at all times. Again, not supported in Nova right now, and probably
 even more complex to implement generically than (3), at the same or
 greater cost.

 I have a suggestion to tackle this that I *think* is reasonably
 user-friendly while still bearable in terms of Nova development
 effort:

 (a) Define a well-known metadata key for a host aggregate, say ha.
 Define that any host aggregate that represents a highly available
 group of compute nodes should have this metadata key set.

 (b) Then define a flavor that sets extra_specs ha=true.

 Granted, this places an additional burden on distro vendors to
 integrate highly-available compute nodes into their deployment
 infrastructure. But since practically all of them already include
 Pacemaker, the additional scaffolding required is actually rather
 limited.
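
  For concreteness, with the AggregateInstanceExtraSpecsFilter enabled in the
  scheduler, the setup would look roughly like this (all names here are only
  illustrative):

    nova aggregate-create ha-hosts
    nova aggregate-add-host ha-hosts compute-ha-1
    nova aggregate-set-metadata ha-hosts ha=true
    nova flavor-create m1.small.ha auto 2048 20 1
    nova flavor-key m1.small.ha set aggregate_instance_extra_specs:ha=true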


 Or:

 (5) Let monitoring and orchestration services deal with these use
 cases and
 have Nova simply provide the primitive API calls that it already does
 (i.e.
 host evacuate).

 That would arguably lead to an incredible amount of wheel reinvention
 for node failure detection, service failure detection, etc. etc.
 
 How so? (5) would use existing wheels for monitoring and orchestration
 instead of writing all new code paths inside Nova to do the same thing.

Right, there may be some confusion here ... I thought you were both
agreeing that the use of an external toolset was a good approach for the
problem, but Florian's last message makes that not so clear ...

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-15 Thread Russell Bryant
On 10/15/2014 05:07 PM, Florian Haas wrote:
 On Wed, Oct 15, 2014 at 10:03 PM, Russell Bryant rbry...@redhat.com wrote:
 Am I making sense?

 Yep, the downside is just that you need to provide a new set of flavors
 for ha vs non-ha.  A benefit though is that it's a way to support it
 today without *any* changes to OpenStack.
 
 Users are already very used to defining new flavors. Nova itself
 wouldn't even need to define those; if the vendor's deployment tools
 defined them it would be just fine.

Yes, I know Nova wouldn't need to define it.  I was saying I didn't like
that it was required at all.

 This seems like the kind of thing we should also figure out how to offer
 on a per-guest basis without needing a new set of flavors.  That's why I
 also listed the server tagging functionality as another possible solution.
 
 This still doesn't do away with the requirement to reliably detect
 node failure, and to fence misbehaving nodes. Detecting that a node
 has failed, and fencing it if unsure, is a prerequisite for any
 recovery action. So you need Corosync/Pacemaker anyway.

Obviously, yes.  My post covered all of that directly ... the tagging
bit was just additional input into the recovery operation.

 Note also that when using an approach where you have physically
 clustered nodes, but you are also running non-HA VMs on those, then
 the user must understand that the following applies:
 
 (1) If your guest is marked HA, then it will automatically recover on
 node failure, but
 (2) if your guest is *not* marked HA, then it will go down with the
 node not only if it fails, but also if it is fenced.
 
 So a non-HA guest on an HA node group actually has a slightly
 *greater* chance of going down than a non-HA guest on a non-HA host.
 (And let's not get into don't use fencing then; we all know why
 that's a bad idea.)
 
 Which is why I think it makes sense to just distinguish between
 HA-capable and non-HA-capable hosts, and have the user decide whether
 they want HA or non-HA guests simply by assigning them to the
 appropriate host aggregates.

Very good point.  I hadn't considered that.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral]

2014-10-15 Thread Rich


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread Clint Byrum
Excerpts from Angus Lees's message of 2014-10-15 17:30:52 -0700:
 On Wed, 15 Oct 2014 09:51:03 AM Clint Byrum wrote:
  Excerpts from Vishvananda Ishaya's message of 2014-10-15 07:52:34 -0700:
   On Oct 14, 2014, at 1:12 PM, Clint Byrum cl...@fewbar.com wrote:
Excerpts from Lars Kellogg-Stedman's message of 2014-10-14 12:50:48 
 -0700:
On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:
I think the above strategy is spot on. Unfortunately, that's not how
the
Docker ecosystem works.

I'm not sure I agree here, but again nobody is forcing you to use this
tool.

operating system that the image is built for. I see you didn't respond
to my point that in your openstack-containers environment, you end up
with Debian *and* Fedora images, since you use the official MySQL
dockerhub image. And therefore you will end up needing to know
sysadmin specifics (such as how network interfaces are set up) on
multiple operating system distributions.  
I missed that part, but ideally you don't *care* about the
distribution in use.  All you care about is the application.  Your
container environment (docker itself, or maybe a higher level
abstraction) sets up networking for you, and away you go.

If you have to perform system administration tasks inside your
containers, my general feeling is that something is wrong.

Speaking as a curmudgeon ops guy from back in the day.. the reason
I choose the OS I do is precisely because it helps me _when something
is wrong_. And the best way an OS can help me is to provide excellent
debugging tools, and otherwise move out of the way.

When something _is_ wrong and I want to attach GDB to mysqld in said
container, I could build a new container with debugging tools installed,
but that may lose the very system state that I'm debugging. So I need to
run things inside the container like apt-get or yum to install GDB.. and
at some point you start to realize that having a whole OS is actually a
good thing even if it means needing to think about a few more things up
front, such as “which OS will I use?” and “what tools do I need installed
in my containers?”

What I mean to say is, just grabbing off the shelf has unstated
consequences.
   
   If this is how people are going to use and think about containers, I would
   submit they are a huge waste of time. The performance value they offer is
   dramatically outweighed by the flexibility and existing tooling that exists
   for virtual machines. As I state in my blog post[1], if we really want to
   get value from containers, we must convert to the single application per
   container view. This means having standard ways of doing the above either
   on the host machine or in a debugging container that is as easy (or easier)
   than the workflow you mention. There are not good ways to do this yet, and
   the community hand-waves it away, saying things like, “well you could …”.
   “You could” isn’t good enough. The result is that a lot of people that are
   using containers today are doing fat containers with a full OS.
  
  I think we really agree.
  
  What the container universe hasn't worked out is all the stuff that the
  distros have worked out for a long time now: consistency.
  
  I think it would be a good idea for containers' filesystem contents to
  be a whole distro. What's at question in this thread is what should be
  running. If we can just chroot into the container's FS and run apt-get/yum
  install our tools, and then nsenter and attach to the running process,
  then huzzah: I think we have best of both worlds.
 
 Erm, yes that's exactly what you can do with containers (docker, lxc, and 
 presumably any other use of containers with a private/ephemeral filesystem).
 

The point I was trying to make is that this case was not being addressed
by the "don't run init in a container" crowd. I am in that crowd, and thus
wanted to make the point: this is how I think it will work when I finally
get around to trying containers for a real workload.

 To others here:
 
 There are a lot of strongly held opinions being expressed on this thread,
 and a number of them appear to be based on misinformation or a lack of
 understanding of the technology and goals.  I'm happy to talk over
 IRC/VC/whatever with anyone about why I think this sort of stuff is worth
 pursuing (and I assume there are plenty of others too).  I'd also suggest
 that reading the docs, or talking in person at your local docker/devops
 meetup, would be a more efficient way of learning than this to-and-fro on
 the os-dev mailing list...
 

I think you may have been mistaken about the purpose of this list. It is
for OpenStack developers to discuss exactly the things you're suggesting we
not discuss on this list. You've done a great job of helping us
clear these things up. Don't turn around and tell us now don't ever do

[openstack-dev] [Neutron][DVR] Openstack Juno: how to configure dvr in Network-Node and Compute-Node?

2014-10-15 Thread zhang xiaobin

Could anyone help with this?

OpenStack Juno introduces a new Neutron feature called Distributed Virtual
Routing (DVR), but how should it be configured on the network node and the
compute nodes? The documentation on openstack.org just says
router_distributed = True, which is far from enough. Could anyone point to
detailed instructions on how to configure it?
When we were adding a port, the router reported errors like:
AttributeError: 'Ml2Plugin' object has no attribute 'update_dvr_port_binding'
Also, is DVR per project, or per compute node?
Thanks in advance!




zhang xiaobin

This e-mail may contain confidential, copyright and/or privileged information. 
If you are not the addressee or authorized to receive this, please inform us of 
the erroneous delivery by return e-mail, and you should delete it from your 
system and may not use, copy, disclose or take any action based on this e-mail 
or any information herein. Any opinions expressed by sender hereof do not 
necessarily represent those of SUNING COMMERCE GROUP CO., LTD.,SUNING COMMERCE 
GROUP CO., LTD.,does not guarantee that this email is secure or free from 
viruses. No liability is accepted for any errors or omissions in the contents 
of this email, which arise as a result of email transmission. Unless expressly 
stated,this email is not intended to form a binding contract.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] My notes and experiences about OSv on OpenStack

2014-10-15 Thread Gareth
yes :)

My plan was that, if the talk had been accepted, I could have proposed this
as a formal project at Intel. But it didn't make it...

On Thu, Oct 16, 2014 at 12:53 PM, Zhipeng Huang zhipengh...@gmail.com
wrote:

 Hi, I'm also interested in it. You submitted a talk about it to Paris
 Summit right?

 On Thu, Oct 16, 2014 at 10:34 AM, Gareth academicgar...@gmail.com wrote:


 Here is introducing OSv to all OpenStack developers. OSv team is focusing
 on performance of KVM based guest OS, or cloud application. I'm interested
 in it because of their hard work on optimizing all details. And I had also
 worked on deploying OSv on OpenStack environment. However, since my work is
 only for private interests in off working time, my progress is pretty slow.
 So I have to share my experience and hope other engineers could join it:

 # OSv highlights in my mind

 1, Super fast booting time means nearly zero down time services, an
 alternative way to dynamic flavor changing and time improvement for
 deploying instances in KVM based PaaS platform. 2, Great work on
 performance. Cloud engineers could borrow experience from their work on
 guest OS. 3, Better performance on JVM. We could imagine there are many
 overhead and redundancy in host OS/guest OS/JVM. Fixing that could help
 Java applications perform closer to bare-metal.

 # Enabling OSv on OpenStack

 Actually there should not be any big problems. The steps are that
 building OSv qcow2 image first and boot it via Nova then. You may face some
 problems because OSv image need many new Qemu features, such as
 virtio-rng-io/vhost and enable-kvm flag is necessary.

 Fortunately, I don't meet any problems with network, Neutron (actually I
 thought before network in OpenStack maybe hang me for a long time). OSv
 need a tap device and Neutron does good job on it. And then I could access
 OSv service very well.

 # OSv based demo

 The work I finished is only a memcached cluster. And the result is
 obvious: memory throughout of OSv based instance has 3 times than it in
 traditional virtual machines, and 90% of performance on host OS[0][1].
 Since their work on memcached is quite mature, consider OSv if you need
 build memcached instance.

 Another valuable demo cluster is Hadoop. When talking about Hadoop on
 OpenStack, the topic asked most frequently is the performance on virtual
 machines. A known experience is higher version Qemu would help fix disk I/O
 performance[2]. But how  does the overlap in JVM/guest OS? I would love to
 find that, but don't have so much time.

 After of all, the purpose of this thread is to bring an interesting topic
 on cloud performance and hope more and more efficient clusters based on
 OpenStack (in production use). I don't have so much time on OSv because
 this just is my personal interest, but I could prove OSv is a valuable way
 and topic.

 [0] http://paste.openstack.org/show/121382/
 [1]
 https://github.com/cloudius-systems/osv/wiki/OSv-Case-Study:-Memcached
 [2]
 https://www.openstack.org/summit/openstack-summit-atlanta-2014/session-videos/presentation/performance-of-hadoop-on-openstack

 --
 Gareth

 *Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
 *OpenStack contributor, kun_huang@freenode*
 *My promise: if you find any spelling or grammar mistakes in my email
 from Mar 1 2013, notify me *
 *and I'll donate $1 or ¥1 to an open organization you specify.*

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Zhipeng Huang
 Research Assistant
 Mobile Ad-Hoc Network Lab, Calit2
 University of California, Irvine
 Email: zhipe...@uci.edu
 Office: Calit2 Building Room 2402
 OpenStack, OpenDaylight, OpenCompute affcienado

 --
 You received this message because you are subscribed to the Google Groups
 OSv Development group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to osv-dev+unsubscr...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.




-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev