Re: [openstack-dev] [nova][python-novaclient] microversion implementation on client side

2015-05-06 Thread Alex Xu
Thanks, Devananda, I read the Ironic spec and it covers almost all the cases
I'm looking for. The only thing we're missing in Nova is returning the max/min
version in a header when Nova can't process the requested version.
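For illustration, the Ironic spec has the server advertise its supported range in response headers when it rejects a requested version. A Nova analogue might look like the exchange below; the header names are illustrative (they mirror Ironic's X-OpenStack-Ironic-API-Minimum/Maximum-Version pair), not an existing Nova API:

```http
GET /v2.1/servers HTTP/1.1
X-Compute-API-Version: 2.99

HTTP/1.1 406 Not Acceptable
X-Compute-API-Minimum-Version: 2.1
X-Compute-API-Maximum-Version: 2.12
```

A client seeing the 406 can then retry with a version inside the advertised range, or fail fast with a clear error message.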

2015-04-28 15:38 GMT+08:00 Devananda van der Veen devananda@gmail.com:

 FWIW, we enumerated the use-cases and expected behavior for all
 combinations of server [pre versions, older version, newer version]
 and client [pre versions, older version, newer version, user-specified
 version], in this informational spec:


 http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html#proposed-change

 Not all of that is implemented yet within our client, but the
 auto-negotiation of version is done. While our clients probably don't
 share any code, maybe something here can help:


 http://git.openstack.org/cgit/openstack/python-ironicclient/tree/ironicclient/common/http.py#n72

 -Deva

 On Mon, Apr 27, 2015 at 2:49 AM, John Garbutt j...@johngarbutt.com
 wrote:
  I see these changes as really important.
 
  We need to establish good patterns other SDKs can copy.
 
  On 24 April 2015 at 12:05, Alex Xu sou...@gmail.com wrote:
  2015-04-24 18:15 GMT+08:00 Andrey Kurilin akuri...@mirantis.com:
  When the user executes a command without --os-compute-version, the nova
   client should discover which versions the nova server supports, then
   choose the latest version supported by both the client and the server.
 
  In that case, why can X-Compute-API-Version accept a latest value? Also,
  such discovery will require an extra request to the API side for every
  client call.
 
 
  I think it is convenient in some cases, e.g. making it easier for users to
  try the nova API with code that accesses the API directly. Yes, it needs one
  more request. But without discovery we can't ensure the client supports the
  server; the client may be so old that it doesn't even support the server's
  minimum version. For a better user experience, I think it is worth
  discovering the version. And we already call keystone on each nova client
  CLI call, so the extra request is acceptable.
 
  We might need to extend the API to make this easier, but I think we
  need to come up with a simple and efficient pattern here.
 
 
  Case 1:
  Existing python-novaclient calls, now going to v2.1 API
 
  We can look for the transitional entry of computev21, as mentioned
  above, but it seems fair to assume v2.1 and v2.0 are accessed from the
  same service catalog entry of compute, by default (eventually).
 
   Let's be optimistic about what the cloud supports, and request the latest
   version from v2.1.
 
   If it's a v2.0-only API endpoint, we will not get back a version header
   with the response; we could error out if the user requested a v2.x
   min_version via the CLI parameters.
 
  In most cases, we get the latest return values, and all is well.
 
 
  Case 2:
  User wants some info they know was added to the response in a specific
  microversion
 
  We can request latest and error out if we don't get a new enough
  version to meet the user's min requirement.
 
 
  Case 3:
  Adding support for a new request added in a microversion
 
   We could just send latest and assume the new functionality, then
   raise an error when we get a bad request (or similar), and check the
   version header to see if that was the cause of the problem, so we can
   say why it failed.
 
   If it's supported, everything just works.
 
   If the user requests a specific version before it was supported, we
   should error out as not supported, I guess?
 
   In a way it would be cleaner if we had a way for the client to say
   latest but requires 2.3, so you get a bad version request if your
   minimum requirement is not respected; that's much clearer than
   misinterpreting random errors that you might generate. But I guess
   it's not totally required here.
 
 
  Would all that work? It should avoid an extra API call to discover the
  specific version we have available.
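The auto-negotiation discussed above (pick the newest version both sides support, error out if the ranges don't overlap) is pure logic that can be sketched independently of any HTTP plumbing. The function name and tuple representation here are illustrative, not python-novaclient's actual implementation:

```python
# Illustrative sketch of microversion auto-negotiation; names are
# hypothetical, not the real python-novaclient code.

def parse(v):
    """'2.12' -> (2, 12), so versions compare numerically, not as strings."""
    major, minor = v.split(".")
    return (int(major), int(minor))

def negotiate(client_min, client_max, server_min, server_max):
    """Return the newest version both sides support, or None if none exists."""
    lo = max(parse(client_min), parse(server_min))
    hi = min(parse(client_max), parse(server_max))
    if lo > hi:
        return None  # ranges don't overlap: client too old or too new
    return "%d.%d" % hi

# e.g. client speaks 2.1-2.3, server speaks 2.2-2.30 -> agree on 2.3
print(negotiate("2.1", "2.3", "2.2", "2.30"))
```

A None result corresponds to the "client too old for server" case in the thread, where the CLI should fail with an explicit version-mismatch message rather than a confusing API error.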
 
   '--os-compute-version=None' can be supported; that means the min
    version the server supports will be returned.
 
   From my point of view '--os-compute-version=None' is equal to not
   specifying a value. Maybe it would be better to accept the value min
   for the os-compute-version option.
 
   I think '--os-compute-version=None' means not setting the version
   request header when sending the API request to the server. The server
   behavior is: if no version is specified, the min version is used.
 
   --os-compute-version=v2 means no version specified, I guess?
 
  Can we go back to the use cases here please?
  What do the users need here and why?
 
 
   3. If the microversion is not supported, but the user calls a command
    with --os-compute-version, this should fail.
 
   IMO, it should be implemented on the API side (return BadRequest when
   the X-Compute-API-Version header is present in v2)
 
   V2 is already deployed now, and doesn't do that.
 
   No matter what happens we need to fix that.
 
   Hmm, I'm not sure, because GET '/v2/' can already be used to discover
   microversions 

Re: [openstack-dev] [neutron][api] Extensions out, Micro-versions in

2015-05-06 Thread Salvatore Orlando
Thanks Kevin,

answers inline.

On 6 May 2015 at 00:28, Fox, Kevin M kevin@pnnl.gov wrote:

  so... as an operator looking at #3, if I need to support LBaaS, I'm
 getting pushed to run more and more services, like Octavia, plus a
 neutron-lbaas service, plus neutron? This seems like an operator
 scalability issue... What benefit does splitting out the advanced services
 into their own services have?


You have a valid point here. In the past I was keen on insisting that
neutron should be a management-layer-only service for most networking
services. However, the consensus seems to be moving toward a
microservices-style architecture. It would be interesting to get some
feedback on the additional operational burden of managing a plethora of
services, even if it is worth noting that when one deploys neutron with its
reference architecture there are already plenty of moving parts.

Regardless, I need to slap your hand, because this discussion is not really
pertinent to this thread, which is specifically about having a strategy for
the Neutron API. I would be happy to have a separate thread for defining a
strategy for neutron services. I'm pretty sure Doug will be more than happy
to slap your hands too.


 Thanks,
 Kevin
  --
 *From:* Salvatore Orlando [sorla...@nicira.com]
 *Sent:* Tuesday, May 05, 2015 3:13 PM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [neutron][api] Extensions out, Micro-versions
 in

   There have now been a few iterations on the specification for Neutron
 micro-versioning [1].
 It seems that no-one in the community opposes introducing versioning. In
 particular API micro-versioning as implemented by Nova and Ironic seems a
 decent way to evolve the API incrementally.

  What the developer community seems not yet convinced about is moving
 away from extensions. It seems everybody realises the flaws of evolving the
 API through extensions, but there are understandable concerns regarding
 impact on plugins/drivers as well as the ability to differentiate, which is
 something quite dear to several neutron teams. I tried to consider all
 those concerns and feedback received; hopefully everything has been
 captured in a satisfactory way in the latest revision of [1].
 With this ML post I also seek feedback from the API-wg concerning the
 current proposal, whose salient points can be summarised as follows:

  #1 Extensions are no longer part of the Neutron API.

  Evolution of the API will now be handled through versioning. Once
 microversions are introduced:
- current extensions will be progressively moved into the Neutron
 unified API
- no more extension will be accepted as part of the Neutron API

  #2 Introduction of features for addressing diversity in Neutron plugins

  It is possible that the combination of neutron plugins chosen by the
 operator won't be able to support the whole Neutron API. For this reason a
 concept of feature is included. What features are provided depends on the
 plugins loaded. The list of features is hardcoded as strictly dependent on
 the Neutron API version implemented by the server. The specification also
 mandates a minimum set of features every neutron deployment must provide
 (those would be the minimum set of features needed for integrating Neutron
 with Nova).
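As a purely hypothetical sketch (the real format would be defined by the spec [1], not here, and the feature names below are made up), a deployment's plugin-dependent feature set could be advertised alongside the API version in the discovery document:

```json
{
  "version": "2.0",
  "features": {
    "mandatory": ["network", "subnet", "port", "security-group"],
    "optional": ["router-ha", "qos", "metering"]
  }
}
```

Here the mandatory list would be the hardcoded minimum needed for Nova integration, while the optional list varies with the plugins the operator has loaded.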

  #3 Advanced services are still extensions

  This is a temporary measure, as the APIs for load balancing, VPN, and Edge
 Firewall are still served through the neutron WSGI. As these APIs will
 eventually live independently, it does not make sense to version them with
 the Neutron API.

  #4 Experimenting in the API

  One thing that has plagued Neutron in the past is the impossibility of
 getting people to reach any sort of agreement over the shape of certain
 APIs. With the proposed plan we encourage developers to submit experimental
 APIs. Experimental APIs are unversioned and no guarantee is made regarding
 deprecation or backward compatibility. Also, they're optional, as a deployer
 can turn them off. While there are caveats, like forever-experimental APIs,
 this will enable developers to address user feedback during the APIs'
 experimental phase. The Neutron community and the API-wg can provide plenty
 of useful feedback, but ultimately it is user feedback that determines whether
 an API proved successful or not. Please note that the current proposal goes
 in a direction different from that approved in Nova when it comes to
 experimental APIs [3]

  #5 Plugin/Vendor specific APIs

  Neutron is without doubt the project with the highest number of 3rd-party
 (OSS and commercial) integrations. After all, it was mostly vendors who
 started this project.
 Vendors [4] use the extension mechanism to expose features of their
 products not covered by the Neutron API, or to provide some sort of
 value-added service.
 The current proposal still allows 3rd parties to attach extensions to the
 neutron API, provided that:
 - they're not considered 

Re: [openstack-dev] [Fuel] Transaction scheme

2015-05-06 Thread Igor Kalnitsky
 First of all I propose to wrap HTTP handlers by begin/commit/rollback

I don't know what you are talking about; we have been wrapping handlers in
a transaction for a long time. Here's the code:
https://github.com/stackforge/fuel-web/blob/2de3806128f398d192d7e31f4ca3af571afeb0b2/nailgun/nailgun/api/v1/handlers/base.py#L53-L84

The issue is that we sometimes perform `.commit()` inside the code
(e.g. `task.execute()`) and therefore it's hard to predict which data
are committed and which are not.

In order to avoid this, we have to declare strict scopes for the different
layers. Yes, we should definitely build on the idea that handlers open a
transaction at the beginning and close it at the end. But that won't
solve all problems, because sometimes we need to commit data before the
handler ends, for instance, committing a task before sending a message
to Astute. Such cases complicate things, and it would be cool if we
could avoid them by refactoring our architecture. Perhaps we could
send tasks to Astute when the handler is done? What do you think?

Thanks,
igor

On Wed, May 6, 2015 at 12:15 PM, Lukasz Oles lo...@mirantis.com wrote:
 On Wed, May 6, 2015 at 10:51 AM, Alexander Kislitsky
 akislit...@mirantis.com wrote:
 Hi!

 The refactoring of transactions management in Nailgun is critically required
 for scaling.

 First of all I propose to wrap HTTP handlers with a begin/commit/rollback
 decorator.
 After that we should introduce a transaction-wrapping decorator into Task
 execute/message calls.
 And the last one is the wrapping of receiver calls.

 As a result we should have begin/commit/rollback calls only in the
 transactions decorator.

 Big +1 for this. I always wondered why we don't have it.



 Also I propose to separate working with DB objects into a separate layer and
 use only high-level Nailgun objects in the code and tests. This work was
 started a long time ago, but is not finished yet.

 On Thu, Apr 30, 2015 at 12:36 PM, Roman Prykhodchenko m...@romcheg.me 
 wrote:

 Hi folks!

 Recently I faced a pretty sad fact that in Nailgun there’s no common
 approach to manage transactions. There are commits and flushes in random
 places of the code and it used to work somehow just because it was all
 synchronous.

 However, after just a few of the subcomponents have been moved to
 different processes, it all started producing races and deadlocks which are
 really hard to resolve because there is absolutely no way to predict how a
 specific transaction is managed but by analyzing the source code. That is
 rather an ineffective and error-prone approach that has to be fixed before
 it becomes uncontrollable.

 Let’s arrange a discussion to design a document which will describe where
 and how transactions are managed, and refactor Nailgun according to it in
 7.0. Otherwise the results may be sad.


 - romcheg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







 --
 Łukasz Oleś




Re: [Openstack] [“Potential Spoofed”] Neutron Load Balancer behavior while autoscaling

2015-05-06 Thread ashish.jain14

Hello,


I just tested manually attaching an instance to a neutron load balancer
pool, but I see the same behavior: an instance attached while JMeter is
pumping messages does not serve any requests. However, the same instance
serves requests when I restart message pumping from JMeter.


PS: I am using haproxy.


Regards
Ashish



From: ashish.jai...@wipro.com ashish.jai...@wipro.com
Sent: Wednesday, May 06, 2015 11:47 AM
To: openstack@lists.openstack.org
Subject: [“Potential Spoofed”] [Openstack] Neutron Load Balancer behavior while 
autoscaling


Hello,


I am using the neutron load balancer along with Heat. I auto-scale instances
and all the new instances are automatically attached to the neutron load
balancer. I am using Apache JMeter to create CPU load.


I run Apache JMeter for 5 minutes to create load; I am able to auto-spawn
instances and attach them to the neutron load balancer, however the load is
not distributed to all the instances (I am using the round-robin policy).


Once the JMeter run is complete I start JMeter again, and this time the load
is distributed evenly to all the instances.


Has anyone seen this behavior? Any clues what could be wrong?


Regards
Ashish

  The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information. If you are not the 
intended recipient, you should  not disseminate, distribute or copy this 
e-mail. Please notify the sender immediately and destroy all copies of this 
message and any attachments. WARNING: Computer viruses can be transmitted via 
email. The recipient should check this email and any attachments  for the 
presence of viruses. The company accepts no liability for any damage caused by 
any virus transmitted by this email. www.wipro.com

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] How to turn tempest CLI tests into python-*client in-tree functional tests

2015-05-06 Thread Chris Dent

On Wed, 6 May 2015, Robert Collins wrote:


It's actually an antipattern. It tells testr that tests are appearing
and disappearing depending on which test entry point a user runs each
time.

testr expects the set of tests to change only when code changes.

So, I fully expect that this pattern is going to lead to wtf moments
now, and likely more in the future.

What's the right forum for discussing the pressures that led to this
hack, so we can do something that works better with the underlying
tooling, rather than in such a disruptive fashion?


I'd appreciate it here (that is, on this list), because from my
perspective there are a lot of embedded assumptions in the way testr
does things and wants the environment to be that aren't immediately
obvious, and that would perhaps be made clearer if you could expand on
the details of what's wrong with this particular hack.

I tried to come up with a specific question here to drive that
illumination a bit more concretely, but a) not enough coffee yet, and b)
mostly I just want to know more detail about the first three
paragraphs above.

Thanks.
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [Openstack] No valid host found

2015-05-06 Thread Vedsar Kushwaha
Thanks for your valuable suggestions. The problem was solved: the OS image I
was booting to launch the OpenStack instance was corrupted.

After installing a fresh OS image, everything is working perfectly fine.

But I didn't understand its connection with the CentOS update. :)

On Sun, May 3, 2015 at 5:55 PM, Amrith Kumar amr...@tesora.com wrote:

  This typically means that the Nova scheduler could not find a host on
 which to launch an instance. Take a look at the various nova logs
 (scheduler or compute) and you'll find more details of the failure there.



 If you can share that with the list, I'm sure we can help you!



 -amrith



 *From:* Vedsar Kushwaha [mailto:vedsarkushw...@gmail.com]
 *Sent:* Sunday, May 03, 2015 5:16 AM
 *To:* openstack@lists.openstack.org
 *Subject:* [Openstack] No valid host found



 I was using Juno on CentOS 7.

 Everything was running perfectly fine. I did a CentOS update.

 After that I'm getting the error No valid host found.

 nova services are running. All compute nodes are also connected to each
 other.

 How should I solve this problem?


 --

 Vedsar Kushwaha

 M.Tech-Computational Science

 Indian Institute of Science




-- 
Vedsar Kushwaha
M.Tech-Computational Science
Indian Institute of Science


[Openstack] CentOS 7 Kilo keystone cannot create endpoint with ImportError: No module named oslo_utils

2015-05-06 Thread walterxj






Hi all,

Has anybody tested Kilo on CentOS 7? I just followed the guide
http://docs.openstack.org/kilo/install-guide/install/yum/content/ and when I
create an endpoint with:

openstack endpoint create \
  --publicurl http://controller:5000/v2.0 \
  --internalurl http://controller:5000/v2.0 \
  --adminurl http://controller:35357/v2.0 \
  --region RegionOne \
  identity

I get the error: ImportError: No module named oslo_utils. I googled for a
long time and noticed that it's something about the namespace change of
oslo.utils (mentioned in https://bugs.launchpad.net/heat/+bug/1423174), which
released a fix for heat, but I can't find any fix released for keystone. Can
anybody give some advice? Thanks a lot.
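The namespace change means code written against the old dotted `oslo.utils` path fails on releases that only ship the flat `oslo_utils` package (and vice versa). Fixes for this class of bug typically use a try-in-order import. Below is a generic, hedged sketch of that pattern; the `import_first` helper is made up, and the demo uses stdlib modules so it runs without oslo installed:

```python
import importlib

def import_first(*candidates):
    """Return the first module from candidates that imports cleanly.

    Illustrates the compatibility-shim pattern used when a library moves
    from a namespace package (oslo.utils) to a flat one (oslo_utils);
    the helper name is hypothetical, not part of any oslo library.
    """
    errors = []
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError as exc:
            errors.append("%s: %s" % (name, exc))
    raise ImportError("no candidate importable: " + "; ".join(errors))

# A real fix would try the new flat namespace first, then the old one:
#   utils = import_first("oslo_utils.importutils", "oslo.utils.importutils")
# Demonstrated here with stdlib modules so the sketch runs anywhere:
mod = import_first("definitely_not_installed_xyz", "json")
print(mod.__name__)  # -> json
```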

walter



Re: [openstack-dev] [Nova][Ironic] Large number of ironic driver bugs in nova

2015-05-06 Thread Lucas Alvares Gomes
Hi

 I noticed last night that there are 23 bugs currently filed in nova
 tagged as ironic related. Whilst some of those are scheduler issues, a
 lot of them seem like things in the ironic driver itself.

 Does the ironic team have someone assigned to work on these bugs and
 generally keep an eye on their driver in nova? How do we get these
 bugs resolved?


Thanks for this call out. I don't think we have anyone specifically
assigned to keep an eye on the Ironic Nova driver; we look at it from time
to time or when someone asks us to in the Ironic channel/ML/etc.
But that said, I think we need to pay more attention to the bugs in Nova.

I've added an item about it to be discussed in the next Ironic
meeting[1]. And in the meantime, I will take a look at some of the bugs
myself.

[1] https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting

Thanks again,
Lucas



Re: [openstack-dev] [Fuel] Transaction scheme

2015-05-06 Thread Alexander Kislitsky
Hi!

The refactoring of transactions management in Nailgun is critically
required for scaling.

First of all I propose to wrap HTTP handlers with a begin/commit/rollback
decorator.
After that we should introduce a transaction-wrapping decorator into Task
execute/message calls.
And the last one is the wrapping of receiver calls.

As a result we should have begin/commit/rollback calls only in the
transactions decorator.

Also I propose to separate working with DB objects into a separate layer and
use only high-level Nailgun objects in the code and tests. This work was
started a long time ago, but is not finished yet.

On Thu, Apr 30, 2015 at 12:36 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Hi folks!

 Recently I faced a pretty sad fact that in Nailgun there’s no common
 approach to manage transactions. There are commits and flushes in random
 places of the code and it used to work somehow just because it was all
 synchronous.

 However, after just a few of the subcomponents have been moved to
 different processes, it all started producing races and deadlocks which are
 really hard to resolve because there is absolutely no way to predict how a
 specific transaction is managed but by analyzing the source code. That is
 rather an ineffective and error-prone approach that has to be fixed before
 it becomes uncontrollable.

 Let’s arrange a discussion to design a document which will describe where
 and how transactions are managed, and refactor Nailgun according to it in
 7.0. Otherwise the results may be sad.


 - romcheg






[Openstack] Swift - Adding S3 Glacier like interface in Swift Swift3 Object Storage

2015-05-06 Thread Bala
I am new to this list so please excuse me if I posted it in wrong list.

We have a tape library which we would like to integrate with the OpenStack
Swift & Swift3 object storage services to provide an S3 interface.

The current file system we have for the library has been integrated with the
Swift storage service and manages the changer robot & tapes.

This works well for writing.

However for reading, loading a tape takes longer when GET requests are
received, in some cases over 5 minutes, and this causes timeout errors. Most
of the data stored on these tapes is archival data. This gets worse when
multiple GET requests are received (multi-user) for objects which are stored
on different tapes.

Due to the longer read times, we are looking to provide an Amazon S3
Glacier-like interface through Swift & Swift3, so that clients can issue a
POST OBJECT RESTORE request and wait for the data to be moved to a temporary
store/cache.

I have come across a similar request

http://openstack-dev.openstack.narkive.com/kI72vk9l/ltfs-integration-with-openstack-swift-for-scenario-like-data-archival-as-a-service

and understand the suggestions.

We would like to provide an S3 Glacier-like interface rather than Swift
storage policies if we can.

I would be grateful if you could kindly advise:

1. How hard is it to change the Swift & Swift3 code base to provide an S3
Glacier-like interface?
2. Can this be done through Swift storage policies alone?
3. Do we have to modify the Swift auditor service to do tape-based checking
rather than object-based?
4. Would the Swift replication service cause frequent tape change requests?

I look forward to your suggestions/advice.

Thank you,

Regards
Bala


Re: [openstack-dev] How to turn tempest CLI tests into python-*client in-tree functional tests

2015-05-06 Thread Sean Dague
On 05/06/2015 04:57 AM, Chris Dent wrote:
 On Wed, 6 May 2015, Robert Collins wrote:
 
  It's actually an antipattern. It tells testr that tests are appearing
  and disappearing depending on which test entry point a user runs each
  time.
 
  testr expects the set of tests to change only when code changes.
 
  So, I fully expect that this pattern is going to lead to wtf moments
  now, and likely more in the future.
 
  What's the right forum for discussing the pressures that led to this
  hack, so we can do something that works better with the underlying
  tooling, rather than in such a disruptive fashion?
 
 I'd appreciate it here (that is, on this list), because from my
 perspective there are a lot of embedded assumptions in the way testr
 does things and wants the environment to be that aren't immediately
 obvious, and that would perhaps be made clearer if you could expand on
 the details of what's wrong with this particular hack.
 
 I tried to come up with a specific question here to drive that
 illumination a bit more concretely, but a) not enough coffee yet, and b)
 mostly I just want to know more detail about the first three
 paragraphs above.

There are two reasons that pattern exists:

1) testr discovery is quite slow on large trees, especially if your
intent is to run a small subset of tests by passing an argument.

2) testr still doesn't have an exclude facility, so top-level test
exclusion has to be done with quite complicated negative-asserting
regexes, which are very error-prone (and have been written incorrectly
many times), especially if you *still* want to support partial test
passing.
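As a sketch of why those negative-asserting regexes are fragile: because the filter is a single regex over test ids, exclusion has to be encoded as a lookahead, and a small slip silently changes which tests run. The test names below are made up for illustration:

```python
import re

# Goal: select every test EXCEPT those with "slow" in the name.
# testr-style tools filter test ids with one regex, so exclusion
# must be written as a negative lookahead anchored at the start.
exclude_slow = re.compile(r"^(?!.*slow).*$")

tests = [
    "tests.api.test_servers.test_list",
    "tests.api.test_servers.test_slow_migration",
    "tests.cli.test_flags",
]
selected = [t for t in tests if exclude_slow.match(t)]
print(selected)  # the "slow" test is filtered out

# A subtly wrong variant: without ".*" inside the lookahead, the
# assertion only inspects the very start of the string, so nothing
# is actually excluded and every test still runs.
broken = re.compile(r"^(?!slow).*$")
print([t for t in tests if broken.match(t)])
```

Both regexes look plausible at a glance, which is exactly the error-proneness the message above describes.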

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-06 Thread Thierry Carrez
Joe Gordon wrote:
 On Tue, May 5, 2015 at 9:53 AM, James Bottomley wrote:
 On Tue, 2015-05-05 at 10:45 +0200, Thierry Carrez wrote:
  The issue is, who can write such content ? It is a full-time job to
  produce authored content, you can't just copy (or link to) content
  produced elsewhere. It takes a very special kind of individual to write
  such content: the person has to be highly technical, able to tackle any
  topic, and totally connected with the OpenStack development community.
  That person has to be cross-project and ideally have already-built
  legitimacy.
 
  Here, you're being overly restrictive. LWN.net isn't staffed by top-level
  kernel maintainers (although it does solicit the occasional article from
  them). It's staffed by people who gained credibility via their insightful
  reporting rather than by their contributions. I see no reason why the same
  model wouldn't work for OpenStack.
 
 ++. I have a hunch that, like many things in OpenStack, if you make a
 space for people to step up, they will.

I guess being burnt trying to set that up in the past makes me overly
pessimistic. Let's see... Anyone interested in producing that kind of
OpenStack Developer Community Digest?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova][Ironic] Large number of ironic driver bugs in nova

2015-05-06 Thread John Garbutt
On 6 May 2015 at 09:39, Lucas Alvares Gomes lucasago...@gmail.com wrote:
 Hi

 I noticed last night that there are 23 bugs currently filed in nova
 tagged as ironic related. Whilst some of those are scheduler issues, a
 lot of them seem like things in the ironic driver itself.

 Does the ironic team have someone assigned to work on these bugs and
 generally keep an eye on their driver in nova? How do we get these
 bugs resolved?


 Thanks for this call out. I don't think we have anyone specifically
 assigned to keep an eye on the Ironic Nova driver; we look at it from time
 to time or when someone asks us to in the Ironic channel/ML/etc.
 But that said, I think we need to pay more attention to the bugs in Nova.

 I've added an item about it to be discussed in the next Ironic
 meeting[1]. And in the meantime, I will take a look at some of the bugs
 myself.

 [1] https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting

Thanks to you both for raising this and pushing on this.

Maybe we can get a named cross-project liaison to bridge the Ironic
and Nova meetings. We are working on building a similar pattern for
Neutron. It doesn't necessarily mean attending every Nova meeting,
just someone to act as an explicit bridge between our two projects.

I am open to whatever works though, just hoping we can be more
proactive about issues and dependencies that pop up.

Thanks,
John



Re: [openstack-dev] [Fuel] Transaction scheme

2015-05-06 Thread Alexander Kislitsky
I mean that we should have explicitly wrapped HTTP handlers. For example:

@transaction
def PUT(...):
  ...

We don't need transactions in GET methods, for example.

I propose to get rid of complex data flows in our code. Code with a 'commit'
call inside the method should be split into independent units.
I like the solution with sending tasks to Astute at the end of handler
execution.
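The proposal above can be sketched as a decorator that owns the commit/rollback lifecycle, so handler bodies never call `.commit()` themselves. This is an illustrative sketch, not Nailgun's actual code: `FakeSession` stands in for the SQLAlchemy session Nailgun uses, and the decorator name simply mirrors the one proposed in the thread:

```python
import functools

class FakeSession:
    """Stand-in for a SQLAlchemy session, just to make the sketch runnable."""
    def __init__(self):
        self.log = []
    def commit(self):
        self.log.append("commit")
    def rollback(self):
        self.log.append("rollback")

session = FakeSession()

def transaction(handler):
    """Commit on success, roll back on any error, as proposed in the thread."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        try:
            result = handler(*args, **kwargs)
        except Exception:
            session.rollback()
            raise
        session.commit()
        return result
    return wrapper

@transaction
def PUT(payload):
    # The handler mutates objects via the session but never commits itself;
    # the decorator is the only place commit/rollback happen.
    if payload is None:
        raise ValueError("bad request")
    return {"status": "updated"}

print(PUT({"name": "node-1"}), session.log)  # commit is recorded by the decorator
```

Keeping the lifecycle in one place is what makes the "which data are committed?" question from earlier in the thread predictable again; the mid-handler commits (e.g. before messaging Astute) are exactly the cases this pattern cannot cover without further refactoring.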

On Wed, May 6, 2015 at 12:57 PM, Igor Kalnitsky ikalnit...@mirantis.com
wrote:

  First of all I propose to wrap HTTP handlers by begin/commit/rollback

 I don't know what you are talking about - we have been wrapping handlers
 in a transaction for a long time. Here's the code:

 https://github.com/stackforge/fuel-web/blob/2de3806128f398d192d7e31f4ca3af571afeb0b2/nailgun/nailgun/api/v1/handlers/base.py#L53-L84

 The issue is that we sometimes perform `.commit()` inside the code
 (e.g. `task.execute()`), and therefore it's hard to predict which data
 are committed and which are not.

 In order to avoid this, we have to declare strict scopes for the
 different layers. Yes, we should definitely build on the idea that
 handlers open a transaction at the beginning and close it at the end.
 But that won't solve all the problems, because sometimes we have to
 commit data before the handler ends - for instance, committing a task
 before sending a message to Astute. Such cases complicate things, and
 it would be cool if we could avoid them by refactoring our
 architecture. Perhaps we could send tasks to Astute once the handler is
 done? What do you think?
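One way to sketch that "send to Astute after the handler is done" idea is to queue outgoing tasks during the transaction and flush them only after the commit (the helper and its names are hypothetical, not Nailgun's real API):

```python
class AstuteOutbox:
    """Buffer tasks during a transaction; deliver them only after commit.

    Hypothetical helper illustrating how messages to Astute could be
    deferred until the handler finishes, so a rollback never leaves
    Astute holding a task whose DB state was never committed.
    """

    def __init__(self):
        self._pending = []

    def enqueue(self, task):
        # Called from inside the handler/transaction: nothing is sent yet.
        self._pending.append(task)

    def flush(self, send):
        # Called once, after the transaction commits successfully.
        for task in self._pending:
            send(task)
        self._pending = []
```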

 Thanks,
 igor

 On Wed, May 6, 2015 at 12:15 PM, Lukasz Oles lo...@mirantis.com wrote:
  On Wed, May 6, 2015 at 10:51 AM, Alexander Kislitsky
  akislit...@mirantis.com wrote:
  Hi!
 
  The refactoring of transaction management in Nailgun is critically
  required for scaling.

  First of all, I propose to wrap HTTP handlers with a
  begin/commit/rollback decorator.
  After that, we should introduce a transaction-wrapping decorator into
  Task execute/message calls.
  And the last one is the wrapping of receiver calls.

  As a result, we should have begin/commit/rollback calls only in the
  transactions decorator.
 
  Big +1 for this. I always wondered why we don't have it.
 
 
 
  Also, I propose to separate working with DB objects into a separate
  layer and use only high-level Nailgun objects in the code and tests.
  This work was started a long time ago but is not finished yet.
 
  On Thu, Apr 30, 2015 at 12:36 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:
 
  Hi folks!
 
  Recently I faced a pretty sad fact: in Nailgun there’s no common
  approach to managing transactions. There are commits and flushes in
  random places of the code, and it used to work somehow just because it
  was all synchronous.

  However, after just a few of the subcomponents were moved to different
  processes, it all started producing races and deadlocks which are
  really hard to resolve, because there is absolutely no way to predict
  how a specific transaction is managed except by analyzing the source
  code. That is rather an ineffective and error-prone approach that has
  to be fixed before it becomes uncontrollable.

  Let’s arrange a discussion to design a document describing where and
  how transactions are managed, and refactor Nailgun according to it in
  7.0. Otherwise the results may be sad.
 
 
  - romcheg
 
 
 
 
 
 
 
  --
  Łukasz Oleś
 
 


Re: [openstack-dev] [TripleO] Core reviewer update proposal

2015-05-06 Thread Jan Provaznik

On 05/05/2015 01:57 PM, James Slagle wrote:

Hi, I'd like to propose adding Giulio Fidente and Steve Hardy to TripleO Core.

Giulio has been an active member of our community for a while. He
worked on the HA implementation in the elements and recently has been
making a lot of valuable contributions and reviews related to puppet
in the manifests, heat templates, ceph, and HA.

Steve Hardy has been instrumental in providing a lot of Heat domain
knowledge to TripleO and his reviews and guidance have been very
beneficial to a lot of the template refactoring. He's also been
reviewing and contributing in other TripleO projects besides just the
templates, and has shown a solid understanding of TripleO overall.

180 day stats:
| gfidente | 208 0 42 166 0 0 79.8% | 16 (7.7%) |
|  shardy  | 206 0 27 179 0 0 86.9% | 16 (7.8%) |

TripleO cores, please respond with +1/-1 votes and any
comments/objections within 1 week.



+1 Congrats!


Giulio and Steve, also please do let me know if you'd like to serve on
the TripleO core team if there are no objections.

I'd also like to give a heads-up to the following folks whose review
activity is very low for the last 90 days:
|   tomas-8c8 **   |   80   0   0   8   2   100.0% |0 (  0.0%)  |
|lsmola ** |   60   0   0   6   5   100.0% |0 (  0.0%)  |
| cmsj **  |   60   2   0   4   266.7% |0 (  0.0%)  |
|   jprovazn **|   10   1   0   0   0 0.0% |0 (  0.0%)  |


I've shifted my focus to a slightly different area. Although I plan to 
contribute to some parts of TripleO, I don't have an overall overview of 
all the major parts of the project, which is necessary for core reviews - 
feel free to remove me from the core team.


Jan



Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-06 Thread Thierry Carrez
Hugh Blemings wrote:
 +2
 
 I think asking LWN if they have the bandwidth and interest to do this
 would be ideal - they have credibility in the Free/Open Source space
 and a proven track record. Nice people, too.

On the bandwidth side, as a regular reader I was under the impression
that they struggled with their load already, but I guess if it comes
with funding that could be an option.

On the interest side, my past tries to invite them to the OpenStack
Summit so that they could cover it (the way they cover other
conferences) were rejected, so I have doubts in that area as well.

Does anyone have a personal connection that we could leverage to pursue
that option further?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Fuel] Transaction scheme

2015-05-06 Thread Lukasz Oles
On Wed, May 6, 2015 at 10:51 AM, Alexander Kislitsky
akislit...@mirantis.com wrote:
 Hi!

 The refactoring of transaction management in Nailgun is critically
 required for scaling.

 First of all, I propose to wrap HTTP handlers with a
 begin/commit/rollback decorator.
 After that, we should introduce a transaction-wrapping decorator into
 Task execute/message calls.
 And the last one is the wrapping of receiver calls.

 As a result, we should have begin/commit/rollback calls only in the
 transactions decorator.

Big +1 for this. I always wondered why we don't have it.



 Also, I propose to separate working with DB objects into a separate
 layer and use only high-level Nailgun objects in the code and tests.
 This work was started a long time ago but is not finished yet.

 On Thu, Apr 30, 2015 at 12:36 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Hi folks!

 Recently I faced a pretty sad fact: in Nailgun there’s no common
 approach to managing transactions. There are commits and flushes in
 random places of the code, and it used to work somehow just because it
 was all synchronous.

 However, after just a few of the subcomponents were moved to different
 processes, it all started producing races and deadlocks which are
 really hard to resolve, because there is absolutely no way to predict
 how a specific transaction is managed except by analyzing the source
 code. That is rather an ineffective and error-prone approach that has
 to be fixed before it becomes uncontrollable.

 Let’s arrange a discussion to design a document describing where and
 how transactions are managed, and refactor Nailgun according to it in
 7.0. Otherwise the results may be sad.


 - romcheg






-- 
Łukasz Oleś



Re: [openstack-dev] [neutron] How should edge services APIs integrate into Neutron?

2015-05-06 Thread Salvatore Orlando
I think Paul is correctly scoping this discussion in terms of APIs and
the management layer.
For instance, it is true that dynamic routing support and BGP support
might be prerequisites for BGP VPNs, but it should be possible to have at
least an idea of what the user and admin APIs for this VPN use case
should look like.

In particular the discussion on service chaining is a bit out of scope
here. I'd just note that [1] seems to have a lot of overlap with
group-based-policies [2], and that it appears to be a service that consumes
Neutron rather than an extension to it.

The current VPN service was conceived to be fairly generic. IPsec VPN is
the only one implemented, but SSL VPN and BGP VPN were on the map as far
as I recall.
Personally, I think having a lot of different VPN APIs is not ideal for
users. As a user, I probably don't even care about configuring a VPN.
What is important to me is to get L2 or L3 access to a network in the
cloud; therefore I would look for common abstractions that allow a user
to configure any VPN service using the same APIs. Obviously, there will
then be parameters which are specific to the particular class of VPN
being created.

I have listened to several contributors in this area in the past, and
there are plenty of opinions across a spectrum which goes from total
abstraction (just expose edges at the API layer) to what could be
tantamount to a RESTful configuration of a VPN appliance. I am not in a
position to prescribe what direction the community should take; so, for
instance, if the people working on XXX VPN believe the best way forward
for them is to start a new project, so be it.

The other approach would obviously be to build onto the current APIs. The
only way the Neutron API layer provides to do that is to extend an
extension. This sounds terrible, and it is indeed terrible. There is a
proposal for moving toward versioned APIs [3], but until that proposal is
approved and implemented, extensions are the only thing we have.
From an API perspective the mechanism would be simple:
1 - declare the extension, and implement get_required_extensions to list
'vpnaas' as a requirement
2 - implement a DB mixin for it providing basic CRUD operations
3 - add it to the VPN service plugin and add its alias to
'supported_extension_aliases' (steps 2 and 3 can be merged if you wish not
to have a mixin)

What might be a bit more challenging is defining how this maps onto the
VPN drivers. Ideally you would have a driver for every VPN type you
support, and then a little dispatcher to route each API call to the
appropriate driver according to the VPN type.
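A rough sketch of steps 1-3 plus the dispatcher idea (the classes below are stand-ins; real code would subclass Neutron's extension and service-plugin base classes, so all names are illustrative only):

```python
class BgpvpnExtension:
    """Step 1: declare the extension and its prerequisites."""

    @classmethod
    def get_alias(cls):
        return "bgpvpn"

    @classmethod
    def get_required_extensions(cls):
        # The base VPN API must be loaded before this extension.
        return ["vpnaas"]


class VPNServicePlugin:
    """Step 3: the service plugin advertises the new alias and routes
    each API call to a per-VPN-type driver (the 'little dispatcher')."""

    supported_extension_aliases = ["vpnaas", "bgpvpn"]

    def __init__(self, drivers):
        # e.g. {"ipsec": IPsecDriver(), "bgpvpn": BgpvpnDriver()}
        self._drivers = drivers

    def create_vpn_connection(self, vpn_type, **kwargs):
        driver = self._drivers.get(vpn_type)
        if driver is None:
            raise ValueError("unsupported VPN type: %s" % vpn_type)
        return driver.create_connection(**kwargs)
```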

Salvatore

[1]
https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining
[2] https://wiki.openstack.org/wiki/GroupBasedPolicy
[3] https://review.openstack.org/#/c/136760

On 6 May 2015 at 07:14, Vikram Choudhary vikram.choudh...@huawei.com
wrote:

  Hi Paul,



 Thanks for starting this mail thread. We are also aiming to support
 MP-BGP in neutron and would like to actively participate in this
 discussion.

 Please let me know about the IRC channels which we will be following for
 this discussion.



 Currently, I am following the BPs below for this work.

 https://blueprints.launchpad.net/neutron/+spec/edge-vpn

 https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing

 https://blueprints.launchpad.net/neutron/+spec/dynamic-routing-framework


 https://blueprints.launchpad.net/neutron/+spec/prefix-clashing-issue-with-dynamic-routing-protocol



 Moreover, a similar kind of work is being headed by Cathy for defining an
 intent framework which can be extended for various use cases. Currently it
 will be leveraged for SFC, but I feel the same can be used to provide an
 intent-based VPN use case.


 https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining



 Thanks

 Vikram



 *From:* Paul Michali [mailto:p...@michali.net]
 *Sent:* 06 May 2015 01:38
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [neutron] How should edge services APIs
 integrate into Neutron?



 There's been talk in VPN land about new services, like BGP VPN and DM VPN.
 I suspect there are similar things in other Advanced Services. I talked to
 Salvatore today, and he suggested starting a ML thread on this...



 Can someone elaborate on how we should integrate these API extensions into
 Neutron, both today, and in the future, assuming the proposal that
 Salvatore has is adopted?



 I could see two cases. The first, and simplest, is when a feature has an
 entirely new API that doesn't build on an existing API.



 The other case would be when the feature's API would dovetail into the
 existing service API. For example, one may use the existing vpn_service API
 to create the service, but then create BGP VPN or DM VPN connections for
 that service, instead of the IPSec connections we have today.



 If there are examples already of how to extend an existing API extension
 that would help in 

Re: [openstack-dev] [neutron] IPv4 transition/interoperation with IPv6

2015-05-06 Thread Mike Spreitzer
Robert Li (baoli) ba...@cisco.com wrote on 05/05/2015 09:02:08 AM:

 Currently dual stack is supported. Can you be specific on what 
 interoperation/transition techniques you are interested in? We’ve 
 been thinking about NAT64 (stateless or stateful).
 
 thanks,
 Robert
 
 On 5/4/15, 9:56 PM, Mike Spreitzer mspre...@us.ibm.com wrote:
 
 Does Neutron support any of the 4/6 interoperation/transition 
 techniques?  I wear an operator's hat nowadays, and want to make 
 IPv6 as useful and easy to use as possible for my tenants.  I think 
 the interoperation/transition techniques will play a big role in this. 


Is dual stacking working in routers now?  At the moment I am still using 
Juno, but plan to move to Kilo.

I want to encourage my tenants to use as much IPv6 as possible.  But I 
expect some will have to keep some of their workload on v4 (I know there 
is on-going work to get many application frameworks up to v6 speed, and it 
is not complete yet).  I expect some tenants could be mixed: some workload 
on v4 and some on v6. Such a tenant would appreciate a NAT between his v6 
space and his v4 space. These are the easiest cases --- sections 2.5 and 
2.6 --- of RFC 6144.

I would prefer to do it in a stateless way if possible. That would be 
pretty easy if Neutron and Nova were willing to accept an IPv6 subnet that 
is much smaller than 2^64 addresses. I see that my MACs differ only in 
their last 24 bits.
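For illustration, the stateless mapping at the heart of NAT64 simply embeds the 32-bit IPv4 address in an IPv6 prefix; RFC 6052 defines the well-known 64:ff9b::/96 prefix for this. A standard-library sketch (this is not something Neutron does for you today):

```python
import ipaddress

# NAT64 well-known prefix from RFC 6052.
WKP = ipaddress.IPv6Network("64:ff9b::/96")


def v4_to_v6(v4addr: str) -> str:
    """Embed an IPv4 address in the /96 prefix (stateless mapping)."""
    v4 = ipaddress.IPv4Address(v4addr)
    return str(ipaddress.IPv6Address(int(WKP.network_address) + int(v4)))


def v6_to_v4(v6addr: str) -> str:
    """Recover the IPv4 address from a prefix-embedded IPv6 address."""
    v6 = ipaddress.IPv6Address(v6addr)
    return str(ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF))
```

Because the mapping is a pure bijection on the last 32 bits, a translator needs no per-flow state in this direction.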

Some tenants could put their entire workload on v6, but that workload 
would be unreachable from customers of all those ISPs (such as mine, 
CableVision) that deny IPv6 service to their customers.  There are 
techniques for coping, and Teredo looks like a pretty good one. It has 
been shipped in Windows for years. Yet I cannot find a Windows machine 
where Teredo actually works. What's up with that? If Windows somehow 
got its Teredo (or other) act together, that would be only half the job; 
Teredo requires something from the server side as well, right?

Supposing a focus on mobile, where IPv6 is much more available, and/or 
progress by Microsoft and/or other ISPs, my tenant might face a situation 
where his clients could come in over v6 but some of his servers still have 
to run on v4.  That's section 2.3 of RFC 6144.

While I am a Neutron operator, I am also a customer of a lower layer 
network provider.  That network provider will happily give me a few /64. 
How do I serve IPv6 subnets to lots of my tenants?  In the bad old v4 days 
this would be easy: a tenant puts all his stuff on his private networks 
and NATs (e.g., floating IP) his edge servers onto a public network --- no 
need to align tenant private subnets with public subnets.  But with no NAT 
for v6, there is no public/private distinction --- I can only give out the 
public v6 subnets that I am given.  Yes, NAT is bad.  But not being able 
to get your job done is worse.


Sean M. Collins s...@coreitpro.com wrote on 05/05/2015 06:26:28 AM:

 I think that Neutron exposes enough primitives through the API that
 advanced services for handling your transition technique of choice could
 be built.

I think that is right, if I am willing to assume Neutron is using OVS --- 
or build a bunch of alternatives that correspond to all the Neutron 
plugins and mechanisms that I might encounter.  And it would feel a lot 
like Neutron implementation work.  Really, it is one instance of doing 
some NFV.


Thanks,
Mike



Re: [openstack-dev] [PKG-Openstack-devel][horizon][xstatic] XStatic-Angular-Bootstrap in violation of the MIT/Expat license (forwarded from: python-xstatic-angular-bootstrap_0.11.0.2-1_amd64.changes R

2015-05-06 Thread Thomas Goirand



On 05/05/2015 05:05 PM, Michael Krotscheck wrote:

The real question seems to be whether packagers have a disproportionate
amount of power to set development goals, tools, and policy. This is a
common theme that I've encountered frequently, and it leads to no small
amount of tension.

This tension serves no-one, and really just causes all of us stress. How
about we start a separate thread to discuss the roles of package
maintainers in OpenStack?

Michael


Mostly, everyone has been super friendly in the OpenStack community, and 
reactions are almost always very constructive; my concerns are almost 
always addressed (and when they are not, either there's a real reason 
why, or it's hard to do). I haven't felt as much tension as you're 
claiming, apart maybe from a very small number of individuals, but 
that's unavoidable in such a large community.


Thomas



Re: [Openstack] Savana/Swift large object copy error

2015-05-06 Thread Christian Schwede
Hello Ross,

On 05.05.15 21:54, Ross Lillie wrote:
 My understanding is that Swift should automagically split files greater
 than 5G into multiple segments grouped under a metafile, but this
 appears not to be working. It worked under the Havana release (Ubuntu)
 using the Swift File System jar file downloaded from the Mirantis web
 site. All current testing is based on the Juno release, performing a
 distcp using the openstack-hadoop jar file shipped as part of the
 latest Hadoop distros.

I don't know the client you're using, but Swift itself (on the server
side) never splits data into segments on its own, and never did.
Currently it's up to the client to ensure data is broken down into
segments of at most 5 GB each.
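For illustration, here is the client-side work a Dynamic Large Object requires: split the data into segments and name them under a common prefix that the manifest's X-Object-Manifest header then points at (the container/object names below are made up; a tool such as python-swiftclient's `swift upload --segment-size` handles this for you):

```python
def make_segments(data: bytes, segment_size: int):
    """Split an object into fixed-size segments, as a Swift client must
    do for objects over 5 GB -- Swift never splits server-side."""
    return [data[i:i + segment_size]
            for i in range(0, len(data), segment_size)]


def segment_names(container: str, obj: str, count: int):
    """Name segments so they list in lexical (= upload) order under a
    common prefix; a DLO manifest object then references that prefix
    via the X-Object-Manifest header."""
    prefix = "%s_segments/%s/" % (container, obj)
    return [prefix + "%08d" % i for i in range(count)], prefix
```

On a GET of the manifest, Swift concatenates all objects listed under the prefix, which is why stable, ordered segment names matter.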

There were some ideas in the past to implement this feature, however
this raised different problems. Have a look at the history of large
objects:

http://docs.openstack.org/developer/swift/overview_large_objects.html#history

I would assume that a change in the client-side implementation changed
the behavior for you?

Best Regards,

Christian

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Problem with cinder backup : type 'NoneType' can't be encoded

2015-05-06 Thread Maxime Aubry
Hello,

problem solved !

I had to put :

rbd_user = glance

into /etc/cinder/cinder.conf. Without it, we got an exception thrown by
/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:

self.user = encodeutils.safe_encode(user)
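A simplified re-implementation of that helper shows why the unset option surfaces as exactly this TypeError (the real oslo_utils version also handles bytes input and alternate encodings; this sketch keeps only the failure mode):

```python
def safe_encode(text, encoding="utf-8"):
    """Simplified sketch of oslo_utils.encodeutils.safe_encode: anything
    that is not a string -- including the None produced by an unset
    config option such as rbd_user -- is rejected with the TypeError
    seen in the traceback below."""
    if not isinstance(text, str):
        raise TypeError("%s can't be encoded" % type(text))
    return text.encode(encoding)
```

So the driver itself was fine; the missing `rbd_user` option fed None into this guard.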





2015-05-05 12:01 GMT+02:00 Maxime Aubry m.au...@hexanet.fr:

 Hello

 we are facing some difficulties when we try to back up a volume stored
 in an RBD backend to a Swift container (we have the same issue with the
 ceph rbd backup backend).

 We have googled, but it seems that nobody has this issue. I hope I will
 be luckier here ;)

 here is the log :

 TE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode
 /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py:514
 2015-05-05 11:34:58.329 14048: DEBUG cinder.volume.drivers.rbd
 [req-27352642-0b71-49d5-b38b-e7d3cfa7d8ec 4bd09b121e2043e48d7f89cd6ee6eb47
 8725df8f38e54f409d2fd1a6de2fac8c - - -] opening connection to ceph cluster
 (timeout=-1). _connect_to_rados
 /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:300
 *2015-05-05 11:34:58.475 14048: ERROR oslo_messaging.rpc.dispatcher
 [req-27352642-0b71-49d5-b38b-e7d3cfa7d8ec 4bd09b121e2043e48d7f89cd6ee6eb47
 8725df8f38e54f409d2fd1a6de2fac8c - - -] Exception during message handling:
 type 'NoneType' can't be encoded*
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 Traceback (most recent call last):
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
 line 142, in _dispatch_and_reply
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 executor_callback))
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
 line 186, in _dispatch
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 executor_callback)
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
 line 130, in _do_dispatch
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 result = func(ctxt, **new_args)
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/osprofiler/profiler.py, line 105, in
 wrapper
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 return f(*args, **kwargs)
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/cinder/backup/manager.py, line 301, in
 create_backup
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 'fail_reason': six.text_type(err)})
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 85,
 in __exit__
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 six.reraise(self.type_, self.value, self.tb)
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/cinder/backup/manager.py, line 294, in
 create_backup
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 backup_service)
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/osprofiler/profiler.py, line 105, in
 wrapper
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 return f(*args, **kwargs)
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py, line 897,
 in backup_volume
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 self.configuration.rbd_ceph_conf)
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py, line 94,
 in __init__
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 self.user = encodeutils.safe_encode(user)
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_utils/encodeutils.py, line
 76, in safe_encode
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 raise TypeError(%s can't be encoded % type(text))
 2015-05-05 11:34:58.475 14048: TRACE oslo_messaging.rpc.dispatcher
 TypeError: *type 'NoneType' can't be encoded*
 2015-05-05 11:34:58.475 14048 TRACE oslo_messaging.rpc.dispatcher

 Has anyone faced this problem ?

 Thanks !

 --
 ---
 Maxime Aubry
 Service Hébergement
 Hexanet




-- 
---
Maxime Aubry
Service Hébergement
Hexanet

-- 



[Openstack] Neutron Load Balancer behavior while autoscaling

2015-05-06 Thread ashish.jain14
Hello,


I am using the neutron load balancer along with Heat. I auto-scale instances, 
and all the new instances are automatically attached to the neutron load 
balancer. I am using Apache JMeter to create CPU load.


I run JMeter for 5 minutes to create load; instances auto-spawn and are 
attached to the neutron load balancer. However, the load is not distributed 
to all the instances (I am using the round-robin policy).


Once the JMeter run is complete, I start JMeter again, and this time the 
load is distributed evenly to all the instances.


Has anyone seen this behavior? Any clues as to what could be wrong?


Regards

Ashish

The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information. If you are not the 
intended recipient, you should not disseminate, distribute or copy this e-mail. 
Please notify the sender immediately and destroy all copies of this message and 
any attachments. WARNING: Computer viruses can be transmitted via email. The 
recipient should check this email and any attachments for the presence of 
viruses. The company accepts no liability for any damage caused by any virus 
transmitted by this email. www.wipro.com


Re: [openstack-dev] How to turn tempest CLI tests into python-*client in-tree functional tests

2015-05-06 Thread Robert Collins
On 14 February 2015 at 10:26, Joe Gordon joe.gord...@gmail.com wrote:
 Digging through the logs this originated from this bug:
 https://bugs.launchpad.net/tempest/+bug/1260710

 It's probably not needed everywhere and in all the clients.

So I've looked more closely at this.

It's actually an antipattern. It tells testr that tests are appearing
and disappearing depending on which test entry point a user runs each
time.

testr expects the set of tests to change only when code changes.

So I fully expect that this pattern is going to lead to WTF moments
now, and likely more in the future.

What's the right forum for discussing the pressures that led to this
hack, so we can do something that works better with the underlying
tooling, rather than in such a disruptive fashion?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [Openstack] CenOS7 kilo keystone can not create endpoint with ImportError: No module named oslo_utils

2015-05-06 Thread Christian Berendt
On 05/06/2015 11:59 AM, walterxj wrote:
 I get the error: ImportError: No module named oslo_utils .

This is a packaging issue with the current state of the RDO repository.
The RDO Kilo repository is not yet stable, so you have to manually install
a few packages at the moment (it looks like I forgot to document every
packaging bug in the current state of the installation guide). Try it
again after installing python-oslo-utils. At the moment I think it is
better to use
http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm instead
of http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm. I
hope that the RDO repositories will be stable/usable by the end of
this week.

Christian.

-- 
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537



Re: [Openstack] [“Potential Spoofed”] Re: [“Potential Spoofed”] Neutron Load Balancer behavior while autoscaling

2015-05-06 Thread ashish.jain14
Looks like the issue is with JMeter and LBaaS compatibility. I wrote a simple 
shell script to simulate a similar load situation, and somehow I do not see 
the issue any more.

Regards
Ashish

From: ashish.jai...@wipro.com ashish.jai...@wipro.com
Sent: Wednesday, May 06, 2015 3:15 PM
To: openstack@lists.openstack.org
Subject: [“Potential Spoofed”] Re: [Openstack] [“Potential Spoofed”]  Neutron 
Load Balancer behavior while autoscaling

Hello,


I just tested manually attaching an instance to a neutron load balancer 
pool, but I see the same behavior: an instance attached while JMeter is 
pumping messages does not serve any requests. However, the same instance 
serves requests once I restart message pumping from JMeter.


PS: I am using ha proxy.


Regards
Ashish



From: ashish.jai...@wipro.com ashish.jai...@wipro.com
Sent: Wednesday, May 06, 2015 11:47 AM
To: openstack@lists.openstack.org
Subject: [“Potential Spoofed”] [Openstack] Neutron Load Balancer behavior while 
autoscaling


Hello,


I am using the neutron load balancer along with Heat. I auto-scale instances, 
and all the new instances are automatically attached to the neutron load 
balancer. I am using Apache JMeter to create CPU load.


I run JMeter for 5 minutes to create load; instances auto-spawn and are 
attached to the neutron load balancer. However, the load is not distributed 
to all the instances (I am using the round-robin policy).


Once the JMeter run is complete, I start JMeter again, and this time the 
load is distributed evenly to all the instances.


Has anyone seen this behavior? Any clues as to what could be wrong?


Regards
Ashish

  The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information. If you are not the 
intended recipient, you should  not disseminate, distribute or copy this 
e-mail. Please notify the sender immediately and destroy all copies of this 
message and any attachments. WARNING: Computer viruses can be transmitted via 
email. The recipient should check this email and any attachments  for the 
presence of viruses. The company accepts no liability for any damage caused by 
any virus transmitted by this email. www.wipro.com

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



Re: [openstack-dev] [nova] Which error code should we return when OverQuota

2015-05-06 Thread Chris Dent

On Wed, 6 May 2015, Sean Dague wrote:


All other client errors, just be a 400. And use the emerging error
reporting json to actually tell the client what's going on.


Please do not do this. Please use the 4xx codes as best as you
possibly can. Yes, they don't always match, but there are several of
them for reasons™ and it is usually possible to find one that sort
of fits.

Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the
most part people are talking to OpenStack through official clients
but a) what happens when they aren't, b) is that the kind of world
we want?

I certainly don't. I want a world where the HTTP APIs that OpenStack
and other services present actually use HTTP and allow a diversity
of clients (machine and human).

Using response codes effectively makes it easier to write client code
that is either simple or is able to use generic libraries effectively.

Let's be honest: OpenStack doesn't have a great record of using HTTP
effectively or correctly. Let's not make it worse.

In the case of quota, 403 is fairly reasonable because you are in
fact Forbidden from doing the thing you want to do. Yes, with the
passage of time you may very well not be forbidden so the semantics
are not strictly matching but it is more immediately expressive yet
not quite as troubling as 409 (which has a more specific meaning).

400 is useful fallback for when nothing else will do.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Question on Cinder client exception handling

2015-05-06 Thread Chen CH Ji

Hi
   In order to work on [1], Nova needs to know what kinds of
exceptions are raised when using cinderclient, so that it can handle them
the way [2] does.
   That way we don't need to distinguish error cases by string
comparison; it's more accurate and less error-prone.
   Is anyone working on this, or is there another method I can use
to catch Cinder-specific exceptions in Nova? Thanks


[1] https://bugs.launchpad.net/nova/+bug/1450658
[2]
https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L64
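Something like the neutronclient approach in [2] can be sketched in a few lines: map a fault type reported by the server onto a dedicated exception class, so callers catch types rather than comparing strings. The class and fault names below are illustrative only, not the real cinderclient API.

```python
# Self-contained sketch of the exception-mapping pattern neutronclient
# uses: turn a fault-type string from the server into a specific
# exception class, so Nova can catch types instead of grepping messages.
# Class and fault names here are illustrative, not real cinderclient names.

class ClientException(Exception):
    """Fallback for fault types we don't recognize."""

class VolumeNotFound(ClientException):
    pass

class OverLimit(ClientException):
    pass

_FAULT_MAP = {
    "VolumeNotFound": VolumeNotFound,
    "OverLimit": OverLimit,
}

def exception_from_fault(fault_type, message):
    """Return an instance of the mapped class, or the generic fallback."""
    return _FAULT_MAP.get(fault_type, ClientException)(message)

try:
    raise exception_from_fault("OverLimit", "volume quota exceeded")
except OverLimit as exc:
    handled = str(exc)  # caught by type, no string comparison needed
```

The point is that the caller's `except` clause names a type; if the server grows new fault types, only the map changes.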

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] heat: Using IP address of one VM with-in another VM

2015-05-06 Thread ashish.jain14
Hi,


Is it possible to somehow pass the IP of one VM to another VM during a Heat 
deployment? For example, the IP of a DB server in one VM to a client in another VM.


Regards

Ashish
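For what it's worth, this kind of wiring is commonly done in Heat with `get_attr` feeding one server's address into another server's `user_data`. A minimal HOT sketch, with illustrative image, flavor, and resource names:

```yaml
heat_template_version: 2013-05-23

resources:
  db_server:
    type: OS::Nova::Server
    properties:
      image: my-image      # illustrative values
      flavor: m1.small

  app_server:
    type: OS::Nova::Server
    properties:
      image: my-image
      flavor: m1.small
      user_data:
        str_replace:
          # Inject the DB server's first IP into the client VM's boot script
          template: |
            #!/bin/bash
            echo "DB_HOST=db_ip" >> /etc/environment
          params:
            db_ip: { get_attr: [db_server, first_address] }
```

Heat resolves `get_attr: [db_server, first_address]` after the DB server is created, so the app server's boot script sees the concrete address.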

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [heat][ceilometer] autoscaling

2015-05-06 Thread ICHIBA Sara
hey there,

Please, I want to know if there is any way I can have CPU, RAM and network
meters for each VM returned by Ceilometer to Heat for autoscaling tasks?

Thank you in advance for your response,
Sara
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Which error code should we return when OverQuota

2015-05-06 Thread Sean Dague
On 05/06/2015 07:11 AM, Chris Dent wrote:
 On Wed, 6 May 2015, Sean Dague wrote:
 
 All other client errors, just be a 400. And use the emerging error
 reporting json to actually tell the client what's going on.
 
 Please do not do this. Please use the 4xx codes as best as you
 possibly can. Yes, they don't always match, but there are several of
 them for reasons™ and it is usually possible to find one that sort
 of fits.
 
 Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the
 most part people are talking to OpenStack through official clients
 but a) what happens when they aren't, b) is that the kind of world
 we want?
 
 I certainly don't. I want a world where the HTTP APIs that OpenStack
 and other services present actually use HTTP and allow a diversity
 of clients (machine and human).

Absolutely. And the problem is there is not enough namespace in the HTTP
error codes to accurately reflect the error conditions we hit. So the
current model means the following:

If you get any error code, it means multiple failure conditions. Throw
it away, grep the return string to decide if you can recover.

My proposal is to be *extremely* specific for the use of anything
besides 400, so there is only 1 situation that causes that to arise. So
403 means a thing, only one thing, ever. Not 2 kinds of things that you
need to then figure out what you need to do.

If you get a 400, well, that's multiple kinds of errors, and you need to
then go conditional.

This should provide a better experience for all clients, human and machine.
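As an illustration of what this model implies for client code, here is a hedged sketch: specific statuses map directly to exception types, while a 400 forces the client to inspect a structured body. The `{"code": ..., "message": ...}` error body is an assumption for illustration, not a defined OpenStack format.

```python
# Sketch of client-side error handling under the "one meaning per
# status code" model. The JSON error-body shape is an illustrative
# assumption, not a defined OpenStack contract.

class NotFound(Exception):
    """404: the resource does not exist."""

class Forbidden(Exception):
    """403: a permissions error, and only a permissions error."""

class BadRequest(Exception):
    """400: catch-all; the body says which condition actually occurred."""
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

def raise_for_response(status, body):
    """Translate an HTTP status plus parsed JSON body into an exception."""
    if status == 404:
        raise NotFound(body.get("message", ""))
    if status == 403:
        raise Forbidden(body.get("message", ""))
    if status >= 400:
        # Multiple failure conditions share 400, so the client goes
        # conditional on the structured body, not on the status code.
        raise BadRequest(body.get("code", "unknown"), body.get("message", ""))

try:
    raise_for_response(400, {"code": "OverQuota", "message": "quota exceeded"})
except BadRequest as exc:
    caught = exc.code
```

Under this scheme, 403 and 404 handlers never need to parse the body, which is exactly the "exactly one cause" property being argued for.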

 
 Using response codes effectively makes it easier to write client code
 that is either simple or is able to use generic libraries effectively.
 
 Let's be honest: OpenStack doesn't have a great record of using HTTP
 effectively or correctly. Let's not make it worse.
 
 In the case of quota, 403 is fairly reasonable because you are in
 fact Forbidden from doing the thing you want to do. Yes, with the
 passage of time you may very well not be forbidden so the semantics
 are not strictly matching but it is more immediately expressive yet
 not quite as troubling as 409 (which has a more specific meaning).

Except it's not, because you are saying to use 403 for 2 issues (Don't
have permissions and Out of quota).

Turns out, we have APIs for adjusting quotas, which your user might have
access to. So part of 403 space is something you might be able to code
yourself around, and part isn't. Which means you should always ignore it
and write custom logic client side.

Using something beyond 400 is *not* more expressive if it has more than
one possible meaning. Then it's just muddy. My point is that all errors
besides 400 should have *exactly* one cause, so they are specific.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Which error code should we return when OverQuota

2015-05-06 Thread Sean Dague
It does; however, I looked through the history of that repo, and that's
just in one of Jay's documents that predates the group. I'm a little
cautious about giving it a lot of weight without a rationale.

Honestly, there is this obsession of assuming that there *are* good fits
for HTTP status codes for non webbrowser interaction patterns. There are
not. The error code set was based around a specific expected web browser
/ website model from 20 years ago.

I honestly think we'd be better served by limiting our use of non 200 or
400 codes to really narrow conditions (the ones that you'd expect from
the browser interaction pattern). This would approach the whole problem
from the least surprise perspective.

404 - resource doesn't exist (appropriate for GET /foo/ID_NUMBER where
the thing isn't there)

403 - permissions error. Appropriate for a resource that exists, but you
do not have enough permissions for it. I.e. it's an admin URL or someone
else's resource.

All other client errors, just be a 400. And use the emerging error
reporting json to actually tell the client what's going on.

-Sean


On 05/05/2015 09:48 PM, Alex Xu wrote:
 From API-WG guideline, exceed quota should be 403
 
 https://github.com/openstack/api-wg/blob/master/guidelines/http.rst
 
 2015-05-06 3:30 GMT+08:00 Chen CH Ji jiche...@cn.ibm.com
 mailto:jiche...@cn.ibm.com:
 
 In doing patch [1], A suggestion is submitted that we should return
 400 (bad Request) instead of 403 (Forbidden)
  I take a look at [2]; 400 seems not a good candidate because 'The request
  could not be understood by the server due to malformed syntax. The client
  SHOULD NOT repeat the request without modifications.'
 
 may be a 409 (conflict) error if we really don't think 403 is a good
 one?
 Thanks
 
 
 [1] https://review.openstack.org/#/c/173985/
 [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
 
 Best Regards!
 
 Kevin (Chen) Ji 纪 晨
 
 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 mailto:jiche...@cn.ibm.com
 Phone: +86-10-82454158 tel:%2B86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
 District, Beijing 100193, PRC
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugin] Contributor license agreement for fuel plugin code?

2015-05-06 Thread Emma Gordon (projectcalico.org)
If fuel plugin code is checked into a stackforge repository (as suggested in 
the fuel plugin wiki https://wiki.openstack.org/wiki/Fuel/Plugins#Repo), who 
owns that code? Is there a contributor license agreement to sign? (For example, 
contributors to OpenStack would sign this 
https://review.openstack.org/static/cla.html)

Thanks,
Emma
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugin] Contributor license agreement for fuel plugin code?

2015-05-06 Thread Jeremy Stanley
On 2015-05-06 11:02:42 + (+), Emma Gordon (projectcalico.org) wrote:
 If fuel plugin code is checked into a stackforge repository (as
 suggested in the fuel plugin wiki
 https://wiki.openstack.org/wiki/Fuel/Plugins#Repo), who owns that
 code?

I am not a lawyer, but my understanding is that the individual
copyright holders mentioned in comments at the tops of various
files, listed in an AUTHORS file (if included) and indicated within
the repository's Git commit history retain rights over their
contributions in a project relying on the Apache License (or those
rights may belong to their individual respective employers in a
work-for-hire situation as well).

 Is there a contributor license agreement to sign? (For
 example, contributors to OpenStack would sign this
 https://review.openstack.org/static/cla.html)

If Fuel is planning to apply for inclusion in OpenStack, then it may
make sense to get all current and former contributors to its source
repositories to agree to the OpenStack Individual Contributor
License Agreement. Note that it does _not_ change the ownership of
the software (copyrights), it's intended to simply reinforce the
OpenStack Foundation's ability to continue to redistribute the
software under the Apache License by affirming that the terms of the
license are applied correctly and intentionally.

More detailed questions are probably best posed to the
legal-disc...@lists.openstack.org mailing list, or to your own
private legal representation.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] Extensions out, Micro-versions in

2015-05-06 Thread Bob Melander (bmelande)
Hi Salvatore,

Two questions/remarks below.

From: Salvatore Orlando sorla...@nicira.commailto:sorla...@nicira.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: onsdag 6 maj 2015 00:13
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron][api] Extensions out, Micro-versions in

#5 Plugin/Vendor specific APIs

Neutron is without doubt the project with the highest number of 3rd party (OSS 
and commercial) integration. After all it was mostly vendors who started this 
project.
Vendors [4] use the extension mechanism to expose features in their products 
not covered by the Neutron API or to provide some sort of value-added service.
The current proposal still allows 3rd parties to attach extensions to the 
neutron API, provided that:
- they're not considered part of the Neutron API, in terms of versioning, 
documentation, and client support

BOB There are today vendor specific commands in the Neutron CLI client. Such 
commands are prepended with the name of the vendor, like cisco_command and 
nec_command.
I think that makes it quite visible to the user that the command is specific to 
a vendor feature and not part of neutron core. Would it be possible to allow 
for that also going forward? I would think that from a user perspective it can 
be convenient to be able to access vendor add-on features using a single CLI 
client.

- they do not redefine resources defined by the Neutron API.

BOB Does “redefine” here include extending a resource with additional 
attributes?

- they do not live in the neutron source tree
The aim of the provisions above is to minimize the impact of such extensions on 
API portability.

Thanks for reading and thanks in advance for your feedback,
Salvatore

The title of this post has been inspired by [2]  (the message in the banner may 
be unintelligible to readers not fluent in european football)

[1] https://review.openstack.org/#/c/136760/
[2] 
http://a.espncdn.com/combiner/i/?img=/photo/2015/0502/fc-banner-jd-1296x729.jpg&w=738&site=espnfc
[3] 
http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
[4] By vendor here we refer either to a cloud provider or a company providing 
Neutron integration for their products.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] MultiStrOpt VS. ListOpt

2015-05-06 Thread Davanum Srinivas
ZhiQiang,

Please log a bug and we can try to do what jd suggested.

-- dims

On Wed, May 6, 2015 at 9:21 AM, Julien Danjou jul...@danjou.info wrote:
 On Wed, May 06 2015, ZhiQiang Fan wrote:

 I come across a problem that crudini cannot handle MultiStrOpt[1], I don't
 know why such type configuration option is needed. It seems ListOpt is a
 better choice. Currently I find lots of MultiStrOpt options in both Nova
 and Ceilometer, and I think other projects have too.

 Here are my questions:

 1) how can I update such type of option without manually rewrite the config
 file? (like devstack scenario)
 2) Is there any way to migrate MultiStrOpt to ListOpt? The ListOpt will
 take last specified value while MultiStrOpt takes all, so the compatibility
 is a big problem

 Any hints?

 I didn't check extensively, but this is something I hit regularly. It
 seems to me we have to two types doing more or less the same things and
 mapping to the same data structure (i.e. list). We should unify them.

 --
 Julien Danjou
 // Free Software hacker
 // http://julien.danjou.info

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Some changes in build script

2015-05-06 Thread Vladimir Kozhukalov
Dear colleagues,

Please, be informed that I've made some changes in our build script in
order to support priorities for rpm repositories. I've also removed some
unnecessary variables (EXTRA_RPM_REPOS and EXTRA_DEB_REPOS) and renamed
some others.

We don't need EXTRA_DEB_REPOS any more because it is possible to set an
arbitrary number of repositories together with their priorities at run time
in the UI.

The variable MULTI_MIRROR_CENTOS has been introduced; it has the
following format: 'repo1,pri1,url1 repo2,pri2,url2'. So we don't need
EXTRA_RPM_REPOS either.
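For readers wiring this up, the documented format could be parsed roughly as follows; this mirrors the description above and is a sketch, not the actual build-script implementation:

```python
# Illustrative parser for the MULTI_MIRROR_CENTOS value described above
# ('repo1,pri1,url1 repo2,pri2,url2'); a sketch, not the real build script.

def parse_multi_mirror(value):
    """Split space-separated entries into (name, priority, url) tuples."""
    mirrors = []
    for entry in value.split():
        # split on the first two commas only, so URLs may contain commas
        name, priority, url = entry.split(",", 2)
        mirrors.append((name, int(priority), url))
    return mirrors

mirrors = parse_multi_mirror(
    "os,10,http://mirror.example/os updates,20,http://mirror.example/updates")
```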

Please, make sure this patch [1] does not break anything for you. Review is
welcome.

[1] https://review.openstack.org/#/c/176438

Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opentack-dev][meetings] Proposing changes in Rally meetings

2015-05-06 Thread Mikhail Dubov
Tony,

many thanks for noticing it, I didn't see it for some reason while looking
at the iCal file / checking the wiki. We will use another time then.

Best regards,
Mikhail Dubov

Engineering OPS
Mirantis, Inc.
E-Mail: mdu...@mirantis.com
Skype: msdubov

On Wed, May 6, 2015 at 5:15 AM, Tony Breeds t...@bakeyournoodle.com wrote:

 On Tue, May 05, 2015 at 06:22:47PM +0300, Mikhail Dubov wrote:
  Hi Rally team,
 
  as mentioned in another message from me in this list, we have decided to
  use the meeting time on Wednesdays at 14:00 UTC for our *usual weekly
  meeting in #openstack-meeting*.

 This meeting time and channel will clash with the docs meeting every
 second week.
 Check May 13th (UTC) at:

 https://www.google.com/calendar/embed?src=bj05mroquq28jhud58esggqmh4%40group.calendar.google.com&ctz=Iceland/Reykjavik

 It looks like #openstack-meeting-4 is free at that time.

  As for the release meeting, we will hold it
  just before the main meeting, weekly on Wednesdays at 13:30 UTC in
  *#openstack-rally*.

 Do you want this listed as a meeting on
 https://wiki.openstack.org/wiki/Meetings and in the iCal above?

 Tony.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] TC Communications planning

2015-05-06 Thread Maish Saidel-Keesing

On 05/06/15 16:13, Anne Gentle wrote:

Hi all,

In the interest of communicating sooner rather than later, I wanted to 
write a new thread to say that Flavio Percoco and I are going to work 
on a TC communications plan as co-chairs of a TC communications 
working group.


I think we can find a happy medium amongst meeting minutes, gerrit 
reviews, and irregular blog entries by applying some comms planning, 
so that Flavio and I can dive in.


Please answer these questions on the list if you're interested in 
shaping the communications plan:


Audience considerations:
Is the primary audience current OpenStack contributors or those in 
consumer roles?
I would think Consumer Roles, most contributors are already in the know 
and on the mailing lists and meetings.
What percentage of the audience are fairly new contributors? Fairly 
new to OpenStack itself?
Is the audience more likely to be an outsider looking in to 
OpenStack governance?
I would think so. The 'insider' already knows where and how to find the 
information.
Is the audience wanting to click links to learn more, or do they just 
want the summary?
Both would be great. There will be those who only want a summary, but 
sometimes would also like a bit more detail on a specific subject
Does the audience always want an action to take, or is simply getting 
information their goal?
I would leave that up to the audience. But if this is a communication 
channel - we should decide if it should be one-way or both ways.


Channel considerations:
Is this audience with their goals more likely to use blogs, RSS, and 
Twitter or subscribe to mailing lists?
If we are talking about the non-contributor: a definite no to mailing 
lists, and a huge yes to the first part of the list.
Depending on the channels chosen, is cross-posting to multiple 
channels a huge error, or are we leaning towards a wide net rather 
than laser targeting?
Cross posting should be fine since it will mostly be a link pointing to 
the main source of content - which will be a blog post of some sorts.

Is there another channel we haven't considered that is widely consumed?

FaceBook? But I personally don't go near it.
Does the cadence have to be weekly, even if "not much happened with 
the TC" is the activity rate for the week?
I do not think it has to be weekly, because that would become quite 
boring if nothing really happened. I would say it should be according 
to need, but a minimum of once a month (even if there was nothing 
exciting).


Thanks all for participating and giving input.

Anne and Flavio



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Best Regards,
Maish Saidel-Keesing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [heat][ceilometer] autoscaling

2015-05-06 Thread Luo Gangyi
I don't understand what you mean.


Firstly, Ceilometer doesn't return meters or samples to Heat. In fact, Heat 
configures an alarm in Ceilometer, and the action of this alarm
is to send a REST request to Heat. When Heat receives this request, it triggers autoscaling.


Besides, you can run #ceilometer alarm-list to see what alarms Heat configures, 
then run #ceilometer query-sample to see the meters and samples.
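The alarm-to-webhook wiring described above looks roughly like this in a HOT template; the property values and the `scaleup_policy` resource name are illustrative:

```yaml
resources:
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      # The alarm action is the scaling policy's webhook URL;
      # Heat scales out when Ceilometer fires this alarm.
      alarm_actions:
        - { get_attr: [scaleup_policy, alarm_url] }
```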


Hope it helps.
--
Luo Gangyi  luogan...@chinamobile.com



 




-------- Original Message --------
From: ICHIBA Sara ichi.s...@gmail.com
Date: 2015-05-06 (Wed) 8:25
To: openstack-dev openstack-dev@lists.openstack.org
Subject: [openstack-dev] [heat][ceilometer] autoscaling



hey there,


Please I wanna know if their is anyway I can have cpu, ram and network meters 
for each VM returned by ceilometer to heat for autoscaling tasks?


In advance, thank you for your response,

Sara__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.config] MultiStrOpt VS. ListOpt

2015-05-06 Thread ZhiQiang Fan
Hi, devs

I came across a problem: crudini cannot handle MultiStrOpt [1], and I don't
know why such a configuration option type is needed; ListOpt seems a
better choice. Currently I find lots of MultiStrOpt options in both Nova
and Ceilometer, and I think other projects have them too.

Here are my questions:

1) how can I update such type of option without manually rewrite the config
file? (like devstack scenario)
2) Is there any way to migrate MultiStrOpt to ListOpt? ListOpt takes the
last specified value while MultiStrOpt takes all of them, so compatibility
is a big problem.

Any hints?

Thanks!

[1]
https://github.com/pixelb/crudini/blob/6c7cb8330d2b3606610af20c767433358c8d20ab/TODO#L19
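The crudini limitation comes down to how INI parsers treat repeated keys, which is exactly how MultiStrOpt values are written. A small stdlib illustration (not oslo.config itself):

```python
# Why generic INI tools struggle with MultiStrOpt: its values are written
# as the same key repeated. A strict stdlib parser rejects that outright,
# and a lenient one keeps only the last value (ListOpt-like behavior),
# so neither preserves MultiStrOpt's accumulate-all semantics.
import configparser

INI = "[DEFAULT]\nextension = foo\nextension = bar\n"

try:
    configparser.ConfigParser().read_string(INI)  # strict=True by default
    outcome = "accepted"
except configparser.DuplicateOptionError:
    outcome = "rejected"

lenient = configparser.ConfigParser(strict=False)
lenient.read_string(INI)
last_value = lenient["DEFAULT"]["extension"]  # the last assignment wins
```

oslo.config's own parser accumulates every occurrence for a MultiStrOpt, which is the behavior third-party INI tools cannot round-trip.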
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ceilometer] autoscaling

2015-05-06 Thread Zane Bitter

On 06/05/15 08:25, ICHIBA Sara wrote:

hey there,

Please I wanna know if their is anyway I can have cpu, ram and network
meters for each VM returned by ceilometer to heat for autoscaling tasks?

In advance, thank you for your response,
Sara


The openstack-dev list is for discussing future development plans for 
OpenStack only. For questions about how to use OpenStack, you can post 
to the regular openst...@lists.openstack.org list, but it's usually 
better to use http://ask.openstack.org/


cheers,
Zane.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-06 Thread James Bottomley
On Wed, 2015-05-06 at 11:54 +0200, Thierry Carrez wrote:
 Hugh Blemings wrote:
  +2
  
  I think asking LWN if they have the bandwidth and interest to do this
  would be ideal - they've credibility in the Free/Open Source space and a
  proven track record.  Nice people too.
 
 On the bandwidth side, as a regular reader I was under the impression
 that they struggled with their load already, but I guess if it comes
 with funding that could be an option.
 
 On the interest side, my past tries to invite them to the OpenStack
 Summit so that they could cover it (the way they cover other
 conferences) were rejected, so I have doubts in that area as well.
 
 Anyone having a personal connection that we could leverage to pursue
 that option further ?

Sure, be glad to.

I've added Jon to the cc list (if his openstack mail sorting scripts
operate like mine, that will get his attention).

I already had a preliminary discussion with him: lwn.net is interested
but would need to hire an extra person to cover the added load.  That
makes it quite a big business investment for them.

James





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Neutron-legacy error

2015-05-06 Thread Silvia Fichera
Hi all,
I'm installig Openstack using Devstack, and I'm includind the ODL plugin.

This is my local.conf file:
[[local|localrc]]

#IP Details
HOST_IP=10.30.3.231 #Please Add The Control Node IP Address in this line
FLAT_INTERFACE=eth0
SERVICE_HOST=$HOST_IP
FIXED_RANGE=172.31.31.0/24
FIXED_NETWORK_SIZE=4096
NETWORK_GATEWAY=172.31.31.1
# FLOATING_RANGE=192.168.168.0/24
# PUBLIC_NETWORK_GATEWAY=192.168.168.1

#Instance Details
MULTI_HOST=1
#config Details
RECLONE=yes #Make it no after stacking successfully the first time
VERBOSE=True
LOG_COLOR=True
LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=/opt/stack/logs
#OFFLINE=True #Uncomment this after stacking successfully the first time

#Passwords
ADMIN_PASSWORD=stack
DATABASE_PASSWORD=stack
RABBIT_PASSWORD=stack
SERVICE_PASSWORD=stack
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50
ENABLE_TENANT_TUNNELS=false

#ML2 Details
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,openvswitch,linuxbridge
Q_ML2_TENANT_NETWORK_TYPE=local
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,gre,vxlan
disable_service n-net
enable_service q-svc
enable_service q-meta
enable_service q-dhcp
enable_service q-l3
enable_service neutron

enable_service odl-compute

ODL_MGR_IP=10.30.3.231 #Please Add the ODL IP Address in this line
OVS_PHYSICAL_BRIDGE=br-int
Q_OVS_USE_VETH=True


#Details of the Control node for various services
[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.30.3.231:8080/controller/nb/v2/neutron #Please Add the ODL IP
Address in this line
username=admin
password=admin

VNCSERVER_PROXYCLIENT_ADDRESS=10.30.3.231
#set for live migration
VNCSERVER_LISTEN=0.0.0.0
NOVA_INSTANCES_PATH=/var/lib/nova/instances

I run ./stack and I receive this error:


2015-05-06 13:19:23.940 | + echo 'screen -t q-svc bash'
2015-05-06 13:19:23.940 | + echo 'stuff python
/usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini'
2015-05-06 13:19:23.941 | + [[ -n /opt/stack/logs ]]
2015-05-06 13:19:23.941 | + echo 'logfile
/opt/stack/logs/q-svc.log.2015-05-06-150747'
2015-05-06 13:19:23.941 | + echo 'log on'
2015-05-06 13:19:23.941 | + screen -S stack -p q-svc -X stuff 'python
/usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini & echo $! >
/opt/stack/status/stack/q-svc.pid; fg || echo q-svc failed to start |
tee /opt/stack/status/stack/q-svc.failure'
2015-05-06 13:19:23.949 | + echo 'Waiting for Neutron to start...'
2015-05-06 13:19:23.949 | Waiting for Neutron to start...
2015-05-06 13:19:23.949 | + is_ssl_enabled_service neutron
2015-05-06 13:19:23.950 | + local services=neutron
2015-05-06 13:19:23.950 | + local service=
2015-05-06 13:19:23.950 | + '[' False == False ']'
2015-05-06 13:19:23.951 | + return 1
2015-05-06 13:19:23.951 | + timeout 60 sh -c 'while ! wget  --no-proxy -q
-O- http://10.30.3.231:9696; do sleep 1; done'
2015-05-06 13:20:23.957 | + die 694 'Neutron did not start'
2015-05-06 13:20:23.957 | + local exitcode=0
2015-05-06 13:20:23.958 | [Call Trace]
2015-05-06 13:20:23.958 | ./stack.sh:1212:start_neutron_service_and_check
2015-05-06 13:20:23.958 | /home/silvia/devstack/lib/neutron-legacy:694:die
2015-05-06 13:20:24.298 | [ERROR]
/home/silvia/devstack/lib/neutron-legacy:694 Neutron did not start
2015-05-06 13:20:25.306 | Error on exit

Could anyone help me please?
I can't find anything on the web to fix it.
Thank you
Silvia
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Neutron][IPAM] Do we need migrate script for neutron IPAM now?

2015-05-06 Thread John Belamaric
I agree, we should amend it to not run pluggable IPAM as the default for now. 
When we decide to make it the default, the migration scripts will be needed.

John


On 5/5/15, 1:47 PM, Salvatore Orlando 
sorla...@nicira.com wrote:

Patch #153236 is introducing pluggable IPAM in the db base plugin class, and 
defaults to it at the same time, I believe.

If the consensus is to default to the pluggable IPAM driver, then in order to 
satisfy grenade requirements those migration scripts should be run. There 
should actually be a single script, run in a one-off fashion. Even better if 
it is treated as a DB migration.

However, the plan for Kilo was to not turn on pluggable IPAM by default. Now 
that we are targeting Liberty, we should have this discussion again, and not 
take for granted that we should default to pluggable IPAM just because a few 
months ago we assumed it would be the default by Liberty.
I suggest not enabling it by default, and then considering in L-3 whether we 
should make this switch.
For the time being, would it be possible to amend patch #153236 to not run 
pluggable IPAM by default? I appreciate this would have some impact on unit 
tests as well, which should be run both for pluggable and traditional IPAM.

Salvatore

On 4 May 2015 at 20:11, Pavel Bondar 
pbon...@infoblox.com wrote:
Hi,

During fixing failures in db_base_plugin_v2.py with new IPAM[1] I faced
to check-grenade-dsvm-neutron failures[2].
check-grenade-dsvm-neutron installs stable/kilo, creates
networks/subnets and upgrades to the patched master.
So it validates that the migrations pass fine and the installation works
fine afterward.

This is where failure occurs.
Earlier there was an agreement about using pluggable IPAM only for
greenfield installations, so the migrate script from built-in IPAM to
pluggable IPAM was postponed.
And check-grenade-dsvm-neutron validates the brownfield (upgrade) scenario.
So do we want to update this agreement and implement migration scripts
from built-in IPAM to pluggable IPAM now?

Details about failures.
Subnets created before the patch was applied do not have a corresponding
IPAM subnet,
so a lot of failures like this are observed in [2]:
Subnet 2c702e2a-f8c2-4ea9-a25d-924e32ef5503 could not be found
Currently the config option in the patch is modified to use pluggable_ipam by
default (to catch all possible UT/tempest failures).
But before the merge the patch will be switched back to the non-ipam
implementation by default.
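The migrate script discussed above would essentially backfill an IPAM subnet record for every pre-existing Neutron subnet, so that post-upgrade lookups stop failing. A minimal sketch, assuming a hypothetical dict-based data model (not the actual Neutron/IPAM DB schema):

```python
# Hedged sketch: backfill IPAM subnet records for subnets created before
# pluggable IPAM was enabled. The dict "tables" are hypothetical stand-ins
# for the real Neutron and IPAM database models.
def backfill_ipam_subnets(neutron_subnets, ipam_subnets):
    known = {s["neutron_subnet_id"] for s in ipam_subnets}
    created = []
    for subnet in neutron_subnets:
        if subnet["id"] not in known:
            # Without this record, lookups fail with
            # "Subnet <uuid> could not be found" after the upgrade.
            created.append({"neutron_subnet_id": subnet["id"],
                            "cidr": subnet["cidr"]})
    ipam_subnets.extend(created)
    return len(created)
```

Run once in a one-off fashion (or as a DB migration), the operation is idempotent: a second run finds nothing left to create.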

I would prefer to implement migrate script as a separate review,
since [1] is already quite big and hard for review.

[1] https://review.openstack.org/#/c/153236
[2]
http://logs.openstack.org/36/153236/54/check/check-grenade-dsvm-neutron/42ab4ac/logs/grenade.sh.txt.gz

- Pavel Bondar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] LBaaS in version 5.1

2015-05-06 Thread Stanislaw Bogatkin
Hi Daniel,

Unfortunately, we never supported LBaaS until Fuel 6.0, when the plugin system
was introduced and the LBaaS plugin was created. So, I think that docs about it
never existed for 5.1. But as far as I know, you can easily install LBaaS in 5.1
(it should be shipped in our repos) and configure it in accordance with the
standard OpenStack cloud administrator guide [1].

[1]
http://docs.openstack.org/admin-guide-cloud/content/install_neutron-lbaas-agent.html

On Wed, May 6, 2015 at 2:12 PM, Daniel Comnea comnea.d...@gmail.com wrote:

 HI all,

 Recently i used Fuel 5.1 to deploy Openstack Icehouse on a Lab (PoC) and a
 request came with enabling Neutron LBaaS.

 I have looked at the Fuel docs to see if this is supported in the version
 I'm running but failed to find anything.

 Anyone can point me to any docs which mentioned a) yes it is supported and
 b) how to update it via Fuel?

 Thanks,
 Dani

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] heat: Using IP address of one VM with-in another VM

2015-05-06 Thread Chris Friesen

On 05/06/2015 06:20 AM, ashish.jai...@wipro.com wrote:

Hi,


Is it possible to somehow pass the IP of a VM to another VM during heat
deployment? For example, the IP of a DB server in a VM to a client in another VM.


Yes.  As an example take a look at
https://github.com/openstack/heat-templates/blob/master/hot/F20/WordPress_2_Instances.yaml


The WebServer is configured with the IP address of the DatabaseServer:

db_ipaddr: { get_attr: [DatabaseServer, networks, private, 0] }
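Under the hood, that get_attr call is just a nested lookup along the path [DatabaseServer, networks, private, 0]. A hedged Python sketch of the idea, with illustrative data rather than Heat's actual implementation:

```python
# Hedged sketch of get_attr's path resolution: walk the resolved
# attribute tree, indexing dicts by key and lists by position.
def get_attr(resources, *path):
    value = resources
    for step in path:
        value = value[step]
    return value

# Illustrative resolved attributes for the template's DB server.
resources = {"DatabaseServer": {"networks": {"private": ["10.0.0.5"]}}}
db_ipaddr = get_attr(resources, "DatabaseServer", "networks", "private", 0)
```

The trailing 0 selects the first address on the private network, which is why the template's parameter ends up holding a single IP string rather than a list.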


Chris


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] expanding to 2nd location

2015-05-06 Thread Joseph Bajin
Just to add in my $0.02, we run in multiple sites as well.  We are using
regions to do this.  Cells has a lot going for it at this point, but we
thought it wasn't there yet.  We also don't have the necessary resources to
make our own changes to it like a few other places do.

With that, we said the only real thing we should do is make sure items
such as Tenant and User IDs remain the same. That allows us to do
show-back reporting and makes it easier on the user base when they
want to deploy from one region to another.  With that requirement, we
used galera in the same manner that many of the others mentioned.  We then
deployed Keystone pointing to that galera DB.  That is the only DB that is
replicated across sites.  Everything else, such as Nova and Neutron, is
all within its own location.

The only really confusing piece for our users is the dashboard.  When you
first go to the dashboard, there is a dropdown to select a region.  Many
users think that it is going to send them to a particular location, so that
their information from that location will show up.  It is really about which
region you want to authenticate against.  Once you are in the dashboard,
you can select which Project you want to see.  That has been a major point
of confusion. I think our solution is to just rename that text.





On Tue, May 5, 2015 at 11:46 AM, Clayton O'Neill clay...@oneill.net wrote:

 On Tue, May 5, 2015 at 11:33 AM, Curtis serverasc...@gmail.com wrote:

 Do people have any comments or strategies on dealing with Galera
 replication across the WAN using regions? Seems like something to try
 to avoid if possible, though might not be possible. Any thoughts on
 that?


 We're doing this with good luck.  Few things I'd recommend being aware of:

 Set galera_group_segment so that each site is in a separate segment.  This
 will make it smarter about doing replication and for state transfer.

 Make sure you look at the timers and tunables in Galera and make sure they
 make sense for your network.  We've got lots of BW and lowish latency
 (37ms), so the defaults have worked pretty well for us.

 Make sure that when you do provisioning in one site, you don't have CM
 tools in the other site breaking things.  We ran into issues during our
 first deploy like this where Puppet was making a change in one site to a
 user, and Puppet in the other site reverted the change nearly immediately.
 You may have to tweak your deployment process to deal with that sort of
 thing.

 Make sure you're running Galera or Galera Arbitrator in enough sites to
 maintain quorum if you have issues.  We run 3 nodes in one DC, and 3 nodes
 in another DC for Horizon, Keystone and Designate.  We run a Galera
 arbitrator in a third DC to settle ties.

 Lastly, the obvious one is just to stay up to date on patches.  Galera is
 pretty stable, but we have run into bugs that we had to get fixes for.
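The arbitrator point above is worth a quick arithmetic check. With three nodes in each of two DCs, losing either DC leaves 3 of 6 votes, which is not a majority; adding an arbitrator in a third DC makes 7 votes, so the surviving DC plus the arbiter holds 4 of 7 and keeps quorum. A minimal sketch of that rule:

```python
# Hedged sketch: Galera keeps quorum only with a strict majority of votes.
def has_quorum(votes_up, votes_total):
    return votes_up > votes_total // 2

# 3 nodes per DC, no arbitrator: a DC outage leaves 3/6 votes, no quorum.
# With a third-DC arbitrator (7 votes): surviving DC + arbiter = 4/7, quorum.
```

This is why an even split across two sites is fragile without a tie-breaking vote somewhere else.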

 On Tue, May 5, 2015 at 11:33 AM, Curtis serverasc...@gmail.com wrote:

 Do people have any comments or strategies on dealing with Galera
 replication across the WAN using regions? Seems like something to try
 to avoid if possible, though might not be possible. Any thoughts on
 that?

 Thanks,
 Curtis.

 On Mon, May 4, 2015 at 3:11 PM, Jesse Keating j...@bluebox.net wrote:
  I agree with Subbu. You'll want that to be a region so that the control
  plane is mostly contained. Only Keystone (and swift if you have that)
 would
  be doing lots of site to site communication to keep databases in sync.
 
  http://docs.openstack.org/arch-design/content/multi_site.html is a
 good read
  on the topic.
 
 
  - jlk
 
  On Mon, May 4, 2015 at 1:58 PM, Allamaraju, Subbu su...@subbu.org
 wrote:
 
  I suggest building a new AZ (“region” in OpenStack parlance) in the new
  location. In general I would avoid setting up control plane to operate
  across multiple facilities unless the cloud is very large.
 
   On May 4, 2015, at 1:40 PM, Jonathan Proulx j...@jonproulx.com
 wrote:
  
   Hi All,
  
   We're about to expand our OpenStack Cloud to a second datacenter.
   Anyone one have opinions they'd like to share as to what I would and
   should be worrying about or how to structure this?  Should I be
   thinking cells or regions (or maybe both)?  Any obvious or not so
   obvious pitfalls I should try to avoid?
  
   Current scale is about 75 hypervisors.  Running juno on Ubuntu 14.04
   using Ceph for volume storage, ephemeral block devices, and image
   storage (as well as object store).  Bulk data storage for most (but
 by
   no means all) of our workloads is at the current location (not that
   that matters I suppose).
  
   Second location is about 150km away and we'll have 10G (at least)
   between sites. The expansion will be approximately the same size as
   the existing cloud maybe slightly larger and given site capacities
 the
   new location is also more likely to be where any future grown goes.
  
   Thanks,
   -Jon
  
   

[openstack-dev] [RefStackl] - http://refstack.org/ - does not resolve

2015-05-06 Thread Arkady_Kanevsky
What are we doing to have name resolved?
Meanwhile what is IP address to reach it?
Do we really expect people to submit results to that web site?
Thanks,
Arkady
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Modification in nova policy file

2015-05-06 Thread Joseph Bajin
The Policy file is not a filtering agent.   It basically just provides ACL
type of abilities.

Can you do this action?  True/False
Do you have the right permissions to call this action? True/False

If you wanted to pull back just the instances that the user owns, then you
would actually have to write some code that would call that particular
filtering action.
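In other words, a policy rule only answers a yes/no question about one action; it never narrows the returned list. A hedged toy sketch of the admin_or_user semantics (not oslo.policy itself, just the rule's logic):

```python
# Hedged toy evaluator for an "is_admin:True or user_id:%(user_id)s"
# style rule: it gates the call as True/False; it does not filter rows.
def admin_or_user(creds, target):
    return bool(creds.get("is_admin")) or creds.get("user_id") == target.get("user_id")
```

Filtering "only my instances" out of a listing has to happen in the API code that builds the response, not in this check.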



On Tue, May 5, 2015 at 11:01 AM, Salman Toor salman.t...@it.uu.se wrote:

  Hi,


  I am trying to setup the policies for nova. Can you please have a look
 if thats correct?


  nova/policy.json

 "context_is_admin": "role:admin",
 "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
 "owner": "user_id:%(user_id)s",
 "admin_or_user": "is_admin:True or user_id:%(user_id)s",
 "default": "rule:admin_or_owner",

  "compute:get_all": "rule:admin_or_user",

  I want users to only see their own instances, not the instances of all
 the users in the same tenant.

  I have restarted the nova-api service on the controller, but with no effect. I
 have noticed that if I put "rule:context_is_admin" in "compute:get_all",
 then no one except "admin" can see anything, so the system is reading the file
 correctly.

  Important:

  1 - I haven’t changed the  /etc/openstack-dashboard/nova_policy.json

  2 - I have only used the command line client tool to confirm the
 behaviour.

  I am running Juno release.

  Please point me to some document that discusses all the policy parameters.

  Thanks in advance.

  /Salman

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [oslo.config] MultiStrOpt VS. ListOpt

2015-05-06 Thread Julien Danjou
On Wed, May 06 2015, ZhiQiang Fan wrote:

 I came across a problem where crudini cannot handle MultiStrOpt[1]; I don't
 know why such a configuration option type is needed. It seems ListOpt is a
 better choice. Currently I find lots of MultiStrOpt options in both Nova
 and Ceilometer, and I think other projects have them too.

 Here are my questions:

 1) how can I update such type of option without manually rewrite the config
 file? (like devstack scenario)
 2) Is there any way to migrate MultiStrOpt to ListOpt? The ListOpt will
 take last specified value while MultiStrOpt takes all, so the compatibility
 is a big problem

 Any hints?

I didn't check extensively, but this is something I hit regularly. It
seems to me we have two types doing more or less the same thing and
mapping to the same data structure (i.e. a list). We should unify them.
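The behavioural difference can be sketched with a toy parser (not oslo.config itself): a MultiStrOpt collects every occurrence of a key, while a ListOpt keeps only the last occurrence and splits it on commas, which is why a line-oriented tool like crudini copes with the latter but not the former:

```python
# Hedged sketch of the two option semantics over raw "key = value" lines.
def parse_multistr(lines, key):
    # MultiStrOpt: every occurrence of the key contributes one entry.
    return [v.strip() for k, _, v in (l.partition("=") for l in lines)
            if k.strip() == key]

def parse_list(lines, key):
    # ListOpt: only the last occurrence counts, split on commas.
    values = parse_multistr(lines, key)
    return values[-1].split(",") if values else []
```

Migrating from one to the other is therefore lossy in general: collapsing repeated MultiStrOpt lines into one comma-joined ListOpt value only works if none of the values contain commas.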

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] LBaaS in version 5.1

2015-05-06 Thread Daniel Comnea
HI all,

Recently i used Fuel 5.1 to deploy Openstack Icehouse on a Lab (PoC) and a
request came with enabling Neutron LBaaS.

I have looked at the Fuel docs to see if this is supported in the version I'm
running but failed to find anything.

Anyone can point me to any docs which mentioned a) yes it is supported and
b) how to update it via Fuel?

Thanks,
Dani
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Core reviewer update proposal

2015-05-06 Thread Dan Prince
On Tue, 2015-05-05 at 07:57 -0400, James Slagle wrote:
 Hi, I'd like to propose adding Giulio Fidente and Steve Hardy to TripleO Core.
 
 Giulio has been an active member of our community for a while. He
 worked on the HA implementation in the elements and recently has been
 making a lot of valuable contributions and reviews related to puppet
 in the manifests, heat templates, ceph, and HA.

+1 Giulio has become one of our resident HA experts.

 
 Steve Hardy has been instrumental in providing a lot of Heat domain
 knowledge to TripleO and his reviews and guidance have been very
 beneficial to a lot of the template refactoring. He's also been
 reviewing and contributing in other TripleO projects besides just the
 templates, and has shown a solid understanding of TripleO overall.

+1 Steve's Heat expertise has been invaluable.

 
 180 day stats:
 | gfidente | 208  0  42 166   0   0   79.8% |
 16 (  7.7%)  |
 |  shardy  | 206  0  27 179   0   0   86.9% |
 16 (  7.8%)  |
 
 TripleO cores, please respond with +1/-1 votes and any
 comments/objections within 1 week.
 
 Giulio and Steve, also please do let me know if you'd like to serve on
 the TripleO core team if there are no objections.
 
 I'd also like to give a heads-up to the following folks whose review
 activity is very low for the last 90 days:
 |   tomas-8c8 **   |   8  0  0  0  8  2  100.0% |0 (  0.0%)  |
 |lsmola ** |   6  0  0  0  6  5  100.0% |0 (  0.0%)  |
 | cmsj **  |   6  0  2  0  4  2   66.7% |0 (  0.0%)  |
 |   jprovazn **|   1  0  1  0  0  0    0.0% |0 (  0.0%)  |
 |   jonpaul-sullivan **|  no activity
 Helping out with reviewing contributions is one of the best ways we
 can make good forward progress in TripleO. All of the above folks are
 valued reviewers and we'd love to see you review more submissions. If
 you feel that your focus has shifted away from TripleO and you'd no
 longer like to serve on the core team, please let me know.
 
 I also plan to remove Alexis Lee from core, who previously has
 expressed that he'd be stepping away from TripleO for a while. Alexis,
 thank you for reviews and contributions!
 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fuel][Plugin] Contributor license agreement for fuel plugin code?

2015-05-06 Thread Irina Povolotskaya
 the developer community seems not yet convinced about is moving
  away from extensions. It seems everybody realises the flaws of evolving
 the
  API through extensions, but there are understandable concerns regarding
  impact on plugins/drivers as well as the ability to differentiate, which
 is
  something quite dear to several neutron teams. I tried to consider all
  those concerns and feedback received; hopefully everything has been
  captured in a satisfactory way in the latest revision of [1].
  With this ML post I also seek feedback from the API-wg concerning the
  current proposal, whose salient points can be summarised as follows:
 
   #1 extensions are not part anymore of the neutron API.
 
   Evolution of the API will now be handled through versioning. Once
  microversions are introduced:
 - current extensions will be progressively moved into the Neutron
  unified API
 - no more extension will be accepted as part of the Neutron API
 
   #2 Introduction of features for addressing diversity in Neutron
 plugins
 
   It is possible that the combination of neutron plugins chosen by the
  operator won't be able to support the whole Neutron API. For this reason
 a
  concept of feature is included. What features are provided depends on
 the
  plugins loaded. The list of features is hardcoded as strictly dependent
 on
  the Neutron API version implemented by the server. The specification also
  mandates a minimum set of features every neutron deployment must provide
  (those would be the minimum set of features needed for integrating
 Neutron
  with Nova).
 
   #3 Advanced services are still extensions
 
   This a temporary measure, as APIs for load balancing, VPN, and Edge
  Firewall are still served through neutron WSGI. As in the future this API
  will live independently it does not make sense to version them with
 Neutron
  APIs.
 
   #4 Experimenting in the API
 
   One thing that has plagued Neutron in the past is the impossibility of
  getting people to reach any sort of agreement over the shape of certain
  APIs. With the proposed plan we encourage developers to submit
 experimental
  APIs. Experimental APIs are unversioned and no guarantee is made
 regarding
  deprecation or backward compatibility. Also they're optional, as a
 deployer
  can turn them off. While there are caveats, like forever-experimental
 APIs,
  this will enable developer to address user feedback during the APIs'
  experimental phase. The Neutron community and the API-wg can provide
 plenty
  of useful feeback, but ultimately is user feedback which determines
 whether
  an API proved successful or not. Please note that the current proposal
 goes
  in a direction different from that approved in Nova when it comes to
  experimental APIs [3]
 
   #5 Plugin/Vendor specific APIs
 
   Neutron is without doubt the project with the highest number of 3rd
  party (OSS and commercial) integration. After all it was mostly vendors
 who
  started this project.
  Vendors [4] use the extension mechanism to expose features in their
  products not covered by the Neutron API or to provide some sort of
  value-added service.
  The current proposal still allows 3rd parties to attach extensions to the
  neutron API, provided that:
  - they're not considered part of the Neutron API, in terms of versioning,
  documentation, and client support
  - they do not redefine resources defined by the Neutron API.
  - they do not live in the neutron source tree
  The aim of the provisions above is to minimize the impact of such
  extensions on API portability.
 
   Thanks for reading and thanks in advance for your feedback,
   Salvatore
 
   The title of this post has been inspired by [2]  (the message in the
  banner may be unintelligible to readers not fluent in european football)
 
   [1] https://review.openstack.org/#/c/136760/
  [2]
 
 http://a.espncdn.com/combiner/i/?img=/photo/2015/0502/fc-banner-jd-1296x729.jpgw=738site=espnfc
  [3]
 
 http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
  [4] By vendor here we refer either to a cloud provider or a company
  providing Neutron integration for their products.
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 

[openstack-dev] [all] TC Communications planning

2015-05-06 Thread Anne Gentle
Hi all,

In the interest of communicating sooner rather than later, I wanted to
write a new thread to say that Flavio Percoco and I are going to work on a
TC communications plan as co-chairs of a TC communications working group.

I think we can find a happy medium amongst meeting minutes, gerrit reviews,
and irregular blog entries by applying some comms planning, so that Flavio
and I can dive in.

Please answer these questions on the list if you're interested in shaping
the communications plan:

Audience considerations:
Is the primary audience current OpenStack contributors or those in consumer
roles?
What percentage of the audience are fairly new contributors? Fairly new to
OpenStack itself?
Is the audience more likely to be an outsider looking in to OpenStack
governance?
Is the audience wanting to click links to learn more, or do they just want
the summary?
Does the audience always want an action to take, or is simply getting
information their goal?

Channel considerations:
Is this audience with their goals more likely to use blogs, RSS, and
Twitter or subscribe to mailing lists?
Depending on the channels chosen, is cross-posting to multiple channels a
huge error, or are we leaning towards a wide net rather than laser
targeting?
Is there another channel we haven't considered that is widely consumed?
Does the cadence have to be weekly, even if not much happened with the TC
that week?

Thanks all for participating and giving input.

Anne and Flavio
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Do we need migrate script for neutron IPAM now?

2015-05-06 Thread Carl Baldwin
On Tue, May 5, 2015 at 11:47 AM, Salvatore Orlando sorla...@nicira.com wrote:
 I suggest to not enable it by default, and then consider in L-3 whether we
 should do this switch.

I agree.  At the least, the switch should be decoupled from that
patch.  I think decoupling them before merging the patch was the plan
all along, it just hasn't happened yet.  We should create a new patch
dependent on this one to make it the default.  This will tee it up for
discussion and we should put a hold on that new patch until we can
discuss in Liberty-3.  I currently lean toward not making it the
default in Liberty but we can discuss later.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota

2015-05-06 Thread Everett Toews
On May 6, 2015, at 1:58 PM, David Kranz dkr...@redhat.com wrote:

+1
The basic problem is we are trying to fit a square (generic api) peg in a round 
(HTTP request/response) hole.
But if we do say we are recognizing sub-error-codes, it might be good to 
actually give them numbers somewhere in the response (maybe an error code 
header) rather than relying on string matching to determine the real error. 
String matching is fragile and has icky i18n implications.
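A hedged sketch of what client-side handling could look like with such a code; the X-Error-Code header name and the code string are hypothetical, pending the proposal in [1]:

```python
# Hedged sketch: matching a structured error code survives message
# rewording and translation, where matching the message string does not.
def is_over_quota(status, headers):
    return status == 403 and headers.get("X-Error-Code") == "OverQuota"
```

Whether the code lives in a header or a response-body field, the point is the same: clients branch on a stable token, not on prose.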

There is an effort underway around defining such sub-error-codes [1]. Those 
error codes would be surfaced in the REST API here [2]. Naturally, feedback is 
welcome.

Everett


[1] https://review.openstack.org/#/c/167793/
[2] https://review.openstack.org/#/c/167793/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] TC Communications planning

2015-05-06 Thread Doug Hellmann
Excerpts from Maish Saidel-Keesing's message of 2015-05-06 17:11:23 +0300:
 On 05/06/15 16:13, Anne Gentle wrote:
  Hi all,
 
  In the interest of communicating sooner rather than later, I wanted to 
  write a new thread to say that Flavio Percoco and I are going to work 
  on a TC communications plan as co-chairs of a TC communications 
  working group.
 
  I think we can find a happy medium amongst meeting minutes, gerrit 
  reviews, and irregular blog entries by applying some comms planning, 
  so that Flavio and I can dive in.
 
  Please answer these questions on the list if you're interested in 
  shaping the communications plan:
 
  Audience considerations:
  Is the primary audience current OpenStack contributors or those in 
  consumer roles?
 I would think Consumer Roles, most contributors are already in the know 
 and on the mailing lists and meetings.

I think you're overestimating the number of contributors who actually
manage to keep up with all of the traffic on this list.

We should address both audiences.

  What percentage of the audience are fairly new contributors? Fairly 
  new to OpenStack itself?
  Is the audience more likely to be an outsider looking in to 
  OpenStack governance?
 I would think so. The 'insider' already knows where and how to find the 
 information.
  Is the audience wanting to click links to learn more, or do they just 
  want the summary?
 Both would be great. There will be those who only want a summary, but 
 sometimes would also like a bit more detail on a specific subject
  Does the audience always want an action to take, or is simply getting 
  information their goal?
 I would leave that up to the audience. But if this is a communication 
 channel - we should decide if it should be one-way or both ways.
 
  Channel considerations:
  Is this audience with their goals more likely to use blogs, RSS, and 
  Twitter or subscribe to mailing lists?
 If we are talking about the non-contributor - a definite no to mailing 
 and a huge yes to the first part of the list.
  Depending on the channels chosen, is cross-posting to multiple 
  channels a huge error, or are we leaning towards a wide net rather 
  than laser targeting?
 Cross posting should be fine since it will mostly be a link pointing to 
 the main source of content - which will be a blog post of some sorts.
  Is there another channel we haven't considered that is widely consumed?
 FaceBook? But I personally don't go near it.
   Does the cadence have to be weekly, even if not much happened with 
   the TC that week?
 I do not think it has to be weekly, because perhaps that would become 
 quite boring - if nothing really happened. I would say it should go 
 according to need - but a minimum of once a month (even if there was 
 nothing exciting).
 
  Thanks all for participating and giving input.
 
  Anne and Flavio
 
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] expanding to 2nd location

2015-05-06 Thread Mike Dorman
+1 to second site = second region.

I would not recommend using cells unless you have a real nova scalability 
problem.  There are a lot of caveats/gotchas.  Cells v2 I think should come as 
an experimental feature in Liberty, and past that point cells will be the 
default mode of operation.  It will probably be much easier to go from no cells 
to cells v2 than cells v1 to v2.

Mike



From: Joseph Bajin
Date: Wednesday, May 6, 2015 at 8:06 AM
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] expanding to 2nd location

Just to add in my $0.02, we run in multiple sites as well.  We are using 
regions to do this.  Cells at this point have a lot going for it, but we 
thought it wasn't there yet.  We also don't have the necessary resources to 
make our own changes to it like a few other places do.

With that, we said the only real thing that we should do is make sure items 
such as Tenant and User ID's remained the same. That allows us to do show-back 
reporting and it makes it easier on the user-base on when they want to deploy 
from one region to another.   With that requirement, we did use galera in the 
same manner that many of the others mentioned.  We then deployed Keystone 
pointing to that galera DB.  That is the only DB that is replicated across 
sites.  Everything else such as Nova, Neutron, etc are all within its own 
location.

The only real confusing piece for our users is the dashboard.  When you first 
go to the dashboard, there is a dropdown to select a region.  Many users think 
that it is going to send them to a particular location, so that their 
information from that location will show up.  Really, it selects which region 
you want to authenticate against.  Once you are in the dashboard, you can select which 
Project you want to see.  That has been a major point of confusion. I think our 
solution is to just rename that text.





On Tue, May 5, 2015 at 11:46 AM, Clayton O'Neill 
clay...@oneill.net wrote:
On Tue, May 5, 2015 at 11:33 AM, Curtis 
serverasc...@gmail.com wrote:
Do people have any comments or strategies on dealing with Galera
replication across the WAN using regions? Seems like something to try
to avoid if possible, though might not be possible. Any thoughts on
that?

We're doing this with good luck.  Few things I'd recommend being aware of:

Set galera_group_segment so that each site is in a separate segment.  This 
makes replication across the WAN smarter and lets state transfers prefer 
donors in the same site.

Make sure you look at the timers and tunables in Galera and make sure they make 
sense for your network.  We've got lots of BW and lowish latency (37ms), so the 
defaults have worked pretty well for us.
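For concreteness, here is a sketch of the kind of my.cnf settings involved. The segment setting maps to Galera's gmcast.segment provider option, and the evs.* timers are the WAN tunables mentioned above; the values here are illustrative assumptions, not recommendations:

```ini
# my.cnf fragment (illustrative sketch only -- tune timers to your own
# latency/loss profile). Nodes in the second DC would set gmcast.segment=2;
# replication between segments is then relayed through a single node, and
# SST/IST donors are preferred from the local segment.
[mysqld]
wsrep_provider_options = "gmcast.segment=1; evs.keepalive_period=PT3S; evs.suspect_timeout=PT30S; evs.inactive_timeout=PT1M"
```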

Make sure that when you do provisioning in one site, you don't have CM tools in 
the other site breaking things.  We ran into issues during our first deploy 
like this where Puppet was making a change in one site to a user, and Puppet in 
the other site reverted the change nearly immediately.  You may have to tweak 
your deployment process to deal with that sort of thing.

Make sure you're running Galera or Galera Arbitrator in enough sites to 
maintain quorum if you have issues.  We run 3 nodes in one DC, and 3 nodes in 
another DC for Horizon, Keystone and Designate.  We run a Galera arbitrator in 
a third DC to settle ties.
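The quorum arithmetic behind that layout can be sketched in a few lines (the helper names are made up for illustration):

```python
# Back-of-the-envelope quorum check for the Galera layout described above
# (3 nodes in DC1, 3 in DC2, plus an arbitrator in a third DC).

def quorum(total_nodes):
    """Minimum members needed for a Galera cluster to stay primary."""
    return total_nodes // 2 + 1

def survives_dc_loss(nodes_per_dc, arbitrator=False):
    """True if losing the largest DC outright still leaves quorum."""
    total = sum(nodes_per_dc) + (1 if arbitrator else 0)
    remaining = total - max(nodes_per_dc)
    return remaining >= quorum(total)

print(survives_dc_loss([3, 3]))                   # False: 3 of 6 left, need 4
print(survives_dc_loss([3, 3], arbitrator=True))  # True: 4 of 7 left, need 4
```

Without the arbitrator, losing either DC (or the link between them) drops both halves below quorum; the seventh vote is what settles ties.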

Lastly, the obvious one is just to stay up to date on patches.  Galera is 
pretty stable, but we have run into bugs that we had to get fixes for.

On Tue, May 5, 2015 at 11:33 AM, Curtis 
serverasc...@gmail.com wrote:
Do people have any comments or strategies on dealing with Galera
replication across the WAN using regions? Seems like something to try
to avoid if possible, though might not be possible. Any thoughts on
that?

Thanks,
Curtis.

On Mon, May 4, 2015 at 3:11 PM, Jesse Keating 
j...@bluebox.net wrote:
 I agree with Subbu. You'll want that to be a region so that the control
 plane is mostly contained. Only Keystone (and swift if you have that) would
 be doing lots of site to site communication to keep databases in sync.

 http://docs.openstack.org/arch-design/content/multi_site.html is a good read
 on the topic.


 - jlk

 On Mon, May 4, 2015 at 1:58 PM, Allamaraju, Subbu 
su...@subbu.org wrote:

 I suggest building a new AZ (“region” in OpenStack parlance) in the new
 location. In general I would avoid setting up control plane to operate
 across multiple facilities unless the cloud is very large.

  On May 4, 2015, at 1:40 PM, Jonathan Proulx 
j...@jonproulx.com wrote:
 
  Hi All,
 
  We're about to expand our OpenStack Cloud to a second datacenter.
  Anyone one have opinions they'd like to share as to what I would and
  should be worrying about or how to structure this?  Should I be
  thinking cells or regions (or maybe both)?  Any obvious or not so
  obvious pitfalls I should try to avoid?
 
  

Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-06 Thread Mike Dorman
We also run all masterless/puppet apply.  And we just populate a bare 
bones keystone.conf on any box that does not have keystone installed, but 
Puppet needs to be able to create keystone resources.

Also agreed on avoiding puppetdb, for the same reasons.

(Something to note for those of us doing masterless today: there are plans 
from Puppet to move more of the manifest compiling functionality to run 
only in the puppet master process.  So at some point, it’s likely that 
masterless setups may not be possible.)

Mike




 If you do not wish to explicitly define Keystone resources for
 Glance on Keystone nodes but instead let Glance nodes manage
 their own resources, you could always use exported resources.

 You let Glance nodes export their keystone resources and then
 you ask Keystone nodes to realize them where admin credentials
 are available. (I know some people don't really like exported
 resources for various reasons)

 I'm not familiar with exported resources.  Is this a viable
 option that has less impact than just requiring Keystone
 resources to be realized on the Keystone node?
 
 I'm not in favor of having exported resources because it requires 
 PuppetDB, and a lot of people try to avoid that. For now, we've
 been able to set up all of OpenStack without PuppetDB in TripleO and in
 some other installers, we might want to keep this benefit.
 
 +100
 
 We're looking at using these puppet modules in a bit, but we're also a
 few steps away from getting rid of our puppetmaster and moving to a
 completely puppet apply based workflow. I would be double-plus
 sad-panda if we were not able to use the openstack puppet modules to
 do openstack because they'd been done in such a way as to require a
 puppetmaster or puppetdb.

100% agree.

Even if you had a puppetmaster and puppetdb, you would still end up in
this eventual consistency dance of puppet runs.


[openstack-dev] [neutron] Neutron meeting for the next few weeks is cancelled

2015-05-06 Thread Kyle Mestery
Hi folks!

Given most of us will be in Vancouver for the Summit and we've finished
planning out the design summit, we'll go ahead and cancel the Neutron
meeting for the next 3 weeks. We'll resume the week after the Summit, which
is 6/2/2015 at 1400UTC [1].

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings


[Openstack] Neutron LBaaS with L3 HA

2015-05-06 Thread Abhishek Chanda
Hi,

I am trying to run Neutron LBaaS with the HAProxy driver with L3 HA.
When I associate a floating IP to the LBaaS VIP, I cannot access my
backend service over the floating IP. It seems this is because the
haproxy instance happens to be scheduled on a node that is not the
current master for L3 HA. Am I missing some configuration somewhere?

The issue is described in
https://bugs.launchpad.net/neutron/+bug/1452039

Thanks

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [all] TC Communications planning

2015-05-06 Thread Zane Bitter

On 06/05/15 09:13, Anne Gentle wrote:

Hi all,

In the interest of communicating sooner rather than later, I wanted to
write a new thread to say that Flavio Percoco and I are going to work on
a TC communications plan as co-chairs of a TC communications working group.

I think we can find a happy medium amongst meeting minutes, gerrit
reviews, and irregular blog entries by applying some comms planning, so
that Flavio and I can dive in.

Please answer these questions on the list if you're interested in
shaping the communications plan:

Audience considerations:
Is the primary audience current OpenStack contributors or those in
consumer roles?


I think it has to be both.

Maish's suggestion that most contributors are already in the know and 
on the mailing lists and meetings is absurd. Beyond the group of 
probably 25 people who pay close attention to governance, most core 
reviewers and even PTLs I speak to have a vague idea what is going on in 
the TC only when it pertains to an issue that was heavily discussed on 
openstack-dev, and even then they're unlikely to know what the outcome 
was unless/until it starts affecting them directly.



What percentage of the audience are fairly new contributors? Fairly new
to OpenStack itself?
Is the audience more likely to be an outsider looking in to OpenStack
governance?


Not sure how to parse this. Substantially everybody is an outsider to 
OpenStack governance, so yes, but I think it should be primarily for 
insiders to OpenStack.



Is the audience wanting to click links to learn more, or do they just
want the summary?


I don't think they want to be clicking links through to the governance 
repo (though it doesn't hurt to have them). IMHO folks need a summary, 
and maybe a summary of the summary so they can figure out when the 
summary is worth reading.



Does the audience always want an action to take, or is simply getting
information their goal?


Information.


Channel considerations:
Is this audience with their goals more likely to use blogs, RSS, and
Twitter or subscribe to mailing lists?


Contributors are mostly on openstack-dev, and that's an audience the 
blog posts haven't been hitting. So I think executive summaries on 
mailing lists with links to blog posts will work and improve the current 
reach.


I think the blog + RSS + newsletter approach used up until now 
is probably the best chance to get through to the non-openstack-dev 
readers. It's always going to be an uphill battle though, because people 
have to choose to subscribe to a mailing list or the newsletter or the 
feed or Planet OpenStack or whatever - there's no place we can go to them.



Depending on the channels chosen, is cross-posting to multiple channels
a huge error, or are we leaning towards a wide net rather than laser
targeting?


IMHO cross-posting is fine, but I wouldn't necessarily replicate the 
entire content to every channel.



Is there another channel we haven't considered that is widely consumed?


Not AFAIK.


Does the cadence have to be weekly, even if not much happened with the
TC during the week?


IMO no. It's more likely to be read if it's just posted when there is 
actual important news to report.


cheers,
Zane.



Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-06 Thread Colleen Murphy
On Wed, May 6, 2015 at 4:26 PM, Mike Dorman mdor...@godaddy.com wrote:

 We also run all masterless/puppet apply.  And we just populate a bare
 bones keystone.conf on any box that does not have keystone installed, but
 Puppet needs to be able to create keystone resources.

 Also agreed on avoiding puppetdb, for the same reasons.

 (Something to note for those of us doing masterless today: there are plans
 from Puppet to move more of the manifest compiling functionality to run
 only in the puppet master process.  So at some point, it’s likely that
 masterless setups may not be possible.)

I don't think that's true. I think making sure puppet apply works is a
priority for them, just the implementation as they move to a C++-based
agent has yet to be figured out.

Colleen


 Mike



Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-06 Thread Mike Dorman
Cool, fair enough.  Pretty glad to hear that actually!


From: Colleen Murphy
Reply-To: OpenStack Development Mailing List (not for usage questions)
Date: Wednesday, May 6, 2015 at 5:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 
credentials?


On Wed, May 6, 2015 at 4:26 PM, Mike Dorman 
mdor...@godaddy.com wrote:
We also run all masterless/puppet apply.  And we just populate a bare
bones keystone.conf on any box that does not have keystone installed, but
Puppet needs to be able to create keystone resources.

Also agreed on avoiding puppetdb, for the same reasons.

(Something to note for those of us doing masterless today: there are plans
from Puppet to move more of the manifest compiling functionality to run
only in the puppet master process.  So at some point, it’s likely that
masterless setups may not be possible.)
I don't think that's true. I think making sure puppet apply works is a priority 
for them, just the implementation as they move to a C++-based agent has yet to 
be figured out.

Colleen

Mike


Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc.

2015-05-06 Thread Hu, David J (Converged Cloud)
Nice summary, Henry.  My comments are inline below (marked david8hu; 
originally in brown).


From: Adam Young [mailto:ayo...@redhat.com]
Sent: Tuesday, May 5, 2015 8:35 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] On dynamic policy, role 
hierarchies/groups/sets etc.

On 05/05/2015 07:05 AM, Henry Nash wrote:
We've been discussing changes to these areas for a while - and although I think 
there is general agreement among the keystone cores that we need to change 
*something*, we've been struggling to get agreement on exactly how..  So to try 
and ground the discussion that will (I am sure) occur in Vancouver, here's an 
attempt to take a step back, look at what we have now, as well as where, 
perhaps, we want to get to.

This is a great summary.  Thanks Henry.

david8hu  We need at least one use case to capture, or to tie together, all 
of the specs.  I think a use case would really help the dynamic policy 
overview spec.  I can help add 1 or 2.


The core functionality all this relates to is how keystone policy allows 
checking whether a given API call to an OpenStack service should be allowed to 
take place or not. Within OpenStack this is a two-step process for an API 
caller: 1) Get yourself a token by authenticating and getting authorised for a 
particular scope (e.g. a given project), and then 2) Use that token as part of 
your API call to the service you are interested in. Assuming you do, indeed, 
have the rights to execute this API, somehow steps 1) and 2) give the policy 
engine enough info to say yes or no.
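To make that two-step flow concrete, here is a tiny self-contained sketch - hypothetical names and rules, not Keystone's actual engine or policy syntax - of how a policy engine can turn the roles carried in a token into a yes/no decision:

```python
# Minimal illustration of a role-based capability check. The POLICY
# mapping and function names are invented for this example only.

POLICY = {
    "compute:start": ["admin", "member"],
    "compute:delete": ["admin"],
}

def enforce(capability, token_roles):
    """Return True if any role carried in the token grants the capability."""
    allowed = POLICY.get(capability, [])
    return any(role in allowed for role in token_roles)

# Step 1 produced a scoped token carrying role labels; step 2 checks it.
token = {"project_id": "demo", "roles": ["member"]}
print(enforce("compute:start", token["roles"]))   # True
print(enforce("compute:delete", token["roles"]))  # False
```

The real engine reads the per-service json policy file rather than an in-memory dict, but the shape of the decision is the same: labels in the token, a label-to-capability mapping, and a boolean answer.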

So first, how does this work today and (conceptually) how should we describe 
that?  Well, first of all, strictly speaking we don't control access at the raw 
API level.  In fact, each service defines a series of capabilities (which 
usually, but not always, map one-to-one to an API call).  These capabilities 
represent the finest-grained access control we support via the policy engine. 
Now, in theory, the most transparent way we could have implemented steps 1) and 
2) above would have been to say that users should be assigned capabilities on 
projects, and then those capabilities would be placed in the token, allowing 
the policy engine to check if they match what is needed for a given capability 
to be executed. We didn't do that since a) this would probably end up being 
very laborious for the administrator (there would be lots of capabilities any 
given user would need), and b) the tokens would get very big storing all those 
capabilities. Instead, it was recognised that, usually, there are sets of these 
capabilities that nearly always go together - so instead let's allow the 
creation of such sets, and we'll assign those to users instead. So far, so 
good. What is perhaps unusual is how this was implemented. These capability 
sets are, today, called Roles... but rather than having a role definition that 
describes the capabilities represented by that role, roles are just labels - 
which can be assigned to users/projects and get placed in tokens.  The 
expansion to capabilities happens through the definition of a json policy file 
(one for each service) which must be processed by the policy engine in order 
to work out whether the roles in a token and the role-capability mapping mean 
that a given API can go ahead. This implementation leads to a number of issues 
(these have all been raised by others, just pulling them together here):

i) The role-capability mapping is rather static. Until recently it had to be 
stored in service-specific files pushed out to the service nodes out-of-band. 
Keystone does now provide some REST APIs to store and retrieve whole policy 
files, but these are a) coarse-grained and b) not really used by services 
anyway yet.

ii) As more and more clouds become multi-customer (i.e. a cloud provider 
hosting multiple companies on a single OpenStack installation), cloud providers 
will want to allow those customers to administer their bit of the cloud. 
Keystone uses the Domains concept to allow a cloud provider to create a 
namespace for a customer to create their own projects, users and groups, and 
there is a version of the keystone policy file that allows a cloud provider to 
effectively delegate management of these items to an administrator of that 
customer (sometimes called a domain administrator).  However, Roles are not 
part of that namespace - they exist in a global namespace (within a keystone 
installation). Diverse customers may have different interpretations of what a 
VM admin or a net admin should be allowed to do for their bit of the cloud 
- but right now that differentiation is hard to provide. We have no support 
for roles or policy that are domain specific.

david8hu  I can see per domain policy becoming a hot topic for the reseller 
scenario.

iii) Although as stated in ii) above, you can write a policy file that 
differentiates between various levels of admin, or fine-tunes access to certain 

Re: [openstack-dev] [RefStackl] - http://refstack.org/ - does not resolve

2015-05-06 Thread Jeremy Stanley
On 2015-05-06 19:53:44 + (+), Rochelle Grober wrote:
 The Refstack team is working with Infra to get refstack.org up in
 a vm under Infra's purview. Right now, the demo is on refstack.net.
 refstack.net will go away once refstack.org is up and managed.

Yep, I recall the discussion. I simply didn't know if the Refstack
developers needed that domain pointed to some particular demo site
until ready to go live with the official infra-tized server.
Sounds like it can just wait for the moment. Thanks!
-- 
Jeremy Stanley



[openstack-dev] [neutron][lbaas][octavia] No Octavia meeting today

2015-05-06 Thread Eichberger, German
All,

In order to work on the demo for Vancouver we will be skipping today's (5/6/15) 
meeting. We will have another meeting on 5/13 to finalize for the summit --

If you have questions you can find us in the channel — and again please keep up 
the good work with reviews!

Thanks,
German




Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-06 Thread Georgy Okrokvertskhov
Hi,

From the Murano experience I can tell you that ssh to a VM will not work in
the general case. In order to have ssh access you will have to assign floating
IPs so that the Mistral service will be able to connect to the VM.
That is exactly the reason why Murano uses an agent and MQ mechanism, where the
client on the VM initiates the connection. I believe Sahara had the same issue
when they used direct ssh connections to VMs.

Thanks
Gosha


On Wed, May 6, 2015 at 9:00 AM, Pospisil, Radek radek.pospi...@hp.com
wrote:

 Hello,

 I think that the generic question is - can OpenStack services also be made
 accessible on Neutron networks, so a VM (created by Nova) can access them? We
 (Filip and I) were discussing this today and did not make a final decision.
 Another example is the Murano agent running on VMs - it connects to RabbitMQ,
 which is also accessed by the Murano engine.

   Regards,

 Radek

 -Original Message-
 From: Blaha, Filip
 Sent: Wednesday, May 06, 2015 5:43 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Murano] [Mistral] SSH workflow action

 Hello

 We are considering implementing actions on services of a murano
 environment via mistral workflows, and whether the mistral
 std.ssh action could be used to run some command on an instance. An example of
 such an action in murano could be a restart action on a Mysql DB service:
 the Mistral workflow would ssh to the instance running Mysql and run service
 mysql restart. From my point of view, trying to use SSH to access instances
 from a mistral workflow is not a good idea, but I would like to confirm it.

 The biggest problem I see there is openstack networking. The Mistral service
 running on some openstack node would not be able to access an instance via its
 fixed IP (e.g. 10.0.0.5) over SSH. The instance could be accessed via ssh from
 the namespace of its gateway router, e.g. ip netns exec qrouter-... ssh
 cirros@10.0.0.5, but I think it is not good to rely on an implementation
 detail of neutron. In a multinode openstack deployment it could be even more
 complicated.

 In other words I am asking whether we can use std.ssh mistral action to
 access instances via ssh on theirs fixed IPs? I think no but I would like
 to confirm it.

 Thanks
 Filip





-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [neutron] IPv4 transition/interoperation with IPv6

2015-05-06 Thread Carl Baldwin
On Wed, May 6, 2015 at 12:46 AM, Mike Spreitzer mspre...@us.ibm.com wrote:
 While I am a Neutron operator, I am also a customer of a lower layer network
 provider.  That network provider will happily give me a few /64.  How do I
 serve IPv6 subnets to lots of my tenants?  In the bad old v4 days this would
 be easy: a tenant puts all his stuff on his private networks and NATs (e.g.,
 floating IP) his edge servers onto a public network --- no need to align
 tenant private subnets with public subnets.  But with no NAT for v6, there
 is no public/private distinction --- I can only give out the public v6
 subnets that I am given.  Yes, NAT is bad.  But not being able to get your
 job done is worse.

Mike, in this paragraph, you're hitting on something that has been on
my mind for a while.  We plan to cover this problem in detail in this
talk [1] and we're defining some work for Liberty to better address it
[2][3].  You hit the nail on the head, there is no distinguishing
private and public IP addresses in Neutron currently with IPv6.

Kilo's new subnet pool feature is a start.  It will allow you to
create a shared subnet pool including the /64s from your service
provider.  Tenants can then create a subnet getting an allocation from
it automatically.  However, given the current state of things, there
will be some manual work on the gateway router to route them to the
tenant's router.

Prefix delegation -- which looks on track for Liberty -- is another
option which could fill this void.  It will allow a router to get a
prefix delegation from an external PD system which will be useable on
a tenant subnet.  Presumably the external system will take care of
routing the subnet to the appropriate tenant router.

Carl

[1] http://sched.co/2qdm
[2] https://review.openstack.org/#/c/180267/
[3] https://review.openstack.org/#/c/125401/



Re: [openstack-dev] [Neutron] Success of the IPv6 Subteam - Proposal to disband

2015-05-06 Thread Brian Haley

On 05/04/2015 08:37 PM, Sean M. Collins wrote:

It is a bittersweet moment - I am proposing that due to the amazing
success that we have had as a subteam, that because we have
accomplished so much, that it makes sense for our team to
disband and re-integrate with other subteams (the L3 subteam
comes to mind) or have items in the on-demand agenda of the main
meeting.

Unless there is any pressing business, I believe that we will not need a
recurring meeting, and tomorrow's meeting is cancelled.

As always, I am in #openstack-neutron and happy to help.


Sean,

Thanks for leading the team, IPv6 is in a much better place now in Kilo!  I'll 
be the first one to buy you a beer (beers?) in Vancouver.


As long as we adopt the Linux kernel mantra of "You can't do the IPv4 work now 
and punt the IPv6 work for later", I'm fine with pushing future IPv6 work into 
the respective L3/L2/etc sub-teams.


-Brian



Re: [openstack-dev] [neutron] Are routed TAP interfaces (and DHCP for them) in scope for Neutron?

2015-05-06 Thread Carl Baldwin
This brings up something I'd like to discuss.  We have a config option
called allow_overlapping_ips which actually defaults to False.  It
has been suggested [1] that this should be removed from Neutron and
I've just started playing around with ripping it out [2] to see what
the consequences are.

A purely L3 routed network, like Calico, is a case where it is more
complex to implement allowing overlapping ip addresses.

If we deprecate and eventually remove allow_overlapping_ips, will this
be a problem for Calico?  Is the shared address space in Calico
confined to a single flat network or do you already support tenant
private networks with this technology?  If I recall from previous
discussions, I think that it only supports Neutron's flat network
model in the current form, so I don't think it should be a problem.
Am I correct?  Please confirm.

Carl

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/036336.html
[2] https://review.openstack.org/#/c/179953/

On Fri, May 1, 2015 at 8:22 AM, Neil Jerram neil.jer...@metaswitch.com wrote:
 Thanks for your reply, Kevin, and sorry for the delay in following up.

 On 21/04/15 09:40, Kevin Benton wrote:

 Is it compatible with overlapping IPs? i.e. Will it give two different
 VMs the same IP address if the reservations are setup that way?


 No, not as I've described it below, and as we've implemented Calico so far.
 Calico's first target is a shared address space without overlapping IPs, so
 that we can handle everything within the default namespace.

 But we do also anticipate a future Calico release to support private address
 spaces with overlapping IPs, while still routing all VM data rather than
 bridging.  That will need the private address TAP interfaces to go into a
 separate namespace (per address space), and have their data routed there;
 and we'd run a Dnsmasq in that namespace to provide that space's IP
 addresses.

 Within each namespace - whether the default one or private ones - we'd still
 use the other changes I've described below for how the DHCP agent creates
 the ns-XXX interface and launches Dnsmasq.

 Does that make sense?  Do you think that this kind of approach could be in
 scope under the Neutron umbrella, as an alternative to bridging the TAP
 interfaces?

 Thanks,
 Neil


 On 16/04/15 15:12, Neil Jerram wrote:

 I have a Neutron DHCP agent patch whose purpose is to launch dnsmasq
 with options such that it works (= provides DHCP service) for TAP
 interfaces that are _not_ bridged to the DHCP interface (ns-XXX).  For
 the sake of being concrete, this involves:

 - creating the ns-XXX interface as a dummy, instead of as a veth pair

 - launching dnsmasq with --bind-dynamic --listen=ns-XXX --listen=tap*
 --bridge-interface=ns-XXX,tap*

 - not running in a separate namespace

 - running the DHCP agent on every compute host, instead of only on the
 network node

 - using the relevant subnet's gateway IP on the ns-XXX interface (on
 every host), instead of allocating a different IP for each ns-XXX
 interface.

 I proposed a spec for this in the Kilo cycle [1], but it didn't get
 enough traction, and I'm now wondering what to do with this
 work/function.  Specifically, whether to look again at integrating it
 into Neutron during the Liberty cycle, or whether to maintain an
 independent DHCP agent for my project outside the upstream Neutron
 tree.
  I would very much appreciate any comments or advice on this.

 For answering that last question, I suspect the biggest factor is
 whether routed TAP interfaces - i.e. forms of networking
 implementation
 that rely on routing data between VMs instead of bridging it - is in
 scope for Neutron, at all.  If it is, I understand that there could be
 a
 lot more detail to work on, such as how it meshes with other Neutron
 features such as DVR and the IPAM work, and that it might end up being
 quite different from the blueprint linked below.  But it would be good
 to know whether this would ultimately be in scope and of interest for
 Neutron at all.

 Please do let me now what you think.

 Many thanks,
   Neil

 [1] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs




 

Re: [Openstack] Swift - Adding S3 Glacier like interface in Swift & Swift3 Object Storage

2015-05-06 Thread Tim Bell
There was a recent post on a SWIFT tape implementation at 
https://github.com/BDT-GER/SWIFT-TLC

Tim

 -Original Message-
 From: Samuel Merritt [mailto:s...@swiftstack.com]
 Sent: 06 May 2015 19:03
 To: openstack@lists.openstack.org
 Subject: Re: [Openstack] Swift - Adding S3 Glacier like interface in Swift & 
 Swift3
 Object Storage
 
 On 5/6/15 2:34 AM, Bala wrote:
  I am new to this list so please excuse me if I posted it in wrong list.
 
  We have a tape library which we would like to integrate with OpenStack
  Swift & Swift3 object storage service to provide an S3 interface.
 
  The current file system we have for the library has been integrated
  with Swift storage service and manages changer robot  tapes.
 
  This works well for writing.
 
  However, for reading, loading a tape takes longer when GET requests are
  received, in some cases over 5 minutes, and this causes timeout errors.
  Most of the data stored on these tapes is archival data. This gets
  worse when multiple GET requests are received (multi-user) for objects
  which are stored on different tapes.
 
  Due to the longer read times, we are looking to provide an Amazon S3
  Glacier-like interface through Swift & Swift3 so that clients can
  issue a POST OBJECT RESTORE request and wait for the data to be moved
  to a temporary store/cache.
 
  I have come across a similar request
 
  http://openstack-dev.openstack.narkive.com/kI72vk9l/ltfs-integration-w
  ith-openstack-swift-for-scenario-like-data-archival-as-a-service
 
  and understand the suggestions.
 
  We would like to provide an S3 Glacier-like interface rather than use
  Swift storage policies, if we can.
 
  I would be grateful if you could kindly advise:
 
  1. How hard is it to change the Swift & Swift3 code base to provide an
  S3 Glacier-like interface?
 
 It's not easy, that's for sure. Swift's API is all synchronous: issue a GET, 
 receive
 the object; issue a PUT, create an object; et cetera.
 Glacier-style asynchronous retrieval is something completely new and 
 different.
 
 Some food for thought: where will you store pending retrieval requests?
 How will you ensure that retrieval requests survive disk and machine failures 
 the
 way everything else in Swift does?
 
 I'm not asking you to answer here (though you can if you want to, of course);
 I'm just trying to nudge your thoughts in the right direction.
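To make the durability question above concrete, here is a toy sketch of one possible answer: store the pending restore requests themselves as objects (in a hidden container) so they inherit Swift's replication. The container name, record format, and helper names are all hypothetical, and the in-memory dict merely stands in for a real cluster.

```python
import json
import time
import uuid

# Toy in-memory stand-in for a Swift cluster; in a real deployment the
# pending-restore queue would itself be stored as objects in a hidden
# container so it inherits Swift's replication and durability.
object_store = {}  # (container, name) -> bytes

PENDING = ".restore-pending"  # hypothetical hidden container name

def request_restore(account, container, obj):
    """Record a Glacier-style 'POST restore' as a durable object."""
    req_id = str(uuid.uuid4())
    record = {
        "account": account, "container": container, "object": obj,
        "requested_at": time.time(), "state": "pending",
    }
    object_store[(PENDING, req_id)] = json.dumps(record).encode()
    return req_id

def worker_poll():
    """A tape-side worker lists pending requests and stages data to cache."""
    staged = []
    for (cont, name), blob in list(object_store.items()):
        if cont != PENDING:
            continue
        record = json.loads(blob)
        if record["state"] == "pending":
            record["state"] = "staged"  # pretend the tape robot loaded it
            object_store[(cont, name)] = json.dumps(record).encode()
            staged.append(record["object"])
    return staged

request_restore("AUTH_test", "archive", "backup.tar")
print(worker_poll())  # -> ['backup.tar']
```

Because each request is an ordinary object, disk or machine failures are handled by the same replication machinery as everything else in the cluster.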
 
  2. Can this be done through Swift storage policies alone.
 
 No. A storage policy determines where and how its objects are stored, which
 affects things like access times and storage cost. The API for accessing those
 objects does not change based on the storage policy.
 
  3. Do we have to modify Swift Auditor service to do a tape based
  checking rather than object based.
 
 You mean audit in order? Probably a good idea, otherwise your tapes will spend
 all day seeking.
 
  4. Would Swift replication service cause frequent Tape change request.
 
 I'd guess that it would, but nobody knows for sure. As far as I know, nobody 
 has
 jammed tape-library support into Swift before. You're the first. Report back 
 and
 let everyone know how it goes. :)
 
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc.

2015-05-06 Thread Tim Hinrichs
Hi all,

Inline.

From: Adam Young ayo...@redhat.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, May 5, 2015 at 8:34 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] On dynamic policy, role
hierarchies/groups/sets etc.

On 05/05/2015 07:05 AM, Henry Nash wrote:
We’ve been discussing changes to these areas for a while - and although I think 
there is general agreement among the keystone cores that we need to change 
*something*, we’ve been struggling to get agreement on exactly how..  So to try 
and ground the discussion that will (I am sure) occur in Vancouver, here’s an 
attempt to take a step back, look at what we have now, as well as where, 
perhaps, we want to get to.

This is a great summary.  Thanks Henry.

Super helpful for sure!



The core functionality all this relates to is how keystone policy allows
checking whether a given API call to an OpenStack service should be allowed to
take place or not. Within OpenStack this is a two-step
process for an API caller….1) Get yourself a token by authentication and 
getting authorised for a particular scope (e.g. a given project), and then 2) 
Use that token as part of your API call to the service you are interested in. 
Assuming you do, indeed, have the rights to execute this API, somehow steps 1) 
and 2) give the policy engine enough info to say yes or no.

So first, how does this work today and (conceptually) how should we describe 
that?  Well, first of all, strictly speaking we don’t control access at the raw
API level.  Instead, each service defines a series of “capabilities” (which
usually, but not always, map one-to-one onto an API call).  These capabilities
represent the finest grained access control we support via the policy engine. 
Now, in theory, the most transparent way we could have implemented steps 1) and 
2) above would have been to say that users should be assigned capabilities to 
projects….and then those capabilities would be placed in the token….allowing 
the policy engine to check if they match what is needed for a given capability 
to be executed. We didn’t do that since, a) this would probably end up being 
very laborious for the administrator (there would be lots of capabilities any 
given user would need), and b) the tokens would get very big storing all those 
capabilities. Instead, it was recognised that, usually, there are sets of these 
capabilities that nearly always go together - so instead let’s allow the 
creation of such sets….and we’ll assign those to users instead. So far, so 
good. What is perhaps unusual is how this was implemented. These capability 
sets are, today, called Roles…but rather than having a role definition that 
describes the capabilities represented by that role….instead roles are just 
labels - which can be assigned to users/projects and get placed in tokens.
The expansion to capabilities happens through the definition of a json policy
file (one for each service) which must be processed by the policy engine in
order to work out whether the roles in a token and the role-capability
mapping mean that a given API call can go ahead. This implementation leads to a
number of issues (these have all been raised by others, just pulling them
together here):
As I understand how this works conceptually, a policy makes go/no-go decisions 
based on two kinds of properties: (1) properties about the user making the API 
call  (which are encoded in the token) and (2) the API call name and arguments. 
  Is that right?


i) The role-capability mapping is rather static. Until recently it had to be 
stored in service-specific files pushed out to the service nodes out-of-band. 
Keystone does now provide some REST APIs to store and retrieve whole policy 
files, but these are a) coarse-grained and b) not really used by services 
anyway yet.
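As a hedged illustration of the role-to-capability expansion described above, a fragment of such a per-service policy file might look like the following (the rule and capability names are hypothetical, loosely modelled on nova's policy.json; `%(user_id)s` is the oslo.policy substitution syntax):

```json
{
    "admin_required": "role:admin",
    "owner": "user_id:%(user_id)s",
    "compute:start": "role:member or rule:admin_required",
    "compute:delete": "rule:admin_required or rule:owner"
}
```

The point of the critique is that the roles (`admin`, `member`) are opaque labels in a global namespace; only this file, pushed to each service node, gives them meaning.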

ii) As more and more clouds become multi-customer (i.e. a cloud provider 
hosting multiple companies on a single OpenStack installation), cloud providers 
will want to allow those customers to administer “their bit of the cloud”. 
Keystone uses the Domains concept to allow a cloud provider to create a 
namespace for a customer to create their own projects, users and groups….and 
there is a version of the keystone policy file that allows a cloud provider to 
effectively delegate management of these items to an administrator of that 
customer (sometimes called a domain administrator).  However, Roles are not 
part of that namespace - they exist in a global namespace (within a keystone 
installation). Diverse customers may have different interpretations of what a 
“VM admin” or a “net admin” should be allowed to do for their bit of the cloud 
- but  right now that 

Re: [Openstack] How to ping instance IP ?

2015-05-06 Thread Kevin Benton
Yes, it sounds like you just have nova-network right now. Did you use
devstack? If so, follow this guide.
https://wiki.openstack.org/wiki/NeutronDevstack

On Wed, May 6, 2015 at 11:02 AM, Wilson Kwok leiw...@gmail.com wrote:

 repeat this message to all,

  I can't see the network and router options in the System panel. I think I
  need to install Neutron, right? Do you have any guide to help?

 Thanks

 2015-05-07 1:53 GMT+08:00 Wilson Kwok leiw...@gmail.com:

 Hello,

  I can't see the network and router options in the System panel. I think I
  need to install Neutron, right? Do you have any guide to help?

 Thanks

 2015-05-06 23:15 GMT+08:00 Wilson Kwok leiw...@gmail.com:

 Hi Jonathan Abdiel Gonzalez Valdebenito,

 Sorry, I'm a newbie of Openstack, do you mean create virtual router
 between 172.28.0.0 and 10.0.47.0 ? please see below simply diagram.

 TP-link router = public IP address 119.101.54.x and lan IP address
 172.28.0.1

 Home computer = 172.28.0.130

  Ubuntu eth0 (no IP address) and eth1 172.28.0.105 (for management)

 Virtual router = home interface 172.28.0.254 and internal interface
 10.0.47.254

 br100 IP address = 10.0.47.1 map to eth0

 instance01 IP address = 10.0.47.2
 Thanks

 2015-05-06 21:56 GMT+08:00 Jonathan Abdiel Gonzalez Valdebenito 
 jonathan.abd...@gmail.com:

 Hi Wilson,

  If that's so, then you have a problem: you didn't even configure the
  network for the instances, which means you may have created a network but
  not a router. If you don't create a router, you can't access the
  instances' network.

 On Wed, May 6, 2015 at 6:36 AM Wilson Kwok leiw...@gmail.com wrote:

 Hi Jonathan Abdiel Gonzalez Valdebenito,

  After I type ip netns, nothing is displayed.

 Thanks



 2015-05-05 22:28 GMT+08:00 Jonathan Abdiel Gonzalez Valdebenito 
 jonathan.abd...@gmail.com:

 Hi Wilson,

 To ping the instance I suggest you to use these commands:

 ip netns -- to list your namespaces and pick up the one with the
 router name
 ip netns exec router-(hash) ping instance ip  -- with this you can
 ping the instance.
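For readers following along, a hypothetical transcript of those two steps (the router UUID and instance IP below are placeholders; namespaces only exist when Neutron's L3/DHCP agents are running):

```
# list the namespaces; router namespaces are named qrouter-<router-uuid>
$ ip netns
qrouter-7a3c9e6f-0000-0000-0000-000000000000

# ping the instance from inside the router's namespace
$ sudo ip netns exec qrouter-7a3c9e6f-0000-0000-0000-000000000000 ping -c 3 10.0.0.5
```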

  Hope it was useful

 On Tue, May 5, 2015 at 10:57 AM Wilson Kwok leiw...@gmail.com
 wrote:

 Hello,

 Here is my home lap network settings:

 Home computer 172.28.0.130

 Ubuntu eth0 (no IP address)
eth1 172.28.0.105 (for management)

 br100 10.0.47.1 map to eth0

 instance01 10.0.47.2

  My question is that the home computer can't ping instance01 10.0.47.2, and
  neither can the Ubuntu host itself.
  I already allow ICMP ALL in the security group; can anyone help?

 Thanks

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack






 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




-- 
Kevin Benton
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Link is down in openvswitch port

2015-05-06 Thread Xeniya L
Hi all!
I have been trying to resolve this problem for 2 weeks :(

I use up-to-date CentOS 7 and Juno.

I have a lab infrastructure:
1) compute node // probably works fine
2) controller node // probably works fine
3) nethost node // here there are mysterious and crazy problems with openvswitch

If I manually bring up the interfaces in the first and second VM, vm1 can
ping vm2 and vm2 can ping vm1. But the network node is behaving strangely,
and I don't understand the situation.

The br-int bridge has down and no-link interfaces:

  [root@nethost ~]# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:6282de3ad749
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC
SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST
ENQUEUE
 3(int-br-vlan): addr:5a:8a:ac:4d:6a:41
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 4(int-br-ex): addr:36:a4:18:b8:59:7d
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 20204(tap270c51fa-76): addr:62:82:de:3a:d7:49
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 20205(qr-1a0e7fd7-63): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 20206(qg-8d652de9-58): addr:62:82:de:3a:d7:49
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:62:82:de:3a:d7:49
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

On the compute node br-int works fine (without down ports), and on the
network node the other bridges (br-vlan and br-ex) also work fine.

If I delete openvswitch (and rm -rf /etc/openvswitch) and
neutron-openvswitch-agent, then install them again, openvswitch creates
the down interfaces in the namespaces again:

But I can ping IP in router interface:

[root@nethost ~]# ip netns exec
qrouter-7573b61c-1cb2-43bc-8296-421e319d2bd0 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-8d652de9-58: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.2.57.68  netmask 255.255.255.0  broadcast 10.2.57.255
inet6 fe80::f816:3eff:fe4e:b045  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:4e:b0:45  txqueuelen 0  (Ethernet)
RX packets 1027  bytes 67477 (65.8 KiB)
RX errors 0  dropped 11  overruns 0  frame 0
TX packets 45  bytes 2334 (2.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-1a0e7fd7-63: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
inet6 fe80::f816:3eff:feac:4843  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:ac:48:43  txqueuelen 0  (Ethernet)
RX packets 3  bytes 230 (230.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 10  bytes 864 (864.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@nethost ~]#

[root@nethost ~]# ip netns exec
qdhcp-f8bdf250-d08f-467a-a073-003ea3165c5d ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap270c51fa-76: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.0.0.101  netmask 255.255.255.0  broadcast 10.0.0.255
inet6 fe80::f816:3eff:fee8:3cd7  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:e8:3c:d7  txqueuelen 0  (Ethernet)
RX packets 9  bytes 782 (782.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 3  bytes 258 (258.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@nethost ~]#
[root@nethost ~]# ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.624 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms
[root@nethost ~]# ping -c 1 10.0.0.101
PING 10.0.0.101 (10.0.0.101) 56(84) bytes of data.
From 10.0.0.2 icmp_seq=1 Destination Host Unreachable

--- 10.0.0.101 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[root@nethost ~]#

I tried reinstalling the nethost node twice, but the problem came back.


[Openstack] How to use VirtIO SCSI by default?

2015-05-06 Thread Martinx - ジェームズ
Hey guys!

 How can I use VirtIO SCSI by default?

 I'm seeing that it might be possible to update image properties at Glance,
like this: hw_disk_bus_model=virtio-scsi, but I would like to make it the
default.

 Is there a nova.conf / libvirt option for that?
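For what it's worth, a per-image approach that I believe works with the libvirt driver uses the hw_disk_bus and hw_scsi_model image properties. Sketch below; the image UUID is a placeholder, and I'm not aware of a nova.conf option that makes this a global default:

```
glance image-update \
  --property hw_disk_bus=scsi \
  --property hw_scsi_model=virtio-scsi \
  <image-uuid>
```

Instances booted from the updated image should then get a virtio-scsi controller instead of the default virtio-blk bus.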

Thanks!
Thiago
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Routing from instances to floating ips in nova-network -- possible?

2015-05-06 Thread Andrew Bogott
Since time immemorial, I've accepted as a fact of life that routing 
from a nova instance to another instance via floating ip is impossible. 
 We've coped with this via a hack in dnsmasq, setting an alias to 
rewrite public IPs to the corresponding internal IP.
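For readers hitting the same wall, the dnsmasq hack described above can be done with dnsmasq's `alias` option ("DNS doctoring"), which rewrites IPv4 addresses in DNS answers. The addresses below are placeholders, and this is only a sketch of the kind of rule involved, not necessarily the exact one we used:

```
# dnsmasq.conf fragment: rewrite the public floating /24 back to the
# corresponding fixed range in answers handed to instances.
alias=203.0.113.0,10.0.0.0,255.255.255.0
```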


Right now I'm trying to move our instances off of dnsmasq and onto 
a more robust designate/pdns setup.  Unfortunately, pdns does not 
support split-horizon nor the aliasing scheme that we used in dnsmasq. 
This has me back to square one, wishing that we could just make the 
routing work in the first place.  A recent IRC conversation leads me to 
believe that this issue may actually be fixed in modern nova-network 
versions (with the fix disabled by default), but a day of googling 
hasn't turned up much confirmation.


Is there a fix for this I'm missing?  If not, what kind of 
solutions are people using to work around it?


I'm running Icehouse with flatdhcp nova-network and a single 
network node.  Details about the problematic routing can be found in 
this public phabricator task: https://phabricator.wikimedia.org/T96924


Thanks!

-Andrew

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to use VirtIO SCSI by default?

2015-05-06 Thread Martinx - ジェームズ
BTW, I'm running Juno, planning to upgrade to Kilo ASAP. Ubuntu 14.04.2.

On Wed, May 6, 2015 at 6:06 PM Martinx - ジェームズ thiagocmarti...@gmail.com
wrote:

 Hey guys!

  How can I use VirtIO SCSI by default?

  I'm seeing that it might be possible to update image properties at
 Glance, like this: hw_disk_bus_model=virtio-scsi but, I would like to
 make it the default.

  Is there a nova.conf / libvirt option for that?

 Thanks!
 Thiago

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota

2015-05-06 Thread Ryan Brown
On 05/06/2015 02:07 PM, Jay Pipes wrote:
 Adding [api] topic. API WG members, please do comment.
 
 On 05/06/2015 08:01 AM, Sean Dague wrote:
 On 05/06/2015 07:11 AM, Chris Dent wrote:
 On Wed, 6 May 2015, Sean Dague wrote:

 All other client errors, just be a 400. And use the emerging error
 reporting json to actually tell the client what's going on.

 Please do not do this. Please use the 4xx codes as best as you
 possibly can. Yes, they don't always match, but there are several of
 them for reasons™ and it is usually possible to find one that sort
 of fits.

I agree with Jay here: there are only 100 error codes in the 400
namespace, and (way) more than 100 possible errors. The general 400 is
perfectly good as a catch-all where the user can be expected to read the
JSON error response for more information, and the other error codes
should be used to make it easier for folks to distinguish specific
conditions.

Let's take the 403 case. If you are denied because of your credentials,
there's no error handling that will let you fix that.

 Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the
 most part people are talking to OpenStack through official clients
 but a) what happens when they aren't, b) is that the kind of world
 we want?

 I certainly don't. I want a world where the HTTP APIs that OpenStack
 and other services present actually use HTTP and allow a diversity
 of clients (machine and human).

Wanting other clients to be able to plug right in is why we try to be
RESTful and make error codes that are usable by any client (see the
error codes and messages specs). Using "Conflict" and "Forbidden" codes
in addition to good error messages will help, if they denote very
specific conditions that the user can act on.

 Absolutely. And the problem is there is not enough namespace in the HTTP
 error codes to accurately reflect the error conditions we hit. So the
 current model means the following:

 If you get any error code, it means multiple failure conditions. Throw
 it away, grep the return string to decide if you can recover.

 My proposal is to be *extremely* specific for the use of anything
 besides 400, so there is only 1 situation that causes that to arise. So
 403 means a thing, only one thing, ever. Not 2 kinds of things that you
 need to then figure out what you need to do.

Agreed

 If you get a 400, well, that's multiple kinds of errors, and you need to
 then go conditional.

 This should provide a better experience for all clients, human and
 machine.
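A minimal client-side sketch of what this "one meaning per code" proposal buys you. The JSON body shape with a `code` field is an assumption for illustration, not an existing OpenStack format:

```python
import json

def classify_error(status, body):
    """Triage an API error under the 'one meaning per code' proposal.

    Specific codes map directly to exactly one condition, so no body
    inspection is needed; only the generic 400 forces the client to
    parse the structured JSON error body for details.
    """
    if status == 401:
        return "reauthenticate"
    if status == 403:
        return "forbidden"      # exactly one meaning: no permission
    if status == 404:
        return "not-found"
    if status == 400:
        detail = json.loads(body)
        if detail.get("code") == "OverQuota":
            return "retry-after-freeing-quota"
        return "bad-request"
    return "unknown"

print(classify_error(403, ""))                       # forbidden
print(classify_error(400, '{"code": "OverQuota"}'))  # retry-after-freeing-quota
```

Note that no string matching on human-readable messages is involved; the machine-readable `code` field carries the sub-condition.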
 
 I agree with Sean on this one.
 
 Using response codes effectively makes it easier to write client code
 that is either simple or is able to use generic libraries effectively.

 Let's be honest: OpenStack doesn't have a great record of using HTTP
 effectively or correctly. Let's not make it worse.

 In the case of quota, 403 is fairly reasonable because you are in
 fact Forbidden from doing the thing you want to do. Yes, with the
 passage of time you may very well not be forbidden so the semantics
 are not strictly matching but it is more immediately expressive yet
 not quite as troubling as 409 (which has a more specific meaning).

 Except it's not, because you are saying to use 403 for 2 issues (Don't
 have permissions and Out of quota).

 Turns out, we have APIs for adjusting quotas, which your user might have
 access to. So part of 403 space is something you might be able to code
 yourself around, and part isn't. Which means you should always ignore it
 and write custom logic client side.

 Using something beyond 400 is *not* more expressive if it has more than
 one possible meaning. Then it's just muddy. My point is that all errors
 besides 400 should have *exactly* one cause, so they are specific.
 
 Yes, agreed.
 
 I think Sean makes an excellent point that if you have 1 condition that
 results in a 403 Forbidden, it actually does not make things more
 expressive. It actually just means both humans and clients need to now
 delve deeper into the error context to determine if this is something
 they actually don't have permission to do, or whether they've exceeded
 their quota but otherwise have permission to do some action.
 
 Best,
 -jay
 
 p.s. And, yes, Chris, I definitely do see your side of the coin on this.
 It's nuanced, and a grey area...
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota

2015-05-06 Thread David Kranz

On 05/06/2015 02:07 PM, Jay Pipes wrote:

Adding [api] topic. API WG members, please do comment.

On 05/06/2015 08:01 AM, Sean Dague wrote:

On 05/06/2015 07:11 AM, Chris Dent wrote:

On Wed, 6 May 2015, Sean Dague wrote:


All other client errors, just be a 400. And use the emerging error
reporting json to actually tell the client what's going on.


Please do not do this. Please use the 4xx codes as best as you
possibly can. Yes, they don't always match, but there are several of
them for reasons™ and it is usually possible to find one that sort
of fits.

Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the
most part people are talking to OpenStack through official clients
but a) what happens when they aren't, b) is that the kind of world
we want?

I certainly don't. I want a world where the HTTP APIs that OpenStack
and other services present actually use HTTP and allow a diversity
of clients (machine and human).


Absolutely. And the problem is there is not enough namespace in the HTTP
error codes to accurately reflect the error conditions we hit. So the
current model means the following:

If you get any error code, it means multiple failure conditions. Throw
it away, grep the return string to decide if you can recover.

My proposal is to be *extremely* specific for the use of anything
besides 400, so there is only 1 situation that causes that to arise. So
403 means a thing, only one thing, ever. Not 2 kinds of things that you
need to then figure out what you need to do.

If you get a 400, well, that's multiple kinds of errors, and you need to
then go conditional.

This should provide a better experience for all clients, human and 
machine.


I agree with Sean on this one.


Using response codes effectively makes it easier to write client code
that is either simple or is able to use generic libraries effectively.

Let's be honest: OpenStack doesn't have a great record of using HTTP
effectively or correctly. Let's not make it worse.

In the case of quota, 403 is fairly reasonable because you are in
fact Forbidden from doing the thing you want to do. Yes, with the
passage of time you may very well not be forbidden so the semantics
are not strictly matching but it is more immediately expressive yet
not quite as troubling as 409 (which has a more specific meaning).


Except it's not, because you are saying to use 403 for 2 issues (Don't
have permissions and Out of quota).

Turns out, we have APIs for adjusting quotas, which your user might have
access to. So part of 403 space is something you might be able to code
yourself around, and part isn't. Which means you should always ignore it
and write custom logic client side.

Using something beyond 400 is *not* more expressive if it has more than
one possible meaning. Then it's just muddy. My point is that all errors
besides 400 should have *exactly* one cause, so they are specific.


Yes, agreed.

I think Sean makes an excellent point that if you have 1 condition 
that results in a 403 Forbidden, it actually does not make things more 
expressive. It actually just means both humans and clients need to now 
delve deeper into the error context to determine if this is something 
they actually don't have permission to do, or whether they've exceeded 
their quota but otherwise have permission to do some action.


Best,
-jay

+1
The basic problem is we are trying to fit a square (generic api) peg in 
a round (HTTP request/response) hole.
But if we do say we are recognizing sub-error-codes, it might be good 
to actually give them numbers somewhere in the response (maybe an error 
code header) rather than relying on string matching to determine the 
real error. String matching is fragile and has icky i18n implications.
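One possible shape for such a numbered, machine-readable error response, sketched here with hypothetical header and field names (loosely in the spirit of the API WG errors guideline):

```
HTTP/1.1 400 Bad Request
X-Compute-Error-Code: 40007
Content-Type: application/json

{
  "errors": [{
    "code": "compute.quota.instances",
    "title": "Quota exceeded",
    "detail": "Instance quota of 10 has been reached."
  }]
}
```

A client keys off the header or the `code` field, never the localizable `title`/`detail` strings, which sidesteps the i18n problem.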


 -David


p.s. And, yes, Chris, I definitely do see your side of the coin on 
this. It's nuanced, and a grey area...




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] [Third-party-announce] Netapp-CI account disabled

2015-05-06 Thread Meade, Alex
Hello,

I believe I have tracked down the issue. When setting up our Fibre Channel
CI, we created a script to listen to the gerrit event stream that was
using the 'netapp-ci' account. This script had a code path where the
paramiko ssh connection would not be closed and this has been resolved.
Evidence that this was the culprit is that the script had been running
without interruption since April 29th and that is consistent with the
Gerrit connections from this account we see here:
http://paste.openstack.org/show/215173/

I apologize for the inconvenience and appreciate all the help in
#openstack-infra. I think it is safe for the account to be re-enabled.
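The leak-free pattern is worth sketching for other CI operators. The class below is a dummy stand-in (not paramiko's API) for an SSH client watching `gerrit stream-events`; the point is that wrapping the connection in `contextlib.closing` guarantees `close()` on every exit path, including the error path that originally leaked connections:

```python
import contextlib

class EventStreamClient:
    """Dummy stand-in for an SSH client running 'gerrit stream-events'."""
    def __init__(self):
        self.closed = False
    def read_event(self):
        # Simulate the rarely-taken failure path that leaked connections.
        raise ConnectionError("gerrit went away")
    def close(self):
        self.closed = True

def watch_events(client):
    # contextlib.closing calls client.close() no matter how the body
    # exits, so the server-side stream-events thread is always released.
    with contextlib.closing(client):
        try:
            while True:
                client.read_event()
        except ConnectionError:
            return "disconnected"

c = EventStreamClient()
assert watch_events(c) == "disconnected"
assert c.closed  # connection released even on the error path
```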

Thanks,

-Alex


On 5/5/15, 8:50 PM, James E. Blair cor...@inaugust.com wrote:

Hi,

We've been tracking a bug in Gerrit recently where all of the threads
tasked with servicing the stream-events command eventually get stuck.
This causes all of the CI systems, including OpenStack's, to stop
responding to events until the server is manually restarted.

We recently found that had happened with connections from the netapp-ci
account.  I believe that Gerrit should be more resilient to these kinds
of errors, however, due to the severe impact to the project when this
happens, I have disabled the netapp-ci account until we find a solution
to the problem.

Note that the Gerrit upgrade scheduled for Saturday May 9 will bring a
new SSH server with it, and may have an impact on this issue.

If the netapp-ci operators have a moment to chat with us in
#openstack-infra on Freenode that would probably be the best way to work
on a plan to debug the problem further.

Thanks, and sorry for the inconvenience,

Jim

___
Third-party-announce mailing list
third-party-annou...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/third-party-announce


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] [RefStackl] - http://refstack.org/ - does not resolve

2015-05-06 Thread Rochelle Grober
The Refstack team is working with Infra to get refstack.org up in a VM under 
Infra's purview.  Right now, the demo is on refstack.net; refstack.net will go 
away once refstack.org is up and managed.

--rocky

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Wednesday, May 06, 2015 08:02
To: OpenStack Development Mailing List (not for usage questions)
Cc: r...@zehicle.com
Subject: Re: [openstack-dev] [RefStackl] - http://refstack.org/ - does not 
resolve

On 2015-05-06 09:37:26 -0500 (-0500), arkady_kanev...@dell.com wrote:
 What are we doing to have name resolved?
 Meanwhile what is IP address to reach it?
 Do we really expect people to submit results to that web site?

It looks like I can add that domain and whatever records we want for it... I'd 
simply need to know the IP address(es) and name(s) you want in those resource 
records.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Ironic] Large number of ironic driver bugs in nova

2015-05-06 Thread John Villalovos
JohnG,

I work on Ironic and would be willing to be a cross project liaison for
Nova and Ironic.  I would just need a little info on what to do from the
Nova side.  Meetings to attend, web pages to monitor, etc...

I assume I would start with this page:
https://bugs.launchpad.net/nova/+bugs?field.tag=ironic

And try to work with the Ironic and Nova teams on getting bugs resolved.

I would appreciate any other info and suggestions to help improve the
process.

John

On Wed, May 6, 2015 at 2:55 AM, John Garbutt j...@johngarbutt.com wrote:

 On 6 May 2015 at 09:39, Lucas Alvares Gomes lucasago...@gmail.com wrote:
  Hi
 
  I noticed last night that there are 23 bugs currently filed in nova
  tagged as ironic related. Whilst some of those are scheduler issues, a
  lot of them seem like things in the ironic driver itself.
 
  Does the ironic team have someone assigned to work on these bugs and
  generally keep an eye on their driver in nova? How do we get these
  bugs resolved?
 
 
  Thanks for this call out. I don't think we have anyone specifically
  assigned to keep an eye on the Ironic
  Nova driver, we would look at it from time to time or when someone ask
  us to in the Ironic channel/ML/etc...
  But that said, I think we need to pay more attention to the bugs in Nova.
 
  I've added one item about it to be discussed in the next Ironic
  meeting[1]. And in the meantime, I will take a
  look at some of the bugs myself.
 
  [1]
 https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting

 Thanks to you both for raising this and pushing on this.

 Maybe we can get a named cross project liaison to bridge the Ironic
 and Nova meetings. We are working on building a similar pattern for
 Neutron. It doesn't necessarily mean attending every nova-meeting,
 just someone to act as an explicit bridge between our two projects?

 I am open to whatever works though, just hoping we can be more
 proactive about issues and dependencies that pop up.

 Thanks,
 John




Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ?

2015-05-06 Thread Emilien Macchi


On 05/06/2015 01:36 PM, Tim Bell wrote:
 Julien,
 
 Has anyone started on the RPMs and/or Puppet modules ? We'd be interested in 
 trying this out.

We wrote https://github.com/stackforge/puppet-gnocchi
But we have to wait for packaging.
I know it's WIP in RDO, no clue for Debian/Ubuntu.

 
 Thanks
 Tim
 
 -Original Message-
 From: Julien Danjou [mailto:jul...@danjou.info]
 Sent: 06 May 2015 17:24
 To: Luo Gangyi
 Cc: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with
 gnocchi ?

 On Wed, May 06 2015, Luo Gangyi wrote:

 Hi Luo,

 I want to try using ceilometer with gnocchi, but I didn't find any docs
 about how to configure it.

 Everything should be documented at:

 http://docs.openstack.org/developer/gnocchi/

 The devstack installation should be pretty straightforward:

 http://docs.openstack.org/developer/gnocchi/devstack.html

 (and don't forget to also enable Ceilometer)

 I have checked the master branch of ceilometer and didn't see how
 ceilometer interacts with gnocchi either (I think there should be
 something like a
 gnocchi-dispatcher?)

 The dispatcher is in the Gnocchi source tree (for now, we're moving it to
 Ceilometer for Liberty).

 --
 Julien Danjou
 // Free Software hacker
 // http://julien.danjou.info
 
 

-- 
Emilien Macchi





[openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart

2015-05-06 Thread Arne Wiebalck
Hi,

As we swapped a fraction of our Ceph mon servers between the pre-production and 
production cluster
— something we considered to be transparent as the Ceph config points to the 
mon alias—, we ended
up in a situation where VMs with volumes attached were not able to boot (with a 
probability that matched
the fraction of the servers moved between the Ceph instances).

We found that the reason for this is the connection_info in 
block_device_mapping which contains the
IP addresses of the mon servers as extracted by the rbd driver in 
initialize_connection() at the moment
when the connection is established. From what we see, however, this information 
is not updated as long
as the connection exists, and will hence be re-applied without checking even 
when the XML is recreated. 

The idea to extract the mon servers by IP from the mon map was probably to get 
all mon servers (rather
than only one from a load-balancer or an alias), but while our current scenario 
may be special, we will face
a similar problem the day the Ceph mons need to be replaced. And that makes it 
a more general issue.

For our current problem:
Is there a user-transparent way to force an update of that connection 
information? (Apart from fiddling
with the database entries, of course.)

For the general issue:
Would it be possible to simply use the information from the ceph.conf file 
directly (an alias in our case)
throughout the whole stack to avoid hard-coding IPs that will be obsolete one 
day?
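As a minimal sketch of what resolving the mons from ceph.conf (rather than from stored IPs) could look like: the file layout below follows the usual ceph.conf conventions, but the helper name and alias are invented for illustration:

```python
# Sketch: resolve Ceph mon endpoints from ceph.conf at connection time instead
# of persisting the IPs that happened to be in the mon map. The "mon host"
# option may hold an alias or a comma-separated list of addresses.
import configparser

CEPH_CONF = """
[global]
mon host = ceph-mon.example.org
"""

def mon_hosts_from_conf(conf_text):
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    raw = parser.get("global", "mon host", fallback="")
    # Return whatever the operator configured, aliases included.
    return [h.strip() for h in raw.split(",") if h.strip()]

print(mon_hosts_from_conf(CEPH_CONF))  # ['ceph-mon.example.org']
```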

Thanks!
 Arne

—
Arne Wiebalck
CERN IT


Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota

2015-05-06 Thread Chris Dent

On Wed, 6 May 2015, Jay Pipes wrote:

I think Sean makes an excellent point that if you have more than one condition that 
results in a 403 Forbidden, it actually does not make things more expressive. 
It actually just means both humans and clients need to now delve deeper into 
the error context to determine if this is something they actually don't have 
permission to do, or whether they've exceeded their quota but otherwise have 
permission to do some action.


As I said to Sean in IRC, I can see where you guys are coming from
and I haven't really got a better counter-proposal than my experience
writing servers and clients doesn't like this so it's not like I want
to fight about it. I do think it is worth discussion and there are
obviously costs either way that we should identify and balance.
Interestingly, in the process of writing this response I think I've
managed to come up with a few reasons. On the other hand maybe I'm
just getting out yet more paint for the shed.

Basically it seems to me that the proposal to use 400 just moves the
problem around, one option has conditionals localized under 400,
centralizing the ambiguity. The other option puts the ambiguity in
categories. I guess my brain works better with the latter: a kind of
cascading decision tree.

Note: I think we're all perpetuating the myth or wish that we actually
do do something in code in response to 400 errors. Maybe in some very
special clients it might happen, but in ad-hoc clients (the best kind)
for the most part we report the status and fail and let the human decide
what's next.

In that sort of context I want the response _codes_ to have some
semantics because I want to branch on the codes (if I branch at all)
and nothing else:

* 400: bro, something bogus happened, I'm pretty sure it was your
   fault
* 401: Tell me who you are and you might get to do this
* 402: You might get to do this if you pay
* 403: You didn't get to do this because the _server_ forbids you
* 404: You didn't get to do this because it ain't there
* 405: You didn't get to do this because that action is not available
* 406: I've got the thing you want, but not in the form you want it
* 407: Some man in the middle proxy needs auth
* 408: You spoke too slowly for my awesome brains
* 409: Somebody else got there first
* 410: Seriously, it ain't there and it never will be
* 411: Why u no content-length!?
* 412: You sent conditional headers and I can't meet their
   requirements
* 413: Too big in the body!
* 414: Too big in the URI!
* 415: You sent me a thing and I might have been able to do
   something with it if it were in a different form
[...]
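The "branch on the codes and nothing else" style could look like the sketch below; the handling strings are illustrative, not any real client's behavior:

```python
# Sketch of an ad-hoc client branching purely on the response code, per the
# semantics listed above. Anything else in the 4xx range gets reported to the
# human, who decides what's next.
def handle(status):
    if status == 401:
        return "authenticate and retry"
    if status == 403:
        return "server forbids this; changing the request will not help"
    if status == 404:
        return "resource absent"
    if 400 <= status < 500:
        return "client-side problem: %d" % status
    return "ok" if status < 400 else "server-side problem: %d" % status

print(handle(403))  # server forbids this; changing the request will not help
```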

These all mean things as defined by rfcs 7231 and 7235. Those rfcs
were not pulled out of thin air: They are part of the suite of rfcs
that define HTTP. Do we want to do HTTP? Yes, I think so. In that case,
we ought to follow it where possible.

Each of those codes above have different levels of ambiguity. Some
are quite specific. For example 405, 406, 411, 412 and 415. Where we
can be sure they are the correct response we should use them and
most assuredly _not_ 400.

403, as you've both identified, is a lot more squiffy: the server
understood the request but refuses to authorize it...a request
might be forbidden for reasons unrelated to the credentials.

Which leads us to 400. How I tend to use 400 is when none of 405, 406,
409, 411, 412 or 415 can be used because the representation is
_claiming_ legitimate form (based on the headers) and no conditionals
are being violated and where none of 401, 403 or 404 can be used because
the thing is there, I am authentic and the server is not forbidding.
What that means is that there's something crufty about the otherwise good
representation: You've claimed to be sending JSON and you did, but you
left out a required field.

There is no other 4xx that covers that, thus 400.

Now if we try to meld my rules with this idea about signifying over
quota, I feel we've now discovered some collisions:

My use of 400 means there's something wrong with your request.
This is also what the spec says: the client seems to have erred.

Both of these essentially say that request was pretty okay, but not
quite right and you can change the _request_ (or perhaps the client
side environment) and achieve success.

In the case of quota you need to change the server side environment,
not this request. In fact if you do change the server (your quota)
and then do the same request again it will likely work.

Looking at 403 again: the server understood the request but refuses
to authorize it.

4xx means client side error (The 4xx (Client Error) class of status
code indicates that the client seems to have erred.), so arguably
over quota doesn't really work in _any_ 4xx because the client made no
error, the service just has a quota lower than they need. We don't
want to go down the non 4xx road at this time, so given our choices
403 is the one that most says the server 

[openstack-dev] [QA] Meeting Thursday May 7th at 17:00 UTC

2015-05-06 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, May 7th at 17:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

-Matt Treinish




Re: [openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart

2015-05-06 Thread David Medberry
Hi Arne,

We've had this EXACT same issue.

I don't know of a way to force an update as you are basically pulling the
rug out from under a running instance. I don't know if it is
possible/feasible to update the virsh xml in place and then migrate to get
it to actually use that data. (I think we tried that to no avail.)
dumpxml => massage ceph mons => import xml
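For reference, that dump/massage/re-import attempt could be driven like the sketch below; the helper names and addresses are invented, and whether a running guest actually picks up the patched endpoints is exactly the open question:

```python
# Sketch of the dumpxml => massage => re-import workaround. The connection_info
# IPs end up verbatim in the domain XML, so a plain textual swap suffices for
# illustration. replace_mons() is never executed here; it is the risky part.
import subprocess

def patch_mon_addresses(xml, old_mon, new_mon):
    # Swap the stale mon address for the new one (or an alias).
    return xml.replace(old_mon, new_mon)

def replace_mons(domain, old_mon, new_mon):
    xml = subprocess.check_output(["virsh", "dumpxml", domain]).decode()
    patched = patch_mon_addresses(xml, old_mon, new_mon)
    # Re-define the domain with the edited XML (hypothetical invocation).
    subprocess.run(["virsh", "define", "/dev/stdin"],
                   input=patched.encode(), check=True)

print(patch_mon_addresses("<host name='10.0.0.1'/>",
                          "10.0.0.1", "ceph-mon.example.org"))
```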

If you find a way, let me know, and that's part of the reason I'm replying
so that I stay on this thread. NOTE: We did this on icehouse. Haven't tried
since upgrading to Juno but I don't note any change therein that would
mitigate this. So I'm guessing Liberty/post-Liberty for a real fix.



On Wed, May 6, 2015 at 12:57 PM, Arne Wiebalck arne.wieba...@cern.ch
wrote:

 Hi,

 As we swapped a fraction of our Ceph mon servers between the
 pre-production and production cluster
 -- something we considered to be transparent as the Ceph config points to
 the mon alias--, we ended
 up in a situation where VMs with volumes attached were not able to boot
 (with a probability that matched
 the fraction of the servers moved between the Ceph instances).

 We found that the reason for this is the connection_info in
 block_device_mapping which contains the
 IP addresses of the mon servers as extracted by the rbd driver in
 initialize_connection() at the moment
 when the connection is established. From what we see, however, this
 information is not updated as long
 as the connection exists, and will hence be re-applied without checking
 even when the XML is recreated.

 The idea to extract the mon servers by IP from the mon map was probably to
 get all mon servers (rather
 than only one from a load-balancer or an alias), but while our current
 scenario may be special, we will face
 a similar problem the day the Ceph mons need to be replaced. And that
 makes it a more general issue.

 For our current problem:
 Is there a user-transparent way to force an update of that connection
 information? (Apart from fiddling
 with the database entries, of course.)

 For the general issue:
 Would it be possible to simply use the information from the ceph.conf file
 directly (an alias in our case)
 throughout the whole stack to avoid hard-coding IPs that will be obsolete
 one day?

 Thanks!
  Arne

 --
 Arne Wiebalck
 CERN IT


Re: [openstack-dev] Gerrit downtime and upgrade on Saturday 2015-05-09 at 1600 UTC

2015-05-06 Thread Elizabeth K. Joseph
On Tue, Apr 14, 2015 at 2:57 PM, James E. Blair cor...@inaugust.com wrote:
 On Saturday, May 9 at 16:00 UTC Gerrit will be unavailable for about 4
 hours while we upgrade to the latest release of Gerrit: version 2.10.

 We are currently running Gerrit 2.8 so this is an upgrade across two
 major releases of Gerrit.  The release notes for both versions are here:

   
 https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.10.html
   
 https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.9.html

 If you have any questions about the upgrade, please feel free to reply
 here or contact us in #openstack-infra on Freenode.

Just a quick reminder that this upgrade is coming up this Saturday,
May 9th, starting at 16:00 UTC.

During this upgrade we anticipate that Gerrit will be unavailable for
about 4 hours.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [OpenStack-Infra] [openstack-dev] Gerrit downtime and upgrade on Saturday 2015-05-09 at 1600 UTC

2015-05-06 Thread Elizabeth K. Joseph
On Tue, Apr 14, 2015 at 2:57 PM, James E. Blair cor...@inaugust.com wrote:
 On Saturday, May 9 at 16:00 UTC Gerrit will be unavailable for about 4
 hours while we upgrade to the latest release of Gerrit: version 2.10.

 We are currently running Gerrit 2.8 so this is an upgrade across two
 major releases of Gerrit.  The release notes for both versions are here:

   
 https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.10.html
   
 https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.9.html

 If you have any questions about the upgrade, please feel free to reply
 here or contact us in #openstack-infra on Freenode.

Just a quick reminder that this upgrade is coming up this Saturday,
May 9th, starting at 16:00 UTC.

During this upgrade we anticipate that Gerrit will be unavailable for
about 4 hours.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota

2015-05-06 Thread Ryan Brown
On 05/06/2015 03:15 PM, Chris Dent wrote:
 On Wed, 6 May 2015, Jay Pipes wrote:
 
 I think Sean makes an excellent point that if you have more than one condition
 that results in a 403 Forbidden, it actually does not make things more
 expressive. It actually just means both humans and clients need to now
 delve deeper into the error context to determine if this is something
 they actually don't have permission to do, or whether they've exceeded
 their quota but otherwise have permission to do some action.
 
 As I said to Sean in IRC, I can see where you guys are coming from
 and I haven't really got a better counter-proposal than my experience
 writing servers and clients doesn't like this so it's not like I want
 to fight about it. I do think it is worth discussion and there are
 obviously costs either way that we should identify and balance.
 Interestingly, in the process of writing this response I think I've
 managed to come up with a few reasons. On the other hand maybe I'm
 just getting out yet more paint for the shed.
 
 Basically it seems to me that the proposal to use 400 just moves the
 problem around, one option has conditionals localized under 400,
 centralizing the ambiguity. The other option puts the ambiguity in
 categories. I guess my brain works better with the latter: a kind of
 cascading decision tree.
 
 Note: I think we're all perpetuating the myth or wish that we actually
 do do something in code in response to 400 errors. Maybe in some very
 special clients it might happen, but in ad-hoc clients (the best kind)
 for the most part we report the status and fail and let the human decide
 what's next.

Guilty as charged. It may be that the benefit of moving 403-400 isn't
worth the trouble in any case (though I'd prefer it) since there are
already clients out in the world that may/may not rely on this behavior.

 In that sort of context I want the response _codes_ to have some
 semantics because I want to branch on the codes (if I branch at all)
 and nothing else:
 
 * 400: bro, something bogus happened, I'm pretty sure it was your
fault
 * 401: Tell me who you are and you might get to do this
 * 402: You might get to do this if you pay
 * 403: You didn't get to do this because the _server_ forbids you
 * 404: You didn't get to do this because it ain't there
 * 405: You didn't get to do this because that action is not available
 * 406: I've got the thing you want, but not in the form you want it
 * 407: Some man in the middle proxy needs auth
 * 408: You spoke too slowly for my awesome brains
 * 409: Somebody else got there first
 * 410: Seriously, it ain't there and it never will be
 * 411: Why u no content-length!?
 * 412: You sent conditional headers and I can't meet their
requirements
 * 413: Too big in the body!
 * 414: Too big in the URI!
 * 415: You sent me a thing and I might have been able to do
something with it if it were in a different form
 [...]
 
 These all mean things as defined by rfcs 7231 and 7235. Those rfcs
 were not pulled out of thin air: They are part of the suite of rfcs
 that define HTTP. Do we want to do HTTP? Yes, I think so. In that case,
 we ought to follow it where possible.
 
 Each of those codes above have different levels of ambiguity. Some
 are quite specific. For example 405, 406, 411, 412 and 415. Where we
 can be sure they are the correct response we should use them and
 most assuredly _not_ 400.
 
 403, as you've both identified, is a lot more squiffy: the server
 understood the request but refuses to authorize it...a request
 might be forbidden for reasons unrelated to the credentials.
 
 Which leads us to 400. How I tend to use 400 is when none of 405, 406,
 409, 411, 412 or 415 can be used because the representation is
 _claiming_ legitimate form (based on the headers) and no conditionals
 are being violated and where none of 401, 403 or 404 can be used because
 the thing is there, I am authentic and the server is not forbidding.
 What that means is that there's something crufty about the otherwise good
 representation: You've claimed to be sending JSON and you did, but you
 left out a required field.
 
 There is no other 4xx that covers that, thus 400.
 
 Now if we try to meld my rules with this idea about signifying over
 quota, I feel we've now discovered some collisions:
 
 My use of 400 means there's something wrong with your request.
 This is also what the spec says: the client seems to have erred.
 
 Both of these essentially say that request was pretty okay, but not
 quite right and you can change the _request_ (or perhaps the client
 side environment) and achieve success.
 
 In the case of quota you need to change the server side environment,
 not this request. In fact if you do change the server (your quota)
 and then do the same request again it will likely work.
 
 Looking at 403 again: the server understood the request but refuses
 to authorize it.
 

Very good point. I was thinking 

Re: [openstack-dev] [Fuel] Nominate Julia Aranovich for fuel-web core

2015-05-06 Thread Vitaly Kramskikh
So, there is no objections and Julia is now a core reviewer for fuel-web.
Congratulations!

2015-05-05 16:17 GMT+03:00 Vitaly Kramskikh vkramsk...@mirantis.com:

 Thanks for voting. If nobody has objections by tomorrow, Julia will get +2
 rights for fuel-web.

 2015-05-05 15:30 GMT+03:00 Dmitry Pyzhov dpyz...@mirantis.com:

 +1

 On Tue, May 5, 2015 at 1:06 PM, Evgeniy L e...@mirantis.com wrote:

 +1

 On Tue, May 5, 2015 at 12:55 PM, Sebastian Kalinowski 
 skalinow...@mirantis.com wrote:

 +1

 2015-04-30 11:33 GMT+02:00 Przemyslaw Kaminski pkamin...@mirantis.com
 :

 +1, indeed Julia's reviews are very thorough.

 P.

 On 04/30/2015 11:28 AM, Vitaly Kramskikh wrote:
  Hi,
 
  I'd like to nominate Julia Aranovich
  http://stackalytics.com/report/users/jkirnosova for fuel-web
  https://github.com/stackforge/fuel-web core team. Julia's reviews
 are
  always thorough and have decent quality. She is one of the top
  contributors and reviewers in fuel-web repo (mostly for JS/UI stuff).
 
  Please vote by replying with +1/-1.
 
  --
  Vitaly Kramskikh,
  Fuel UI Tech Lead,
  Mirantis, Inc.
 
 
 
 

















 --
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.




-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.


Re: [openstack-dev] [RefStackl] - http://refstack.org/ - does not resolve

2015-05-06 Thread Steve Gordon
- Original Message -
 From: Jeremy Stanley fu...@yuggoth.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 On 2015-05-06 09:37:26 -0500 (-0500), arkady_kanev...@dell.com wrote:
  What are we doing to have name resolved?
  Meanwhile what is IP address to reach it?
  Do we really expect people to submit results to that web site?
 
 It looks like I can add that domain and whatever records we want for
 it... I'd simply need to know the IP address(es) and name(s) you
 want in those resource records.
 --
 Jeremy Stanley

It looks like it got moved to refstack.net (or at least, that address resolves 
and looks to be the right content...).

Thanks,

Steve



Re: [Openstack] Swift - Adding S3 Glacier like interface in Swift & Swift3 Object Storage

2015-05-06 Thread Samuel Merritt

On 5/6/15 2:34 AM, Bala wrote:

I am new to this list so please excuse me if I posted it in wrong list.

We have a tape library which we would like to integrate with the OpenStack
Swift & Swift3 object storage services to provide an S3 interface.

The current file system we have for the library has been integrated with
the Swift storage service and manages the changer robot & tapes.

This works well for writing.

However for reading, loading a tape takes longer when GET requests are
received, in some cases over 5 minutes, and this causes timeout errors.
Most of the data stored in these tapes are archival data. This gets
worse when multiple GET requests are received (multi-user) for objects which
are stored in different tapes.

Due to the longer read times, we are looking to provide an Amazon S3
Glacier-like interface through Swift & Swift3 so that clients can issue
a POST OBJECT RESTORE request and wait for the data to be moved to
temporary store/cache.

I have come across a similar request

http://openstack-dev.openstack.narkive.com/kI72vk9l/ltfs-integration-with-openstack-swift-for-scenario-like-data-archival-as-a-service

and understand the suggestions.

We would like to provide an S3 Glacier-like interface rather than Swift
storage policies if we can.

I would be grateful if you could kindly advise

1. How hard is it to change the Swift & Swift3 code base to provide an
S3 Glacier-like interface?


It's not easy, that's for sure. Swift's API is all synchronous: issue a 
GET, receive the object; issue a PUT, create an object; et cetera. 
Glacier-style asynchronous retrieval is something completely new and 
different.
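As a sketch of just how different that asynchronous shape is: a Glacier-like restore needs server-side job state that plain GET/PUT never needed. Everything below (method names, states, status codes) is invented for illustration, not a proposed Swift API:

```python
# Minimal sketch of an asynchronous restore flow layered over a synchronous
# object API: the client POSTs a restore request, a background worker marks
# the object staged once the tape is loaded, and GETs succeed only after that.
PENDING, READY = "pending", "ready"

class RestoreQueue:
    def __init__(self):
        self._jobs = {}

    def post_restore(self, obj):
        # Client: "POST object restore" -- accepted, not yet retrievable.
        self._jobs.setdefault(obj, PENDING)
        return 202

    def tape_loaded(self, obj):
        # Background worker: data has been moved to the temporary cache.
        self._jobs[obj] = READY

    def get(self, obj):
        # Client polls with a plain GET until the restore completes.
        return 200 if self._jobs.get(obj) == READY else 503

q = RestoreQueue()
assert q.post_restore("archive/a") == 202
assert q.get("archive/a") == 503   # still on tape
q.tape_loaded("archive/a")
assert q.get("archive/a") == 200
```

Note the pending-jobs dict here is exactly the state Samuel asks about below: in a real system it would itself have to survive disk and machine failures.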


Some food for thought: where will you store pending retrieval requests? 
How will you ensure that retrieval requests survive disk and machine 
failures the way everything else in Swift does?


I'm not asking you to answer here (though you can if you want to, of 
course); I'm just trying to nudge your thoughts in the right direction.



2. Can this be done through Swift storage policies alone?


No. A storage policy determines where and how its objects are stored, 
which affects things like access times and storage cost. The API for 
accessing those objects does not change based on the storage policy.



3. Do we have to modify the Swift auditor service to do tape-based
checking rather than object-based?


You mean audit in order? Probably a good idea, otherwise your tapes will 
spend all day seeking.



4. Would the Swift replication service cause frequent tape change requests?


I'd guess that it would, but nobody knows for sure. As far as I know, 
nobody has jammed tape-library support into Swift before. You're the 
first. Report back and let everyone know how it goes. :)



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [chef] Feedback to move IRC Monday meeting and time.

2015-05-06 Thread Jan Klare
Hi,

for me (I live in Germany) the full hour (so 15:00 UTC) is fine.

Cheers,
Jan


 On May 6, 2015, at 7:11 PM, JJ Asghar jasg...@chef.io wrote:
 
 Hey everyone!
 
 As we move forward with our big tent move[1] Jan suggested we move from our 
 traditional IRC meeting in our main channel #openstack-chef to one of the 
 official OpenStack meeting channels[2].
 
 This has actually caused a situation that I’d like to make public. In the 
 documentation the times for the meetings are suggested at the top of the 
 hour, we have ours that start at :30 past. This allows for our friends and 
 community members on the west coast of the United States able to join at a 
 pseudo-reasonable time.  The challenge is, if we move it forward to the top 
 of the hour, we may lose the west coast, but if we move it back to the top of 
 the next hour we may lose our friends in Germany and earlier time zones.
 
 I’m not sure what to do here, so i’d like some feedback from the community.  
 
 When we’ve come to a consensus we can attempt to find the open slot in the 
 official IRC channels and i can put the stake in the ground here[3].
 
 Thoughts, questions, concerns?
 
 -JJ
 
 
 
 [1]: https://review.openstack.org/#/c/175000/
 [2]: https://wiki.openstack.org/wiki/Meetings/CreateaMeeting
 [3]: https://wiki.openstack.org/wiki/Meetings#Chef_Cookbook_meetings



Re: [openstack-dev] [oslo] Adding Joshua Harlow to oslo-core

2015-05-06 Thread Vilobh Meshram
Not a core but definitely a +1 from my side.
Has great technical insights and is someone who is always happy to help
others.

-Vilobh

On Tue, May 5, 2015 at 12:55 PM, David Medberry openst...@medberry.net
wrote:

 Not a voting member, but +1 from me. He's core in my book.

 On Tue, May 5, 2015 at 11:27 AM, Ben Nemec openst...@nemebean.com wrote:


 +1 from me as well!

 On 05/05/2015 09:47 AM, Julien Danjou wrote:
  Hi fellows,
 
  I'd like to propose that we add Joshua Harlow to oslo-core. He is
  already maintaining some of the Oslo libraries (taskflow, tooz…)
  and he's helping on a lot of other ones for a while now. Let's
  bring him in for real!
 
 
 
 









Re: [openstack-dev] [Fuel] LBaaS in version 5.1

2015-05-06 Thread Daniel Comnea
Thanks Stanislaw for the reply.

Sure, I can do that. The only unknown I have is related to the Fuel HA
controllers: I assume I can safely ignore controller HA (LBaaS
doesn't support HA :) ) and just go with the standard LBaaS setup?
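(For anyone following along: the manual setup from the admin guide referenced in Stanislaw's reply below boils down to roughly the following on an Icehouse controller. This is a sketch only; the package name and the plugin/driver paths are taken from the Icehouse-era admin guide, not from this thread, so verify them against the Fuel 5.1 repos before running.)

```shell
# Install the LBaaS agent and HAProxy (package names assumed from the
# Icehouse admin guide; check what ships in the Fuel 5.1 repos).
apt-get install neutron-lbaas-agent haproxy

# neutron.conf: load the LBaaS service plugin, e.g.
#   service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

# lbaas_agent.ini: select the HAProxy namespace driver and an interface driver, e.g.
#   device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
#   interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

# Then restart the agent so the new config is picked up.
service neutron-lbaas-agent restart
```

Horizon additionally needs `'enable_lb': True` in its `OPENSTACK_NEUTRON_NETWORK` setting before the Load Balancers panel appears.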



On Wed, May 6, 2015 at 2:55 PM, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:

 Hi Daniel,

 Unfortunately, we never supported LBaaS until Fuel 6.0, when the plugin system
 was introduced and the LBaaS plugin was created. So I think docs about it
 never existed for 5.1. But as far as I know, you can easily install LBaaS in 5.1
 (it should be shipped in our repos) and configure it in accordance with the
 standard OpenStack cloud administrator guide [1].

 [1]
 http://docs.openstack.org/admin-guide-cloud/content/install_neutron-lbaas-agent.html

 On Wed, May 6, 2015 at 2:12 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 HI all,

 Recently I used Fuel 5.1 to deploy OpenStack Icehouse in a lab (PoC), and
 a request came in to enable Neutron LBaaS.

 I looked through the Fuel docs to see if this is supported in the version
 I'm running, but failed to find anything.

 Can anyone point me to docs that mention a) whether it is supported,
 and b) how to enable it via Fuel?

 Thanks,
 Dani








Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ?

2015-05-06 Thread Tim Bell
Sorry to add another question: can Gnocchi be installed on a Juno cloud, or do
we need to be running Kilo?

Tim

 -Original Message-
 From: Tim Bell [mailto:tim.b...@cern.ch]
 Sent: 06 May 2015 19:36
 To: OpenStack Development Mailing List (not for usage questions); Luo Gangyi
 Subject: Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with
 gnocchi ?
 
 Julien,
 
 Has anyone started on the RPMs and/or Puppet modules ? We'd be interested in
 trying this out.
 
 Thanks
 Tim
 
  -Original Message-
  From: Julien Danjou [mailto:jul...@danjou.info]
  Sent: 06 May 2015 17:24
  To: Luo Gangyi
  Cc: OpenStack Development Mailing L
  Subject: Re: [openstack-dev] [Ceilometer][Gnocchi] How to try
  ceilometer with gnocchi ?
 
  On Wed, May 06 2015, Luo Gangyi wrote:
 
  Hi Luo,
 
   I want to try using ceilometer with gnocchi, but I didn't find any docs
   about how to configure it.
 
  Everything should be documented at:
 
  http://docs.openstack.org/developer/gnocchi/
 
  The devstack installation should be pretty straightforward:
 
  http://docs.openstack.org/developer/gnocchi/devstack.html
 
  (and don't forget to also enable Ceilometer)
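(A sketch of what that devstack setup typically amounts to in `local.conf` -- the plugin line is an assumption based on the Kilo-era plugin interface, so check the Gnocchi devstack doc above for the exact incantation:)

```ini
# local.conf -- minimal sketch; plugin URL/branch assumed, verify against
# http://docs.openstack.org/developer/gnocchi/devstack.html
[[local|localrc]]
enable_plugin gnocchi https://github.com/openstack/gnocchi master

# ... and don't forget Ceilometer itself:
enable_service ceilometer-acompute ceilometer-acentral ceilometer-anotification
enable_service ceilometer-collector ceilometer-api
```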
 
   I have checked the master branch of ceilometer and didn't see how
   ceilometer interacts with gnocchi either (I think there should be
   something like a gnocchi-dispatcher?)
 
  The dispatcher is in the Gnocchi source tree (for now, we're moving it
  to Ceilometer for Liberty).
 
  --
  Julien Danjou
  // Free Software hacker
  // http://julien.danjou.info
 



Re: [Openstack] How to ping instance IP ?

2015-05-06 Thread Wilson Kwok
Resending this message to all:

I can't see the network and router options in the System panel. I think I need
to install Neutron, right? Do you have any guide that would help?

Thanks

2015-05-07 1:53 GMT+08:00 Wilson Kwok leiw...@gmail.com:

 Hello,

 I can't see network and router option in System panel, I think need
 install  neutron, right ? do you have any guide for help ?

 Thanks

 2015-05-06 23:15 GMT+08:00 Wilson Kwok leiw...@gmail.com:

 Hi Jonathan Abdiel Gonzalez Valdebenito,

 Sorry, I'm an OpenStack newbie. Do you mean creating a virtual router
 between 172.28.0.0 and 10.0.47.0? Please see the simple diagram below.

 TP-link router = public IP address 119.101.54.x and lan IP address
 172.28.0.1

 Home computer = 172.28.0.130

 Ubuntu eth0 (no IP address) and eth1 172.28.0.105 (for management)

 Virtual router = home interface 172.28.0.254 and internal interface
 10.0.47.254

 br100 IP address = 10.0.47.1 map to eth0

 instance01 IP address = 10.0.47.2
 Thanks
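(Side note: a flat-DHCP nova-network layout like the diagram above is normally expressed with options along these lines. This is a sketch using the diagram's values; the option names come from Icehouse-era nova-network documentation, not from this thread, so treat them as assumptions:)

```ini
# /etc/nova/nova.conf -- flat-DHCP sketch with the addresses from the diagram
network_manager = nova.network.manager.FlatDHCPManager
flat_network_bridge = br100
flat_interface = eth0
public_interface = eth1
```

The 10.0.47.0/24 fixed network itself would then be created with `nova-manage network create`, and instances get addresses on br100.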

 2015-05-06 21:56 GMT+08:00 Jonathan Abdiel Gonzalez Valdebenito 
 jonathan.abd...@gmail.com:

 Hi Wilson,

 If that's so, then you have a problem: you didn't configure the network
 for the instances. You may have created a network but not a router, and
 without a router you can't reach the instances' network.

 On Wed, May 6, 2015 at 6:36 AM Wilson Kwok leiw...@gmail.com wrote:

 Hi Jonathan Abdiel Gonzalez Valdebenito,

 After I type ip netns, nothing is displayed.

 Thanks



 2015-05-05 22:28 GMT+08:00 Jonathan Abdiel Gonzalez Valdebenito 
 jonathan.abd...@gmail.com:

 Hi Wilson,

 To ping the instance I suggest you use these commands:

 ip netns -- lists your namespaces; pick the one with the
 router name
 ip netns exec qrouter-(hash) ping instance-ip -- with this you can
 ping the instance from inside the router's namespace.
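(Concretely, that looks something like the following; the namespace hash below is illustrative only -- substitute whatever `ip netns` actually prints on your node:)

```shell
# List network namespaces; a Neutron router's namespace is named
# qrouter-<router-uuid>.
ip netns

# Ping the instance from inside the router's namespace
# (hash is illustrative, use the one listed above):
sudo ip netns exec qrouter-7a44de32-3c64-4b32-b0a0-2a3b5e1c9d10 ping -c 3 10.0.47.2
```

If `ip netns` prints nothing at all, you are likely not running Neutron with namespaces (e.g. a nova-network deployment), and this technique does not apply.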

 Hope it was useful

 On Tue, May 5, 2015 at 10:57 AM Wilson Kwok leiw...@gmail.com wrote:

 Hello,

 Here is my home lap network settings:

 Home computer 172.28.0.130

 Ubuntu eth0 (no IP address)
eth1 172.28.0.105 (for management)

 br100 10.0.47.1 map to eth0

 instance01 10.0.47.2

 My question is that the home computer can't ping instance01 at 10.0.47.2,
 and even Ubuntu itself can't. I already allow ICMP ALL in the security
 group; can anyone help?

 Thanks






___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


  1   2   >