Re: [openstack-dev] [Fuel] fake threads in tests

2015-02-18 Thread Evgeniy L
Hi Przemyslaw,

Thanks for bringing up the topic. A long time ago we had a similar
discussion, and I agree that the way it works now is not good at all,
because it leads to a lot of problems. I remember the time when our tests
were randomly broken because of deadlocks and race conditions with fake
threads.

We should write some helpers for the receiver module to explicitly and
easily change the state of the system; as you mentioned, it should be done
in a synchronous fashion.

But of course we cannot just remove the fake mode, and we should continue
supporting it; some fake-thread-specific tests should be added to make sure
that it's not broken.

Thanks,

On Mon, Feb 16, 2015 at 2:54 PM, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

 Hello,

 This somehow relates to [1]: in integration tests we have a class
 called FakeThread. It is responsible for spawning threads to simulate
 asynchronous tasks in the fake env. In the BaseIntegrationTest class we
 have a method called _wait_for_threads that waits for all fake threads to
 terminate.

 In my understanding, what these things actually do is simply simulate
 Astute's responses. I'm wondering whether this could be replaced by
 a better solution; I just want to start a discussion on the topic.

 My suggestion is to get rid of all this stuff and implement a
 predictable solution: something along the lines of promises or coroutines
 that would execute synchronously. With either promises or coroutines we
 could simulate task responses any way we want, without the need to
 wait using unpredictable stuff like sleeping, threading and such. No
 need for waiting or killing threads. It would hopefully make our tests
 easier to debug and get rid of the random errors that sometimes
 get into our master branch.

 P.

 [1] https://bugs.launchpad.net/fuel/+bug/1421599

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fake threads in tests

2015-02-18 Thread Przemyslaw Kaminski
Yes, I agree; basically, the logic of introducing promises (or fake
threads or whatever they are called) should itself be tested too.

What this is all about, basically, is mocking Astute and being able to
easily program its responses in tests.

P.


On 02/18/2015 09:27 AM, Evgeniy L wrote:
 Hi Przemyslaw,
 
 Thanks for bringing up the topic. A long time ago we had a similar
 discussion, and I agree that the way it works now is not good at all,
 because it leads to a lot of problems. I remember the time when our
 tests were randomly broken because of deadlocks and race conditions
 with fake threads.
 
 We should write some helpers for the receiver module to explicitly and
 easily change the state of the system; as you mentioned, it should be
 done in a synchronous fashion.
 
 But of course we cannot just remove the fake mode, and we should
 continue supporting it; some fake-thread-specific tests should be added
 to make sure that it's not broken.
 
 Thanks,
 
 On Mon, Feb 16, 2015 at 2:54 PM, Przemyslaw Kaminski
 pkamin...@mirantis.com wrote:
 
 Hello,
 
 This somehow relates to [1]: in integration tests we have a class
 called FakeThread. It is responsible for spawning threads to simulate
 asynchronous tasks in the fake env. In the BaseIntegrationTest class
 we have a method called _wait_for_threads that waits for all fake
 threads to terminate.
 
 In my understanding, what these things actually do is simply simulate
 Astute's responses. I'm wondering whether this could be replaced by a
 better solution; I just want to start a discussion on the topic.
 
 My suggestion is to get rid of all this stuff and implement a
 predictable solution: something along the lines of promises or
 coroutines that would execute synchronously. With either promises or
 coroutines we could simulate task responses any way we want, without
 the need to wait using unpredictable stuff like sleeping, threading
 and such. No need for waiting or killing threads. It would hopefully
 make our tests easier to debug and get rid of the random errors that
 sometimes get into our master branch.
 
 P.
 
 [1] https://bugs.launchpad.net/fuel/+bug/1421599
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Separating granular tasks validator

2015-02-18 Thread Evgeniy L
+1 to extract validators for granular deployment tasks

Dmitry, do you mean that we should create some cli to generate graph
picture? Or just make it as a module and then use it in Nailgun?

Thanks,

On Tue, Feb 17, 2015 at 4:31 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 +1 for a separate tasks/graph validation library

 In my opinion we may even migrate the graph visualizer to this library,
 because it is most useful during development, and requiring an installed
 Fuel with Nailgun feels a bit suboptimal


 On Tue, Feb 17, 2015 at 12:58 PM, Kamil Sambor ksam...@mirantis.com
 wrote:

 Hi all,

 I want to discuss moving the validation from our repositories into a
 single one. At the moment in Fuel we have validation for granular
 deployment tasks in 3 separate repositories, so we need to maintain very
 similar code in all of them. The new idea that we discussed assumes
 keeping this code in one place. Below are more details.

 The schema validator should live in a separate repo; we will install the
 validator in fuel-plugin, fuel-lib and fuel-nailgun. The validator should
 support versions (return schemas and validate against them for a selected
 version).
 Reasons why we need validation in all three repositories:
 nailgun: we need validation in the API because we are able to send our own
 tasks to Nailgun and execute them (now we validate the type of tasks in
 the deployment graph and during installation of a plugin)
 fuel-library: we need to check that the tasks schema is correctly defined
 in the task.yaml files and that tasks do not create cycles (we actually do
 both already)
 fuel-plugin: we need to check that the defined tasks are supported by the
 selected version of Nailgun (now we check that the task types match types
 hardcoded in fuel-plugin; we have not updated this part in a while, and
 now there are only 2 types of tasks: shell and puppet)
 With versioning we shouldn't have conflicts between Nailgun serialization
 and fuel-plugin, because a plugin will be able to use the schemas for a
 specified version of Nailgun.

 As for core reviewers of the repository, we should keep the same reviewers
 as we have in fuel-core.

 What the validator should look like:
 a separate repo, installable using pip
 returns the correct schema for a selected version of Fuel
 able to validate a schema for a selected version and ignore selected
 fields
 validates the graph built from the selected tasks
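 As a rough illustration of the two validator duties above (the schema
 contents, version numbers and function names are invented for the example,
 not the real Fuel schemas):

```python
# Sketch of the proposed shared validator: versioned task-type validation
# plus a dependency-cycle check -- the duties listed above for
# nailgun / fuel-library / fuel-plugin.
SCHEMAS = {
    '6.1': {'allowed_types': {'shell', 'puppet'}},
}

def validate_task(task, version='6.1'):
    # Version-aware schema check: is the task type known to this release?
    schema = SCHEMAS[version]
    if task.get('type') not in schema['allowed_types']:
        raise ValueError('unsupported task type: %r' % task.get('type'))

def has_cycle(graph):
    # graph maps task id -> list of required task ids; plain DFS
    # cycle detection, like the check fuel-library needs for tasks.
    visiting, done = set(), set()

    def visit(node):
        if node in visiting:
            return True   # back-edge: node is on the current DFS path
        if node in done:
            return False
        visiting.add(node)
        if any(visit(dep) for dep in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(visit(n) for n in graph)
```

 Versioning lives in one dict, so adding a new release or task type is a
 one-line change made in one repository.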

 Pros and cons of this solution:
 pros:
 one place to keep the validation
 less error prone - we will eliminate errors caused by not updating one of
 the repos; it will also be easy to test whether changes are correct and
 compatible with all repos
 easier to develop (fewer changes when we add a new type of task or change
 task schemas - we edit just one place)
 easy distribution of the code between repositories, and easy to use by
 external developers
 cons:
 a new repository that needs to be managed (and included in the
 CI/QA/release cycle)
 a new dependency for fuel-library, fuel-web and fuel-plugins (fuel plugin
 builder) which developers need to be aware of

 Please comment and give your opinions.

 Best regards,
 Kamil Sambor

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][keystone] Domain information through ceilometer and authentication against keystone v3

2015-02-18 Thread Namita Chaudhari
Sure. Thanks Gordon.

On Tue, Feb 17, 2015 at 10:32 PM, gordon chung g...@live.ca wrote:


 1) *Getting domain information:* I haven't come across one, but is there
 any ceilometer API which would provide domain information along with the
 usage data?

 2) *Ceilometer auth against keystone v3:* As the domain feature is
 provided in the keystone v3 API, I am using that. Is there a way to
 configure ceilometer so that it would use the keystone v3 API? I tried
 doing that but it didn't work for me. Also, I came across a question forum (
 https://ask.openstack.org/en/question/55353/ceilometer-v3-auth-against-keystone/
 )
 which says that ceilometer can't use v3 for getting service tokens
 since the middleware doesn't support it.

 i've never actually tried this but if you are referring to ceilometer's
 api, it uses keystonemiddleware to authenticate so you'd probably need to
 add auth_version to the keystone_authtoken section in ceilometer.conf...
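 for illustration, the tweak might look roughly like this in
 ceilometer.conf (a hedged sketch -- auth_version comes from
 keystonemiddleware's keystone_authtoken options and os_auth_url from the
 telemetry service_credentials group; verify the exact option names and
 values against your release):

```ini
[keystone_authtoken]
# keystonemiddleware option pinning the identity API version
auth_version = v3.0

[service_credentials]
# have ceilometer talk to keystone via a v3 endpoint
os_auth_url = http://controller:5000/v3
```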

 regarding ceilometer speaking to other services, the service_credentials
 options are available here:
 http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html.
 are any additional options required to be passed in?

 adding the keystone tag in case they feel like pointing out something
 obvious.

 cheers,
 *gord*


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

*Namita Chaudhari* ● Engineer - Product Development ● Sungard Availability
Services, India. ● 2nd Floor, Wing 4, Cluster D, MIDC Kharadi Knowledge
Park, Pune - 411 014 ● Email: namita.chaudh...@sungardas.com
namita.chaudh...@sungard.com ● www.sungardas.in




*CONFIDENTIALITY:*  This e-mail (including any attachments) may contain
confidential, proprietary and privileged information, and unauthorized
disclosure or use is prohibited.  If you received this e-mail in error,
please notify the sender and delete this e-mail from your system.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-18 Thread Daniel P. Berrange
On Tue, Feb 17, 2015 at 09:29:19AM -0800, Clint Byrum wrote:
 Excerpts from Daniel P. Berrange's message of 2015-02-17 02:37:50 -0800:
  On Wed, Feb 11, 2015 at 03:14:39PM +0100, Stefano Maffulli wrote:
## Cores are *NOT* special

At some point, for some reason that is unknown to me, this message
changed and the feeling of cores being some kind of superheroes became
a thing. It's gotten to the point that I've come to know
that some projects even have private (flagged with +s), password
protected, irc channels for core reviewers.
   
   This is seriously disturbing.
   
   If you're one of those core reviewers hanging out on a private channel,
   please contact me privately: I'd love to hear from you why we failed as
   a community at convincing you that an open channel is the place to be.
   
   No public shaming, please: education first.
  
  I've been thinking about these last few lines a bit, I'm not entirely
  comfortable with the dynamic this sets up.
  
  What primarily concerns me is the issue of community accountability. A core
  feature of OpenStack's project & individual team governance is the idea
  of democratic elections, where the individual contributors can vote in
  people who they think will lead OpenStack in a positive way, or conversely
  hold leadership to account by voting them out next time. The ability of
  individual contributors to exercise this freedom, though, relies on the
  voters being well informed about what is happening in the community.
  
  If cases of bad community behaviour, such as use of password protected IRC
  channels, are always primarily dealt with via further private
  communications, then we are denying the voters the information they need
  to hold people to account. I can understand the desire to avoid publicly
  shaming people right away, because the accusations may be false, or may be
  arising from a simple misunderstanding, but at some point genuine issues
  like this need to be public. Without this we make it difficult for
  contributors to make an informed decision at future elections.
  
  Right now, this thread has left me wondering whether there are still any
  projects which are using password protected IRC channels, or whether they
  have all been deleted, and whether I will be unwittingly voting for people
  who supported their use in future openstack elections.
  
 
 Shaming a person is a last resort, when that person may not listen to
 reason. It's sometimes necessary to bring shame to a practice, but even
 then, those who are participating are now draped in shame as well and
 will have a hard time saving face.

This really isn't about trying to shame people, rather it is about
having accountability in the open.

If the accusations of running private IRC channels were false, then
yes, it would be an example of shaming to then publicise those who
were accused.

Since it is confirmed that private password protected IRC channels
do in fact exist, then we need to have the explanations as to why
this was done be made in public. The community can then decide
whether the explanations offered provide sufficient justification.
This isn't about shaming, it is about each individual being able
to decide for themselves as to whether what happened was acceptable,
given the explanations.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] - Add custom JS functions to dashboard

2015-02-18 Thread Radomir Dopieralski
On 16/02/15 14:54, Marcos Fermin Lobo wrote:
 Hi all,
 
 I would like to add some (of my own) JavaScript functions to the image
 list in the Project dashboard.
 
 I've followed the documentation
 (http://docs.openstack.org/developer/horizon/topics/customizing.html#custom-javascript),
 but I think it is outdated, because that documentation refers to a
 directory tree which is not the same as in (for example) the Juno release.
 I mean, there is no:
 openstack_dashboard/dashboards/project/templates/project/ (check
 https://github.com/cernops/horizon/tree/master-patches/openstack_dashboard/dashboards/project)
 
 
 So, my question is: where should I write my own JavaScript functions to
 be able to use them in the image list (project dashboard)? The
 important point here is that my new JavaScript functions should be
 available to the compress process, which is executed (by default) during
 RPM building.
 
 Thank you for your time.

That documentation is still valid, even if the example paths are
different. We use exactly this technique in tuskar-ui and it still works.

Starting with Juno, you can also add JavaScript files in your panel's
configuration file, see
http://docs.openstack.org/developer/horizon/topics/settings.html#add-js-files
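For reference, a minimal pluggable-settings sketch (the file name and the
JS path are illustrative, not taken from a real deployment):

```python
# openstack_dashboard/enabled/_50_custom_js.py -- file name illustrative.
# Horizon merges ADD_JS_FILES from enabled/ settings files into the page,
# so the listed files also go through the compress step at packaging time.
ADD_JS_FILES = ['horizon/js/my_image_list_extras.js']
```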

-- 
Radomir Dopieralski


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-18 Thread Flavio Percoco

On 17/02/15 09:32 -0800, Stefano Maffulli wrote:

Changing the subject since Flavio's call for openness was broader than
just private IRC channels.

On Tue, 2015-02-17 at 10:37 +, Daniel P. Berrange wrote:

If cases of bad community behaviour, such as use of password protected
IRC channels, are always primarily dealt with via further private
communications, then we are denying the voters the information they
need to hold people to account. I can understand the desire to avoid
publicly shaming people right away, because the accusations may be
false, or may be arising from a simple misunderstanding, but at some
point genuine issues like this need to be public. Without this we make
it difficult for contributors to make an informed decision at future
elections.


You got my intention right: I wanted to understand better what led some
people to create a private channel, and what their needs were. For that
objective, an accusatory tone won't get anywhere; instead I
needed to provide them a safe place to discuss, and then I would report
back in the open.

So far, I've only received comments in private from one person,
concerned about public logging of channels without notification. I
wished the people hanging out on at least one of such private channels
would provide more insights on their choice, but so far they have not.


Right, but that isn't a valid point for a private, *password protected*,
IRC channel.


Regarding the why: at least one person told me they prefer not to use
official openstack IRC channels because there is no notification when a
channel is being publicly logged. Together with freenode not obfuscating
host names, and eavesdrop logs being available to any spammer, at least
one person is concerned that private information may leak. There may also
be legal implications in Europe, under the Data Protection Directive,
since IP addresses and hostnames can be considered sensitive data. Not to
mention the casual dropping of emails or phone numbers in public+logged
channels.


With regards to logging, there are ways to hide hostnames, and FWIW, I
believe logging IRC channels is part of our open principles. It allows
for historical research - it certainly helped build a good point
for this thread ;) - which is useful for our community and a reference
for future development.

I don't think anyone reads IRC logs every day, but I've seen them linked
and used as a reference enough times to consider them a valuable
resource for our community.

That said, I believe people working in an open community like
OpenStack's should stop worrying about logged IRC channels - which I
honestly believe are the least of their openness problems - and focus
on more important things. This is an open community, and in order to
make it work we need to keep it as such. Honestly, this is like
joining a mailing list and worrying about possible leaks in logged
emails.



I think these points are worth discussing. One easy fix this person
suggests is to make it the default that all channels are logged, and write
a warning on the wiki/IRC page. Another is to make the channel bot
announce whether the channel is logged. Also: cleaning up the hostname
details on joins/parts in eavesdrop, and putting the logs behind a login
(to hide them from spam harvesters).

Thoughts?


I've proposed this several times already and I still think some
consistency here is worth it. I'd vote to enable logging on all
channels.

Fla.

P.S: Join the open side of the force #badumps

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-18 Thread Evgeniy L
Vladimir,

What Andrew is saying is that we should copy some specific keys to some
specific roles, and it's easy to do even now: just create several
role-specific tasks and copy the required keys.
A deployment engineer who knows which keys are required for which roles
can do that.
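For illustration, such a role-specific task might look roughly like this
in a granular tasks.yaml (the id, role, paths and exact field set are
illustrative, not a verified Fuel task schema):

```yaml
- id: copy_ceph_keys
  type: shell
  role: [ceph-osd]
  requires: [generate_keys]
  parameters:
    # copy only the keys this role needs, over ssh from the master
    cmd: scp -r master:/var/lib/fuel/keys/ceph /var/lib/astute/
    timeout: 180
```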

What you are saying is that we should have some way to restrict a task
from getting information it wants. That is a separate, huge topic, because
it requires creating policies which a plugin developer should describe to
get access to the data, as is done for iOS/Android; also, it's not so easy
to do sandboxing when a task can execute any shell command on any node.

Thanks,

On Wed, Feb 18, 2015 at 12:49 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Andrew

 +1 to it - I raised these concerns with the guys: we should not ship data
 to tasks that do not need it. It will enable us to increase security for
 the pluggable architecture

 On Fri, Feb 13, 2015 at 9:57 PM, Andrew Woodward xar...@gmail.com wrote:

 Cool, You guys read my mind o.O

 RE: the review. We need to avoid copying the secrets to nodes that don't
 require them. I think it might be too soon to be able to base this on
 granular tasks, but we need to move that way.

 Also, how are the astute tasks read into the environment? Same as with
 the others?

 fuel rel --sync-deployment-tasks


 On Fri, Feb 13, 2015 at 7:32 AM, Evgeniy L e...@mirantis.com wrote:

 Andrew,

 It looks like what you've described is already done for ssh keys [1].

 [1] https://review.openstack.org/#/c/149543/

 On Fri, Feb 13, 2015 at 6:12 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 +1 to Andrew

 This is actually what we want to do with SSL keys.

 On Wed, Feb 11, 2015 at 3:26 AM, Andrew Woodward xar...@gmail.com
 wrote:

 We need to be highly security conscious here: doing this in an insecure
 manner is a HUGE risk. Rsync over ssh from the master node (or scp) is
 usually OK, but the plain rsync protocol from a node in the cluster will
 be BAD (it leaves the certs exposed on a weak service).

 I could see this being implemented as some additional task type that
 can be run on the Fuel master node instead of a target node. This
 could also be useful for plugin writers that may need to access some
 external API as part of their task graph. We'd need some way to make the
 generate task run once for the env, vs the push-certs task which runs for
 each role that has a cert requirement.

 we'd end up with something like:

  generate_certs:
    runs_from: master_once
    provider: whatever
  push_certs:
    runs_from: master
    provider: bash
    role: [*]

 On Thu, Jan 29, 2015 at 2:07 PM, Vladimir Kuklin vkuk...@mirantis.com
  wrote:

 Evgeniy,

 I am not suggesting going to the Nailgun DB directly. There obviously
 should be some layer between a serializer and the DB.

 On Thu, Jan 29, 2015 at 9:07 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

  1) Nailgun DB

 Just a small note: we should not provide access to the database. This
 approach has serious issues; what we can do is provide this information,
 for example, via a REST API.

 What you are saying is already implemented in other deployment tools;
 for example, let's take a look at Ansible [1].

 What you can do there is create a task which stores the result of an
 executed shell command in some variable,
 and you can then reuse it in any other task. I think we should use this
 approach.

 [1]
 http://docs.ansible.com/playbooks_variables.html#registered-variables
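 The registered-variables pattern described in [1] looks roughly like this
 in a playbook (the script path and destination are illustrative):

```yaml
# run a command and capture its output in a variable...
- shell: /usr/local/bin/generate-key.sh
  register: key_result

# ...then reuse the captured output in a later task
- copy:
    content: "{{ key_result.stdout }}"
    dest: /etc/myservice/key.pem
```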

 On Thu, Jan 29, 2015 at 2:47 PM, Vladimir Kuklin 
 vkuk...@mirantis.com wrote:

 Evgeniy

 This is not about layers - it is about how we get data. And we need
 to separate data sources from the way we manipulate them. Thus, sources
 may be: 1) the Nailgun DB 2) the user's inventory system 3) open data,
 like a list of Google DNS servers. Then all this data is aggregated and
 transformed somehow. After that it is shipped to the deployment layer.
 That's how I see it.

 On Thu, Jan 29, 2015 at 2:18 PM, Evgeniy L e...@mirantis.com
 wrote:

 Vladimir,

 It's not clear how it's going to help. You can generate keys with one
 task and then upload them with another task; why do we need
 another layer/entity here?

 Thanks,

 On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin 
 vkuk...@mirantis.com wrote:

 Dmitry, Evgeniy

 This is exactly what I was talking about when I mentioned
 serializers for tasks - taking data from 3rd-party sources if the user
 wants. In this case the user will be able to generate some data somewhere
 and fetch it using this code that we import.

 On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Thank you guys for the quick response.
 Then, if there is no better option, we will follow the second
 approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com
 wrote:

 Hi Dmitry,

 I'm not sure if we should use the approach where the task executor reads
 some data from the file system; ideally Nailgun should push
 all of the required data to Astute.
 But it can be tricky to 

Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread Renat Akhmerov
Hi again,

Sorry, I started writing this email before Angus replied so I will shoot it as 
is and then we can continue…


So after discussing all the options again with a small group of team members
we came to the following:

Syntax options that we'd like to discuss further 

<% 1 + 1 %>  # pro: ruby/js/puppet/chef familiarity; con: spaces, and <% is
             # too large a symbol
<{1 + 1}>    # pro: fewer spaces; con: no familiarity
<? 1 + 1 ?>  # pro: PHP familiarity; con: needs spaces

The primary criterion for selecting these 3 options is that they are YAML 
compatible. Technically they would all solve our problems (primarily, no 
surrounding quotes needed as in Ansible, so no ambiguity about data types).

The secondary criterion is syntax symmetry. After all, I agree with Patrick's 
point about better readability when we have opening and closing sequences
alike.
Some additional details can be found in [0]


[0] https://etherpad.openstack.org/p/mistral-YAQL-delimiters
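As a tiny illustration of the first option (assuming the angle brackets
stripped by the mail archive, i.e. `<% ... %>`), pulling the embedded
expressions out of a DSL string value is straightforward; the helper name
is hypothetical:

```python
import re

# Find expression bodies embedded in a workflow string value, assuming the
# "<% ... %>" delimiter option under discussion.
EXPR_RE = re.compile(r"<%\s*(.+?)\s*%>")

def extract_expressions(value):
    """Return the expression bodies embedded in a DSL string value."""
    return EXPR_RE.findall(value)

print(extract_expressions("Hello <% $.name %>, total: <% 1 + 1 %>"))
# ['$.name', '1 + 1']
```

Note that because the whole value stays a plain YAML string, no quoting
gymnastics are needed around the braces.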

Renat Akhmerov
@ Mirantis Inc.


 On 18 Feb 2015, at 07:37, Patrick Hoolboom patr...@stackstorm.com wrote:
 
 My main concern with the {} delimiters in YAQL is that the curly brace 
 already has a defined use within YAML. We will most definitely eventually 
 run into parsing errors with whatever delimiter we choose, but I don't feel 
 that it should conflict with the markup language it is directly embedded
 in. It gets quite difficult to identify YAQL expressions at a glance.
 <% %> may appear ugly to some, but I feel that it works as a clear
 delimiter of both the beginning AND the end of the YAQL query. The options
 that only escape the beginning look fine in small examples like this, but
 the workflows that we have written or seen in the wild tend to have some
 fairly large expressions. If the opening and closing delimiters don't
 match, it gets quite difficult to read.
 
 From: Anastasia Kuznetsova akuznets...@mirantis.com
 Subject: Re: [openstack-dev] [Mistral] Changing expression delimiters in 
 Mistral DSL
 Date: February 17, 2015 at 8:28:27 AM PST
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 As for me, I think that <% ... %> is not an elegant solution; it looks 
 massive because of the '%' sign. Also, I agree with Renat that <% ... %>
 reminds one of HTML/Jinja2 syntax. 
 
 I am not sure that similarity with something should be one of the main 
 criteria, because we don't know who will use Mistral.
 
 I like:
 - {1 + $.var} (Renat's example)
 - the variant using some functions (item 2 in Dmitry's list): { yaql: 
 “1+1+$.my.var  100” } or yaql: 'Hello' + $.name 
 - my two cents: maybe we can use something like: result: - Hello + $.name 
 -
 
 
 Regards,
 Anastasia Kuznetsova
 
 On Tue, Feb 17, 2015 at 1:17 PM, Nikolay Makhotkin
 nmakhot...@mirantis.com wrote:
 Some suggestions from me: 
 
 1. y 1 + $.var  # (short for yaql)
 2. { 1 + $.var }  # as for me, this looks more elegant than <% %>. And
 visually it is stronger
 
 I also like p7 and p8, suggested by Renat.
 
 On Tue, Feb 17, 2015 at 11:43 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:
 One more:
 
 p9: \{1 + $.var} # That’s pretty much what 
 https://review.openstack.org/#/c/155348/ addresses, but not exactly.
 Note that we don’t have to put it in quotes in this case to deal with 
 YAML {} semantics; it’s just a string
 
 
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 17 Feb 2015, at 13:37, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Along with the <% %> syntax, here are some other alternatives that I
 checked for YAML friendliness, with my short comments:
 
 p1: ${1 + $.var}# Here it’s bad that the $ sign is used for two 
 different things
 p2: ~{1 + $.var}# ~ is easy to miss in a text
 p3: ^{1 + $.var}# May be associated with regular expressions by some
 p4: ?{1 + $.var}
 p5: {1 + $.var}   # This is kinda crazy
 p6: e{1 + $.var}# That looks like a pretty interesting option to me;
 “e” could mean “expression” here
 p7: yaql{1 + $.var} # This is interesting because it would give a clear and 
 easy mechanism to plug in other expression languages; “yaql” here is the
 dialect used for the following expression
 p8: y{1 + $.var}# “y” here is just shortened “yaql”
 
 
 Any ideas and thoughts would be really appreciated!
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 17 Feb 2015, at 12:53, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Dmitri,
 
 I agree with all your reasoning and fully support the idea of changing 
 the syntax now as well as changing 

Re: [openstack-dev] [neutron] neutron-drivers meeting

2015-02-18 Thread Akihiro Motoki
Nice to have it!

2015-02-18 5:05 GMT+09:00 Armando M. arma...@gmail.com:
 Hi folks,

 I was wondering if we should have a special neutron-drivers meeting on
 Wednesday Feb 18th (9:30AM CST / 7:30AM PST) to discuss recent patches on
 which a few cores have not reached consensus, namely:

 - https://review.openstack.org/#/c/155373/
 - https://review.openstack.org/#/c/148318/

 The end of the Kilo cycle is fast approaching, and a speedy resolution of
 these matters would be better. I fear that leaving these items to the Open
 Discussion slot in the weekly IRC meeting will not give us enough time.

 Is there any other item where we need to get consensus on?

 Anyone is welcome to join.

 Thanks,
 Armando



-- 
Akihiro Motoki amot...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-18 Thread Daniel P. Berrange
On Tue, Feb 17, 2015 at 09:32:53AM -0800, Stefano Maffulli wrote:
 Changing the subject since Flavio's call for openness was broader than
 just private IRC channels.
 
 On Tue, 2015-02-17 at 10:37 +, Daniel P. Berrange wrote:
  If cases of bad community behaviour, such as use of password-protected
  IRC channels, are always dealt with primarily via further private
  communications, then we are denying voters the information they
  need to hold people to account. I can understand the desire to avoid
  publicly shaming people right away, because the accusations may be
  false, or may arise from a simple misunderstanding, but at some
  point genuine issues like this need to be made public. Without this we make
  it difficult for contributors to make an informed decision at future
  elections.
 
 You got my intention right: I wanted to understand better what led some
 people to create a private channel, and what their needs were. For that
 objective, an accusatory tone wouldn't have got us anywhere; instead I
 needed to provide them a safe place to discuss, and then I would report
 back in the open.

Reporting back on the explanations is great, but what I'm trying to
understand is at what point you would consider saying *who* was running
the private IRC channels? Would you intend for that to be private forever,
or would you make a judgement call on whether the explanations provided are
acceptable, or something else?

If it is kept private, then I think we are unable to meaningfully
participate in project elections, because the information that is
directly relevant to the people we are potentially voting for in
future elections, is withheld from us. I'm sure you would make a
decision that you considered to be in the best interests of the
project, but ultimately it will always be a subjective decision.

 So far, I've received comments in private from only one person,
 concerned about public logging of channels without notification. I
 wish the people hanging out on at least one of these private channels
 would provide more insight into their choice, but so far they have not.
 
 Regarding the why: at least one person told me they prefer not to use
 official OpenStack IRC channels because there is no notification when a
 channel is being publicly logged. Together with freenode not obfuscating
 host names, and eavesdrop logs being available to any spammer, at least
 one person is concerned that private information may leak. There may also be
 legal implications in Europe, under the Data Protection Directive, since
 IP addresses and hostnames can be considered sensitive data. Not to
 mention the casual dropping of emails or phone numbers in public, logged
 channels.

To me this all just feels like an attempt to come up with a justification
after the fact. Further, everything said there applies just as much to
participation over email as via IRC. The spammer problem and information
leakage are arguably far worse over email. Ultimately this is supposed to be
an open collaborative project, so by its very nature you have to accept that
information and discussions are in the open and subject to viewing by anyone
at any time, whether they are other contributors, users, or spammers.

Ultimately though, this is just my personal POV on the matter, and other
contributors in the community may feel this justification that was provided
is acceptable to them. Everyone is entitled to make up their own mind on the
matter. This is why I feel that if the issue reported is confirmed to be
true, then the explanations offered should be made in public to allow each
person to make their own subjective decision.

 I think these points are worth discussing. One easy fix this person
 suggests is to make it the default that all channels are logged and write a
 warning on the wiki/IRC page. Another is to make the channel bot announce
 whether the channel is logged. A third is to clean the hostname details on
 joins/parts from eavesdrop and put the logs behind a login (to hide them
 from spam harvesters).

Personally I think all our IRC channels should be logged. There is really
no expectation of privacy when using IRC in an open collaborative project.

Scrubbing hostnames/ip addresses from logs is pretty reasonable. As a
comparison with email, mailman archives will typically have email addresses
either scrubbed or obfuscated.

I would object to them being put behind a login of any kind, because that
turns the logs into an information blackhole: it prevents Google, etc.,
from indexing them. There are plenty of times when search results end up
taking you to IRC logs, and that is too valuable to lose just because
people want some security through obscurity for their hostnames.

It sucks that there are spammers on the internet, but the basis of an
open project is openness to anyone, and sadly that includes
spammers we'd all really rather went away. As soon as you start
trying to close it off to certain people, you cause harm to the community
as a whole, 

Re: [openstack-dev] [mistral] mistral actions plugin architecture

2015-02-18 Thread Renat Akhmerov
Hi Filip,

Well, it's not necessary to keep custom action sources as part of the Mistral
sources. You can keep them anywhere else; the only requirement is that they
must be registered globally as Python packages, so that when Mistral parses
the entry points in setup.cfg it can find the needed classes by their fully
qualified names. To do that you can use the regular, well-known Python
packaging procedures (setuptools etc.)
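To make that concrete, here is a minimal, self-contained sketch of such a plugin action. Names are hypothetical; in a real plugin you would inherit from mistral.actions.base.Action, and the entry-point group name shown in the comment is an assumption:

```python
# Sketch of a custom Mistral action plugin. A stand-in base class is
# defined here so the example is self-contained; a real plugin would
# inherit from mistral.actions.base.Action instead.

class Action(object):  # stand-in for mistral.actions.base.Action
    def run(self):
        raise NotImplementedError


class EchoAction(Action):
    """Hypothetical custom action: returns whatever it was given."""

    def __init__(self, output):
        self.output = output

    def run(self):
        return self.output


# In your own package's setup.cfg you would register the class under
# Mistral's actions entry-point group (the group name here is an
# assumption, check the Mistral docs for the exact name):
#
#   [entry_points]
#   mistral.actions =
#       my_echo = my_package.actions:EchoAction
#
# Mistral can then resolve "my_echo" to the class by its fully
# qualified name, with no need to keep the code in the Mistral tree.

if __name__ == "__main__":
    print(EchoAction("hello").run())  # hello
```

The point is that the lookup goes through setuptools' global entry-point registry, so the package only has to be installed, not co-located with Mistral.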

Hope this helps

Renat Akhmerov
@ Mirantis Inc.



 On 18 Feb 2015, at 14:55, Filip Blaha filip.bl...@hp.com wrote:
 
 Thanks for answer!
 
 A custom action inherits from base.Action. So if I need to write a custom
 action in a different project and register it via entry points, then I need
 a dependency on the Mistral sources. Is that correct? Or is there a way to
 create a custom action without that dependency?
 
 Regards
 Filip
 
 On 02/09/2015 04:58 PM, Renat Akhmerov wrote:
 Hi,
 
 It's pretty simple and described in
 http://mistral.readthedocs.org/en/master/developer/writing_a_plugin_action.html.
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 09 Feb 2015, at 21:43, Filip Blaha filip.bl...@hp.com wrote:
 
 Hi all,
 
 According to [1] there should be some plugin mechanism for custom actions
 in Mistral. I went through the code and found an introspection mechanism
 [2] generating Mistral actions from methods on client classes of the
 OpenStack core projects. E.g. it takes the nova client class
 (python-novaclient), introspects its methods and their parameters, and
 creates corresponding actions with corresponding parameters. The same goes
 for other core projects like neutron, cinder, ... However, the list of
 these client classes seems to be hardcoded [3]. So I am not sure whether
 this mechanism can be used for other projects, like the murano client, to
 create murano-related actions in Mistral. Is there any other pluggable
 mechanism to get murano actions into Mistral without hardcoding them in the
 mistral project?
 
 [1] 
 https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign#Plugin_Architecture
 [2] 
 https://github.com/stackforge/mistral/blob/master/mistral/actions/openstack/action_generator/base.py#L91
 [3] 
 https://github.com/stackforge/mistral/blob/master/mistral/actions/generator_factory.py
  
 
 
 Regards
 Filip
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] On Object placement

2015-02-18 Thread Christian Schwede
Hello Jonathan,

On 17.02.15 22:17, Halterman, Jonathan wrote:
 Various services desire the ability to control the location of data
 placed in Swift in order to minimize network saturation when moving data
 to compute, or in the case of services like Hadoop, to ensure that
 compute can be moved to wherever the data resides. Read/write latency
 can also be minimized by allowing authorized services to place one or
 more replicas onto the same rack (with other replicas being placed on
 separate racks). Fault tolerance can also be enhanced by ensuring that
 some replica(s) are placed onto separate racks. Breaking this down we
 come up with the following potential requirements:
 
 1. Swift should allow authorized services to place a given number of
 object replicas onto a particular rack, and onto separate racks.

This is already possible if you use zones and regions in your ring
files. For example, if you have 2 racks, you could assign one zone to
each of them and Swift places at least one replica on each rack.
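As a toy illustration of that behaviour (this is NOT Swift's actual ring algorithm, which uses consistent hashing and device weights; it only models the "replicas in as many distinct zones as possible" idea):

```python
# Toy model of zone-aware replica placement. Not Swift's ring code:
# it only illustrates placing replicas across as many zones as possible,
# reusing zones only when there are fewer zones than replicas.

def place_replicas(devices, replica_count=3):
    """devices: list of (device_name, zone) pairs; returns chosen names."""
    chosen, used_zones = [], set()
    # First pass: prefer devices in zones not yet holding a replica.
    for name, zone in devices:
        if len(chosen) == replica_count:
            break
        if zone not in used_zones:
            chosen.append(name)
            used_zones.add(zone)
    # Second pass: if there are fewer zones than replicas, reuse zones.
    for name, zone in devices:
        if len(chosen) == replica_count:
            break
        if name not in chosen:
            chosen.append(name)
    return chosen


devices = [("d1", "z1"), ("d2", "z1"), ("d3", "z2"), ("d4", "z2")]
# Two zones, three replicas: each zone gets at least one replica.
print(place_replicas(devices))  # ['d1', 'd3', 'd2']
```

With two racks mapped to two zones, every object ends up with at least one replica per rack, which is the guarantee described above.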

Because Swift takes device weights into account, you could also ensure that
a specific rack gets two copies and another rack only one.
However, this is only true as long as all primary nodes are accessible.
If Swift stores data on a handoff node, the data might be written to a
different node first and moved to the primary node later on.

Note that placing objects on other than the primary nodes (for example
using an authorized service you described) will only store the data on
these nodes until the replicator moves the data to the primary nodes
described by the ring.
As far as I can see there is no way to ensure that an authorized service
can decide where to place data, and that this data stays on the selected
nodes. That would require a fundamental change within Swift.

 2. Swift should allow authorized services and administrators to learn
 which racks an object resides on, along with endpoints.

You already mentioned the endpoint middleware, though it is currently
not protected and unauthenticated access is allowed if enabled. You
could easily add another small middleware in the pipeline to check
authentication and grant or deny access to /endpoints based on the
authentication.
You can also get the node (and disk) if you have access to the ring
files. There is a tool included in the Swift source code called
swift-get-nodes; you could simply reuse the existing code and
include it in your projects.

Christian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-18 Thread Chmouel Boudjnah
Daniel P. Berrange berra...@redhat.com writes:

 Personally I think all our IRC channels should be logged. There is really
 no expectation of privacy when using IRC in an open collaborative project.

Agreed with Daniel. I am not sure how anyone can assume that a publicly
available forum/channel is not going to have records available publicly.

Chmouel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [devstack] About _member_ role

2015-02-18 Thread Pasquale Porreca
I saw 2 different bug reports saying that the Devstack dashboard gives an
error when trying to manage projects:
https://bugs.launchpad.net/devstack/+bug/1421616 and
https://bugs.launchpad.net/horizon/+bug/1421999
In my devstack environment projects were working just fine, so I tried a
fresh installation to see if I could reproduce the bug, and I can confirm
that the bug is present in the current devstack deployment.
Both reports point to the lack of the _member_ role as the cause of this
error, so I tried to manually (i.e. via CLI) add a _member_ role and
verified that just having it - even if not assigned to any user - fixes
project management in Horizon.

I haven't deeply analyzed the root cause yet, but this behaviour seemed
quite weird, which is the reason I sent this mail to the dev list.
Your explanation somewhat confirmed my doubts: I presume that adding a
_member_ role is merely a workaround and the real bug is somewhere else
- most likely in the Horizon code.

On 02/17/15 21:01, Jamie Lennox wrote:

 - Original Message -
 From: Pasquale Porreca pasquale.porr...@dektech.com.au
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, 17 February, 2015 9:07:14 PM
 Subject: [openstack-dev]  [Keystone] [devstack] About _member_ role

 I proposed a fix for a bug in devstack
 https://review.openstack.org/#/c/156527/ caused by the fact the role
 _member_ was not anymore created due to a recent change.

 But why is the existence of the _member_ role necessary, even if it is not
 actually used? Is this a known/wanted feature or a bug in itself?
 So the way to be a 'member' of a project, so that you can get a token scoped 
 to that project, is to have a role defined on that project. 
 The way we handled that from keystone for default_projects was to create 
 a default role, _member_, which had no permissions attached to it; by 
 assigning it to the user on the project we granted membership of that project.
 If the user has any other roles on the project then the _member_ role is 
 essentially ignored. 

 In that devstack patch I removed the default project because we want our 
 users to explicitly ask for the project they want to be scoped to.
 This patch shouldn't have caused any issues though because in each of those 
 cases the user is immediately granted a different role on the project - 
 therefore having 'membership'. 

 Creating the _member_ role manually won't cause any problems, but what issue 
 are you seeing that makes you need it?


 Jamie


 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-18 Thread Vladimir Kuklin
Andrew

+1 to it - I raised these concerns with the guys: we should not ship data
to tasks that do not need it. It will let us increase security for the
pluggable architecture.

On Fri, Feb 13, 2015 at 9:57 PM, Andrew Woodward xar...@gmail.com wrote:

 Cool, You guys read my mind o.O

 RE: the review. We need to avoid copying the secrets to nodes that don't
 require them. I think it might be too soon to be able to make granular
 tasks based for this, but we need to move that way.

 Also, how are the astute tasks read into the environment? Same as with the
 others?

 fuel rel --sync-deployment-tasks


 On Fri, Feb 13, 2015 at 7:32 AM, Evgeniy L e...@mirantis.com wrote:

 Andrew,

 It looks like what you've described is already done for ssh keys [1].

 [1] https://review.openstack.org/#/c/149543/

 On Fri, Feb 13, 2015 at 6:12 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 +1 to Andrew

 This is actually what we want to do with SSL keys.

 On Wed, Feb 11, 2015 at 3:26 AM, Andrew Woodward xar...@gmail.com
 wrote:

 We need to be highly security-conscious here: doing this in an insecure
 manner is a HUGE risk. Rsync over ssh (or scp) from the master node is
 usually OK, but the rsync protocol from a node in the cluster will be BAD
 (it leaves the certs exposed on a weak service).

 I could see this being implemented as an additional task type that
 can be run on the Fuel master node instead of a target node. This
 could also be useful for plugin writers that may need to access some
 external API as part of their task graph. We'd need some way to make the
 generate task run once per env, vs the push-certs task which runs for each
 role that has a cert requirement.

 We'd end up with something like:

 generate_certs:
   runs_from: master_once
   provider: whatever
 push_certs:
   runs_from: master
   provider: bash
   role: [*]

 On Thu, Jan 29, 2015 at 2:07 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Evgeniy,

 I am not suggesting to go to Nailgun DB directly. There obviously
 should be some layer between a serializier and DB.

 On Thu, Jan 29, 2015 at 9:07 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

  1) Nailgun DB

 Just a small note: we should not provide access to the database, since this
 approach has serious issues. What we can do is provide this information,
 for example, via the REST API.

 What you are describing is already implemented in other deployment tools;
 for example, let's take a look at Ansible [1].

 There you can create a task which stores the result of an executed shell
 command in some variable, and you can reuse it in any other task. I think
 we should use this approach.

 [1]
 http://docs.ansible.com/playbooks_variables.html#registered-variables

 On Thu, Jan 29, 2015 at 2:47 PM, Vladimir Kuklin 
 vkuk...@mirantis.com wrote:

 Evgeniy

 This is not about layers - it is about how we get data. We need to
 separate data sources from the way we manipulate them. Sources may be:
 1) the Nailgun DB, 2) the user's inventory system, 3) open data, like a
 list of Google DNS servers. All this data is then aggregated and
 transformed somehow. After that it is shipped to the deployment layer.
 That's how I see it.

 On Thu, Jan 29, 2015 at 2:18 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

 It's not clear how that's going to help. You can generate keys with one
 task and then upload them with another task; why do we need
 another layer/entity here?

 Thanks,

 On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin 
 vkuk...@mirantis.com wrote:

 Dmitry, Evgeniy

 This is exactly what I was talking about when I mentioned
 serializers for tasks - taking data from 3rd party sources if user 
 wants.
 In this case user will be able to generate some data somewhere and 
 fetch it
 using this code that we import.

 On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Thank you guys for quick response.
 Than, if there is no better option we will follow with second
 approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com
 wrote:

 Hi Dmitry,

 I'm not sure we should use an approach where the task executor reads
 some data from the file system; ideally Nailgun should push
 all of the required data to Astute.
 But that can be tricky to implement, so I vote for the 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 3rd option is about using rsyncd that we run under xinetd on
 primary controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I vote for the second option, because if we want to implement some unified
 hierarchy (like Fuel as a CA for keys on controllers for different envs)
 it will fit better than the other options. If we implement the 3rd option
 we will just reinvent the SSL wheel in the future. Bare rsync as storage
 for private keys sounds pretty uncomfortable to me.

 On Wed, Jan 

[openstack-dev] [Fuel] Python code in fuel-library

2015-02-18 Thread Sebastian Kalinowski
Hello Fuelers,

There is more and more Python code appearing in fuel-library [1] that is
used in our Puppet manifests. Now, with the introduction of the Granular
Deployment feature, it may appear even more often, as writing some tasks as
Python scripts is a nice option.

The first problem I see is that in some cases this code is getting merged
without a positive review from a Python developer from the Fuel team.
My proposed solution is simple:
fuel-library core reviewers shouldn't merge such code if there is no +1
from a Python developer from the fuel-core group [2].

The second problem is that there are no automated tests for this code.
Testing it manually, by running a deployment that uses the code, is not
enough, since those scripts can be quite large and complicated, and some of
them are executed only in specific situations, so it is hard for reviewers
to check how they will work.
In fuel-library we already have tests for Puppet modules: [3].
I suggest that we should introduce similar checks for Python code:
 - there will be one global 'test-requirements.txt' file (if there is a
need, we could introduce a more granular split, like per module)
 - py.test [4] will be used as a test runner
 - (optional, but advised) flake8+hacking checks [5] (could be limited to
just run flake8/pyflakes checks)
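To sketch what such a check could look like (hypothetical helper and names; this assumes py.test-style test collection as proposed above):

```python
# Hypothetical fuel-library helper (imagine it living next to a granular
# deployment task) plus a py.test-style unit test for it. py.test collects
# any function named test_*, so running `py.test` in the repository would
# pick this up automatically once test-requirements.txt pulls it in.

def parse_cidr(cidr):
    """Split a CIDR string like '10.20.0.0/24' into (network, prefix_len)."""
    network, prefix = cidr.split("/")
    return network, int(prefix)


def test_parse_cidr():
    assert parse_cidr("10.20.0.0/24") == ("10.20.0.0", 24)
    assert parse_cidr("192.168.1.0/16")[1] == 16
```

A reviewer can then check the script's edge cases from the test file instead of having to reproduce the deployment scenario by hand.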

Looking forward to your opinions on those two issues.

Best,
Sebastian

[1] https://github.com/stackforge/fuel-library/search?l=python
[2] https://review.openstack.org/#/admin/groups/209,members
[3] https://fuel-jenkins.mirantis.com/job/fuellib_unit_tests/
[4] http://pytest.org/latest/
[5] https://github.com/openstack-dev/hacking
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] mistral actions plugin architecture

2015-02-18 Thread Filip Blaha

Thanks for answer!

A custom action inherits from base.Action. So if I need to write a custom 
action in a different project and register it via entry points, then I need 
a dependency on the Mistral sources. Is that correct? Or is there a way to 
create a custom action without that dependency?


Regards
Filip

On 02/09/2015 04:58 PM, Renat Akhmerov wrote:

Hi,

It’s pretty simple and described in 
http://mistral.readthedocs.org/en/master/developer/writing_a_plugin_action.html.


Renat Akhmerov
@ Mirantis Inc.



On 09 Feb 2015, at 21:43, Filip Blaha filip.bl...@hp.com wrote:


Hi all,

According to [1] there should be some plugin mechanism for custom 
actions in Mistral. I went through the code and found an 
introspection mechanism [2] generating Mistral actions from methods 
on client classes of the OpenStack core projects. E.g. it takes the 
nova client class (python-novaclient), introspects its methods and 
their parameters, and creates corresponding actions with 
corresponding parameters. The same goes for other core projects like 
neutron, cinder, ... However, the list of these client classes seems 
to be hardcoded [3]. So I am not sure whether this mechanism can be 
used for other projects, like the murano client, to create 
murano-related actions in Mistral. Is there any other pluggable 
mechanism to get murano actions into Mistral without hardcoding them 
in the mistral project?


[1] 
https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign#Plugin_Architecture 

[2] 
https://github.com/stackforge/mistral/blob/master/mistral/actions/openstack/action_generator/base.py#L91 

[3] 
https://github.com/stackforge/mistral/blob/master/mistral/actions/generator_factory.py 




Regards
Filip



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread Angus Salkeld
On Tue, Feb 17, 2015 at 7:06 AM, Dmitri Zimine dzim...@stackstorm.com
wrote:

 SUMMARY:
 

 We are changing the syntax for inlining YAQL expressions in Mistral YAML
 from {1+$.my.var} (or "{1+$.my.var}") to <% 1+$.my.var %>

 Below I explain the rationale and the criteria for the choice. Comments
 and suggestions welcome.

 DETAILS:
 -

 We faced a number of problems with using YAQL expressions in Mistral DSL:
 [1] it must handle any YAQL, not only expressions starting with $; [2] it
 must preserve types; and [3] it must comply with YAML. We fixed these
 problems by adopting Ansible-style syntax, requiring quotes around the
 delimiters (e.g. "{1+$.my.yaql.var}"). However, it led to unbearable
 confusion in DSL readability, in regards to types:

 publish:
   intvalue1: "{1+1}"        # Confusing: you expect quotes to mean a string.
   intvalue2: "{int(1+1)}"   # Even this doesn't clear up the confusion.
   whatisthis: "{$.x + $.y}" # What type would this return?

 We got very strong push back on this syntax from users in the field.

 The crux of the problem is using { } as delimiters in YAML. It is plain
 wrong to use a reserved character. The clean solution is to find a
 delimiter that won't conflict with YAML.

 The criteria for selecting the best alternative are:
 1) Applies consistently to all cases of using YAQL in the DSL
 2) Complies with YAML
 3) Familiar to the target user audience - openstack and devops

 We prefer using two-char delimiters to avoid requiring extra escaping
 within the expressions.

 The current winner is <% %>. It fits YAML well. It is familiar to
 openstack/devops people, as it is used for embedding Ruby expressions in
 Puppet and Chef (for instance, [4]). It plays relatively well across all
 cases of using expressions in Mistral (see examples in [5]):
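As a toy illustration of why distinct delimiters help with type preservation (this is not Mistral's actual evaluator; eval() stands in for YAQL, and the function names are made up):

```python
import re

# Toy evaluator: a value that consists of exactly one <% ... %> expression
# is evaluated and keeps the expression's type; anything else stays a
# plain string. NOT Mistral's real parser - eval() stands in for YAQL.
EXPR = re.compile(r"^<%\s*(.+?)\s*%>$")


def evaluate(value, ctx):
    match = EXPR.match(value)
    if not match:
        return value  # no delimiters: a plain string
    return eval(match.group(1), {}, ctx)  # YAQL would be invoked here


ctx = {"x": 1, "y": 2}
print(evaluate("<% 1 + x %>", ctx))  # 2 (an int, not the string "2")
print(evaluate("x + y", ctx))        # x + y (no delimiters: a string)
```

Because <% %> has no meaning to the YAML parser, the value arrives intact as a scalar string and only the expression engine decides its final type, with no quoting gymnastics needed.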


A really long time ago I posted this patch for Heat:
https://review.openstack.org/#/c/41858/2/doc/source/template_guide/functions.rst
(adds a jinja2 function to Heat http://jinja.pocoo.org/docs/dev/)

I also used % %, it seems to be what people use when using jinja2 on yaml.

This was rejected because of security concerns of Jinja2.



 ALTERNATIVES considered:
 --

 1) Use Ansible-like syntax:
 http://docs.ansible.com/YAMLSyntax.html#gotchas
 Rejected for confusion around types. See above.

 2) Use functions, like Heat HOT or TOSCA:

 HOT templates and TOSCA don't seem to have a concept of typed variables to
 borrow from (please correct me if I missed it). But they have functions:
 function: { function_name: { foo: [parameter1, parameter2], bar: "xxx" } }.
 Applied to Mistral, it would look like:

 publish:
  - bool_var: { yaql: "1+1+$.my.var < 100" }


You *could* have the expression as a list, like this (but it might not work
in all cases):
{ yaql: [1, +, 1, +, $.my.var, <, 100] }

Generally in Heat we make the functions and args a natural part of the YAML
so it's not one big string that gets parsed separately.
Though it would be nice to have a common approach to this, so I am partial
to the one you have here.

-Angus



 Not bad, but currently rejected, as it reads worse than delimiter-based
 syntax, especially in the simplified one-line action invocation.

 3) <% %> paired with other symbols: php-style <? .. ?>


 *REFERENCES: *
 --

 [1] Allow arbitrary YAQL expressions, not just ones started with $ :
 https://github.com/stackforge/mistral/commit/5c10fb4b773cd60d81ed93aec33345c0bf8f58fd
 [2] Use Ansible-like syntax to make YAQL expressions YAML complient
 https://github.com/stackforge/mistral/commit/d9517333b1fc9697d4847df33d3b774f881a111b
 [3] Preserving types in YAQL
 https://github.com/stackforge/mistral/blob/d9517333b1fc9697d4847df33d3b774f881a111b/mistral/tests/unit/test_expressions.py#L152-L184
 [4]Using % % in Puppet
 https://docs.puppetlabs.com/guides/templating.html#erb-is-plain-text-with-embedded-ruby

 [5] Etherpad with discussion
 https://etherpad.openstack.org/p/mistral-YAQL-delimiters
 [6] Blueprint
 https://blueprints.launchpad.net/mistral/+spec/yaql-delimiters


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] FFE driver-private-data + pure-iscsi-chap-support

2015-02-18 Thread John Griffith
On Tue, Feb 17, 2015 at 11:36 PM, Mike Perez thin...@gmail.com wrote:
 On 14:50 Sun 15 Feb , Patrick East wrote:
 Hi All,

 I would like to request a FFE for the following blueprints:

 https://blueprints.launchpad.net/cinder/+spec/driver-private-data
 https://blueprints.launchpad.net/cinder/+spec/pure-iscsi-chap-support

 The first being a dependency for the second.

 The new database table for driver data feature was discussed at the Cinder
 mid-cycle meetup and seemed to be generally approved by the team in person
 at the meeting as something we can get into Kilo.

 There is currently a spec up for review for it here:
 https://review.openstack.org/#/c/15/ but doesn't look like it will be
 approved by the end of the day for the deadline. I have code pretty much
 ready to go for review as soon as the spec is approved, it is a relatively
 small patch set.

 I already told Patrick I would help see this change in Kilo. If I can get
 another Cinder core to sponsor this, that would be great.

 This change makes it possible for some drivers to support CHAP auth in
 their unique setups, and I'd rather not leave people out in Cinder.

 --
 Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+2 from me, I'm willing to sponsor this

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano][Congress][Mistral] policy guided fulfillment

2015-02-18 Thread Pospisil, Radek
Hello,

I would like to announce the integration of the Murano
(https://wiki.openstack.org/wiki/Murano) application catalog and the
Congress (https://wiki.openstack.org/wiki/Congress) policy service. We call
it policy guided fulfillment.

The idea (which is currently implemented) is to use Congress as the authority 
controlling deployment of Murano applications. The OpenStack administrator defines 
policy rules detecting violations of business policies. When an end user is 
about to deploy a Murano environment with applications, Congress decides 
whether the environment is OK (i.e., Murano deploys the environment) or not 
(i.e., Murano cancels the deployment).
In the future, Congress policies will be used to react to various events generated 
by an application, OpenStack, etc. - e.g., scaling, remediation, and so on.

More implementation details

* Murano environment (i.e., application/services + its properties + 
relationships) is mapped (decomposed) to Congress data model (i.e., simple 
rules, ...)

* There is a predefined policy rule predeploy_errors which has to be 
defined by an administrator to detect the 'errors'

o   example - do not allow instances with more than 2048MB of RAM in the 
environment

o   rule explanation - in an environment (eid), find a Murano object (obj_id) with 
property 'flavor' and check if the flavor's RAM size is over the given limit 
(2048MB). If so, an error message is created.

o   Congress rules are written in Datalog - check Congress 
(https://wiki.openstack.org/wiki/Congress) for the documentation

predeploy_errors(eid, obj_id, msg) :-
   murano:objects(obj_id, eid, type),
   murano:properties(obj_id, "flavor", flavor_name),
   flavor_ram(flavor_name, ram),
   gt(ram, 2048),
   murano:properties(obj_id, "name", obj_name),
   concat(obj_name, ": instance flavor has RAM size over 2048MB", msg)
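For illustration, what such a rule evaluates can be sketched in plain Python over simplified in-memory tables (the tables, their contents, and the helper below are invented for illustration - this is not Congress code):

```python
# Hypothetical in-memory sketch of what the predeploy_errors rule above
# evaluates; the tables and their contents are invented for illustration.
objects = [("inst-1", "env-1", "Instance")]            # (obj_id, eid, type)
properties = [("inst-1", "flavor", "m1.large"),
              ("inst-1", "name", "web-server")]        # (obj_id, prop, value)
flavor_ram = {"m1.small": 512, "m1.large": 4096}       # flavor -> RAM in MB

def predeploy_errors(limit=2048):
    errors = []
    for obj_id, eid, _type in objects:
        flavors = [v for o, p, v in properties if o == obj_id and p == "flavor"]
        names = [v for o, p, v in properties if o == obj_id and p == "name"]
        for flavor in flavors:
            if flavor_ram.get(flavor, 0) > limit and names:
                msg = "%s: instance flavor has RAM size over %dMB" % (names[0], limit)
                errors.append((eid, obj_id, msg))
    return errors

print(predeploy_errors())
# -> [('env-1', 'inst-1', 'web-server: instance flavor has RAM size over 2048MB')]
```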


* Murano enforces the policy check on the deploy action as follows

o   Murano uses Congress' simulation API for policy rule validations

o   In this case Congress temporarily stores the decomposed Murano environment in 
order to evaluate the rules

o   Murano creates a simulation sequence describing the environment decomposition

o   Murano calls the simulation API and processes the result - based on that, 
Murano either continues the deployment or the deployment fails.


You can find all the necessary steps to get it running, an example, and developer 
documentation as part of the Murano documentation: 
http://murano.readthedocs.org/en/latest/articles/policy_enf_index.html .

You can also vote for our summit presentation proposals if you are 
interested :)

* 
https://www.openstack.org/vote-vancouver/Presentation/introducing-policy-guided-fulfillment-to-openstack-allowing-your-organization-to-set-business-policies-to-affect-application-deployment

* 
https://www.openstack.org/vote-vancouver/Presentation/governing-murano-application-deployment-with-congress-policy


  regards,

Radek
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread Dmitri Zimine
Zane, Angus, thanks for your input!

This function-based syntax was considered. The trouble is that it ITSELF 
needs to be delimited, without pissing off the users :) Consider two key usages: 
input parameters and the shorthand syntax for action input. That's why we are 
looking for two-char symmetric (opening + closing) delimiters. 

task_with_full_syntax_input:
  action: std.ssh
  input: 
  cmd = "awk '{ print <% $.my_var %> }' /etc/passwd"

task_with_shorthand_action_input:
  action: std.ssh cmd="awk '{ print \"<% $.my_var %>\" }' /etc/passwd"

using function-like we still will have to do 
  action: std.ssh cmd="awk '{ print \"<% yaql {$.my_var} %>\" }' /etc/passwd"
that only adds confusion IMO. 

The full set of usages for YAQL in DSL is here: 
https://etherpad.openstack.org/p/mistral-YAQL-delimiters

DZ. 
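As a concrete illustration of the two-char symmetric delimiter idea, extracting expressions wrapped in `<% ... %>` markers (one of the candidates on the etherpad) comes down to matching the pair; the snippet below is a sketch, not Mistral code:

```python
import re

# Sketch: find YAQL expressions wrapped in symmetric <% ... %> delimiters.
# The delimiter choice is illustrative; how to escape them inside the
# shorthand action input is exactly what this thread is debating.
EXPR = re.compile(r"<%\s*(.*?)\s*%>")

line = "awk '{ print <% $.my_var %> }' /etc/passwd"
print(EXPR.findall(line))  # -> ['$.my_var']
```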

On Feb 18, 2015, at 7:22 AM, Zane Bitter zbit...@redhat.com wrote:

 On 16/02/15 16:06, Dmitri Zimine wrote:
 2) Use functions, like Heat HOT or TOSCA:
 
  HOT templates and TOSCA don't seem to have a concept of typed
  variables to borrow from (please correct me if I missed it). But they
  have functions: function: { function_name: {foo: [parameter1, parameter
  2], bar: "xxx"}}. Applied to Mistral, it would look like:
 
 publish:
  - bool_var: { yaql: "1+1+$.my.var < 100" }
 
 Not bad, but currently rejected as it reads worse than delimiter-based
 syntax, especially in simplified one-line action invocation.
 
 Note that you don't actually need the quotes there, so this would be 
 equivalent:
 
publish:
  - bool_var: {yaql: 1+1+$.my.var < 100}
 
 FWIW I am partial to this or to Renat's p7 suggestion:
 
publish:
  - bool_var: yaql{1+1+$.my.var < 100}
 
 Both offer the flexibility to introduce new syntax in the future without 
 breaking backwards compatibility.
 
 cheers,
 Zane.
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday February 19th at 22:00 UTC

2015-02-18 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, February 19th at 22:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

17:00 EST
07:00 JST
08:30 ACDT
23:00 CET
16:00 CST
14:00 PST

-Matt Treinish




Re: [openstack-dev] [swift] On Object placement

2015-02-18 Thread Halterman, Jonathan
Hi Christian - thanks for the response,

On 2/18/15, 1:53 AM, Christian Schwede christian.schw...@enovance.com
wrote:

Hello Jonathan,

On 17.02.15 22:17, Halterman, Jonathan wrote:
 Various services desire the ability to control the location of data
 placed in Swift in order to minimize network saturation when moving data
 to compute, or in the case of services like Hadoop, to ensure that
 compute can be moved to wherever the data resides. Read/write latency
 can also be minimized by allowing authorized services to place one or
 more replicas onto the same rack (with other replicas being placed on
 separate racks). Fault tolerance can also be enhanced by ensuring that
 some replica(s) are placed onto separate racks. Breaking this down we
 come up with the following potential requirements:
 
 1. Swift should allow authorized services to place a given number of
 object replicas onto a particular rack, and onto separate racks.

This is already possible if you use zones and regions in your ring
files. For example, if you have 2 racks, you could assign one zone to
each of them and Swift places at least one replica on each rack.

Because Swift takes care of the device weight you could also ensure that
a specific rack gets two copies, and another rack only one.

Presumably a deployment would/should match the DC layout, where racks
could correspond to AZs.

However, this is only true as long as all primary nodes are accessible.
If Swift stores data on a handoff node this data might be written to a
different node first, and moved to the primary node later on.

Note that placing objects on other than the primary nodes (for example
using an authorized service you described) will only store the data on
these nodes until the replicator moves the data to the primary nodes
described by the ring.
As far as I can see there is no way to ensure that an authorized service
can decide where to place data, and that this data stays on the selected
nodes. That would require a fundamental change within Swift.

So - how can we influence where data is stored? In terms of placement
based on a hash ring, I'm thinking of perhaps restricting the placement of
an object to a subset of the ring based on a zone. We can still hash an
object somewhere on the ring; for the purposes of controlling locality, we
just want it to be within (or outside) a particular zone. Any ideas?
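To make the idea concrete, here is a toy sketch of hashing an object onto a ring while restricting the candidate devices to one zone. Swift's real ring is partition-based and handles weights and handoff nodes, so this deliberately oversimplifies; the device list and replica logic are invented:

```python
import hashlib

# Toy sketch: hash-based placement restricted to devices in a given zone.
# Device list and replica selection are invented; Swift's real ring differs.
devices = [{"id": 0, "zone": 1}, {"id": 1, "zone": 1},
           {"id": 2, "zone": 2}, {"id": 3, "zone": 2}]

def place(obj_path, zone=None, replicas=2):
    # Restrict the candidate set to one zone when requested.
    candidates = [d for d in devices if zone is None or d["zone"] == zone]
    key = int(hashlib.md5(obj_path.encode()).hexdigest(), 16)
    start = key % len(candidates)
    return [candidates[(start + i) % len(candidates)] for i in range(replicas)]

# Both replicas stay within zone 1:
print([d["id"] for d in place("AUTH_acct/cont/obj", zone=1)])
```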


 2. Swift should allow authorized services and administrators to learn
 which racks an object resides on, along with endpoints.

You already mentioned the endpoint middleware, though it is currently
not protected and unauthenticated access is allowed if enabled.

This is good to know. We still need to learn which rack an object resides
on though. This information is important in determining whether a swift
object resides on the same rack as a VM.

You
could easily add another small middleware in the pipeline to check
authentication and grant or deny access to /endpoints based on the
authentication.
You can also get the node (and disk) if you have access to the ring
files. There is a tool included in the Swift source code called
swift-get-nodes; however you could simply reuse existing code to
include it in your projects.

I'm guessing this would not work for in-cloud services?

- jonathan


Christian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Fuel] tracking bugs superseded by blueprints

2015-02-18 Thread Andrew Woodward
Bogdan,

Yes, I think tracking the bugs like this would be beneficial. We should also
link them from the BP so that the implementer can track them. Launchpad adds
related blueprints at the bottom of the right column under the
subscribers, so we probably should also edit the description so that the
data is easy to see.

On Wed, Feb 18, 2015 at 8:12 AM, Bogdan Dobrelya bdobre...@mirantis.com
wrote:

 Hello.
 There is inconsistency in the triage process for Fuel bugs superseded by
 blueprints.
 The current approach is to set won't fix status for such bugs.
 But there are some cases we should clarify [0], [1].

 I vote to not track superseded bugs separately and keep them as won't
 fix, but update the status back to confirmed in case a regression is
 discovered. And if we want to backport an improvement tracked by a
 blueprint (just for an exceptional case), let's assign milestones to the
 related bugs.

 If we want to change the triage rules, let's announce that so the people
 won't get confused.

 [0] https://bugs.launchpad.net/fuel/+bug/1383741
 [1] https://bugs.launchpad.net/fuel/+bug/1422856

 --
 Best regards,
 Bogdan Dobrelya,
 Skype #bogdando_at_yahoo.com
 Irc #bogdando



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrew
Mirantis
Fuel community ambassador
Ceph community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Pluggable Auth for clients and where should it go

2015-02-18 Thread Justin Hammond
Just starting this discussion…

This is in reference to 
https://blueprints.launchpad.net/python-neutronclient/+spec/pluggable-neutronclient-auth

Originally the blueprint was for python-neutronclient only, but pluggable auth 
is a wide-reaching issue. With OSC/SDK on the horizon (however far), we should 
probably begin the discussion of how to best do this (if it hasn't been done).

A request: We have an immediate need to add pluggable auth to the 
python-neutronclient, modeled after python-novaclient's pluggable auth system, 
to maintain a consistent workflow for our users. After the discussion in the 
neutron-drivers meeting 
(http://eavesdrop.openstack.org/meetings/neutron_drivers/2015/neutron_drivers.2015-02-18-15.31.log.html)
 it is clear that python-neutronclient will survive for Kilo +12 months, at 
least. During that timeframe we'd like to have pluggable auth supported so we 
can bridge that gap. Beyond that immediate need, we are dedicated to making 
OSC/SDK the way to go in the future, and will gladly assist in adding said 
features.

We have a solution for our immediate need but that may not apply to 
OSC/SDK. So my questions are:


  *   Would you benefit from pluggable auth?
  *   What are you looking for in auth?
  *   Would you benefit from the python-neutronclient getting nova's auth 
capabilities?

Thank you for your time!

- Justin (roaet)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][keystone]

2015-02-18 Thread David Chadwick
I think this GUI is not intuitive to users and therefore should not be
encouraged or supported.

If you ask a user what authenticate via a Discovery Service means,
I think you will get some very strange answers. The same goes for
Authenticate using Default Protocol. Users will have no idea what that
means.

There has been a lot of research into how to support federated
authentication and there is a lot of practical experience across the
academic world from dozens of countries for many years. Most
universities now use federated login on a daily basis. We should use
this experience and follow best practise (which sadly does not involve
the screens that are being proposed here).

If you want to read more you can read a Good Practice Guide here

https://discovery.refeds.org/

It should help you to redesign the login page

regards

David

On 18/02/2015 16:06, Dolph Mathews wrote:
 
 On Fri, Feb 6, 2015 at 12:47 PM, Adam Young ayo...@redhat.com wrote:
 
 On 02/04/2015 03:54 PM, Thai Q Tran wrote:
 Hi all,

 I have been helping with the websso effort and wanted to get some
 feedback.
 Basically, users are presented with a login screen where they can
 select: credentials, default protocol, or discovery service.
 If user selects credentials, it works exactly the same way it
 works today.
 If user selects default protocol or discovery service, they can
 choose to be redirected to those pages.

 Keep in mind that this is a prototype, early feedback will be good.
 Here are the relevant patches:
 https://review.openstack.org/#/c/136177/
 https://review.openstack.org/#/c/136178/
 https://review.openstack.org/#/c/151842/

 I have attached the files and present them below:
 
 
 
 Replace the dropdown with a specific link for each protocol type:
 
 SAML and OpenID  are the only real contenders at the moment, but we
 will not likely have so many that it will clutter up the page.
 
 
 Agree, but the likelihood that a single IdP will support multiple
 protocols is probably low. Keystone certainly supports that from an API
 perspective, but I don't think it should be the default UX. Choose a
 remote IdP first, and then if *that* IdP supports multiple federation
 protocols, present them.
  
 
 
 Thanks for doing this.






 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2015-02-18 Thread Maxime Leroy
Hi Brent,

On Wed, Feb 18, 2015 at 3:26 PM, Brent Eagles beag...@redhat.com wrote:
[..]
 I want to get the ball rolling on this ASAP so, I've started on this as
 well and will be updating the etherpad accordingly. I'm also keen to get
 W.I.P./P.O.C. patches to go along with it. I'll notify on the mailing
 list (and direct so you don't miss it ;)) as soon as I've completed a
 reasonable first swipe through the spec (which should be in the next day
 or so).

 Cheers,

 Brent


Thanks for your help on this feature. I have just created an IRC channel,
#vif-plug-script-support, to talk about it.
I think it will help to synchronize effort on vif_plug_script
development. Anyone is welcome on this channel!

Cheers,
Maxime

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-18 Thread Nikolay Makhotkin
Hello!

Nova client's CLI parameter 'bypass_url' helps me. The client's API also
has a 'management_url' attribute; if this one is specified, the client
doesn't reauthenticate. Also, most of the clients have an 'endpoint' argument,
so the client doesn't make an extra call to keystone to retrieve a new token
and service_catalog.

Thank you for clarification!


On Mon, Feb 16, 2015 at 11:30 PM, Jamie Lennox jamielen...@redhat.com
wrote:



 - Original Message -
  From: Alexander Makarov amaka...@mirantis.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Tuesday, 17 February, 2015 4:00:05 AM
  Subject: Re: [openstack-dev] [keystone] [trusts] [all] How trusts should
 work by design?
 
 
 https://blueprints.launchpad.net/keystone/+spec/trust-scoped-re-authentication
 
  On Mon, Feb 16, 2015 at 7:57 PM, Alexander Makarov 
 amaka...@mirantis.com 
  wrote:
 
 
 
  We could soften this limitation a little by returning the token the client
  tries to authenticate with.
  I think we need to discuss it in community.
 
  On Mon, Feb 16, 2015 at 6:47 PM, Steven Hardy  sha...@redhat.com 
 wrote:
 
 
  On Mon, Feb 16, 2015 at 09:02:01PM +0600, Renat Akhmerov wrote:
   Yeah, clarification from keystone folks would be really helpful.
    If Nikolay's info is correct (I believe it is) then I actually don't
    understand why trusts are needed at all, they seem to be useless. My
    assumption is that they can be used only if we send requests directly to
    OpenStack services (w/o using clients) with a trust scoped token included in
    headers, that might work although I didn't check that yet myself.
    So please help us understand which one of my following assumptions is
    correct?
    1. We don't understand what trusts are.
    2. We use them in a wrong way. (If yes, then what's the correct usage?)
 
  One or both of these seems likely, possibly combined with bugs in the
  clients where they try to get a new token instead of using the one you
  provide (this is a common pattern in the shell case, as the token is
  re-requested to get a service catalog).
 
  This provides some (heat specific) information which may help somewhat:
 
 
 http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-1-trusts.html
 
   3. Trust mechanism itself is in development and can't be used at this
   point.
 
  IME trusts work fine, Heat has been using them since Havana with few
  problems.
 
   4. OpenStack clients need to be changed in some way to somehow bypass
   this keystone limitation?
 
  AFAICS it's not a keystone limitation, the behavior you're seeing is
  expected, and the 403 mentioned by Nikolay is just trusts working as
  designed.
 
  The key thing from a client perspective is:
 
  1. If you pass a trust-scoped token into the client, you must not request
  another token, normally this means you must provide an endpoint as you
  can't run the normal auth code which retrieves the service catalog.
 
  2. If you could pass a trust ID in, with a non-trust-scoped token, or
  username/password, the above limitation is removed, but AFAIK none of the
  CLI interfaces support a trust ID yet.
 
  3. If you're using a trust scoped token, you cannot create another trust
  (unless you've enabled chained delegation, which only landed recently in
  keystone). This means, for example, that you can't create a heat stack
  with a trust scoped token (when heat is configured to use trusts), unless
  you use chained delegation, because we create a trust internally.
 
  When you understand these constraints, it's definitely possible to
 create a
  trust and use it for requests to other services, for example, here's how
  you could use a trust-scoped token to call heat:
 
  heat --os-auth-token trust-scoped-token --os-no-client-auth
  --heat-url http://192.168.0.4:8004/v1/ project-id stack-list
 
  The pattern heat uses internally to work with trusts is:
 
  1. Use a trust_id and service user credentials to get a trust scoped
 token
  2. Pass the trust-scoped token into python clients for other projects,
  using the endpoint obtained during (1)
 
  This works fine, what you can't do is pass the trust scoped token in
  without explicitly defining the endpoint, because this triggers
  reauthentication, which as you've discovered, won't work.
 
  Hope that helps!
 
  Steve
 

 So I think what you are seeing, and what heat has come up against in the
 past is a limitation of the various python-*clients and not a problem of
 the actual delegation mechanism from the keystone point of view. This is a
 result of the basic authentication code being copied around between clients
 and then not being kept updated since... probably havana.

 The good news is that if you go with the session based approach then you
 can share these tokens amongst clients without the hacks.
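The constraint described above - reuse the provided token and endpoint instead of re-authenticating - can be sketched with a hypothetical minimal client (invented names, not any real client's code):

```python
# Hypothetical sketch of the behavior the thread describes: when both a
# token and an endpoint are supplied, use them directly; otherwise the
# client would re-authenticate, which fails for trust-scoped tokens.
class MiniClient(object):
    def __init__(self, token=None, endpoint=None):
        self.token = token
        self.endpoint = endpoint

    def request(self, path):
        if self.token and self.endpoint:
            # No reauthentication: safe for trust-scoped tokens.
            return (self.endpoint + path, {"X-Auth-Token": self.token})
        raise RuntimeError("would reauthenticate to fetch a service catalog")

c = MiniClient(token="trust-scoped-token", endpoint="http://192.168.0.4:8004/v1")
print(c.request("/stacks"))
```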

 The identity authentication plugins that keystoneclient offers (v2 and v3
 api for Token and Password) both accept a 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Jeremy Stanley
On 2015-02-18 10:00:31 -0500 (-0500), Doug Hellmann wrote:
 I'm interested in seeing what that list looks like. I suspect we have
 some libraries listed in the global requirements now that aren't
 actually used
[...]

Shameless plug for https://review.openstack.org/148071 . It turns up
a lot in master but each needs to be git-blame researched to confirm
why it was added so that we don't prematurely remove those with
pending changes in consuming projects. Doing the same for stable
branches would likely be a lot more clear-cut.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] neutron-drivers meeting

2015-02-18 Thread Kyle Mestery
On Tue, Feb 17, 2015 at 2:05 PM, Armando M. arma...@gmail.com wrote:

 Hi folks,

 I was wondering if we should have a special neutron-drivers meeting on
 Wednesday Feb 18th (9:30AM CST / 7:30AM PST) to discuss recent patches
 where a few cores have not reached consensus on, namely:

 - https://review.openstack.org/#/c/155373/
 - https://review.openstack.org/#/c/148318/

 The Kilo cycle end is fast approaching and a speedy resolution of these
 matters would be better. I fear that leaving these items to the Open
 Discussion slot in the weekly IRC meeting will not give us enough time.

 Is there any other item where we need to get consensus on?

 Anyone is welcome to join.

 Just to followup on this:

We had a productive meeting today on this [1]. We were able to work
together and come to a conclusion on both items presented here. We now have
a way forward for both that we're executing on now. See the logs for the
discussion.

Thanks to all for attending on short notice!

Kyle

[1]
http://eavesdrop.openstack.org/meetings/neutron_drivers/2015/neutron_drivers.2015-02-18-15.31.log.html


 Thanks,
 Armando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pluggable Auth for clients and where should it go

2015-02-18 Thread Kevin Benton
Perhaps I am misunderstanding, but doesn't the OSC support for pluggable
auth just come for free from Neutron's perspective? (i.e. we don't have to
make any Neutron-specific changes for that to work)

What I was hoping here was that we could get something in the Neutron
client that works with the older auth plugins written for the Nova client
to support setups not using OSC (specifically the Nova-Neutron
interactions). I didn't mean that I didn't want to support OSC at all.

On Wed, Feb 18, 2015 at 11:16 AM, Tim Bell tim.b...@cern.ch wrote:

  Asking on the operators mailing list may yield more examples where
 people are using the Neutron client.



 From the CERN perspective, we use OSC heavily now it has Kerberos and
 X.509 support. With the new support of Keystone V3 in the Nova python
 client, we are interested in extending this support to these methods.



 While we are in the process of planning our Nova network to Neutron
 migration (and thus our Neutron usage is limited to testing currently), it
 would be attractive if OSC supported Neutron operations with these
 authentication methods. Worst case, following the same structure as Nova
 would allow us to work with others interested in Kerberos and X.509 for a
 single set of patches so we would strongly prefer the same plug in approach
 for Neutron as used by Nova (compared to re-inventing the wheel).



 Tim



 *From:* Kevin Benton [mailto:blak...@gmail.com]
 *Sent:* 18 February 2015 20:01
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] Pluggable Auth for clients and where
 should it go



 This is something I have been working on internally as well. I've been
 trying to find a way to make the changes to the python neutronclient in the
 least invasive way to support pluggable authentication. I would be happy to
 help review the changes you submit upstream if you have something already
 well-tested.



 Would you benefit from pluggable auth?



 Yes.



 What are you looking for in auth?



 Parity with the nova client.



 Would you benefit from the python-neutronclient getting nova's auth
 capabilities?



 Yes



 I have a similar constraint with waiting for the move to OSC/SDK. Even if
 the support for auth was merged into OSC/SDK, it wouldn't work with
 existing scripts and (more importantly) existing Icehouse/Juno Nova
 deployments that use the neutron client for the notifications to Neutron.



 On Wed, Feb 18, 2015 at 8:52 AM, Justin Hammond 
 justin.hamm...@rackspace.com wrote:

  Just starting this discussion…



 This is in reference to
 https://blueprints.launchpad.net/python-neutronclient/+spec/pluggable-neutronclient-auth



 Originally the blueprint was for python-neutronclient only, but pluggable
 auth is a wide-reaching issue. With OSC/SDK on the horizon (however far),
 we should probably begin the discussion of how to best do this (if it
 hasn't been done).



 A request: We have an immediate need to add pluggable auth to the
 python-neutronclient, modeled after python-novaclient's pluggable auth
 system, to maintain a consistent workflow for our users. After the
 discussion in the neutron-drivers meeting (
 http://eavesdrop.openstack.org/meetings/neutron_drivers/2015/neutron_drivers.2015-02-18-15.31.log.html)
 it is clear that python-neutronclient will survive for Kilo +12 months, at
 least. During that timeframe we'd like to have pluggable auth supported so
 we can bridge that gap. Beyond that immediate need, we are dedicated to
 making OSC/SDK the way to go in the future, and will gladly assist in
 adding said features.



 We have a solution for our immediate need but that may not apply to
 OSC/SDK. So my questions are:



- Would you benefit from pluggable auth?
- What are you looking for in auth?
- Would you benefit from the python-neutronclient getting nova's auth
capabilities?

  Thank you for your time!



 - Justin (roaet)






 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --

 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2015-02-18 Thread Brent Eagles
Hi,

On 18/02/2015 1:53 PM, Maxime Leroy wrote:
 Hi Brent,

snip/

  Thanks for your help on this feature. I have just created an IRC channel,
  #vif-plug-script-support, to talk about it.
 I think it will help to synchronize effort on vif_plug_script
 development. Anyone is welcome on this channel!
 
 Cheers,
 Maxime

Thanks Maxime. I've made some updates to the etherpad.
(https://etherpad.openstack.org/p/nova_vif_plug_script_spec)
I'm going to start some proof of concept work this evening. If I get
anything worth reading, I'll put it up as a WIP/Draft review. Whatever
state it is in, I will be pushing up bits and pieces to github.

https://github.com/beagles/neutron_hacking vif-plug-script
https://github.com/beagles/nova vif-plug-script

Cheers,

Brent






Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-18 Thread Armando M.
On 17 February 2015 at 22:00, YAMAMOTO Takashi yamam...@valinux.co.jp
wrote:

 hi,

 i want to add an extra requirement specific to OVS-agent.
 (namely, I want to add ryu for ovs-ofctl-to-python blueprint. [1]
 but the question is not specific to the blueprint.)
 to avoid messing up deployments without the OVS agent, such a requirement
 should be per-agent/driver/plugin/etc.  however, there currently
 seems to be no standard mechanism for such a requirement.

 some ideas:

 a. don't bother to make it per-agent.
add it to neutron's requirements. (and global-requirement)
simple, but this would make non-ovs plugin users unhappy.

 b. make devstack look at per-agent extra requirements file in neutron tree.
eg. neutron/plugins/$Q_AGENT/requirements.txt

 c. move OVS agent to a separate repository, just like other
after-decomposition vendor plugins.  and use requirements.txt there.
for longer term, this might be a way to go.  but i don't want to
block my work until it happens.

 d. follow the way how openvswitch is installed by devstack.
a downside: we can't give a jenkins run for a patch which introduces
an extra requirement.  (like my patch for the mentioned blueprint [2])

 i think b. is the most reasonable choice, at least for short/mid term.

 any comments/thoughts?
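Option (b) above could be sketched as a small helper in devstack's neutron support code. This is only a sketch under stated assumptions: the function name and the use of the existing NEUTRON_DIR/Q_AGENT variables are illustrative, not an agreed design.

```shell
# Sketch of option (b): look for a per-agent requirements file in the
# neutron tree, and print its path only if the agent ships one. The
# function name agent_requirements_file is hypothetical.
agent_requirements_file() {
    local neutron_dir=$1 agent=$2
    local req="$neutron_dir/neutron/plugins/$agent/requirements.txt"
    # print the path only when the agent actually has extra requirements
    [ -f "$req" ] && echo "$req"
    return 0
}

# devstack would then do something along the lines of:
#   req=$(agent_requirements_file "$NEUTRON_DIR" "$Q_AGENT")
#   [ -n "$req" ] && pip_install -r "$req"
```

Keeping the lookup in one helper means agents without extra requirements are untouched, which is the main point of making this per-agent.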


One thing that I want to ensure we are clear on is about the agent's
OpenFlow communication strategy going forward, because that determines how
we make a decision based on the options you have outlined: do we enforce
the use of ryu while ovs-ofctl goes away from day 0? Or do we provide an
'opt-in' type of approach where users can explicitly choose if/when to
adopt ryu in favor of ovs-ofctl? The latter means that we'll keep both
solutions for a reasonable amount of time to smooth the transition process.

If we adopt the former (i.e. ryu goes in, ovs-ofctl goes out), then option
a) makes sense to me, but I am not sure how happy deployers, and packagers
are going to be welcoming this approach. There's already too much going on
in Kilo right now :)

If we adopt the latter, then I think it's desirable to have two separate
configurations with which we test the agent. This means that we'll have a
new job (besides the existing ones) that runs the agent with ryu instead of
ovs-ofctl. This means that option d) is the viable one, where DevStack will
have to install the dependency based on some configuration variable that is
determined by the openstack-infra project definition.

Thoughts?

Cheers,
Armando



 YAMAMOTO Takashi

 [1] https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
 [2] https://review.openstack.org/#/c/153946/




[openstack-dev] [nova] Libguestfs: possibility not to use it, even when installed ?

2015-02-18 Thread Raphael Glon

Hi,

This is about review:
https://review.openstack.org/#/c/156633/

It is a one-line change, but it may be controversial.

Its purpose is to add the possibility not to use libguestfs for data 
injection in nova, even when installed.


I am not disputing that libguestfs should be preferred over fuse mounts 
for data injection wherever possible, since mounts are more prone to 
security issues (and have already caused some in past nova releases).


However, there are many potential cases where libguestfs won't be usable 
for data injection.


This was the case here (fixed):
https://bugzilla.redhat.com/show_bug.cgi?id=984409

I encountered a similar case more recently on powerkvm 2.1.0 (a defect 
in libguestfs).


So I am just saying it could be good to add a simple config flag (set to 
True by default, to keep the current behaviour untouched) to force nova 
not to use libguestfs, without having to uninstall it and thus prevent 
other users on the host from using it.


regards

raphael




[openstack-dev] [openstack-operators][rally][docker] Rally docker images are available on Docker hub now!

2015-02-18 Thread Boris Pavlovic
Hi stackers,

For those who like to keep their systems clean, the Rally team is happy to
announce that Docker images with Rally are now automatically published to
Docker Hub.

Repo with images is here:
https://hub.docker.com/u/rallyforge/rally/

In the repo you can find images for:
1) Every release - the image name is the same as the release tag
2) Master - updated on every merge to the Rally repo.


By the way, there is a nice tutorial on how to use Rally in a container:
https://hub.docker.com/u/rallyforge/rally/


Best regards,
Boris Pavlovic


Re: [openstack-dev] Pluggable Auth for clients and where should it go

2015-02-18 Thread Kevin Benton
This is something I have been working on internally as well. I've been
trying to find a way to make the changes to the python neutronclient in the
least invasive way to support pluggable authentication. I would be happy to
help review the changes you submit upstream if you have something already
well-tested.

Would you benefit from pluggable auth?

Yes.

What are you looking for in auth?

Parity with the nova client.

Would you benefit from the python-neutronclient getting nova's auth
capabilities?

Yes

I have a similar constraint with waiting for the move to OSC/SDK. Even if
the support for auth was merged into OSC/SDK, it wouldn't work with
existing scripts and (more importantly) existing Icehouse/Juno Nova
deployments that use the neutron client for the notifications to Neutron.

On Wed, Feb 18, 2015 at 8:52 AM, Justin Hammond 
justin.hamm...@rackspace.com wrote:

  Just starting this discussion…

  This is in reference to
 https://blueprints.launchpad.net/python-neutronclient/+spec/pluggable-neutronclient-auth

  Originally the blueprint was for python-neutronclient only, but
 pluggable auth is a wide-reaching issue. With OSC/SDK on the horizon
 (however far), we should probably begin the discussion of how to best do
 this (if it hasn't been done).

  A request: We have an immediate need to add pluggable auth to the
 python-neutronclient, modeled after python-novaclient's pluggable auth
 system, to maintain a consistent workflow for our users. After the
 discussion in the neutron-drivers meeting (
 http://eavesdrop.openstack.org/meetings/neutron_drivers/2015/neutron_drivers.2015-02-18-15.31.log.html)
 it is clear that python-neutronclient will survive for Kilo +12 months, at
 least. During that timeframe we'd like to have pluggable auth supported so
 we can bridge that gap. Beyond that immediate need, we are dedicated to
 making OSC/SDK the way to go in the future, and will gladly assist in
 adding said features.

 We have a solution for our immediate need, but it may not apply to
 OSC/SDK. So my questions are:


- Would you benefit from pluggable auth?
- What are you looking for in auth?
- Would you benefit from the python-neutronclient getting nova's auth
capabilities?

 Thank you for your time!

  - Justin (roaet)







-- 
Kevin Benton


[openstack-dev] [neutron][lbaas] Topics and possible formats for LBaaS in OpenStack/Vancouver

2015-02-18 Thread Samuel Bercovici
Hi Everyone,

Based on the last IRC meeting, I thought we could start a discussion on the 
ML about topics, and then maybe about how we want to discuss them during 
the summit. Here are some items we may wish to discuss:

1. LBaaS API additions (assuming TLS and L7 will be there):

   a. L3-based traffic routing - LB, listener, and pool selection based on
      source IP network classes

   b. TLS phase 2:
      i.  Client-side re-encryption
      ii. Client certificates

   c. Service insertion models (in addition to proxy-based):
      i.  Transparent mode

   d. Object sharing (yes/no):
      i.  Pools

   e. Monitoring APIs:
      i.  Integration with Ceilometer

   f. Batch updates - create a full configuration graph and control when it
      gets scheduled

2. Quota support (e.g. max number of LBs, listeners, TLS certificates, etc.)

3. Heat integration

4. Horizon support

5. LBaaS API extensions - ability to add experimental and vendor APIs

Regards,
-Sam.




Re: [openstack-dev] Pluggable Auth for clients and where should it go

2015-02-18 Thread Dean Troyer
On Wed, Feb 18, 2015 at 1:29 PM, Kevin Benton blak...@gmail.com wrote:

 Perhaps I am misunderstanding, but doesn't the OSC support for pluggable
 auth just come for free from Neutron's perspective? (i.e. we don't have to
 make any Neutron-specific changes for that to work)


It does, if/when the command layer is implemented in OSC. It already knows
how to create a neutron client object and give it the plugin auth info.


 What I was hoping here was that we could get something in the Neutron
 client that works with the older auth plugins written for the Nova client
 to support setups not using OSC (specifically the Nova-Neutron
 interactions). I didn't mean that I didn't want to support OSC at all.


I think one thing needs to be clarified...what you are talking about is
utilizing keystoneclient's auth plugins in neutronclient.  Phrasing it as
'novaclient parity' reinforces the old notion that novaclient is the model
for doing things.  It is no longer that...and maybe not even the right
example of how to use auth plugins even though jamielennox did most of that
work.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] Pluggable Auth for clients and where should it go

2015-02-18 Thread Tim Bell
Asking on the operators mailing list may yield more examples where people are 
using the Neutron client.

From the CERN perspective, we use OSC heavily now it has Kerberos and X.509 
support. With the new support of Keystone V3 in the Nova python client, we are 
interested in extending this support to these methods.

While we are in the process of planning our Nova network to Neutron migration 
(and thus our Neutron usage is currently limited to testing), it would be 
attractive if OSC supported Neutron operations with these authentication 
methods. Worst case, following the same structure as Nova would allow us to 
work with others interested in Kerberos and X.509 on a single set of patches, 
so we would strongly prefer the same plug-in approach for Neutron as used by 
Nova (rather than re-inventing the wheel).

Tim

From: Kevin Benton [mailto:blak...@gmail.com]
Sent: 18 February 2015 20:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Pluggable Auth for clients and where should it go

This is something I have been working on internally as well. I've been trying 
to find a way to make the changes to the python neutronclient in the least 
invasive way to support pluggable authentication. I would be happy to help 
review the changes you submit upstream if you have something already 
well-tested.

Would you benefit from pluggable auth?

Yes.

What are you looking for in auth?

Parity with the nova client.

Would you benefit from the python-neutronclient getting nova's auth 
capabilities?

Yes

I have a similar constraint with waiting for the move to OSC/SDK. Even if the 
support for auth was merged into OSC/SDK, it wouldn't work with existing 
scripts and (more importantly) existing Icehouse/Juno Nova deployments that use 
the neutron client for the notifications to Neutron.

On Wed, Feb 18, 2015 at 8:52 AM, Justin Hammond 
justin.hamm...@rackspace.commailto:justin.hamm...@rackspace.com wrote:
Just starting this discussion…

This is in reference to 
https://blueprints.launchpad.net/python-neutronclient/+spec/pluggable-neutronclient-auth

Originally the blueprint was for python-neutronclient only, but pluggable auth 
is a wide-reaching issue. With OSC/SDK on the horizon (however far), we should 
probably begin the discussion of how to best do this (if it hasn't been done).

A request: We have an immediate need to add pluggable auth to the 
python-neutronclient, modeled after python-novaclient's pluggable auth system, 
to maintain a consistent workflow for our users. After the discussion in the 
neutron-drivers meeting 
(http://eavesdrop.openstack.org/meetings/neutron_drivers/2015/neutron_drivers.2015-02-18-15.31.log.html)
 it is clear that python-neutronclient will survive for Kilo +12 months, at 
least. During that timeframe we'd like to have pluggable auth supported so we 
can bridge that gap. Beyond that immediate need, we are dedicated to making 
OSC/SDK the way to go in the future, and will gladly assist in adding said 
features.

We have a solution for our immediate need, but it may not apply to 
OSC/SDK. So my questions are:


  *   Would you benefit from pluggable auth?
  *   What are you looking for in auth?
  *   Would you benefit from the python-neutronclient getting nova's auth 
capabilities?
Thank you for your time!

- Justin (roaet)






--
Kevin Benton


Re: [openstack-dev] [cinder] FFE driver-private-data + pure-iscsi-chap-support

2015-02-18 Thread Mike Perez
On 10:07 Wed 18 Feb , John Griffith wrote:
 On Tue, Feb 17, 2015 at 11:36 PM, Mike Perez thin...@gmail.com wrote:
  On 14:50 Sun 15 Feb , Patrick East wrote:
  Hi All,
 
  I would like to request a FFE for the following blueprints:
 
  https://blueprints.launchpad.net/cinder/+spec/driver-private-data
  https://blueprints.launchpad.net/cinder/+spec/pure-iscsi-chap-support
 
  The first being a dependency for the second.
 
  The new database table for driver data feature was discussed at the Cinder
  mid-cycle meetup and seemed to be generally approved by the team in person
  at the meeting as something we can get into Kilo.
 
  There is currently a spec up for review for it here:
  https://review.openstack.org/#/c/15/ but doesn't look like it will be
  approved by the end of the day for the deadline. I have code pretty much
  ready to go for review as soon as the spec is approved, it is a relatively
  small patch set.
 
  I already told Patrick I would help see this change in Kilo. If I can get
  another Cinder core to sponsor this, that would be great.
 
  This change makes it possible for some drivers to be able to have chap auth
  support in their unique setup, and I'd rather not leave people out in 
  Cinder.
 
  --
  Mike Perez
 
 
 +2 from me, I'm willing to sponsor this

This spec is approved for FFE. I would like to get a +2 on the spec from John,
as well as approval from Walt who has interest in this being implemented for
his drivers.

-- 
Mike Perez



Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL

2015-02-18 Thread Clint Byrum
Excerpts from Clint Byrum's message of 2015-02-17 08:52:46 -0800:
 Excerpts from Anita Kuno's message of 2015-02-17 07:38:01 -0800:
  On 02/17/2015 09:21 AM, Clint Byrum wrote:
   There has been a recent monumental shift in my focus around OpenStack,
   and it has required me to take most of my attention off TripleO. Given
   that, I don't think it is in the best interest of the project that I
   continue as PTL for the Kilo cycle.
   
   I'd like to suggest that we hold an immediate election for a replacement
   who can be 100% focused on the project.
   
   Thanks everyone for your hard work up to this point. I hope that one day
   soon TripleO can deliver on the promise of a self-deploying OpenStack
   that is stable and automated enough to sit in the gate for many if not
   all OpenStack projects.
   
   
   
   
  So in the middle of a release, changing PTLs can take 3 avenues:
  
  1) The new PTL is appointed. Usually there is a leadership candidate in
  waiting which the rest of the project feels it can rally around until
  the next election. The stepping down PTL takes the pulse of the
  developers on the project and informs us on the mailing list who the
  appointed PTL is. Barring any huge disagreement, we continue on with
  work and the appointed PTL has the option of standing for election in
  the next election round. The appointment lasts until the next round of
  elections.
  
 
 Thanks for letting me know about this Anita.
 
 I'd like to appoint somebody, but I need to have some discussions with a
 few people first. As luck would have it, some of those people will be in
 Seattle with us for the mid-cycle starting tomorrow.
 
  2) We have an election, in which case we need candidates and some dates.
  Let me know if we want to exercise this option so that Tristan and I can
  organize some dates.
  
 
 Let's wait a bit until I figure out if there's a clear and willing
 appointee. That should be clear by Thursday.

Ok, we talked this morning, and James Slagle has agreed to step in as
the PTL for the rest of this cycle. So I hereby appoint him so.

Thanks everyone!




Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL

2015-02-18 Thread Chris Jones
Hi

Thanks for stepping forward, James :)

Cheers,
--
Chris Jones

 On 18 Feb 2015, at 21:45, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Clint Byrum's message of 2015-02-17 08:52:46 -0800:
 Excerpts from Anita Kuno's message of 2015-02-17 07:38:01 -0800:
 On 02/17/2015 09:21 AM, Clint Byrum wrote:
 There has been a recent monumental shift in my focus around OpenStack,
 and it has required me to take most of my attention off TripleO. Given
 that, I don't think it is in the best interest of the project that I
 continue as PTL for the Kilo cycle.
 
 I'd like to suggest that we hold an immediate election for a replacement
 who can be 100% focused on the project.
 
 Thanks everyone for your hard work up to this point. I hope that one day
 soon TripleO can deliver on the promise of a self-deploying OpenStack
 that is stable and automated enough to sit in the gate for many if not
 all OpenStack projects.
 
 
 
 
 So in the middle of a release, changing PTLs can take 3 avenues:
 
 1) The new PTL is appointed. Usually there is a leadership candidate in
 waiting which the rest of the project feels it can rally around until
 the next election. The stepping down PTL takes the pulse of the
 developers on the project and informs us on the mailing list who the
 appointed PTL is. Barring any huge disagreement, we continue on with
 work and the appointed PTL has the option of standing for election in
 the next election round. The appointment lasts until the next round of
 elections.
 
 
 Thanks for letting me know about this Anita.
 
 I'd like to appoint somebody, but I need to have some discussions with a
 few people first. As luck would have it, some of those people will be in
 Seattle with us for the mid-cycle starting tomorrow.
 
 2) We have an election, in which case we need candidates and some dates.
 Let me know if we want to exercise this option so that Tristan and I can
 organize some dates.
 
 
 Let's wait a bit until I figure out if there's a clear and willing
 appointee. That should be clear by Thursday.
 
 Ok, we talked this morning, and James Slagle has agreed to step in as
 the PTL for the rest of this cycle. So I hereby appoint him so.
 
 Thanks everyone!



[openstack-dev] [heat] raw volume attachment and format in cloud config

2015-02-18 Thread Vignesh Kumar
Hi All,

I am trying to attach a raw volume to a server node and format it. All done
in one go using a heat template.


   1. The volume is attached using OS::Cinder::VolumeAttachment
   2. The node is created using OS::Nova::Server and it is created from a
   coreOS image
   3. I am using the cloud config in the OS::Nova::Server to format and
   mount the volume attached using a script.
   4. Since the instance runs CoreOS, I am using a systemd unit to start
   the sh script, which formats and mounts the volume as follows.

#!/bin/sh
sudo mkdir -p /dev/external
# wait for the volume attachment to show up as a block device
while [ ! -e /dev/vdb ]
do
   echo "Waiting for volume to attach"
   sleep 10
done
# try mounting first; if the mount fails, assume the volume is raw,
# format it, and mount again
if sudo mount -t ext4 /dev/vdb /dev/external/
then
  echo "Already formatted volume. Mounted."
else
  sudo /usr/sbin/mkfs.ext4 /dev/vdb
  sudo mount -t ext4 /dev/vdb /dev/external/
  echo "RAW volume. Formatted and mounted."
fi
echo "Volume mounted"
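A more robust variant of the format-or-mount decision is to probe for an existing filesystem signature rather than inferring one from a failed mount. The sketch below is an illustration, not a tested CoreOS recipe: on a real host the probe would be blkid, and it is made injectable here purely so the helper can be exercised without a real /dev/vdb.

```shell
# Decide whether a device needs formatting by probing for a filesystem
# signature. blkid exits non-zero when no signature is found, so a
# failed probe means the device is blank.
needs_format() {
    local dev=$1 probe=${2:-blkid}
    if $probe "$dev" >/dev/null 2>&1; then
        return 1    # filesystem present: do not format
    else
        return 0    # blank device: format it
    fi
}

# The boot script step would then become:
#   if needs_format /dev/vdb; then sudo /usr/sbin/mkfs.ext4 /dev/vdb; fi
#   sudo mount -t ext4 /dev/vdb /dev/external/
```

This avoids treating every transient mount failure (wrong fstype, busy device) as "raw volume, reformat it", which is the risky case in the original script.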


What happens then is that the stack gets created, but I cannot ssh into
the instance. Sometimes boot takes much longer than expected, and
sometimes it never completes. Occasionally, after creating the stack, I am
able to ssh into the new node and see the formatted volume, but mostly not.

The same formatting and mounting, when done manually rather than in cloud
config, works perfectly fine. Any pointers on what is happening here would
be helpful.

Regards,
Vignesh Kumar Kathiresan


Re: [openstack-dev] [api] [glance] conclusion needed on functional API

2015-02-18 Thread Brian Rosmaita
On 2/15/15, 2:35 PM, Jay Pipes jaypi...@gmail.com wrote:
On 02/15/2015 01:13 PM, Brian Rosmaita wrote:
 On 2/15/15, 10:10 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/15/2015 01:31 AM, Brian Rosmaita wrote:
 This is a follow-up to the discussion at the 12 February API-WG
 meeting [1] concerning functional API in Glance [2].  We made
 some progress, but need to close this off so the spec can be
 implemented in Kilo.

 I believe this is where we left off: 1. The general consensus was
 that POST is the correct verb.

 Yes, POST is correct (though the resource is wrong).

 2. Did not agree on what to POST.  Three options are in play: (A)
 POST /images/{image_id}?action=deactivate POST
 /images/{image_id}?action=reactivate

 (B) POST /images/{image_id}/actions with payload describing the
 action, e.g., { action: deactivate } { action: reactivate
 }

 (C) POST /images/{image_id}/actions/deactivate POST
 /images/{image_id}/actions/reactivate

 d) POST /images/{image_id}/tasks with payload: { action:
 deactivate|activate }

 An action isn't created. An action is taken. A task is created. A
 task contains instructions on what action to take.

 The Images API v2 already has tasks (schema available at
 /v2/schemas/tasks ), which are used for long-running asynchronous
 operations (right now, image import and image export).  I think we
 want to keep those distinct from what we're talking about here.

 Does something really need to be created for this call?  The idea
 behind the functional API was to have a place for things that don't
 fit neatly into the CRUD-centric paradigm.  Option (C) seems like a
 good fit for this.

Why not just use the existing tasks/ interface, then? :) Seems like a
perfect fit to me.

The existing tasks/ interface is kind of heavyweight.  It provides a
framework for asynchronous operations.  It's really not appropriate for
this purpose.
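For concreteness, the three candidate shapes from the thread can be written out as request lines. This is purely illustrative: the image id is made up, and none of these endpoints had been agreed or merged at the time of the discussion.

```shell
# The three options under discussion, as request lines against the
# Images v2 paths quoted above (image id is a made-up example):
image_id="3fc0e29f-1a2b"
option_a="POST /images/$image_id?action=deactivate"
option_b="POST /images/$image_id/actions"             # body: {"action": "deactivate"}
option_c="POST /images/$image_id/actions/deactivate"
echo "$option_c"
```

Option (C) keeps the action in the URL and needs no request body, which is part of why it reads as the better fit for a functional (non-CRUD) call.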

cheers,
brian




[openstack-dev] [Neutron] FWaaS - question about drivers

2015-02-18 Thread Sławek Kapłoński
Hello,

I'm looking to use the FWaaS service plugin with my own router solution
(I'm not using the L3 agent at all). If I also want to use the FWaaS
plugin, should I write my own driver for it, or my own service plugin?
I would be grateful for any links to a description of FWaaS and its
architecture :)
Thanks a lot for any help


-- 
Best regards
Sławek Kapłoński
sla...@kaplonski.pl



Re: [openstack-dev] [openstack-operators][rally][docker] Rally docker images are available on Docker hub now!

2015-02-18 Thread Andrey Kurilin
Great news for everyone!:)

On Wed, Feb 18, 2015 at 9:28 PM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi stackers,

 For those who likes to keep system clean, Rally team is happy to say that
 Docker images with Rally are automatically published to docker hub now.

 Repo with images is here:
 https://hub.docker.com/u/rallyforge/rally/

 In repo you can find images for:
 1) Every release - image name is the same as release tag
 2) Master image - it is updated on every merge to Rally repo.


 By the way, there is a nice tutorial about how to use Rally in container:
 https://hub.docker.com/u/rallyforge/rally/


 Best regards,
 Boris Pavlovic





-- 
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL

2015-02-18 Thread Anita Kuno
On 02/18/2015 04:55 PM, Chris Jones wrote:
 Hi
 
 Thanks for stepping forward, James :)
 
 Cheers,
 --
 Chris Jones
 
 On 18 Feb 2015, at 21:45, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Clint Byrum's message of 2015-02-17 08:52:46 -0800:
 Excerpts from Anita Kuno's message of 2015-02-17 07:38:01 -0800:
 On 02/17/2015 09:21 AM, Clint Byrum wrote:
 There has been a recent monumental shift in my focus around OpenStack,
 and it has required me to take most of my attention off TripleO. Given
 that, I don't think it is in the best interest of the project that I
 continue as PTL for the Kilo cycle.

 I'd like to suggest that we hold an immediate election for a replacement
 who can be 100% focused on the project.

 Thanks everyone for your hard work up to this point. I hope that one day
 soon TripleO can deliver on the promise of a self-deploying OpenStack
 that is stable and automated enough to sit in the gate for many if not
 all OpenStack projects.




 So in the middle of a release, changing PTLs can take 3 avenues:

 1) The new PTL is appointed. Usually there is a leadership candidate in
 waiting which the rest of the project feels it can rally around until
 the next election. The stepping down PTL takes the pulse of the
 developers on the project and informs us on the mailing list who the
 appointed PTL is. Barring any huge disagreement, we continue on with
 work and the appointed PTL has the option of standing for election in
 the next election round. The appointment lasts until the next round of
 elections.


 Thanks for letting me know about this Anita.

 I'd like to appoint somebody, but I need to have some discussions with a
 few people first. As luck would have it, some of those people will be in
 Seattle with us for the mid-cycle starting tomorrow.

 2) We have an election, in which case we need candidates and some dates.
 Let me know if we want to exercise this option so that Tristan and I can
 organize some dates.


 Let's wait a bit until I figure out if there's a clear and willing
 appointee. That should be clear by Thursday.

 Ok, we talked this morning, and James Slagle has agreed to step in as
 the PTL for the rest of this cycle. So I hereby appoint him so.

 Thanks everyone!
 
 
Wonderful, thanks everyone.

Clint please propose a patch to openstack/governance updating this line:
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n379

Congratulations James!

Thank you,
Anita.



Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-18 Thread Renat Akhmerov
Hi,


 On 18 Feb 2015, at 23:54, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Nova client's CLI parameter 'bypass_url' helps me. The client's API also has
 a 'management_url' attribute; if it is specified, the client doesn't
 reauthenticate. Also, most of the clients have an 'endpoint' argument, so the
 client doesn't make an extra call to keystone to retrieve a new token and
 service_catalog.
 
 Thank you for clarification!


I want to say an additional “thank you” from me for helping us solve this 
problem that’s been around for a while.

And just a small conceptual question: in my understanding, since trust 
chaining has already landed, this kind of reauthentication doesn't make a lot 
of sense to me. Isn't trust chaining supposed to mean that trust-scoped tokens 
and regular tokens should be considered equal? Or should we still assume that 
trust-scoped tokens are limited in some way? If so, how exactly should they be 
understood?


Thanks!

Renat Akhmerov
@ Mirantis Inc.




Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-18 Thread YAMAMOTO Takashi
 On 17 February 2015 at 22:00, YAMAMOTO Takashi yamam...@valinux.co.jp
 wrote:
 
 hi,

 i want to add an extra requirement specific to OVS-agent.
 (namely, I want to add ryu for ovs-ofctl-to-python blueprint. [1]
 but the question is not specific to the blueprint.)
 to avoid messing up deployments that don't use the OVS agent, such a
 requirement should be per-agent/driver/plugin/etc.  however, there
 currently seems to be no standard mechanism for such a requirement.

 some ideas:

 a. don't bother to make it per-agent.
add it to neutron's requirements. (and global-requirement)
simple, but this would make non-ovs plugin users unhappy.

 b. make devstack look at per-agent extra requirements file in neutron tree.
eg. neutron/plugins/$Q_AGENT/requirements.txt

 c. move OVS agent to a separate repository, just like other
after-decomposition vendor plugins.  and use requirements.txt there.
for longer term, this might be a way to go.  but i don't want to
block my work until it happens.

 d. follow the way how openvswitch is installed by devstack.
a downside: we can't give a jenkins run for a patch which introduces
an extra requirement.  (like my patch for the mentioned blueprint [2])

 i think b. is the most reasonable choice, at least for short/mid term.
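option (b) can be prototyped in a few lines. This is a toy sketch only: the directory layout (neutron/plugins/<Q_AGENT>/requirements.txt), agent names, and the "ryu" entry are illustrative, not actual DevStack logic.

```python
import os
import tempfile

# Toy sketch of option (b): look up a hypothetical per-agent
# requirements file and install it only when the selected agent
# actually ships one.
def per_agent_requirements(neutron_dir, agent):
    path = os.path.join(neutron_dir, "neutron", "plugins", agent,
                        "requirements.txt")
    return path if os.path.isfile(path) else None

# Simulate a tree where only the OVS agent declares an extra dependency.
tree = tempfile.mkdtemp()
ovs_dir = os.path.join(tree, "neutron", "plugins", "openvswitch")
os.makedirs(ovs_dir)
with open(os.path.join(ovs_dir, "requirements.txt"), "w") as f:
    f.write("ryu\n")

print(per_agent_requirements(tree, "openvswitch") is not None)  # True
print(per_agent_requirements(tree, "linuxbridge"))              # None
```

With this shape, DevStack would only run `pip install -r` on the returned path when it is not None, so non-OVS deployments are untouched.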

 any comments/thoughts?

 
 One thing that I want to ensure we are clear on is about the agent's
 OpenFlow communication strategy going forward, because that determines how
 we make a decision based on the options you have outlined: do we enforce
 the use of ryu while ovs-ofctl goes away from day 0? Or do we provide an
 'opt-in' type of approach where users can explicitly choose if/when to
 adopt ryu in favor of ovs-ofctl? The latter means that we'll keep both
 solutions for a reasonable amount of time to smooth the transition process.

my plan is the former.

the latter would need to invent a backend-neutral api which covers
large enough subset of openflow and nicira extensions.
my impression is that it isn't a reasonable amount of work for the benefit.

 
 If we adopt the former (i.e. ryu goes in, ovs-ofctl goes out), then option
 a) makes sense to me, but I am not sure how happy deployers, and packagers
 are going to be welcoming this approach. There's already too much going on
 in Kilo right now :)

sure, there have always been too many things to do.

 
 If we adopt the latter, then I think it's desirable to have two separate
 configurations with which we test the agent. This means that we'll have a
 new job (besides the existing ones) that runs the agent with ryu instead of
 ovs-ofctl. This means that option d) is the viable one, where DevStack will
 have to install the dependency based on some configuration variable that is
 determined by the openstack-infra project definition.

i tend to think the latter is too much.  but if we decide to go
that route, i agree it's reasonable to have separate jobs.

either way, i need to write working code first.  so i want to be
able to try jenkins runs.  adding ryu to global-requirements [3]
would allow it, while it doesn't hurt anything as far as i know.
(i'm not familiar with infra stuff though.  please correct me if wrong.)

[3] https://review.openstack.org/#/c/154354/

YAMAMOTO Takashi

 
 Thoughts?
 
 Cheers,
 Armando
 
 

 YAMAMOTO Takashi

 [1] https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
 [2] https://review.openstack.org/#/c/153946/

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread Renat Akhmerov
Guys,

I really appreciate the input of you all.

We decided that ideally we need to agree on the syntax within days, not weeks 
or months. But anyway, since we started this discussion only yesterday, I just 
want to give us an extra 1-2 days to play with all these thoughts in our heads.

Just one additional, maybe slightly crazy, idea for the pile we've already 
made:
What if we officially allow more than one delimiter syntax? Why stick to just 
one? Technically I'm 99% sure it's doable. As long as it's clearly pointed out 
in the documentation, there shouldn't be problems with understanding. Just one 
con (IMO, a little one) that I see here is "if so, then what syntax do we use 
to write all our examples, docs, tutorials etc.?"

So obviously one of the options (just by majority of votes) would be <% %> and 
something else less familiar but smoother-looking in YAML, such as yaql{…} 
or {…}.

I’m ready to be beaten by stones! Fire away :)

Renat Akhmerov
@ Mirantis Inc.



 On 19 Feb 2015, at 11:25, James Fryman ja...@stackstorm.com wrote:
 
 On Feb 18, 2015, at 3:07 PM, Dmitri Zimine dzim...@stackstorm.com 
 mailto:dzim...@stackstorm.com wrote:
 
 Syntax options that we’d like to discuss further 
 
  <% 1 + 1 %>  # pro - ruby/js/puppet/chef familiarity; con - spaces, and <% %> is 
  too large a symbol
  {1 + 1}  # pro - fewer spaces; con - no familiarity
  <? 1 + 1 ?>  # php familiarity, needs spaces
 
 The primary criteria to select these 3 options is that they are YAML 
 compatible. Technically they all would solve our problems (primarily no 
 embracing quotes needed like in Ansible so no ambiguity on data types).
 
 The secondary criteria is syntax symmetry. After all I agree with Patrick's 
 point about better readability when we have opening and closing sequences 
 alike.
 
   
 To me, another critical criteria is familiarity: target users - openstack 
 developers and devops, familiar with the delimiters. 
 
  That is why of the three above I prefer <% %>. 
 
 It is commonly used in Puppet/Chef [1], Ruby, Javascript. One won’t be 
 surprised to see it and won’t need to change the muscle memory to type 
 open/closed characters especially when working on say Puppet and Mistral at 
 the same time (not unlikely). 
 
 
 [1] 
 https://docs.puppetlabs.com/guides/templating.html#erb-is-plain-text-with-embedded-ruby
  
  
 
 
 
  I have been lurking on this thread, and just wanted to toss in $0.02 as you 
  all deliberate. In truth, any of the options Renat highlights would be fine, 
  and the points made to arrive at the final choices are sound. The end result 
  will be that types are explicit, and that is great. In light of this though, 
  using the <% %> syntax is still ideal if only for one reason: friction. 
 
 In a recent discussion with a colleague of mine, he told me that in his daily 
 job, he is so busy and slammed with operations tasks, his measure of a tool 
 he will use is whether it provides value within 30-60 minutes. Otherwise, 
 there is a fire somewhere else that needs to be put out and cannot be 
 bothered.
 
  To be frank, there is no way that this proposed syntax change and how it is 
  ultimately decorated is going to be a game changer in how future users will 
  evaluate Mistral. But at 3am in the morning, during a production outage where 
  an Ops admin is hotpatching a workflow to get things moving again... that 
  disparity does in fact matter. Eyes are already trained to look for <% %>, 
  the large amount of spacing draws the eye, and it's a known function in other 
  existing ops toolkits. Less friction to adoption, less friction to 
  troubleshoot. I think these are all pluses. I want to be drawn to where things 
  are changing or dynamic in my templates to aid in troubleshooting. 
 
 Again, this is just another data point to throw out. While users are busy 
 trying to absorb all that is workflow creation/design...  Having as many 
 likenesses and anchors to existing tools can certainly not hurt adoption. 
 
 Thank you all for your efforts!
 
 
 On Feb 18, 2015, at 3:20 AM, Renat Akhmerov rakhme...@mirantis.com 
 mailto:rakhme...@mirantis.com wrote:
 
 Hi again,
 
 Sorry, I started writing this email before Angus replied so I will shoot it 
 as is and then we can continue…
 
 
 So after discussing all the options again with a small group of team 
 members we came to the following things:
 
 Syntax options that we’d like to discuss further 
 
  <% 1 + 1 %>  # pro - ruby/js/puppet/chef familiarity; con - spaces, and <% %> is 
  too large a symbol
  {1 + 1}  # pro - fewer spaces; con - no familiarity
  <? 1 + 1 ?>  # php familiarity, needs spaces
 
 The primary criteria to select these 3 options is that they are YAML 
 compatible. Technically they all would solve our problems (primarily no 
 embracing quotes needed like in Ansible so no ambiguity on data types).
 
 The secondary criteria is syntax symmetry. After all I agree with Patrick's 

Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread James Fryman
 On Feb 18, 2015, at 3:07 PM, Dmitri Zimine dzim...@stackstorm.com wrote:
 
 Syntax options that we’d like to discuss further 
 
 <% 1 + 1 %>  # pro - ruby/js/puppet/chef familiarity; con - spaces, and <% %> is 
 too large a symbol
 {1 + 1}  # pro - fewer spaces; con - no familiarity
 <? 1 + 1 ?>  # php familiarity, needs spaces
 
 The primary criteria to select these 3 options is that they are YAML 
 compatible. Technically they all would solve our problems (primarily no 
 embracing quotes needed like in Ansible so no ambiguity on data types).
 
 The secondary criteria is syntax symmetry. After all I agree with Patrick's 
 point about better readability when we have opening and closing sequences 
 alike.
 
   
 To me, another critical criteria is familiarity: target users - openstack 
 developers and devops, familiar with the delimiters. 
 
 That is why of the three above I prefer <% %>. 
 
 It is commonly used in Puppet/Chef [1], Ruby, Javascript. One won’t be 
 surprised to see it and won’t need to change the muscle memory to type 
 open/closed characters especially when working on say Puppet and Mistral at 
 the same time (not unlikely). 
 
 
 [1] 
 https://docs.puppetlabs.com/guides/templating.html#erb-is-plain-text-with-embedded-ruby
  
 


I have been lurking on this thread, and just wanted to toss in $0.02 as you all 
deliberate. In truth, any of the options Renat highlights would be fine, and 
the points made to arrive at the final choices are sound. The end result will 
be that types are explicit, and that is great. In light of this though, using 
the <% %> syntax is still ideal if only for one reason: friction. 

In a recent discussion with a colleague of mine, he told me that in his daily 
job he is so busy and slammed with operations tasks that his measure of a tool 
he will use is whether it provides value within 30-60 minutes. Otherwise, there 
is a fire somewhere else that needs to be put out and he cannot be bothered.

To be frank, there is no way that this proposed syntax change and how it is 
ultimately decorated is going to be a game changer in how future users will 
evaluate Mistral. But at 3am in the morning, during a production outage where 
an Ops admin is hotpatching a workflow to get things moving again... that 
disparity does in fact matter. Eyes are already trained to look for <% %>, the 
large amount of spacing draws the eye, and it's a known function in other 
existing ops toolkits. Less friction to adoption, less friction to 
troubleshoot. I think these are all pluses. I want to be drawn to where things 
are changing or dynamic in my templates to aid in troubleshooting. 

Again, this is just another data point to throw out. While users are busy 
trying to absorb all that is workflow creation/design...  Having as many 
likenesses and anchors to existing tools can certainly not hurt adoption. 

Thank you all for your efforts!

 
 On Feb 18, 2015, at 3:20 AM, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Hi again,
 
 Sorry, I started writing this email before Angus replied so I will shoot it 
 as is and then we can continue…
 
 
 So after discussing all the options again with a small group of team members 
 we came to the following things:
 
 Syntax options that we’d like to discuss further 
 
 <% 1 + 1 %>  # pro - ruby/js/puppet/chef familiarity; con - spaces, and <% %> is 
 too large a symbol
 {1 + 1}  # pro - fewer spaces; con - no familiarity
 <? 1 + 1 ?>  # php familiarity, needs spaces
 
 The primary criteria to select these 3 options is that they are YAML 
 compatible. Technically they all would solve our problems (primarily no 
 embracing quotes needed like in Ansible so no ambiguity on data types).
 
 The secondary criteria is syntax symmetry. After all I agree with Patrick's 
 point about better readability when we have opening and closing sequences 
 alike.
 
 Some additional details can be found in [0]
 
 
 [0] https://etherpad.openstack.org/p/mistral-YAQL-delimiters
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 On 18 Feb 2015, at 07:37, Patrick Hoolboom patr...@stackstorm.com wrote:
 
 My main concern with the {} delimiters in YAQL is that the curly brace 
 already has a defined use within YAML.  We will most definitely eventually 
 run into parsing errors with whatever delimiter we choose, but I don't 
 feel that it should conflict with the markup language it is directly 
 embedded in.  It gets quite difficult to identify YAQL expressions at a 
 glance.  <% %> may appear ugly to some, but I feel that it works as a 
 clear delimiter of both the beginning AND the end of the YAQL query. The 
 options that only escape the beginning look fine in small examples like 
 this, but the workflows that we have written or seen in the wild tend to 
 have some fairly large expressions.  If the opening and closing delimiters 
 don't match, it gets quite difficult to read. 
 
 From: Anastasia Kuznetsova akuznets...@mirantis.com
 Subject: Re: [openstack-dev] [Mistral] Changing expression delimiters 
 in Mistral DSL
 Date: February 

[openstack-dev] [nova] Nova API meeting

2015-02-18 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow
Friday UTC . 

We encourage cloud operators and those who use the REST API, such as
SDK developers and others who are interested in the future of the
API, to participate.

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 10:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 


We will also be discussing when to release v2.1 and microversions. I'm
going to propose that v2.1 becomes non experimental on Monday and
microversions is enabled with the first api change to use it on
Wednesday.

Please yell here or at the meeting if you think that's a bad idea.
Note that the old v2 code is going to remain the default on /v2, so you
still need to opt in to v2.1, and microversioned changes will only
affect you if you send the appropriate header.
Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][keystone]

2015-02-18 Thread Dolph Mathews
On Fri, Feb 6, 2015 at 12:47 PM, Adam Young ayo...@redhat.com wrote:

  On 02/04/2015 03:54 PM, Thai Q Tran wrote:

 Hi all,

 I have been helping with the websso effort and wanted to get some feedback.
 Basically, users are presented with a login screen where they can select:
 credentials, default protocol, or discovery service.
 If user selects credentials, it works exactly the same way it works today.
 If user selects default protocol or discovery service, they can choose to
 be redirected to those pages.

 Keep in mind that this is a prototype, early feedback will be good.
 Here are the relevant patches:
 https://review.openstack.org/#/c/136177/
 https://review.openstack.org/#/c/136178/
 https://review.openstack.org/#/c/151842/

 I have attached the files and present them below:




 Replace the dropdown with a specific link for each protocol type:

 SAML and OpenID  are the only real contenders at the moment, but we will
 not likely have so many that it will clutter up the page.


Agree, but the likelihood that a single IdP will support multiple protocols
is probably low. Keystone certainly supports that from an API perspective,
but I don't think it should be the default UX. Choose a remote IdP first,
and then if *that* IdP supports multiple federation protocols, present them.



 Thanks for doing this.







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] LOG.debug()

2015-02-18 Thread Vedsar Kushwaha
Thanks for the immediate reply. I'm aware of
http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
and /var/log. But the information is not there.

LOG.debug("%(host_state)s does not have %(requested_ram)s MB usable ram,"
          " it only has %(usable_ram)s MB usable ram.",
          {'host_state': host_state,
           'requested_ram': requested_ram,
           'usable_ram': usable_ram})

I wanted to know where this information is stored when the filter returns
False.
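To illustrate where such a record ends up: it goes wherever the logger's handlers are configured to write (for a package-based install, typically /var/log/nova/nova-compute.log, as Gary notes below). A standalone sketch, not nova's actual logging config, routing the same message to a local file:

```python
import logging

# Standalone sketch: attach a file handler the way oslo logging ends up
# routing nova.scheduler messages to nova-compute.log.
LOG = logging.getLogger("nova.scheduler.filters.ram_filter")
LOG.setLevel(logging.DEBUG)
LOG.addHandler(logging.FileHandler("ram_filter_demo.log", mode="w"))

host_state, requested_ram, usable_ram = "node-1", 4096, 2048
LOG.debug("%(host_state)s does not have %(requested_ram)s MB usable ram,"
          " it only has %(usable_ram)s MB usable ram.",
          {'host_state': host_state,
           'requested_ram': requested_ram,
           'usable_ram': usable_ram})

# The record is now in the file; nothing is printed to the console
# unless a stream handler is also configured.
print(open("ram_filter_demo.log").read().strip())
```

Note the message is only emitted at all when the effective log level allows DEBUG records, which is why the `debug` option matters in nova's config.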

On Wed, Feb 18, 2015 at 8:26 PM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 Please see
 http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
 If you have installed from packages this may be in
 /var/log/nova/nova-compute.log
 Thanks
 Gary

   From: Vedsar Kushwaha vedsarkushw...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, February 18, 2015 at 4:52 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] LOG.debug()

   Hello World,

  I'm new to openstack and python too :).

 In the file:

 https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py

  where does LOG.debug() store the information?



 --
   Vedsar Kushwaha
 M.Tech-Computational Science
 Indian Institute of Science

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Vedsar Kushwaha
M.Tech-Computational Science
Indian Institute of Science
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] tracking bugs superseded by blueprints

2015-02-18 Thread Bogdan Dobrelya
Hello.
There is an inconsistency in the triage process for Fuel bugs superseded by
blueprints.
The current approach is to set the Won't Fix status for such bugs.
But there are some cases we should clarify [0], [1].

I vote to not track superseded bugs separately: keep them as Won't Fix,
but move them back to Confirmed if a regression is discovered. And if we
want to backport an improvement tracked by a blueprint (just for an
exceptional case), let's assign milestones to the related bugs.

If we want to change the triage rules, let's announce that, so that people
won't get confused.

[0] https://bugs.launchpad.net/fuel/+bug/1383741
[1] https://bugs.launchpad.net/fuel/+bug/1422856

--
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-18 Thread Ihar Hrachyshka

On 02/18/2015 05:07 PM, Michael Krotscheck wrote:
 You got my intention right: I wanted to understand better what
 lead some people to create a private channel, what were their
 needs.
 
 
 I'm in a passworded channel, where the majority of members work on 
 OpenStack, but whose common denominator is We're in the same 
 organizational unit in HP. We talk about openstack, we talk about
 HP, we talk about burning man, we talk about movies, good places to
 drink - it's a nice little backchannel of idle chatter. There have
 been a few times when things related to OpenStack came up, and in
 that case we've booted the topic to a public channel (There was an
 example just yesterday). Either way, in this case a private channel
 was created because we could potentially be discussing corporate
 things, it's more analogous to your Teams' internal Hipchat or IRC
 server (in fact, it started in HipChat, and then we were all 'why
 do we have to use another chat client' and that ended that).
 
 So there's one use case.
 
 Michael
 

I think the use case is very valid. I think most (all?) companies have
internal channels. That said, those should be confined to downstream-only
work and burning man. If an upstream topic arises, people should have the
discipline to move the discussion to upstream channels. AFAIK that's what
we try to do in Red Hat, and I guess it's a valid approach that helps both
the company in question, by getting attention to issues its downstream
teams are interested in, and the community.

Cheers,
/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest] Question about test_external_network_visibility

2015-02-18 Thread Albert
Hi guys,

What is the reason behind this test? Why can I not have an external network that 
is shared amongst all the tenants? Also, according to 
http://docs.openstack.org/icehouse/install-guide/install/apt-debian/content/neutron_initial-external-network.html :

 The admin tenant owns this network because it provides external network access 
for multiple tenants. You must also enable sharing to allow access by those 
tenants.

So I'm not sure what this test is supposed to test, and what counts as expected 
or erroneous behaviour.

Can someone shed some light on this matter?

best,
Albert




Re: [openstack-dev] [Keystone] [devstack] About _member_ role

2015-02-18 Thread Pasquale Porreca
Analyzing Horizon code I can confirm that the existence of _member_ role
is required, so the commit https://review.openstack.org/#/c/150667/
introduced the bug in devstack. More details and a fix proposal in my
change submission: https://review.openstack.org/#/c/156527/

On 02/18/15 10:04, Pasquale Porreca wrote:
 I saw 2 different bug reports that the Devstack dashboard gives an error when
 trying to manage projects:
 https://bugs.launchpad.net/devstack/+bug/1421616 and
 https://bugs.launchpad.net/horizon/+bug/1421999
 In my devstack environment projects were working just fine, so I tried a
 fresh installation to see if I could reproduce the bug, and I could
 confirm that the bug is indeed present in the current devstack deployment.
 Both reports point to the lack of the _member_ role as the cause of this
 error, so I just tried to manually (i.e. via CLI) add a _member_ role, and
 I verified that just having it - even if not assigned to any user - fixes
 project management in Horizon.

 I didn't deeply analyze the root cause of this yet, but this behaviour
 seemed quite weird, which is the reason I sent this mail to the dev list.
 Your explanation somewhat confirmed my doubts: I presume that adding a
 _member_ role is merely a workaround and the real bug is somewhere else,
 most likely in the Horizon code.
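For what it's worth, the membership rule Jamie describes below can be modelled in a few lines of toy Python (not keystone code; the role names and permission set are made up):

```python
# Toy model of the rule discussed in this thread: any role on a project
# grants membership (i.e. the ability to scope a token to it), while
# "_member_" itself carries no permissions, so it is effectively
# ignored once any other role is present.
ROLE_PERMS = {"_member_": set(), "admin": {"manage_project"}}

def can_scope_token(assigned_roles):
    return bool(assigned_roles)

def effective_perms(assigned_roles):
    perms = set()
    for role in assigned_roles:
        perms |= ROLE_PERMS.get(role, set())
    return perms

print(can_scope_token(["_member_"]))           # True: member of the project
print(effective_perms(["_member_"]))           # set(): no extra rights
print(effective_perms(["_member_", "admin"]))  # {'manage_project'}
```

This is why creating the role manually "fixes" Horizon: Horizon apparently expects the role to exist, even though keystone only needs some role assignment for membership.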

 On 02/17/15 21:01, Jamie Lennox wrote:
 - Original Message -
 From: Pasquale Porreca pasquale.porr...@dektech.com.au
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, 17 February, 2015 9:07:14 PM
 Subject: [openstack-dev]  [Keystone] [devstack] About _member_ role

 I proposed a fix for a bug in devstack
 https://review.openstack.org/#/c/156527/ caused by the fact the role
 _member_ was not anymore created due to a recent change.

 But why is the existence of the _member_ role necessary, even if it is not
 necessary to use it? Is this a known/wanted feature or a bug in itself?
 So the way to be a 'member' of a project so that you can get a token scoped 
 to that project is to have a role defined on that project. 
 The way we would handle that from keystone for default_projects is to create 
 a default role _member_ which had no permissions attached to it, but by 
 assigning it to the user on the project we granted membership of that 
 project.
 If the user has any other roles on the project then the _member_ role is 
 essentially ignored. 

 In that devstack patch I removed the default project because we want our 
 users to explicitly ask for the project they want to be scoped to.
 This patch shouldn't have caused any issues though because in each of those 
 cases the user is immediately granted a different role on the project - 
 therefore having 'membership'. 

 Creating the _member_ role manually won't cause any problems, but what issue 
 are you seeing where you need it?


 Jamie


 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Python code in fuel-library

2015-02-18 Thread Jay Pipes

On 02/18/2015 04:57 AM, Sebastian Kalinowski wrote:

Hello Fuelers,

There is more and more Python code appearing in fuel-library [1] that is
used in our Puppet manifests. Now, with the introduction of the Granular
Deployment feature, it could appear even more often, as
writing some tasks as Python scripts is a nice option.

First problem that I see is that in some cases this code is getting
merged without a positive review from a Python developer on the Fuel team.
My proposed solution is simple:
fuel-library core reviewers shouldn't merge such code if there is no
+1 from a Python developer from the fuel-core group [2].

Second problem is that there are no automatic tests for this code.
Testing it manually and by running deployments where that code is used is
not enough, since those scripts can be quite large and complicated, and
some of them are executed only in specific situations, so it is hard for
reviewers to check how they will work.
In fuel-library we already have tests for Puppet modules: [3].
I suggest that we should introduce similar checks for Python code:
  - there will be one global 'test-requirements.txt' file (if there is
a need, we could introduce a more granular split, like per module)
  - py.test [4] will be used as the test runner
  - (optional, but advised) flake8+hacking checks [5] (could be limited
to just running flake8/pyflakes checks)
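To make the proposal concrete, a hypothetical task helper and its py.test-style test might look like this (the function name and behaviour are invented for illustration, not taken from fuel-library):

```python
# A small helper such as a granular-deployment task script might carry.
def parse_node_uids(raw):
    """Parse a comma-separated node UID list into integers."""
    return [int(uid) for uid in raw.split(",") if uid.strip()]

# py.test would collect any test_* function automatically; no runner
# boilerplate is needed in the module itself.
def test_parse_node_uids():
    assert parse_node_uids("1,2,3") == [1, 2, 3]
    assert parse_node_uids("") == []

test_parse_node_uids()
print("ok")
```

Running `py.test` against a directory of such modules, with flake8 as a separate tox environment, is all the proposed gate would need.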

Looking forward to your opinions on those two issues.


Hi Seba,

All those suggestions look fine to me. I'd also suggest improving the 
documentation on how to write and run Python tests, to help out those 
developers who are not as familiar with Python as with Ruby or other languages.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2015-02-18 Thread Brent Eagles
Hi Maxime, Neil,

On 16/01/2015 1:39 PM, Maxime Leroy wrote:
 On Wed, Jan 14, 2015 at 6:16 PM, Neil Jerram neil.jer...@metaswitch.com 
 wrote:
 Maxime Leroy maxime.le...@6wind.com writes:

 Ok, thank you for the details. I will look how to implement this feature.

 Hi Maxime,

 Did you have time yet to start looking at this?  My team now has a use
 case that could be met by using vif_plug_script, so I would be
 happy to help with writing the spec for that.  Would that be of interest
 to you?

 Thanks,
 Neil
 
 Hi Neil,
 
 I have planned to look later how to implement this new feature.
 As we are in feature freeze for Nova and Neutron, there is no hurry right now.
 
 I think we need to have these 2 news specs ready before the next summit.
 
 Of course, any help is welcome ! ;)
 I have just created an etherpad to write the spec for Nova:
 https://etherpad.openstack.org/p/nova_vif_plug_script_spec
 Feel free to modify it.
 
 Thanks,
 
 Maxime

I want to get the ball rolling on this ASAP, so I've started on this as
well and will be updating the etherpad accordingly. I'm also keen to get

W.I.P./P.O.C. patches to go along with it. I'll notify on the mailing
list (and direct so you don't miss it ;)) as soon as I've completed a
reasonable first swipe through the spec (which should be in the next day
or so).

Cheers,

Brent




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] update on Puppet integration in Kilo

2015-02-18 Thread Derek Higgins


On 11/02/15 17:06, Dan Prince wrote:

I wanted to take a few minutes to go over the progress we've made with
TripleO Puppet in Kilo so far.

For those unfamiliar with the effort, our initial goal was to be able to
use Puppet as the configuration tool for a TripleO deployment stack.
This is largely built around a Heat capability added in Icehouse called
Software Deployments. By making use of the Software Deployment
Puppet hook and building our images with a few Puppet-specific elements
we can integrate with Puppet as a configuration tool. There has been no
blueprint on this effort... blueprints seemed a bit rigid for the task
at hand. After demoing the proof-of-concept patches in Paris we've been
tracking progress on an etherpad here:

https://etherpad.openstack.org/p/puppet-integration-in-heat-tripleo

Lots of details in that etherpad. But I would like to highlight a few
things:

As of a week or so all of the code needed to run devtest_overcloud.sh to
completion using Puppet (and Fedora packages) has landed. Several
upstream TripleO developers have been successful in setting up a Puppet
overcloud using this process.

As of last Friday we have a running CI job! I'm actually very excited
about this one for several reasons. First, CI is going to be crucial in
completing the rest of the Puppet feature work around HA, etc. Second,
because this job requires packages... and a fairly recent Heat
release... we are using a new upstream packaging tool called Delorean.
Delorean makes it very easy to go back in time, so if the upstream
packages break for some reason, plugging in a stable repo from yesterday,
or 5 minutes ago, should be a quick fix... Lots of things to potentially
talk about in this area around CI on various projects.

The puppet deployment is also proving to be quite configurable. We have
a Heat template parameter called 'EnablePackageInstall' which can be
used to enable or disable Yum package installation at deployment time.
So if you want to do traditional image based deployment with images
containing all of your packages you can do that (no Yum repositories
required). Or if you want to roll out images and install or upgrade
packages at deployment time (rather than image build time) you can do
that too... all by simply modifying this parameter. I think this sort of
configurability should prove useful to those who want a bit of choice
with regards to how packages and the like get installed.
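
[Editor's illustration] A minimal sketch of how such a boolean parameter can gate deployment-time installs. Only the 'EnablePackageInstall' parameter name comes from the message above; the function and package names are invented for illustration:

```python
def package_install_cmd(params, packages):
    """Return the install command a deploy-time hook would run, or
    None when the image is expected to already contain the packages."""
    if params.get("EnablePackageInstall", False):
        return ["yum", "-y", "install"] + list(packages)
    return None  # image-based deployment: no Yum repositories required

print(package_install_cmd({"EnablePackageInstall": True},
                          ["openstack-nova-compute"]))
# -> ['yum', '-y', 'install', 'openstack-nova-compute']
```

Flipping the single parameter switches between image-based and deploy-time package delivery without changing anything else in the stack.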

Lots of work is still ongoing (documented in the etherpad above for
now). I would love to see multi-distro support for Puppet configuration
w/ TripleO. At present time we are developing and testing w/ Fedora...
but because puppet is used as a configuration tool I would say adding
multi-distro support should be fairly straightforward. Just a couple of
bits in the tripleo-puppet-elements... and perhaps some upstream
packages too (Delorean would be a good fit here for multi-distro too).

Great work! I'd love to see us either replace this Fedora job with a
CentOS one or add another job for CentOS; I think this would better
represent what we expect end users to deploy. Of course we would need
to do a bit of work to make that happen, and I'd be willing to help out
here if we decide to do it.




Also, the feedback from those in the Puppet community has been
excellent. Emilien Macchi, Yanis Guenene, Spencer Krum, and Colleen
Murphy have all been quite helpful with questions about style, how to
best use the modules, etc.

Likewise, Steve Hardy and Steve Baker have been very helpful in
addressing issues in the Heat templates.

Appreciate all the help and feedback.

Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-18 Thread Ihar Hrachyshka

On 02/18/2015 08:14 AM, YAMAMOTO Takashi wrote:
 hi,
 
 On Wednesday, 18 de February de 2015 at 07:00,
 yamam...@valinux.co.jp wrote:
 hi,
 
  i want to add an extra requirement specific to OVS-agent.
  (namely, I want to add ryu for the ovs-ofctl-to-python blueprint
  [1], but the question is not specific to the blueprint.) to
  avoid messing up deployments without OVS-agent, such a
  requirement should be per-agent/driver/plugin/etc. however,
  there currently seems to be no standard mechanism for such a
  requirement.
 
 
 
 
 Awesome, I was thinking of the same a few days ago, we make lots 
 and lots of calls to ovs-ofctl, and we will do more if we change
 to security groups/routers in OF, if that proves to be efficient,
 and we get CT.
 
 CT?
 
 
 After this change, what would be the differences of ofagent to
 ovs-agent ?
 
  I guess OVS sets rules in advance, while ofagent works as a
  normal OF controller?
 
 the basic architecture will be same.
 
 actually it was suggested to merge two agents during spec review. i
 think it's a good idea for longer term.  (but unlikely for kilo)
 
 
 
 
 some ideas:
 
 a. don't bother to make it per-agent. add it to neutron's
 requirements. (and global-requirement) simple, but this would
 make non-ovs plugin users unhappy.
 
 I would simply go with a, what’s the ryu’s internal requirement
 list? is it big?
 
 no additional requirements as far as we use only openflow part of
 ryu.

Then I suggest just going with a. If you want to make distributions
happier, just make sure you mark appropriate entries in
requirements.txt with some metadata to indicate it's an OVS-plugin-only
dependency.
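
[Editor's illustration] As a sketch of that marker idea (the '# ovs' tag format is invented here, not an existing convention), a packaging tool could filter a requirements.txt like this:

```python
def ovs_only_requirements(lines):
    """Return entries whose trailing comment carries an 'ovs' marker."""
    tagged = []
    for line in lines:
        requirement, _, comment = line.partition("#")
        if requirement.strip() and "ovs" in comment.split():
            tagged.append(requirement.strip())
    return tagged

reqs = [
    "oslo.config>=1.6.0",
    "ryu>=3.18  # ovs agent only",  # hypothetical marker comment
]
print(ovs_only_requirements(reqs))  # -> ['ryu>=3.18']
```

A distribution could then drop (or split out) the tagged entries when building packages for deployments that don't ship the OVS agent.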

 
 
 
 b. make devstack look at per-agent extra requirements file in
 neutron tree. eg. neutron/plugins/$Q_AGENT/requirements.txt
 
 IMHO that would make distribution work a bit harder because we 
 may need to process new requirement files, but my answer could
 depend on what I asked for a.
 
 probably. i guess distributors can speak up.

I am packaging neutron for RDO, and I speak up. I don't think
maintaining multiple requirements files is a good option, for it will
require updating packaging tools not to miss updates for the files.
Also, how would you make sure those dependencies stay consistent with
openstack/requirements repo? Do you plan to update tooling around the
bot that proposes updates to requirements.txt files in each project?

I think this option requires too much from those who will implement it
in all the tools around, and does not seem to justify itself in
comparison with trivial option a.

 
 
 c. move OVS agent to a separate repository, just like other 
 after-decomposition vendor plugins. and use requirements.txt
 there. for longer term, this might be a way to go. but i don't
 want to block my work until it happens.
 

That's a proper direction long term, but as it was already suggested
here, it's not going to happen shortly, so no need to block your work
on it.

 
 
 We’re not ready for that yet, as co-gating has proven as a bad
 strategy and we need to keep the reference implementation working
 for tests.
 
 i agree that it will not likely be ready in near future.
 
 YAMAMOTO Takashi
 
 
 d. follow the way how openvswitch is installed by devstack. a
 downside: we can't give a jenkins run for a patch which
 introduces an extra requirement. (like my patch for the
 mentioned blueprint [2])
 
 i think b. is the most reasonable choice, at least for
 short/mid term.
 
 any comments/thoughts?
 
 YAMAMOTO Takashi
 
 [1]
 https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python

 
[2] https://review.openstack.org/#/c/153946/
 
 


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-18 Thread Jay Pipes

On 02/18/2015 01:19 AM, Joe Cropper wrote:

Along these lines—dare I bring up the topic of providing an enhanced
mechanism to determine which filter(s) contributed to NoValidHost
exceptions?  Do others ever hear about operators getting this, and then
having no idea why a VM deploy failed?  This is likely another thread,
but thought I’d pose it here to see if we think this might be a
potential blueprint as well.


I think that's a great idea, Joe. Definitely something we should tackle 
in Gantt once the scheduler is broken out of Nova.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Python code in fuel-library

2015-02-18 Thread Vladimir Kuklin
Hi, Seb

Very fair point, thank you. We need to add this to our jobs for unit test
runs and syntax checks. I am adding Aleksandr Didenko to the loop as he is
currently working on a similar task.

On Wed, Feb 18, 2015 at 4:53 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/18/2015 04:57 AM, Sebastian Kalinowski wrote:

 Hello Fuelers,

 There is more and more Python code appearing in fuel-library [1] that is
 used in our Puppet manifests. Now, with introduction of Granular
 Deployment feature it could appear more often as
 writing some tasks as a Python script is a nice option.

 First problem that I see is that in some cases this code is getting
 merged without a positive review from a Python developer from Fuel team.
 My proposition of the solution is simple:
  fuel-library core reviewers shouldn't merge such code if there is no
  +1 from a Python developer from the fuel-core group [2].

 Second problem is that there are no automatic tests for this code.
  Testing it manually by running a deployment that uses the code is
  not enough, since those scripts could be quite large and complicated, and
  some of them are executed in specific situations, so it is hard for
  reviewers to check how they will work.
 In fuel-library we already have tests for Puppet modules: [3].
 I suggest that we should introduce similar checks for Python code:
   - there will be one global 'test-requirements.txt' file (if there will
 be a need to, we could introduce more granular split, like per module)
   - py.test [4] will be used as a test runner
   - (optional, but advised) flake8+hacking checks [5] (could be limited
 to just run flake8/pyflakes checks)

 Looking forward to your opinions on those two issues.


 Hi Seba,

  All those suggestions look fine to me. I'd also suggest improving the
  documentation on how to write and run Python tests, to help out those
  developers who are more familiar with Ruby or other languages than with Python.

 Best,
 -jay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pci_alias config

2015-02-18 Thread Robert Li (baoli)
If you just use SR-IOV for networking, then pci_alias is not needed.

—Robert

On 2/16/15, 3:11 PM, Harish Patil harish.pa...@qlogic.com wrote:

Hello,

Do we still need the "pci_alias" config under /etc/nova/nova.conf for SR-IOV
PCI passthrough?

I have Juno release of 1:2014.2.1-0ubuntu1.

Thanks,

Harish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] LOG.debug()

2015-02-18 Thread Jordan Pittier
Hi,
Also, make sure you have :
debug = True
verbose = True
in the [DEFAULT] section of your nova.conf

Jordan
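
[Editor's illustration] To make the mechanics concrete, here is a plain-stdlib sketch (not nova's actual oslo logging setup): `LOG.debug()` writes to whatever handler the logger is configured with, and only once the DEBUG level is enabled, which is what `debug = True` turns on:

```python
import io
import logging

# nova modules do:  LOG = logging.getLogger(__name__)
LOG = logging.getLogger("nova.scheduler.filters.ram_filter")
log_sink = io.StringIO()              # stands in for nova-compute.log
LOG.addHandler(logging.StreamHandler(log_sink))

LOG.setLevel(logging.INFO)
LOG.debug("not enough RAM")           # below the level: silently dropped
LOG.setLevel(logging.DEBUG)           # the effect of debug = True
LOG.debug("not enough RAM on host")   # now written to the sink

print(repr(log_sink.getvalue()))      # -> 'not enough RAM on host\n'
```

So the answer to "where is it stored" is: wherever the service's log handler points, typically the per-service log file mentioned above.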

On Wed, Feb 18, 2015 at 3:56 PM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 Please see
 http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
 If you have installed from packages this may be in
 /var/log/nova/nova-compute.log
 Thanks
 Gary

   From: Vedsar Kushwaha vedsarkushw...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, February 18, 2015 at 4:52 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] LOG.debug()

   Hello World,

  I'm new to openstack and python too :).

 In the file:

 https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py

  Where does LOG.debug() store the information?



 --
   Vedsar Kushwaha
 M.Tech-Computational Science
 Indian Institute of Science

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-18 Thread Michael Krotscheck

 You got my intention right: I wanted to understand better what led
 some people to create a private channel, what were their needs.


I'm in a passworded channel, where the majority of members work on
OpenStack, but whose common denominator is "we're in the same
organizational unit in HP". We talk about OpenStack, we talk about HP, we
talk about burning man, we talk about movies, good places to drink - it's a
nice little backchannel of idle chatter. There have been a few times when
things related to OpenStack came up, and in that case we've booted the
topic to a public channel (There was an example just yesterday). Either
way, in this case a private channel was created because we could
potentially be discussing corporate things, it's more analogous to your
Teams' internal Hipchat or IRC server (in fact, it started in HipChat, and
then we were all 'why do we have to use another chat client' and that ended
that).

So there's one use case.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Match network topology in DB vs. network topology in nodes

2015-02-18 Thread Leo Y
Hello,

I am looking for a way to match the information about network topology (such
as interface+port or routing) that is stored in the Neutron DB against the
actual information known by the Neutron agent(s) on the compute node.

I would like to do it in one of the following ways:

Way #1:

1. Query Neutron data from the DB
2. Discover to which compute node it belongs
3. On that node, access the internal data and compare it with what I read
from the DB

Way #2:
1. For each compute node, access the internal data
2. Find this data in the DB and verify that it matches

I don't know how I can access the internal data of the Neutron agent(s) on
the compute node. I would appreciate any help and advice.

I am also not sure whether it is possible to read Neutron data from the DB
and discover to which compute node it belongs. Please advise if it is
possible.
-- 
Regards,
LeonidY
-
I enjoy the massacre of ads. This sentence will slaughter ads without a
messy bloodbath
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] LOG.debug()

2015-02-18 Thread Gary Kotton
Hi,
Please see 
http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
If you have installed from packages this may be in 
/var/log/nova/nova-compute.log
Thanks
Gary

From: Vedsar Kushwaha vedsarkushw...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Wednesday, February 18, 2015 at 4:52 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: [openstack-dev] LOG.debug()
Subject: [openstack-dev] LOG.debug()

Hello World,

I'm new to openstack and python too :).

In the file:
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py

Where does LOG.debug() store the information?



--
Vedsar Kushwaha
M.Tech-Computational Science
Indian Institute of Science
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-18 Thread John Dickinson
Story time. (For the record, the -swift channel is logged and I don't know of 
any private Swift IRC channels. I fully support logging every OpenStack IRC 
channel.)

I can understand why people might be hesitant to have a publicly logged 
channel. About a year ago, one of the Swift core devs said something offhand 
out of frustration in our channel. His employer, a prominent OpenStack 
contributing company, did not like what was said, and he was confronted about 
his comment in person at the office.[1]

Now, that might have been a one-time thing. And I think it was horrible and 
terrible to think that our contributors cannot be open and must self-censor in 
case something is said that can be misinterpreted or negatively used against 
them or their project. I know I personally self-censor what I say in OpenStack 
IRC channels. Text-based mediums lose a lot for communication, even more when 
it's historical logs, and I don't want to say something that is easily taken 
out of context or used against me or Swift.

My point is that while I support logging every OpenStack channel, please 
realize that it does come with a cost. Think back to the conversations that 
happen over drinks late at night at OpenStack Summits. I've had many of those 
with many of you. What's said there is private and wouldn't be said in a public 
IRC channel. And that's ok. People need a way to brainstorm ideas and express 
frustration, and a public place isn't generally where that happens.


--John



[1] I leave it at that for now, since it was a long time ago and the details 
aren't important for the point of this email. Actually I'm glad that it 
happened briefly before our channel was logged, so you can't go back and find 
it. I've not heard of any other incidents like this happening before or since.





 On Feb 18, 2015, at 3:16 AM, Chmouel Boudjnah chmo...@enovance.com wrote:
 
 Daniel P. Berrange berra...@redhat.com writes:
 
 Personally I think all our IRC channels should be logged. There is really
 no expectation of privacy when using IRC in an open collaborative project.
 
 Agreed with Daniel. I am not sure how a publicly available forum/channel
 can be assumed that there is not going to be any records available
 publicly.
 
 Chmouel
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] LOG.debug()

2015-02-18 Thread Vedsar Kushwaha
Hello World,

I'm new to openstack and python too :).

In the file:
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py

Where does LOG.debug() store the information?



-- 
Vedsar Kushwaha
M.Tech-Computational Science
Indian Institute of Science
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-18 Thread Doug Hellmann


On Wed, Feb 18, 2015, at 05:40 AM, Daniel P. Berrange wrote:
 On Tue, Feb 17, 2015 at 09:29:19AM -0800, Clint Byrum wrote:
  Excerpts from Daniel P. Berrange's message of 2015-02-17 02:37:50 -0800:
   On Wed, Feb 11, 2015 at 03:14:39PM +0100, Stefano Maffulli wrote:
 ## Cores are *NOT* special
 
 At some point, for some reason that is unknown to me, this message
  changed and the feeling of cores being some kind of superheroes became
  a thing. It's gotten to the point that I've come to know
 that some projects even have private (flagged with +s), password
 protected, irc channels for core reviewers.

This is seriously disturbing.

If you're one of those core reviewers hanging out on a private channel,
please contact me privately: I'd love to hear from you why we failed as
a community at convincing you that an open channel is the place to be.

No public shaming, please: education first.
   
   I've been thinking about these last few lines a bit, I'm not entirely
   comfortable with the dynamic this sets up.
   
   What primarily concerns me is the issue of community accountability. A 
   core
   feature of OpenStack's project  individual team governance is the idea
   of democractic elections, where the individual contributors can vote in
   people who they think will lead OpenStack in a positive way, or conversely
   hold leadership to account by voting them out next time. The ability of
   individuals contributors to exercise this freedom though, relies on the
   voters being well informed about what is happening in the community.
   
   If cases of bad community behaviour, such as use of passwd protected IRC
   channels, are always primarily dealt with via further private 
   communications,
   then we are denying the voters the information they need to hold people to
   account. I can understand the desire to avoid publically shaming people
   right away, because the accusations may be false, or may be arising from a
   simple mis-understanding, but at some point genuine issues like this need
   to be public. Without this we make it difficult for contributors to make
   an informed decision at future elections.
   
   Right now, this thread has left me wondering whether there are still any
   projects which are using password protected IRC channels, or whether they
   have all been deleted, and whether I will be unwittingly voting for people
   who supported their use in future openstack elections.
   
  
  Shaming a person is a last resort, when that person may not listen to
  reason. It's sometimes necessary to bring shame to a practice, but even
  then, those who are participating are now draped in shame as well and
  will have a hard time saving face.
 
 This really isn't about trying to shame people, rather it is about
 having accountability in the open.
 
 If the accusations of running private IRC channels were false, then
 yes, it would be an example of shaming to then publicise those who
 were accused.
 
 Since it is confirmed that private password protected IRC channels
 do in fact exist, then we need to have the explanations as to why
 this was done be made in public. The community can then decide
 whether the explanations offered provide sufficient justification.
 This isn't about shaming, it is about each individual being able
 to decide for themselves as to whether what happened was acceptable,
 given the explanations.

Right. And Stef is pulling that information together from the
appropriate sources. Sometimes it's easier to have those sorts of
conversations one-on-one than in a fully public forum. When we have the
full picture, then we'll know whether further action is needed (I hope
the team decides to close down the channel on their own, for example).
In any case, we will publish the facts. But let's give Stef time to work
on it, first.

Doug

 
 Regards,
 Daniel
 -- 
 |: http://berrange.com  -o-   
 http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o-
 http://virt-manager.org :|
 |: http://autobuild.org   -o-
 http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-  
 http://live.gnome.org/gtk-vnc :|
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Donald Stufft

 On Feb 18, 2015, at 10:14 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
 On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
 On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 08:50 PM, Ian Cordasco wrote:
 On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,
 
 The os-ansible-deployment team was working on updates to add support
 for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for
 requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].
 
 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the
 middle
 of the night on the weekend). I’m also aware that a lot of the
 downstream
 redistributors tend to work from global-requirements.txt when
 determining
 what to package/support.
 
 It seems to me like there’s room to clean up some of these
 requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).
 
 I think that's the idea. These requirements were generated
 automatically, and fixed issues that were holding back several
 projects.
 Now we can apply updates to them by hand, to either move the lower
 bounds down (as in the case Ihar pointed out with stevedore) or clean
 up
 the range definitions. We should not raise the limits of any Oslo
 libraries, and we should consider raising the limits of third-party
 libraries very carefully.
 
 We should make those changes on one library at a time, so we can see
 what effect each change has on the other requirements.
 
 
 I also understand that stable-maint may want to occasionally bump the
 caps
 to see if newer versions will not break everything, so what is the
 right
 way forward? What is the best way to both maintain a stable branch
 with
 known working dependencies while helping out those who do so much work
 for
 us (downstream and stable-maint) and not permanently pinning to
 certain
 working versions?
 
 Managing the upper bounds is still under discussion. Sean pointed out
 that we might want hard caps so that updates to stable branch were
 explicit. I can see either side of that argument and am still on the
 fence about the best approach.
 
 History has shown that it's too much work keeping testing functioning
 for stable branches if we leave dependencies uncapped. If particular
 people are interested in bumping versions when releases happen, it's
 easy enough to do with a requirements proposed update. It will even run
 tests that in most cases will prove that it works.
 
 It might even be possible for someone to build some automation that did
 that as stuff from pypi released so we could have the best of both
 worlds. But I think capping is definitely something we want as a
 project, and it reflects the way that most deployments will consume this
 code.
 
-Sean
 
 --
 Sean Dague
 http://dague.net
 
 Right. No one is disputing the very clear benefits of all of this.
 
 I’m just wondering if for the example version identifiers that I gave in
 my original message (and others that are very similar) if we want to make
 the strings much simpler for people who tend to work from them (i.e.,
 downstream re-distributors whose jobs are already difficult enough). I’ve
 offered to help at least one of them in the past who maintains all of
 their distro’s packages themselves, but they refused so I’d like to help
 them any way possible. Especially if any of them chime in as this being
 something that would be helpful.
 
 Ok, your links got kind of scrambled. Can you next time please inline
 the key relevant content in the email, because I think we all missed the
 original message intent as the key content was only in footnotes.
 
 From my point of view, normalization patches would be fine.
 
 requests>=1.2.1,!=2.4.0,<=2.2.1
 
 Is actually an odd one, because that's still there because we're using
 Trusty level requests in the tests, and my ability to have devstack not
 install that has thus far failed.
 
 Things like:
 
 osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
 
 Can clearly be normalized to osprofiler==0.3.0 if you want to propose
 the patch manually.
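To make the normalization concrete: assuming the comparison operators stripped by the archiver were `>=` and `<=`, a toy evaluator (not pip's real resolution logic) shows why the `!=` clause in the requests line is inert and why the osprofiler range is effectively a pin:

```python
# Toy specifier evaluator, for illustration only -- not pip's logic.
def satisfies(version, spec):
    """Return True if `version` (e.g. "2.2.1") meets every clause of a
    comma-separated specifier like ">=0.3.0,<=0.3.0"."""
    v = tuple(int(x) for x in version.split("."))
    for clause in spec.split(","):
        op = clause.rstrip("0123456789.")          # ">=", "<=", "!=", "=="
        bound = tuple(int(x) for x in clause[len(op):].split("."))
        if not {">=": v >= bound, "<=": v <= bound,
                "==": v == bound, "!=": v != bound}[op]:
            return False
    return True

# requests: 2.4.0 is already outside <=2.2.1, so !=2.4.0 never matters
assert satisfies("2.2.1", ">=1.2.1,!=2.4.0,<=2.2.1")
assert not satisfies("2.4.0", ">=1.2.1,<=2.2.1")

# osprofiler: >=0.3.0,<=0.3.0 admits exactly one version, i.e. ==0.3.0
assert satisfies("0.3.0", ">=0.3.0,<=0.3.0")
assert not satisfies("0.3.1", ">=0.3.0,<=0.3.0")
```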
 
 
 global-requirements for stable branches serves two uses:
 
 1. Specify the set of dependencies that we would like to test against
 2.  A tool for downstream packagers to use when determining what to
 package/support.
 
 For #1, ideally we would like a set of all dependencies, including
 transitive, with explicit versions (very similar to the output of
 pip-freeze).

Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread Zane Bitter

On 16/02/15 16:06, Dmitri Zimine wrote:

2) Use functions, like Heat HOT or TOSCA:

HOT templates and TOSCA don't seem to have a concept of typed
variables to borrow from (please correct me if I missed it). But they
have functions: function: { function_name: {foo: [parameter1,
parameter2], bar: "xxx"}}. Applied to Mistral, it would look like:

 publish:
  - bool_var: { yaql: "1+1+$.my.var < 100" }

Not bad, but currently rejected as it reads worse than delimiter-based
syntax, especially in simplified one-line action invocation.


Note that you don't actually need the quotes there, so this would be 
equivalent:


publish:
 - bool_var: {yaql: 1+1+$.my.var < 100}

FWIW I am partial to this or to Renat's p7 suggestion:

publish:
 - bool_var: yaql{1+1+$.my.var < 100}

Both offer the flexibility to introduce new syntax in the future without 
breaking backwards compatibility.
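Zane's point about the quotes can be checked directly (requires PyYAML). Both forms parse to the same string, and the p7-style yaql{...} form also survives as a plain scalar; the `< 100` comparison is a reconstruction of the example's stripped operator:

```python
# Verify that quoting the YAQL expression is optional in YAML, and that
# the yaql{...} prefix form parses as an ordinary string.
import yaml

quoted = yaml.safe_load('bool_var: { yaql: "1+1+$.my.var < 100" }')
bare = yaml.safe_load('bool_var: {yaql: 1+1+$.my.var < 100}')
assert quoted == bare == {"bool_var": {"yaql": "1+1+$.my.var < 100"}}

p7 = yaml.safe_load("bool_var: yaql{1+1+$.my.var < 100}")
assert p7 == {"bool_var": "yaql{1+1+$.my.var < 100}"}
```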


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Doug Hellmann


On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
 On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
 
  On 02/16/2015 08:50 PM, Ian Cordasco wrote:
   On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
   On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
   Hey everyone,
  
   The os-ansible-deployment team was working on updates to add support
   for
   the latest version of juno and noticed some interesting version
   specifiers
   introduced into global-requirements.txt in January. It introduced some
   version specifiers that seem a bit impossible like the one for
  requests
   [1]. There are others that equate presently to pinning the versions of
   the
   packages [2, 3, 4].
  
   I understand fully and support the commit because of how it improves
   pretty much everyone’s quality of life (no fires to put out in the
   middle
   of the night on the weekend). I’m also aware that a lot of the
   downstream
   redistributors tend to work from global-requirements.txt when
   determining
   what to package/support.
  
   It seems to me like there’s room to clean up some of these
  requirements
   to
   make them far more explicit and less misleading to the human eye (even
   though tooling like pip can easily parse/understand these).
  
   I think that's the idea. These requirements were generated
   automatically, and fixed issues that were holding back several
  projects.
   Now we can apply updates to them by hand, to either move the lower
   bounds down (as in the case Ihar pointed out with stevedore) or clean
  up
   the range definitions. We should not raise the limits of any Oslo
   libraries, and we should consider raising the limits of third-party
   libraries very carefully.
  
   We should make those changes on one library at a time, so we can see
   what effect each change has on the other requirements.
  
  
   I also understand that stable-maint may want to occasionally bump the
   caps
   to see if newer versions will not break everything, so what is the
   right
   way forward? What is the best way to both maintain a stable branch
  with
   known working dependencies while helping out those who do so much work
   for
   us (downstream and stable-maint) and not permanently pinning to
  certain
   working versions?
  
   Managing the upper bounds is still under discussion. Sean pointed out
   that we might want hard caps so that updates to stable branch were
   explicit. I can see either side of that argument and am still on the
   fence about the best approach.
  
   History has shown that it's too much work keeping testing functioning
   for stable branches if we leave dependencies uncapped. If particular
   people are interested in bumping versions when releases happen, it's
   easy enough to do with a requirements proposed update. It will even run
   tests that in most cases will prove that it works.
  
   It might even be possible for someone to build some automation that did
   that as stuff from pypi released so we could have the best of both
   worlds. But I think capping is definitely something we want as a
   project, and it reflects the way that most deployments will consume this
   code.
  
-Sean
  
   --
   Sean Dague
   http://dague.net
  
   Right. No one is arguing the very clear benefits of all of this.
  
   I’m just wondering if for the example version identifiers that I gave in
   my original message (and others that are very similar) if we want to make
   the strings much simpler for people who tend to work from them (i.e.,
   downstream re-distributors whose jobs are already difficult enough). I’ve
   offered to help at least one of them in the past who maintains all of
   their distro’s packages themselves, but they refused so I’d like to help
    them any way possible. Especially if any of them chime in as this being
   something that would be helpful.
 
  Ok, your links got kind of scrambled. Can you next time please inline
  the key relevant content in the email, because I think we all missed the
  original message intent as the key content was only in footnotes.
 
  From my point of view, normalization patches would be fine.
 
   requests>=1.2.1,!=2.4.0,<=2.2.1
 
  Is actually an odd one, because that's still there because we're using
  Trusty level requests in the tests, and my ability to have devstack not
  install that has thus far failed.
 
  Things like:
 
   osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
 
  Can clearly be normalized to osprofiler==0.3.0 if you want to propose
  the patch manually.
 
 
 global-requirements for stable branches serves two uses:
 
 1. Specify the set of dependencies that we would like to test against
 2.  A tool for downstream packagers to use when determining what to
 package/support.
 
  For #1, ideally we would like a set of all dependencies, including
  transitive, with explicit versions (very similar to the output of
  pip-freeze). But for #2 the standard requirement file with a range is
  preferred.

Re: [openstack-dev] [Fuel] Python code in fuel-library

2015-02-18 Thread Aleksandr Didenko
Hi,

I agree that we need better testing for Python tasks/code. There should
be no problem adding py.test tests to fuel-library CI; we already have
one job [1] up and running. So I'm all in and ready to help implement
such testing.

[1] https://fuel-jenkins.mirantis.com/job/fuellib_tasks_graph_check/

Regards,
Aleksandr

On Wed, Feb 18, 2015 at 4:02 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Hi, Seb

 Very fair point, thank you. We need to add this to our jobs for unit-test
 runs and syntax checks. I am adding Aleksandr Didenko to the loop as he is
 currently working on a similar task.

 On Wed, Feb 18, 2015 at 4:53 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/18/2015 04:57 AM, Sebastian Kalinowski wrote:

 Hello Fuelers,

 There is more and more Python code appearing in fuel-library [1] that is
 used in our Puppet manifests. Now, with introduction of Granular
 Deployment feature it could appear more often as
 writing some tasks as a Python script is a nice option.

 The first problem I see is that in some cases this code is getting
 merged without a positive review from a Python developer on the Fuel team.
 My proposed solution is simple:
 fuel-library core reviewers shouldn't merge such code if there is no
 +1 from a Python developer in the fuel-core group [2].

 The second problem is that there are no automatic tests for this code.
 Testing it manually, or by running a deployment that uses it, is
 not enough, since those scripts can be quite large and complicated, and
 some of them are executed only in specific situations, so it is hard for
 reviewers to check how they will work.
 In fuel-library we already have tests for Puppet modules: [3].
 I suggest that we should introduce similar checks for Python code:
   - there will be one global 'test-requirements.txt' file (if there will
 be a need to, we could introduce more granular split, like per module)
   - py.test [4] will be used as a test runner
   - (optional, but advised) flake8+hacking checks [5] (could be limited
 to just run flake8/pyflakes checks)

 Looking forward to your opinions on those two issues.
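A minimal illustration of the kind of py.test check proposed above; `expand_node_range` is an invented stand-in for the small helpers these scripts contain, not real fuel-library code. py.test would collect any `test_*` function like this automatically:

```python
# Hypothetical fuel-library task helper plus a py.test-style unit test.
def expand_node_range(spec):
    """Expand a node-uid spec such as "1-3,5" into [1, 2, 3, 5]."""
    uids = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = (int(x) for x in part.split("-"))
            uids.extend(range(lo, hi + 1))
        else:
            uids.append(int(part))
    return uids

def test_expand_node_range():
    # py.test discovers and runs this by its test_ prefix
    assert expand_node_range("1-3,5") == [1, 2, 3, 5]
    assert expand_node_range("7") == [7]
```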


 Hi Seba,

 All those suggestions look fine to me. I'd also suggest improving the
 documentation on how to write and run Python tests, to help out those
 developers who are more familiar with Ruby or other languages than with
 Python.

 Best,
 -jay

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Donald Stufft

 On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
 On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 08:50 PM, Ian Cordasco wrote:
 On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,
 
 The os-ansible-deployment team was working on updates to add support
 for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for
 requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].
 
 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the
 middle
 of the night on the weekend). I’m also aware that a lot of the
 downstream
 redistributors tend to work from global-requirements.txt when
 determining
 what to package/support.
 
 It seems to me like there’s room to clean up some of these
 requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).
 
 I think that's the idea. These requirements were generated
 automatically, and fixed issues that were holding back several
 projects.
 Now we can apply updates to them by hand, to either move the lower
 bounds down (as in the case Ihar pointed out with stevedore) or clean
 up
 the range definitions. We should not raise the limits of any Oslo
 libraries, and we should consider raising the limits of third-party
 libraries very carefully.
 
 We should make those changes on one library at a time, so we can see
 what effect each change has on the other requirements.
 
 
 I also understand that stable-maint may want to occasionally bump the
 caps
 to see if newer versions will not break everything, so what is the
 right
 way forward? What is the best way to both maintain a stable branch
 with
 known working dependencies while helping out those who do so much work
 for
 us (downstream and stable-maint) and not permanently pinning to
 certain
 working versions?
 
 Managing the upper bounds is still under discussion. Sean pointed out
 that we might want hard caps so that updates to stable branch were
 explicit. I can see either side of that argument and am still on the
 fence about the best approach.
 
 History has shown that it's too much work keeping testing functioning
 for stable branches if we leave dependencies uncapped. If particular
 people are interested in bumping versions when releases happen, it's
 easy enough to do with a requirements proposed update. It will even run
 tests that in most cases will prove that it works.
 
 It might even be possible for someone to build some automation that did
 that as stuff from pypi released so we could have the best of both
 worlds. But I think capping is definitely something we want as a
 project, and it reflects the way that most deployments will consume this
 code.
 
 -Sean
 
 --
 Sean Dague
 http://dague.net
 
 Right. No one is arguing the very clear benefits of all of this.
 
 I’m just wondering if for the example version identifiers that I gave in
 my original message (and others that are very similar) if we want to make
 the strings much simpler for people who tend to work from them (i.e.,
 downstream re-distributors whose jobs are already difficult enough). I’ve
 offered to help at least one of them in the past who maintains all of
 their distro’s packages themselves, but they refused so I’d like to help
  them any way possible. Especially if any of them chime in as this being
 something that would be helpful.
 
 Ok, your links got kind of scrambled. Can you next time please inline
 the key relevant content in the email, because I think we all missed the
 original message intent as the key content was only in footnotes.
 
 From my point of view, normalization patches would be fine.
 
  requests>=1.2.1,!=2.4.0,<=2.2.1
 
 Is actually an odd one, because that's still there because we're using
 Trusty level requests in the tests, and my ability to have devstack not
 install that has thus far failed.
 
 Things like:
 
  osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
 
 Can clearly be normalized to osprofiler==0.3.0 if you want to propose
 the patch manually.
 
 
 global-requirements for stable branches serves two uses:
 
 1. Specify the set of dependencies that we would like to test against
 2.  A tool for downstream packagers to use when determining what to
 package/support.
 
 For #1, Ideally we would like a set of all dependencies, including
 transitive, with explicit versions (very similar to the output of
 pip-freeze). But for #2 the standard requirement file with a range is
 preferred. Putting an upper bound on each dependency, 

Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-18 Thread Russell Bryant
On 02/18/2015 09:58 AM, John Dickinson wrote:
 My point is that while I support logging every OpenStack channel, 
 please realize that it does come with a cost. Think back to the 
 conversations that happen over drinks late at night at OpenStack 
 Summits. I've had many of those with many of you. What's said
 there is private and wouldn't be said in a public IRC channel. And
 that's ok. People need a way to brainstorm ideas and express
 frustration, and a public place isn't generally where that
 happens.

Good point.  I agree that it comes at a cost.  I originally resisted
logging #openstack-nova because of that cost.  The channel used to be
much smaller and originally felt like a much more casual environment,
like bonding with the team over some beers.

That's not the reality anymore.  It's a very public forum (as it
should be) and useful discussions happen there.  I support logging all
of our OpenStack channels and have re-proposed doing so for -nova.

https://review.openstack.org/#/c/156979/

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Doug Hellmann


On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
  On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com wrote:
  
  
  
  On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
  On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
  
  On 02/16/2015 08:50 PM, Ian Cordasco wrote:
  On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
  
  On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
  On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
  Hey everyone,
  
  The os-ansible-deployment team was working on updates to add support
  for
  the latest version of juno and noticed some interesting version
  specifiers
  introduced into global-requirements.txt in January. It introduced some
  version specifiers that seem a bit impossible like the one for
  requests
  [1]. There are others that equate presently to pinning the versions of
  the
  packages [2, 3, 4].
  
  I understand fully and support the commit because of how it improves
  pretty much everyone’s quality of life (no fires to put out in the
  middle
  of the night on the weekend). I’m also aware that a lot of the
  downstream
  redistributors tend to work from global-requirements.txt when
  determining
  what to package/support.
  
  It seems to me like there’s room to clean up some of these
  requirements
  to
  make them far more explicit and less misleading to the human eye (even
  though tooling like pip can easily parse/understand these).
  
  I think that's the idea. These requirements were generated
  automatically, and fixed issues that were holding back several
  projects.
  Now we can apply updates to them by hand, to either move the lower
  bounds down (as in the case Ihar pointed out with stevedore) or clean
  up
  the range definitions. We should not raise the limits of any Oslo
  libraries, and we should consider raising the limits of third-party
  libraries very carefully.
  
  We should make those changes on one library at a time, so we can see
  what effect each change has on the other requirements.
  
  
  I also understand that stable-maint may want to occasionally bump the
  caps
  to see if newer versions will not break everything, so what is the
  right
  way forward? What is the best way to both maintain a stable branch
  with
  known working dependencies while helping out those who do so much work
  for
  us (downstream and stable-maint) and not permanently pinning to
  certain
  working versions?
  
  Managing the upper bounds is still under discussion. Sean pointed out
  that we might want hard caps so that updates to stable branch were
  explicit. I can see either side of that argument and am still on the
  fence about the best approach.
  
  History has shown that it's too much work keeping testing functioning
  for stable branches if we leave dependencies uncapped. If particular
  people are interested in bumping versions when releases happen, it's
  easy enough to do with a requirements proposed update. It will even run
  tests that in most cases will prove that it works.
  
  It might even be possible for someone to build some automation that did
  that as stuff from pypi released so we could have the best of both
  worlds. But I think capping is definitely something we want as a
  project, and it reflects the way that most deployments will consume this
  code.
  
  -Sean
  
  --
  Sean Dague
  http://dague.net
  
  Right. No one is arguing the very clear benefits of all of this.
  
  I’m just wondering if for the example version identifiers that I gave in
  my original message (and others that are very similar) if we want to make
  the strings much simpler for people who tend to work from them (i.e.,
  downstream re-distributors whose jobs are already difficult enough). I’ve
  offered to help at least one of them in the past who maintains all of
  their distro’s packages themselves, but they refused so I’d like to help
   them any way possible. Especially if any of them chime in as this being
  something that would be helpful.
  
  Ok, your links got kind of scrambled. Can you next time please inline
  the key relevant content in the email, because I think we all missed the
  original message intent as the key content was only in footnotes.
  
  From my point of view, normalization patches would be fine.
  
   requests>=1.2.1,!=2.4.0,<=2.2.1
  
  Is actually an odd one, because that's still there because we're using
  Trusty level requests in the tests, and my ability to have devstack not
  install that has thus far failed.
  
  Things like:
  
   osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
  
  Can clearly be normalized to osprofiler==0.3.0 if you want to propose
  the patch manually.
  
  
  global-requirements for stable branches serves two uses:
  
  1. Specify the set of dependencies that we would like to test against
  2.  A tool for downstream packagers to use when determining what to
  package/support.
  
   For #1, ideally we would like a set of all dependencies, including
   transitive, with explicit versions (very similar to the output of
   pip-freeze).

[openstack-dev] [glance] Import service module from oslo-incubator

2015-02-18 Thread Sampath, Lakshmi
Hi,

For the catalog index service we need the service module and its dependencies
imported from oslo-incubator.
https://review.openstack.org/#/c/152872/ has the required files. All are new
files and shouldn't impact any existing functionality.

I wanted to send this to a wider audience to see if there are any other
oslo-incubator syncs to Glance this release that might overlap; reviews
are otherwise appreciated.

Thanks
Lakshmi.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] LOG.debug()

2015-02-18 Thread Vedsar Kushwaha
thanks. that worked...:)

On Wed, Feb 18, 2015 at 9:30 PM, Jordan Pittier jordan.pitt...@scality.com
wrote:

 Hi,
 Also, make sure you have :
 debug = True
 verbose = True
 in the [DEFAULT] section of your nova.conf

 Jordan
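For readers new to Python, the effect of those flags mirrors plain stdlib logging: records below the configured level are simply dropped, which is why LOG.debug() output can seem to go nowhere. A minimal, self-contained sketch (not nova's actual logging setup):

```python
# Demonstrate level gating: like nova's `debug = True`, the logger's
# level must admit DEBUG records before LOG.debug() produces output.
import io
import logging

stream = io.StringIO()                    # stands in for a log file
LOG = logging.getLogger("ram_filter_demo")
LOG.addHandler(logging.StreamHandler(stream))

LOG.setLevel(logging.INFO)
LOG.debug("filtered out")                 # suppressed: below INFO

LOG.setLevel(logging.DEBUG)               # analogous to debug = True
LOG.debug("host has enough RAM")          # now emitted

assert "filtered out" not in stream.getvalue()
assert "host has enough RAM" in stream.getvalue()
```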

 On Wed, Feb 18, 2015 at 3:56 PM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 Please see
 http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
 If you have installed from packages this may be in
 /var/log/nova/nova-compute.log
 Thanks
 Gary

   From: Vedsar Kushwaha vedsarkushw...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, February 18, 2015 at 4:52 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] LOG.debug()

   Hello World,

  I'm new to openstack and python too :).

 In the file:

 https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py

  where does LOG.debug() store the information?



 --
   Vedsar Kushwaha
 M.Tech-Computational Science
 Indian Institute of Science

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Vedsar Kushwaha
M.Tech-Computational Science
Indian Institute of Science
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-18 Thread Flavio Percoco

On 18/02/15 16:07 +, Michael Krotscheck wrote:

   You got my intention right: I wanted to understand better what led
   some people to create a private channel and what their needs were.


I'm in a passworded channel, where the majority of members work on OpenStack,
but whose common denominator is We're in the same organizational unit in HP.
We talk about openstack, we talk about HP, we talk about burning man, we talk
about movies, good places to drink - it's a nice little backchannel of idle
chatter. There have been a few times when things related to OpenStack came up,
and in that case we've booted the topic to a public channel (There was an
example just yesterday). Either way, in this case a private channel was created
 because we could potentially be discussing corporate things; it's more
analogous to your team's internal HipChat or IRC server (in fact, it started in
HipChat, and then we were all 'why do we have to use another chat client' and
that ended that).

So there's one use case.



I think the above is perfectly fine and it has nothing to do with
OpenStack. What Stefano (and all of us) is trying to understand is why
part of our community needed a private IRC channel for core
reviewers to hang out together. For the latter, I don't think there's a
use case.

I'm not arguing on the general use case for a private IRC channel, I'm
arguing on the need of such channels for a specific set of core
reviewers.

Fla.

--
@flaper87
Flavio Percoco


pgpnqFoZzBFvF.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [glance] conclusion needed on functional API

2015-02-18 Thread Miguel Grinberg
Out of all the proposals mentioned in this thread, I think Jay's (d) option
comes closest to the REST ideal:

d) POST /images/{image_id}/tasks with payload:
   { action: deactivate|activate }

Even though I don't think this is the perfect solution, I can recognize
that at least it tries to be RESTful, unlike the other three options
suggested in the first message.

That said, I'm going to keep insisting that in a REST API state changes are
the most important thing, and actions are implicitly derived by the server
from these state changes requested by the client. What you are trying to do
is to reverse this flow, you want the client to invoke an action, which in
turn will cause an implicit state change on the server. This isn't wrong in
itself, it's just not the way you do REST.

Jay's (d) proposal above could be improved by making the task a real
resource. Sending a POST request to the /tasks address creates a new task
resource, which gets a URI of its own, returned in the Location header. You
can then send a GET request to this URI to obtain status info, such as
whether the task completed or not. And since tasks are now real resources,
they should have a documented representation as well.
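A toy, framework-free sketch of the task-as-resource pattern described above; the class, routes, and field names are illustrative, not the actual Glance API:

```python
# POST creates a task resource with its own URI (returned as the
# Location header would be); GET on that URI reports the task's status.
import itertools

class ImageTasks:
    def __init__(self):
        self._tasks = {}
        self._ids = itertools.count(1)

    def post(self, image_id, action):
        """Create a task resource; return the Location header value."""
        tid = next(self._ids)
        self._tasks[tid] = {"action": action, "status": "pending"}
        return f"/images/{image_id}/tasks/{tid}"

    def get(self, location):
        """Fetch the task resource behind a previously returned URI."""
        tid = int(location.rsplit("/", 1)[1])
        return self._tasks[tid]

api = ImageTasks()
loc = api.post("abc123", "deactivate")
assert loc == "/images/abc123/tasks/1"
assert api.get(loc)["status"] == "pending"
```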

Miguel

On Wed, Feb 18, 2015 at 1:19 PM, Brian Rosmaita 
brian.rosma...@rackspace.com wrote:

 On 2/15/15, 2:35 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 02/15/2015 01:13 PM, Brian Rosmaita wrote:
  On 2/15/15, 10:10 AM, Jay Pipes jaypi...@gmail.com wrote:
 
  On 02/15/2015 01:31 AM, Brian Rosmaita wrote:
  This is a follow-up to the discussion at the 12 February API-WG
  meeting [1] concerning functional API in Glance [2].  We made
  some progress, but need to close this off so the spec can be
  implemented in Kilo.
 
  I believe this is where we left off: 1. The general consensus was
  that POST is the correct verb.
 
  Yes, POST is correct (though the resource is wrong).
 
  2. Did not agree on what to POST.  Three options are in play: (A)
  POST /images/{image_id}?action=deactivate POST
  /images/{image_id}?action=reactivate
 
  (B) POST /images/{image_id}/actions with payload describing the
  action, e.g., { action: deactivate } { action: reactivate
  }
 
  (C) POST /images/{image_id}/actions/deactivate POST
  /images/{image_id}/actions/reactivate
 
  d) POST /images/{image_id}/tasks with payload: { action:
  deactivate|activate }
 
  An action isn't created. An action is taken. A task is created. A
  task contains instructions on what action to take.
 
  The Images API v2 already has tasks (schema available at
  /v2/schemas/tasks ), which are used for long-running asynchronous
  operations (right now, image import and image export).  I think we
  want to keep those distinct from what we're talking about here.
 
  Does something really need to be created for this call?  The idea
  behind the functional API was to have a place for things that don't
  fit neatly into the CRUD-centric paradigm.  Option (C) seems like a
  good fit for this.
 
 Why not just use the existing tasks/ interface, then? :) Seems like a
 perfect fit to me.

 The existing tasks/ interface is kind of heavyweight.  It provides a
 framework for asynchronous operations.  It's really not appropriate for
 this purpose.

 cheers,
 brian


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread Dmitri Zimine
 Syntax options that we’d like to discuss further 
 
 <% 1 + 1 %>  # pro - ruby/js/puppet/chef familiarity, con - spaces, and <% %> is too 
 large a symbol
 <{1 + 1}>  # pro - fewer spaces, con - no familiarity
 <? 1 + 1 ?>  # pro - php familiarity, con - needs spaces
 
 The primary criteria to select these 3 options is that they are YAML 
 compatible. Technically they all would solve our problems (primarily no 
 embracing quotes needed like in Ansible so no ambiguity on data types).
 
 The secondary criteria is syntax symmetry. After all I agree with Patrick's 
 point about better readability when we have opening and closing sequences 
 alike.

  
To me, another critical criterion is familiarity: the target users - OpenStack 
developers and devops - should already know the delimiters. 

That is why, of the three above, I prefer <% %>. 

It is commonly used in Puppet/Chef [1], Ruby, and JavaScript. One won't be 
surprised to see it and won't need to retrain muscle memory to type the 
opening/closing characters, especially when working on, say, Puppet and Mistral 
at the same time (not unlikely). 


[1] 
https://docs.puppetlabs.com/guides/templating.html#erb-is-plain-text-with-embedded-ruby
 


On Feb 18, 2015, at 3:20 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Hi again,
 
 Sorry, I started writing this email before Angus replied so I will shoot it 
 as is and then we can continue…
 
 
 So after discussing all the options again with a small group of team members 
 we came to the following things:
 
 Syntax options that we’d like to discuss further 
 
 <% 1 + 1 %>  # pro - ruby/js/puppet/chef familiarity; con - spaces, and <% %> 
 is a large symbol pair
 {1 + 1}  # pro - fewer spaces; con - no familiarity
 <? 1 + 1 ?>  # pro - php familiarity; con - needs spaces
 
 The primary criterion for selecting these 3 options is that they are YAML 
 compatible. Technically they would all solve our problems (primarily, no 
 enclosing quotes are needed as in Ansible, so there is no ambiguity on data types).
 
 The secondary criterion is syntax symmetry. After all, I agree with Patrick's 
 point about better readability when we have matching opening and closing 
 sequences.
 
 Some additional details can be found in [0]
 
 
 [0] https://etherpad.openstack.org/p/mistral-YAQL-delimiters
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 On 18 Feb 2015, at 07:37, Patrick Hoolboom patr...@stackstorm.com wrote:
 
  My main concern with the {} delimiters in YAQL is that the curly brace 
 already has a defined use within YAML.  We most definitely will eventually 
 run into parsing errors with whatever delimiter we choose, but I don't feel 
 that it should conflict with the markup language it is directly embedded in. 
 Otherwise it gets quite difficult to identify YAQL expressions at a glance.  <% %> 
 may appear ugly to some, but I feel that it works as a clear delimiter of 
 both the beginning AND the end of the YAQL query. The options that only 
 escape the beginning look fine in small examples like this, but the workflows 
 that we have written or seen in the wild tend to have some fairly large 
 expressions.  If the opening and closing delimiters don't match, it gets 
 quite difficult to read. 
 
 From: Anastasia Kuznetsova akuznets...@mirantis.com
 Subject: Re: [openstack-dev] [Mistral] Changing expression delimiters in 
 Mistral DSL
 Date: February 17, 2015 at 8:28:27 AM PST
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 As for me, I think that <% ... %> is not an elegant solution and looks 
 massive because of the '%' sign. Also I agree with Renat that <% ... %> 
 is reminiscent of HTML/Jinja2 syntax. 
 
 I am not sure that similarity with something should be one of the main 
 criteria, because we don't know who will use Mistral.
 
 I like:
 - {1 + $.var} Renat's example 
 - variant with using some functions (item 2 in Dmitry's list):  { yaql: 
 “1+1+$.my.var < 100” } or yaql: 'Hello' + $.name 
 - my two cents, maybe we can use something like: result: - Hello + 
 $.name -
 
 
 Regards,
 Anastasia Kuznetsova
 
 On Tue, Feb 17, 2015 at 1:17 PM, Nikolay Makhotkin 
 nmakhot...@mirantis.com wrote:
 Some suggestions from me: 
 
 1. y 1 + $.var  # (short from yaql).
 2. { 1 + $.var }  # as for me, looks more elegant than <% %>. And 
 visually it is stronger
 
 I also like p7 and p8 suggested by Renat.
 
 On Tue, Feb 17, 2015 at 11:43 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 One more:
 
 p9: \{1 + $.var}  # That’s pretty much what 
 https://review.openstack.org/#/c/155348/ addresses but it’s not exactly 
 that. Note that we don’t have to put it in quotes in this case to deal with 
 YAML {} semantics, it’s just a string
 
 
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 17 Feb 2015, at 13:37, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Along with the <% %> syntax, here are some other alternatives that I checked 
 for YAML friendliness with my short comments:
 
 p1: ${1 + $.var}   # 

Re: [openstack-dev] [api] [glance] conclusion needed on functional API

2015-02-18 Thread Brian Rosmaita
Thanks for your comment, Miguel.  Your suggestion is indeed very close to the 
RESTful ideal.

However, I have a question for the entire API-WG.  Our (proposed) mission is 
"To improve the developer experience of API users by converging the OpenStack 
API to a consistent and pragmatic RESTful design." [1]  My question is: what is 
the sense of "pragmatic" in this sentence?  I thought it meant that we advise 
the designers of OpenStack APIs to adhere to RESTful design as much as 
possible, but allow them to diverge where appropriate.  The proposed functional 
call to deactivate an image seems to be an appropriate place to deviate from 
the ideal.  Creating a task or action object so that the POST request will 
create a new resource does not seem very pragmatic.  I believe that a necessary 
component of encouraging OpenStack APIs to be consistent is to allow some 
pragmatism.

thanks,
brian

[1] https://review.openstack.org/#/c/155911/

On 2/18/15, 4:49 PM, Miguel Grinberg 
miguel.s.grinb...@gmail.commailto:miguel.s.grinb...@gmail.com wrote:
Out of all the proposals mentioned in this thread, I think Jay's (d) option is 
what is closer to the REST ideal:

d) POST /images/{image_id}/tasks with payload:
   { "action": "deactivate|activate" }

Even though I don't think this is the perfect solution, I can recognize that at 
least it tries to be RESTful, unlike the other three options suggested in the 
first message.

That said, I'm going to keep insisting that in a REST API state changes are the 
most important thing, and actions are implicitly derived by the server from 
these state changes requested by the client. What you are trying to do is to 
reverse this flow, you want the client to invoke an action, which in turn will 
cause an implicit state change on the server. This isn't wrong in itself, it's 
just not the way you do REST.

Jay's (d) proposal above could be improved by making the task a real resource. 
Sending a POST request to the /tasks address creates a new task resource, which 
gets a URI of its own, returned in the Location header. You can then send a GET 
request to this URI to obtain status info, such as whether the task completed 
or not. And since tasks are now real resources, they should have a documented 
representation as well.
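Miguel's flow can be sketched with a toy in-memory model - an illustration of the task-as-resource pattern only, not the actual Glance API (the endpoint shape and field names are assumptions):

```python
# Sketch: POST creates a task resource with its own URI (returned in a
# Location header); GET on that URI reports status.  Purely illustrative.
import itertools

class TaskStore:
    def __init__(self):
        self._ids = itertools.count(1)
        self._tasks = {}

    def post(self, image_id, payload):
        """POST /images/{image_id}/tasks -> 201 plus a Location header."""
        task_id = next(self._ids)
        self._tasks[task_id] = {"image": image_id,
                                "action": payload["action"],
                                "status": "pending"}
        location = "/images/%s/tasks/%d" % (image_id, task_id)
        return 201, {"Location": location}

    def get(self, task_id):
        """GET on the task URI -> a documented representation."""
        return 200, self._tasks[task_id]

store = TaskStore()
status, headers = store.post("abc123", {"action": "deactivate"})
print(status, headers["Location"])   # 201 /images/abc123/tasks/1
```

A client would then poll the Location URI until the task's status leaves "pending".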

Miguel

On Wed, Feb 18, 2015 at 1:19 PM, Brian Rosmaita 
brian.rosma...@rackspace.commailto:brian.rosma...@rackspace.com wrote:
On 2/15/15, 2:35 PM, Jay Pipes 
jaypi...@gmail.commailto:jaypi...@gmail.com wrote:
On 02/15/2015 01:13 PM, Brian Rosmaita wrote:
 On 2/15/15, 10:10 AM, Jay Pipes 
 jaypi...@gmail.commailto:jaypi...@gmail.com wrote:

 On 02/15/2015 01:31 AM, Brian Rosmaita wrote:
 This is a follow-up to the discussion at the 12 February API-WG
 meeting [1] concerning functional API in Glance [2].  We made
 some progress, but need to close this off so the spec can be
 implemented in Kilo.

 I believe this is where we left off: 1. The general consensus was
 that POST is the correct verb.

 Yes, POST is correct (though the resource is wrong).

 2. Did not agree on what to POST.  Three options are in play: (A)
 POST /images/{image_id}?action=deactivate POST
 /images/{image_id}?action=reactivate

 (B) POST /images/{image_id}/actions with payload describing the
 action, e.g., { "action": "deactivate" }, { "action": "reactivate" }

 (C) POST /images/{image_id}/actions/deactivate POST
 /images/{image_id}/actions/reactivate

 d) POST /images/{image_id}/tasks with payload: { "action":
 "deactivate|activate" }

 An action isn't created. An action is taken. A task is created. A
 task contains instructions on what action to take.

 The Images API v2 already has tasks (schema available at
 /v2/schemas/tasks ), which are used for long-running asynchronous
 operations (right now, image import and image export).  I think we
 want to keep those distinct from what we're talking about here.

 Does something really need to be created for this call?  The idea
 behind the functional API was to have a place for things that don't
 fit neatly into the CRUD-centric paradigm.  Option (C) seems like a
 good fit for this.

Why not just use the existing tasks/ interface, then? :) Seems like a
perfect fit to me.

The existing tasks/ interface is kind of heavyweight.  It provides a
framework for asynchronous operations.  It's really not appropriate for
this purpose.

cheers,
brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Feb 19 1800 UTC

2015-02-18 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting in #openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20150219T18

P.S. I'll be on plane at this time, so, Andrew Lazarev will chair this
meeting.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread W Chan
As a user of Mistral pretty regularly these days, I certainly prefer
<% %>.  I agree with the other comments on devops familiarity.  And looking
at this from another angle, it's certainly easier to type <% %> than the
other options, especially if you have to do it over and over again.  LOL
Although, I am interested in the security concerns of this use in Jinja2.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Neutron][DVR]Neutron distributed SNAT

2015-02-18 Thread Kevin Benton
If I understand correctly, for southbound traffic there would be
hair-pinning via the L3 agent that the upstream router happened to pick out
of the ECMP group since it doesn't know where the hypervisors are. On the
other hand northbound traffic could egress directly (assuming an l3 agent
is running on each compute node in the DVR fashion).

If we went down this route, we would require a dynamic routing protocol to
run between the agents and the upstream router. Additionally, we would have
to tweak our addressing scheme a bit so the l3 agents could have separate
addresses to use for their BGP session (or whatever routing protocol we
choose) since the gateway address would be shared amongst them.

Did I get what you were proposing correctly?

On Wed, Feb 18, 2015 at 5:28 PM, Angus Lees g...@inodes.org wrote:

 On Mon Feb 16 2015 at 9:37:22 PM Kevin Benton blak...@gmail.com wrote:

 It's basically very much like floating IPs, only you're handing out a
 sub-slice of a floating-IP to each machine - if you like.

 This requires participation of the upstream router (L4 policy routing
 pointing to next hops that distinguish each L3 agent) or intervention on
 the switches between the router and the L3 agents (a few OpenFlow rules would
 make this simple). Both approaches need to adapt to L3 agent changes so
 static configuration is not adequate. Unfortunately, both of these are
 outside of the control of Neutron so I don't see an easy way to push this
 state in a generic fashion.


 (Just to continue this thought experiment)

 The L3 agents that would need to forward ingress traffic to the right
 hypervisor only need to know which [IP+port range] has been assigned to
 which hypervisor.  This information is fairly static, so these forwarders
 are effectively stateless and can be trivially replicated to deal with the
 desired ingress volume and reliability.

 When I've built similar systems in the past, the easy way to interface
 with the rest of the provider network was to use whatever dynamic routing
 protocol was already in use, and just advertise multiple ECMP routes for
 the SNAT source IPs from the forwarders (ideally advertising from the
 forwarders themselves, so they stop if there's a connectivity issue).  All
 the cleverness then happens on the forwarding hosts (we could call them
 L3 agents).  It's simple and works well, but I agree we have no precedent
 in neutron at present.

 On Mon, Feb 16, 2015 at 12:33 AM, Robert Collins 
 robe...@robertcollins.net wrote:

 Or a pool of SNAT addresses ~= to the size of the hypervisor count.


 Oh yeah. If we can afford to assign a unique SNAT address per hypervisor
 then we're done - at that point it really is just like floating-ips.

  - Gus

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extensions for standalone EC2 API

2015-02-18 Thread Alexandre Levine

All,

I have updated and reworked the review with the proposed microversions 
changes. Please take a look.

Hope tomorrow during the nova meeting we'll decide something about it.

Best regards,
  Alex Levine

On 2/17/15 5:54 PM, Sean Dague wrote:

On 02/17/2015 09:38 AM, Alexandre Levine wrote:

I started this thread to get a couple of pointers from Dan Smith and
Sean Dague but it turned out to be a bigger discussion than I expected.

So the history is, we're trying to add a few properties to be reported
for instances in order to cut workaround access to novaDB from the
standalone EC2 API project implementation. Previous nova meeting it was
discussed that there is still potentially a chance to get this done for
Kilo providing the changes are not risky and not complex. The changes
really are not complex and not risky, you can see it in this prototype
review:

https://review.openstack.org/#/c/155853/

As you can see we just need to expose some more info which is already
available.
Two problems have arisen:

1. I should correctly pack it into this new mechanism of microversions
and Christopher Yeoh and Alex Xu are very helpful in this area.

2. The os-extended-server-attributes extension is actually admin-only
accessible.

And this second problem produced several options some of which are based
on Alex Xu's suggestions.

1. Stay with the admin-only access. (this is the easiest one)
Problems:
- Standalone EC2 API will have to use admin context to get this info (it
already has creds configured for its metadata service anyways, so no big
deal).
- Some of the data could potentially be useful for regular users (this can
be addressed later by a specific policy configuration mechanism, as
suggested by Alex Xu).

2. Allow new properties to be user-available, the existing ones will
stay admin-only (an extension of the previous one)
Problems:
- The obvious way is to check for context.is_admin for existing options
while allowing the extension to be user-available in policy.json. It
leads to hardcode of this behavior and is not recommended. (see previous
thread for details on that)

3. Put new properties in some non-admin extensions, like
os-extended-status. (almost as easy as the first one)
Problems:
- They just don't fit in there. Status is about statuses, not about some
static or dynamic properties of the object.

4. Create new extension for this. (more complicated)
Problems:
- To start with I couldn't come up with the naming for it. Because
the existing os-extended-server-attributes is such an obvious choice for
this. Having os-extended-attributes, or os-extended-instance-attributes,
or os-server-attributes besides would be very confusing for both users
and future developers.

5. Put it into different extensions - reservation_id and launch_index
into os-multiple-create, root_device_name into os_extended_volumes, 
(most complicated)
Problems:
- Not all of the ready extensions exist. There is no ready place to put
hostname, ramdisk_id, kernel_id. We'd still have to create a new extension.

I personally tend to go for 1. It's easiest and fastest at the moment to
put everything under admin-only access, and since the nova API guys are
considering allowing fine-tuning of policies for individual properties, it'll be
possible later to make some of it available for users. Or if necessary
it'll be possible to just switch off admin restriction altogether for
this extension. I don't think hypervisor_name, host and instance_name
are such secret info that they should be hidden from users.

Please let me know what you think.

Option 1 seems fine for now, I feel like we can decide on different
approaches in Liberty, but getting a microversion adding this as admin
only seems fine.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Issue on adding or removing itself to/from a group

2015-02-18 Thread Ioram Schechtman Sette
Hi all,

My previous message was sent incomplete. Sorry for that. Here it is the
correct one.

I'm currently working on the virtual organisations (VO) management code and
I would like to add the functionality that when a user creates a VO Role,
he automatically joins it.

Since VO Roles are represented as Groups, I need to create a new group and
add my own user into it.

I have noticed that when I call the methods *add_user_to_group* and
*remove_user_from_group* from the identity_api, the actions are performed
correctly, but I get my token invalidated and receive the following error
message:

[Thu Feb 19 00:41:23 2015] [error] 11764 WARNING keystone.middleware.core
[-] *RBAC: Invalid token*
[Thu Feb 19 00:41:23 2015] [error] 11764 WARNING keystone.common.wsgi [-]
The request you have made requires authentication. (Disable debug mode to
suppress these details.)

I have also tested using the original horizon UI for adding and removing
users to groups and tried to remove my own user from a group.
I got exactly the same behaviour, so I think the problem is not related to
my code.

Does anyone know if this is the expected behaviour?

I think that maybe because groups can be associated with roles, these
roles should be added to or removed from the token.
Therefore, the token needs to be replaced by a new one with new privileges.
But, I think this could be done automatically, instead of invalidating the
old ones and forcing the users to log out and in.

Does it make sense to you?
Is there an easy way to avoid the token being invalidated?
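One client-side workaround, for what it's worth, is to renew the token right after changing your own membership rather than waiting for the next call to fail. A minimal sketch, where `authenticate` and `add_user_to_group` are placeholder callables standing in for the real Keystone calls (not actual keystoneclient APIs):

```python
# After changing your own group membership, treat the cached token as
# stale and re-authenticate proactively instead of hitting
# "RBAC: Invalid token" on the next request.

class Session:
    def __init__(self, authenticate):
        self._authenticate = authenticate   # returns a fresh token
        self.token = authenticate()

    def change_own_groups(self, add_user_to_group, group_id, user_id):
        add_user_to_group(group_id, user_id)
        # Membership changes can revoke the current token, so renew it
        # here rather than waiting for a 401.
        self.token = self._authenticate()

tokens = iter(["tok-1", "tok-2"])
s = Session(lambda: next(tokens))
s.change_own_groups(lambda g, u: None, "vo-group", "me")
print(s.token)   # tok-2
```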

PS: I'm still working on the icehouse version, so this issue may already be
addressed in newer releases.

Regards,
Ioram Sette
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread Patrick Hoolboom
Out of those three I still prefer <% %>.  The main reason I like it is
familiarity.  Also the question mark makes me think of a wildcard, and I
don't want to use curly braces because of all the aforementioned reasons
(they already have a meaning in YAML).
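The curly-brace conflict is easy to reproduce with PyYAML - a quick illustrative check, not Mistral code (the two documents below are made up for the demonstration):

```python
# Bare {...} is already YAML flow-mapping syntax, so an unquoted
# curly-brace expression parses as a dict, while a <% ... %> expression
# survives as a plain string.
import yaml

doc_braces = "result: {1 + 1}"    # YAML sees a flow mapping here
doc_erb = "result: <% 1 + 1 %>"   # YAML sees a plain scalar here

print(yaml.safe_load(doc_braces))  # {'result': {'1 + 1': None}}
print(yaml.safe_load(doc_erb))     # {'result': '<% 1 + 1 %>'}
```

So an unquoted `{1 + $.var}` silently changes type under a YAML parser, which is exactly the ambiguity the quoting workaround in Ansible exists to avoid.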

On Wed, Feb 18, 2015 at 3:07 PM, Dmitri Zimine dzim...@stackstorm.com
wrote:

 *Syntax options that we’d like to discuss **further *

  <% 1 + 1 %>  # pro - ruby/js/puppet/chef familiarity; con - spaces, and
  <% %> is a large symbol pair
  {1 + 1}  # pro - fewer spaces; con - no familiarity
  <? 1 + 1 ?>  # pro - php familiarity; con - needs spaces

  The primary criterion for selecting these 3 options is that they are YAML
  compatible. Technically they would all solve our problems (primarily, no
  enclosing quotes are needed as in Ansible, so there is no ambiguity on data types).

  The secondary criterion is syntax symmetry. After all, I agree with
  Patrick's point about better readability when we have matching opening and
  closing sequences.


 To me, another critical criterion is familiarity: the target users - OpenStack
 developers and devops - are familiar with the delimiters.

 That is why, of the three above, I prefer <% %>.

 It is commonly used in Puppet/Chef [1], Ruby, and JavaScript. One won’t be
 surprised to see it and won’t need to change the muscle memory to type
 open/closed characters especially when working on say Puppet and Mistral at
 the same time (not unlikely).


 [1]
 https://docs.puppetlabs.com/guides/templating.html#erb-is-plain-text-with-embedded-ruby



 On Feb 18, 2015, at 3:20 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

 Hi again,

 Sorry, I started writing this email before Angus replied so I will shoot
 it as is and then we can continue…


 So after discussing all the options again with a small group of team
 members we came to the following things:

 *Syntax options that we’d like to discuss **further *

 <% 1 + 1 %>  # pro - ruby/js/puppet/chef familiarity; con - spaces, and
 <% %> is a large symbol pair
 {1 + 1}  # pro - fewer spaces; con - no familiarity
 <? 1 + 1 ?>  # pro - php familiarity; con - needs spaces

 The primary criterion for selecting these 3 options is that they are YAML
 compatible. Technically they would all solve our problems (primarily, no
 enclosing quotes are needed as in Ansible, so there is no ambiguity on data types).

 The secondary criterion is syntax symmetry. After all, I agree with
 Patrick's point about better readability when we have matching opening and
 closing sequences.

 Some additional details can be found in [0]


 [0] https://etherpad.openstack.org/p/mistral-YAQL-delimiters

 Renat Akhmerov
 @ Mirantis Inc.


 On 18 Feb 2015, at 07:37, Patrick Hoolboom patr...@stackstorm.com wrote:

   My main concern with the {} delimiters in YAQL is that the curly brace
  already has a defined use within YAML.  We most definitely will eventually
  run into parsing errors with whatever delimiter we choose, but I don't feel
  that it should conflict with the markup language it is directly embedded
  in.  Otherwise it gets quite difficult to identify YAQL expressions at a
  glance.  <% %> may appear ugly to some, but I feel that it works as a clear
  delimiter of both the beginning AND the end of the YAQL query. The options
  that only escape the beginning look fine in small examples like this, but
  the workflows that we have written or seen in the wild tend to have some
  fairly large expressions.  If the opening and closing delimiters don't
  match, it gets quite difficult to read.


 *From: *Anastasia Kuznetsova akuznets...@mirantis.com
 *Subject: **Re: [openstack-dev] [Mistral] Changing expression
 delimiters in Mistral DSL*
 *Date: *February 17, 2015 at 8:28:27 AM PST
 *To: *OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 *Reply-To: *OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org

  As for me, I think that <% ... %> is not an elegant solution and looks
  massive because of the '%' sign. Also I agree with Renat that <% ... %>
  is reminiscent of HTML/Jinja2 syntax.

 I am not sure that similarity with something should be one of the main
 criteria, because we don't know who will use Mistral.

 I like:
 - {1 + $.var} Renat's example
  - variant with using some functions (item 2 in Dmitry's list):  { yaql:
  “1+1+$.my.var < 100” } or yaql: 'Hello' + $.name 
 - my two cents, maybe we can use something like: result: - Hello +
 $.name -


 Regards,
 Anastasia Kuznetsova

 On Tue, Feb 17, 2015 at 1:17 PM, Nikolay Makhotkin 
 nmakhot...@mirantis.com wrote:

 Some suggestions from me:

 1. y 1 + $.var  # (short from yaql).
  2. { 1 + $.var }  # as for me, looks more elegant than <% %>. And
  visually it is stronger

  I also like p7 and p8 suggested by Renat.

 On Tue, Feb 17, 2015 at 11:43 AM, Renat Akhmerov rakhme...@mirantis.com
  wrote:

 One more:

 p9: \{1 + $.var} # That’s pretty much what
 https://review.openstack.org/#/c/155348/ addresses but it’s not
 exactly that. Note that we don’t have to put it in quotes in this case to
 deal with YAML {} semantics, it’s 

[openstack-dev] [keystone] Issue on adding or removing itself to/from a group

2015-02-18 Thread Ioram Schechtman Sette
Hi all,

I'm currently working on the virtual organisations (VO) management code and
I would like to add the functionality that when a user creates a VO Role,
he automatically joins it.
Since VO Roles are represented as Groups, I need to create a new group and
add my user into it.

I noticed that when I call the methods *add_user_to_group* and
*remove_user_from_group* from the identity_api, I get my token invalidated
and receive the following error message:

[Thu Feb 19 00:41:23 2015] [error] 11764 WARNING keystone.middleware.core
[-] RBAC: Invalid token
[Thu Feb 19 00:41:23 2015] [error] 11764 WARNING keystone.common.wsgi [-]
The request you have made requires authentication. (Disable debug mode to
suppress these details.)

I have also tested using
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS - question about drivers

2015-02-18 Thread Vikram Choudhary
Hi,

You can write your own driver. You can refer to below links for getting some 
idea about the architecture.

https://wiki.openstack.org/wiki/Neutron/ServiceTypeFramework
https://wiki.openstack.org/wiki/Neutron/LBaaS/Agent

Thanks
Vikram

-Original Message-
From: Sławek Kapłoński [mailto: ] 
Sent: 19 February 2015 02:33
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] FWaaS - question about drivers

Hello,

I'm looking to use the FWaaS service plugin with my own router solution (I'm 
not using the L3 agent at all). If I want to use the FWaaS plugin as well, 
should I write my own driver for it, or should I write my own service plugin?
I will be grateful for any links to some description of FWaaS and its 
architecture :) Thx a lot for any help


--
Best regards
Sławek Kapłoński
sla...@kaplonski.pl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Neutron][DVR]Neutron distributed SNAT

2015-02-18 Thread Angus Lees
On Mon Feb 16 2015 at 9:37:22 PM Kevin Benton blak...@gmail.com wrote:

 It's basically very much like floating IPs, only you're handing out a
 sub-slice of a floating-IP to each machine - if you like.

 This requires participation of the upstream router (L4 policy routing
 pointing to next hops that distinguish each L3 agent) or intervention on
 the switches between the router and the L3 agents (a few OpenFlow rules would
 make this simple). Both approaches need to adapt to L3 agent changes so
 static configuration is not adequate. Unfortunately, both of these are
 outside of the control of Neutron so I don't see an easy way to push this
 state in a generic fashion.


(Just to continue this thought experiment)

The L3 agents that would need to forward ingress traffic to the right
hypervisor only need to know which [IP+port range] has been assigned to
which hypervisor.  This information is fairly static, so these forwarders
are effectively stateless and can be trivially replicated to deal with the
desired ingress volume and reliability.

When I've built similar systems in the past, the easy way to interface with
the rest of the provider network was to use whatever dynamic routing
protocol was already in use, and just advertise multiple ECMP routes for
the SNAT source IPs from the forwarders (ideally advertising from the
forwarders themselves, so they stop if there's a connectivity issue).  All
the cleverness then happens on the forwarding hosts (we could call them
L3 agents).  It's simple and works well, but I agree we have no precedent
in neutron at present.
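The static [IP+port range] table the forwarders would need could be computed roughly like this - an illustrative sketch with made-up numbers, not Neutron code:

```python
# "Sub-slice of a floating IP per machine": carve the usable source-port
# space of one SNAT address into fixed, non-overlapping ranges, one per
# hypervisor.  The forwarders only need this (static) table to steer
# ingress traffic back to the right hypervisor.

def allocate_port_ranges(hypervisors, low=1024, high=65535):
    """Return {hypervisor: (first_port, last_port)} with equal slices."""
    span = (high - low + 1) // len(hypervisors)
    ranges = {}
    for i, hv in enumerate(hypervisors):
        first = low + i * span
        ranges[hv] = (first, first + span - 1)
    return ranges

table = allocate_port_ranges(["compute-1", "compute-2", "compute-3"])
print(table["compute-1"])   # (1024, 22527)
```

Because the allocation is deterministic from the hypervisor list, every forwarder can rebuild the same table independently, which is what makes them effectively stateless.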

On Mon, Feb 16, 2015 at 12:33 AM, Robert Collins robe...@robertcollins.net
 wrote:

 Or a pool of SNAT addresses ~= to the size of the hypervisor count.


Oh yeah. If we can afford to assign a unique SNAT address per hypervisor
then we're done - at that point it really is just like floating-ips.

 - Gus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev