[openstack-dev] [Neutron] Port Forwarding API

2015-09-20 Thread Gal Sagie
Hello All,

I have sent a spec [1] to resume the work on port forwarding API and
reference implementation.

It's currently marked as "WIP"; however, I raised some "TBD" questions for
the community.
The way I see it, port forwarding is an API that is very similar to the
floating IP API and implementation, with a few changes:

1) Port forwarding can only be defined on the router external gateway IP
(or additional public IPs that are located on the router), similar to the
case of centralized DNAT.

2) The same FIP address can be used for different mappings; for example, a
FIP with IP X can be used with different ports to map to different VMs:
X:4001 -> VM1 IP, X:4002 -> VM2 IP (this is the essence of port
forwarding). So we also need the port mapping configuration fields; an
illustrative sketch follows below.
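For illustration, a minimal sketch (plain Python data with hypothetical
field names, not the API proposed in the spec) of what one such mapping
entry could carry:

    # Hypothetical example only: the field names are assumptions for discussion.
    FIP_UUID = 'FLOATING-IP-UUID'  # placeholder for an existing floating IP

    port_forwarding_entry = {
        'portforwarding': {
            'floatingip_id': FIP_UUID,
            'protocol': 'tcp',
            'external_port': 4001,               # port on the FIP address (X:4001)
            'internal_ip_address': '10.0.0.5',   # VM1 fixed IP
            'internal_port': 22,                 # port on the VM
        }
    }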

All the rest should probably behave (in my opinion) very similarly to FIPs
(for example, not being able to remove the external gateway if port
forwarding entries are configured, the port forwarding entry being deleted
when the VM is deleted, and so on).
All of these points are mentioned in the spec, and I am waiting for
community feedback on them.

I am trying to figure out whether, implementation-wise, it would be smart
to reuse the floating IP implementation and extend it for this (given that
all the mechanisms described above already work for floating IPs), or to
add a new implementation which behaves very similarly to floating IPs in
most aspects (but still differs in some), or something else...

I would love to hear the community's feedback on the spec, even though it
is still WIP.

Thanks
Gal.

[1] https://review.openstack.org/#/c/224727/


Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-20 Thread himanshu sharma
Hi,

Greetings for the day.

I am having trouble finding the CLI commands for Congress with which I can
create and delete a rule within a policy and view the different data
sources.
Can you please provide me with the list of CLI commands for the same?
Waiting for the reply.


Regards
Himanshu Sharma

On Sat, Sep 19, 2015 at 5:44 AM, Tim Hinrichs  wrote:

> It's great to have this available!  I think it'll help people understand
> what's going on MUCH more quickly.
>
> Some thoughts.
> - The image is 3GB, which took me 30 minutes to download.  Are all VMs
> this big?  I think we should finish this as a VM but then look into doing
> it with containers to make it EVEN easier for people to get started.
>
> - It gave me an error about a missing shared directory when I started up.
>
> - I expected devstack to be running when I launched the VM.  devstack
> startup time is substantial, and if there's a problem, it's good to assume
> the user won't know how to fix it.  Is it possible to have devstack up and
> running when we start the VM?  That said, it started up fine for me.
>
> - It'd be good to have a README to explain how to use the use-case
> structure. It wasn't obvious to me.
>
> - The top-level dir of the Congress_Usecases folder has a
> Congress_Usecases folder within it.  I assume the inner one shouldn't be
> there?
>
> - When I ran the 10_install_policy.sh, it gave me a bunch of authorization
> problems.
>
> But otherwise I think the setup looks reasonable.  Will there be an undo
> script so that we can run the use cases one after another without worrying
> about interactions?
>
> Tim
>
>
> On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris  wrote:
>
>> Hi Congress folks,
>>
>>
>>
>> BTW the login/password for the VM is vagrant/vagrant
>>
>>
>>
>> -Shiv
>>
>>
>>
>>
>>
>> *From:* Shiv Haris [mailto:sha...@brocade.com]
>> *Sent:* Thursday, September 17, 2015 5:03 PM
>> *To:* openstack-dev@lists.openstack.org
>> *Subject:* [openstack-dev] [Congress] Congress Usecases VM
>>
>>
>>
>> Hi All,
>>
>>
>>
>> I have put my VM (virtualbox) at:
>>
>>
>>
>> http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova
>>
>>
>>
>> I usually run this on a macbook air – but it should work on other
>> platforms as well. I chose virtualbox since it is free.
>>
>>
>>
>> Please send me your usecases – I can incorporate them in the VM and send
>> you an updated image. Please take a look at the structure I have in place
>> for the first usecase; I would prefer it be the same for the other
>> usecases. (However, I am still open to suggestions for changes.)
>>
>>
>>
>> Thanks,
>>
>>
>>
>> -Shiv
>>
>>


Re: [openstack-dev] [Heat] Integration Test Questions

2015-09-20 Thread Qiming Teng
Speaking of adding tests, we need hands on improving the Heat API tests in
Tempest [1]. The current test cases there are a weird combination of API
tests, resource type tests, template tests, etc. If we decide to move
functional tests back to individual projects, some test cases may need
to be deleted from Tempest.

Another important reason for adding API tests to Tempest is that the
orchestration service is assessed [2] by the DefCore team using tests in
Tempest, not in-tree test cases.

The Heat team has done a lot of work (and killed a lot) to make the API as
stable as possible. Most of the time, there would be nothing new to test.
The API surface tests may become nothing but a waste of time if we keep
running them for every single patch.

So... my suggestions:

- Remove unnecessary tests in Tempest;
- Stop adding API tests to Heat locally;
- Add API tests to Tempest instead, in an organized way. (refer to [3])

[1]
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/orchestration/
[2] https://review.openstack.org/#/c/216983/
[3] https://review.openstack.org/#/c/210080/
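For concreteness, a minimal sketch (not an actual Tempest test; the
endpoint and token values are placeholders) of the kind of API surface
check being discussed, using python-heatclient:

    from heatclient import client as heat_client

    # Placeholders: in a real test these come from the test credentials.
    HEAT_ENDPOINT = 'http://controller:8004/v1/TENANT_ID'
    AUTH_TOKEN = 'KEYSTONE_TOKEN'

    heat = heat_client.Client('1', endpoint=HEAT_ENDPOINT, token=AUTH_TOKEN)

    # Exercise the list/show part of the stacks API surface without
    # asserting anything about resource types or template behaviour.
    for stack in heat.stacks.list():
        heat.stacks.get(stack.id)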




[openstack-dev] [CINDER] [PTL Candidates] Questions

2015-09-20 Thread John Griffith
PTL nomination emails are good, but I have a few questions that I'd like
to ask to help me in making my vote.  Some of these are covered in the
general proposal announcements, but I'd love to hear some more detail.

It would be awesome if the Cinder candidates could spend some time and
answer these to help me (and maybe others) make an informed choice:

1. Do you actually have the time to spend to be PTL?

I don't think many people realize the time commitment: being on top of
reviews and keeping a pretty consistent view of what's going on and in
process, plus meetings, questions on IRC, program management type stuff,
etc. Do you feel you'll have the ability to make PTL your FULL time job?
Don't forget you're working with folks in a community that spans multiple
time zones.

2. What are your plans to make the Cinder project as a core component
better (no... really, what specifically and how does it make Cinder better)?

Most candidates are representing a storage vendor naturally.  Everyone says
"make Cinder better"; But how do you intend to balance vendor interest and
the interest of the general project?  Where will your focus in the M
release be?  On your vendor code or on Cinder as a whole?  Note; I'm not
suggesting that anybody isn't doing the "right" thing here, I'm just asking
for specifics.

3. Why do you want to be PTL for Cinder?

Seems like a silly question, but really, when you start asking that question
the answers can be surprising and somewhat enlightening.  There are different
motivators for people; what's yours?  By the way, "my employer pays me a
big bonus if I win" is a perfectly acceptable answer in my opinion; I'd
prefer honesty over anything else.  You may not get my vote, but you'd get
respect.

Thanks,
John


Re: [openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Hongbin Lu
Hi Ton,

If I understand your proposal correctly, it means the inputted password will
be exposed to users in the same tenant (since the password is passed as a
stack parameter, which is exposed within the tenant). If users are not admin,
they don't have the privilege to create a temp user. As a result, users have
to expose their own password to create a bay, which is suboptimal.

A slight amendment is to have the operator create a user that is dedicated to
communication between k8s and the Neutron load balancer service. The password
of that user can be written into the config file, picked up by the conductor
and passed to Heat. The drawback is that there is no multi-tenancy for the
OpenStack load balancer service, since all bays will share the same
credential.

Another solution I can think of is to have Magnum create a Keystone domain
[1] for each bay (using the admin credential in the config file), and assign
the bay's owner to that domain. As a result, the user will have the privilege
to create a bay user within that domain. It seems Heat supports native
Keystone resources [2], which makes the administration of Keystone users much
easier. The drawback is that the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] 
http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html
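For the domain-per-bay idea, a rough sketch (assuming Magnum holds an
admin credential in its config file; all names and values below are
placeholders, not Magnum code) of the Keystone side, using
python-keystoneclient v3:

    from keystoneclient.v3 import client as ks_client

    # Admin credential taken from the Magnum config file (placeholders).
    ks = ks_client.Client(auth_url='http://controller:5000/v3',
                          username='magnum-admin',
                          password='ADMIN_PASSWORD',
                          project_name='admin',
                          user_domain_name='Default',
                          project_domain_name='Default')

    bay_uuid = 'BAY-UUID'  # placeholder

    # One domain per bay, plus a dedicated service user whose generated
    # password (not the bay owner's own password) would end up in the k8s
    # config file on the master node.
    domain = ks.domains.create(name='magnum-bay-%s' % bay_uuid)
    bay_user = ks.users.create(name='k8s-service',
                               domain=domain,
                               password='GENERATED_PASSWORD')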

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load 
balancer in k8s services. After a chat with sdake, I would like to run this by 
the team for feedback/suggestion.
First let me give a little background for context. In the current k8s cluster, 
all k8s pods and services run within a private subnet (on Flannel) and they can 
access each other but they cannot be accessed from external network. The way to 
publish an endpoint to the external network is by specifying this attribute in 
your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool, 
members, VIP, monitor. The user would associate the VIP with a floating IP and 
then the endpoint of the service would be accessible from the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in a 
config file on the master node. This includes the username, tenant name, 
password. When k8s starts up, it will load the config file and create an 
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password. 
With the current effort on security to make Magnum production-ready, we want to 
make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use, 
but this will require sizeable change upstream in k8s. We have good reason to 
pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call 
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat 
templates
  3.  When configuring the master node, the password is saved in the config 
file for k8s services.
  4.  Magnum does not store the password internally.
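As a rough sketch of step 2 only (the template file, parameter name and
values are placeholders, not the Magnum conductor code), passing the
password as a Heat stack parameter with python-heatclient could look like:

    from heatclient import client as heat_client

    heat = heat_client.Client('1',
                              endpoint='http://controller:8004/v1/TENANT_ID',
                              token='KEYSTONE_TOKEN')

    # 'user_password' is a hypothetical template parameter; its value comes
    # from the bay-create API call and is not stored by Magnum itself.
    heat.stacks.create(stack_name='k8s-bay',
                       template=open('kubecluster.yaml').read(),
                       parameters={'user_password': 'USER_PASSWORD'})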

This is probably not ideal, but it would let us proceed for now. We can 
deprecate it later when we have a better solution. So leaving aside the issue 
of how k8s should be changed, the question is: is this approach reasonable for 
the time, or is there a better approach?

Ton Ngo,


Re: [openstack-dev] [Openstack-i18n] [nova][i18n] Is there any point in using _() inpython-novaclient?

2015-09-20 Thread Andreas Jaeger

On 09/20/2015 02:16 PM, Duncan Thomas wrote:

Certainly for cinder, and I suspect many other project, the openstack
client is a wrapper for python-cinderclient libraries, so if you want
translated exceptions then you need to translate python-cinderclient
too, unless I'm missing something?


Ah - let's investigate some more here.

Looking at python-cinderclient, I see translations only for the help
strings of the client, like in cinderclient/shell.py. Are there strings
in the library part of python-cinderclient that will be displayed to the
user as well?
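For reference, a minimal sketch of the pattern in question, using
oslo.i18n directly (projects typically wrap this in a small project-local
i18n module; the function and message text below are purely illustrative):

    import oslo_i18n

    _translators = oslo_i18n.TranslatorFactory(domain='python-novaclient')
    _ = _translators.primary

    def find_volume(volume_id):
        # A library string like this one is user-facing once it surfaces
        # in an exception message, which is the case being discussed here.
        raise ValueError(_("No volume with an ID of %s exists.") % volume_id)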


Andreas


On 18 September 2015 at 17:46, Andreas Jaeger wrote:

With the limited resources that the translation team has, we should
not translate the clients but concentrate on the openstackclient, as
discussed here:


http://lists.openstack.org/pipermail/openstack-i18n/2015-September/001402.html

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126







--
--
Duncan Thomas



--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [Openstack-i18n] [nova][i18n] Is there any point in using _() inpython-novaclient?

2015-09-20 Thread Duncan Thomas
Certainly for cinder, and I suspect many other projects, the openstack
client is a wrapper for the python-cinderclient libraries, so if you want
translated exceptions then you need to translate python-cinderclient too,
unless I'm missing something?

On 18 September 2015 at 17:46, Andreas Jaeger  wrote:

> With the limited resources that the translation team has, we should not
> translate the clients but concentrate on the openstackclient, as discussed
> here:
>
>
> http://lists.openstack.org/pipermail/openstack-i18n/2015-September/001402.html
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
>



-- 
-- 
Duncan Thomas


[openstack-dev] [neutron] Neutron debugging tool

2015-09-20 Thread Nodir Kodirov
Hello,

I am planning to develop a tool for network debugging. Initially, it
will handle the DVR case, and it can also be extended to others. Based
on my OpenStack deployment/operations experience, I am planning to
handle common pitfalls/misconfigurations (see the sketch after this
list), such as:
1) check external gateway validity
2) check if appropriate qrouter/qdhcp/fip namespaces are created in
compute/network hosts
3) execute probing commands inside namespaces, to verify reachability
4) etc.
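As a sketch of checks 2 and 3, assuming the commands run directly on the
compute/network host (the proposed tool would reach the host over ssh
first); the function names and namespace prefixes are illustrative:

    import subprocess

    def namespace_exists(prefix, router_id):
        # Check that e.g. the qrouter-<router_id> namespace was created.
        out = subprocess.check_output(['ip', 'netns', 'list']).decode()
        name = '%s-%s' % (prefix, router_id)
        return any(line.split()[0] == name
                   for line in out.splitlines() if line.strip())

    def probe_from_namespace(namespace, target_ip):
        # Run a reachability probe from inside the given namespace.
        cmd = ['ip', 'netns', 'exec', namespace,
               'ping', '-c', '3', '-W', '2', target_ip]
        return subprocess.call(cmd) == 0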

I came across neutron-debug [1], which mostly focuses on namespace
debugging. Its coverage is limited to the OpenStack side, while I am
planning to cover the compute/network nodes as well. In my experience, I
had to ssh to the host(s) to accurately diagnose the failure (e.g., cases 1
and 2 above). The tool I am considering will handle these, given the host
credentials.

I'd like to get the community's feedback on the utility of such a debugging
tool. Do people use neutron-debug in their OpenStack environments? Does the
tool I am planning to develop, with complete diagnosis coverage, sound
useful? Is anyone interested in joining the development? All feedback is
welcome.

Thanks,

- Nodir

[1] http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html



Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-20 Thread Renat Akhmerov



> On 19 Sep 2015, at 16:04, Doug Hellmann  wrote:
> 
> Excerpts from Renat Akhmerov's message of 2015-09-19 00:35:49 +0300:
>> Doug,
>> 
>> python-mistralclient-1.1.0 (also on pypi) is the final release for Liberty. 
>> Here’s the patch updating global-requirements.txt: 
>> https://review.openstack.org/#/c/225330/
>> (upper-constraints.txt should be soon updated automatically, in my
>> understanding)
> 
> Because we're in requirements freeze, we're trying not to update any of
> the global-requirements.txt entries unless absolutely necessary. At this
> point, no projects can be depending on the previously unreleased
> features in 1.1.0, so as long as python-mistralclient doesn't have a cap
> on the major version allowed the requirements list, it should only be
> necessary to update the constraints.
> 
> Please update the constraints file by hand, only changing
> python-mistralclient. That will allow us to land the update without
> changing any other libraries in the test infrastructure (the automated
> update submits all of the changes together, and we have several
> outstanding right now).

Ok, understood.

https://review.openstack.org/#/c/225491/ 

>> So far I have been doing release management for Mistral myself (~2 years),
>> and for the last year I’ve been trying to stay aligned with the OpenStack
>> schedule. In May 2015 Mistral was accepted into the Big Tent, so does that
>> mean I’m no longer responsible for doing that? Or can I still do it on my
>> own? Even the final Mistral client for Liberty I released just by myself
>> (I didn’t create a stable branch yet, though); maybe I shouldn’t have.
>> Clarifications would be helpful.
> 
> It means you can now ask the release management team to take over for
> the library, but that is not an automatic change.
>> Does this all apply to all Big Tent projects?
> 
> Yes, and to all horizontal teams. Every project team is expected
> to provide liaisons to all horizontal teams now. The degree to which
> a horizontal team does the work for you is up to each pair of teams
> to negotiate.

I’d prefer to take care of the Liberty releases myself. We don’t have much
time until the end of Liberty, and we may not manage to establish all the
required connections with the horizontal teams in time. Is that ok?

Renat Akhmerov
@ Mirantis Inc.



Re: [openstack-dev] [Heat] Integration Test Questions

2015-09-20 Thread Steve Baker

On 20/09/15 20:24, Qiming Teng wrote:

Speaking of adding tests, we need hands on improving Heat API tests in
Tempest [1]. The current test cases there is a weird combination of API
tests, resource type tests, template tests etc. If we decide to move
functional tests back to individual projects, some test cases may need
to be deleted from tempest.

Another important reason of adding API tests into Tempest is because
the orchestration service is assessed [2] by the DefCore team using
tests in Tempest, not in-tree test cases.

The heat team has done a lot (and killed a lot) work to make the API as
stable as possible. Most of the time, there would be nothing new for
testing. The API surface tests may become nothing but waste of time if
we keep running them for every single patch.

Thanks for raising this. Wherever they live, we do need a dedicated set
of tests which ensures the REST API is fully exercised.

So... my suggestions:

- Remove unnecessary tests in Tempest;

agreed

- Stop adding API tests to Heat locally;
- Add API tests to Tempest instead, in an organized way. (refer to [3])

I would prefer an alternative approach which would result in the same
end state:

- port heat_integrationtests to tempest-lib
- build a suite of REST API tests in heat_integrationtests
- work with defcore to identify which heat_integrationtests tests to 
move to tempest

[1]
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/orchestration/
[2] https://review.openstack.org/#/c/216983/
[3] https://review.openstack.org/#/c/210080/




Re: [openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Steven Dake (stdake)
Hongbin,

I believe the domain approach is the preferred approach for the solution long
term.  It will require more R&D to execute than the other options, but it
will also be completely secure.

Regards
-steve


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Sunday, September 20, 2015 at 4:26 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,

If I understand your proposal correctly, it means the inputted password will be 
exposed to users in the same tenant (since the password is passed as stack 
parameter, which is exposed within tenant). If users are not admin, they don’t 
have privilege to create a temp user. As a result, users have to expose their 
own password to create a bay, which is suboptimal.

A slightly amendment is to have operator to create a user that is dedicated for 
communication between k8s and neutron load balancer service. The password of 
the user can be written into config file, picked up by conductor and passed to 
heat. The drawback is that there is no multi-tenancy for openstack load 
balancer service, since all bays will share the same credential.

Another solution I can think of is to have magnum to create a keystone domain 
[1] for each bay (using admin credential in config file), and assign bay’s 
owner to that domain. As a result, the user will have privilege to create a bay 
user within that domain. It seems Heat supports native keystone resource [2], 
which makes the administration of keystone users much easier. The drawback is 
the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] 
http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load 
balancer in k8s services. After a chat with sdake, I would like to run this by 
the team for feedback/suggestion.
First let me give a little background for context. In the current k8s cluster, 
all k8s pods and services run within a private subnet (on Flannel) and they can 
access each other but they cannot be accessed from external network. The way to 
publish an endpoint to the external network is by specifying this attribute in 
your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool, 
members, VIP, monitor. The user would associate the VIP with a floating IP and 
then the endpoint of the service would be accessible from the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in a 
config file on the master node. This includes the username, tenant name, 
password. When k8s starts up, it will load the config file and create an 
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password. 
With the current effort on security to make Magnum production-ready, we want to 
make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use, 
but this will require sizeable change upstream in k8s. We have good reason to 
pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call 
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat 
templates
  3.  When configuring the master node, the password is saved in the config 
file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We can 
deprecate it later when we have a better solution. So leaving aside the issue 
of how k8s should be changed, the question is: is this approach reasonable for 
the time, or is there a better approach?

Ton Ngo,


[openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Vikas Choudhary
Hi Ton,

The kube-masters will be Nova instances only, and because any access to
Nova instances is already secured using Keystone, I am not able to
understand what the concerns are with storing the password on the
master nodes.

Can you please list the concerns with our current approach?

-Vikas Choudhary

Hi everyone,
I am running into a potential issue in implementing the support for load
balancer in k8s services. After a chat with sdake, I would like to run this
by the team for feedback/suggestion.
First let me give a little background for context. In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from external
network. The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool,
members, VIP, monitor. The user would associate the VIP with a floating IP
and then the endpoint of the service would be accessible from the external
internet.
To talk to Neutron, k8s needs the user credential and this is stored in a
config file on the master node. This includes the username, tenant name,
password. When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password.
With the current effort on security to make Magnum production-ready, we want
to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use,
but this will require sizeable change upstream in k8s. We have good reason
to pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the
heat templates
  3.  When configuring the master node, the password is saved in the config
file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We can
deprecate it later when we have a better solution. So leaving aside the
issue of how k8s should be changed, the question is: is this approach
reasonable for the time, or is there a better approach?

Ton Ngo,


[openstack-dev] [nova-scheduler] no IRC meeting this week

2015-09-20 Thread Dugger, Donald D
As discussed last week we won't have a meeting this Mon., 9/21.  Everyone can 
concentrate on getting Liberty out the door and we'll meet again next week, 
9/28, to talk about Mitaka planning a little.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [Neutron] Port Forwarding API

2015-09-20 Thread Gal Sagie
Hi shihanzhang,

As mentioned in the spec, this doesn't support distributed FIPs; it will
still work if the VMs are on different compute nodes, similar to the way
centralized DNAT works (from the network node).

Distributing port forwarding entries is, in my opinion, similar to
distributing SNAT, and when there is a consensus in the community regarding
SNAT distribution (if it is really fully needed), I think that any such
solution will also fit port forwarding distribution.
(But that's not the scope of this proposed spec.)

Gal.

On Mon, Sep 21, 2015 at 4:57 AM, shihanzhang  wrote:

>
>  2) The same FIP address can be used for different mappings, for
> example FIP with IP X
>   can be used with different ports to map to different VM's
> X:4001  -> VM1 IP
>   X:4002 -> VM2 IP (This is the essence of port forwarding).
>  So we also need the port mapping configuration fields
>
> For the second use case, I have a question, does it support DVR?  if VM1
> and VM2 are on
> different compute nodes, how does it work?
>
>
>
>
> On 2015-09-20 14:26:23, "Gal Sagie" wrote:
>
> Hello All,
>
> I have sent a spec [1] to resume the work on port forwarding API and
> reference implementation.
>
> Its currently marked as "WIP", however i raised some "TBD" questions for
> the community.
> The way i see port forwarding is an API that is very similar to floating
> IP API and implementation
> with few changes:
>
> 1) Can only define port forwarding on the router external gateway IP (or
> additional public IPs
>that are located on the router.  (Similar to the case of centralized
> DNAT)
>
> 2) The same FIP address can be used for different mappings, for example
> FIP with IP X
> can be used with different ports to map to different VM's X:4001  ->
> VM1 IP
> X:4002 -> VM2 IP (This is the essence of port forwarding).
> So we also need the port mapping configuration fields
>
> All the rest should probably behave (in my opinion) very similar to FIP's
> (for example
> not being able to remove external gateway if port forwarding entries are
> configured,
> if the VM is deletd the port forwarding entry is deleted as well and so
> on..)
> All of these points are mentioned in the spec and i am waiting for the
> community feedback
> on them.
>
> I am trying to figure out if implementation wise, it would be smart to try
> and use the floating IP
> implementation and extend it for this (given all the above mechanism
> described above already
> works for floating IP's)
> Or, add another new implementation which behaves very similar to floating
> IP's in most aspects
> (But still differ in some)
> Or something else...
>
> Would love to hear the community feedback on the spec, even that its WIP
>
> Thanks
> Gal.
>
> [1] https://review.openstack.org/#/c/224727/
>
>
>
>
>
>
>


-- 
Best Regards ,

The G.


Re: [openstack-dev] [neutron] [oslo.privsep] Any progress on privsep?

2015-09-20 Thread Yuriy Taraday
Hello, Li.

On Sat, Sep 19, 2015 at 6:15 AM Li Ma  wrote:

> Thanks for your reply, Gus. That's awesome. I'd like to have a look at
> it or test if possible.
>
> Any source code available in the upstream?
>

You can find the latest (almost approved, from the looks of it) version of
the blueprint here: https://review.openstack.org/204073
It links to the current implementation (not the API described in the
blueprint, though):
https://review.openstack.org/155631


Re: [openstack-dev] [neutron] Neutron debugging tool

2015-09-20 Thread Li Ma
AFAIK, there is a project available on GitHub that does the same thing:
https://github.com/yeasy/easyOVS

I used it before.

On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov  wrote:
> Hello,
>
> I am planning to develop a tool for network debugging. Initially, it
> will handle DVR case, which can also be extended to other too. Based
> on my OpenStack deployment/operations experience, I am planning to
> handle common pitfalls/misconfigurations, such as:
> 1) check external gateway validity
> 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
> compute/network hosts
> 3) execute probing commands inside namespaces, to verify reachability
> 4) etc.
>
> I came across neutron-debug [1], which mostly focuses on namespace
> debugging. Its coverage is limited to OpenStack, while I am planning
> to cover compute/network nodes as well. In my experience, I had to ssh
> to the host(s) to accurately diagnose the failure (e.g., 1, 2 cases
> above). The tool I am considering will handle these, given the host
> credentials.
>
> I'd like get community's feedback on utility of such debugging tool.
> Do people use neutron-debug on their OpenStack environment? Does the
> tool I am planning to develop with complete diagnosis coverage sound
> useful? Anyone is interested to join the development? All feedback are
> welcome.
>
> Thanks,
>
> - Nodir
>
> [1] 
> http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
>



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com



[openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Vikas Choudhary
Thanks Hongbin.

I was not aware of the visibility of stack parameters, so I was not able to
figure out the actual concerns with Ton's initial approach.

The Keystone domain approach seems secure enough.

-Vikas



Hongbin,

I believe the domain approach is the preferred approach for the
solution long term.  It will require more R&D to execute than the other
options but also be completely secure.

Regards
-steve


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Sunday, September 20, 2015 at 4:26 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,

If I understand your proposal correctly, it means the inputted
password will be exposed to users in the same tenant (since the
password is passed as stack parameter, which is exposed within
tenant). If users are not admin, they don’t have privilege to create a
temp user. As a result, users have to expose their own password to
create a bay, which is suboptimal.

A slightly amendment is to have operator to create a user that is
dedicated for communication between k8s and neutron load balancer
service. The password of the user can be written into config file,
picked up by conductor and passed to heat. The drawback is that there
is no multi-tenancy for openstack load balancer service, since all
bays will share the same credential.

Another solution I can think of is to have magnum to create a keystone
domain [1] for each bay (using admin credential in config file), and
assign bay’s owner to that domain. As a result, the user will have
privilege to create a bay user within that domain. It seems Heat
supports native keystone resource [2], which makes the administration
of keystone users much easier. The drawback is the implementation is
more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] 
http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html

Best regards,
Hongbin

From: Ton Ngo [mailto:ton at us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for
load balancer in k8s services. After a chat with sdake, I would like
to run this by the team for feedback/suggestion.
First let me give a little background for context. In the current k8s
cluster, all k8s pods and services run within a private subnet (on
Flannel) and they can access each other but they cannot be accessed
from external network. The way to publish an endpoint to the external
network is by specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor. The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible
from the external internet.
To talk to Neutron, k8s needs the user credential and this is stored
in a config file on the master node. This includes the username,
tenant name, password. When k8s starts up, it will load the config
file and create an authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the
password. With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password
properly.
Ideally, the best solution is to pass the authenticated token to k8s
to use, but this will require sizeable change upstream in k8s. We have
good reason to pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to
the heat templates
  3.  When configuring the master node, the password is saved in the
config file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We
can deprecate it later when we have a better solution. So leaving
aside the issue of how k8s should be changed, the question is: is this
approach reasonable for the time, or is there a better approach?

Ton Ngo,

Re: [openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Ton Ngo

Hi Vikas,
 It's correct that once the password is saved on the k8s master node,
it has the same security as the Nova instance.  The issue is, as
Hongbin noted, that the password is exposed along the chain of interaction
between Magnum and Heat.  Users in the same tenant can potentially see the
password of the user who creates the cluster.  The current k8s mode of
operation is k8s-centric, where the cluster is assumed to be managed
manually, so it is reasonable to configure it with one OpenStack user
credential.  With Magnum managing the k8s cluster, we add another layer of
management, hence the complication.

Thanks Hongbin and Steve for the suggestions.  If we don't see any
fundamental flaw, we can proceed with the initial sub-optimal implementation
and refine it later with the service domain implementation.

Ton Ngo,




From:   Vikas Choudhary 
To: openstack-dev@lists.openstack.org
Date:   09/20/2015 09:02 PM
Subject:[openstack-dev] [magnum] Handling password for k8s



Hi Ton,
kube-masters will be nova instances only and because any access to
nova-instances is already being secured using keystone, I am not able to
understand what are the concerns in storing password on master-nodes.
Can you please list down concerns in our current approach?
-Vikas Choudhary
Hi everyone,
I am running into a potential issue in implementing the support for load
balancer in k8s services. After a chat with sdake, I would like to run this
by the team for feedback/suggestion.
First let me give a little background for context. In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from external
network. The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool,
members, VIP, monitor. The user would associate the VIP with a floating IP
and then the endpoint of the service would be accessible from the external
internet.
To talk to Neutron, k8s needs the user credential and this is stored in a
config file on the master node. This includes the username, tenant name,
password. When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password.
With the current effort on security to make Magnum production-ready, we want
to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use,
but this will require sizeable change upstream in k8s. We have good reason
to pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the
heat templates
  3.  When configuring the master node, the password is saved in the config
file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We can
deprecate it later when we have a better solution. So leaving aside the
issue of how k8s should be changed, the question is: is this approach
reasonable for the time, or is there a better approach?

Ton Ngo,


Re: [openstack-dev] [Neutron] Port Forwarding API

2015-09-20 Thread shihanzhang


 2) The same FIP address can be used for different mappings, for example 
FIP with IP X

  can be used with different ports to map to different VM's X:4001  -> 
VM1 IP

  X:4002 -> VM2 IP (This is the essence of port forwarding).

 So we also need the port mapping configuration fields


For the second use case, I have a question: does it support DVR?  If VM1 and
VM2 are on different compute nodes, how does it work?





On 2015-09-20 14:26:23, "Gal Sagie" wrote:

Hello All,


I have sent a spec [1] to resume the work on port forwarding API and reference 
implementation.


Its currently marked as "WIP", however i raised some "TBD" questions for the 
community.

The way i see port forwarding is an API that is very similar to floating IP API 
and implementation

with few changes:


1) Can only define port forwarding on the router external gateway IP (or 
additional public IPs

   that are located on the router.  (Similar to the case of centralized DNAT)


2) The same FIP address can be used for different mappings, for example FIP 
with IP X

can be used with different ports to map to different VM's X:4001  -> VM1 IP 
  

X:4002 -> VM2 IP (This is the essence of port forwarding).

So we also need the port mapping configuration fields


All the rest should probably behave (in my opinion) very similar to FIP's (for 
example

not being able to remove external gateway if port forwarding entries are 
configured,

if the VM is deletd the port forwarding entry is deleted as well and so on..)

All of these points are mentioned in the spec and i am waiting for the 
community feedback

on them.


I am trying to figure out if implementation wise, it would be smart to try and 
use the floating IP

implementation and extend it for this (given all the above mechanism described 
above already

works for floating IP's)

Or, add another new implementation which behaves very similar to floating IP's 
in most aspects

(But still differ in some)

Or something else...


Would love to hear the community feedback on the spec, even that its WIP


Thanks
Gal.

[1] https://review.openstack.org/#/c/224727/


Re: [openstack-dev] [CINDER] [PTL Candidates] Questions

2015-09-20 Thread Sean McGinnis
On Sun, Sep 20, 2015 at 11:30:15AM -0600, John Griffith wrote:
> ​PTL nomination emails are good, but I have a few questions that I'd like
> to ask to help me in making my vote.  Some of these are covered in the
> general proposal announcements, but I'd love to hear some more detail.
> 
> It would be awesome if the Cinder candidates could spend some time and
> answer these to help me (and maybe others) make an informed choice:

Great idea John. We have a lot of candidates this time around, so it's
probably a good idea to get a little more info before the election is
over.

> 
> 1. Do you actually have the time to spend to be PTL
> 

Yes. Prior to submitting my name I had a few conversations with my
management to make sure this would be something they would support.

I have been assured I could make Cinder my primary and full time
responsibility should I become elected.

> 
> 2. What are your plans to make the Cinder project as a core component
> better (no... really, what specifically and how does it make Cinder better)?
> 
> Most candidates are representing a storage vendor naturally.  Everyone says
> "make Cinder better"; But how do you intend to balance vendor interest and
> the interest of the general project?  Where will your focus in the M
> release be?  On your vendor code or on Cinder as a whole?  Note; I'm not
> suggesting that anybody isn't doing the "right" thing here, I'm just asking
> for specifics.

Even though we have some vendor code in Cinder, I'm lucky enough to have
a couple of folks on my team whom I've had taking care of anything to do
with our driver. My role would be to focus specifically on the core and on
overall contributions (multi-vendor contributions, cross-project
collaboration, etc.).

I can't say I have a specific, actionable plan for exactly what I would
do to "make Cinder better". I think we already have several initiatives
under way in that respect, and I would help to make those happen. I
would see my role as more of a facilitator, helping to provide support and
focus resources on accomplishing these goals.

I would also work with OpenStack operators, other projects that are
consumers of Cinder services, and the community at large to make sure
Cinder is meeting their block storage needs.

> 
> 3. ​Why do you want to be PTL for Cinder?
> 
> Seems like a silly question, but really when you start asking that question
> the answers can be surprising and somewhat enlightening.  There's different
> motivators for people, what's yours?  By the way, "my employer pays me a
> big bonus if I win" is a perfectly acceptable answer in my opinion, I'd
> prefer honesty over anything else.  You may not get my vote, but you'd get
> respect.

I won't get a big bonus, and I doubt I would get any kind of promotion
or increase out of this. What I would get is the ability to focus full
time on OpenStack and Cinder. Right now it is one of several of my 
responsibilities, and something that I spend a lot of my own time
on because I enjoy working on the project, and working with the folks
involved, and I believe in the future of OpenStack. 

I want to be PTL because I feel I could be a "facilitator" of all the
different efforts underway, to help drive them to completion and to
help reduce the distractions away from all the smart folks that are
getting things done. I can organize, communicate, and simplify our efforts.

> 
> Thanks,
> John




[openstack-dev] [openstack-operators][tc][tags] Rally tags

2015-09-20 Thread Boris Pavlovic
Hi stackers,

The Rally project is being used more and more by operators to check that
live OpenStack clouds perform well and that they are ready for production.

The results of the PAO ops meeting showed that there is interest in
Rally-related tags for projects:
http://www.gossamer-threads.com/lists/openstack/operators/49466

3) "works in rally" - new tag suggestion
> There was general interest in asking the Rally team to consider making a
> "works in rally" tag, since the rally tests were deemed 'good'.


I have a few ideas about the Rally tags:

- covered-by-rally
   It means that there are official (inside the Rally repo) plugins for
testing of the particular project.

- has-rally-gates
   It means that Rally is run against every patch proposed to the project.

- certified-by-rally [wip]
   As well, we are starting work on a certification task:
https://review.openstack.org/#/c/225176/5
   which will be the standard way to check whether a cloud is ready for
production, based on volume, performance & scale testing.


Thoughts?


Best regards,
Boris Pavlovic


[openstack-dev] [Fuel] Core Reviewers groups restructure

2015-09-20 Thread Mike Scherbakov
Hi all,
as part of my larger proposal on improvements to the code review workflow
[1], we need to have cores for repositories, not for the whole of Fuel. It
is the path we have been taking for a while, with new core reviewers added
to specific repos only. Now we need to complete this work.
My proposal is:

   1. Get rid of one common fuel-core [2] group, members of which can merge
   code anywhere in Fuel. Some members of this group may cover a couple of
   repositories, but can't really be cores in all repos.
   2. Extend existing groups, such as fuel-library [3], with members from
   fuel-core who are keeping up with large number of reviews / merges. This
   data can be queried at Stackalytics.
   3. Establish a new group "fuel-infra", and ensure that it's included
   into any other core group. This is for maintenance purposes, it is expected
   to be used only in exceptional cases. Fuel Infra team will have to decide
   whom to include into this group.
   4. Ensure that fuel-plugin-* repos will not be affected by removal of
   fuel-core group.

#2 needs specific details. Stackalytics can show active cores easily; we
can look at the people marked with *:
http://stackalytics.com/report/contribution/fuel-web/180. This is for
fuel-web; change the link for other repos accordingly. If people were added
specifically to a particular group, I am leaving them as is (some of them
are no longer active, but let's clean them up separately from this group
restructure process).

   - fuel-library-core [3] group will have following members: Bogdan D.,
   Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
   - fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin,
   Vitaly Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
   - fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
   - fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
   - fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky, Nastya
   Urlapova
   - fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
   Konstantinov, Olga Gusarenko
   - fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry Pyzhov,
   Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
   - fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
   - fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
   Sledzinsky, Dmitry Shulyak
   - fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
   - fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya
   Urlapova
   - fuel-stats-core [14]: Alex Kislitsky, Alexey Kasatkin, Vitaly Kramskikh
   - fuel-tasklib-core [15]: Igor Kalnitsky, Dima Shulyak, Alexey Kasatkin
   (this project seems to be dead, let's consider to rip it off)
   - fuel-specs-core: there is no such a group at the moment. I propose to
   create one with following members, based on stackalytics data [16]: Vitaly
   Kramskikh, Bogdan Dobrelia, Evgeny Li, Sergii Golovatyuk, Vladimir Kuklin,
   Igor Kalnitsky, Alexey Kasatkin, Roman Vyalov, Dmitry Borodaenko, Mike
   Scherbakov, Dmitry Pyzhov. We would need to reconsider who can merge after
   Fuel PTL/Component Leads elections
   - fuel-octane-core: needs to be created. Members: Yury Taraday, Oleg
   Gelbukh, Ilya Kharin
   - fuel-mirror-core: needs to be created. Sergey Kulanov, Vitaly Parakhin
   - fuel-upgrade-core: needs to be created. Sebastian Kalinowski, Alex
   Schultz, Evgeny Li, Igor Kalnitsky
   - fuel-provision: repo seems to be outdated, needs to be removed.

I suggest making the changes to the groups first, and then separately
addressing specific issues like removing someone from cores (not doing
enough reviews anymore, or too many positive reviews, let's say > 95%).

I hope I haven't missed anyone / anything. Please check carefully.
Comments / objections?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
[2] https://review.openstack.org/#/admin/groups/209,members
[3] https://review.openstack.org/#/admin/groups/658,members
[4] https://review.openstack.org/#/admin/groups/664,members
[5] https://review.openstack.org/#/admin/groups/655,members
[6] https://review.openstack.org/#/admin/groups/646,members
[7] https://review.openstack.org/#/admin/groups/656,members
[8] https://review.openstack.org/#/admin/groups/657,members
[9] https://review.openstack.org/#/admin/groups/659,members
[10] https://review.openstack.org/#/admin/groups/1000,members
[11] https://review.openstack.org/#/admin/groups/660,members
[12] https://review.openstack.org/#/admin/groups/661,members
[13] https://review.openstack.org/#/admin/groups/662,members
[14] https://review.openstack.org/#/admin/groups/663,members
[15] https://review.openstack.org/#/admin/groups/624,members
[16] http://stackalytics.com/report/contribution/fuel-specs/180


-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [CINDER] [PTL Candidates] Questions

2015-09-20 Thread Ivan Kolodyazhny
Hi John,

Thank you for these questions. Such questions, with answers, could be a good
part of PTL proposals in the future.

Please, see my answers inline.

Regards,
Ivan Kolodyazhny

On Sun, Sep 20, 2015 at 8:30 PM, John Griffith 
wrote:

> ​PTL nomination emails are good, but I have a few questions that I'd like
> to ask to help me in making my vote.  Some of these are covered in the
> general proposal announcements, but I'd love to hear some more detail.
>
> It would be awesome if the Cinder candidates could spend some time and
> answer these to help me (and maybe others) make an informed choice:
>
> 1. Do you actually have the time to spend to be PTL?
>
> I don't think many people realize the time commitment: staying on top of
> reviews, keeping a pretty consistent view of what's going on and in
> process, meetings, questions on IRC, program-management-type stuff, etc.
>

I sincerely admire any PTL who can stay on top of reviews, commits, etc.
Cross-project meetings and activities take a lot of time. IRC participation
is required of every active contributor, especially PTLs. Talking about
Cinder, we need to remember that it is not only the community that is
involved in the project: many vendors have their own drivers and don't
contribute to other parts of the project. The Cinder PTL is also responsible
for communication and collaboration with vendors, to keep their drivers
working with Cinder.


> Do you feel you'll be able to make PTL your FULL-time job?
>
That was the first question I asked myself before nominating.


> Don't forget you're working with folks in a community that spans multiple
> time zones.
>

Sure, I can't forget that, because I spend time almost every night in our
#openstack-cinder channel. For something more measurable, I would point
only to this report: http://stackalytics.com/report/users/e0ne



>
> 2. What are your plans to make the Cinder project as a core component
> better (no... really, what specifically and how does it make Cinder better)?
>
> Most candidates naturally represent a storage vendor.  Everyone says
> "make Cinder better"; but how do you intend to balance vendor interests
> and the interests of the general project?  Where will your focus in the M
> release be?  On your vendor code or on Cinder as a whole?  Note: I'm not
> suggesting that anybody isn't doing the "right" thing here, I'm just asking
> for specifics.
>

My company doesn't have its own driver. I don't want to talk about the Block
Device Driver now; I'll be the person who creates the patch to remove it
after the M-2 milestone if this driver doesn't have CI and the minimum
required feature set.

As a Cinder user and contributor, I'm interested in making the Cinder core
more flexible (e.g. working without Nova) and better tested (e.g. functional
tests, 3rd-party CI, unit test coverage, etc.), and in making our users
happier with it.


>
> 3. ​Why do you want to be PTL for Cinder?
>
> Seems like a silly question, but really when you start asking that
> question the answers can be surprising and somewhat enlightening.  There
> are different motivators for people; what's yours?  By the way, "my employer
> pays me a big bonus if I win" is a perfectly acceptable answer in my
> opinion; I'd prefer honesty over anything else.  You may not get my vote,
> but you'd get respect.
>

OpenStack itself is a very dynamic project. It makes big progress each
release, and it varies greatly from one release to another. It's a real
community-driven project, and I think PTLs are not dictators: PTLs only
help the community work on Cinder to make it better. Each person, as PTL,
can bring something new to the community. Sometimes "something new" may
mean "something bad", but after each mistake we'll do our work better.
Being PTL helps you understand not only Cinder developers' needs but also
the needs of other OpenStack projects; a PTL should take care of more than
just Cinder, Nova, or Heat. IMO, the main task of each PTL is to coordinate
the developers of one project, other OpenStack developers, and vendors to
work together on a regular basis. It will be a very big challenge for me,
and I'll do my best to make Cinder better as PTL. I'm sure the Cinder
community will help our new PTL a lot with it.


> Thanks,
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Ivan.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-20 Thread Doug Hellmann
Excerpts from Renat Akhmerov's message of 2015-09-20 19:06:20 +0300:
> 
> > On 19 Sep 2015, at 16:04, Doug Hellmann  wrote:
> > 
> > Excerpts from Renat Akhmerov's message of 2015-09-19 00:35:49 +0300:
> >> Doug,
> >> 
> >> python-mistralclient-1.1.0 (also on pypi) is the final release for 
> >> Liberty. Here’s the patch updating global-requirements.txt: 
> >> https://review.openstack.org/#/c/225330/ 
> >> (upper-constraints.txt should
> >> be soon updated automatically, in my understanding)
> > 
> > Because we're in requirements freeze, we're trying not to update any of
> > the global-requirements.txt entries unless absolutely necessary. At this
> > point, no projects can be depending on the previously unreleased
> > features in 1.1.0, so as long as python-mistralclient doesn't have a cap
> > on the major version allowed the requirements list, it should only be
> > necessary to update the constraints.
> > 
> > Please update the constraints file by hand, only changing
> > python-mistralclient. That will allow us to land the update without
> > changing any other libraries in the test infrastructure (the automated
> > update submits all of the changes together, and we have several
> > outstanding right now).
> 
> Ok, understood.
> 
> https://review.openstack.org/#/c/225491/ 
> 

+2
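
(As a hedged illustration for anyone following along: a constraints-only
update like the one above is normally a one-line change to
upper-constraints.txt; the old pin shown here is hypothetical.)

    -python-mistralclient===1.0.0
    +python-mistralclient===1.1.0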

> >> So far I have been doing release management for Mistral myself (~2 years),
> >> and for the last year I’ve been trying to stay aligned with the OpenStack
> >> schedule. In May 2015 Mistral was accepted into the Big Tent, so does that
> >> mean I’m no longer responsible for doing that? Or can I still do it on my
> >> own? Even with the final Mistral client for Liberty I did it myself (didn’t
> >> create a stable branch yet, though); maybe I shouldn’t have. Clarifications
> >> would be helpful.
> > 
> > It means you can now ask the release management team to take over for
> > the library, but that is not an automatic change.
> >> Does this all apply to all Big Tent projects?
> > 
> > Yes, and to all horizontal teams. Every project team is expected
> > to provide liaisons to all horizontal teams now. The degree to which
> > a horizontal team does the work for you is up to each pair of teams
> > to negotiate.
> 
> I’d prefer to take care of the Liberty releases myself. We don’t have much
> time until the end of Liberty, and we may not be able to establish all the
> required connections with the horizontal teams. Is that OK?

That makes a lot of sense.

Doug

> 
> Renat Akhmerov
> @ Mirantis Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Ton Ngo


Hi everyone,
I am running into a potential issue in implementing support for
load balancers in k8s services.  After a chat with sdake, I would like to
run this by the team for feedback/suggestions.
First let me give a little background for context.  In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel);
they can access each other, but they cannot be accessed from the external
network.  The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
type: LoadBalancer
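
A minimal sketch of such a manifest (the service name, selector, and ports
are hypothetical) would look roughly like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend            # hypothetical service name
    spec:
      type: LoadBalancer            # ask k8s to provision an external load balancer
      selector:
        app: web-frontend           # label of the pods backing this service
      ports:
      - port: 80                    # port exposed on the load balancer VIP
        targetPort: 8080            # port the backend pods listen on
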
   Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, and monitor.  The user would then associate the VIP with
a floating IP, and the endpoint of the service would be accessible from the
internet.
   To talk to Neutron, k8s needs the user's credentials, and these are stored
in a config file on the master node.  This includes the username, tenant
name, and password.  When k8s starts up, it loads the config file and creates
an authenticated client with Keystone.
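
As a rough sketch only (the exact key names depend on the k8s version, so
treat everything below as an assumption), that config file is an INI-style
file along these lines:

    [Global]
    ; all values below are hypothetical
    auth-url=http://10.0.0.10:5000/v2.0
    username=demo
    tenant-name=demo
    password=secret
    region=RegionOne
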
The issue we need to find a good solution for is how to handle the
password.  With the current effort on security to make Magnum
production-ready, we want to make sure we handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to
use, but this will require a sizeable change upstream in k8s.  We have good
reason to pursue this, but it will take time.
For now, my current implementation is as follows:
   - In a bay-create, the magnum client adds the password to the API call
     (normally it authenticates and sends the token).
   - The conductor picks it up and uses it as an input parameter to the Heat
     templates (see the sketch after this list).
   - When configuring the master node, the password is saved in the config
     file for the k8s services.
   - Magnum does not store the password internally.
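
On the Heat side, a hedged sketch (the parameter name is made up, and the
real Magnum templates may differ) of how the password parameter can at least
be kept out of stack-show output:

    parameters:
      user_password:
        type: string
        hidden: true    # Heat masks 'hidden' parameters in API/CLI output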

This is probably not ideal, but it would let us proceed for now.  We
can deprecate it later when we have a better solution.  So, leaving aside
the issue of how k8s should be changed, the question is: is this approach
reasonable for the time being, or is there a better approach?

Ton Ngo,

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev