Re: [openstack-dev] [neutron] Is it possible that creating a port and binding it with 2 IPs

2016-12-05 Thread zhi
Hi Vincent, as far as I know, Neutron supports trunk ports[1] now. You can
create a parent port and add one or more subports on it. Every subport has
its own IP address.

[1]: https://wiki.openstack.org/wiki/Neutron/TrunkPort
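For the simpler case in your question, a port can also carry more than one
fixed IP directly, and a VIP answered by an existing port is usually expressed
with allowed-address-pairs. A rough CLI sketch (network/subnet names and
addresses here are placeholders, assuming the neutron CLI):

# two fixed IPs on a single port (net1/subnet1 and the addresses are examples)
neutron port-create net1 \
  --fixed-ip subnet_id=<subnet1-id>,ip_address=10.0.0.10 \
  --fixed-ip subnet_id=<subnet1-id>,ip_address=10.0.0.11

# let an existing port also receive traffic for a VIP it does not own
neutron port-update <port-id> \
  --allowed-address-pairs type=dict list=true ip_address=10.0.0.100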


Thanks
Zhi Chang

2016-12-06 14:42 GMT+08:00 Vincent.Chao :

> Hi Neutrons,
>
> From my observation of the flow table, every created port seems to be bound
> to only one IP address.
> I was wondering: does Neutron support binding a port with two IPs?
>
> For example, consider a VNF load balancer, which has a VIP and a real IP,
> and this VNF receives packets destined to either the VIP or the real IP
> through one port.
>
> Any corrections or suggestions are welcome.
> Thanks in advance.
>
> Vincent
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Is it possible that creating a port and binding it with 2 IPs

2016-12-05 Thread Vincent.Chao
Hi Neutrons,

From my observation of the flow table, every created port seems to be bound
to only one IP address.
I was wondering: does Neutron support binding a port with two IPs?

For example, consider a VNF load balancer, which has a VIP and a real IP,
and this VNF receives packets destined to either the VIP or the real IP
through one port.

Any corrections or suggestions are welcome.
Thanks in advance.

Vincent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Meeting at 20:00 UTC this Wednesday, 7th December

2016-12-05 Thread Richard Jones
Hi folks,

The Horizon team will be having our next meeting at 20:00 UTC this
Wednesday, 7th December in #openstack-meeting-3

Meeting agenda is here: https://wiki.openstack.org/wiki/Meetings/Horizon

If we have spare time this meeting I think we should look into getting some
patches reviewed together.

Anyone is welcome to add agenda items, and everyone interested in
Horizon is encouraged to attend.


Cheers,

Richard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][neutron-lbaas][octavia] About installing Devstack on Ubuntu

2016-12-05 Thread Yipei Niu
Hi, All,

I failed to install devstack on Ubuntu. The details of my local.conf and the
error are pasted at http://paste.openstack.org/show/591493/.

BTW, python2.7 is installed in Ubuntu, and Python.h can be found under
/usr/include/python2.7.

stack@stack-VirtualBox:~$ locate Python.h

/usr/include/python2.7/Python.h


However, after I comment out the configuration related to Neutron-LBaaS and
Octavia in local.conf, the installation succeeds.
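
For reference, the Neutron-LBaaS/Octavia part of local.conf that I comment out
looks roughly like the sketch below (plugin lines and service names here are
only an approximation; the exact lines are in the paste above):

[[local|localrc]]
# devstack plugin hooks for LBaaS v2 and Octavia (names assumed)
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
enable_plugin octavia https://git.openstack.org/openstack/octavia
ENABLED_SERVICES+=,q-lbaasv2,octavia,o-cw,o-hk,o-hm,o-api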

Is it a bug? How can I fix it? I look forward to your comments. Thanks.

Best regards,
Yipei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] About installing devstack on Ubuntu

2016-12-05 Thread Yipei Niu
Hi, All,

I failed to install devstack on Ubuntu. The details of my local.conf and the
error are pasted at http://paste.openstack.org/show/591493/.

BTW, python2.7 is installed in Ubuntu, and Python.h can be found under
/usr/include/python2.7.

stack@stack-VirtualBox:~$ locate Python.h

/usr/include/python2.7/Python.h


Look forward to your comments. Thanks.

Best regards,
Yipei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ci]

2016-12-05 Thread Ian Main
Wesley Hayutin wrote:
> Greetings,
> 
> I wanted to send a status update on the quickstart based containerized
> compute ci.
> 
> The work is here:
> https://review.openstack.org/#/c/393348/
> 
> I had two passes in a row on the morning of Nov 30; then later that day the
> deployment started to fail due to the compute node losing its networking and
> becoming unpingable.  After poking around and talking to a few folks, it's
> likely that we're hitting at least one of two possible bugs [1-2].
> 
> I am on PTO next week but will periodically check in and can easily retest
> if these resolve.

I've been seeing this a lot too.  It's happening to both the controller and
the compute for me, probably because the controller is ALSO running the
firstboot script in docker/, which is not what we want (or we need it to be
smarter, anyway).

So far it appears that cloud-init is running our firstboot script but is not
configuring networking.  If I run dhclient eth0 the interface comes up and has
internet access, etc.  I'm going to look into this more tomorrow.
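
Rough sketch of what I plan to check (assuming a systemd image with cloud-init;
the interface name is just an example):

sudo journalctl -u cloud-init --no-pager | tail -n 50   # did cloud-init reach network config?
less /var/log/cloud-init.log /var/log/cloud-init-output.log
ip addr show eth0      # interface up but no address?
sudo dhclient eth0     # manual DHCP brings connectivity back, as noted above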

  Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tap-as-a-service] Use ExtensionDescriptor from neutron-lib

2016-12-05 Thread Gary Kotton
Sorry, I hit send before posting the trace.

2016-12-06 03:43:02.554832 | Failed to import test module: vmware_nsx.tests.unit.services.neutron_taas.test_nsxv3_driver
2016-12-06 03:43:02.554851 | Traceback (most recent call last):
2016-12-06 03:43:02.554894 |   File "/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
2016-12-06 03:43:02.554912 |     module = self._get_module_from_name(name)
2016-12-06 03:43:02.554954 |   File "/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
2016-12-06 03:43:02.554976 |     __import__(name)
2016-12-06 03:43:02.555000 |   File "vmware_nsx/tests/unit/services/neutron_taas/test_nsxv3_driver.py", line 20, in <module>
2016-12-06 03:43:02.555014 |     from neutron_taas.extensions import taas
2016-12-06 03:43:02.555037 |   File "/tmp/openstack/tap-as-a-service/neutron_taas/extensions/taas.py", line 156, in <module>
2016-12-06 03:43:02.555052 |     class Taas(extensions.ExtensionDescriptor):
2016-12-06 03:43:02.555071 | AttributeError: 'module' object has no attribute 'ExtensionDescriptor'
2016-12-06 03:43:02.555084 | The test run didn't actually run any tests
2016-12-06 03:43:02.844792 |
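
From the trace it looks like taas.py still imports the extensions module from
neutron, while ExtensionDescriptor now lives in neutron-lib. A sketch of the
likely shape of the fix (the actual change is the review below,
https://review.openstack.org/#/c/398143):

# in neutron_taas/extensions/taas.py (sketch only, not the actual patch):
#   replace:  from neutron.api import extensions
#   with:     from neutron_lib.api import extensions
# so that "class Taas(extensions.ExtensionDescriptor)" resolves against neutron-lib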

Can someone from the team please take a look ASAP?

From: Gary Kotton 
Reply-To: OpenStack List 
Date: Tuesday, December 6, 2016 at 6:09 AM
To: OpenStack List 
Subject: [openstack-dev] [neutron][tap-as-a-service] Use ExtensionDescriptor 
from neutron-lib

Hi,
Patch https://review.openstack.org/#/c/398113/ has landed. This breaks plugins
that use tap-as-a-service. Can someone please approve
https://review.openstack.org/#/c/398143
Thanks
Gary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][tap-as-a-service] Use ExtensionDescriptor from neutron-lib

2016-12-05 Thread Gary Kotton
Hi,
Patch https://review.openstack.org/#/c/398113/ has landed. This breaks plugins
that use tap-as-a-service. Can someone please approve
https://review.openstack.org/#/c/398143
Thanks
Gary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tricircle]DVR issue in cross Neutron networking

2016-12-05 Thread joehuang
Hello, Brian,

Thank you for your comments; see my inline replies marked with [joehuang].

The ASCII figure does not render well in plain-text mail; you can view it in a
browser:
http://lists.openstack.org/pipermail/openstack-dev/2016-December/108447.html

Best Regards
Chaoyi Huang (joehuang)


From: Brian Haley [brian.ha...@hpe.com]
Sent: 06 December 2016 10:29
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][tricircle]DVR issue in cross Neutron 
networking

On 12/5/16 3:03 AM, joehuang wrote:
> Hello,

Hi Chaoyi,

Comments inline below.

>  Tricircle plans to provide L2 networks across Neutron instances to ease
>  supporting high availability of applications:
>
>  For example, in the following figure, the application consists of
>  Instance1 and Instance2; these two instances will be deployed into two
>  OpenStack clouds. Instance1 will provide service through "ext net1" (i.e.,
>  the external network in OpenStack1), and Instance2 will provide service
>  through "ext net2". Instance1 and Instance2 will be plugged into the same
>  L2 network net3 for data replication (for example, database replication).
>
>   +-+   +-+
>   |OpenStack1   |   |OpenStack2   |
>   | |   | |
>   | ext net1|   | ext net2|
>   |   +-+-+ |   |   +-+-+ |
>   | |   |   | |   |
>   | |   |   | |   |
>   |  +--+--+|   |  +--+--+|
>   |  | ||   |  | ||
>   |  | R1  ||   |  | R2  ||
>   |  | ||   |  | ||
>   |  +--+--+|   |  +--+--+|
>   | |   |   | |   |
>   | |   |   | |   |
>   | +---+-+-+   |   | +---+-+-+   |
>   | net1  | |   | net2  | |
>   |   | |   |   | |
>   |  ++--+  |   |  ++--+  |
>   |  | Instance1 |  |   |  | Instance2 |  |
>   |  +---+  |   |  +---+  |
>   | |   |   | |   |
>   | |   | net3  | |   |
>   |  +--+-++  |
>   | |   | |
>   +-+   +-+

Are Openstack1 and 2 simply different compute nodes?

[joehuang] No, OpenStack1 and OpenStack2 are two OpenStack clouds; each cloud
 includes its own services such as Nova, Cinder, and Neutron. That means
 R1 and net1 are network objects in OpenStack cloud1 (Neutron1), and
 R2 and net2 are network objects in OpenStack cloud2 (Neutron2),
 but net3 will be a shared L2 network existing in both OpenStack cloud1 and
 OpenStack cloud2, i.e. in Neutron1 and Neutron2.

>  When we deploy the application in such a way, no matter which part of the
>  application stops providing service, the other part can still provide
>  service and take over the workload from the failed one. This brings failure
>  tolerance whether the failure is due to an OpenStack crash or upgrade, or a
>  crash or upgrade of part of the application.
>
>  This mode works very well and is helpful, and routers R1 and R2 can run in
>  DVR or legacy mode.
>
>  However, during the discussion and review of the spec
>  https://review.openstack.org/#/c/396564/, it was noted that in this
>  deployment the end user has to add two NICs to each instance, one of them
>  for net3 (an L2 network spanning OpenStack clouds), and net3 cannot be added
>  via add_router_interface to routers R1 and R2; this is not good networking.
>
>  If the end user wants to do so, there are DVR MAC issues once more than one
>  cross-OpenStack L2 network is added via add_router_interface to routers
>  R1 and R2.
>
>  Let's look at the following deployment scenario:
>  +-+   +---+
>  |OpenStack1   |   |OpenStack2 |
>  | |   |   |
>  | ext net1|   | ext net2  |
>  |   +-+-+ |   |   +-+-+   |
>  | |   |   | | |
>  | |   |   | | |
>  | +---+--+|   |  +--+---+ |
>  | |  ||   |  |  | |
>  | |R1||   |  |   R2 | |
>  | |  ||   |  |  | |
>  | ++--+--+|   |  +--+-+-+ |
>  |  |  |   |   | | |   |
>  |  |  |   | net3  | | |   |
>  |  |   -+-+---+-+--+  |   |
>  |  || |   |   |   |   |
>  |  | +--+---+ |   | +-+-+ |   |
>  |  | | Instance1| |   | | Instance2 | |   |
>  |  | +--+ |   | +---+ |   |
>  |  |  

Re: [openstack-dev] [neutron][tricircle]DVR issue in cross Neutron networking

2016-12-05 Thread Brian Haley

On 12/5/16 3:03 AM, joehuang wrote:

Hello,


Hi Chaoyi,

Comments inline below.


 Tricircle plans to provide L2 networks across Neutron instances to ease
 supporting high availability of applications:

 For example, in the following figure, the application consists of
 Instance1 and Instance2; these two instances will be deployed into two
 OpenStack clouds. Instance1 will provide service through "ext net1" (i.e.,
 the external network in OpenStack1), and Instance2 will provide service
 through "ext net2". Instance1 and Instance2 will be plugged into the same
 L2 network net3 for data replication (for example, database replication).

  +-+   +-+
  |OpenStack1   |   |OpenStack2   |
  | |   | |
  | ext net1|   | ext net2|
  |   +-+-+ |   |   +-+-+ |
  | |   |   | |   |
  | |   |   | |   |
  |  +--+--+|   |  +--+--+|
  |  | ||   |  | ||
  |  | R1  ||   |  | R2  ||
  |  | ||   |  | ||
  |  +--+--+|   |  +--+--+|
  | |   |   | |   |
  | |   |   | |   |
  | +---+-+-+   |   | +---+-+-+   |
  | net1  | |   | net2  | |
  |   | |   |   | |
  |  ++--+  |   |  ++--+  |
  |  | Instance1 |  |   |  | Instance2 |  |
  |  +---+  |   |  +---+  |
  | |   |   | |   |
  | |   | net3  | |   |
  |  +--+-++  |
  | |   | |
  +-+   +-+


Are Openstack1 and 2 simply different compute nodes?


 When we deploy the application in such a way, no matter which part of the
 application stops providing service, the other part can still provide
 service and take over the workload from the failed one. This brings failure
 tolerance whether the failure is due to an OpenStack crash or upgrade, or a
 crash or upgrade of part of the application.

 This mode works very well and is helpful, and routers R1 and R2 can run in
 DVR or legacy mode.

 However, during the discussion and review of the spec
 https://review.openstack.org/#/c/396564/, it was noted that in this
 deployment the end user has to add two NICs to each instance, one of them
 for net3 (an L2 network spanning OpenStack clouds), and net3 cannot be added
 via add_router_interface to routers R1 and R2; this is not good networking.

 If the end user wants to do so, there are DVR MAC issues once more than one
 cross-OpenStack L2 network is added via add_router_interface to routers
 R1 and R2.

 Let's look at the following deployment scenario:
 +-+   +---+
 |OpenStack1   |   |OpenStack2 |
 | |   |   |
 | ext net1|   | ext net2  |
 |   +-+-+ |   |   +-+-+   |
 | |   |   | | |
 | |   |   | | |
 | +---+--+|   |  +--+---+ |
 | |  ||   |  |  | |
 | |R1||   |  |   R2 | |
 | |  ||   |  |  | |
 | ++--+--+|   |  +--+-+-+ |
 |  |  |   |   | | |   |
 |  |  |   | net3  | | |   |
 |  |   -+-+---+-+--+  |   |
 |  || |   |   |   |   |
 |  | +--+---+ |   | +-+-+ |   |
 |  | | Instance1| |   | | Instance2 | |   |
 |  | +--+ |   | +---+ |   |
 |  |  | net4  |   |   |
 | ++---+--+---+-+ |
 |  |  |   |   |   |
 |  +---+---+  |   |  ++---+   |
 |  | Instance3 |  |   |  | Instance4  |   |
 |  +---+  |   |  ++   |
 | |   |   |
 +-+   +---+

 net3 and net4 are two L2 networks spanning the OpenStack clouds. These two
 networks will be added as router interfaces to R1 and R2. Tricircle can help
 with this, and it addresses the DHCP and gateway challenges: a different
 gateway port is used for the same network in each OpenStack, so there is no
 problem for north-south traffic; it goes out through the local external
 network directly, for example Instance1->R1->ext net1 and
 Instance2->R2->ext net2.


Can you describe the subnet configuration here?  Is there just one per
network, and what is the IP range?



 The issue is with east-west traffic if R1 and R2 are running in DVR mode:
 when Instance1 tries to ping Instance4, DVR MAC replacement will happen
 before the packet leaves the host where Instance1 is running; when the packet
 arrives at the host where 

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Matt Fischer
>
>
>
> I'm surprised any AD administrator let Keystone write to it. I've always
> heard the inverse: that AD admins would never allow Keystone to write to it,
> so it was never used for Projects or Assignments. Users were likewise
> read-only when AD was involved.
>
> I have seen normal LDAP setups work with Keystone in both read and write
> mode (but even then, write-allowed was the extreme minority).
>

Yes, agreed. AD administrators are generally pretty protective of write
access, and especially so against some Linux-based open source project writing
into their Windows kingdom. We got over not being able to store assignments
in LDAP, mainly because the blocker was not Keystone; it was corporate policy.

As for everything else that's been discussed, I think database replication
is easier, and when you're not replicating tokens, there's just not that
much traffic across the WAN. It's been very stable for us, especially since
we started using Fernet tokens.
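
For context, with Fernet the only cross-region state Keystone itself needs is
the key repository, which is tiny and easy to keep in sync. A rough sketch,
assuming the default key location and an rsync-based copy (host name is a
placeholder):

# one-time setup on the primary keystone, then rotate on a schedule
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
# push the repository to the other region(s)
rsync -a /etc/keystone/fernet-keys/ keystone-region2:/etc/keystone/fernet-keys/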
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [karbor] Karbor weekly irc meeting

2016-12-05 Thread edison xiang
Hi guys,

Karbor's weekly IRC meeting will be held today at 09:00 UTC in
#openstack-meeting.
You are welcome to add your agenda items at [1] and to join the meeting.

[1] https://wiki.openstack.org/wiki/Meetings/Karbor

Best Regards,

edison xiang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate][congress] subunit.parser failures in pythonXX gate job

2016-12-05 Thread Clark Boylan
On Mon, Dec 5, 2016, at 04:15 PM, Eric K wrote:
> Hi all,
> 
> Does anyone know what could be causing these subunit.parser failures in
> some pythonXX check jobs (they do not show up on my local setup). I
> vaguely remember seeing them before, but can't remember what they were.
> And haven't been able to find relevant info online or in lists. Thanks so
> much!
> 
> Example:
> 
> FAIL: subunit.parser
> tags: worker-0
> Binary content:
> Packet data (application/octet-stream)
> Parser Error: {{{Bad checksum - calculated (0xc62b176a), stored
> (0x494e464f)}}}
> 
> http://logs.openstack.org/29/254429/8/check/gate-congress-python27-ubuntu-xenial/b3dacaf/testr_results.html.gz

If you grab the subunit file (it's in the logs dir for that job run) you
can inspect it directly. I find version 2 of subunit a bit harder for
humans to read, so you can also convert it to v1 using subunit-2to1
(included with a python-subunit installation). Digging around a bit,
it appears that the logs are getting truncated:

Datasource driver raised exception
Traceback (most recent call last):
  File "congress/datasources/datasource_driver.py", line 1384, in poll
self.update_from_datasource()  # sets self.state
  File "congress/datasources/datasource_driver.py", line 1371, in
  update_from_datasource
self.update_methods[registered_table]()
  File "congress/datasources/nova_driver.py", line 215, in 
self.nova_client.floating_ips.list())
  File
  
"/home/jenkins/workspace/gate-congress-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/novaclient/api_versions.py",
  line 443, in wrapped
re2016-12-05 20:49:40.921 2394 INFO
congress.datasources.datasource_driver [-] nova:: polling
2016-12-05 20:49:40.922 2394 ERRO0^M
Content-Type: text/plain;charset=utf8
Parser Error 
3B^M
Bad checksum - calculated (0x8ecaea4f), stored (0x4552524f)0^M
]

Notice that ERRO0 ends that traceback. Of course now we must wonder if
that is truncated due to the checksum error or if the checksum error is
due to truncation. Either way I would look to see if something is
closing stdout inappropriately. ps on my local machine says you have a
subprocess congress_server running during at least some of these test
cases, perhaps it is interfering?

As for reproducing locally are you attempting to do so on Ubuntu Xenial
(so that python and other libs are the same version)? Also be sure to
use the same version of the constraints file to make sure all the python
deps are installed at the same version.

Other debugging things, you could run the specific failing test cases
with subunit directly and avoid the testr indirection layer:
`.tox/py27/bin/python -m subunit.run module.path.to.test` and
s/subunit/testtools/ if you just want to see it without the subunit data
stream.
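
Pulling those together into one rough sequence (file and test names below are
placeholders, not taken from the failing job):

# convert the v2 stream from the job's logs dir into something readable
subunit-2to1 < testrepository.subunit > results.v1
# run a single suspect test with the subunit runner, bypassing testr
.tox/py27/bin/python -m subunit.run congress.tests.some.failing_test
# same thing without the subunit stream wrapping the output
.tox/py27/bin/python -m testtools.run congress.tests.some.failing_test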

Hope this helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Morgan Fainberg
On Mon, Dec 5, 2016 at 3:21 PM, Andrey Grebennikov <
agrebenni...@mirantis.com> wrote:

>
>>
>> On Mon, Dec 5, 2016 at 2:31 PM, Andrey Grebennikov <
>> agrebenni...@mirantis.com> wrote:
>>
>>> -Original Message-
 From: Andrey Grebennikov 
 Reply: OpenStack Development Mailing List (not for usage questions)
 
 Date: December 5, 2016 at 12:22:09
 To: openstack-dev@lists.openstack.org 
 Subject:  [openstack-dev] [keystone] Custom ProjectID upon creation

 > Hi keystoners,

 I'm not a keystoner, but I hope you don't mind my replying.
>>>
>>>
 > I'd like to open the discussion about the little feature which I'm
 trying
 > to push forward for a while but I need some
 feedbacks/opinions/concerns
 > regarding this.
 > Here is the review I'm talking about https://review.
 > openstack.org/#/c/403866/
 >
 > What I'm trying to cover is multi-region deployment, which includes
 > geo-distributed cloud with independent Keystone in every region.
 >
 > There is a number of use cases for the change:
 > 1. Allow users to re-use their tokens in all regions across the distributed
 > cloud. With global authentication (LDAP-backed) and the same role names,
 > this is the only missing piece preventing the user from switching between
 > regions even within a single Horizon session.

 So this just doesn't sound right to me. You say above that there are
 independent Keystone deployments in each region. What token type are
 you using that each region could validate a token (assuming project
 IDs that are identical across regions) that would do this for
 "independent Keystone" deployments?

 Specifically, let's presume you use Keystone's new default of fernet
 tokens and you have independently deployed Keystone in each region.
 Without synchronizing the keys Keystone uses to generate and validate
 fernet tokens, I can't imagine how one token would work across all
 regions. This sounds like a lofty goal.

 Further, if Keystone is backed by LDAP, why are there projects being
 created in the Keystone database at all? I thought using LDAP as a
 backend would avoid that necessity. (Again, I'm not a keystone
 developer ;))

 Sorry that I didn't mention this in the beginning.
>>> Yes, it is supposed to be a Fernet token installation for sure; UUID will
>>> not work by default, and PKI is deprecated. The keys are supposed to be
>>> synchronized; without that, multi-site will never work even if I replicate
>>> the database.
>>> This is what I started with about half a year ago, immediately after
>>> receiving the use case. I created 2 clouds, replicated the keys, set up each
>>> Keystone to know about both sites as Regions, made the project IDs the same,
>>> and voila - with global LDAP for authentication in place I could even switch
>>> between these regions within one Horizon session. So that one works.
>>>
>>> Next, the ability to store projects in LDAP was removed 2 releases ago.
>>> From my personal opinion (and in fact not just mine but hundreds of other
>>> users as well) this was one of the biggest mistakes.
>>> This is one of the major questions from my side to the community - if it
>>> was always possible to store project IDs in the external provider, and if
>>> it is still possible to do it for the userIDs - what is the point of
>>> preventing it now?
>>>
>>>
>> I want to go on record that we (the maintainers of Keystone) and those of
>> us who have spent a significant amount of time working through the LDAP
>> code came to the community and asked who used this feature (Through many
>> channels). There were exactly 2 responses of deployers using LDAP to store
>> project IDs. Both of them were open or actively working towards moving
>> projects into SQL (or alternatively, developing their own "resource"
>> store). The LDAP backend for resources (projects/domains) was poorly
>> supported, had limited interest from the community (for improvements) and
>> generally was a very large volume of work to bring up to being on-par with
>> the SQL implementation.
>>
>> Without the interest of stakeholders (and with few/no active users), it
>> wasn't feasible to continue to maintain it. There is nothing stopping you
>> from storing projects externally. You can develop a driver to communicate
>> with the backend of your choice. The main reason projects were stored in
>> LDAP was due to the "all or nothing" original design of "KeystoneV2 "; you
>> could store users in LDAP but you also had to store projects, role
>> assignments, etc. Most deployments only wanted Users in LDAP but suffered
>> through the rest because it was required (there was no split of User,
>> Resource, and Assignment like there is today).
>>
>> Maybe it was the time when I haven't yet actively 

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Adrian Turjak

On 06/12/16 12:15, Andrey Grebennikov wrote:
>
> Just to clarify, the reason you want custom project IDs is so that
> when you create a project in one region, you can create it again
> with the same ID in another? Isn't that just manual replication
> though?
>
> What happens when there are projects in only one region? Or if
> that isn't meant to happen, how would you make sure that doesn't
> happen?
>
> Not quite sure I  understand the point...
> Yes, this is exactly the reason why I need this feature. Project
> creation/replication is supposed to be done via external scripts.
> Currently I already have 2 slightly different use cases which
> definitely require this feature.
>
>
My point was how is this any better than database replication? It pretty
much sounds like replication but at the API layer.

Do you just create a project normally in one region (keystone API) and
then a cron job or something triggers the replication script, or do you
have a queue of changes to replicate? Isn't that much the same as doing
asynchronous database replication? Or are you creating and duplicating
in the scope of the same script? If the latter, then that sounds like
synchronous DB replication.

How are you handling conflicts? Are you?

If networks are down, the same problem is present. If ProjectOne is
created in RegionOne and ProjectTwo in RegionTwo, the two keystones will
differ until the inter-region network is restored and your script runs in
both regions and actually gets through to the opposite one.

What is the benefit to doing things this way rather than just trusting
your replication to the DB layer?

I'm genuinely curious about your approach as we've been looking at
moving to a multi-master like set up for our keystone so I'd love to
know what problems you found with DB replication. API level replication
sounds like a terrifying idea, so I'm curious what exactly your reasons
behind wanting to do it are.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gate][congress] subunit.parser failures in pythonXX gate job

2016-12-05 Thread Eric K
Hi all,

Does anyone know what could be causing these subunit.parser failures in
some pythonXX check jobs (they do not show up on my local setup). I
vaguely remember seeing them before, but can't remember what they were.
And haven't been able to find relevant info online or in lists. Thanks so
much!

Example:

FAIL: subunit.parser
tags: worker-0
Binary content:
Packet data (application/octet-stream)
Parser Error: {{{Bad checksum - calculated (0xc62b176a), stored
(0x494e464f)}}}

http://logs.openstack.org/29/254429/8/check/gate-congress-python27-ubuntu-xenial/b3dacaf/testr_results.html.gz



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] consistency groups in ceph

2016-12-05 Thread Victor Denisov
I just realized that probably we create images from individual
snapshots of that consistency group and add those images to the new
consistency group.
Am I correct?

On Mon, Dec 5, 2016 at 3:20 PM, Victor Denisov  wrote:
> Or probably when we clone a consistency group from a snapshot we
> should clone all the images in this consistency group?
>
> On Mon, Dec 5, 2016 at 3:15 PM, Victor Denisov  wrote:
>> Hi Xing,
>>
>> One more question. You mentioned that there is an operation: create
>> consistency group from a snap shot.
>> Does it mean that an image can be a member of several consistency groups?
>>
>> Thanks,
>> V.
>>
>> On Tue, Nov 8, 2016 at 6:21 AM, yang, xing  wrote:
>>> You cannot remove a volume completely if there is still a group snapshot.  
>>> You can remove the volume from the group but you can’t delete the volume 
>>> because it still has snapshot dependent on it.  So if you want to 
>>> completely remove a volume that is in a group, you can delete the group 
>>> snapshot first which will delete the individual snapshot.  After that you 
>>> can remove the volume from the group and delete the volume.
>>>
>>> More comments inline below.
>>>
>>> Thanks,
>>> Xing
>>>
>>>
>>> 
>>> From: Victor Denisov [vdeni...@mirantis.com]
>>> Sent: Tuesday, November 8, 2016 12:04 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Cc: Jason Dillaman
>>> Subject: Re: [openstack-dev] [cinder] consistency groups in ceph
>>>
>>> One more question. What is the expected behavior if you remove a
>>> volume completely?
>>> [Xing] You cannot remove the volume completely (delete volume won't 
>>> succeed) if there is still a group snapshot.
>>>
>>> Should the group snapshot be removed first? Should the volume snapshot be
>>> removed from the group snapshot?
>>> [Xing] Yes, you should delete the group snapshot first and that will delete 
>>> the volume snapshot as well.
>>>
>>> Should we keep the snapshot even if the image doesn't exist anymore?
>>> [Xing] You cannot delete the image (volume) if there is still a snapshot.
>>>
>>> Thanks,
>>> Victor.
>>>
>>> On Tue, Nov 1, 2016 at 7:02 AM, yang, xing  wrote:
 Hi Victor,

 Please see my answers inline below.

 In Newton, we added support for Generic Volume Groups.  See doc below.  
 CGs will be migrated to Generic Volume Groups gradually.  Drivers should 
 not implement CGs any more.  Instead, it can add CG support using Generic 
 Volume Group interfaces.  I'm working on a dev doc to explain how to do 
 this and will send an email to the mailing list when I'm done.  The 
 Generic Volume Group interface is very similar to CG interface, except 
 that the Generic Volume Group requires an additional Group Type parameter 
 to be created.  Using Group Type, CG can be a special type of Generic 
 Volume Group.  Please feel free to grab me on Cinder IRC if you have any 
 questions.  My IRC handle is xyang or xyang1.

 http://docs.openstack.org/admin-guide/blockstorage-groups.html
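
 (A rough client-side sketch of the generic group workflow, assuming the
 cinderclient group commands are available at your microversion; the names are
 placeholders - see the admin guide above for the authoritative steps:)

 cinder group-type-create my_group_type
 cinder group-create my_group_type my_volume_type --name my_group
 cinder group-snapshot-create my_group --name my_group_snap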

 Thanks,
 Xing


 
 From: Victor Denisov [vdeni...@mirantis.com]
 Sent: Monday, October 31, 2016 11:29 PM
 To: openstack-dev@lists.openstack.org
 Cc: Jason Dillaman
 Subject: [openstack-dev] [cinder] consistency groups in ceph

 Hi,

 I'm working on consistency groups feature in ceph.
 My question is about what kind of behavior does cinder expect from
 storage backends.
 I'm particularly interested in what happens to consistency groups
 snapshots when I remove an image from the group:

 Let's imagine I have a consistency group called CG. I have images in
 the consistency group:
 Im1, Im2, Im3, Im4.
 Let's imagine we have snapshots of this consistency group:

 CGSnap1
 CGSnap2
 CGSnap3

 Snapshots of individual images in a consistency group snapshot I will call
 CGSnap2Im1 - Snapshot of image 1 from consistency group snapshot 2.

 Question 1:
 If consistency group CG has 4 images: Im1, Im2, Im3, Im4.
 Can CGSnap1 have more images than it already has: Im1, Im2, Im3, Im4, Im5.

 Can CGSnap1 have less images than it already has: Im1, Im2, Im3.

 [Xing]  Once a snapshot is taken from a CG, it can no longer be changed.  
 It is a point-in-time copy.  CGSnap1 cannot be modified.

 Question 2:
 If we remove image2 from the consistency group. Does it mean that
 snapshots of this image should be removed from all the CGSnaps.

 Example:
 We are removing Im2.
 CGSnaps look like this:

 CGSnap1 - CGSnap1Im1, CGSnap1Im2, CGSnap1Im3
 CGSnap2 - CGSnap2Im1, CGSnap2Im2, CGSnap2Im3, CGSnap3Im4
 CGSnap3 - CGSnap3Im1, CGSnap3Im2, CGSnap3Im3, CGSnap3Im4

 What happens 

Re: [openstack-dev] [cinder] consistency groups in ceph

2016-12-05 Thread Victor Denisov
Or probably when we clone a consistency group from a snapshot we
should clone all the images in this consistency group?

On Mon, Dec 5, 2016 at 3:15 PM, Victor Denisov  wrote:
> Hi Xing,
>
> One more question. You mentioned that there is an operation: create
> consistency group from a snap shot.
> Does it mean that an image can be a member of several consistency groups?
>
> Thanks,
> V.
>
> On Tue, Nov 8, 2016 at 6:21 AM, yang, xing  wrote:
>> You cannot remove a volume completely if there is still a group snapshot.  
>> You can remove the volume from the group but you can’t delete the volume 
>> because it still has snapshot dependent on it.  So if you want to completely 
>> remove a volume that is in a group, you can delete the group snapshot first 
>> which will delete the individual snapshot.  After that you can remove the 
>> volume from the group and delete the volume.
>>
>> More comments inline below.
>>
>> Thanks,
>> Xing
>>
>>
>> 
>> From: Victor Denisov [vdeni...@mirantis.com]
>> Sent: Tuesday, November 8, 2016 12:04 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Jason Dillaman
>> Subject: Re: [openstack-dev] [cinder] consistency groups in ceph
>>
>> One more question. What is the expected behavior if you remove a
>> volume completely?
>> [Xing] You cannot remove the volume completely (delete volume won't succeed) 
>> if there is still a group snapshot.
>>
>> Should the group snapshot be removed first? Should the volume snapshot be
>> removed from the group snapshot?
>> [Xing] Yes, you should delete the group snapshot first and that will delete 
>> the volume snapshot as well.
>>
>> Should we keep the snapshot even if the image doesn't exist anymore?
>> [Xing] You cannot delete the image (volume) if there is still a snapshot.
>>
>> Thanks,
>> Victor.
>>
>> On Tue, Nov 1, 2016 at 7:02 AM, yang, xing  wrote:
>>> Hi Victor,
>>>
>>> Please see my answers inline below.
>>>
>>> In Newton, we added support for Generic Volume Groups.  See doc below.  CGs 
>>> will be migrated to Generic Volume Groups gradually.  Drivers should not 
>>> implement CGs any more.  Instead, it can add CG support using Generic 
>>> Volume Group interfaces.  I'm working on a dev doc to explain how to do 
>>> this and will send an email to the mailing list when I'm done.  The Generic 
>>> Volume Group interface is very similar to CG interface, except that the 
>>> Generic Volume Group requires an additional Group Type parameter to be 
>>> created.  Using Group Type, CG can be a special type of Generic Volume 
>>> Group.  Please feel free to grab me on Cinder IRC if you have any 
>>> questions.  My IRC handle is xyang or xyang1.
>>>
>>> http://docs.openstack.org/admin-guide/blockstorage-groups.html
>>>
>>> Thanks,
>>> Xing
>>>
>>>
>>> 
>>> From: Victor Denisov [vdeni...@mirantis.com]
>>> Sent: Monday, October 31, 2016 11:29 PM
>>> To: openstack-dev@lists.openstack.org
>>> Cc: Jason Dillaman
>>> Subject: [openstack-dev] [cinder] consistency groups in ceph
>>>
>>> Hi,
>>>
>>> I'm working on consistency groups feature in ceph.
>>> My question is about what kind of behavior does cinder expect from
>>> storage backends.
>>> I'm particularly interested in what happens to consistency groups
>>> snapshots when I remove an image from the group:
>>>
>>> Let's imagine I have a consistency group called CG. I have images in
>>> the consistency group:
>>> Im1, Im2, Im3, Im4.
>>> Let's imagine we have snapshots of this consistency group:
>>>
>>> CGSnap1
>>> CGSnap2
>>> CGSnap3
>>>
>>> Snapshots of individual images in a consistency group snapshot I will call
>>> CGSnap2Im1 - Snapshot of image 1 from consistency group snapshot 2.
>>>
>>> Question 1:
>>> If consistency group CG has 4 images: Im1, Im2, Im3, Im4.
>>> Can CGSnap1 have more images than it already has: Im1, Im2, Im3, Im4, Im5.
>>>
>>> Can CGSnap1 have less images than it already has: Im1, Im2, Im3.
>>>
>>> [Xing]  Once a snapshot is taken from a CG, it can no longer be changed.  
>>> It is a point-in-time copy.  CGSnap1 cannot be modified.
>>>
>>> Question 2:
>>> If we remove image2 from the consistency group. Does it mean that
>>> snapshots of this image should be removed from all the CGSnaps.
>>>
>>> Example:
>>> We are removing Im2.
>>> CGSnaps look like this:
>>>
>>> CGSnap1 - CGSnap1Im1, CGSnap1Im2, CGSnap1Im3
>>> CGSnap2 - CGSnap2Im1, CGSnap2Im2, CGSnap2Im3, CGSnap3Im4
>>> CGSnap3 - CGSnap3Im1, CGSnap3Im2, CGSnap3Im3, CGSnap3Im4
>>>
>>> What happens to snapshots: CGSnap1Im2,CGSnap2Im2, CGSnap3Im2? Do we
>>> remove them, do we keep them. Is it important what we do to them at
>>> all?
>>>
>>> [Xing] If your CG contains 4 volumes when you take the snapshot of the CG, 
>>> the resulting CGSnap should be associated with 4 snapshots corresponding to 
>>> the 4 volumes.  If you add more volumes to the CG or remove 

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Andrey Grebennikov
>
>
>
> On Mon, Dec 5, 2016 at 2:31 PM, Andrey Grebennikov <
> agrebenni...@mirantis.com> wrote:
>
>> -Original Message-
>>> From: Andrey Grebennikov 
>>> Reply: OpenStack Development Mailing List (not for usage questions)
>>> 
>>> Date: December 5, 2016 at 12:22:09
>>> To: openstack-dev@lists.openstack.org >> >
>>> Subject:  [openstack-dev] [keystone] Custom ProjectID upon creation
>>>
>>> > Hi keystoners,
>>>
>>> I'm not a keystoner, but I hope you don't mind my replying.
>>
>>
>>> > I'd like to open the discussion about the little feature which I'm
>>> trying
>>> > to push forward for a while but I need some feedbacks/opinions/concerns
>>> > regarding this.
>>> > Here is the review I'm talking about https://review.
>>> > openstack.org/#/c/403866/
>>> >
>>> > What I'm trying to cover is multi-region deployment, which includes
>>> > geo-distributed cloud with independent Keystone in every region.
>>> >
>>> > There is a number of use cases for the change:
>>> > 1. Allow users to re-use their tokens in all regions across the
>>> > distributed cloud. With global authentication (LDAP-backed) and the same
>>> > role names, this is the only missing piece preventing the user from
>>> > switching between regions even within a single Horizon session.
>>>
>>> So this just doesn't sound right to me. You say above that there are
>>> independent Keystone deployments in each region. What token type are
>>> you using that each region could validate a token (assuming project
>>> IDs that are identical across regions) that would do this for
>>> "independent Keystone" deployments?
>>>
>>> Specifically, let's presume you use Keystone's new default of fernet
>>> tokens and you have independently deployed Keystone in each region.
>>> Without synchronizing the keys Keystone uses to generate and validate
>>> fernet tokens, I can't imagine how one token would work across all
>>> regions. This sounds like a lofty goal.
>>>
>>> Further, if Keystone is backed by LDAP, why are there projects being
>>> created in the Keystone database at all? I thought using LDAP as a
>>> backend would avoid that necessity. (Again, I'm not a keystone
>>> developer ;))
>>>
>>> Sorry that I didn't mention this in the beginning.
>> Yes, it is supposed to be a Fernet token installation for sure; UUID will
>> not work by default, and PKI is deprecated. The keys are supposed to be
>> synchronized; without that, multi-site will never work even if I replicate
>> the database.
>> This is what I started with about half a year ago, immediately after
>> receiving the use case. I created 2 clouds, replicated the keys, set up each
>> Keystone to know about both sites as Regions, made the project IDs the same,
>> and voila - with global LDAP for authentication in place I could even switch
>> between these regions within one Horizon session. So that one works.
>>
>> Next, the ability to store projects in LDAP was removed 2 releases ago.
>> From my personal opinion (and in fact not just mine but hundreds of other
>> users as well) this was one of the biggest mistakes.
>> This is one of the major questions from my side to the community - if it
>> was always possible to store project IDs in the external provider, and if
>> it is still possible to do it for the userIDs - what is the point of
>> preventing it now?
>>
>>
> I want to go on record that we (the maintainers of Keystone) and those of
> us who have spent a significant amount of time working through the LDAP
> code came to the community and asked who used this feature (Through many
> channels). There were exactly 2 responses of deployers using LDAP to store
> project IDs. Both of them were open or actively working towards moving
> projects into SQL (or alternatively, developing their own "resource"
> store). The LDAP backend for resources (projects/domains) was poorly
> supported, had limited interest from the community (for improvements) and
> generally was a very large volume of work to bring up to being on-par with
> the SQL implementation.
>
> Without the interest of stakeholders (and with few/no active users), it
> wasn't feasible to continue to maintain it. There is nothing stopping you
> from storing projects externally. You can develop a driver to communicate
> with the backend of your choice. The main reason projects were stored in
> LDAP was due to the "all or nothing" original design of "KeystoneV2 "; you
> could store users in LDAP but you also had to store projects, role
> assignments, etc. Most deployments only wanted Users in LDAP but suffered
> through the rest because it was required (there was no split of User,
> Resource, and Assignment like there is today).
>
> Maybe that was at a time when I hadn't yet actively participated in the
community and raised issues.
I have personally participated in at least 5 projects within the last 2 years
where we stored assignments/projects/roles in LDAP or AD, and All 

Re: [openstack-dev] [cinder] consistency groups in ceph

2016-12-05 Thread Victor Denisov
Hi Xing,

One more question. You mentioned that there is an operation: create
consistency group from a snap shot.
Does it mean that an image can be a member of several consistency groups?

Thanks,
V.

On Tue, Nov 8, 2016 at 6:21 AM, yang, xing  wrote:
> You cannot remove a volume completely if there is still a group snapshot.  
> You can remove the volume from the group but you can’t delete the volume 
> because it still has snapshot dependent on it.  So if you want to completely 
> remove a volume that is in a group, you can delete the group snapshot first 
> which will delete the individual snapshot.  After that you can remove the 
> volume from the group and delete the volume.
>
> More comments inline below.
>
> Thanks,
> Xing
>
>
> 
> From: Victor Denisov [vdeni...@mirantis.com]
> Sent: Tuesday, November 8, 2016 12:04 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Jason Dillaman
> Subject: Re: [openstack-dev] [cinder] consistency groups in ceph
>
> One more question. What is the expected behavior if you remove a
> volume completely?
> [Xing] You cannot remove the volume completely (delete volume won't succeed) 
> if there is still a group snapshot.
>
> Should the group snapshot be removed first? Should the volume snapshot be
> removed from the group snapshot?
> [Xing] Yes, you should delete the group snapshot first and that will delete 
> the volume snapshot as well.
>
> Should we keep the snapshot even if the image doesn't exist anymore?
> [Xing] You cannot delete the image (volume) if there is still a snapshot.
>
> Thanks,
> Victor.
>
> On Tue, Nov 1, 2016 at 7:02 AM, yang, xing  wrote:
>> Hi Victor,
>>
>> Please see my answers inline below.
>>
>> In Newton, we added support for Generic Volume Groups.  See doc below.  CGs 
>> will be migrated to Generic Volume Groups gradually.  Drivers should not 
>> implement CGs any more.  Instead, it can add CG support using Generic Volume 
>> Group interfaces.  I'm working on a dev doc to explain how to do this and 
>> will send an email to the mailing list when I'm done.  The Generic Volume 
>> Group interface is very similar to CG interface, except that the Generic 
>> Volume Group requires an additional Group Type parameter to be created.  
>> Using Group Type, CG can be a special type of Generic Volume Group.  Please 
>> feel free to grab me on Cinder IRC if you have any questions.  My IRC handle 
>> is xyang or xyang1.
>>
>> http://docs.openstack.org/admin-guide/blockstorage-groups.html
>>
>> Thanks,
>> Xing
>>
>>
>> 
>> From: Victor Denisov [vdeni...@mirantis.com]
>> Sent: Monday, October 31, 2016 11:29 PM
>> To: openstack-dev@lists.openstack.org
>> Cc: Jason Dillaman
>> Subject: [openstack-dev] [cinder] consistency groups in ceph
>>
>> Hi,
>>
>> I'm working on consistency groups feature in ceph.
>> My question is about what kind of behavior does cinder expect from
>> storage backends.
>> I'm particularly interested in what happens to consistency groups
>> snapshots when I remove an image from the group:
>>
>> Let's imagine I have a consistency group called CG. I have images in
>> the consistency group:
>> Im1, Im2, Im3, Im4.
>> Let's imagine we have snapshots of this consistency group:
>>
>> CGSnap1
>> CGSnap2
>> CGSnap3
>>
>> Snapshots of individual images in a consistency group snapshot I will call
>> CGSnap2Im1 - Snapshot of image 1 from consistency group snapshot 2.
>>
>> Question 1:
>> If consistency group CG has 4 images: Im1, Im2, Im3, Im4.
>> Can CGSnap1 have more images than it already has: Im1, Im2, Im3, Im4, Im5.
>>
>> Can CGSnap1 have less images than it already has: Im1, Im2, Im3.
>>
>> [Xing]  Once a snapshot is taken from a CG, it can no longer be changed.  It 
>> is a point-in-time copy.  CGSnap1 cannot be modified.
>>
>> Question 2:
>> If we remove image2 from the consistency group. Does it mean that
>> snapshots of this image should be removed from all the CGSnaps.
>>
>> Example:
>> We are removing Im2.
>> CGSnaps look like this:
>>
>> CGSnap1 - CGSnap1Im1, CGSnap1Im2, CGSnap1Im3
>> CGSnap2 - CGSnap2Im1, CGSnap2Im2, CGSnap2Im3, CGSnap3Im4
>> CGSnap3 - CGSnap3Im1, CGSnap3Im2, CGSnap3Im3, CGSnap3Im4
>>
>> What happens to snapshots: CGSnap1Im2,CGSnap2Im2, CGSnap3Im2? Do we
>> remove them, do we keep them. Is it important what we do to them at
>> all?
>>
>> [Xing] If your CG contains 4 volumes when you take the snapshot of the CG, 
>> the resulting CGSnap should be associated with 4 snapshots corresponding to 
>> the 4 volumes.  If you add more volumes to the CG or remove volumes from CG 
>> after CGSnap was taken, it should not affect CGSnap.  It will only affect CG 
>> snapshots that you take in the future.
>>
>> Thanks,
>> Victor.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Boris Bobrov
On 12/06/2016 01:46 AM, Andrey Grebennikov wrote:
> Hi,
> 
> On 12/05/2016 09:20 PM, Andrey Grebennikov wrote:
> > Hi keystoners,
> > I'd like to open the discussion about the little feature which I'm
> > trying to push forward for a while but I need some
> > feedbacks/opinions/concerns regarding this.
> > Here is the review I'm talking
> > about https://review.openstack.org/#/c/403866/
> 
> >  >
> >
> > What I'm trying to cover is multi-region deployment, which includes
> > geo-distributed cloud with independent Keystone in every region.
> >
> > There is a number of use cases for the change:
> > 1. Allow users to re-use their tokens in all regions across the
> > distributed cloud. With global authentication (LDAP-backed) and the same
> > role names, this is the only missing piece preventing the user from
> > switching between regions even within a single Horizon session.
> > 2. Automated tools responsible for statistics collection may access all
> > regions using one token (real customer's usecase)
> 
> What do you understand by "region"?
> 
> I believe I explained what the "region" is in the beginning. Here it is
> actually a generic "keystone region" with everything in it, but Keystone is
> independent in it. Literally all Keystones in all "my" regions are aware
> of each other since they all have a common catalog.
> 
> 
> > 3. Glance replication may happen because the images' parameter "owner"
> > (which is a project) should be consistent across the regions.
> >
> > What I hear all time - "you have to replicate your database" which from
> > the devops/deployment/operations perspective is totally wrong approach.
> > If it is possible to avoid Galera replication over geographically
> > distributed regions - then API calls should be used. Moreover, in case
> > of 2 DCs there will be an issue to decide which region has to take over
> > when they are isolated from each other.
> 
> My comment in the review still stands. With the change we are getting
> ourselves into situation when some tokens *are* verifiable in 2
> regions (project-scoped with identical project ids in both regions),
> some *might be* verifiable in 2 regions (project-scoped with ids about
> which we can't tell anything), and some *are not* verifiable, because
> they are federation- or trust-scoped. A user (human or script) won't be
> able to tell what functionality the token brings without complex
> inspection.
> 
> I commented on this in IRC and will repeat it here - it is always the
> responsibility of the administrator to understand how things work and
> implement them the way he/she wants. If you don't need it, you don't
> set it. The IDs will still be generated.

It is not a general usecase that an administrator creates projects.
There are policies that define who can do that.

> 
> Current design is there is single issuer of tokens and single
> consumer. With the patch there will be single issuer and multiple
> consumers. Which is basically SSO, but done without explicit
> design decision.
> 
> Not true. SSO assumes a central point of authorization. Here it is highly
> distributed.
>  
> 
> Here is what i suggest:
> 
> 1. Stop thinking about 2 keystone installations with non-shared database
> as about "one single keystone". If there are 2 non-replicated databases,
> there are 2 separate keystones. 2 separate keystones have completely
> different sets of tokens. Do not try to share fernet keys between
> separate keystones.
> 
> Even if you replicate the DB, it is not going to work without key
> replication. I repeat my statement once again - if the admin doesn't
> need it, leave the --id field blank; that's it. Nothing is broken.
>  
> 
> 2. Instead of implementing poor man's federation, use real federation.
> Create appropriate projects and create group-based assignments, one
> for each keystone instance. Use these group-based assignments for
> federated users.
> 
> Does federation currently allow me to use remote groups?

What do you mean by remote groups? A group name can be specified in the
SAML assertion and then used, yes.
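
A minimal sketch of what that looks like (attribute, group, and mapping names
here are assumed for illustration):

# create a mapping that drops every federated user into one local group
cat > mapping.json <<'EOF'
[
  {
    "local": [
      {"user": {"name": "{0}"}},
      {"group": {"name": "federated_users", "domain": {"name": "Default"}}}
    ],
    "remote": [{"type": "REMOTE_USER"}]
  }
]
EOF
openstack mapping create --rules mapping.json my_idp_mapping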

> Does it allow me to replicate my projects?

No. Create projects in each keystone:

for ip in $list_of_ips; do
openstack project create --os-url=$ip ...;
done
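
Or, slightly fuller, assuming token/endpoint auth against each keystone
(variable names are placeholders):

for ip in $list_of_ips; do
    openstack --os-token "$ADMIN_TOKEN" --os-url "http://$ip:35357/v3" \
        --os-identity-api-version 3 \
        project create --domain default myproject
done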

> Does it allow me to work when there is no connectivity between keystones?

Yes

> Does it allow me to know whether user
> exists in the federated provider before I create shadow user?

I don't understand the question.

Shadow users don't need to be created. Shadow users are an internal keystone
entity that the operator doesn't deal with at all.

> Does it
> delete assignments/shadow user records when the user is deleted from the
> remote 

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Andrey Grebennikov
>
> Just to clarify, the reason you want custom project IDs is so that when
> you create a project in one region, you can create it again with the same
> ID in another? Isn't that just manual replication though?
>
> What happens when there are projects in only one region? Or if that isn't
> meant to happen, how would you make sure that doesn't happen?
>
Not quite sure I  understand the point...
Yes, this is exactly the reason why I need this feature. Project
creation/replication is supposed to be done via external scripts. Currently
I already have 2 slightly different use cases which definitely require this
feature.

>
> On 06/12/16 07:20, Andrey Grebennikov wrote:
>
> Hi keystoners,
> I'd like to open the discussion about the little feature which I'm trying
> to push forward for a while but I need some feedbacks/opinions/concerns
> regarding this.
> Here is the review I'm talking about https://review.openstack
> .org/#/c/403866/
>
> What I'm trying to cover is multi-region deployment, which includes
> geo-distributed cloud with independent Keystone in every region.
>
> There is a number of use cases for the change:
> 1. Allow users to re-use their tokens in all regions across the
> distributed cloud. With global authentication (LDAP-backed) and the same role
> names, this is the only missing piece preventing the user from switching
> between regions even within a single Horizon session.
> 2. Automated tools responsible for statistics collection may access all
> regions using one token (real customer's usecase)
> 3. Glance replication may happen because the images' parameter "owner"
> (which is a project) should be consistent across the regions.
>
> What I hear all time - "you have to replicate your database" which from
> the devops/deployment/operations perspective is totally wrong approach.
> If it is possible to avoid Galera replication over geographically
> distributed regions - then API calls should be used. Moreover, in case of 2
> DCs there will be an issue to decide which region has to take over when
> they are isolated from each other.
>
> There is a long conversation in the comments of the review, mainly with
> concerns from cores (purely developer's opinions).
>
> Please help me to bring it to life ;)
>
> PS I'm so sorry, forgot to create a topic in the original message
>
> --
> Andrey Grebennikov
> Principal Deployment Engineer
> Mirantis Inc, Austin TX
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Grebennikov
Principal Deployment Engineer
Mirantis Inc, Austin TX
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Adrian Turjak
Just to clarify, the reason you want custom project IDs is so that when
you create a project in one region, you can create it again with the
same ID in another? Isn't that just manual replication though?

What happens when there are projects in only one region? Or if that
isn't meant to happen, how would you make sure that doesn't happen?


On 06/12/16 07:20, Andrey Grebennikov wrote:
> Hi keystoners,
> I'd like to open the discussion about the little feature which I'm
> trying to push forward for a while but I need some
> feedbacks/opinions/concerns regarding this.
> Here is the review I'm talking
> about https://review.openstack.org/#/c/403866/
> 
>
> What I'm trying to cover is multi-region deployment, which includes
> geo-distributed cloud with independent Keystone in every region.
>
> There is a number of use cases for the change:
> 1. Allow users to re-use their tokens in all regions across the
> distributed cloud. With global authentication (LDAP backed) and same
> roles names this is only one missing piece which prevents the user to
> switch between regions even withing single Horizon session.
> 2. Automated tools responsible for statistics collection may access
> all regions using one token (real customer's usecase)
> 3. Glance replication may happen because the images' parameter "owner"
> (which is a project) should be consistent across the regions.
>
> What I hear all time - "you have to replicate your database" which
> from the devops/deployment/operations perspective is totally wrong
> approach.
> If it is possible to avoid Galera replication over geographically
> distributed regions - then API calls should be used. Moreover, in case
> of 2 DCs there will be an issue to decide which region has to take
> over when they are isolated from each other.
>
> There is a long conversation in the comments of the review, mainly
> with concerns from cores (purely developer's opinions).
>
> Please help me to bring it to life ;)
>
> PS I'm so sorry, forgot to create a topic in the original message
>
> -- 
> Andrey Grebennikov
> Principal Deployment Engineer
> Mirantis Inc, Austin TX
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Morgan Fainberg
On Mon, Dec 5, 2016 at 2:31 PM, Andrey Grebennikov <
agrebenni...@mirantis.com> wrote:

> -Original Message-
>> From: Andrey Grebennikov 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: December 5, 2016 at 12:22:09
>> To: openstack-dev@lists.openstack.org 
>> Subject:  [openstack-dev] [keystone] Custom ProjectID upon creation
>>
>> > Hi keystoners,
>>
>> I'm not a keystoner, but I hope youu don't mind my replying.
>
>
>> > I'd like to open the discussion about the little feature which I'm
>> trying
>> > to push forward for a while but I need some feedbacks/opinions/concerns
>> > regarding this.
>> > Here is the review I'm talking about https://review.
>> > openstack.org/#/c/403866/
>> >
>> > What I'm trying to cover is multi-region deployment, which includes
>> > geo-distributed cloud with independent Keystone in every region.
>> >
>> > There is a number of use cases for the change:
>> > 1. Allow users to re-use their tokens in all regions across the
>> distributed
>> > cloud. With global authentication (LDAP backed) and same roles names
>> this
>> > is only one missing piece which prevents the user to switch between
>> regions
>> > even withing single Horizon session.
>>
>> So this just doesn't sound right to me. You say above that there are
>> independent Keystone deployments in each region. What token type are
>> you using that each region could validate a token (assuming project
>> IDs that are identical across regions) that would do this for
>> "independent Keystone" deployments?
>>
>> Specifically, let's presume you use Keystone's new default of fernet
>> tokens and you have independently deployed Keystone in each region.
>> Without synchronizing the keys Keystone uses to generate and validate
>> fernet tokens, I can't imagine how one token would work across all
>> regions. This sounds like a lofty goal.
>>
>> Further, if Keystone is backed by LDAP, why are there projects being
>> created in the Keystone database at all? I thought using LDAP as a
>> backend would avoid that necessity. (Again, I'm not a keystone
>> developer ;))
>>
>> Sorry that I didn't mention this in the beginning.
> Yes, it is supposed to be fernet tokens installation for sure, UUID will
> not work by default, PKI is deprecated. Keys are supposed to be
> synchronized. Without it multi-site will never work even if I replicate the
> database.
> This is what I started from about half a year ago immediately after
> receiving the usecase. I created 2 clouds, replicated the key, set up each
> Keystone to know about both sites as Regions, made project IDs same, and
> voilla - having global LDAP for authentication in place I could even switch
> between these regions within one Horizon session. So that one works.
>
> Next, the ability to store projects in LDAP was removed 2 releases ago.
> From my personal opinion (and in fact not just mine but hundreds of other
> users as well) this was one of the biggest mistakes.
> This is one of the major questions from my side to the community - if it
> was always possible to store project IDs in the external provider, and if
> it is still possible to do it for the userIDs - what is the point of
> preventing it now?
>
>
I want to go on record that we (the maintainers of Keystone) and those of
us who have spent a significant amount of time working through the LDAP
code came to the community and asked who used this feature (through many
channels). There were exactly 2 responses from deployers using LDAP to store
project IDs. Both of them were open to, or actively working towards, moving
projects into SQL (or alternatively, developing their own "resource"
store). The LDAP backend for resources (projects/domains) was poorly
supported, had limited interest from the community (for improvements) and
generally required a very large volume of work to bring up to par with
the SQL implementation.

Without the interest of stakeholders (and with few/no active users), it
wasn't feasible to continue to maintain it. There is nothing stopping you
from storing projects externally. You can develop a driver to communicate
with the backend of your choice. The main reason projects were stored in
LDAP was the "all or nothing" original design of Keystone v2; you
could store users in LDAP but you also had to store projects, role
assignments, etc. Most deployments only wanted users in LDAP but suffered
through the rest because it was required (there was no split of User,
Resource, and Assignment like there is today).

> 2. Automated tools responsible for statistics collection may access all
>> > regions using one token (real customer's usecase)
>>
>> Why can't the automated tools be updated to talk to each Keystone and
>> get a token while talking to that region?
>>
>>
> They may. Depending on what is currently being used in production. It is
> not always so easy to completely 

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Andrey Grebennikov
>
> Hi,
>
> On 12/05/2016 09:20 PM, Andrey Grebennikov wrote:
> > Hi keystoners,
> > I'd like to open the discussion about the little feature which I'm
> > trying to push forward for a while but I need some
> > feedbacks/opinions/concerns regarding this.
> > Here is the review I'm talking
> > about https://review.openstack.org/#/c/403866/
> > 
> >
> > What I'm trying to cover is multi-region deployment, which includes
> > geo-distributed cloud with independent Keystone in every region.
> >
> > There is a number of use cases for the change:
> > 1. Allow users to re-use their tokens in all regions across the
> > distributed cloud. With global authentication (LDAP backed) and same
> > roles names this is only one missing piece which prevents the user to
> > switch between regions even withing single Horizon session.
> > 2. Automated tools responsible for statistics collection may access all
> > regions using one token (real customer's usecase)
>
> What do you understand by "region"?
>
I believe I explained what the "region" is in the beginning. Here it is
essentially a generic "keystone region" with all the usual services, but
Keystone is independent in each one. All Keystones in all "my" regions are
aware of each other, since they all share a common catalog.

>
> > 3. Glance replication may happen because the images' parameter "owner"
> > (which is a project) should be consistent across the regions.
> >
> > What I hear all time - "you have to replicate your database" which from
> > the devops/deployment/operations perspective is totally wrong approach.
> > If it is possible to avoid Galera replication over geographically
> > distributed regions - then API calls should be used. Moreover, in case
> > of 2 DCs there will be an issue to decide which region has to take over
> > when they are isolated from each other.
>
> My comment in the review still stands. With the change we are getting
> ourselves into situation when some tokens *are* verifiable in 2
> regions (project-scoped with identical project ids in both regions),
> some *might be* verifiable in 2 regions (project-scoped with ids about
> which we can't tell anything), and some *are not* verifiable, because
> they are federation- or trust-scoped. A user (human or script) won't be
> able to tell what functionality the token brings without complex
> inspection.
>
I commented on this in IRC and will repeat it here - it is always the
responsibility of the administrator to understand how things work and
implement them the way he/she wants. If you don't need it, you don't set
it. The IDs will still be generated.


> Current design is there is single issuer of tokens and single
> consumer. With the patch there will be single issuer and multiple
> consumers. Which is basically SSO, but done without explicit
> design decision.
>
Not true. SSO assumes a central point of authorization. Here it is highly
distributed.


> Here is what i suggest:
>
> 1. Stop thinking about 2 keystone installations with non-shared database
> as about "one single keystone". If there are 2 non-replicated databases,
> there are 2 separate keystones. 2 separate keystones have completely
> different sets of tokens. Do not try to share fernet keys between
> separate keystones.
>
Even if you replicate the DB it is not going to work without key
replication. I repeat my statement once again - if the admin doesn't need
it, leave the --id field blank; that's it. Nothing is broken.


> 2. Instead of implementing poor man's federation, use real federation.
> Create appropriate projects and create group-based assignments, one
> for each keystone instance. Use these group-based assignments for
> federated users.
>
Does federation currently allow me to use remote groups? Does it allow me
to replicate my projects? Does it allow me to work when there is no
connectivity between keystones? Does it allow me to know whether a user
exists in the federated provider before I create a shadow user? Does it
delete assignments/shadow user records when the user is deleted from the
remote provider? I can keep going forever. And yes, I haven't even mentioned
CLI support for federation.
Feel free to add.

Thanks!

> There is a long conversation in the comments of the review, mainly with
> > concerns from cores (purely developer's opinions).
> >
> > Please help me to bring it to life ;)
> >
> > PS I'm so sorry, forgot to create a topic in the original message
> >
> > --
> > Andrey Grebennikov
> > Principal Deployment Engineer
> > Mirantis Inc, Austin TX
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Andrey Grebennikov
>
> -Original Message-
> From: Andrey Grebennikov 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: December 5, 2016 at 12:22:09
> To: openstack-dev@lists.openstack.org 
> Subject:  [openstack-dev] [keystone] Custom ProjectID upon creation
>
> > Hi keystoners,
>
> I'm not a keystoner, but I hope youu don't mind my replying.


> > I'd like to open the discussion about the little feature which I'm trying
> > to push forward for a while but I need some feedbacks/opinions/concerns
> > regarding this.
> > Here is the review I'm talking about https://review.
> > openstack.org/#/c/403866/
> >
> > What I'm trying to cover is multi-region deployment, which includes
> > geo-distributed cloud with independent Keystone in every region.
> >
> > There is a number of use cases for the change:
> > 1. Allow users to re-use their tokens in all regions across the
> distributed
> > cloud. With global authentication (LDAP backed) and same roles names this
> > is only one missing piece which prevents the user to switch between
> regions
> > even withing single Horizon session.
>
> So this just doesn't sound right to me. You say above that there are
> independent Keystone deployments in each region. What token type are
> you using that each region could validate a token (assuming project
> IDs that are identical across regions) that would do this for
> "independent Keystone" deployments?
>
> Specifically, let's presume you use Keystone's new default of fernet
> tokens and you have independently deployed Keystone in each region.
> Without synchronizing the keys Keystone uses to generate and validate
> fernet tokens, I can't imagine how one token would work across all
> regions. This sounds like a lofty goal.
>
> Further, if Keystone is backed by LDAP, why are there projects being
> created in the Keystone database at all? I thought using LDAP as a
> backend would avoid that necessity. (Again, I'm not a keystone
> developer ;))
>
Sorry that I didn't mention this in the beginning.
Yes, it is supposed to be a fernet token installation for sure; UUID will
not work by default, and PKI is deprecated. The keys are supposed to be
synchronized. Without that, multi-site will never work even if I replicate the
database.
This is what I started with about half a year ago, immediately after
receiving the use case. I created 2 clouds, replicated the keys, set up each
Keystone to know about both sites as regions, made the project IDs the same, and
voilà - with global LDAP for authentication in place I could even switch
between these regions within one Horizon session. So that one works.
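
(As a small aside on why the key replication is the critical piece: fernet
validation is purely symmetric-key based, so a token minted in one region
simply cannot be validated in another unless both hold the same keys. A
minimal sketch using the third-party cryptography library directly, not
keystone itself:)

    from cryptography.fernet import Fernet, InvalidToken

    # Each "region" generates its own key repository.
    region_a = Fernet(Fernet.generate_key())
    region_b = Fernet(Fernet.generate_key())

    token = region_a.encrypt(b"stand-in for a keystone token payload")

    region_a.decrypt(token)      # fine - same keys that minted the token
    try:
        region_b.decrypt(token)  # different keys - validation fails
    except InvalidToken:
        print("token rejected: fernet keys differ between regions")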

Next, the ability to store projects in LDAP was removed 2 releases ago.
In my personal opinion (and in fact not just mine but that of hundreds of
other users as well) this was one of the biggest mistakes.
This is one of the major questions from my side to the community - if it
was always possible to store project IDs in an external provider, and if
it is still possible to do it for user IDs - what is the point of
preventing it now?

> 2. Automated tools responsible for statistics collection may access all
> > regions using one token (real customer's usecase)
>
> Why can't the automated tools be updated to talk to each Keystone and
> get a token while talking to that region?
>
>
They may, depending on what is currently being used in production. It is
not always so easy to completely refactor external tooling, especially if
it is proprietary or semi-proprietary.


> > 3. Glance replication may happen because the images' parameter "owner"
> > (which is a project) should be consistent across the regions.
>
> So, Glance replication doesn't even guarantee identical image IDs
> across regions. If Glance's replication isn't working because the
> owner project is being synchronized directly, that sounds like a bug
> in Glance, not Keystone.
>
Not sure I'm following. In essence it is "replication", so it should be the
same. When Glance brings the image to another region (from the
particular project), it expects this project to exist.

>
> > What I hear all time - "you have to replicate your database" which from
> the
> > devops/deployment/operations perspective is totally wrong approach.
>
> DevOps is a movement [1]. Replicating the database is not pleasant,
> no, but it is your better option. I'll repeat, though, why is your
> LDAP backed Keystone so reliant on Keystone's DB?
>
See above - there is no other way to do it. Replication of the DB is not only a bad
way to deal with it - there is no guarantee that broken tables from one
region will stay only in their own region. You have to start dealing with
decisions like "which region is supposed to be the main one", because there
is no "main" one. You'll start getting your regions frozen when they lose
connectivity to each other, etc. (this is a 2-region architecture and
there is no quorum).

> If it is 

Re: [openstack-dev] [All] Finish test job transition to Ubuntu Xenial

2016-12-05 Thread Amrith Kumar
Clark, for Trove, I'm in the process of finishing this and there is a change
[1] that is currently up for review (I have to incorporate changes to
reflect Andreas' comments).

If you find something amiss with the Trove jobs (trove, trove client,
trove-integration or dashboard), please let me know and I'll fix it up
pronto.

Thanks,

-amrith

[1] https://review.openstack.org/#/c/405018/

> -Original Message-
> From: Clark Boylan [mailto:cboy...@sapwetik.org]
> Sent: Monday, December 5, 2016 4:53 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [All] Finish test job transition to Ubuntu
Xenial
> 
> On Mon, Nov 7, 2016, at 01:48 PM, Clark Boylan wrote:
> > Hello everyone,
> >
> > The infra team would really like to get the Ubuntu Xenial for testing
> > transition completed early this cycle. We are planning to switch any
> > jobs that remain on Ubuntu Trusty but should be on Ubuntu Xenial on
> > December 6, 2016. That gives us about a month from today to more
> > gracefully migrate jobs while still getting it done early enough in
> > the cycle to fix any issues and put it behind us. Would be great for
> > project teams to test if their jobs run on Xenial and propose updates
> > to openstack-infra/project-config as necessary to switch to Ubuntu
Xenial.
> 
> As a heads up the Infra team has begun pushing changes to start this work.
> The vast majority of them likely won't start merging until tomorrow,
however
> you may start to see changes going in particularly for experimental and
non
> voting jobs.
> 
> Thank you to all the teams that got ahead of this and worked to make the
> transition earlier.
> 
> One thing that pops out at me as we go through this work is that we have a
> lot of experimental and non voting jobs that need to be updated.
> Considering that experimental jobs in particular and often non voting jobs
> are supposed to be works in progress to get to voting, does the lack of
> interest in updating these from the projects themselves imply the jobs are
> dead and not needed? Maybe we should be doing cleanup of old forgotten
> experimental and non voting jobs that aren't being used?
> 
> Thank you,
> Clark
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Finish test job transition to Ubuntu Xenial

2016-12-05 Thread Clark Boylan
On Mon, Nov 7, 2016, at 01:48 PM, Clark Boylan wrote:
> Hello everyone,
> 
> The infra team would really like to get the Ubuntu Xenial for testing
> transition completed early this cycle. We are planning to switch any
> jobs that remain on Ubuntu Trusty but should be on Ubuntu Xenial on
> December 6, 2016. That gives us about a month from today to more
> gracefully migrate jobs while still getting it done early enough in the
> cycle to fix any issues and put it behind us. Would be great for project
> teams to test if their jobs run on Xenial and propose updates to
> openstack-infra/project-config as necessary to switch to Ubuntu Xenial.

As a heads up the Infra team has begun pushing changes to start this
work. The vast majority of them likely won't start merging until
tomorrow, however you may start to see changes going in particularly for
experimental and non voting jobs.

Thank you to all the teams that got ahead of this and worked to make the
transition earlier.

One thing that pops out at me as we go through this work is that we have
a lot of experimental and non voting jobs that need to be updated.
Considering that experimental jobs in particular and often non voting
jobs are supposed to be works in progress to get to voting, does the
lack of interest in updating these from the projects themselves imply
the jobs are dead and not needed? Maybe we should be doing cleanup of
old forgotten experimental and non voting jobs that aren't being used?

Thank you,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread David Stanek
On 05-Dec 15:14, Lance Bragstad wrote:
> I put myself in Boris' camp on this one. This can open up the opportunity
> for negative user-experience, purely based on where I authenticate and
> which token I happen to authenticate with. A token would no longer be
> something I can assume to be properly validated against any node in my
> deployment. Now, when I receive a 401 Unauthorized, is it because the token
> is actually invalid, did I use the wrong endpoint, or did I use a token
> with the wrong scope for the endpoint I wanted to interact with?
> 

I agree. I think having different behavior for tokens based on scope
will not only lead to bad user experiences, but will also lead to those
rules being baked into the client. Someone will propose that as soon as they
get confused by a token 401ing unexpectedly.

-- 
david stanek
web: http://www.dstanek.com
blog: http://www.traceback.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] On allowing null as a parameter default

2016-12-05 Thread Zane Bitter
Any parameter in a Heat template that has a default other than None is 
considered optional, so the user is not required to pass a value. 
Otherwise, however, the parameter is required and creating the stack 
will fail pretty immediately if the user does not pass a value.


I've noticed that this presents a giant pain, particularly when trying 
to create what we used to call provider templates. If you do e.g.


`openstack orchestration resource type show -f yaml --template-type hot 
OS::Nova::Server`


then you get back a template with dozens of parameters, most of which 
don't have defaults (because the corresponding resource properties don't 
have defaults) and are therefore not optional. I consider that a bug, 
because in many cases the corresponding resource properties are *not* 
required (properties have a "required" flag that is independent from the 
"default" value).


The result is that it's effectively impossible for our users to build 
re-usable child templates; they have to know which properties the parent 
template does and does not want to specify values for.


Using a default that corresponds to the parameter type ("", [], {}, 0, 
false) doesn't work, I don't think, because there are properties that 
treat None differently to e.g. an empty dict.


The obvious alternative is to use a different sentinel value, other than 
None, for determining whether a parameter default is provided and then 
allowing users to pass null as default. We could then adjust the 
properties code to treat this sentinel as if no value were specified for 
the property.
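
(A rough sketch of the sentinel idea in Python - the names here are made
up for illustration, this is not actual Heat code:)

    # A unique object that can never collide with a user-supplied value,
    # including an explicit "default: null" in the template.
    _NO_DEFAULT = object()

    class ParamSchema(object):
        def __init__(self, default=_NO_DEFAULT):
            self.default = default

        def has_default(self):
            # True even when the template author wrote "default: null"
            return self.default is not _NO_DEFAULT

        def required(self):
            return not self.has_default()

    assert ParamSchema(default=None).required() is False  # "default: null"
    assert ParamSchema().required() is True                # no default given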


The difficulty of this is knowing how to handle other places that 
get_param might be used, especially in arguments to other functions. I 
guess we have that problem now in some ways, because get_attr often 
returns None up to the point where the resource it refers to is created. 
I hoped that we might get away from that with the placeholders spec 
though :/


Not for nothing did C.A.R. Hoare call allowing null references "the 
billion dollar mistake". OTOH I don't recall him suggesting an 
alternative. Anybody got any ideas?


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Lance Bragstad
I put myself in Boris' camp on this one. This can open up the opportunity
for a negative user experience, purely based on where I authenticate and
which token I happen to authenticate with. A token would no longer be
something I can assume to be properly validated against any node in my
deployment. Now, when I receive a 401 Unauthorized, is it because the token
is actually invalid, did I use the wrong endpoint, or did I use a token
with the wrong scope for the endpoint I wanted to interact with?

On Mon, Dec 5, 2016 at 2:43 PM, Ian Cordasco  wrote:

> -Original Message-
> From: Andrey Grebennikov 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: December 5, 2016 at 12:22:09
> To: openstack-dev@lists.openstack.org 
> Subject:  [openstack-dev] [keystone] Custom ProjectID upon creation
>
> > Hi keystoners,
>
> I'm not a keystoner, but I hope youu don't mind my replying.
>
> > I'd like to open the discussion about the little feature which I'm trying
> > to push forward for a while but I need some feedbacks/opinions/concerns
> > regarding this.
> > Here is the review I'm talking about https://review.
> > openstack.org/#/c/403866/
> >
> > What I'm trying to cover is multi-region deployment, which includes
> > geo-distributed cloud with independent Keystone in every region.
> >
> > There is a number of use cases for the change:
> > 1. Allow users to re-use their tokens in all regions across the
> distributed
> > cloud. With global authentication (LDAP backed) and same roles names this
> > is only one missing piece which prevents the user to switch between
> regions
> > even withing single Horizon session.
>
> So this just doesn't sound right to me. You say above that there are
> independent Keystone deployments in each region. What token type are
> you using that each region could validate a token (assuming project
> IDs that are identical across regions) that would do this for
> "independent Keystone" deployments?
>
> Specifically, let's presume you use Keystone's new default of fernet
> tokens and you have independently deployed Keystone in each region.
> Without synchronizing the keys Keystone uses to generate and validate
> fernet tokens, I can't imagine how one token would work across all
> regions. This sounds like a lofty goal.
>
> Further, if Keystone is backed by LDAP, why are there projects being
> created in the Keystone database at all? I thought using LDAP as a
> backend would avoid that necessity. (Again, I'm not a keystone
> developer ;))
>
> > 2. Automated tools responsible for statistics collection may access all
> > regions using one token (real customer's usecase)
>
> Why can't the automated tools be updated to talk to each Keystone and
> get a token while talking to that region?
>
> > 3. Glance replication may happen because the images' parameter "owner"
> > (which is a project) should be consistent across the regions.
>
> So, Glance replication doesn't even guarantee identical image IDs
> across regions. If Glance's replication isn't working because the
> owner project is being synchronized directly, that sounds like a bug
> in Glance, not Keystone.
>
> > What I hear all time - "you have to replicate your database" which from
> the
> > devops/deployment/operations perspective is totally wrong approach.
>
> DevOps is a movement [1]. Replicating the database is not pleasant,
> no, but it is your better option. I'll repeat, though, why is your
> LDAP backed Keystone so reliant on Keystone's DB?
>
> > If it is possible to avoid Galera replication over geographically
> > distributed regions - then API calls should be used. Moreover, in case
> of 2
> > DCs there will be an issue to decide which region has to take over when
> > they are isolated from each other.
> >
> > There is a long conversation in the comments of the review, mainly with
> > concerns from cores (purely developer's opinions).
>
> You say that as if developer opinions (the folks who have to
> understand and maintain your desired approach) is invalid or
> worthless. That's not the case.
>
> > Please help me to bring it to life ;)
>
> Please give us more detail and convince us to help you. :)
>
> [1]: https://theagileadmin.com/what-is-devops/
>
> --
> Ian Cordasco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Dev Digest November 26 to December 2

2016-12-05 Thread Jeremy Stanley
On 2016-12-05 20:45:08 + (+), Kendall Nelson wrote:
[...]
> Allowing Teams Based on Vendor-specific Drivers [10]
>-
>Option 1: https://review.openstack.org/403834 - Proprietary driver dev
>is unlevel
>-
>Option 2: https://review.openstack.org/403836 - Driver development can
>be level
>-
>Option 3: https://review.openstack.org/403839 - Level playing fields,
>except drivers
>-
>Option 4:  https://review.openstack.org/403829
> - establish a new "driver team"
>concept
>-
>   Thierry prefers this option
>   -
>Option 5: https://review.openstack.org/403830 - add resolution requiring
>teams to accept driver contributions
>-
>   One of Flavio’s preferred options
>   -
>Option 6: https://review.openstack.org/403826 - add a resolution
>allowing teams based on vendor-specific drivers
>-
>   Flavio’s other preferred option
[...]

Worth noting, these map to options 1, 2, 4, 5, 6 and 7 from Doug's
summary. His option #3 is missing above, which was:

https://review.openstack.org/403838 - Stop requiring a level
playing field

That probably explains the numbering skew between the two summaries.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Dev Digest November 26 to December 2

2016-12-05 Thread Kendall Nelson
Hello All!

HTML Version:
https://docs.google.com/document/d/14bijOMZOWBISKovCq-p0-TdvyrVZHGp8p-ybVYXaqAA/edit

Updates

   - Nova Resource Providers update [2]
   - Nova blueprints update [16]
   - OpenStack-Ansible deploy guide live! [6]


The Future of OpenStack Needs You [1]

   - Need more mentors to help run Upstream Trainings at the summits
   - Interested in doing an abridged version at smaller, more local events
   - Contact ildikov or diablo_rojo on IRC if interested


New project: Nimble [3]

   - Interesting chat about bare metal management
   - The project name is likely to change


Community goals for Pike [4]

   - As Ocata is a short cycle it’s time to think about goals for Pike [7]
   - Or give feedback on what’s already started [8]


Exposing project team's metadata in README files (Cont.) [9]

   - Amrith agrees with the value of Flavio’s proposal that a short summary
     would be good for new contributors
   - Will need a small API that will generate the list of badges
      - Done - as a part of governance
      - Just a graphical representation of what’s in the governance repo
      - Do what you want with the badges in README files
   - Patches have been pushed to the projects initiating this change


Allowing Teams Based on Vendor-specific Drivers [10]

   - Option 1: https://review.openstack.org/403834 - Proprietary driver dev
     is unlevel
   - Option 2: https://review.openstack.org/403836 - Driver development can
     be level
   - Option 3: https://review.openstack.org/403839 - Level playing fields,
     except drivers
   - Option 4: https://review.openstack.org/403829 - establish a new
     "driver team" concept
      - Thierry prefers this option
   - Option 5: https://review.openstack.org/403830 - add resolution requiring
     teams to accept driver contributions
      - One of Flavio’s preferred options
   - Option 6: https://review.openstack.org/403826 - add a resolution
     allowing teams based on vendor-specific drivers
      - Flavio’s other preferred option


Cirros Images to Change Default Password [11]

   - New password: gocubsgo
   - Not ‘cubswin:)’ anymore


Destructive/HA/Fail-over scenarios

   - Discussion started about adding end-user focused test suites to test
     OpenStack clusters beyond what’s already available in Tempest [12]
   - Feedback is needed from users and operators on what preferred scenarios
     they would like to see in the test suite [5]
   - You can read more in the spec for High Availability testing [13] and the
     user story describing destructive testing [14] which are both on review


Events discussion [15]

   - Efforts to remove duplicated functionality from OpenStack in the sense
     of providing event information to end-users (Zaqar, Aodh)
   - It is also pointed out that the information in events can be sensitive
     which needs to be handled carefully


[1] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108084.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107982.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107961.html
[4] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108167.html
[5] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108062.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108200.html
[7] https://etherpad.openstack.org/p/community-goals
[8] https://etherpad.openstack.org/p/community-goals-ocata-feedback
[9] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107966.html
[10] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108074.html
[11] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108118.html
[12] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108062.html
[13] https://review.openstack.org/#/c/399618/
[14] https://review.openstack.org/#/c/396142
[15] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108070.html
[16] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108089.html

Enjoy!

- Kendall Nelson (diablo_rojo)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Ian Cordasco
-Original Message-
From: Andrey Grebennikov 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: December 5, 2016 at 12:22:09
To: openstack-dev@lists.openstack.org 
Subject:  [openstack-dev] [keystone] Custom ProjectID upon creation

> Hi keystoners,

I'm not a keystoner, but I hope you don't mind my replying.

> I'd like to open the discussion about the little feature which I'm trying
> to push forward for a while but I need some feedbacks/opinions/concerns
> regarding this.
> Here is the review I'm talking about https://review.
> openstack.org/#/c/403866/
>
> What I'm trying to cover is multi-region deployment, which includes
> geo-distributed cloud with independent Keystone in every region.
>
> There is a number of use cases for the change:
> 1. Allow users to re-use their tokens in all regions across the distributed
> cloud. With global authentication (LDAP backed) and same roles names this
> is only one missing piece which prevents the user to switch between regions
> even withing single Horizon session.

So this just doesn't sound right to me. You say above that there are
independent Keystone deployments in each region. What token type are
you using such that each region could validate a token (assuming project
IDs that are identical across regions) across
"independent Keystone" deployments?

Specifically, let's presume you use Keystone's new default of fernet
tokens and you have independently deployed Keystone in each region.
Without synchronizing the keys Keystone uses to generate and validate
fernet tokens, I can't imagine how one token would work across all
regions. This sounds like a lofty goal.

Further, if Keystone is backed by LDAP, why are there projects being
created in the Keystone database at all? I thought using LDAP as a
backend would avoid that necessity. (Again, I'm not a keystone
developer ;))

> 2. Automated tools responsible for statistics collection may access all
> regions using one token (real customer's usecase)

Why can't the automated tools be updated to talk to each Keystone and
get a token while talking to that region?

> 3. Glance replication may happen because the images' parameter "owner"
> (which is a project) should be consistent across the regions.

So, Glance replication doesn't even guarantee identical image IDs
across regions. If Glance's replication isn't working because the
owner project is being synchronized directly, that sounds like a bug
in Glance, not Keystone.

> What I hear all time - "you have to replicate your database" which from the
> devops/deployment/operations perspective is totally wrong approach.

DevOps is a movement [1]. Replicating the database is not pleasant,
no, but it is your better option. I'll repeat, though, why is your
LDAP backed Keystone so reliant on Keystone's DB?

> If it is possible to avoid Galera replication over geographically
> distributed regions - then API calls should be used. Moreover, in case of 2
> DCs there will be an issue to decide which region has to take over when
> they are isolated from each other.
>
> There is a long conversation in the comments of the review, mainly with
> concerns from cores (purely developer's opinions).

You say that as if the opinions of developers (the folks who have to
understand and maintain your desired approach) are invalid or
worthless. That's not the case.

> Please help me to bring it to life ;)

Please give us more detail and convince us to help you. :)

[1]: https://theagileadmin.com/what-is-devops/

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement][ironic] Progress on custom resource classes

2016-12-05 Thread Jim Rollenhagen
On Fri, Dec 2, 2016 at 12:10 PM, Jay Pipes  wrote:

> Ironic colleagues, heads up, please read the below fully! I'd like your
> feedback on a couple outstanding questions.
>
> tl;dr
> -
>
> Work for custom resource classes has been proceeding well this cycle, and
> we're at a point where reviews from the Ironic community and functional
> testing of a series of patches would be extremely helpful.
>
> https://review.openstack.org/#/q/topic:bp/custom-resource-cl
> asses+status:open


\o/ will do sir.


>
>
> History
> ---
>
> As a brief reminder, in Newton, the Ironic community added a
> "resource_class" attribute to the primary Node object returned by the GET
> /nodes/{uuid} API call. This resource class attribute represents the
> "hardware profile" (for lack of a better term) of the Ironic baremetal node.
>
> In Nova-land, we would like to stop tracking Ironic baremetal nodes as
> collections of vCPU, RAM, and disk space -- because an Ironic baremetal
> node is consumed atomically, not piecemeal like a hypervisor node is.
> We'd like to have the scheduler search for an appropriate Ironic baremetal
> node using a simplified search that simply looks for node that has a
> particular hardware profile [1] instead of searching for nodes that have a
> certain amount of VCPU, RAM, and disk space.
>
> In addition to the scheduling and "boot request" alignment issues, we want
> to fix the reporting and account of resources in an OpenStack deployment
> containing Ironic. Currently, Nova reports an aggregate amount of CPU, RAM
> and disk space but doesn't understand that, when Ironic is in the mix, that
> a significant chunk of that CPU, RAM and disk isn't "targetable" for
> virtual machines. We would much prefer to have resource reporting look like:
>
>  48 vCPU total, 14 used
>  204800 MB RAM total, 10240 used
>  1340 GB disk total, 100 used
>  250 baremetal profile "A" total, 120 used
>  120 baremetal profile "B" total, 16 used
>
> instead of mixing all the resources together.
>
> Need review and functional testing on a few things
> --
>
> Now that the custom resource classes REST API endpoint is established [2]
> in the placement REST API, we are figuring out an appropriate way of
> migrating the existing inventory and allocation records for Ironic
> baremetal nodes from the "old-style" way of storing inventory for VCPU,
> MEMORY_MB and DISK_GB resources towards the "new-style" way of storing a
> single inventory record of amount 1 for the Ironic node's "resource_class"
> attribute.
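
(A rough, schematic sketch of what that migration amounts to for a single
Ironic node - the dict shapes below are illustrative only, not the exact
placement API payloads:)

    # Old-style: the baremetal node is reported piecemeal, as if it were a
    # hypervisor, so its resources get mixed into the virtual-machine pools.
    old_style_inventory = {
        "VCPU": {"total": 8, "max_unit": 8},
        "MEMORY_MB": {"total": 65536, "max_unit": 65536},
        "DISK_GB": {"total": 1024, "max_unit": 1024},
    }

    # New-style: a single unit of a custom resource class matching the
    # node's resource_class attribute; the node is consumed atomically.
    new_style_inventory = {
        "CUSTOM_BAREMETAL_PROFILE_A": {"total": 1, "max_unit": 1},
    }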
>
> The patch that does this online data migration (from within the
> nova-compute resource tracker) is here:
>
> https://review.openstack.org/#/c/404472/
>
> I'd really like to get some Ironic contributor eyeballs on that patch and
> provide me feedback on whether the logic in the
> _cleanup_ironic_legacy_allocations() method is sound.
>
> There are still a couple things that need to be worked out:
>
> 1) Should the resource tracker auto-create custom resource classes in the
> placement REST API when it sees an Ironic node's resource_class attribute
> set to a non-NULL value and there is no record of such a resource class in
> the `GET /resource-classes` placement API call? My gut reaction to this is
> "yes, let's just do it", but I want to check with operators and Ironic devs
> first. The alternative is to ignore errors about "no such resource class
> exists", log a warning, and wait for an administrator to create the custom
> resource classes that match the distinct Ironic node resource classes that
> may exist in the deployment.
>

I tend to agree with Matt, there's no need to make operators do this when
they've already explicitly configured it on the ironic side.


>
> 2) How we are going to modify the Nova baremetal flavors to specify that
> the flavor requires one resource where the resource is of a set of custom
> resource classes? For example, let's say I'm have an Ironic installation
> with 10 different Ironic node hardware profiles. I've set all my Ironic
> node's resource_class attributes to match one of those hardware profiles. I
> now need to set up a Nova flavor that requests one of those ten hardware
> profiles. How do I do that? One solution might be to have a hacky flavor
> extra_spec called "ironic_resource_classes=CUSTOM_METAL_A,CUSTOM_METAL_B..."
> or similar. When we construct the request_spec object that gets sent to the
> scheduler (and later the placement service), we could look for that
> extra_spec and construct a special request to the placement service that
> says "find me a resource provider that has a capacity of 1 for any of the
> following resource classes...". The flavor extra_specs thing is a total
> hack, admittedly, but flavors are the current mess that Nova has to specify
> requested resources and we need to work within that mess unfortunately...
>

As it was explained to me at the beginning of all this, flavors were going

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Boris Bobrov
Hi,

On 12/05/2016 09:20 PM, Andrey Grebennikov wrote:
> Hi keystoners,
> I'd like to open the discussion about the little feature which I'm
> trying to push forward for a while but I need some
> feedbacks/opinions/concerns regarding this.
> Here is the review I'm talking
> about https://review.openstack.org/#/c/403866/
> 
> 
> What I'm trying to cover is multi-region deployment, which includes
> geo-distributed cloud with independent Keystone in every region.
> 
> There is a number of use cases for the change:
> 1. Allow users to re-use their tokens in all regions across the
> distributed cloud. With global authentication (LDAP backed) and same
> roles names this is only one missing piece which prevents the user to
> switch between regions even withing single Horizon session.
> 2. Automated tools responsible for statistics collection may access all
> regions using one token (real customer's usecase)

What do you understand by "region"?

> 3. Glance replication may happen because the images' parameter "owner"
> (which is a project) should be consistent across the regions.
> 
> What I hear all time - "you have to replicate your database" which from
> the devops/deployment/operations perspective is totally wrong approach.
> If it is possible to avoid Galera replication over geographically
> distributed regions - then API calls should be used. Moreover, in case
> of 2 DCs there will be an issue to decide which region has to take over
> when they are isolated from each other.

My comment in the review still stands. With the change we are getting
ourselves into a situation where some tokens *are* verifiable in 2
regions (project-scoped with identical project ids in both regions),
some *might be* verifiable in 2 regions (project-scoped with ids about
which we can't tell anything), and some *are not* verifiable, because
they are federation- or trust-scoped. A user (human or script) won't be
able to tell what functionality the token brings without complex
inspection.

The current design is that there is a single issuer of tokens and a single
consumer. With the patch there will be a single issuer and multiple
consumers, which is basically SSO, but done without an explicit
design decision.

Here is what I suggest:

1. Stop thinking about 2 keystone installations with a non-shared database
as "one single keystone". If there are 2 non-replicated databases,
there are 2 separate keystones. 2 separate keystones have completely
different sets of tokens. Do not try to share fernet keys between
separate keystones.

2. Instead of implementing poor man's federation, use real federation.
Create appropriate projects and create group-based assignments, one
for each keystone instance. Use these group-based assignments for
federated users.
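
(A rough sketch of what suggestion 2 could look like against each keystone
instance, using python-keystoneclient - the names, domain and URL are made
up for illustration, and the keystoneauth/keystoneclient calls are a sketch,
not a tested recipe:)

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    # Repeat against each keystone instance (one per region).
    auth = v3.Password(auth_url='https://keystone.region-one.example.com/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    keystone = client.Client(session=session.Session(auth=auth))

    # Create the project and a group locally, then give the group a role
    # on the project.
    project = keystone.projects.create(name='payroll', domain='default')
    group = keystone.groups.create(name='payroll-operators', domain='default')
    member = keystone.roles.list(name='Member')[0]
    keystone.roles.grant(member, group=group, project=project)

    # The federation mapping in this instance can then place incoming
    # federated users into the 'payroll-operators' group.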

> There is a long conversation in the comments of the review, mainly with
> concerns from cores (purely developer's opinions).
> 
> Please help me to bring it to life ;)
> 
> PS I'm so sorry, forgot to create a topic in the original message
> 
> -- 
> Andrey Grebennikov
> Principal Deployment Engineer
> Mirantis Inc, Austin TX
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Lance Bragstad
The ability to specify IDs at project creation time was proposed as a
specification last summer [0]. The common theme from the discussion in that
thread was to use shadow mapping [1] to solve that problem.

[0] https://review.openstack.org/#/c/323499/
[1]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ocata/shadow-mapping.html
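
(Steve's note below mentions restricting custom IDs to 32-character UUID4
strings, i.e. the same shape as the uuid4().hex values keystone generates
today. A quick illustrative check of that shape - not the actual validation
code from the patch:)

    import uuid

    def looks_like_uuid4_hex(value):
        """True if value has the shape of uuid.uuid4().hex output."""
        if not isinstance(value, str) or len(value) != 32:
            return False
        try:
            parsed = uuid.UUID(value)
        except ValueError:
            return False
        return parsed.hex == value and parsed.version == 4

    print(looks_like_uuid4_hex(uuid.uuid4().hex))          # True
    print(looks_like_uuid4_hex('not-a-valid-project-id'))  # False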


On Mon, Dec 5, 2016 at 12:47 PM, Steve Martinelli 
wrote:

> I'm OK with the agreed approach in the patch, we restrict the ID being
> specified to 32 character UUID4 string. And it only works on project
> create, not update.
>
> On Mon, Dec 5, 2016 at 1:20 PM, Andrey Grebennikov <
> agrebenni...@mirantis.com> wrote:
>
>> Hi keystoners,
>> I'd like to open the discussion about the little feature which I'm trying
>> to push forward for a while but I need some feedbacks/opinions/concerns
>> regarding this.
>> Here is the review I'm talking about https://review.openstack
>> .org/#/c/403866/
>>
>> What I'm trying to cover is multi-region deployment, which includes
>> geo-distributed cloud with independent Keystone in every region.
>>
>> There is a number of use cases for the change:
>> 1. Allow users to re-use their tokens in all regions across the
>> distributed cloud. With global authentication (LDAP backed) and same roles
>> names this is only one missing piece which prevents the user to switch
>> between regions even withing single Horizon session.
>> 2. Automated tools responsible for statistics collection may access all
>> regions using one token (real customer's usecase)
>> 3. Glance replication may happen because the images' parameter "owner"
>> (which is a project) should be consistent across the regions.
>>
>> What I hear all time - "you have to replicate your database" which from
>> the devops/deployment/operations perspective is totally wrong approach.
>> If it is possible to avoid Galera replication over geographically
>> distributed regions - then API calls should be used. Moreover, in case of 2
>> DCs there will be an issue to decide which region has to take over when
>> they are isolated from each other.
>>
>> There is a long conversation in the comments of the review, mainly with
>> concerns from cores (purely developer's opinions).
>>
>> Please help me to bring it to life ;)
>>
>> PS I'm so sorry, forgot to create a topic in the original message
>>
>> --
>> Andrey Grebennikov
>> Principal Deployment Engineer
>> Mirantis Inc, Austin TX
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [third-party][ci] devstack failures

2016-12-05 Thread Apoorva Deshpande
Thanks Lenny and Ian.

** Cinder Team **
At Tintri, we are looking at the issue and will try to pursue workarounds
for this problem. But due to a holiday-schedule resource crunch, we may not be
able to fix it until the new year. I will update the status on Tintri CI when
we have a breakthrough. Please contact openstack-...@tintri.com if you have
any questions.

Thanks,
Apoorva



On Sun, Dec 4, 2016 at 11:27 PM, Lenny Verkhovsky 
wrote:

> Hello Apoorva,
>
> According to the log you have issue with remote git cloning,
>
> We are facing similar issues from time to time and as a workaround started
> to use local git mirror instead of rerunning such jobs.
>
>
>
> 2016-12-05 01:19:17.356 | + functions-common:git_timed:598   :
> timeout -s SIGINT 0 git clone git://git.openstack.org/
> openstack/horizon.git /opt/stack/horizon --branch master
>
> 2016-12-05 01:19:17.358 | Cloning into '/opt/stack/horizon'...
>
> 2016-12-05 03:20:41.160 | fatal: read error: Connection reset by peer
>
> 2016-12-05 03:20:41.165 | fatal: early EOF
>
>
>
>
>
>
>
> *From:* Apoorva Deshpande [mailto:apps.d...@gmail.com]
> *Sent:* Monday, December 5, 2016 7:43 AM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [cinder] [third-party][ci] devstack failures
>
>
>
> Hello,
>
>
>
> I am encountering devstack failures on our Cinder CI [1]. Could you please
> help me debug this? Last successful devstack installation was [2].
>
>
>
> [1] http://openstack-ci.tintri.com/tintri/refs-changes-23-405223-4/
>
> [2] http://openstack-ci.tintri.com/tintri/refs-changes-95-393395-7/
>
>
>
> thanks,
>
> @apoorva
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-05 Thread Dariusz Śmigiel
2016-12-05 12:51 GMT-06:00 Paul Belanger :
> On Mon, Dec 05, 2016 at 09:47:15AM +0100, Luigi Toscano wrote:
>> On Friday, 2 December 2016 14:42:31 CET Matt Riedemann wrote:
>> > But like we recently talked about the stable team meetings, we don't
>> > really need to be in a separate -alt room for those when we have the
>> > channel and anyone that cares about stable enough to be in the meeting
>> > is already in that channel, but sometimes the people in that channel
>> > forget about the meeting or which of the 20 alt rooms it's being held
>> > in, so they miss it (or Tony is biking down a volcano and we just don't
>> > have a meeting).
>>
>> This is just part of the problem, but couldn't the bot remind about the
>> meeting before its start on the main channel of a project? It would help with
>> people forgetting about the meetings.
>>
> I'm glad I am not the only one who has wanted this. Always thought it would 
> be a
> nice feature to have one of our bot remind me, PM would work, about coming
> events.
>
> This could be fixed if I could figure out have to run a calendar notifications
> via console.
>

The Oslo community has a working tool for that purpose [1]. It could
probably be used for other meetings as well.

Also, if someone is interested in a particular meeting, they can probably
set up some kind of reminder: a calendar, or a console-based one
(sent by fungi in the other thread [2]).

I've seen a couple of flash meetings in different #openstack-{project}
channels, so I think that instead of creating a new separate channel, it
would be better just to have meeting activity in the {project} channel.


[1] https://github.com/openstack/oslo.tools/blob/master/ping_me.py
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-December/108470.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Console-based calendaring and reminders (was: Creating a new IRC meeting room)

2016-12-05 Thread Jeremy Stanley
On 2016-12-05 13:51:19 -0500 (-0500), Paul Belanger wrote:
[...]
> This could be fixed if I could figure out have to run a calendar
> notifications via console.

I use the remind utility and the console-based frontend wyrd:

https://www.roaringpenguin.com/products/remind/
http://pessimization.com/software/wyrd/

Both have been packaged in Debian for ages, and probably many other distros as
well. Get in touch with me if you want details on how I tie it into tmux
window highlighting in my personal workflow (an almost-one-liner
shell script to echo a bell into the window where I leave wyrd
running).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2016-12-05 Thread Loo, Ruby
Hi,

We are dumbfounded to present this week's priorities and subteam report for 
Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and 
formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. portgroup: review code  
https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1618754
2. attach/detach: review code (after sambetts updates): 
https://review.openstack.org/#/c/327046/
3. boot from volume: next up: 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691
4. next notifications: review code: 
https://review.openstack.org/#/q/topic:bug/1606520+status:open # NOTE(mariojv): 
This doesn't have all the notification patches. See subteam status report for 
links to all notification patches. (this is next, not all)
5. rolling upgrades spec needs reviews: https://review.openstack.org/#/c/299245/
6. driver composition: the next patch introduces hardware types: 
https://review.openstack.org/336626


Bugs (dtantsur)
===
- Stats (diff between 27 Nov 2016 and 05 Dec 2016)
- Ironic: 219 bugs (-15) + 227 wishlist items. 16 new (-28), 171 in progress 
(-6), 0 critical (-1), 30 high (+1) and 28 incomplete (+7)
- Inspector: 14 bugs (+2) + 22 wishlist items. 3 new, 11 in progress (+1), 2 
critical (+1), 1 high and 3 incomplete (+1)
- Nova bugs with Ironic tag: 10. 0 new, 0 critical, 1 high

Portgroups support (sambetts, vdrok)

* trello: https://trello.com/c/KvVjeK5j/29-portgroups-support
- status as of most recent weekly meeting:
- portgroups patches need reviews: 
https://review.openstack.org/#/q/topic:bug/1618754
- including the client!

CI refactoring (dtantsur, lucasagomes)
==
* trello: https://trello.com/c/c96zb3dm/32-ci-refactoring
- status as of most recent weekly meeting:
- (lucasagomes): postgres job with standard PXE deployment fixed

Rolling upgrades and grenade-partial (rloo, jlvillal)
=
* trello: 
https://trello.com/c/GAlhSzLm/2-rolling-upgrades-and-grenade-with-multi-node
- status as of most recent weekly meeting:
- spec was updated, needs reviews, is really close: 
https://review.openstack.org/299245
- Work is ongoing for enabling Grenade with multi-tenant: 
https://review.openstack.org/389268
- Work on-going to get tempest "smoke" test working for the 
multi-node/multi-tenant job(vsaienko)

Security groups (jroll)
===
* trello: 
https://trello.com/c/klty7VVo/30-security-groups-for-provisioning-cleaning-network
- status as of most recent weekly meeting:
- last patch, documentation. Needs to be rebased and reviewed: 
https://review.openstack.org/#/c/393962/

Interface attach/detach API (sambetts)
==
* trello: https://trello.com/c/nryU4w58/39-interface-attach-detach-api
- status as of most recent weekly meeting:
- Spec merged and Nova BP approved
- Ironic patch up to date with latest version of spec, however we decided 
in the Ironic/Neutron meeting to experiment with splitting it up for easier review 
(driver changes/RPC API changes/REST API changes)
- Ironic - https://review.openstack.org/327046
- Patches need updating still:
- Nova - https://review.openstack.org/364413
- IronicClient - https://review.openstack.org/364420

Generic boot-from-volume (TheJulia)
===
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- status as of most recent weekly meeting:
- The majority of volume connection information patches have landed for the 
conductor.
- API side changes for volume connector information have a procedural -2 
until we can begin making use of the data in the conductor, but should still be 
reviewed
- https://review.openstack.org/#/c/214586/
- Boot from volume/storage interface patches have been rebased and received 
updates, and now await reviews.
- 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691

Driver composition (dtantsur)
=
* trello: https://trello.com/c/fTya14y6/14-driver-composition
- gerrit topic: https://review.openstack.org/#/q/status:open+topic:bug/1524745
- status as of most recent weekly meeting:
- moved node_create to conductor
- the next patch introduces hardware types: 
https://review.openstack.org/336626

Rescue mode (JayF)
==
* trello: https://trello.com/c/PwH1pexJ/23-rescue-mode
- status as of most recent weekly meeting:
- patch for API/Conductor methods needs review: 
https://review.openstack.org/#/c/350831/

etags in the REST API (gzholtkevych)

* trello: https://trello.com/c/MbNA4geB/33-rest-api-etags
- status as of most recent weekly meeting:
- 

Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-05 Thread Paul Belanger
On Mon, Dec 05, 2016 at 09:47:15AM +0100, Luigi Toscano wrote:
> On Friday, 2 December 2016 14:42:31 CET Matt Riedemann wrote:
> > But like we recently talked about the stable team meetings, we don't
> > really need to be in a separate -alt room for those when we have the
> > channel and anyone that cares about stable enough to be in the meeting
> > is already in that channel, but sometimes the people in that channel
> > forget about the meeting or which of the 20 alt rooms it's being held
> > in, so they miss it (or Tony is biking down a volcano and we just don't
> > have a meeting).
> 
> This is just part of the problem, but couldn't the bot remind about the 
> meeting before its start on the main channel of a project? It would help with 
> people forgetting about the meetings.
> 
I'm glad I am not the only one who has wanted this. Always thought it would be a
nice feature to have one of our bots remind me (a PM would work) about upcoming
events.

This could be fixed if I could figure out how to run calendar notifications
via the console.

> -- 
> Luigi
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Steve Martinelli
I'm OK with the approach agreed on in the patch: we restrict the ID being
specified to a 32-character UUID4 string, and it only works on project
create, not update.
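
For illustration, the restriction boils down to something like this rough
sketch (the "id" key in the create request body reflects the behavior proposed
in the patch, not anything in a released Keystone, and the name/domain values
are just placeholders):

    # Under the agreed restriction the only acceptable caller-supplied ID is
    # a 32-character hex UUID4 string:
    import json
    import uuid

    project_id = uuid.uuid4().hex
    assert len(project_id) == 32

    # Hypothetical POST /v3/projects request body carrying the explicit ID;
    # per the proposal it is honoured only at create time, never on update.
    body = {"project": {"name": "stats-collector",
                        "domain_id": "default",
                        "id": project_id}}
    print(json.dumps(body, indent=2))

The same ID would then be supplied when creating the project in every region.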

On Mon, Dec 5, 2016 at 1:20 PM, Andrey Grebennikov <
agrebenni...@mirantis.com> wrote:

> Hi keystoners,
> I'd like to open the discussion about a little feature that I've been trying
> to push forward for a while, but I need some feedback/opinions/concerns
> regarding it.
> Here is the review I'm talking about: https://review.openstack.org/#/c/403866/
>
> What I'm trying to cover is multi-region deployment, which includes
> geo-distributed cloud with independent Keystone in every region.
>
> There is a number of use cases for the change:
> 1. Allow users to re-use their tokens in all regions across the
> distributed cloud. With global authentication (LDAP backed) and the same role
> names, this is the only missing piece which prevents users from switching
> between regions even within a single Horizon session.
> 2. Automated tools responsible for statistics collection may access all
> regions using one token (a real customer's use case)
> 3. Glance replication may happen because the images' "owner" parameter
> (which is a project) should be consistent across the regions.
>
> What I hear all the time is "you have to replicate your database", which from
> the devops/deployment/operations perspective is totally the wrong approach.
> If it is possible to avoid Galera replication over geographically
> distributed regions, then API calls should be used. Moreover, in the case of 2
> DCs there will be an issue deciding which region has to take over when
> they are isolated from each other.
>
> There is a long conversation in the comments of the review, mainly with
> concerns from cores (purely developer's opinions).
>
> Please help me to bring it to life ;)
>
> PS I'm so sorry, forgot to create a topic in the original message
>
> --
> Andrey Grebennikov
> Principal Deployment Engineer
> Mirantis Inc, Austin TX
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Andrey Grebennikov
Hi keystoners,
I'd like to open the discussion about a little feature that I've been trying
to push forward for a while, but I need some feedback/opinions/concerns
regarding it.
Here is the review I'm talking about: https://review.openstack.org/#/c/403866/

What I'm trying to cover is multi-region deployment, which includes
geo-distributed cloud with independent Keystone in every region.

There is a number of use cases for the change:
1. Allow users to re-use their tokens in all regions across the distributed
cloud. With global authentication (LDAP backed) and the same role names, this
is the only missing piece which prevents users from switching between regions
even within a single Horizon session.
2. Automated tools responsible for statistics collection may access all
regions using one token (a real customer's use case)
3. Glance replication may happen because the images' "owner" parameter
(which is a project) should be consistent across the regions.

What I hear all the time is "you have to replicate your database", which from the
devops/deployment/operations perspective is totally the wrong approach.
If it is possible to avoid Galera replication over geographically
distributed regions, then API calls should be used. Moreover, in the case of 2
DCs there will be an issue deciding which region has to take over when
they are isolated from each other.

There is a long conversation in the comments of the review, mainly with
concerns from cores (purely developer's opinions).

Please help me to bring it to life ;)

PS I'm so sorry, forgot to create a topic in the original message

-- 
Andrey Grebennikov
Principal Deployment Engineer
Mirantis Inc, Austin TX
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone]

2016-12-05 Thread Andrey Grebennikov
Hi keystoners,
I'd like to open the discussion about a little feature that I've been trying
to push forward for a while, but I need some feedback/opinions/concerns
regarding it.
Here is the review I'm talking about
https://review.openstack.org/#/c/403866/

What I'm trying to cover is multi-region deployment, which includes
geo-distributed cloud with independent Keystone in every region.

There is a number of use cases for the change:
1. Allow users to re-use their tokens in all regions across the distributed
cloud. With global authentication (LDAP backed) and the same role names, this
is the only missing piece which prevents users from switching between regions
even within a single Horizon session.
2. Automated tools responsible for statistics collection may access all
regions using one token (a real customer's use case)
3. Glance replication may happen because the images' "owner" parameter
(which is a project) should be consistent across the regions.

What I hear all the time is "you have to replicate your database", which from the
devops/deployment/operations perspective is totally the wrong approach.
If it is possible to avoid Galera replication over geographically
distributed regions, then API calls should be used. Moreover, in the case of 2
DCs there will be an issue deciding which region has to take over when
they are isolated from each other.

There is a long conversation in the comments of the review, mainly with
concerns from cores (purely developer's opinions).

Please help me to bring it to life ;)

-- 
Andrey Grebennikov
Principal Deployment Engineer
Mirantis Inc, Austin TX
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Hacking 0.13.0 broken

2016-12-05 Thread Julien Danjou
On Mon, Dec 05 2016, Ian Cordasco wrote:

Hi Ian,

> In the last couple weeks we've merged and released broken code that
> had an underspecified use-case ostensibly because we all want to help
> each other be more productive. That said, as one of the few people who
> understands the interactions between what hacking uses (since Joe
> left) it seems like we're not enforcing the same level of quality for
> hacking that we do for other projects.
>
> Because the use case is far too fuzzy, and the code broken (and the
> code to fix it, further hardening our dependency on unsupported
> versions of upstream dependencies) I'm strongly proposing we revert
> the original change (https://review.openstack.org/407101).
>
> We should be working with the upstream communities (like we used to)
> and providing them with clear, unambiguous, narrowly defined use cases
> that will convince them of the benefits of our feature requests.

It seems to me that the subject of the thread is much more alarming than it
has to be, no? Hacking still works perfectly; it's just a new feature
that's not working, right?

The author themselves proposed to do that for good reasons, so I'm not
sure what's so wrong here.

Anyway, I've +2'ed your revert since it seems to make sense.

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Hacking 0.13.0 broken

2016-12-05 Thread Ian Cordasco
Hi all,

I would normally have brought this up at the QA team meeting but those
meetings are usually either quite late at night or quite early in the
morning for me and thus unattendable for me.

In the last couple weeks we've merged and released broken code that
had an underspecified use-case ostensibly because we all want to help
each other be more productive. That said, as one of the few people who
understands the interactions between what hacking uses (since Joe
left) it seems like we're not enforcing the same level of quality for
hacking that we do for other projects.

Because the use case is far too fuzzy, and the code broken (and the
code to fix it, further hardening our dependency on unsupported
versions of upstream dependencies) I'm strongly proposing we revert
the original change (https://review.openstack.org/407101).

We should be working with the upstream communities (like we used to)
and providing them with clear, unambiguous, narrowly defined use cases
that will convince them of the benefits of our feature requests.

Cheers,
-- 
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] trunk api performance and scale measurements

2016-12-05 Thread Armando M.
On 5 December 2016 at 08:07, Jay Pipes  wrote:

> On 12/05/2016 10:59 AM, Bence Romsics wrote:
>
>> Hi,
>>
>> I measured how the new trunk API scales with lots of subports. You can
>> find the results here:
>>
>> https://wiki.openstack.org/wiki/Neutron_Trunk_API_Performance_and_Scaling
>>
>> Hope you find it useful. There are several open ends, let me know if
>> you're interested in following up some of them.
>>
>
> Great info in there, Ben, thanks very much for sharing!


Bence,

Thanks for the wealth of information provided, I was looking forward to it!
The results of the experimentation campaign make me somewhat confident
that the trunk feature design is solid, or at least that is what it looks like!
I'll look into why there is a penalty on port-list, because that's
surprising to me too.

I also know that the QE team internally at HPE has done some perf testing
(though I don't have results publicly available yet), but what I can share
at this point is:

   - They also disabled l2pop to push the boundaries of trunk deployments;
   - They disabled OVS firewall (though for reasons orthogonal to
   scalability limits introduced by the functionality);
   - They flipped back to the ovsctl interface, as it turned out to be one of the
   components that introduced some *penalty*. Since you use the native
   interface, it'd be nice to see what happens if you flipped this switch too;
   - RPC timeout of 300.

On a testbed of 3 controllers and 7 computes, this is, at a high level, what
they found:

   - 100 trunks, with 1900 subports took about 30 mins with no errors;
   - 500 subports take about 1 min to bind to a trunk;
   - Booting a VM on trunk with 100 subports takes as little as 15 seconds
   to successful ping. Trunk goes from BUILD -> ACTIVE within 60 seconds of
   booting the VM;
   - Scaling to 700 VM's incrementally on trunks with 100 initial subports
   is constant (e.g. booting time stays set at ~15 seconds).

I believe Ryan Tidwell may have more on this.

Cheers,
Armando


>
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] trunk api performance and scale measurements

2016-12-05 Thread Jay Pipes

On 12/05/2016 10:59 AM, Bence Romsics wrote:

Hi,

I measured how the new trunk API scales with lots of subports. You can
find the results here:

https://wiki.openstack.org/wiki/Neutron_Trunk_API_Performance_and_Scaling

Hope you find it useful. There are several open ends, let me know if
you're interested in following up some of them.


Great info in there, Ben, thanks very much for sharing!

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] trunk api performance and scale measurements

2016-12-05 Thread Bence Romsics
Hi,

I measured how the new trunk API scales with lots of subports. You can
find the results here:

https://wiki.openstack.org/wiki/Neutron_Trunk_API_Performance_and_Scaling

Hope you find it useful. There are several open ends, let me know if
you're interested in following up some of them.
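
If you want to poke at a small slice of this yourself, here is a rough sketch
of one way to time subport additions through the trunk REST API (not
necessarily how the numbers on the wiki were gathered; the endpoint, token and
port UUIDs are placeholders for your own environment):

    # Rough sketch: time how long adding VLAN subports to a trunk takes,
    # driving the Neutron trunk API directly with the requests library.
    import time
    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"    # placeholder endpoint
    HEADERS = {"X-Auth-Token": "TOKEN",            # placeholder token
               "Content-Type": "application/json"}

    def create_trunk(parent_port_id):
        body = {"trunk": {"port_id": parent_port_id, "name": "perf-trunk"}}
        resp = requests.post(NEUTRON_URL + "/trunks", json=body,
                             headers=HEADERS)
        resp.raise_for_status()
        return resp.json()["trunk"]["id"]

    def add_subports(trunk_id, subport_ids, first_vlan=100):
        subports = [{"port_id": p,
                     "segmentation_type": "vlan",
                     "segmentation_id": first_vlan + i}
                    for i, p in enumerate(subport_ids)]
        start = time.time()
        resp = requests.put(NEUTRON_URL + "/trunks/%s/add_subports" % trunk_id,
                            json={"sub_ports": subports}, headers=HEADERS)
        resp.raise_for_status()
        return time.time() - start

    # e.g.: trunk = create_trunk("<parent-port-uuid>")
    #       print(add_subports(trunk, ["<subport-uuid-1>", "<subport-uuid-2>"]))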

Cheers,
Bence Romsics

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI scenarios design - how to add more services

2016-12-05 Thread Emilien Macchi
Giving few updates here:

- we implemented option 1.a), which means that we moved the tripleo CI
scenario environments and pingtests into tripleo-heat-templates.
- we created tripleo-scenarioXXX-puppet jobs that run on some modules.

Some examples:
- puppet-gnocchi now runs tripleo-scenario001, which deploys TripleO
with Telemetry services
- if you submit a patch in puppet-tripleo that touches the gnocchi
profiles, tripleo-scenario001 will also run
- if you submit a patch in THT that touches the gnocchi composable
service, tripleo-scenario001 will also run
- if you add a new service in TripleO during Pike and test it in a
scenario, the TripleO CI scenarios for Ocata will continue to work, as we
now use THT to store the CI environments and we don't backport
features.

Future:
- investigate if we could run the scenarios outside tripleo CI
(example: run tripleo-scenario001 in the Gnocchi upstream CI, alongside other
devstack jobs)
- keep increasing coverage of use-cases: more services, ssl, ipv6,
more plugins, etc.
- investigate how we could run multinode scenarios by using tripleo-quickstart.

Any feedback and help on this topic is welcome as usual.
Don't hesitate to contribute, add your own service, or propose
scenario improvements, it's highly welcome!

Thanks,

On Mon, Nov 28, 2016 at 3:35 PM, John Trowbridge  wrote:
>
>
> On 11/22/2016 09:02 PM, Emilien Macchi wrote:
>> == Context
>>
>> In Newton we added new multinode jobs called "scenarios".
>> The challenge we tried to solve was "how to test the maximum of
>> services without overloading the nodes that run tests".
>>
>> Each scenarios deploys a set of services, which allows us to
>> horizontally scale the number of scenarios to increase the service
>> testing coverage.
>> See the result here:
>> https://github.com/openstack-infra/tripleo-ci#service-testing-matrix
>>
>> To implement this model, we took example of Puppet OpenStack CI:
>> https://github.com/openstack/puppet-openstack-integration#description
>> We even tried to keep consistent the services/scenarios relations, so
>> it's consistent and easier to maintain.
>>
>> Everything was fine until we had to add new services during Ocata cycles.
>> Because tripleo-ci repository is not branched, adding Barbican service
>> in the TripleO environment for scenario002 would break Newton CI jobs.
>> During my vacations, the team created a new scenario, scenario004,
>> that deploys Barbican and that is only run for Ocata jobs.
>> I don't think we should proceed this way, and let me explain why.
>>
>> == Problem
>>
>> How to scale the number of services that we test without increasing
>> the number of scenarios and therefore the complexity of maintaining
>> them on long-term.
>>
>>
>> == Solutions
>>
>> The list is not exhaustive, feel free to add more.
>>
>> 1) Re-use experience from Puppet OpenStack CI and have environments
>> that are in a branched repository.
>> environments.
>> In Puppet OpenStack CI, the repository that deploys environments
>> (puppet-openstack-integration) is branched. So if puppet-barbican is
>> ready to be tested in Ocata, we'll patch
>> puppet-openstack-integration/master to start testing it and it won't
>> break stable jobs.
>> Like this, we were successfully able to maintain a fair number of
>> scenarios and keep our coverage increasing over each cycle.
>>
>> I see 2 sub-options here:
>>
>> a) Move CI environments and pingtest into
>> tripleo-heat-templates/environments/ci/(scenarios|pingtest). This repo
>> is branched and we could add a README to explain these files are used
>> in CI and we don't guarantee they would work outside TripleO CI tools.
>
> I also like this solution the best. It has the added benefit of being
> able to review the CI for a new service in the same patch (or patch
> chain) that adds the new service. We already have the low-memory
> environment in tht, which while not CI specific, is definitely a CI
> requirement.
>
>> b) Branch tripleo-ci repository. Personally I don't like this solution
>> because a lot of patches in this repo are not related to OpenStack
>> versions, which means we would need to backport most of the things
>> from master.
>>
>> 2) Introduce branch-based scenario tests -
>> https://review.openstack.org/#/c/396008/
>> It duplicates a lot of code and it's imho not really effective, though
>> this solution would correctly works.
>>
>> 3) Introduce a new scenario each time we have new services (like we
>> did with scenario004).
>> By adding new scenarios at each release because we test new services
>> is imho the wrong choice because:
>> a) it adds complexity in how we're going to maintain these scenarios.
>> b) it consumes more CI resources than we would need when some patches
>> have to run all scenario jobs.
>>
>>
>> So I gave my opinion on the solutions, discussion is now open and my
>> hope is that we find a consensus soon, so we can make progress in our
>> testing coverage.
>> Thanks,
>>
>
> 

Re: [openstack-dev] [tripleo] POC patch for using tripleo-repos for repo setup

2016-12-05 Thread Brad P. Crochet
On Wed, Nov 23, 2016 at 11:07 AM, Ben Nemec  wrote:
>
>
> On 11/22/2016 08:18 PM, Emilien Macchi wrote:
>>
>> Even if I was part of those who approved this feature, I still have
>> some comments, inline:
>>
>> On Tue, Nov 22, 2016 at 1:30 PM, Alex Schultz  wrote:
>>>
>>> On Tue, Nov 22, 2016 at 11:06 AM, Ben Nemec 
>>> wrote:



 On 11/21/2016 05:26 PM, Alex Schultz wrote:
>
>
> On Mon, Nov 21, 2016 at 2:57 PM, Ben Nemec 
> wrote:
>>
>>
>> Hi,
>>
>> I have a POC patch[1] up to demonstrate the use of the tripleo-repos
>> tool
>> [2] as a replacement for most of tripleo.sh --repo-setup.  It has now
>> passed
>> all of the CI jobs so I wanted to solicit some feedback.
>>
>> There are a few changes related to repo naming because the tool names
>> repo
>> files based on the repo name rather than always calling them something
>> generic like delorean.repo.  I think it's less confusing to have the
>> delorean-newton repo file named delorean-newton.repo, but I'm open to
>> discussion on that.
>>
>> If no one has any major objections to how the tool looks/works right
>> now,
>> I'll proceed with the work to get it imported into the openstack
>> namespace
>> as part of TripleO.  We can always iterate on it after import too, of
>> course, so this isn't a speak now or forever hold your peace
>> situation.
>> :-)
>>
>
> Why a python standalone application for the management of specifically
> (and only?) tripleo repositories.  It seems we should be trying to
> leverage existing tooling and not hiding the problem behind a python
> app.  It's not that I enjoy the current method described in the spec
> (3 curls, 1 sed, 1 bash thing, and a yum install) but it seems like
> writing 586 lines of python and tests might be the wrong approach.
> Would it be better to just devote some time to rpm generation for
> these and deliver it in discrete RPMs?  'yum install
> http://tripleo.org/repos-current.rpm' seems way more straight forward.



 That's essentially trading "curl ..." for "yum install ..." in the docs.
 The repo rpm would have to be part of the delorean build so you'd still
 have
 to be pointing people at a delorean repo.  It would also still require
 code
 changes somewhere to handle the mixed current/current-tripleo setup that
 we
 run for development and test. Given how specific to tripleo that is I'm
 not
 sure how much sense it makes to implement it elsewhere.

>>>
>>> I'm asking because essentially we're delivering basically static repo
>>> files.  Which should really be done via RPM. Upgrades and cleanups are
>>> already well established practices between RPMs.  I'm not seeing the
>>> reasoning why a python app.  I thought about this further and I'm not
>>> sure why this should be done on the client side via an app rather than
>>> at repository build/promotion time.  As long as we're including the
>>> repo rpm, we can always create simple 302 redirects from a webserver
>>> to get the latest version.  I don't see why we should introduce a
>>> client tool for this when the action is really on the repository
>>> packaging side.   This seems odd doing system configuration via a
>>> python script rather than a configuration management tool like
>>> ansible, puppet or even just packaging.
>>>
 There are also optional ceph and opstool repos and at least ceph needs
 to
 match the version of OpenStack being installed.  Sure, you could just
 document all that logic, but then the logic has to be reimplemented in
 CI
 anyway so you end up with code to maintain either way.  At least with
 one
 tool the logic is shared between the user and dev/test paths, which is
 one
 of the primary motivations behind it.  Pretty much every place that we
 have
 divergence between what users do and what developers do becomes a pain
 point
 for users because developers only fix the things they actually use.

>>>
>>> Yes we should not have a different path for dev/test vs operational
>>> deployments, but I'm not convinced we need to add a custom python tool
>>> to handle this only for tripleo.  There are already established
>>> patterns of repository rpm delivery and installation via existing
>>> tools.  What are we getting from this tool that can't already be done?
>>
>>
>> That is true, here are some of them:
>> - centos-release-ceph-(hammer|jewel) rpm that deploys repos.
>> - since we are moving TripleO CI to use tripleo-quickstart, we could
>> handle repository with Ansible, directly in the roles.
>
>
> This is exactly what I'm trying to avoid here.  I want us to be using the
> same thing for repo management in CI and dev and real user environments.
> Unless 

Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-05 Thread Dolph Mathews
On Sun, Dec 4, 2016 at 8:49 PM Tony Breeds  wrote:

> On Fri, Dec 02, 2016 at 11:35:05AM +0100, Thierry Carrez wrote:
> > Hi everyone,
> >
> > There has been a bit of tension lately around creating IRC meetings.
> > I've been busy[1] cleaning up unused slots and defragmenting biweekly
> > ones to open up possibilities, but truth is, even with those changes
> > approved, there will still be a number of time slots that are full:
> >
> > Tuesday 14utc -- only biweekly available
> > Tuesday 16utc -- full
> > Wednesday 15utc -- only biweekly available
> > Wednesday 16utc -- full
> > Thursday 14utc -- only biweekly available
> > Thursday 17utc -- only biweekly available
> >
> > [1] https://review.openstack.org/#/q/topic:dec2016-cleanup
> >
> > Historically, we maintained a limited number of meeting rooms in order
> > to encourage teams to spread around and limit conflicts. This worked for
> > a time, but those days I feel like team members don't have that much
> > flexibility in picking a time that works for everyone. If the miracle
> > slot that works for everyone is not available on the calendar, they tend
> > to move the meeting elsewhere (private IRC channel, Slack, Hangouts)
> > rather than change time to use a less-busy slot.
> >
> > So I'm now wondering how much that artificial scarcity policy is hurting
> > us more than it helps us. I'm still convinced it's very valuable to have
> > a number of "meetings rooms" that you can lurk in and be available for
> > pings, without having to join hundreds of channels where meetings might
> > happen. But I'm not sure anymore that maintaining an artificial scarcity
> > is helpful in limiting conflicts, and I can definitely see that it
> > pushes some meetings away from the meeting channels, defeating their
> > main purpose.
> >
> > TL;DR:
>
> Shouldn't this have been the headline ;P
>
> > - is it time for us to add #openstack-meeting-5 ?
>
> 13:38  info #openstack-meeting-5
> 13:38 -ChanServ(ChanServ@services.)- Information on #openstack-meeting-5:
> 13:38 -ChanServ(ChanServ@services.)- Founder: Magni, openstackinfra
> 13:38 -ChanServ(ChanServ@services.)- Successor  : freenode-staff
> 13:38 -ChanServ(ChanServ@services.)- Registered : Nov 27 20:02:51 2015
> (1y 1w 1d ago)
> 13:38 -ChanServ(ChanServ@services.)- Mode lock  : +ntc-slk
> 13:38 -ChanServ(ChanServ@services.)- Flags  : GUARD
> 13:38 -ChanServ(ChanServ@services.)- *** End of Info ***
>
> So if we're going to go down that path it's just a matter of the appropriate
> changes in openstack-infra/{system,project}-config
>
> > - should we more proactively add meeting channels in the future ?
>
> In an attempt to send the world's most "on the fence" reply: I really like
> the current structure, and I think it works well for the parts of the
> community that touch lots of projects.  Having said that, in my not very
> scientific opinion that's a very small amount of the community.  I think
> that most contributors would benefit from moving the meetings into
> $project-specific rooms as Amrith, Matt and (kinda sorta) Daniel suggested.
>

I think it honestly reflects our current breakdown of contributors &
collaboration. The artificial scarcity model only helps a vocal minority
with cross-project focus, and just results in odd meeting times for the
majority of projects that don't hold primetime meeting slots.

While I don't think we should do away with meeting rooms, if a project
wants to hold meetings at a convenient time in their normal channel, I
think that's fine. Meeting conflicts will always exist. Major conflicts
will be resolved without the additional pressure of artificial scarcity.


>
> Yours Tony.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
-Dolph
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] core and driver teams attrition

2016-12-05 Thread Kevin Benton
+100 :)

We got hit pretty hard by some unfortunate company decisions that took
contributors away but we still have lots of contributors to promote to core
and get back on track.


On Dec 1, 2016 23:36, "Armando M."  wrote:

Hi Neutrinos,

By now a few of us have noticed that the neutron core and driver teams have
lost a number of very talented and experienced engineers: well... this sucks,
there's no more polite way to put it!!

These engineers are the collective memory of the project, know the kinks
and gotchas of the codebase, and are very intimate with the overall
openstack machine. They leave a big void behind. It is sad to see them go, but
personally I have the utmost confidence in the fact that they have groomed
other engineers to step up in a role of greater responsibility.

Ocata might be seen by some as a hiatus, and that's ok: take the time to
rest if you can, because the team is going to grow back stronger than ever,
and it will make neutron kick-ass, even more so than it already is.

You can count on it.

Happy hacking!
Armando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-05 Thread Jeremy Stanley
On 2016-12-05 08:43:38 +0100 (+0100), Andreas Jaeger wrote:
[...]
> Accessbot is just permissions - this is not relevant.
[...]

To clarify, our accessbot never joins any channels at all. It only
connects to the server and interacts with ChanServ to configure
permissions for the channels listed. Thus, it is not impacted by
Freenode's CHANLIMIT setting.

Our meetbot on the other hand is, as mentioned, near the 120 channel
limit already because it's also the bot that handles general channel
logging to eavesdrop.openstack.org. An interested party could try
making http://git.openstack.org/cgit/openstack-infra/puppet-meetbot
support configuring and running an arbitrary number of meetbots
simultaneously to allow us to (at least manually) shard the service
across channels.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] No team meeting today - 12/05/2016

2016-12-05 Thread Renat Akhmerov

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Blazar] Blazar weekly team meeting starts again

2016-12-05 Thread Pierre Riteau
> On 5 Dec 2016, at 10:36, Sylvain Bauza  wrote:
> 
> 
> 
> On 05/12/2016 05:45, Masahito MUROI wrote:
>> Hi Stackers,
>> 
>> I'm happy to announce on the openstack-dev ML that the weekly Blazar team
>> meeting is going to start again from 6th Dec.
>> 
>> Some folks who are interested in the resource reservation feature met
>> together at the Barcelona summit and agreed to re-start the project. As a
>> first step towards resurrecting the project, we'll start team activity from
>> now on.
>> 
>> Following are meeting details. The meeting info[1] is now in review
>> status, so if it's not merged by the time, we will use #openstack-blazar
>> for the first meeting. Feel free to join it!!
>> 
>> Meeting days: Tuesday 9:00am UTC
>> Meeting channel: #openstack-meeting-alt
>> 
>> 1. https://review.openstack.org/#/c/406667/
>> 
> 
> I *could* be able to join weekly meetings for helping you to hand over
> the project, drop me a courtesy ping (bauzas) when you start, folks.
> 
> -Sylvain

Thank you Sylvain, that’s very kind of you! I will give you a ping tomorrow 
before we start.

The agenda for tomorrow’s meeting is here: 
https://wiki.openstack.org/wiki/Meetings/Blazar#Agenda_for_December_6_2016 


Pierre

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Question about config cinder volume

2016-12-05 Thread Christian Berendt

> On 2 Dec 2016, at 14:49, Jason HU  wrote:
> 1. Will Glance use Cinder volume as storage automatically when I enable 
> Cinder?

The default store for Glance is “file" when not using Ceph.

https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/glance/templates/glance-api.conf.j2#L68-L71

You can change this by using custom configurations 
(http://docs.openstack.org/developer/kolla/advanced-configuration.html#openstack-service-configuration-in-kolla).

> 2. I heard that kolla has a third party plugin method which can be used to do 
> some setup on target node. Can it be used to setup the cinder-volume vg? Is 
> there any example on doing this?

http://docs.openstack.org/developer/kolla/image-building.html#plugin-functionality

We support a plugin functionality for our Docker templates.

I think it is better to create the required volume group with a custom Ansible 
playbook during the bootstrap of your storage node.

> 3. The storage system is HDS AMS2300, which does not seem to be supported by any 
> Cinder driver. So I have to use the LVM backend. According to the cinder/glance 
> bug mentioned in Kolla doc, does it mean that I can not deploy 
> multi-controller scenario, unless using Ceph?

Check if you can use the HDS AMS2300 as external iSCSI storage 
(http://docs.openstack.org/developer/kolla/cinder-guide.html#cinder-back-end-with-external-iscsi-storage).

I think the use of LVM2 / iSCSI as a layer before the storage backend is not a 
good idea.

HTH, Christian.

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Blazar] Blazar weekly team meeting starts again

2016-12-05 Thread Sylvain Bauza


On 05/12/2016 05:45, Masahito MUROI wrote:
> Hi Stackers,
> 
> I'm happy to announce on the openstack-dev ML that the weekly Blazar team
> meeting is going to start again from 6th Dec.
> 
> Some folks who are interested in the resource reservation feature met
> together at the Barcelona summit and agreed to re-start the project. As a
> first step towards resurrecting the project, we'll start team activity from
> now on.
> 
> Following are meeting details. The meeting info[1] is now in review
> status, so if it's not merged by the time, we will use #openstack-blazar
> for the first meeting. Feel free to join it!!
> 
> Meeting days: Tuesday 9:00am UTC
> Meeting channel: #openstack-meeting-alt
> 
> 1. https://review.openstack.org/#/c/406667/
> 

I *could* be able to join weekly meetings for helping you to hand over
the project, drop me a courtesy ping (bauzas) when you start, folks.

-Sylvain

> best regards,
> Masahito
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Allowing Teams Based on Vendor-specific Drivers

2016-12-05 Thread Thierry Carrez
Armando M. wrote:
> That's when I came up with the proposal of defining the neutron stadium
> as a list of projects that behave consistently and adhere to a common
> set of agreed principles, such as common backporting strategies, testing
> procedures, including our ability to claim the entire technology stack
> to be fully open and completely exercised with upstream infra resources:
> a list of projects that any member of the neutron core team should be
> able to stand behind and support it without too many ideological clashes.

+1

With our team-centric view of community, I don't think it's reasonable
to ask a project team to vouch for code they are not comfortable with
(whatever the reason). At the same time I think we need to find a way to
consider vendor driver development part of OpenStack, without
compromising on our open collaboration principles.

So I think the right trade-off is to recognize that driver teams are
separate beasts, with a narrow scope (implementing an interface designed
by another OpenStack project) and relaxed open collaboration rules...

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Upgrades meeting canceled for today

2016-12-05 Thread Ihar Hrachyshka

Hi all,

all candidates to chair the meeting are overbooked today and won’t be able  
to lead it, so we will cancel the meeting. We will meet next week at the usual
time.


Sorry for the late notification,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-05 Thread Luigi Toscano
On Friday, 2 December 2016 14:42:31 CET Matt Riedemann wrote:
> But like we recently talked about the stable team meetings, we don't
> really need to be in a separate -alt room for those when we have the
> channel and anyone that cares about stable enough to be in the meeting
> is already in that channel, but sometimes the people in that channel
> forget about the meeting or which of the 20 alt rooms it's being held
> in, so they miss it (or Tony is biking down a volcano and we just don't
> have a meeting).

This is just part of the problem, but couldn't the bot remind about the 
meeting before its start on the main channel of a project? It would help with 
people forgetting about the meetings.

-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][tricircle]DVR issue in cross Neutron networking

2016-12-05 Thread joehuang
Hello,

 Tricircle plans to provide L2 networks spanning multiple Neutron instances, to
 make it easier to support highly available applications:

 For example, in the following figure the application consists of
 instance1 and instance2, and these two instances will be deployed into two
 OpenStack clouds. Instance1 will provide service through "ext net1" (i.e., the
 external network in OpenStack1), and Instance2 will provide service through
 "ext net2". Instance1 and Instance2 will be plugged into the same L2 network,
 net3, for data replication (for example database replication).

  +-+   +-+
  |OpenStack1   |   |OpenStack2   |
  | |   | |
  | ext net1|   | ext net2|
  |   +-+-+ |   |   +-+-+ |
  | |   |   | |   |
  | |   |   | |   |
  |  +--+--+|   |  +--+--+|
  |  | ||   |  | ||
  |  | R1  ||   |  | R2  ||
  |  | ||   |  | ||
  |  +--+--+|   |  +--+--+|
  | |   |   | |   |
  | |   |   | |   |
  | +---+-+-+   |   | +---+-+-+   |
  | net1  | |   | net2  | |
  |   | |   |   | |
  |  ++--+  |   |  ++--+  |
  |  | Instance1 |  |   |  | Instance2 |  |
  |  +---+  |   |  +---+  |
  | |   |   | |   |
  | |   | net3  | |   |
  |  +--+-++  |
  | |   | |
  +-+   +-+

 When we deploy the application in such a way, no matter which part of the
 application stops providing service, the other part can still provide
 service and take over the workload from the failed one. This brings failure
 tolerance whether the failure is due to an OpenStack crash or upgrade, or a
 crash or upgrade of part of the application.

 This mode works very well and is helpful, and routers R1 and R2 can run in DVR
 or legacy mode.

 However, as raised during the discussion and review of the spec
 https://review.openstack.org/#/c/396564/, in this deployment the end user
 has to add two NICs to each instance, one of them for net3 (the L2 network
 spanning the OpenStack clouds). Also, net3 cannot be added to routers R1 and R2
 via add_router_interface, which is not good from a networking point of view.

 If the end user wants to do so, there are DVR MAC issues when more than one
 cross-OpenStack L2 network is added to routers R1 and R2 via add_router_interface.

 Let's look at the following deployment scenario:
 +-+   +---+
 |OpenStack1   |   |OpenStack2 |
 | |   |   |
 | ext net1|   | ext net2  |
 |   +-+-+ |   |   +-+-+   |
 | |   |   | | |
 | |   |   | | |
 | +---+--+|   |  +--+---+ |
 | |  ||   |  |  | |
 | |R1||   |  |   R2 | |
 | |  ||   |  |  | |
 | ++--+--+|   |  +--+-+-+ |
 |  |  |   |   | | |   |
 |  |  |   | net3  | | |   |
 |  |   -+-+---+-+--+  |   |
 |  || |   |   |   |   |
 |  | +--+---+ |   | +-+-+ |   |
 |  | | Instance1| |   | | Instance2 | |   |
 |  | +--+ |   | +---+ |   |
 |  |  | net4  |   |   |
 | ++---+--+---+-+ |
 |  |  |   |   |   |
 |  +---+---+  |   |  ++---+   |
 |  | Instance3 |  |   |  | Instance4  |   |
 |  +---+  |   |  ++   |
 | |   |   |
 +-+   +---+

 net3 and net4 are two L2 networks spanning the OpenStack clouds. These two
 networks will be added as router interfaces to R1 and R2. Tricircle can help
 with this, and it addresses the DHCP and gateway challenges: a different
 gateway port is used for the same network in each OpenStack, so there is no
 problem for north-south traffic; north-south traffic goes to the local external
 network directly, for example Instance1->R1->ext net1 and Instance2->R2->ext net2.

 The issue is with east-west traffic if R1 and R2 are running in DVR mode:
 when instance1 tries to ping instance4, DVR MAC replacement happens before
 the packet leaves the host where instance1 is running. When the packet
 arrives at the host where instance4 is running, the source MAC of the packet
 (a DVR MAC from OpenStack1) is not recognized in OpenStack2 because of that
 replacement, so the packet is dropped and the ping fails.

 The latter one deployment bring