[openstack-dev] [vitrage] Design discussion for overlapping templates support & new section in Vitrage wiki

2016-05-10 Thread Rosensweig, Elisha (Nokia - IL)
Hi all,

I've added a new section in the Vitrage Wiki page where links to active design 
discussions will be posted. You can find it here: 
https://wiki.openstack.org/wiki/Vitrage#Open_Design_Discussions

Once a design has been finalized, it will be moved to the "Design Documents" 
section.

The first entry in this section is an etherpad for discussing Vitrage support 
for overlapping templates. Here is the link: 
https://etherpad.openstack.org/p/vitrage-overlapping-templates-support-design 

Thanks,

Elisha


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Seeing db lockout issues in neutron add_router_interface

2016-05-10 Thread Kevin Benton
Unfortunately, we didn't switch to the new SQL driver until Liberty, so that
probably wouldn't be a safe switch in Kilo.

Adding a retry will help, but unfortunately that will still block your call
for 60 seconds with that driver until the timeout exception is triggered.
We worked around this in ML2 by identifying the calls that could yield
while holding a DB lock and then acquiring a semaphore before doing each
one.
You can see an example here:
https://github.com/openstack/neutron/blob/363eeb06104662ee38aeed04af043899379f6ab8/neutron/plugins/ml2/plugin.py#L1074
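
[Editor's note: a minimal sketch of the pattern described above -- serializing,
behind a shared named lock, a call that may yield to other greenthreads while
DB rows are still locked. The lock name and helper are illustrative only, not
the actual ML2 code.]

    from oslo_concurrency import lockutils

    # Serialize the yielding call behind a shared, named lock so that no
    # other greenthread runs it while DB rows are still locked.
    def guarded_delete_port(plugin, context, port_id):
        with lockutils.lock('db-access'):
            plugin.delete_port(context, port_id)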

On Tue, May 10, 2016 at 11:27 PM, Divya  wrote:

> Thanks, Mike, for the response. I am part of the Nuage OpenStack team. We are
> looking into the issue.
> An extra delete_port call in NuagePlugin's add_router_interface triggers the
> db lockout on the insert into routerport (this is in core Neutron).
> Are you suggesting that NuagePlugin should retry in this case, or that core
> Neutron's add_router_interface should retry?
> We will give it a try.
>
>
> On Tue, May 10, 2016 at 4:54 PM, Mike Bayer  wrote:
>
>>
>>
>> On 05/10/2016 04:57 PM, Divya wrote:
>>
>>> Hi,
>>> I am trying to run this rally test on stable/kilo
>>>
>>> https://github.com/openstack/rally/blob/master/samples/tasks/scenarios/neutron/create_and_delete_routers.json
>>>
>>> with concurrency 50 and iterations 2000.
>>>
>>> This test basically creates routers and subnets
>>> and then calls
>>> router-interface-add
>>> router-interface-delete
>>>
>>>
>>> And I am running this against the 3rd party Nuage plugin.
>>>
>>> In the NuagePlugin:
>>>
>>> add_router_interface is something like this:
>>> 
>>> super().add_router_interface
>>> try:
>>>some calls to external rest server
>>>super().delete_port
>>> except:
>>>
>>> remove_router_interface:
>>> ---
>>> super().remove_router_interface
>>> some calls to external rest server
>>> super().create_port()
>>> some calls to external rest server
>>>
>>>
>>> If I comment out delete_port in add_router_interface, I am not hitting
>>> the db lockout issue.
>>> delete_port and the other operations are not within any transaction,
>>> so I am not sure why this is leading to db lock timeouts on the insert
>>> into routerport.
>>>
>>> error trace
>>> http://paste.openstack.org/show/496626/
>>>
>>>
>>>
>>> Really appreciate any help on this.
>>>
>>
>>
>> I'm not on the Neutron team, but in general, Openstack applications
>> should be employing retry logic internally which anticipates database
>> deadlocks like these and retries the operation.  I'd report this stack
>> trace (especially if it is reproducible) as a bug to this plugin's
>> launchpad project.
>>
>>
>>
>>
>>> Thanks,
>>> Divya
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Issues with gate-barbican-python27

2016-05-10 Thread Juan Antonio Osorio
This happened because of a change in oslo.messaging that we didn't react
fast enough to.

I have already submitted a bug report,
https://bugs.launchpad.net/barbican/+bug/1580461, which is the recommended
approach when something like this is noticed. Thanks for notifying us on the
mailing list, though.

I'm working on a fix.

BR

On Wed, May 11, 2016 at 12:19 AM, Freddy Pedraza <
freddy.pedr...@rackspace.com> wrote:

> Hi,
>
> I submitted a simple CR (https://review.openstack.org/#/c/312786) and
> "gate-barbican-python27" is failing; I think it's caused by something
> else upstream. These are the failures I see in the console log:
>
> FAIL:
> barbican.tests.queue.test_keystone_listener.WhenUsingMessageServer.test_keystone_notification_pool_size_used
> FAIL:
> barbican.tests.queue.test_keystone_listener.WhenUsingMessageServer.test_should_start
> FAIL:
> barbican.tests.queue.test_keystone_listener.WhenUsingMessageServer.test_should_stop
> FAIL:
> barbican.tests.queue.test_keystone_listener.WhenUsingMessageServer.test_should_wait
>
> ERROR: InvocationError:
> '/home/jenkins/workspace/gate-barbican-python27/.tox/py27/bin/python
> setup.py testr --coverage --testr-args=‘
> __ summary
> 
> ERROR:   py27: commands failed
>
> More details at —>
> http://logs.openstack.org/86/312786/2/check/gate-barbican-python27/833bcc2/console.html#_2016-05-10_20_29_16_472
>
> Any ideas on what’s going on?
>
> Thanks in Advance
>
> Freddy Pedraza
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Seeing db lockout issues in neutron add_router_interface

2016-05-10 Thread Divya
Thanks, Mike, for the response. I am part of the Nuage OpenStack team. We are
looking into the issue.
An extra delete_port call in NuagePlugin's add_router_interface triggers the db
lockout on the insert into routerport (this is in core Neutron).
Are you suggesting that NuagePlugin should retry in this case, or that core
Neutron's add_router_interface should retry?
We will give it a try.


On Tue, May 10, 2016 at 4:54 PM, Mike Bayer  wrote:

>
>
> On 05/10/2016 04:57 PM, Divya wrote:
>
>> Hi,
>> I am trying to run this rally test on stable/kilo
>>
>> https://github.com/openstack/rally/blob/master/samples/tasks/scenarios/neutron/create_and_delete_routers.json
>>
>> with concurrency 50 and iterations 2000.
>>
>> This test basically creates routers and subnets
>> and then calls
>> router-interface-add
>> router-interface-delete
>>
>>
>> And I am running this against the 3rd party Nuage plugin.
>>
>> In the NuagePlugin:
>>
>> add_router_interface is something like this:
>> 
>> super().add_router_interface
>> try:
>>some calls to external rest server
>>super().delete_port
>> except:
>>
>> remove_router_interface:
>> ---
>> super().remove_router_interface
>> some calls to external rest server
>> super().create_port()
>> some calls to external rest server
>>
>>
>> If I comment out delete_port in add_router_interface, I am not hitting
>> the db lockout issue.
>> delete_port and the other operations are not within any transaction,
>> so I am not sure why this is leading to db lock timeouts on the insert
>> into routerport.
>>
>> error trace
>> http://paste.openstack.org/show/496626/
>>
>>
>>
>> Really appreciate any help on this.
>>
>
>
> I'm not on the Neutron team, but in general, Openstack applications should
> be employing retry logic internally which anticipates database deadlocks
> like these and retries the operation.  I'd report this stack trace
> (especially if it is reproducible) as a bug to this plugin's launchpad
> project.
>
>
>
>
>> Thanks,
>> Divya
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Seeing db lockout issues in neutron add_router_interface

2016-05-10 Thread Divya
Thanks, Kevin, for the response.
Kevin, this is stable/kilo (the customer is still on stable/kilo). Is pymysql
supported in stable/kilo?

Thanks & Regards,
Divya

On Tue, May 10, 2016 at 10:36 PM, Kevin Benton  wrote:

> In addition to what Mike said, "Lock wait timeout exceeded" sounds like an
> error from the C-based MySQL driver, whose blocking calls eventlet cannot
> yield on. We have moved away from that upstream for quite some time now.
> Ensure your DB connection string starts with 'mysql+pymysql://' to use the
> PyMySQL driver instead.
>
> On Tue, May 10, 2016 at 4:54 PM, Mike Bayer  wrote:
>
>>
>>
>> On 05/10/2016 04:57 PM, Divya wrote:
>>
>>> Hi,
>>> I am trying to run this rally test on stable/kilo
>>>
>>> https://github.com/openstack/rally/blob/master/samples/tasks/scenarios/neutron/create_and_delete_routers.json
>>>
>>> with concurrency 50 and iterations 2000.
>>>
>>> This test basically creates routers and subnets
>>> and then calls
>>> router-interface-add
>>> router-interface-delete
>>>
>>>
>>> And I am running this against the 3rd party Nuage plugin.
>>>
>>> In the NuagePlugin:
>>>
>>> add_router_interface is something like this:
>>> 
>>> super().add_router_interface
>>> try:
>>>some calls to external rest server
>>>super().delete_port
>>> except:
>>>
>>> remove_router_interface:
>>> ---
>>> super().remove_router_interface
>>> some calls to external rest server
>>> super().create_port()
>>> some calls to external rest server
>>>
>>>
>>> If I comment out delete_port in add_router_interface, I am not hitting
>>> the db lockout issue.
>>> delete_port and the other operations are not within any transaction,
>>> so I am not sure why this is leading to db lock timeouts on the insert
>>> into routerport.
>>>
>>> error trace
>>> http://paste.openstack.org/show/496626/
>>>
>>>
>>>
>>> Really appreciate any help on this.
>>>
>>
>>
>> I'm not on the Neutron team, but in general, Openstack applications
>> should be employing retry logic internally which anticipates database
>> deadlocks like these and retries the operation.  I'd report this stack
>> trace (especially if it is reproducible) as a bug to this plugin's
>> launchpad project.
>>
>>
>>
>>
>>> Thanks,
>>> Divya
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] gate testing with lvm imagebackend

2016-05-10 Thread Chris Friesen

On 05/10/2016 04:47 PM, Matt Riedemann wrote:

On 5/10/2016 5:14 PM, Chris Friesen wrote:

On 05/10/2016 03:51 PM, Matt Riedemann wrote:

For the libvirt imagebackend refactor that mdbooth is working on, I
have a POC
devstack-gate change which runs with the lvm imagebackend in the
libvirt driver
[1].

The test results are mostly happy except for anything related to migrate
(including resize to same host) [2][3].

That's because we're not testing with boot from volume [4].

This is a weird capability wrinkle that is not clear from the API,
you'll only
find out that you can't migrate/resize on this host that's using lvm
when it
fails. We can't even disable this in tempest really since there isn't
a flag for
'only-supports-resize-for-volume-backed-instances'. So this job would
just have
to disable any tests that have anything to do with resize/migrate,
which kind of
sucks since that's what we wanted to test going into the libvirt
imagebackend
refactor.
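
[Editor's note: for anyone reproducing this locally, the libvirt LVM image
backend is selected via nova.conf; a minimal devstack local.conf sketch
follows. The volume group name below is an assumption, not part of the
original mail.]

    [[post-config|$NOVA_CONF]]
    [libvirt]
    images_type = lvm
    images_volume_group = stack-volumes-nova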



For what it's worth, we've got internal patches to enable cold migration
and resize for LVM-backed instances.  We've also got a proof of concept
to enable thin-provisioned LVM to get rid of the huge
wipe-volume-on-deletion cost.

Pretty sure we'd be happy to contribute these if there is interest.
Last time I brought up some of these there didn't seem to be much.



I'd be interested if you want to push the changes up as a WIP, then we could run
my devstack-gate change with your series as a dependency and see what kind of
fallout there is.


I'll start the ball rolling...I'm in the middle of something so might take some 
time.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Seeing db lockout issues in neutron add_router_interface

2016-05-10 Thread Kevin Benton
In addition to what Mike said, "Lock wait timeout exceeded" sounds like an
error from the C-based MySQL driver, whose blocking calls eventlet cannot
yield on. We have moved away from that upstream for quite some time now.
Ensure your DB connection string starts with 'mysql+pymysql://' to use the
PyMySQL driver instead.
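
[Editor's note: concretely, this is the [database] connection option in
neutron.conf; a sketch follows, with placeholder credentials, host, and
database name.]

    [database]
    connection = mysql+pymysql://neutron:SECRET@127.0.0.1/neutron?charset=utf8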

On Tue, May 10, 2016 at 4:54 PM, Mike Bayer  wrote:

>
>
> On 05/10/2016 04:57 PM, Divya wrote:
>
>> Hi,
>> I am trying to run this rally test on stable/kilo
>>
>> https://github.com/openstack/rally/blob/master/samples/tasks/scenarios/neutron/create_and_delete_routers.json
>>
>> with concurrency 50 and iterations 2000.
>>
>> This test basically creates routers and subnets
>> and then calls
>> router-interface-add
>> router-interface-delete
>>
>>
>> And I am running this against the 3rd party Nuage plugin.
>>
>> In the NuagePlugin:
>>
>> add_router_interface is something like this:
>> 
>> super().add_router_interface
>> try:
>>some calls to external rest server
>>super().delete_port
>> except:
>>
>> remove_router_interface:
>> ---
>> super().remove_router_interface
>> some calls to external rest server
>> super().create_port()
>> some calls to external rest server
>>
>>
>> If I comment out delete_port in add_router_interface, I am not hitting
>> the db lockout issue.
>> delete_port and the other operations are not within any transaction,
>> so I am not sure why this is leading to db lock timeouts on the insert
>> into routerport.
>>
>> error trace
>> http://paste.openstack.org/show/496626/
>>
>>
>>
>> Really appreciate any help on this.
>>
>
>
> I'm not on the Neutron team, but in general, Openstack applications should
> be employing retry logic internally which anticipates database deadlocks
> like these and retries the operation.  I'd report this stack trace
> (especially if it is reproducible) as a bug to this plugin's launchpad
> project.
>
>
>
>
>> Thanks,
>> Divya
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support

2016-05-10 Thread Xav Paice
Sorry to dig up an ancient thread.

I see the spec has been implemented, and in the os_neutron repo I see
configs for the Haproxy driver for LOADBALANCERV2 - but not Octavia.  Am I
missing something here?

On 29 January 2016 at 10:03, Major Hayden  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 01/26/2016 01:48 PM, Kevin Carter wrote:
> > I personally think it'd be great to see this feature in OSA and I look
> forward to reviewing the spec.
>
> The first draft of the spec is in Gerrit:
>
>   https://review.openstack.org/#/c/273749/
>
> I appreciate any and all feedback! :)
>
> - --
> Major Hayden
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQIcBAEBCAAGBQJWqoIyAAoJEHNwUeDBAR+xjuEP/2TSZoziJFTbKCsu3LvfkXir
> qaC/J0XZTSZVfCFB1gjqdXAsSYQT0T8gxRvEAtWkjXQ9IjbNdn+JP1TS5KntZnLc
> PB5+Fg90zj00IG7RHTaeMirv9FHqRwVOwI8AQmLZRovD+t8QFIGMAFWzHYGHzRoP
> VigvNau1HEgMs525cA2cZwG0AaC2dmt5pnuWpX9sPtUklbGq4xlZgjOi5RZT3wjO
> yzG4LqimVpWnYhKB1WxE4VCwzFXSkvZ8QmNoAjj/yNJafyV0f/aQn9Zg0yZ3JGi6
> OZtpUrhS3NA+goog1BI5gObfo+cRGUUIkhSBzXgPOWAqXr19uMXhWWabAf5BhQFv
> 2I4l+mkwU7cVa5FMKIgOdT/CUd9Cs1hLKYVYePrEoFDRagZpKbcC7ozeWdSJb6ri
> GK766Wm9ypLshI75fZTsnzLRaJEGk25PpmggYG9afnS6lP1JMlZ78opiVGpu5ISb
> H+aWQDhZopG8wxBkQ21xpS3NaG/oIfVst0R6zrBpxTznRSPA/gnqSN8YHdHmr8M4
> z+zxXxeU7iSG1uc5Nu4rUrVydXId8Cm9lwH33VDqs0MOJmawpxu7HeK2fk2J4JQH
> Nqky4EQZu9lWVjwEyfrnFYNY/xxnolboQTCC/cvDokwp+NHMsZmnUdzbaPFhrayX
> 8u41SM4i4S+ffOURAvt+
> =jZxV
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Jobs failing : "No matching distribution found for "

2016-05-10 Thread Ian Wienand
So it seems the just released pip 8.1.2 has brought in a new version
of setuptools with it, which creates canonical names per [1] by
replacing "." with "-".

The upshot is that pip is now looking for the wrong name on our local
mirrors.  e.g.

---
 $ pip --version
pip 8.1.2 from /tmp/foo/lib/python2.7/site-packages (python 2.7)
$ pip --verbose  install --trusted-host mirror.ord.rax.openstack.org -i 
http://mirror.ord.rax.openstack.org/pypi/simple 'oslo.config>=3.9.0'
Collecting oslo.config>=3.9.0
  1 location(s) to search for versions of oslo.config:
  * http://mirror.ord.rax.openstack.org/pypi/simple/oslo-config/
  Getting page http://mirror.ord.rax.openstack.org/pypi/simple/oslo-config/
  Starting new HTTP connection (1): mirror.ord.rax.openstack.org
  "GET /pypi/simple/oslo-config/ HTTP/1.1" 404 222
  Could not fetch URL 
http://mirror.ord.rax.openstack.org/pypi/simple/oslo-config/: 404 Client Error: 
Not Found for url: http://mirror.ord.rax.openstack.org/pypi/simple/oslo-config/ 
- skipping
  Could not find a version that satisfies the requirement oslo.config>=3.9.0 
(from versions: )
---

(note oslo-config, not oslo.config).  Compare to

---
$ pip --verbose install --trusted-host mirror.ord.rax.openstack.org -i 
http://mirror.ord.rax.openstack.org/pypi/simple 'oslo.config>=3.9.0'
You are using pip version 6.0.8, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting oslo.config>=3.9.0
  Getting page http://mirror.ord.rax.openstack.org/pypi/simple/oslo.config/
  Starting new HTTP connection (1): mirror.ord.rax.openstack.org
  "GET /pypi/simple/oslo.config/ HTTP/1.1" 200 2491
---

I think infra jobs that run on bare-precise are hitting this
currently, because that image was just built.  Other jobs *might* be
isolated from this for a bit, until the new pip gets out there on
images, but "winter is coming", as they say...

There is [2] available to make bandersnatch use the new names.
However, I wonder if this might have the effect of breaking the
mirrors for old versions of pip that ask for the "."?

pypi proper does not seem affected, just our mirrors.
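
[Editor's note: for reference, the normalization pip 8.1.2 applies is the
PEP 503 rule; a short sketch of the same transform:]

    import re

    # PEP 503: runs of "-", "_" and "." collapse to a single "-",
    # and the result is lowercased.
    def canonicalize_name(name):
        return re.sub(r"[-_.]+", "-", name).lower()

    print(canonicalize_name("oslo.config"))  # -> oslo-config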

I think working with bandersnatch to get a fixed version ASAP is probably
the best way forward, rather than us trying to pin to old pip versions.

-i

[1] https://www.python.org/dev/peps/pep-0503/
[2] 
https://bitbucket.org/pypa/bandersnatch/pull-requests/20/fully-implement-pep-503-normalization/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][doc]An error in OpenStack architecture picture

2016-05-10 Thread Lana Brindley
On 11/05/16 12:58, hao wang wrote:
> Hi, stackers,
> 
> I found an error in the OpenStack architecture picture in the docs: 
> http://docs.openstack.org/openstack-ops/content/architecture.html.
> 
> In the OpenStack Logical Architecture picture, Cinder's database is labeled 
> "nova database", which is not correct. It should be "cinder database".
> 

I created a bug for you: 
https://bugs.launchpad.net/openstack-manuals/+bug/1580424

L

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Gap between host cpu features and guest cpu's

2016-05-10 Thread Jin, Yuntong
Hi everyone,

Currently nova exposes all the host CPU instruction set extensions available
on the compute node in the host state, and there is a scheduler filter
`ComputeCapabilitiesFilter` which looks at these.

But the limitation here is:
the CPU instruction set used by ComputeCapabilitiesFilter should be the guest's 
view instead of the host's.

An admin may use a specific set of CPU instructions when deploying an instance
to make it migratable in a heterogeneous cloud.
This is actually by design in nova, as nova uses baselineCPU and allows the
guest CPU instruction features to be passed/configured for an instance.

Shall we add a string "guest_features" to the ``ComputeNode`` object as 
``ComputeNode:cpu_info:guest_features``,
and let ComputeCapabilitiesFilter use guest_features instead of the host 
features here?

Is this a real gap? And is the above easy fix the right way?
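
[Editor's note: an illustrative sketch only of the proposed check -- this is
not existing Nova code; the 'guest_features' key follows the proposal above
and the helper name is made up.]

    # Required features would come from the flavor/image extra specs;
    # advertised features from the proposed
    # ComputeNode.cpu_info['guest_features'] field.
    def host_passes(required_features, cpu_info):
        advertised = set(cpu_info.get('guest_features') or
                         cpu_info.get('features') or [])
        return set(required_features).issubset(advertised)

    print(host_passes(['aes', 'avx'],
                      {'guest_features': ['aes', 'avx', 'sse4.2']}))  # True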

Thanks
-yuntongjin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][doc]An error in OpenStack architecture picture

2016-05-10 Thread hao wang
Hi, stackers,

I found an error in OpenStack architecture picture in docs:
http://docs.openstack.org/openstack-ops/content/architecture.html.

In OpenStack Logical Architecture picture, Cinder's database is named "nova
database", it's not correct. It should be "cinder database".
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-05-10 Thread Alex Xu
Hi,

We have the weekly Nova API meeting today. The meeting is held Wednesdays at
13:00 UTC, and the IRC channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Smaug] Meeting time change

2016-05-10 Thread ????
Got it


-- Original --
From:  "xiangxinyong";
Date:  Tue, May 10, 2016 10:45 PM
To:  "openstack-dev@lists.openstack.org"; 

Subject:  Re: [openstack-dev] [Smaug] Meeting time change



About the Smaug meeting time:

- 09:00 UTC for east, biweekly-odd

This time is very good for the eastern side of the globe.

Thanks, Saggi.

Best Regards,
xiangxinyong



-- Original --
From:  "Saggi Mizrahi";;
Date:  Tue, May 10, 2016 09:04 PM
To:  "openstack-dev@lists.openstack.org"; 
Cc:  "Eran Gampel"; "yinwei 
(E)"; 
Subject:  [openstack-dev] [Smaug] Meeting time change



  
Hi everyone,

We would like to make the Smaug meeting weekly instead of biweekly, and make
it so that one week is at a time preferable for the eastern side of the globe
and one week for the western side of the globe.

The current time is Tuesdays at 14:00 UTC, which is 22:00 in China and
07:00 PDT (if my calculations are correct).

I'm suggesting that we change it to:
- 15:00 UTC for west, biweekly-even
- 09:00 UTC for east, biweekly-odd

Are there any better suggestions?
Am I suggesting times that collide with other projects?
Please send approvals or suggestions, but remember to specify if you are going
to come to the east or west meeting.

Thank you all,
Let's build Smaug together!
 
 
-
 
 This email and any files transmitted and/or attachments with it are 
confidential and proprietary information of
 Toga Networks Ltd., and intended solely for the use of the individual or 
entity to whom they are addressed.
 If you have received this email in error please notify the system manager. 
This message contains confidential
 information of Toga Networks Ltd., and is intended only for the individual 
named. If you are not the named
 addressee you should not disseminate, distribute or copy this e-mail. Please 
notify the sender immediately
 by e-mail if you have received this e-mail by mistake and delete this e-mail 
from your system. If you are not 
 the intended recipient you are notified that disclosing, copying, distributing 
or taking any action in reliance on
 the contents of this information is strictly prohibited. 
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [senlin] [keystone] [ceilometer] [telemetry] Questions about api-ref launchpad bugs

2016-05-10 Thread Qiming Teng
On Tue, May 10, 2016 at 07:53:19AM -0500, Anne Gentle wrote:
> Great questions, so I'm copying the -docs and -dev lists to make sure
> people know the answers.
> 
> On Tue, May 10, 2016 at 5:14 AM, Atsushi SAKAI 
> wrote:
> 
> > Hello Anne
> >
> >   I have several question when I am reading through etherpad's (in
> > progress).
> >   It would be appreciated to answer these questions.
> >
> > 1)Should api-ref launchpad **bugs** be moved to each modules
> >   (like keystone, nova etc)?
> >   Also, this should be applied to moved one's only or all components?
> >(compute, baremetal Ref.2)
> >
> >   Ref.
> > https://etherpad.openstack.org/p/austin-docs-newtonplan
> > API site bug list cleanup: move specific service API ref bugs to
> > project's Launchpad
> >
> >   Ref.2
> > http://developer.openstack.org/api-ref/compute/
> > http://developer.openstack.org/api-ref/baremetal/
> 
> 
> Yes! I definitely got agreement from nova team that they want them. Does
> anyone have a Launchpad script that could help with the bulk filter/export?
> Also, are any teams concerned about taking on their API reference bugs?
> Let's chat.
> 
> 
> >
> >
> > 2)Status of API-Ref
> >   a)Why keystone and senlin are no person at this moment?
> >
> >
> >
> Keystone -- after the Summit, keystone had someone sign up [1], but sounds
> like we need someone else. Brant, can you help us find someone?
> 
> Senlin -- Qiming Teng had asked a lot of questions earlier in the process
> and tested the instructions. Qiming had good concerns about personal
> bandwidth limits following along with all the changes. Now that it's
> settled, I'll follow up (and hoping the senlin team is reading the list).

Well, I should have spoken up that we are already moving in that
direction. So far we have migrated quite a few APIs into the new format.
We will let the team know when we have finished the migration for senlin.

While working on this migration, I do have some suggestions for improving
the Sphinx extensions. For example, whether a parameter is optional
should be specified where it is referenced (i.e., the RST files)
rather than where it is defined (i.e., the parameters.yaml file). Other
than that, the migration is smooth.

We are not doing a batch commit for the migration. We see this as another
chance to clean up any mistakes in the API docs and/or API code, so we are
manually adding API docs one resource at a time.
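
[Editor's note: for context, in the os-api-ref framework a parameter's
optionality lives in parameters.yaml, roughly as below (the parameter name is
made up), and the RST files only reference the entry from a
".. rest_parameters:: parameters.yaml" block -- which is the behaviour Qiming
is commenting on.]

    limit:
      description: |
        Requests a page size of items.
      in: query
      required: false
      type: integer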

Regards,
  Qiming 
 
> >   b)What is your plan for sahara and ceilometer?
> >  (It seems already exist the document.)
> >
> 
> Yes, these are two I had seen already have RST, but they do not use the
> helpful Sphinx extensions.
> 
> Sahara -- Mike McCune, we should chat about the plans. Are you okay with
> moving towards the common framework and editing the current RST files to
> use the rest_method and rest_parameters Sphinx directives?
> 
> Ceilometer -- sorry, Julien, I hadn't reached out individually to you.
> Could you let me know your plans for the RST API reference docs?
> 
> 
> >   c)When is the table's status changed to "Done"?
> >  nova (compute) and ironic (baremetal) seems first patch merged
> >  and see the document already.
> >
> 
> I'll change those two to Done.
> 
> Thanks for asking -
> Anne
> 
> 
> >
> >
> >   Ref.
> > [1]
> > https://wiki.openstack.org/wiki/Documentation/Migrate#API_Reference_Plan
> >
> >
> > Thanks
> >
> > Atsushi SAKAI
> >
> 
> 
> 
> -- 
> Anne Gentle
> www.justwriteclick.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] gate testing with lvm imagebackend

2016-05-10 Thread Matt Riedemann

On 5/10/2016 5:47 PM, Matt Riedemann wrote:

On 5/10/2016 5:14 PM, Chris Friesen wrote:

On 05/10/2016 03:51 PM, Matt Riedemann wrote:

For the libvirt imagebackend refactor that mdbooth is working on, I
have a POC
devstack-gate change which runs with the lvm imagebackend in the
libvirt driver
[1].

The test results are mostly happy except for anything related to migrate
(including resize to same host) [2][3].

That's because we're not testing with boot from volume [4].

This is a weird capability wrinkle that is not clear from the API,
you'll only
find out that you can't migrate/resize on this host that's using lvm
when it
fails. We can't even disable this in tempest really since there isn't
a flag for
'only-supports-resize-for-volume-backed-instances'. So this job would
just have
to disable any tests that have anything to do with resize/migrate,
which kind of
sucks since that's what we wanted to test going into the libvirt
imagebackend
refactor.



For what it's worth, we've got internal patches to enable cold migration
and resize for LVM-backed instances.  We've also got a proof of concept
to enable thin-provisioned LVM to get rid of the huge
wipe-volume-on-deletion cost.

Pretty sure we'd be happy to contribute these if there is interest.
Last time I brought up some of these there didn't seem to be much.

Chris

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I'd be interested if you want to push the changes up as a WIP, then we
could run my devstack-gate change with your series as a dependency and
see what kind of fallout there is.

I also need to check if there are any tempest tests that test
resize/migrate from a volume-backed instance and if those are passing on
this.



We don't have any Tempest tests which test resizing a volume-backed 
instance. We did have two redundant resize tests though, so I changed 
one of those to use a volume-backed instance [1]. If that works I'll add 
it as a dependency for my d-g test patch.


[1] https://review.openstack.org/#/c/314816/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Fox, Kevin M
Thomas, fully agree. :)

Rayson Ho, even with containers, distro packages are preferable. It's really 
difficult at the moment to ensure your containers don't have security 
vulnerabilities baked into them. None of the Docker repos I've seen really 
help you with automating this. The only trick I've found is to set up a Jenkins 
server that periodically runs 'docker run -it --rm containername [apt-get 
upgrade -y || yum upgrade -y]', checks the results to see if it does anything, 
and if it does, forces a rebuild of the container using the native tools. Then 
you ensure you either get notified or have some kind of orchestration system 
that notices the new containers and does the right rolling upgrades for you.

This process gets much more complicated if you're using a random 
language-provided tool on top of the distro-provided tools, as there are an 
increasing number of sources to check.

Thanks,
Kevin

From: Thomas Goirand [z...@debian.org]
Sent: Tuesday, May 10, 2016 5:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] supporting Go

On 05/10/2016 04:19 PM, Rayson Ho wrote:
> I mentioned in earlier replies but I may as well mention it again: a
> package manager gives you no advantage in a language toolchain like Go

Oh... You mean just like in Python where we have pip, Perl where we have
CPAN, PHP where we have PEAR, or JavaScript where we have
gulp/npm/grunt/you-name-it?

Each and every language think it's "special" and that no distro should
be involved. Of course, the reality is different.

> IMO, the best use case of not using a package manager is when deploying
> into containers
> -- would you prefer to just drop a static binary of your
> Go code, or you would rather install "apt-get" into a container image,

For anything serious, the latter, of course! The former is only for
hackers calling themselves devs, who don't know about ops, playing and
thinking they're the cool guys. This fashion of "we're in a container,
so it's OK to do everything dirty" will soon be regarded by everyone as
one big mistake.

If you're using containers the wrong way, you lose:
1/ Version accountability
2/ Security audit
3/ Build reproducibility

Installing from a $language manager instead of distro packages, be it in
containers or not, will almost always make you download random blobs
from the Internet, which are of course changing over time without any
notice, losing the above 3 important features.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Thomas Goirand
On 05/10/2016 04:19 PM, Rayson Ho wrote:
> I mentioned in earlier replies but I may as well mention it again: a
> package manager gives you no advantage in a language toolchain like Go

Oh... You mean just like in Python where we have pip, Perl where we have
CPAN, PHP where we have PEAR, or JavaScript where we have
gulp/npm/grunt/you-name-it?

Each and every language think it's "special" and that no distro should
be involved. Of course, the reality is different.

> IMO, the best use case of not using a package manager is when deploying
> into containers
> -- would you prefer to just drop a static binary of your
> Go code, or you would rather install "apt-get" into a container image,

For anything serious, the latter, of course! The former is only for
hackers calling themselves devs, who don't know about ops, playing and
thinking they're the cool guys. This fashion of "we're in a container,
so it's OK to do everything dirty" will soon be regarded by everyone as
one big mistake.

If you're using containers the wrong way, you lose:
1/ Version accountability
2/ Security audit
3/ Build reproducibility

Installing from a $language manager instead of distro packages, be it in
containers or not, will almost always make you download random blobs
from the Internet, which are of course changing over time without any
notice, losing the above 3 important features.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Seeing db lockout issues in neutron add_router_interface

2016-05-10 Thread Mike Bayer



On 05/10/2016 04:57 PM, Divya wrote:

Hi,
I am trying to run this rally test on stable/kilo
https://github.com/openstack/rally/blob/master/samples/tasks/scenarios/neutron/create_and_delete_routers.json

with concurrency 50 and iterations 2000.

This test basically creates routers and subnets
and then calls
router-interface-add
router-interface-delete


And I am running this against the 3rd party Nuage plugin.

In the NuagePlugin:

add_router_interface is something like this:

super().add_router_interface
try:
   some calls to external rest server
   super().delete_port
except:

remove_router_interface:
---
super().remove_router_interface
some calls to external rest server
super().create_port()
some calls to external rest server


If I comment out delete_port in add_router_interface, I am not hitting
the db lockout issue.
delete_port and the other operations are not within any transaction,
so I am not sure why this is leading to db lock timeouts on the insert into routerport.

error trace
http://paste.openstack.org/show/496626/



Really appreciate any help on this.



I'm not on the Neutron team, but in general, Openstack applications 
should be employing retry logic internally which anticipates database 
deadlocks like these and retries the operation.  I'd report this stack 
trace (especially if it is reproducible) as a bug to this plugin's 
launchpad project.
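
[Editor's note: oslo.db ships a decorator for exactly this kind of retry; a
minimal sketch follows. The function name and body are placeholders -- only
the decorator usage is the point.]

    from oslo_db import api as oslo_db_api

    # Retry the decorated DB operation when a deadlock is detected,
    # instead of letting the error bubble up to the API caller.
    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def add_router_interface_db(context, router_id, port_id):
        pass  # the INSERT into routerport would live here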






Thanks,
Divya














__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-10 Thread Tom Fifield



On 11/05/16 09:04, Dan Smith wrote:

Here it is :)

https://wiki.openstack.org/wiki/Special:AncientPages


Great, I see at least one I can nuke on the first page.

Note that I don't seem to have delete powers on the wiki. That's surely
a first step in letting people maintain the relevance of things on the wiki.


Looks like that was just fixed ... :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Thomas Goirand
On 05/10/2016 08:42 AM, Tim Bell wrote:
> I hope that the packaging technologies are considered as part of the TC
> evaluation of a new language. While many alternative approaches are
> available, a language which could not be packaged into RPM or DEB would
> be an additional burden for distro builders and deployers.
> 
> Does Go present any additional work compared to Python in this area ?
> 
> Tim

As I wrote earlier, the main issue is that Go doesn't understand the
concept of shared libraries (at least, last time I checked, and that was
a few months ago). This means that whenever we get a new version of
library X, everything that depends on it must be rebuilt. This is *very*
painful.

Also, this means that every single binary contains an embedded copy of
every Go lib it uses. This can potentially be a security nightmare. This
also means we're wasting a lot of resources.

Hopefully, we'll get there. According to someone else in this thread, Go
already has basic support for shared libs, and eventually we'll make it
happen in distros too. But as far as I know, it's not the case yet, and
it will take a lot of time (months? years?) to get it right.

Plus all what Matthias wrote...

Cheers,

Thomas Goirand (zigo)

P.S: Again, I'm all but a Go specialist, so I hope I'm not writing too
many wrong things here... Feel free to correct me if I'm wrong.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Thomas Goirand
On 05/10/2016 01:43 AM, Rayson Ho wrote:
> Using a package manager won't buy us anything, and like Clint raised,
> the Linux distros are way too slow in picking up new Go releases.

Let's check for the facts and compare:
https://golang.org/doc/devel/release.html

with:
https://tracker.debian.org/pkg/golang

For the 1.5.x releases, we can see that Debian Sid is lagging at most 1
month behind. For 1.6.x, it's less than 2 weeks. In my book, those are
*very quick* transitions, especially if you consider all the reverse
dependencies that could potentially break with these last versions.

So, either you have a very limited definition of "distro" (ie: RHEL
only? Ubuntu LTS only?), or anything older than a week is too old for
you. In both cases, that's not reasonable from my viewpoint.

Last, if that's still too slow for you, I'm sure the Debian Go packaging
team will accept contributions (which may later be backported wherever
Canonical people want to store it, if you are an Ubuntu user).

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Newton midycle planning

2016-05-10 Thread Morgan Fainberg
On Wed, Apr 13, 2016 at 7:07 PM, Morgan Fainberg 
wrote:

> It is that time again, the time to plan the Keystone midcycle! Looking at
> the schedule [1] for Newton, the weeks that make the most sense look to be
> (not in preferential order):
>
> R-14 June 27-01
> R-12 July 11-15
> R-11 July 18-22
>
> As usual this will be a 3 day event (probably Wed, Thurs, Fri), and based
> on previous attendance we can expect ~30 people to attend. Based upon all
> the information (other midcycles, other events, the US July4th holiday), I
> am thinking that week R-12 (the week of the newton-2 milestone) would be
> the best offering. Weeks before or after these three tend to push too close
> to the summit or too far into the development cycle.
>
> I am trying to arrange for a venue in the Bay Area (most likely will be
> South Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) since we
> have done east coast and central over the last few midcycles.
>
> Please let me know your thoughts / preferences. In summary:
>
> * Venue will be Bay Area (more info to come soon)
>
> * Options of weeks (in general subjective order of preference): R-12,
> R-11, R-14
>
> Cheers,
> --Morgan
>
> [1] http://releases.openstack.org/newton/schedule.html
>

We have an update for the midcycle planning!

First of all, I want to thank Cisco for hosting us for this midcycle. The
dates will be in R-11 [1], Wednesday-Friday, July 20-22 (expect to be around
for a full day on the 20th and at least a half day on the 22nd). The address
will be 170 W Tasman Dr, San Jose, CA 95134. The exact building and room #
will be determined soon. Expect a place (wiki, Google form, etc.) to be posted
this week so we can collect real numbers of those who will be joining us.

Thanks for being patient with the planning. We should have ~35 spots for
this midcycle.

Cheers,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-10 Thread Dan Smith
> Here it is :)
> 
> https://wiki.openstack.org/wiki/Special:AncientPages

Great, I see at least one I can nuke on the first page.

Note that I don't seem to have delete powers on the wiki. That's surely
a first step in letting people maintain the relevance of things on the wiki.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L2gw

2016-05-10 Thread Sukhdev Kapur
Yes, I am on it. I was waiting for a green light from a few folks, which I
got this morning.
So, I will be working on releasing it sometime tomorrow.

Hope that is OK with everybody.

-Sukhdev


On Mon, May 9, 2016 at 6:46 PM, Armando M.  wrote:

>
>
> On 9 May 2016 at 18:03, Gary Kotton  wrote:
>
>> Hi,
>> Are there plans to cut a a stable/mitaka version for the l2gw?
>> https://github.com/openstack/networking-l2gw
>> Thanks
>> Gary
>>
>
> I know Sukhdev was working on it.
>
>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] gate testing with lvm imagebackend

2016-05-10 Thread Matt Riedemann

On 5/10/2016 5:14 PM, Chris Friesen wrote:

On 05/10/2016 03:51 PM, Matt Riedemann wrote:

For the libvirt imagebackend refactor that mdbooth is working on, I
have a POC
devstack-gate change which runs with the lvm imagebackend in the
libvirt driver
[1].

The test results are mostly happy except for anything related to migrate
(including resize to same host) [2][3].

That's because we're not testing with boot from volume [4].

This is a weird capability wrinkle that is not clear from the API,
you'll only
find out that you can't migrate/resize on this host that's using lvm
when it
fails. We can't even disable this in tempest really since there isn't
a flag for
'only-supports-resize-for-volume-backed-instances'. So this job would
just have
to disable any tests that have anything to do with resize/migrate,
which kind of
sucks since that's what we wanted to test going into the libvirt
imagebackend
refactor.



For what it's worth, we've got internal patches to enable cold migration
and resize for LVM-backed instances.  We've also got a proof of concept
to enable thin-provisioned LVM to get rid of the huge
wipe-volume-on-deletion cost.

Pretty sure we'd be happy to contribute these if there is interest.
Last time I brought up some of these there didn't seem to be much.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I'd be interested if you want to push the changes up as a WIP, then we 
could run my devstack-gate change with your series as a dependency and 
see what kind of fallout there is.


I also need to check if there are any tempest tests that test 
resize/migrate from a volume-backed instance and if those are passing on 
this.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-10 Thread Tom Fifield



On 11/05/16 02:48, Dan Smith wrote:

Hmm... that's unfortunate, as we were trying to get some of our less
ephemeral items out of random etherpads and into the wiki (which has the
value of being google indexed).


Yeah, I'm kinda surprised anyone would consider a wiki-less world. I'm
definitely bummed at the thought of losing it.


The Google indexing is also what makes the wiki so painful... After 6
years most of the content there is inaccurate or outdated. It's a
massive effort to clean it up without breaking the Google juice, and
nobody has the universal knowledge to determine if pages are still
accurate or not. We are bitten every day by newcomers finding wrong
information on the wiki and acting on it. It's getting worse every
day we keep on using it.


Sure, I think we all feel the pain of the stale information on the wiki.
What if we were to do what we do for bug or review purges and make a
list of pages, in reverse order of how recently they've been updated?
Then we can have a few sprints to tag obviously outdated things to
purge, and perhaps some things that just need some freshening.


Here it is :)

https://wiki.openstack.org/wiki/Special:AncientPages

MediaWiki also has some other useful tools installed by default:

https://wiki.openstack.org/wiki/Special:SpecialPages

There is also a series of plugins, such as the ones for "patrolling" 
edits, and bots for mass updates and triggers that would potentially be 
helpful.



There are a lot of nova-related things on the wiki that are the
prehistory equivalent of specs, most of which are very misleading to
people about the current state of things. I would think we could purge a
ton of stuff like that pretty quickly. I'll volunteer to review such a
list from the nova perspective.


* Deprecate the current wiki and start over with another wiki (with
stronger ACL support ?)


I'm somewhat surprised that this is an issue, because I thought that the
wiki requires an ubuntu login. Are spammers really getting ubuntu logins
so they can come over and deface our wiki?

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Need a volunteer for documentation liaisons

2016-05-10 Thread Anthony Chow
HongBin,

What is the skill requirement or credential for this documentation liaison
role? I am interested in doing this.

Anthony.

On Tue, May 10, 2016 at 3:24 PM, Hongbin Lu  wrote:

> Hi team,
>
> We need a volunteer to act as liaison for the documentation team. Just let me
> know if you are interested in this role.
>
> Best regards,
> Hongbin
>
> > -Original Message-
> > From: Lana Brindley [mailto:openst...@lanabrindley.com]
> > Sent: May-10-16 5:47 PM
> > To: OpenStack Development Mailing List; enstack.org
> > Subject: [openstack-dev] [PTL][docs]Update your cross-project liaison!
> >
> > Hi everyone,
> >
> > OpenStack uses cross-project liaisons to ensure that projects are
> > talking to each other effectively, and the docs CPLs are especially important
> > to the documentation team to ensure we have accurate docs. Can all PTLs
> > please take a moment to check (and update if necessary) their CPL
> > listed here:
> > https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
> >
> > Thanks a bunch!
> >
> > Lana
> >
> > --
> > Lana Brindley
> > Technical Writer
> > Rackspace Cloud Builders Australia
> > http://lanabrindley.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Gregory Haynes
On Tue, May 10, 2016, at 11:10 AM, Hayes, Graham wrote:
> On 10/05/2016 01:01, Gregory Haynes wrote:
> >
> > On Mon, May 9, 2016, at 03:54 PM, John Dickinson wrote:
> >> On 9 May 2016, at 13:16, Gregory Haynes wrote:
> >>>
> >>> This is a bit of an aside but I am sure others are wondering the same
> >>> thing - Is there some info (specs/etherpad/ML thread/etc) that has more
> >>> details on the bottleneck you're running in to? Given that the only
> >>> clients of your service are the public facing DNS servers I am now even
> >>> more surprised that you're hitting a python-inherent bottleneck.
> >>
> >> In Swift's case, the summary is that it's hard[0] to write a network
> >> service in Python that shuffles data between the network and a block
> >> device (hard drive) and effectively utilizes all of the hardware
> >> available. So far, we've done very well by fork()'ing child processes,
> >> using cooperative concurrency via eventlet, and basic "write more
> >> efficient code" optimizations. However, when it comes down to it,
> >> managing all of the async operations across many cores and many drives
> >> is really hard, and there just isn't a good, efficient interface for
> >> that in Python.
> >
> > This is a pretty big difference from hitting an unsolvable performance
> > issue in the language and instead is a case of language preference -
> > which is fine. I don't really want to fall in to the language-comparison
> > trap, but I think more detailed reasoning for why it is preferable over
> > python in specific use cases we have hit is good info to include /
> > discuss in the document you're drafting :). Essentially its a matter of
> > weighing the costs (which lots of people have hit on so I won't) with
> > the potential benefits and so unless the benefits are made very clear
> > (especially if those benefits are technical) its pretty hard to evaluate
> > IMO.
> >
> > There seemed to be an assumption in some of the designate rewrite posts
> > that there is some language-inherent performance issue causing a
> > bottleneck. If this does actually exist then that is a good reason for
> > rewriting in another language and is something that would be very useful
> > to clearly document as a case where we support this type of thing. I am
> > highly suspicious that this is the case though, but I am trying hard to
> > keep an open mind...
> 
> The way this component works makes it quite difficult to make any major
> improvement.

OK, I'll bite.

I had a look at the code and there's a *ton* of low hanging fruit. I
decided to hack in some fixes or emulation of fixes to see whether I
could get any major improvements. Each test I ran 4 workers using
SO_REUSEPORT and timed doing 1k axfr's with 4 in parallel at a time and
recorded 5 timings. I also added these changes on top of one another in
the order they follow.

Base timings: [9.223, 9.030, 8.942, 8.657, 9.190]

Stop spawning a thread per request - there are a lot of ways to do this
better, but let's not even mess with that and just literally move the
thread spawning that happens per request because it's a silly idea here:
[8.579, 8.732, 8.217, 8.522, 8.214] (almost 10% increase).

Stop instantiating oslo config object per request - this should be a no
brainer, we don't need to parse config inside of a request handler:
[8.544, 8.191, 8.318, 8.086] (a few more percent).
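
To be concrete, the fix is just the standard pattern below (a minimal
sketch with a made-up option name, not the actual MiniDNS code): options
are registered and parsed once at start-up, and the handler only reads the
module-level CONF.

    from oslo_config import cfg

    # Module-level: options registered once; the service parses CONF once
    # at start-up, never inside a request handler.
    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('tcp_recv_timeout', default=1)],
                       group='mdns')

    def handle_request(payload):
        # The handler only reads the already-parsed config - no fresh
        # ConfigOpts() object built per request.
        timeout = CONF.mdns.tcp_recv_timeout
        return payload, timeout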

Now, the slightly less low hanging fruit - there are 3 round trips to
the database *every request*. This is where the vast majority of request
time is spent (not in python). I didn't actually implement a full on
cache (I just hacked around the db queries), but this should be trivial
to do since designate does know when to invalidate the cache data. Some
numbers on how much a warm cache will help:

Caching zone: [5.968, 5.942, 5.936, 5.797, 5.911]

Caching records: [3.450, 3.357, 3.364, 3.459, 3.352].

I would also expect real-world usage to be similar in that you should
only get 1 cache miss per worker per notify, and then all the other
public DNS servers would be getting cache hits. You could also remove
the cost of that 1 cache miss by pre-loading data in to the cache.
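
To be concrete, the cache I hacked in was shaped roughly like this (purely
illustrative: no locking, 'loader' stands in for whatever storage call
fetches the zone or recordsets, and invalidate() would be wired into the
update/notify path):

    import time

    class ZoneCache(object):
        """Tiny per-worker cache of zone/recordset data, keyed by zone id."""

        def __init__(self, ttl=300):
            self.ttl = ttl
            self._entries = {}

        def get(self, zone_id, loader):
            entry = self._entries.get(zone_id)
            if entry and (time.time() - entry[0]) < self.ttl:
                return entry[1]          # warm hit: zero DB round trips
            value = loader(zone_id)      # miss: one storage call, then cache
            self._entries[zone_id] = (time.time(), value)
            return value

        def invalidate(self, zone_id):
            # Designate knows when a zone changes, so call this on update.
            self._entries.pop(zone_id, None)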

All said and done, I think that's almost a 3x speed increase with
minimal effort. So, can we stop saying that this has anything to do with
Python as a language and has everything to do with the algorithms being
used?

> 
> MiniDNS (the component) takes data and sends a zone transfer every time 
> a recordset gets updated. That is a full (AXFR) zone transfer, so every
> record in the zone gets sent to each of the DNS servers that end users
> can hit.
> 
> This can be quite a large number - ns[1-6].example.com. may well be
> tens or hundreds of servers behind anycast IPs and load balancers.
> 

This design sounds like a *perfect* contender for caching. If you're
designing this properly it's purely a question of how quickly you can
shove memory over the wire and as a result your choice in language will
have almost no effect - it'll be e

[openstack-dev] [magnum] Need a volunteer for documentation liaisons

2016-05-10 Thread Hongbin Lu
Hi team,

We need a volunteer as liaison for the documentation team. Just let me know if you
are interested in this role.

Best regards,
Hongbin

> -Original Message-
> From: Lana Brindley [mailto:openst...@lanabrindley.com]
> Sent: May-10-16 5:47 PM
> To: OpenStack Development Mailing List; enstack.org
> Subject: [openstack-dev] [PTL][docs]Update your cross-project liaison!
> 
> Hi everyone,
> 
> OpenStack uses cross project liaisons to ensure that projects are
> talking to each other effectively, and the docs CPLs are especially important
> to the documentation team to ensure we have accurate docs. Can all PTLs
> please take a moment to check (and update if necessary) their CPL
> listed here:
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
> 
> Thanks a bunch!
> 
> Lana
> 
> --
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] gate testing with lvm imagebackend

2016-05-10 Thread Chris Friesen

On 05/10/2016 03:51 PM, Matt Riedemann wrote:

For the libvirt imagebackend refactor that mdbooth is working on, I have a POC
devstack-gate change which runs with the lvm imagebackend in the libvirt driver
[1].

The test results are mostly happy except for anything related to migrate
(including resize to same host) [2][3].

That's because we're not testing with boot from volume [4].

This is a weird capability wrinkle that is not clear from the API, you'll only
find out that you can't migrate/resize on this host that's using lvm when it
fails. We can't even disable this in tempest really since there isn't a flag for
'only-supports-resize-for-volume-backed-instances'. So this job would just have
to disable any tests that have anything to do with resize/migrate, which kind of
sucks since that's what we wanted to test going into the libvirt imagebackend
refactor.



For what it's worth, we've got internal patches to enable cold migration and 
resize for LVM-backed instances.  We've also got a proof of concept to enable 
thin-provisioned LVM to get rid of the huge wipe-volume-on-deletion cost.


Pretty sure we'd be happy to contribute these if there is interest.  Last time I 
brought up some of these there didn't seem to be much.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Seeing db lockout issues in neutron add_router_interface

2016-05-10 Thread Divya
Are there any general guidelines to avoid these db lock timeout issues in
the third party neutron plugins??

Thanks,
Divya

On Tue, May 10, 2016 at 1:57 PM, Divya  wrote:

> Hi,
>I am trying to run this rally test on stable/kilo
> https://github.com/openstack/rally/blob/master/samples/tasks/scenarios/neutron/create_and_delete_routers.json
>
> with concurrency 50 and iterations 2000.
>
> This test basically creates routers and subnets
> and then calls
> router-interface-add
> router-interface-delete
>
>
> And i am running this against 3rd party Nuage plugin.
>
> In the NuagePlugin:
>
> add_router_interface is something like this:
> 
> super().add_router_interface
> try:
>   some calls to external rest server
>   super().delete_port
> except:
>
>
> remove_router_interface:
> ---
> super().remove_router_interface
> some calls to external rest server
> super().create_port()
> some calls to external rest server
>
>
> If i comment delete_port in the add_router_interface, i am not hitting the
> db lockout issue.
> delete_port or any other operations are not within any transaction.
> So not sure, why this is leading to db lock timeouts in insert to
> routerport
>
> error trace
> http://paste.openstack.org/show/496626/
>
>
>
> Really appreciate any help on this.
>
> Thanks,
> Divya
>
>
>
>
>
>
>
>
>
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] gate testing with lvm imagebackend

2016-05-10 Thread Matt Riedemann
For the libvirt imagebackend refactor that mdbooth is working on, I have 
a POC devstack-gate change which runs with the lvm imagebackend in the 
libvirt driver [1].


The test results are mostly happy except for anything related to migrate 
(including resize to same host) [2][3].


That's because we're not testing with boot from volume [4].

This is a weird capability wrinkle that is not clear from the API, 
you'll only find out that you can't migrate/resize on this host that's 
using lvm when it fails. We can't even disable this in tempest really 
since there isn't a flag for 
'only-supports-resize-for-volume-backed-instances'. So this job would 
just have to disable any tests that have anything to do with 
resize/migrate, which kind of sucks since that's what we wanted to test 
going into the libvirt imagebackend refactor.


Anyway, I'm dumping this before leaving for the day, maybe others have 
some ideas here.


[1] https://review.openstack.org/#/c/314744/
[2] 
http://logs.openstack.org/44/314744/3/check/gate-tempest-dsvm-neutron-full/57a083b/logs/testr_results.html.gz
[3] 
http://logs.openstack.org/44/314744/3/check/gate-tempest-dsvm-neutron-full/57a083b/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-05-10_20_36_07_446
[4] 
https://github.com/openstack/nova/blob/00eccf56d01f4945ab46f246ab4fe751375b39be/nova/virt/libvirt/driver.py#L6960


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PTL][docs]Update your cross-project liaison!

2016-05-10 Thread Lana Brindley
Hi everyone,

OpenStack uses cross project liaisons to ensure that projects are talking to 
each other effectively, and the docs CPLs are especially important to the 
documentation team to ensure we have accurate docs. Can all PTLs please take a 
moment to check (and update if necessary) their CPL listed here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation

Thanks a bunch!

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Easing contributions to central documentation

2016-05-10 Thread Lana Brindley
On 10/05/16 20:08, Julien Danjou wrote:
> On Mon, May 09 2016, Matt Kassawara wrote:
> 
>> So, before developer frustrations drive some or all projects to move
>> their documentation in-tree which which negatively impacts the goal of
>> presenting a coherent product, I suggest establishing an agreement
>> between developers and the documentation team regarding the review
>> process.
> 
> My 2c, but it's said all over the place that OpenStack is not a product,
> but a framework. So perhaps the goal you're pursuing is not working
> because it's not accessible by design?
> 
>> 1) The documentation team should review the patch for compliance with
>> conventions (proper structure, format, grammar, spelling, etc.) and provide
>> feedback to the developer who updates the patch.
>> 2) The documentation team should modify the patch to make it compliant and
>> ask the developer for a final review to prior to merging it.
>> 3) The documentation team should only modify the patch to make it build (if
>> necessary) and quickly merge it with a documentation bug to resolve any
>> compliance problems in a future patch by the documentation team.
>>
>> What do you think?
> 
> We, Telemetry, are moving our documentation in-tree and are applying a
> policy of "no doc, no merge" (same policy we had for unit tests).

This is great news! I love hearing stories like this from project teams who 
recognise the importance of documentation. Hopefully the new model for Install 
Guides will help you out here, too.

> So until the doc team starts to help projects with that (proof-reading,
> pointing out missing doc update in patches, etc) and trying to be part
> of actual OpenStack projects, I don't think your goal will ever work.
> 
> For example, we have an up-to-date documentation in Gnocchi since the
> beginning, that covers the whole project. It's probably not coherent
> with the rest of OpenStack in wording etc, but we'd be delighted to have
> some folks of the doc team help us with that.

Let's work together to find out how we can help. I note that Lance Bragstad is 
your CPL, is that still current?

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-10 Thread Jeremy Stanley
On 2016-05-11 08:49:14 +1200 (+1200), Robert Collins wrote:
[...]
> Ubuntu SSO is **not** Launchpad. Launchpad is just another consumer of
> Ubuntu SSO, and it has the 'feature' of forwarding through to Ubuntu
> SSO - so we're actually seeing Ubuntu SSO spam accounts :(.
[...]

Thanks for the correction, you're right that's actually what I
meant (I should be more careful not to accidentally conflate the two
with vague terminology).

In fact I went so far as trying to track them back to Launchpad
profiles via the LP API's OpenID reverse lookup method and confirmed
they don't have any, so they're using login.launchpad.net to create
accounts to spam various places (our wiki, the Ubuntu wiki,
and presumably lots of others too). I think that means, at least in
most cases, they're probably not actually preexisting compromised
accounts and just new accounts created solely for the purpose of
spamming places relying on the Ubuntu SSO for authentication.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] welcome tarballs!

2016-05-10 Thread Emilien Macchi
Hi,

It has been some weeks we worked on having tarballs for Puppet
modules, and it's now in place.

Look http://tarballs.openstack.org/puppet-nova/ as an example.

* A tarball is created at every patch in master.
* A tarball is created at every patch in our stable branches.
* A tarball is created at every tag (we created tarballs for 7.0.0 and 8.0.x).

These artifacts are continuously, securely and automatically built by
OpenStack Infrastructure. They contain SHA1 for every file, and are
now considered as our official artifacts in OpenStack.

Special kudos to pabelanger & infra folks for their help!
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Issues with gate-barbican-python27

2016-05-10 Thread Freddy Pedraza
Hi,

I submitted a simple CR (https://review.openstack.org/#/c/312786) and 
"gate-barbican-python27" is failing and I think it's caused by something else 
upstream. These are the issues that I see in the console log

FAIL: 
barbican.tests.queue.test_keystone_listener.WhenUsingMessageServer.test_keystone_notification_pool_size_used
FAIL: 
barbican.tests.queue.test_keystone_listener.WhenUsingMessageServer.test_should_start
FAIL: 
barbican.tests.queue.test_keystone_listener.WhenUsingMessageServer.test_should_stop
FAIL: 
barbican.tests.queue.test_keystone_listener.WhenUsingMessageServer.test_should_wait

ERROR: InvocationError: 
'/home/jenkins/workspace/gate-barbican-python27/.tox/py27/bin/python setup.py 
testr --coverage --testr-args='
__ summary 
ERROR:   py27: commands failed

More details at -> 
http://logs.openstack.org/86/312786/2/check/gate-barbican-python27/833bcc2/console.html#_2016-05-10_20_29_16_472

Any ideas on what's going on?

Thanks in Advance

Freddy Pedraza



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Proposal Jobs (was: Newton Summit Infra Sessions Recap)

2016-05-10 Thread Jeremy Stanley
On 2016-05-10 21:07:29 +0200 (+0200), Andreas Jaeger wrote:
> On 05/10/2016 09:00 PM, Jeremy Stanley wrote:
> > [...]
> > Another outcome of this is that Andreas Jaeger put together some
> > project-config specific reviewing guidelines:
> > http://git.openstack.org/cgit/openstack-infra/project-config/plain/README.rst
> > In the future, that will be extended to mention the sorts of tribal
> > knowledge which came up in this session so that reviewers and
> > submitters are all on the same page.
> > [...]
> 
> Actual URL is:
> 
> http://git.openstack.org/cgit/openstack-infra/project-config/plain/REVIEWING.rst

Oops! Thanks, yes that's what I get for not paying close enough
attention before hitting send.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] The Future of Meetings

2016-05-10 Thread Matt Riedemann

On 5/9/2016 11:30 PM, Mike Perez wrote:

Hey all,

When we first discussed the future of cross-project meetings at the Tokyo
summit, we walked out with the idea of beginning to have ad-hoc meetings
instead of one big meeting with all big tent projects.

Now that we have some process in place [1] (unfortunately still under review),
we can begin that idea.

I will no longer be announcing the weekly cross-project meeting being skipped.
Instead people who are interested in fixing some cross-project issue or feature
may do so by introducing a meeting for that initiative [1] to take place in the
#openstack-meeting-cp channel. Together, this group of project will create
a spec or guideline [2].

For some initiatives you may know which OpenStack projects are involved with
it. You can find people who are interested in helping with cross-project
initiatives for their respective project by viewing the CPL page [3]. As noted
in their duties, they will either be the person to attend the meeting and give
feedback in the spec/guideline on behalf of their project, or find someone
knowledgeable in the initiative and the project [4].

If warranted, we can have a cross-project wide meeting, but I see these
becoming unnecessary.

Some examples of cross-project initiatives happening are:

* The service catalog TNG [5]
* Quotas - Delimiter [6]

I think this will allow cross-project initiatives to continue to flow in a more
a natural way.

Please reach out to me if you have any questions. Thanks!

[1] - https://review.openstack.org/#/c/301822/4/doc/source/cross-project.rst
[2] - https://review.openstack.org/#/c/295940/5/doc/source/cross-project.rst
[3] - 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Liaisons
[4] - 
http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons
[5] - https://wiki.openstack.org/wiki/ServiceCatalogTNG
[6] - https://launchpad.net/delimiter



Are we going to drop the calendar entry [1] or at least update the 
meeting agenda [2] to point out there is no regular meeting anymore?


[1] http://eavesdrop.openstack.org/#OpenStack_Cross-Project_Meeting
[2] https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Seeing db lockout issues in neutron add_router_interface

2016-05-10 Thread Divya
Hi,
   I am trying to run this rally test on stable/kilo
https://github.com/openstack/rally/blob/master/samples/tasks/scenarios/neutron/create_and_delete_routers.json

with concurrency 50 and iterations 2000.

This test basically creates routers and subnets
and then calls
router-interface-add
router-interface-delete


And i am running this against 3rd party Nuage plugin.

In the NuagePlugin:

add_router_interface is something like this:

super().add_router_interface
try:
  some calls to external rest server
  super().delete_port
except:


remove_router_interface:
---
super().remove_router_interface
some calls to external rest server
super().create_port()
some calls to external rest server


If i comment delete_port in the add_router_interface, i am not hitting the
db lockout issue.
delete_port or any other operations are not within any transaction.
So not sure, why this is leading to db lock timeouts in insert to routerport

error trace
http://paste.openstack.org/show/496626/



Really appreciate any help on this.

Thanks,
Divya
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-10 Thread Robert Collins
On 11 May 2016 at 08:27, Jeremy Stanley  wrote:

>
> Anyway, to the original point, yes Launchpad is full of compromised
> or perhaps freshly created accounts under the control of spammers.

Ubuntu SSO is **not** Launchpad. Launchpad is just another consumer of
Ubuntu SSO, and it has the 'feature' of forwarding through to Ubuntu
SSO - so we're actually seeing Ubuntu SSO spam accounts :(.

Why does this matter? If folk want to solve this at source - make
making new accounts harder - you need to look at the correct code
base, which is https://launchpad.net/canonical-identity-provider

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-05-10 Thread Alexandre Levine

Thank you Matt.
We'll think how we can help here.

Best regards,
  Alex Levine

On 5/10/16 7:40 PM, Matt Riedemann wrote:

On 5/10/2016 11:24 AM, Alexandre Levine wrote:

Hi Matt,

Sorry I couldn't reply earlier - was away.
I'm worrying about ScaleIO ephemeral storage backend
(https://blueprints.launchpad.net/nova/+spec/scaleio-ephemeral-storage-backend) 


which is not in this list but various clients are very interested in
having it working along with or instead of Ceph. Especially I'm worrying
in view of the global libvirt storage pools refactoring which looks like
a quite global effort to me judging by a number of preliminary reviews.
It seems to me that we wouldn't be able to squeeze ScaleIO additions
after this refactoring.
What can be done about it?
We could contribute our initial changes to current code (which would
potentially allow easy backporting to previous versions as a benefit
afterwards) and promise to update our parts along with the refactoring
reviews or something like this.

Best regards,
  Alex Levine


On 5/6/16 3:34 AM, Matt Riedemann wrote:

There are still a few design summit sessions from the summit that I'll
recap but I wanted to get the priorities session recap out as early as
possible. We held that session in the last slot on Thursday. The full
etherpad is here [1].

The first part of the session was mostly going over schedule 
milestones.


We already started Newton with a freeze on spec approvals for new
things since we already have a sizable backlog [2]. Now that we're
past the summit we can approve specs for new things again.

The full Newton release schedule for Nova is in this wiki [3].

These are the major dates from here on out:

* June 2: newton-1, non-priority spec approval freeze
* June 30: non-priority feature freeze
* July 15: newton-2
* July 19-21: Nova Midcycle
* Aug 4: priority spec approval freeze
* Sept 2: newton-3, final python-novaclient release, FeatureFreeze,
Soft StringFreeze
* Sept 16: RC1 and Hard StringFreeze
* Oct 7, 2016: Newton Release

The important thing for most people right now is we have exactly four
weeks until the non-priority spec approval freeze. We then have about
one month after that to land all non-priority blueprints.

Keep in mind that we've already got 52 approved blueprints and most of
those were re-approved from Mitaka, so have been approved for several
weeks already.

The non-priority blueprint cycle is intentionally restricted in Newton
because of all of the backlog work we've had spilling over into this
release. We really need to focus on getting as much of that done as
possible before taking on more new work.

For the rest of the priorities session we talked about what our actual
review priorities are for Newton. The list with details and owners is
already available here [4].

In no particular order, these are the review priorities:

* Cells v2
* Scheduler
* API Improvements
* os-vif integration
* libvirt storage pools (for live migration)
* Get Me a Network
* Glance v2 Integration

We *should* be able to knock out glance v2, get-me-a-network and
os-vif relatively soon (I'm thinking sometime in June).

Not listed in [4] but something we talked about was volume
multi-attach with Cinder. We said this was going to be a 'stretch
goal' contingent on making decent progress on that item by
non-priority feature freeze *and* we get the above three smaller
priority items completed.

Another thing we talked about but isn't going to be a priority is
NFV-related work. We talked about cleaning up technical debt and
additional testing for NFV but had no one in the session signed up to
own that work or with concrete proposals on how to make improvements
in that area. Since we can't assign review priorities to something
that nebulous it was left out. Having said that, Moshe Levi has
volunteered to restart and lead the SR-IOV/PCI bi-weekly meeting [5]
(thanks again, Moshe!). So if you (or your employer, or your vendor)
are interested in working on NFV in Nova please attend that meeting
and get involved in helping out that subteam.

[1] https://etherpad.openstack.org/p/newton-nova-summit-priorities
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090370.html 


[3] https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
[4]
https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html 



[5]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093541.html 






__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Alexandre,

A closed-source vendor-specific ephemeral backend for a single virt 
driver in Nova isn't a review priority for the release. The review 
priorities we have for Newton are really broad multi-release efforts 
that we need to focus on.


This doe

Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Robert Collins
On 11 May 2016 at 06:10, Hayes, Graham  wrote:
> On 10/05/2016 01:01, Gregory Haynes wrote:

> The way this component works makes it quite difficult to make any major
> improvement.
>
> MiniDNS (the component) takes data and sends a zone transfer every time
> a recordset gets updated. That is a full (AXFR) zone transfer, so every
> record in the zone gets sent to each of the DNS servers that end users
> can hit.
>
> This can be quite a large number - ns[1-6].example.com. may well be
> tens or hundreds of servers behind anycast IPs and load balancers.
>
> In many cases, internal zones (or even external zones) can be quite
> large - I have seen zones that are 200-300Mb. If a zone is high traffic

I presume you mean MB ?

> (like say cloud.example.com. where a record is added / removed for
> each boot / destroy, or the reverse DNS zones for a cloud), there can
> be a lot of data sent out from this component.
>
> We are a small development team, and after looking at our options, and
> judging the amount of developer hours we had available, a different
> language was the route we decided on. I was going to go implement a few
> POCs and see what was most suitable.

Out of interest, what was the problem you had/have with Python here?
Sending a few GB of data at wire speeds on a TCP link is pretty
shallow for Python, though not having had my hands on a 40gpbs NIC I
can't personally say whether its still the case there.

I guess my fundamental question is: is this a domain problem, or a
Python problem? If the problem is 'we need to send 300MB to 100
servers in < 5 seconds', which is 30GB of traffic - you're going to
need a 240Gb/5s == 48Gbps NIC, or you're going to need distributed
workers to shard that workload across machines.

If the problem is 'designate's memory use is blowing way up when we
try to do this' - that might be a very straightforward fix (use
memoryviews and zero-copy IO).
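
To be concrete about the memoryview part, something like this minimal
sketch (assuming the zone is already serialised to bytes and the socket is
blocking) never copies the multi-hundred-MB payload while retrying partial
sends:

    def send_zone(sock, payload):
        # Slicing a memoryview does not copy the underlying bytes, so each
        # partial send() retries from an offset without duplicating the
        # serialised zone in memory.
        view = memoryview(payload)
        sent = 0
        while sent < len(view):
            sent += sock.send(view[sent:])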

I guess what I'm wondering is whether there is a low hanging fix, and
as an observer I have absolutely no insight into the problem you've
been having - I'd like to know more... is there a bug report perhaps?

My *fear* is that the underlying problem has nothing to do with Python
and can rear its head in any language - and that perhaps the idioms
being used in Designate (or OpenStack as a whole) are driving whatever
specific problem you've got?

I know that our current programming model is missing a lot of the
easily-correct improvements that have been created in the last few
decades (because our basic abstraction is the thread) - how much does
that factor in, do you think?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-10 Thread Jeremy Stanley
On 2016-05-10 20:17:43 + (+), Jeremy Stanley wrote:
[...]
> Last I heard, wiki.ubuntu.com has been made read-only for general
> users because they're having too hard a time keeping spam under
> control (they obviously also use
> login.launchpad.net/login.ubuntu.com). I'm trying to create an
> account there right now to confirm whether this is still the case,
> and the post-OpenID page which should in theory be creating my
> account is timing out in my browser with a 500 ISR after several
minutes.

Just to follow up, after a few tries I finally got one to go
through. It looks like editing existing pages may still work (I
haven't tried to save an edit) though creating new pages seems to be
a forbidden action at the moment.

Anyway, to the original point, yes Launchpad is full of compromised
or perhaps freshly created accounts under the control of spammers.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-10 Thread Jeremy Stanley
On 2016-05-10 12:59:41 -0400 (-0400), Anita Kuno wrote:
> On 05/10/2016 12:48 PM, Dan Smith wrote:
[...]
> > I'm somewhat surprised that this is an issue, because I thought
> > that the wiki requires an ubuntu login. Are spammers really
> > getting ubuntu logins so they can come over and deface our wiki?
> 
> Yes.

We've temporarily (for a couple months now) halted new account
creation on wiki.openstack.org while we work through better spam
mitigation. Last I heard, wiki.ubuntu.com has been made read-only
for general users because they're having too hard a time keeping
spam under control (they obviously also use
login.launchpad.net/login.ubuntu.com). I'm trying to create an
account there right now to confirm whether this is still the case,
and the post-OpenID page which should in theory be creating my
account is timing out in my browser with a 500 ISR after several
minutes.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Hayes, Graham
On 10/05/2016 20:48, Chris Friesen wrote:
> On 05/10/2016 12:10 PM, Hayes, Graham wrote:
>
>> The way this component works makes it quite difficult to make any major
>> improvement.
>>
>> MiniDNS (the component) takes data and sends a zone transfer every time
>> a recordset gets updated. That is a full (AXFR) zone transfer, so every
>> record in the zone gets sent to each of the DNS servers that end users
>> can hit.
>>
>> This can be quite a large number - ns[1-6].example.com. may well be
>> tens or hundreds of servers behind anycast IPs and load balancers.
>>
>> In many cases, internal zones (or even external zones) can be quite
>> large - I have seen zones that are 200-300Mb. If a zone is high traffic
>> (like say cloud.example.com. where a record is added / removed for
>> each boot / destroy, or the reverse DNS zones for a cloud), there can
>> be a lot of data sent out from this component.
>>
>> We are a small development team, and after looking at our options, and
>> judging the amount of developer hours we had available, a different
>> language was the route we decided on. I was going to go implement a few
>> POCs and see what was most suitable.
>
>
> I know nothing about what you're doing beyond what you've mentioned above, but
> it seems really odd to transmit all that data any time something changes.
>
> Is there no way to send an incremental change?

In short, yes there is a DNS standard for an incremental change, but it
is quite complex to implement, and can often end up reverting to a full
zone transfer when there are problems.

We have discussed IXFR (incremental zone transfers) previously but we
have not found a good solution to it.
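
For reference, the client side of an IXFR is simple enough - a rough
sketch with dnspython (illustrative only, not something Designate does
today; the server address and serial are made up):

    import dns.query
    import dns.rdatatype

    # Ask the server for only the changes since serial 2016051000; a server
    # that cannot produce the incremental diff may fall back to a full AXFR.
    for message in dns.query.xfr('192.0.2.1', 'example.com.',
                                 rdtype=dns.rdatatype.IXFR,
                                 serial=2016051000):
        for rrset in message.answer:
            print(rrset)

The hard part is on the serving side: mdns would have to track per-serial
diffs for every zone and still handle the fallback to AXFR.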

> Chris
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Samuel Merritt

On 5/9/16 5:21 PM, Robert Collins wrote:

On 10 May 2016 at 10:54, John Dickinson  wrote:

On 9 May 2016, at 13:16, Gregory Haynes wrote:


This is a bit of an aside but I am sure others are wondering the same
thing - Is there some info (specs/etherpad/ML thread/etc) that has more
details on the bottleneck you're running in to? Given that the only
clients of your service are the public facing DNS servers I am now even
more surprised that you're hitting a python-inherent bottleneck.


In Swift's case, the summary is that it's hard[0] to write a network
service in Python that shuffles data between the network and a block
device (hard drive) and effectively utilizes all of the hardware
available. So far, we've done very well by fork()'ing child processes,

...

Initial results from a golang reimplementation of the object server in
Python are very positive[1]. We're not proposing to rewrite Swift
entirely in Golang. Specifically, we're looking at improving object
replication time in Swift. This service must discover what data is on
a drive, talk to other servers in the cluster about what they have,
and coordinate any data sync process that's needed.

[0] Hard, not impossible. Of course, given enough time, we can do
 anything in a Turing-complete language, right? But we're not talking
 about possible, we're talking about efficient tools for the job at
 hand.

...

I'm glad you're finding you can get good results in (presumably)
clean, understandable code.

Given go's historically poor performance with multiple cores
(https://golang.org/doc/faq#Why_GOMAXPROCS) I'm going to presume the
major advantage is in the CSP programming model - something that
Twisted does very well: and frustratingly we've had numerous
discussions from folk in the Twisted world who see the pain we have
and want to help, but as a community we've consistently stayed with
eventlet, which has a threaded programming model - and threaded models
are poorly suited for the case here.


At its core, the problem is that filesystem IO can take a surprisingly 
long time, during which the calling thread/process is blocked, and 
there's no good asynchronous alternative.


Some background:

With Eventlet, when your greenthread tries to read from a socket and the 
socket is not readable, then recvfrom() returns -1/EWOULDBLOCK; then, 
the Eventlet hub steps in, unschedules your greenthread, finds an 
unblocked one, and lets it proceed. It's pretty good at servicing a 
bunch of concurrent connections and keeping the CPU busy.


On the other hand, when the socket is readable, then recvfrom() returns 
quickly (a few microseconds). The calling process was technically 
blocked, but the syscall is so fast that it hardly matters.


Now, when your greenthread tries to read from a file, that read() call 
doesn't return until the data is in your process's memory. This can take 
a surprisingly long time. If the data isn't in buffer cache and the 
kernel has to go fetch it from a spinning disk, then you're looking at a 
seek time of ~7 ms, and that's assuming there are no other pending 
requests for the disk.


There's no EWOULDBLOCK when reading from a plain file, either. If the 
file pointer isn't at EOF, then the calling process blocks until the 
kernel fetches data for it.


Back to Swift:

The Swift object server basically does two things: it either reads from 
a disk and writes to a socket or vice versa. There's a little HTTP 
parsing in there, but the vast majority of the work is shuffling bytes 
between network and disk. One Swift object server can service many 
clients simultaneously.


The problem is those pauses due to read(). If your process is servicing 
hundreds of clients reading from and writing to dozens of disks (in, 
say, a 48-disk 4U server), then all those little 7 ms waits are pretty 
bad for throughput. Now, a lot of the time, the kernel does some 
readahead so your read() calls can quickly return data from buffer 
cache, but there are still lots of little hitches.


But wait: it gets worse. Sometimes a disk gets slow. Maybe it's got a 
lot of pending IO requests, maybe its filesystem is getting close to 
full, or maybe the disk hardware is just starting to get flaky. For 
whatever reason, IO to this disk starts taking a lot longer than 7 ms on 
average; think dozens or hundreds of milliseconds. Now, every time your 
process tries to read from this disk, all other work stops for quite a 
long time. The net effect is that the object server's throughput 
plummets while it spends most of its time blocked on IO from that one 
slow disk.


Now, of course there's things we can do. The obvious one is to use a 
couple of IO threads per disk and push the blocking syscalls out 
there... and, in fact, Swift did that. In commit b491549, the object 
server gained a small threadpool for each disk[1] and started doing its 
IO there.


This worked pretty well for avoiding the slow-disk problem. Requests 
that touched the slow disk would back up, but requests fo

Re: [openstack-dev] Team blogs

2016-05-10 Thread Hayes, Graham
On 10/05/2016 20:20, Matt Riedemann wrote:
> On 5/9/2016 6:46 PM, Sean Dague wrote:
>> On 05/09/2016 06:37 PM, Joshua Harlow wrote:
>>> After seeing the amount of summit recaps and the scattered nature of
>>> these (some on the ML, some on etherpads, some on personal blogs); I am
>>> starting to wonder if we should again bring up the question of having
>>> infra (and I guess the foundation?) support/provide a place for team
>>> blogs...
>>
>> Honestly, I'm really liking that more of them are hitting the mailing
>> list proper this time around. Discoverability is key. The mailing list
>> is a shared medium, archived forever.
>>
>>  -Sean
>>
>
> I have the spirit of an 80 year old man inside me so I really don't want
> to have to start blogging about stuff for Nova.
>
> I have successfully avoided social media to this point in my life,
> including blogging.
>
> So I prefer to just use the -dev list for communicating this kind of stuff.
>

I agree - I think the list is the place for them. There is nothing
worse than a blog with no content for extended periods.

If people want to put them on a blog as well, that's fine, but
we should keep the -dev list as the main place to send them IMO.

-- Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Chris Friesen

On 05/10/2016 12:10 PM, Hayes, Graham wrote:


The way this component works makes it quite difficult to make any major
improvement.

MiniDNS (the component) takes data and sends a zone transfer every time
a recordset gets updated. That is a full (AXFR) zone transfer, so every
record in the zone gets sent to each of the DNS servers that end users
can hit.

This can be quite a large number - ns[1-6].example.com. may well be
tens or hundreds of servers behind anycast IPs and load balancers.

In many cases, internal zones (or even external zones) can be quite
large - I have seen zones that are 200-300Mb. If a zone is high traffic
(like say cloud.example.com. where a record is added / removed for
each boot / destroy, or the reverse DNS zones for a cloud), there can
be a lot of data sent out from this component.

We are a small development team, and after looking at our options, and
judging the amount of developer hours we had available, a different
language was the route we decided on. I was going to go implement a few
POCs and see what was most suitable.



I know nothing about what you're doing beyond what you've mentioned above, but 
it seems really odd to transmit all that data any time something changes.


Is there no way to send an incremental change?

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ironic] Austin summit nova/ironic cross-project session recap

2016-05-10 Thread Matt Riedemann

The full session etherpad is here [1].

Jim has already written the recap (thanks Jim) in his blog here [2].

The only thing I'd say was omitted was a mention of James Penick wanting 
to avoid giant fireballs of suck.


[1] https://etherpad.openstack.org/p/newton-nova-ironic
[2] http://jroll.ghost.io/newton-summit-recap/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-10 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2016-05-10 15:13:46 -0400:
> Another data point: at this summit session we discussed delivering to 
> users notifications about events in the cloud: 
> https://etherpad.openstack.org/p/newton-alternatives-to-polling
> 
> It's pretty critical that this have at-least-once delivery semantics, 
> because in future people will be using this to do things like triggering 
> automated recovery when e.g. an instance dies.
> 
> The sane way to accomplish this would be to have all services accept a 
> list of Zaqar queues in which to dump notifications (Zaqar already has 
> at-least-once delivery semantics). Unfortunately this means changing a 
> lot of APIs, and this was pre-empted at the session by Nova cores 
> indicating that they would never ever accept such a change.
> 
> The alternative proposed was to create some sort of proxy that listens 
> for notifications, sanitises them and drops them into the appropriate 
> Zaqar queues. So this would be an example that:
> 
> * Requires at-least-once delivery semantics
> * Is fundamentally a message queue (not a job queue, and not RPC)
> * Receives notifications sent from oslo.messaging
> 
> For those reasons I think it makes sense to have some sort of 
> abstraction in oslo.messaging to permit this.

The telemetry team has discussed pulling out the "listening" part of
ceilometer to make it easier to repurpose for cases like this. In fact,
it might be possible to write a plugin for the existing listeners
without any other changes inside ceilometer itself.

> 
> I am sympathetic to the idea that we should try to make clear that this 
> is for occasions when you are absolutely sure that these are the 
> semantics you want, as in the case of Mistral. Everyone just turning 
> this option on because it sounds safer would be bad. And the risk is 
> high, because the default "at-most-once delivery with no recovery from 
> lost messages" sounds mad when you first hear it. Actually it _is_ mad. 
> But "at-least-once delivery with no handling for duplicate messages" is 
> worse. So +1 for a separate messaging type in addition to call and cast 
> if that will help make clear who this is and is not for.

Yes, it's also important that this is not configurable with an option
the deployer can set, because it relates to the application's
understanding of how it will process messages and shouldn't be changed.

Doug

> 
> cheers,
> Zane.
> 
> On 06/05/16 18:56, Joshua Harlow wrote:
> > So then let's all get onboard https://review.openstack.org/#/c/260246/?
> >
> > I've yet to see why all these things called 'process-then-ack' seemingly
> > don't fit into that API in that review. IMHO most of what people are
> > trying to fit into oslo.messaging here aren't really messages but are
> > jobs to be completed that should *only* be acked when they are actually
> > complete.
> >
> > Which is in part what that review adds/does (extracts the job[1] part
> > from taskflow so others can use it, without say taking in the rest of
> > taskflow).
> >
> > [1] http://docs.openstack.org/developer/taskflow/jobs.html
> >
> > Dmitry Tantsur wrote:
> >> On 05/04/2016 08:21 AM, Mehdi Abaakouk wrote:
> >>>
> >>> Hi,
> >>>
>  That said, I agree with Mehdi that *most* RPC calls throughout
>  OpenStack,
>  not being idempotent, should not use process-then-ack.
> >>>
> >>> That's why I think we must not call this RPC. And the new API should make
> >>> clear the expected idempotency of the application callbacks.
> >>>
> > Thoughts from folks (mistral and oslo)?
> >>>
> >>> Also, I was not at the Summit, should I conclude the Tooz+taskflow
> >>> approach (that ensure the idempotent of the application within the
> >>> library API) have not been accepted by mistral folks ?
> >>>
> >>
> >> Taskflow is pretty opinionated about the whole application design. We
> >> can't use it in ironic-inspector, but we also need process-then-ack
> >> semantics for our HA work.
> >>
> >> __
> >>
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Team blogs

2016-05-10 Thread Matt Riedemann

On 5/9/2016 6:46 PM, Sean Dague wrote:

On 05/09/2016 06:37 PM, Joshua Harlow wrote:

After seeing the amount of summit recaps and the scattered nature of
these (some on the ML, some on etherpads, some on personal blogs); I am
starting to wonder if we should again bring up the question of having
infra (and I guess the foundation?) support/provide a place for team
blogs...


Honestly, I'm really liking that more of them are hitting the mailing
list proper this time around. Discoverability is key. The mailing list
is a shared medium, archived forever.

-Sean



I have the spirit of an 80 year old man inside me so I really don't want 
to have to start blogging about stuff for Nova.


I have successfully avoided social media to this point in my life, 
including blogging.


So I prefer to just use the -dev list for communicating this kind of stuff.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-10 Thread Zane Bitter
Another data point: at this summit session we discussed delivering to 
users notifications about events in the cloud: 
https://etherpad.openstack.org/p/newton-alternatives-to-polling


It's pretty critical that this have at-least-once delivery semantics, 
because in future people will be using this to do things like triggering 
automated recovery when e.g. an instance dies.


The sane way to accomplish this would be to have all services accept a 
list of Zaqar queues in which to dump notifications (Zaqar already has 
at-least-once delivery semantics). Unfortunately this means changing a 
lot of APIs, and this was pre-empted at the session by Nova cores 
indicating that they would never ever accept such a change.


The alternative proposed was to create some sort of proxy that listens 
for notifications, sanitises them and drops them into the appropriate 
Zaqar queues. So this would be an example that:


* Requires at-least-once delivery semantics
* Is fundamentally a message queue (not a job queue, and not RPC)
* Receives notifications sent from oslo.messaging

For those reasons I think it makes sense to have some sort of 
abstraction in oslo.messaging to permit this.
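
To make that concrete, a rough sketch of such a proxy using the existing
oslo.messaging notification listener API (the Zaqar side is just a stub
here, and the topic and pool names are made up):

    import oslo_messaging
    from oslo_config import cfg

    class ZaqarForwarder(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Sanitise the notification and drop it into the appropriate
            # user-facing queue (a real proxy would call zaqarclient here).
            post_to_zaqar(event_type, payload)
            return oslo_messaging.NotificationResult.HANDLED

    def post_to_zaqar(event_type, payload):
        pass  # placeholder for the Zaqar post

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [ZaqarForwarder()], pool='zaqar-proxy')
    listener.start()
    listener.wait()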


I am sympathetic to the idea that we should try to make clear that this 
is for occasions when you are absolutely sure that these are the 
semantics you want, as in the case of Mistral. Everyone just turning 
this option on because it sounds safer would be bad. And the risk is 
high, because the default "at-most-once delivery with no recovery from 
lost messages" sounds mad when you first hear it. Actually it _is_ mad. 
But "at-least-once delivery with no handling for duplicate messages" is 
worse. So +1 for a separate messaging type in addition to call and cast 
if that will help make clear who this is and is not for.


cheers,
Zane.

On 06/05/16 18:56, Joshua Harlow wrote:

So then let's all get onboard https://review.openstack.org/#/c/260246/?

I've yet to see why all these things called 'process-then-ack' seemingly
don't fit into that API in that review. IMHO most of what people are
trying to fit into oslo.messaging here aren't really messages but are
jobs to be completed that should *only* be acked when they are actually
complete.

Which is in part what that review adds/does (extracts the job[1] part
from taskflow so others can use it, without say taking in the rest of
taskflow).

[1] http://docs.openstack.org/developer/taskflow/jobs.html

Dmitry Tantsur wrote:

On 05/04/2016 08:21 AM, Mehdi Abaakouk wrote:


Hi,


That said, I agree with Mehdi that *most* RPC calls throughout
OpenStack,
not being idempotent, should not use process-then-ack.


That's why I think we must not call this RPC. And the new API should make
clear the expected idempotency of the application callbacks.


Thoughts from folks (mistral and oslo)?


Also, I was not at the Summit, should I conclude the Tooz+taskflow
approach (that ensures the idempotency of the application within the
library API) have not been accepted by mistral folks ?



Taskflow is pretty opinionated about the whole application design. We
can't use it in ironic-inspector, but we also need process-then-ack
semantics for our HA work.

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [Infra] Newton Summit Infra Sessions Recap

2016-05-10 Thread Andreas Jaeger
On 05/10/2016 09:00 PM, Jeremy Stanley wrote:
> [...]
> Another outcome of this is that Andreas Jaeger put together some
> project-config specific reviewing guidelines:
> http://git.openstack.org/cgit/openstack-infra/project-config/plain/README.rst
> In the future, that will be extended to mention the sorts of tribal
> knowledge which came up in this session so that reviewers and
> submitters are all on the same page.
> [...]

Actual URL is:

http://git.openstack.org/cgit/openstack-infra/project-config/plain/REVIEWING.rst

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Mike Perez
On 15:54 May 09, John Dickinson wrote:
> On 9 May 2016, at 13:16, Gregory Haynes wrote:
> >
> > This is a bit of an aside but I am sure others are wondering the same
> > thing - Is there some info (specs/etherpad/ML thread/etc) that has more
> > details on the bottleneck you're running in to? Given that the only
> > clients of your service are the public facing DNS servers I am now even
> > more surprised that you're hitting a python-inherent bottleneck.
> 
> In Swift's case, the summary is that it's hard[0] to write a network
> service in Python that shuffles data between the network and a block
> device (hard drive) and effectively utilizes all of the hardware
> available. So far, we've done very well by fork()'ing child processes,
> using cooperative concurrency via eventlet, and basic "write more
> efficient code" optimizations. However, when it comes down to it,
> managing all of the async operations across many cores and many drives
> is really hard, and there just isn't a good, efficient interface for
> that in Python.

If I'm understanding correctly, your findings are:

1) Performance great from benchmarks.
2) Interface great for dealing with network and block devices

For item one, I'm wondering if asyncio was explored at all? I get more and more
curious if this is going to be a thing in the past [1] as I read some
improvements in this area (still immature as noted).

For item two, can you speak more about the interface improvements for working
with all the hardware available? Cinder for example, we deal with block devices
a bit and having to sometimes just dd data from here to there.

[1] - http://magic.io/blog/uvloop-blazing-fast-python-networking/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Newton Summit Infra Sessions Recap

2016-05-10 Thread Jeremy Stanley
I'm Cc'ing this to the openstack-infra ML but setting MFT to direct
subsequent discussion to the openstack-dev ML so we can hopefully
avoid further cross-posting as much as possible. If you're replying
on a particular session topic, please update the Subject so that the
subthreads are easier to keep straight.


Community Task-Tracking
---

https://etherpad.openstack.org/p/newton-infra-community-task-tracking

A brief update was provided from contributors working on Maniphest
(Craige McWhirter) and Storyboard (Zara Zaimeche, Adam Coldrick),
followed by a rehash of general task tracking needs within the
community. This has shifted a bit since the Release team now has
automation covering some of their previous needs, so we confirmed
which features were still a must vs. which had fallen in priority
and whether that made a difference in choosing between
implementations.

Thierry Carrez volunteered to write a spec and work toward a TC
resolution on consensus for moving the community to a suitable task
tracking platform. It's now in progress at
https://review.openstack.org/314185 . James Blair and I volunteered
as backups on that task.

There were also some related discussions with regard to dashboard
needs for the Product working group (stemming from their "Defining
scope of cross projects specs" session on Tuesday afternoon), and
some further ad hoc discussion during our sprint day about VMT
embargo bug workflow and related needs.

It was additionally confirmed that Infra could still deploy and
maintain a Pholio service for the UI/UX team's use even if Maniphest
did not end up in production as they are separate and distinct
tools, and that the existing deployment automation and configuration
management should remain suitable for that purpose.


Landing Page for Contributors
-

https://etherpad.openstack.org/p/newton-infra-landing-page-for-contributors

This ended up being a little about publication/maintenance
mechanisms, and mostly about picking a non-contentious hostname for
the new "contributing" portal. Thierry Carrez and James Blair had
strong opinions on naming, attempting to strike a balance between
clarity of scope and avoiding alienation of potential audiences.
This discussion continued after the session ended, well into the
lunch line, and eventually "project.openstack.org" was settled on as
being clearly related to the upstream project teams while not
overreaching into the domains of work being done by the foundation
and other groups outside the domain of the TC.

The initial plan is to just throw some ugly static HTML (maybe
locally generated with a templating engine and then committed) into
a repo and push that up to a vhost, but not publicize it until it
gets a little more polish. Ultimately, we want a "choose your own
adventure" sort of flow to the site, which avoids giving newcomers
too much information they don't need, so as to avoid confusion. Mike
Perez (who was unable to attend the session due to a conflict) has a
new contributor workflow/walkthrough targeted at low-barrier-to-entry
audiences that we might incorporate or borrow from.

Thierry volunteered to lead this, potentially with Mike's help, and
Jimmy McArthur offered to provide layout/formatting and information
engineering assistance to make it more visually appealing and easier
to follow.


Launch-Node, Ansible and Puppet
---

https://etherpad.openstack.org/p/newton-infra-launch-node-ansible-and-puppet

The session was on further automating our server creation, making it
possible to trigger and drive new server creation from configuration
in Git. Spencer Krum volunteered to write a spec for the new
automation needed. The hope is that we might incorporate some of
this into the upcoming distro upgrade process for our servers, as a
means of vetting the proposed solution.

It was also suggested that there should be a spec for hot/cold
orchestration in service of server replacement cut-over, though I
think we're still lacking a volunteer for that second spec.


Wiki Upgrades
-

https://etherpad.openstack.org/p/newton-infra-wiki-upgrades

We started with a rehash of earlier mailing list threads and a
summary of the current state of the wiki server. While intended to
focus primarily on our plans to get the wiki back into a working
state, the session ended up being more about the long-term viability
of running a wiki for the OpenStack community.

Consensus within the room was that we want to continue the current
years-long effort of moving important content off the wiki,
deprecating it for specific use cases when there are more suitable
publication and information management mechanisms available. Some
current uses will still need new solutions engineered of course, but
over the coming year we'd like to get to a point where sufficient
content is moved off so that we can better determine whether its
continued existence is warranted.

I'll be starting a thread on the openst

Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-10 Thread Cathy Zhang
It is always hard to find a day and time that is good for everyone around the 
globe:-)
The first meeting will still be UTC 1700 ~ UTC 1800 May 17 on Neutron channel. 
In the meeting, we can see if we can reach consensus on a new meeting time. 

Cathy

-Original Message-
From: Takashi Yamamoto [mailto:yamam...@midokura.com] 
Sent: Tuesday, May 10, 2016 12:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

On Tue, May 10, 2016 at 12:41 AM,   wrote:
> Hi Cathy,
>
> Cathy Zhang:
>>
>> I will tentatively set the meeting time to UTC 1700 ~ UTC 1800 Tuesday.
>> Hope this time is good for all people who have interest and like to 
>> contribute to this work. We plan to start the first meeting on May 17.
>
>
> I would be happy to participate, but I'm unlikely to be able to attend 
> at that time.
> Might 15:00 UTC work for others ?

+1 for earlier

> If not, well, I'll make do with log/emails/pads/gerrit interactions.
>
> -Thomas
>
>
>
>
>> -Original Message-
>> From: Cathy Zhang
>> Sent: Thursday, April 21, 2016 11:43 AM
>> To: Cathy Zhang; OpenStack Development Mailing List (not for usage 
>> questions); Ihar Hrachyshka; Vikram Choudhary; Sean M. Collins; Haim 
>> Daniel; Mathieu Rohon; Shaughnessy, David; Eichberger, German; Henry 
>> Fourie; arma...@gmail.com; Miguel Angel Ajo; Reedip; Thierry Carrez
>> Cc: Cathy Zhang
>> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier 
>> and OVS Agent extension for Newton cycle
>>
>> Hi everyone,
>>
>> We have room 400 at 3:10pm on Thursday available for discussion of 
>> the two topics.
>> Another option is to use the common room with roundtables in "Salon C"
>> during Monday or Wednesday lunch time.
>>
>> Room 400 at 3:10pm is a closed room while the Salon C is a big open 
>> room which can host 500 people.
>>
>> I am Ok with either option. Let me know if anyone has a strong preference.
>>
>> Thanks,
>> Cathy
>>
>>
>> -Original Message-
>> From: Cathy Zhang
>> Sent: Thursday, April 14, 2016 1:23 PM
>> To: OpenStack Development Mailing List (not for usage questions); 
>> 'Ihar Hrachyshka'; Vikram Choudhary; 'Sean M. Collins'; 'Haim 
>> Daniel'; 'Mathieu Rohon'; 'Shaughnessy, David'; 'Eichberger, German'; 
>> Cathy Zhang; Henry Fourie; 'arma...@gmail.com'
>> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier 
>> and OVS Agent extension for Newton cycle
>>
>> Thanks for everyone's reply!
>>
>> Here is the summary based on the replies I received:
>>
>> 1.  We should have a meet-up for these two topics. The "to" list are 
>> the people who have interest in these topics.
>>  I am thinking about around lunch time on Tuesday or Wednesday 
>> since some of us will fly back on Friday morning/noon.
>>  If this time is OK with everyone, I will find a place and let 
>> you know where and what time to meet.
>>
>> 2.  There is a bug opened for the QoS Flow Classifier
>> https://bugs.launchpad.net/neutron/+bug/1527671
>> We can either change the bug title and modify the bug details or 
>> start with a new one for the common FC which provides info on all 
>> requirements needed by all relevant use cases. There is a bug opened 
>> for OVS agent extension 
>> https://bugs.launchpad.net/neutron/+bug/1517903
>>
>> 3.  There are some very rough, ugly as Sean put it:-), and 
>> preliminary work on common FC 
>> https://github.com/openstack/neutron-classifier which we can see how 
>> to leverage. There is also a SFC API spec which covers the FC API for 
>> SFC usage 
>> https://github.com/openstack/networking-sfc/blob/master/doc/source/ap
>> i.rst, the following is the CLI version of the Flow Classifier for 
>> your
>> reference:
>>
>> neutron flow-classifier-create [-h]
>>  [--description <description>]
>>  [--protocol <protocol>]
>>  [--ethertype <ethertype>]
>>  [--source-port <min source protocol port>:<max source protocol port>]
>>  [--destination-port <min destination protocol port>:<max destination protocol port>]
>>  [--source-ip-prefix <source IP prefix>]
>>  [--destination-ip-prefix <destination IP prefix>]
>>  [--logical-source-port <neutron source port>]
>>  [--logical-destination-port <neutron destination port>]
>>  [--l7-parameters <L7 parameters>] FLOW-CLASSIFIER-NAME
>>
>> The corresponding code is here
>> https://github.com/openstack/networking-sfc/tree/master/networking_sf
>> c/extensions
>>
>> 4.  We should come up with a formal Neutron spec for FC and another 
>> one for OVS Agent extension and get everyone's review and approval. 
>> Here is the etherpad catching our previous requirement discussion on 
>> OVS agent (Thanks David for the link! I remember we had this 
>> discussion before) 
>> https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion
>>
>>
>> More inline.
>>
>> Thanks,
>> Cathy
>>
>>
>> -Original Message-
>> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
>> Sent: Thursday, April 14, 2016 3:34 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: 

Re: [openstack-dev] [neutron] [designate] multi-tenancy in Neutron's DNS integration

2016-05-10 Thread Kevin Benton
Whoops. What I just said only holds if the subnet's DNS servers have been
explicitly overridden.

I think you will end up having to do a port-list looking for the DHCP
port(s).
http://paste.openstack.org/show/496604/
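
For completeness, a rough sketch of that lookup with python-neutronclient
(the credentials, auth URL and network UUID are placeholders; the resolver
addresses are the fixed IPs on the DHCP port(s)):

    # Find the DHCP port(s) on a network; dnsmasq answers DNS on those IPs.
    from neutronclient.v2_0 import client

    neutron = client.Client(username="demo", password="secret",
                            tenant_name="demo",
                            auth_url="http://controller:5000/v2.0")

    ports = neutron.list_ports(device_owner="network:dhcp",
                               network_id="NETWORK_UUID")["ports"]
    dns_ips = [ip["ip_address"]
               for port in ports
               for ip in port["fixed_ips"]]
    print(dns_ips)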


On Tue, May 10, 2016 at 11:28 AM, Kevin Benton  wrote:

> neutron subnet-show with the UUID of the subnet they have a port on will
> tell you.
>
> On Tue, May 10, 2016 at 6:40 AM, Mike Spreitzer 
> wrote:
>
>> "Hayes, Graham"  wrote on 05/10/2016 09:30:26 AM:
>>
>> > ...
>> > > Ah, that may be what I want.  BTW, I am not planning to use Nova.  I
>> am
>> > > planning to use Swarm and Kubernetes to create containers attached to
>> > > Neutron private tenant networks.  What DNS server would I configure
>> > > those containers to use?
>> >
>> > Not sure what happened with that last reply - it seems to have dropped
>> > my content.
>> >
>> > The DNSMasq instance running on the neutron network would have these
>> > records - they should be sent as part of the DHCP lease, so leaving the
>> > DNS set to automatic should pick them up.
>>
>> IIRC, our Docker containers do not use DHCP.  Is there any other way to
>> find out the correct DNS server(s) for the containers to use?
>>
>> Thanks,
>> Mike
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party] Are you getting value from the 8:00 utc Tuesday meeting?

2016-05-10 Thread Anita Kuno
I've been chairing this meeting for about three releases now, and in this
last release it has mostly been just lennyb and me; lennyb also attends the
Monday 15:00 UTC third-party meeting that I chair.

Are you getting value from the Tuesday 8:00 UTC third-party meeting? If
yes, please make yourself known. If no, the meeting in this slot will be
removed, leaving the other third-party meetings to continue along their
regular schedule.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [designate] multi-tenancy in Neutron's DNS integration

2016-05-10 Thread Kevin Benton
neutron subnet-show with the UUID of the subnet they have a port on will
tell you.

On Tue, May 10, 2016 at 6:40 AM, Mike Spreitzer  wrote:

> "Hayes, Graham"  wrote on 05/10/2016 09:30:26 AM:
>
> > ...
> > > Ah, that may be what I want.  BTW, I am not planning to use Nova.  I am
> > > planning to use Swarm and Kubernetes to create containers attached to
> > > Neutron private tenant networks.  What DNS server would I configure
> > > those containers to use?
> >
> > Not sure what happened with that last reply - it seems to have dropped
> > my content.
> >
> > The DNSMasq instance running on the neutron network would have these
> > records - they should be sent as part of the DHCP lease, so leaving the
> > DNS set to automatic should pick them up.
>
> IIRC, our Docker containers do not use DHCP.  Is there any other way to
> find out the correct DNS server(s) for the containers to use?
>
> Thanks,
> Mike
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Hayes, Graham
On 10/05/2016 01:01, Gregory Haynes wrote:
>
> On Mon, May 9, 2016, at 03:54 PM, John Dickinson wrote:
>> On 9 May 2016, at 13:16, Gregory Haynes wrote:
>>>
>>> This is a bit of an aside but I am sure others are wondering the same
>>> thing - Is there some info (specs/etherpad/ML thread/etc) that has more
>>> details on the bottleneck you're running in to? Given that the only
>>> clients of your service are the public facing DNS servers I am now even
>>> more surprised that you're hitting a python-inherent bottleneck.
>>
>> In Swift's case, the summary is that it's hard[0] to write a network
>> service in Python that shuffles data between the network and a block
>> device (hard drive) and effectively utilizes all of the hardware
>> available. So far, we've done very well by fork()'ing child processes,
>> using cooperative concurrency via eventlet, and basic "write more
>> efficient code" optimizations. However, when it comes down to it,
>> managing all of the async operations across many cores and many drives
>> is really hard, and there just isn't a good, efficient interface for
>> that in Python.
>
> This is a pretty big difference from hitting an unsolvable performance
> issue in the language and instead is a case of language preference -
> which is fine. I don't really want to fall in to the language-comparison
> trap, but I think more detailed reasoning for why it is preferable over
> python in specific use cases we have hit is good info to include /
> discuss in the document you're drafting :). Essentially its a matter of
> weighing the costs (which lots of people have hit on so I won't) with
> the potential benefits and so unless the benefits are made very clear
> (especially if those benefits are technical) its pretty hard to evaluate
> IMO.
>
> There seemed to be an assumption in some of the designate rewrite posts
> that there is some language-inherent performance issue causing a
> bottleneck. If this does actually exist then that is a good reason for
> rewriting in another language and is something that would be very useful
> to clearly document as a case where we support this type of thing. I am
> highly suspicious that this is the case though, but I am trying hard to
> keep an open mind...

The way this component works makes it quite difficult to make any major
improvement.

MiniDNS (the component) takes data and sends a zone transfer every time 
a recordset gets updated. That is a full (AXFR) zone transfer, so every
record in the zone gets sent to each of the DNS servers that end users
can hit.

This can be quite a large number - ns[1-6].example.com. may well be
tens or hundreds of servers behind anycast IPs and load balancers.

In many cases, internal zones (or even external zones) can be quite
large - I have seen zones that are 200-300Mb. If a zone is high traffic
(like say cloud.example.com. where a record is added / removed for
each boot / destroy, or the reverse DNS zones for a cloud), there can
be a lot of data sent out from this component.
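
To put rough numbers on it (illustrative figures in line with the sizes
above, not measurements from a real deployment):

    # Back-of-the-envelope cost of a full AXFR to every server per update.
    zone_size_mb = 250      # a large zone in the 200-300Mb range
    dns_servers = 100       # servers behind the ns[1-6] anycast IPs / LBs
    updates_per_hour = 600  # a busy cloud.example.com.-style zone

    per_update_mb = zone_size_mb * dns_servers            # 25,000 MB
    per_hour_gb = per_update_mb * updates_per_hour / 1024.0

    print("%d MB per update, ~%d GB per hour" % (per_update_mb, per_hour_gb))

The cost scales with zone size times server count on every single recordset
change, which is why how fast this component can push bytes matters so much.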

We are a small development team, and after looking at our options and
judging the number of developer hours we had available, a different
language was the route we decided on. I was going to implement a few
POCs and see what was most suitable.

Golang was then being proposed as a new "blessed" language, and as it
was a language we had a pre-existing POC in, we decided to keep it
within the potential new list of languages.

As I said before, we did not just randomly decide this. We have been
talking about it for a while, and at this summit we dedicated an entire
session to it, and decided to do it.


>>
>> Initial results from a golang reimplementation of the object server in
>> Python are very positive[1]. We're not proposing to rewrite Swift
>> entirely in Golang. Specifically, we're looking at improving object
>> replication time in Swift. This service must discover what data is on
>> a drive, talk to other servers in the cluster about what they have,
>> and coordinate any data sync process that's needed.
>>
>> [0] Hard, not impossible. Of course, given enough time, we can do
>>   anything in a Turing-complete language, right? But we're not talking
>>   about possible, we're talking about efficient tools for the job at
>>   hand.
>
> Sorry to be a pedant but that is just plain false - there are plenty of
> intractable problems.
>
>>
>> [1] http://d.not.mn/python_vs_golang_gets.png and
>>   http://d.not.mn/python_vs_golang_puts.png
>>
>>
>> --John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [api] [senlin] [keystone] [ceilometer] [telemetry] Questions about api-ref launchpad bugs

2016-05-10 Thread Augustina Ragwitz
 
>
> On Tue, May 10, 2016 at 5:14 AM, Atsushi SAKAI
>  wrote:
> Hello Anne
>
> I have several question when I am reading through etherpad's (in
> progress).
> It would be appreciated to answer these questions.
>
> 1)Should api-ref launchpad **bugs** be moved to each modules
> (like keystone, nova etc)?
> Also, this should be applied to moved one's only or all components?
> (compute, baremetal Ref.2)
>
> Ref.
> https://etherpad.openstack.org/p/austin-docs-newtonplan
> API site bug list cleanup: move specific service API ref bugs to
> project's Launchpad
>
> Ref.2
> http://developer.openstack.org/api-ref/compute/
> http://developer.openstack.org/api-ref/baremetal/
>
> Yes! I definitely got agreement from nova team that they want them.
> Does anyone have a Launchpad script that could help with the bulk
> filter/export? Also, are any teams concerned about taking on their API
> reference bugs? Let's chat.
>
 
I had started work on a tool for filing api-ref bugs for Nova before the
Summit and just haven't had a chance to pick it back up. Anyone is
welcome to fork it or submit PRs to fix it up. I was trying to get
Launchpad login to work when I had to halt work on it ahead of the
summit. It's still very much a WIP, and someone may have something
better out there.
 
https://github.com/missaugustina/nova-api-docs-tracker
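
For the bulk filter/export question above, a rough launchpadlib sketch (the
project name, tag and status filters are assumptions to adapt, not a tested
tool):

    # List open api-ref bugs on a Launchpad project for triage/re-filing.
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with("api-ref-triage", "production")
    project = lp.projects["openstack-api-site"]
    tasks = project.searchTasks(tags=["api-ref"],
                                status=["New", "Confirmed", "Triaged"])
    for task in tasks:
        print(task.bug.id, task.bug.title)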
 
 
--
Augustina Ragwitz
Sr Systems Software Engineer, HPE Cloud
Hewlett Packard Enterprise
---
irc: auggy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-10 Thread Michał Dulko
On 05/10/2016 07:46 AM, Sean McGinnis wrote:
> It has been one week and all very positive feedback. I have now added
> Michał to the cinder-core group.
>
> Welcome Michał! Glad to have your expertise in the group.
>
> Sean

Thank you all for the mentoring and support! I'll do my best to live up
to the expectations.

…but I'll start making that happen next week, when I return from my
post-summit vacation. :)

> On Tue, May 03, 2016 at 01:16:59PM -0500, Sean McGinnis wrote:
>> Hey everyone,
>>
>> I would like to nominate Michał Dulko to the Cinder core team. Michał's
>> contributions with both code reviews [0] and code contributions [1] have
>> been significant for some time now.
>>
>> His persistence with versioned objects has been instrumental in getting
>> support in the Mitaka release for rolling upgrades.
>>
>> If there are no objections from current cores by next week, I will add
>> Michał to the core group.
>>
>> [0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
>> [1]
>> https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged
>>
>> Thanks!
>>
>> Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Adam Young
Forget package management for a moment; we can figure it out if we need 
to.  The question is "Why Go?", which I've pondered for a while.



If you need to write a multithreaded app, Python's GIL makes it very 
hard to do.  It is one reason why I pushed for HTTPD as the Keystone 
front end.


Python has long offered the option of dropping to native code as the way to 
deal with performance-sensitive segments.
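
A minimal sketch of that pattern (the library name and checksum() symbol are
hypothetical, just to show the shape of it):

    # Keep orchestration in Python; push the hot loop into a compiled library.
    import ctypes

    lib = ctypes.CDLL("./libfastpath.so")   # built from C, C++, Go, or Rust
    lib.checksum.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
    lib.checksum.restype = ctypes.c_uint32

    def checksum(data):
        # ctypes releases the GIL for the duration of the foreign call.
        return lib.checksum(data, len(data))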


The question, then, is what language are you going to use to write that 
perf sensitive native code?


To date, there have been two realistic options: straight C and C++. For 
numeric algorithms, there is a large body of code written in Fortran that 
is often pulled over for scientific operations.  The rest have been largely 
one-offs.


Go and Rust are interesting in that they are both native, as opposed to 
runtime compiled languages like Java and Python.  That makes them 
candidates for writing this kind of performance code.


Rust is not there yet.  I like it, but it is tricky to learn, and its 
packaging and distribution story is still falling into place.


Go has been more aggressively integrated into the larger community. 
Probably the most notable and relevant for our world is the Kubernetes 
push toward Go.


In the cosmic scheme of things, I see Go taking on C++ as the "native 
but organized" language, as contrasted with C, which is native but purely 
procedural and thus requires a lot more work to avoid security and 
project-scale issues.


So, I can see the desire not to start supporting C++, and to jump right 
to Go.  I think it is a reasonable language to investigate for this type 
of coding, but the case for committing to it is less obvious than it was 
for JavaScript:  with JavaScript, there is no alternative for dynamic web 
apps, while for native code there are several.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [barbican]barbican github installation failing

2016-05-10 Thread Kris G. Lindgren
uWSGI is a way to run the API portion of a Python code base.  You most likely 
need to install uwsgi for your operating system.

http://uwsgi-docs.readthedocs.io/en/latest/

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Akshay Kumar Sanghai <akshaykumarsang...@gmail.com>
Date: Tuesday, May 10, 2016 at 11:15 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, openstack-operators <openstack-operat...@lists.openstack.org>
Subject: [Openstack-operators] [openstack-dev][barbican]barbican github 
installation failing

Hi,
I have a 4 node working setup of openstack (1 controller, 1 network node, 2 
compute node).
I am trying to use ssl offload feature of lbaas v2. For that I need tls 
containers, hence barbican.
I did a git clone of barbican repo from https://github.com/openstack/barbican
Then ./bin/barbican.sh install
I am getting this error

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
return func(*args, **keywargs)
  File "barbican/tests/queue/test_keystone_listener.py", line 327, in 
test_should_wait
msg_server = keystone_listener.MessageServer(self.conf)
  File "barbican/queue/keystone_listener.py", line 156, in __init__
endpoints=[self])
  File "barbican/queue/__init__.py", line 112, in get_notification_server
allow_requeue)
TypeError: __init__() takes exactly 3 arguments (5 given)
Ran 1246 tests in 172.776s (-10.533s)
FAILED (id=1, failures=4, skips=4)
error: testr failed (1)
Starting barbican...
./bin/barbican.sh: line 57: uwsgi: command not found

Please help me.

Thanks
Akshay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Clint Byrum
Excerpts from Rayson Ho's message of 2016-05-10 07:19:23 -0700:
> On Tue, May 10, 2016 at 2:42 AM, Tim Bell  wrote:
> > I hope that the packaging technologies are considered as part of the TC
> evaluation of a new language. While many alternative approaches are
> available, a language which could not be packaged into RPM or DEB would be
> an additional burden for distro builders and deployers.
> >
> 
> I mentioned in earlier replies but I may as well mention it again: a
> package manager gives you no advantage in a language toolchain like Go (or
> Rust). In fact, when you code is written in Go, you will be spared from
> dependency hell.
> 

Package managers don't just resolve dependencies. You may have forgotten
the days _before_ apt-get, when they merely _expressed_ dependencies and it
was up to you to find, download, and install them all together.

There is also integration. Having an init script or systemd unit that
expresses when it makes sense to start this service is quite useful.

Source packages assist users in repeating the build the way the binary
they're using was built. If you do need to patch, patching the version
you're using, with the flags it used, means less entropy to deal with.
The alternative is going full upstream, which is great and should be done
for anything you intend to have a deep relationship with, but may not be
appropriate in every case.

Finally, the chain of trust is huge. Knowing that the binary in that
package is the one that was built by developers who understand your OS
is something we take for granted every time we 'yum install' or 'apt-get
install'. Of course, a Go binary distributor can make detached PGP
signatures of their binaries, or try to claim their server is secured
and HTTPS is enough. But that puts the onus on the user to figure out
how to verify, or places trust in the global PKI, which is usually fine
(and definitely better than nothing at all!) but far inferior to the
signed metadata/binary approach distros use.

> And, while not the official way to install Go, the Go toolchain can be
> packaged and in fact it is in Ubuntu 16.04 LTS:
> 
> https://launchpad.net/ubuntu/xenial/+source/golang-1.6
> 
> 
> IMO, the best use case of not using a package manager is when deploying
> into containers -- would you prefer to just drop a static binary of your Go
> code, or you would rather install "apt-get" into a container image, and
> then install the language runtime via apt-get, and finally your
> application?? I don't know about you, but many startup companies like Go as
> it would give them much faster time to react.
> 
> Lastly, I would encourage anyone who has never even used Go to at least
> download the Go toolchain and install it in your home directory (or if you
> are committed enough, system wide), and then compile a few hello world
> programs and see what Go gives you. Go won't give you everything, but I am
> a "pick the right tool for the right job" guy and I am pretty happy about
> Go.
> 

Go's fine. But so is Python. I think the debate here is whether Go has
enough strengths vs. Python to warrant endorsement by OpenStack.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican]barbican github installation failing

2016-05-10 Thread Akshay Kumar Sanghai
Hi,
I have a 4 node working setup of openstack (1 controller, 1 network node, 2
compute node).
I am trying to use ssl offload feature of lbaas v2. For that I need tls
containers, hence barbican.
I did a git clone of barbican repo from
https://github.com/openstack/barbican
Then ./bin/barbican.sh install
I am getting this error

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in
patched
return func(*args, **keywargs)
  File "barbican/tests/queue/test_keystone_listener.py", line 327, in
test_should_wait
msg_server = keystone_listener.MessageServer(self.conf)
  File "barbican/queue/keystone_listener.py", line 156, in __init__
endpoints=[self])
  File "barbican/queue/__init__.py", line 112, in get_notification_server
allow_requeue)
TypeError: __init__() takes exactly 3 arguments (5 given)
Ran 1246 tests in 172.776s (-10.533s)
FAILED (id=1, failures=4, skips=4)
error: testr failed (1)
Starting barbican...
./bin/barbican.sh: line 57: uwsgi: command not found

Please help me.

Thanks
Akshay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-10 Thread Anita Kuno
On 05/10/2016 12:48 PM, Dan Smith wrote:
>>> Hmm... that's unfortunate, as we were trying to get some of our less
>>> ephemeral items out of random etherpads and into the wiki (which has the
>>> value of being google indexed).
> 
> Yeah, I'm kinda surprised anyone would consider a wiki-less world. I'm
> definitely bummed at the thought of losing it.
> 
>> The Google indexing is also what makes the wiki so painful... After 6
>> years most of the content there is inaccurate or outdated. It's a
>> massive effort to clean it up without breaking the Google juice, and
>> nobody has the universal knowledge to determine if pages are still
>> accurate or not. We are bitten every day by newcomers finding wrong
>> information on the wiki and acting using it. It's getting worse every
>> day we keep on using it.
> 
> Sure, I think we all feel the pain of the stale information on the wiki.
> What if we were to do what we do for bug or review purges and make a
> list of pages, in reverse order of how recently they've been updated?
> Then we can have a few sprints to tag obviously outdated things to
> purge, and perhaps some things that just need some freshening.
> 
> There are a lot of nova-related things on the wiki that are the
> prehistory equivalent of specs, most of which are very misleading to
> people about the current state of things. I would think we could purge a
> ton of stuff like that pretty quickly. I'll volunteer to review such a
> list from the nova perspective.
> 
>> * Deprecate the current wiki and start over with another wiki (with
>> stronger ACL support ?)
> 
> I'm somewhat surprised that this is an issue, because I thought that the
> wiki requires an ubuntu login. Are spammers really getting ubuntu logins
> so they can come over and deface our wiki?

Yes.

> 
> --Dan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] CentOS binary and source gate failed due to the rabbitmq

2016-05-10 Thread Steven Dake (stdake)
Paul,

Please run strace with -f (trace child processes).

What you have there is not sufficient to do the job.  FWIW, erlang-16 is
totally busted in CentOS because of logging, and erlang-17, which fixes the
erlang crashes, is ready for testing.  Erlang-15 introduced IPv6, which
could also be the cause of the crashes seen in the gate, since they never
happened on -14 and prior.  It could also be a bug introduced in -16, so
testing -17 would be helpful.

See:
https://bugzilla.redhat.com/show_bug.cgi?id=1324922


Regards
-steve

On 5/10/16, 8:53 AM, "Paul Bourke"  wrote:

>I have a debug job open atm to try and investigate this:
>https://review.openstack.org/#/c/300988/
>
>If anyone is handy with strace here is a run with the output:
>
>http://logs.openstack.org/88/300988/6/check/gate-kolla-dsvm-deploy-centos-
>binary/726c3b1/console.html
>
>On 09/05/16 18:23, Hui Kang wrote:
>> The ubuntu gate deploy failed too; that is awkward.
>>
>> 
>>http://logs.openstack.org/05/314205/1/check/gate-kolla-dsvm-deploy-ubuntu
>>-source/766ee43/console.html#_2016-05-09_16_58_39_411
>>
>> - Hui
>>
>> On Mon, May 9, 2016 at 1:48 AM, Jeffrey Zhang 
>>wrote:
>>> I deploy the Kolla by using centos+source on centos host always.
>>> Never see such kinda of issue. So i can not re-produce this
>>> locally, now.
>>>
>>> On Mon, May 9, 2016 at 8:31 AM, Hui Kang 
>>>wrote:

 Hi, Jeffrey,
 I tried deploying centos binary and source on my ubuntu host, which
 completed successfully. Looking at the VMs on the failed gate, they
 are centos VMs.

 I think we need to debug this problem locally by deploying centos
 kolla on centos hosts. Is my understanding correct? Thanks.

 - Hui

 On Sat, May 7, 2016 at 9:54 AM, Jeffrey Zhang

 wrote:
> Recently, the centos binary and source gate failed due to the
>rabbitmq
> container
> existed. After making some debug. I do not found the root cause.
>
> does anyone has any idea for this?
>
> see this PS gate result[0]
> centos binary gate failed[1]
> CentOS source gate failed[2]
>
> [0] https://review.openstack.org/313838
> [1]
>
> 
>http://logs.openstack.org/38/313838/1/check/gate-kolla-dsvm-deploy-cen
>tos-binary/ea293fe/
> [2]
>
> 
>http://logs.openstack.org/38/313838/1/check/gate-kolla-dsvm-deploy-cen
>tos-source/d4cb127/
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
>
> 
>__
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

 
___
___
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>> 
>>>
>>>__
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-10 Thread Dan Smith
>> Hmm... that's unfortunate, as we were trying to get some of our less
>> ephemeral items out of random etherpads and into the wiki (which has the
>> value of being google indexed).

Yeah, I'm kinda surprised anyone would consider a wiki-less world. I'm
definitely bummed at the thought of losing it.

> The Google indexing is also what makes the wiki so painful... After 6
> years most of the content there is inaccurate or outdated. It's a
> massive effort to clean it up without breaking the Google juice, and
> nobody has the universal knowledge to determine if pages are still
> accurate or not. We are bitten every day by newcomers finding wrong
> information on the wiki and acting using it. It's getting worse every
> day we keep on using it.

Sure, I think we all feel the pain of the stale information on the wiki.
What if we were to do what we do for bug or review purges and make a
list of pages, in reverse order of how recently they've been updated?
Then we can have a few sprints to tag obviously outdated things to
purge, and perhaps some things that just need some freshening.
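
A rough sketch of how such a list could be generated from the MediaWiki API
(assuming the standard api.php endpoint is enabled; this is illustrative,
not an existing infra tool, and it makes one request per page, so it is only
suited to a one-off sprint):

    # Build a "least recently edited first" list of wiki pages.
    import requests

    API = "https://wiki.openstack.org/w/api.php"

    def all_pages():
        params = {"action": "query", "list": "allpages",
                  "aplimit": "500", "format": "json"}
        while True:
            data = requests.get(API, params=params).json()
            for page in data["query"]["allpages"]:
                yield page["title"]
            if "continue" not in data:
                break
            params.update(data["continue"])

    def last_edit(title):
        params = {"action": "query", "prop": "revisions", "titles": title,
                  "rvprop": "timestamp", "format": "json"}
        pages = requests.get(API, params=params).json()["query"]["pages"]
        return next(iter(pages.values()))["revisions"][0]["timestamp"]

    for title in sorted(all_pages(), key=last_edit):
        print(title)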

There are a lot of nova-related things on the wiki that are the
prehistory equivalent of specs, most of which are very misleading to
people about the current state of things. I would think we could purge a
ton of stuff like that pretty quickly. I'll volunteer to review such a
list from the nova perspective.

> * Deprecate the current wiki and start over with another wiki (with
> stronger ACL support ?)

I'm somewhat surprised that this is an issue, because I thought that the
wiki requires an ubuntu login. Are spammers really getting ubuntu logins
so they can come over and deface our wiki?

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-05-10 Thread Matt Riedemann

On 5/10/2016 11:24 AM, Alexandre Levine wrote:

Hi Matt,

Sorry I couldn't reply earlier - was away.
I'm worrying about ScaleIO ephemeral storage backend
(https://blueprints.launchpad.net/nova/+spec/scaleio-ephemeral-storage-backend)
which is not in this list but various clients are very interested in
having it working along with or instead of Ceph. Especially I'm worrying
in view of the global libvirt storage pools refactoring which looks like
a quite global effort to me judging by a number of preliminary reviews.
It seems to me that we wouldn't be able to squeeze ScaleIO additions
after this refactoring.
What can be done about it?
We could've contribute our initial changes to current code (which would
potentially allow easy backporting to previous versions as a benefit
afterwards) and promise to update our parts along with the refactoring
reviews or something like this.

Best regards,
  Alex Levine


On 5/6/16 3:34 AM, Matt Riedemann wrote:

There are still a few design summit sessions from the summit that I'll
recap but I wanted to get the priorities session recap out as early as
possible. We held that session in the last slot on Thursday. The full
etherpad is here [1].

The first part of the session was mostly going over schedule milestones.

We already started Newton with a freeze on spec approvals for new
things since we already have a sizable backlog [2]. Now that we're
past the summit we can approve specs for new things again.

The full Newton release schedule for Nova is in this wiki [3].

These are the major dates from here on out:

* June 2: newton-1, non-priority spec approval freeze
* June 30: non-priority feature freeze
* July 15: newton-2
* July 19-21: Nova Midcycle
* Aug 4: priority spec approval freeze
* Sept 2: newton-3, final python-novaclient release, FeatureFreeze,
Soft StringFreeze
* Sept 16: RC1 and Hard StringFreeze
* Oct 7, 2016: Newton Release

The important thing for most people right now is we have exactly four
weeks until the non-priority spec approval freeze. We then have about
one month after that to land all non-priority blueprints.

Keep in mind that we've already got 52 approved blueprints and most of
those were re-approved from Mitaka, so have been approved for several
weeks already.

The non-priority blueprint cycle is intentionally restricted in Newton
because of all of the backlog work we've had spilling over into this
release. We really need to focus on getting as much of that done as
possible before taking on more new work.

For the rest of the priorities session we talked about what our actual
review priorities are for Newton. The list with details and owners is
already available here [4].

In no particular order, these are the review priorities:

* Cells v2
* Scheduler
* API Improvements
* os-vif integration
* libvirt storage pools (for live migration)
* Get Me a Network
* Glance v2 Integration

We *should* be able to knock out glance v2, get-me-a-network and
os-vif relatively soon (I'm thinking sometime in June).

Not listed in [4] but something we talked about was volume
multi-attach with Cinder. We said this was going to be a 'stretch
goal' contingent on making decent progress on that item by
non-priority feature freeze *and* we get the above three smaller
priority items completed.

Another thing we talked about but isn't going to be a priority is
NFV-related work. We talked about cleaning up technical debt and
additional testing for NFV but had no one in the session signed up to
own that work or with concrete proposals on how to make improvements
in that area. Since we can't assign review priorities to something
that nebulous it was left out. Having said that, Moshe Levi has
volunteered to restart and lead the SR-IOV/PCI bi-weekly meeting [5]
(thanks again, Moshe!). So if you (or your employer, or your vendor)
are interested in working on NFV in Nova please attend that meeting
and get involved in helping out that subteam.

[1] https://etherpad.openstack.org/p/newton-nova-summit-priorities
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090370.html
[3] https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
[4]
https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html

[5]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093541.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Alexandre,

A closed-source vendor-specific ephemeral backend for a single virt 
driver in Nova isn't a review priority for the release. The review 
priorities we have for Newton are really broad multi-release efforts 
that we need to focus on.


This doesn't mean we aren't approving other specs/blueprints. We already 
have 53 approved blueprints for Newton and only 6 of those are 
implemented,

Re: [openstack-dev] [puppet] Stepping down from puppet core

2016-05-10 Thread Iury Gregory
Thanks for your work Clayton! =D

2016-05-10 13:30 GMT-03:00 Matt Fischer :

> On Tue, May 10, 2016 at 9:11 AM, Clayton O'Neill 
> wrote:
>
>> I’d like to step down as a core reviewer for the OpenStack Puppet
>> modules.  For the last cycle I’ve had very little time to spend
>> reviewing patches, and I don’t expect that to change in the next
>> cycle.  In addition, it used to be that I was contributing regularly
>> because we were early upgraders and the modules always needed some
>> work early in the cycle.  Under Emilien’s leadership this situation
>> has changed significantly and I find that the puppet modules generally
>> “just work” for us in most cases.
>>
>> I intend to still be contribute when I can and I’d like to thank
>> everyone for the hard work for the last two cycles.  The OpenStack
>> Puppet modules are really in great shape these days.
>>
>
>
> Thanks Clayton!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

~


Att[]'s
Iury Gregory Melo Ferreira
Master student in Computer Science at UFCG
E-mail: iurygreg...@gmail.com
~
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Stepping down from puppet core

2016-05-10 Thread Matt Fischer
On Tue, May 10, 2016 at 9:11 AM, Clayton O'Neill  wrote:

> I’d like to step down as a core reviewer for the OpenStack Puppet
> modules.  For the last cycle I’ve had very little time to spend
> reviewing patches, and I don’t expect that to change in the next
> cycle.  In addition, it used to be that I was contributing regularly
> because we were early upgraders and the modules always needed some
> work early in the cycle.  Under Emilien’s leadership this situation
> has changed significantly and I find that the puppet modules generally
> “just work” for us in most cases.
>
> I intend to still be contribute when I can and I’d like to thank
> everyone for the hard work for the last two cycles.  The OpenStack
> Puppet modules are really in great shape these days.
>


Thanks Clayton!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-05-10 Thread Alexandre Levine

Hi Matt,

Sorry I couldn't reply earlier - was away.
I'm worried about the ScaleIO ephemeral storage backend 
(https://blueprints.launchpad.net/nova/+spec/scaleio-ephemeral-storage-backend), 
which is not in this list, although various clients are very interested in 
having it working alongside or instead of Ceph. I'm especially worried in 
view of the global libvirt storage pools refactoring, which looks like quite 
a large effort to me, judging by the number of preliminary reviews. It seems 
to me that we wouldn't be able to squeeze the ScaleIO additions in after 
this refactoring.

What can be done about it?
We could contribute our initial changes against the current code (which 
would also allow easy backporting to previous versions as a side benefit) 
and promise to update our parts along with the refactoring reviews, or 
something like this.


Best regards,
  Alex Levine


On 5/6/16 3:34 AM, Matt Riedemann wrote:
There are still a few design summit sessions from the summit that I'll 
recap but I wanted to get the priorities session recap out as early as 
possible. We held that session in the last slot on Thursday. The full 
etherpad is here [1].


The first part of the session was mostly going over schedule milestones.

We already started Newton with a freeze on spec approvals for new 
things since we already have a sizable backlog [2]. Now that we're 
past the summit we can approve specs for new things again.


The full Newton release schedule for Nova is in this wiki [3].

These are the major dates from here on out:

* June 2: newton-1, non-priority spec approval freeze
* June 30: non-priority feature freeze
* July 15: newton-2
* July 19-21: Nova Midcycle
* Aug 4: priority spec approval freeze
* Sept 2: newton-3, final python-novaclient release, FeatureFreeze, 
Soft StringFreeze

* Sept 16: RC1 and Hard StringFreeze
* Oct 7, 2016: Newton Release

The important thing for most people right now is we have exactly four 
weeks until the non-priority spec approval freeze. We then have about 
one month after that to land all non-priority blueprints.


Keep in mind that we've already got 52 approved blueprints and most of 
those were re-approved from Mitaka, so have been approved for several 
weeks already.


The non-priority blueprint cycle is intentionally restricted in Newton 
because of all of the backlog work we've had spilling over into this 
release. We really need to focus on getting as much of that done as 
possible before taking on more new work.


For the rest of the priorities session we talked about what our actual 
review priorities are for Newton. The list with details and owners is 
already available here [4].


In no particular order, these are the review priorities:

* Cells v2
* Scheduler
* API Improvements
* os-vif integration
* libvirt storage pools (for live migration)
* Get Me a Network
* Glance v2 Integration

We *should* be able to knock out glance v2, get-me-a-network and 
os-vif relatively soon (I'm thinking sometime in June).


Not listed in [4] but something we talked about was volume 
multi-attach with Cinder. We said this was going to be a 'stretch 
goal' contingent on making decent progress on that item by 
non-priority feature freeze *and* we get the above three smaller 
priority items completed.


Another thing we talked about but isn't going to be a priority is 
NFV-related work. We talked about cleaning up technical debt and 
additional testing for NFV but had no one in the session signed up to 
own that work or with concrete proposals on how to make improvements 
in that area. Since we can't assign review priorities to something 
that nebulous it was left out. Having said that, Moshe Levi has 
volunteered to restart and lead the SR-IOV/PCI bi-weekly meeting [5] 
(thanks again, Moshe!). So if you (or your employer, or your vendor) 
are interested in working on NFV in Nova please attend that meeting 
and get involved in helping out that subteam.


[1] https://etherpad.openstack.org/p/newton-nova-summit-priorities
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090370.html

[3] https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
[4] 
https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html
[5] 
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093541.html





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-10 Thread John McDowall
Ryan,

Let me do that - I assume adding them to plugin.py is the right approach.

I have cleaned up https://github.com/doonhammer/networking-ovn and did a merge, 
so it should be a lot easier to see the changes. I am going to clean up ovs/ovn 
next. Once I have everything cleaned up and have made sure it is still working, 
I will move the code over to the port-pair/port-chain model.

Let me know if that works for you.

Regards

John

From: Ryan Moats <rmo...@us.ibm.com>
Date: Tuesday, May 10, 2016 at 7:38 AM
To: John McDowall <jmcdow...@paloaltonetworks.com>
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall <jmcdow...@paloaltonetworks.com> wrote on 05/09/2016 10:46:41 AM:

> From: John McDowall <jmcdow...@paloaltonetworks.com>
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, "OpenStack
> Development Mailing List" <openstack-dev@lists.openstack.org>
> Date: 05/09/2016 10:46 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Thanks - let me try and get the code cleaned up and rebased. One
> area that I could use your insight on is the interface to
> networking-ovn and how it should look.
>
> Regards
>
> John

Looking at this, the initial code that I think should move over are
_create_ovn_vnf and _delete_ovn_vnf and maybe rename them to
create_vnf and delete_vnf.

What I haven't figured out at this point is:
1) Is the above enough?
2) Do we need to refactor some of OVNPlugin's calls to provide hooks for the SFC
   driver to use for when the OVNPlugin inheritance goes away.

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [QA] running Fuel tests using nodepool

2016-05-10 Thread Spencer Krum
As a frequent tinc user, I'd be interested to see the code you are using
to get tinc to do this. Is that code available somewhere?

On Tue, May 10, 2016, at 09:02 AM, Monty Taylor wrote:
> On 05/10/2016 08:54 AM, Vladimir Eremin wrote:
> > Hi,
> > 
> > I've investigated status of nodepool multi node testing and fuel-qa
> > approaches, and I wanna share my opinion on moving Fuel testing on
> > OpenStack and nodepool.
> 
> Awesome! This is a great writeup - and hopefully will be useful as we
> validate our theory that zuul v3 should provide a richer environment for
> doing complex things like fuel testing than the current multi-node work.
> 
> > Our CI pipeline consists of next stages:
> > 
> > 1. Artifact building and publishing
> > 2. QA jobs:
> > 2.1. Master node installation from ISO
> > 2.2. Slave nodes provisioning
> > 2.3. Software deployment
> > 2.4. Workload verification
> > 
> > Current upstream nodepool limitations are pre-spwaned nodes, small
> > flavors and, only l3 connectivity. Also, we have no PXE booting and VLAN
> > trunking in OpenStack itself. So, the main problem with moving this
> > pipeline to nodepool is to emulate IT tasks: installation from ISO and
> > nodes provisioning.
> > 
> > Actually the point is: to test Fuel and test rest of OpenStack
> > components against Fuel we mostly need to test stage artifact building,
> > deployment and verification. So we need to make Fuel installable from
> > packages and create overlay L2 networking. I've found no unsolvable
> > problems right now to check most of scenarios with this approach.
> 
> 
> 
> > Besides artifact building step, there are next actions items to do to
> > run Fuel QA test:
> > 
> > 1. Automate overlay networking setup. I've
> > used https://www.tinc-vpn.org/ as a L2 switching overlay, but OpenVPN
> > could be tool of choice. Action items:
> >  - overlay networking setup should be integrated in fuel-devops
> 
> There is overlay work in the multi-node stuff for devstack. I believe
> clarkb has a todo-list item to make that networking setup more general
> and more generally available. (it's currently done in devstack-gate
> script) I'm not sure if you saw that or if it's suitable for what you
> need? If not, it would be good to understand deficiencies.
> 
> > 2. Automate Fuel master node codebase installation from packages,
> > including repo adding and deployment. Action items:
> > - installation should be integrated in fuel-devops or nodepool infra
> > - make bootstrap scripts working with more than one network on master
> > node ("Bringing down ALL network interfaces except...")
> > - fix iptables and ssh for underlay networking
> 
> We've talked a few times about handling packages and repos of packages
> for patches that have not yet landed, but have done exactly zero work on
> it. Since you're a concrete use case, perhaps we can design things with
> you in mind.
> 
> > 3. Automate Fuel slave node codebase installation and node enrollment.
> > Action items:
> > - nailgun-agent installation should be integrated in fuel-devops or
> > nodepool infra
> > - mcollective and ssh keys setup should be automated
> > - nailgun ang/or astute should be extended to allow pre-provisioned
> > nodes enrollment (I'm doing this part now)
> > - nailgun-agent and l23network should support overlay network interfaces
> 
> Exciting. I look forward to working on this with you - there are fun
> problems in here. :)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
  Spencer Krum
  n...@spencerkrum.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [QA] running Fuel tests using nodepool

2016-05-10 Thread Clark Boylan
On Tue, May 10, 2016, at 07:50 AM, Vladimir Eremin wrote:
> Hi Jeremy,
> 
> Yep, I saw it. Unfortunately, because Fuel deployment scenarios are about
> setting up OVS too, it could be kinda freaky to provide overlay
> networking for OVS on OVS. That's why I was looking at other L2 overlays
> (kernel-space mcast VXLAN was in scope too).

Neutron uses OVS + VXLAN too and we very explicitly do not nest the
tunnels. Neutron is thus able to happily make its internal VM network
overlays as it would normally over the actual VM networks. Basically
this is a non-issue. You only make overlays for the non-managed
networking, things like floating IP networks and so on.
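
As an illustration of that approach, here is a minimal sketch of building a per-node OVS/VXLAN overlay; the bridge and port names, the VNI, and the addresses are assumptions, not the actual devstack-gate implementation:

#!/usr/bin/env python
# Sketch only: build a per-node OVS/VXLAN overlay like the one described
# above.  Assumes Open vSwitch is installed and this runs as root.
import subprocess


def run(cmd):
    # Shell out to ovs-vsctl and fail loudly if anything goes wrong.
    subprocess.check_call(cmd)


def build_overlay(bridge, local_ip, peer_ips, vni=101):
    run(["ovs-vsctl", "--may-exist", "add-br", bridge])
    for i, peer in enumerate(peer_ips):
        port = "vxlan-%d" % i
        run(["ovs-vsctl", "--may-exist", "add-port", bridge, port, "--",
             "set", "interface", port, "type=vxlan",
             "options:local_ip=%s" % local_ip,
             "options:remote_ip=%s" % peer,
             "options:key=%d" % vni])


if __name__ == "__main__":
    # Hypothetical two-node overlay; run the mirror image on the peer node.
    build_overlay("br-overlay", "10.0.0.5", ["10.0.0.6"])

Everything tenant-facing (the nested OVS that Fuel itself deploys) then rides on top of this bridge as usual.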

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [QA] running Fuel tests using nodepool

2016-05-10 Thread Monty Taylor
On 05/10/2016 08:54 AM, Vladimir Eremin wrote:
> Hi,
> 
> I've investigated status of nodepool multi node testing and fuel-qa
> approaches, and I wanna share my opinion on moving Fuel testing on
> OpenStack and nodepool.

Awesome! This is a great writeup - and hopefully will be useful as we
validate our theory that zuul v3 should provide a richer environment for
doing complex things like fuel testing than the current multi-node work.

> Our CI pipeline consists of the following stages:
> 
> 1. Artifact building and publishing
> 2. QA jobs:
> 2.1. Master node installation from ISO
> 2.2. Slave nodes provisioning
> 2.3. Software deployment
> 2.4. Workload verification
> 
> Current upstream nodepool limitations are pre-spawned nodes, small
> flavors, and only L3 connectivity. Also, we have no PXE booting or VLAN
> trunking in OpenStack itself. So, the main problem with moving this
> pipeline to nodepool is emulating the IT tasks: installation from ISO and
> node provisioning.
> 
> Actually the point is: to test Fuel, and to test the rest of the OpenStack
> components against Fuel, we mostly need to cover the artifact building,
> deployment and verification stages. So we need to make Fuel installable from
> packages and create overlay L2 networking. I've found no unsolvable
> problems so far that would prevent checking most scenarios with this approach.



> Besides the artifact building step, the following action items are needed to
> run the Fuel QA tests:
> 
> 1. Automate overlay networking setup. I've
> used https://www.tinc-vpn.org/ as an L2 switching overlay, but OpenVPN
> could be the tool of choice. Action items:
>  - overlay networking setup should be integrated in fuel-devops

There is overlay work in the multi-node stuff for devstack. I believe
clarkb has a todo-list item to make that networking setup more general
and more generally available. (it's currently done in devstack-gate
script) I'm not sure if you saw that or if it's suitable for what you
need? If not, it would be good to understand deficiencies.

> 2. Automate Fuel master node codebase installation from packages,
> including repo adding and deployment. Action items:
> - installation should be integrated in fuel-devops or nodepool infra
> - make bootstrap scripts work with more than one network on the master
> node ("Bringing down ALL network interfaces except...")
> - fix iptables and ssh for underlay networking

We've talked a few times about handling packages and repos of packages
for patches that have not yet landed, but have done exactly zero work on
it. Since you're a concrete use case, perhaps we can design things with
you in mind.
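
As a strawman for that design discussion, a minimal sketch of the consuming side on a CentOS-based master; the per-change artifact URL layout and repo naming are pure assumptions, since nothing like this exists in infra today:

# Sketch only: point a node at a hypothetical per-change package repository
# so yum picks up packages built from a not-yet-landed patch.
REPO_TEMPLATE = """\
[fuel-change-{change}-{patchset}]
name=Packages built for change {change},{patchset}
baseurl={mirror}/{change}/{patchset}/centos/
gpgcheck=0
enabled=1
"""


def write_repo_file(change, patchset, mirror,
                    path="/etc/yum.repos.d/fuel-change.repo"):
    # Render and install the .repo file for the unlanded packages.
    with open(path, "w") as repo:
        repo.write(REPO_TEMPLATE.format(change=change, patchset=patchset,
                                        mirror=mirror))


if __name__ == "__main__":
    write_repo_file(314205, 1, "http://artifacts.example.org/fuel-packages")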

> 3. Automate Fuel slave node codebase installation and node enrollment.
> Action items:
> - nailgun-agent installation should be integrated in fuel-devops or
> nodepool infra
> - mcollective and ssh keys setup should be automated
> - nailgun and/or astute should be extended to allow pre-provisioned
> nodes enrollment (I'm doing this part now)
> - nailgun-agent and l23network should support overlay network interfaces

Exciting. I look forward to working on this with you - there are fun
problems in here. :)
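
As a discussion aid only, a rough sketch of what enrolling a pre-provisioned node could look like over HTTP; the endpoint, payload fields, and status value are assumptions, since this enrollment path is exactly what still has to be designed in nailgun/astute:

# Sketch only: enroll an already-provisioned node with the Fuel master.
# The API endpoint and payload below are invented for this discussion.
import json

import requests

FUEL_API = "http://10.20.0.2:8000/api"  # assumed master node API endpoint


def enroll_node(mac, ip, hostname):
    node = {"mac": mac, "ip": ip, "hostname": hostname, "status": "discover"}
    resp = requests.post(FUEL_API + "/nodes/", data=json.dumps(node),
                         headers={"Content-Type": "application/json"})
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(enroll_node("52:54:00:12:34:56", "10.20.0.10", "slave-01"))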


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Watcher module for Puppet

2016-05-10 Thread Emilien Macchi
ack on my side.

So you'll need to create a governance patch like:
https://review.openstack.org/#/c/252959/

and project-config patch to create the repo like:
https://review.openstack.org/#/c/251857/

Once it's done, PTL (currently me) will review it.
Once the module is created, we will use cookiecutter to generate a
module and you'll be free to contribute to it.

Thanks for your collaboration!

On Tue, May 10, 2016 at 10:25 AM, Daniel Pawlik
 wrote:
> Hello,
> I'm working on the implementation of a new puppet module for OpenStack Watcher
> (https://launchpad.net/watcher).
> I'm already working on this module and would like to share it when it is
> done.
>
> Could someone tell me how I can proceed to join the puppet team's workflow?
>
>
> By the way, the Watcher team plans to be in the big tent by the newton-1 milestone.
>
> Regards,
> Daniel Pawlik
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] CentOS binary and source gate failed due to the rabbitmq

2016-05-10 Thread Paul Bourke
I have a debug job open atm to try and investigate this: 
https://review.openstack.org/#/c/300988/


If anyone is handy with strace here is a run with the output:

http://logs.openstack.org/88/300988/6/check/gate-kolla-dsvm-deploy-centos-binary/726c3b1/console.html
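
For anyone who wants to reproduce the trace locally, a minimal sketch of attaching strace to the rabbitmq container's main process; the container name, duration, and output path are assumptions, and it assumes Docker and strace are present on the host and that it runs as root:

# Sketch only: capture syscalls from the rabbitmq container so the exit
# path is visible in the gate logs.
import subprocess


def trace_container(name="rabbitmq", output="/tmp/rabbitmq.strace",
                    seconds=60):
    # Resolve the host PID of the container's init process.
    pid = subprocess.check_output(
        ["docker", "inspect", "--format", "{{.State.Pid}}", name]
    ).decode().strip()
    # Follow forks and timestamp every syscall.
    subprocess.call(["timeout", str(seconds), "strace", "-f", "-tt",
                     "-o", output, "-p", pid])
    return output


if __name__ == "__main__":
    print("strace output written to %s" % trace_container())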

On 09/05/16 18:23, Hui Kang wrote:

The ubuntu gate deploy failed too; that is awkward.

http://logs.openstack.org/05/314205/1/check/gate-kolla-dsvm-deploy-ubuntu-source/766ee43/console.html#_2016-05-09_16_58_39_411

- Hui

On Mon, May 9, 2016 at 1:48 AM, Jeffrey Zhang  wrote:

I always deploy Kolla using centos+source on a CentOS host and have
never seen this kind of issue, so I cannot reproduce it
locally right now.

On Mon, May 9, 2016 at 8:31 AM, Hui Kang  wrote:


Hi, Jeffrey,
I tried deploying centos binary and source on my ubuntu host, which
completed successfully. Looking at the VMs on the failed gate, they
are centos VMs.

I think we need to debug this problem locally by deploying centos
kolla on centos hosts. Is my understanding correct? Thanks.

- Hui

On Sat, May 7, 2016 at 9:54 AM, Jeffrey Zhang 
wrote:

Recently, the CentOS binary and source gates have failed because the rabbitmq
container exited. After some debugging, I have not found the root cause.

Does anyone have any idea about this?

see this PS gate result[0]
centos binary gate failed[1]
CentOS source gate failed[2]

[0] https://review.openstack.org/313838
[1]

http://logs.openstack.org/38/313838/1/check/gate-kolla-dsvm-deploy-centos-binary/ea293fe/
[2]

http://logs.openstack.org/38/313838/1/check/gate-kolla-dsvm-deploy-centos-source/d4cb127/

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Team blogs

2016-05-10 Thread Mathieu Gagné
On Tue, May 10, 2016 at 8:05 AM, Jeremy Stanley  wrote:
> On 2016-05-09 19:46:14 -0400 (-0400), Sean Dague wrote:
>> Honestly, I'm really liking that more of them are hitting the
>> mailing list proper this time around. Discoverability is key. The
>> mailing list is a shared medium, archived forever.
>
> I feel the same (says the guy who is still in the process of
> drafting his to send to the ML, hopefully later today). I'm not sure
> what drives people to put these on random personal blogs instead,
> but the "blog" of our contributor community is the openstack-dev
> mailing list.

I find it hard to find older threads on the mailing list.
The online HTML archive isn't great and doesn't group those kinds of
threads so that people can easily find them later.
Unlike a mailing list, a blog allows some form of grouping or tagging for
easy discovery.

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-05-10 Thread Shiina, Hironori
Hi all,

I'm working with Tien, who is the submitter of one[1] of the console specs.
I joined the console session in Austin.

In the session, we got the following consensus.
- focus on serial console in Newton
- use nova-serial proxy as is

We also got some requirements[2] for this feature in the session.
We have started cooperating with Akira and Yuiko who submitted another similar 
spec[3].
We're going to unite our specs and add solutions for the requirements ASAP.

[1] ironic-ipmiproxy: https://review.openstack.org/#/c/296869/
[2] https://etherpad.openstack.org/p/ironic-newton-summit-console
[3] ironic-console-server: https://review.openstack.org/#/c/306755/

Thanks,
Hironori Shiina

> -Original Message-
> From: Akira Yoshiyama [mailto:akirayoshiy...@gmail.com]
> Sent: Saturday, April 23, 2016 9:26 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [ironic][nova][horizon] Serial console support 
> for ironic instances
> 
> Hi all,
> 
> 
> Thank you Yuiko. I'll join the console session. See you at the venue.
> 
> 
> (2)Add console drivers using ironic-console-server
> https://review.openstack.org/#/c/302291/ (ironic-console-server)
> 
> https://review.openstack.org/#/c/306754/ (console logging spec)
> https://review.openstack.org/#/c/306755/ (serial console spec)
> 
> 
> * Pros:
> - There is no influence to other components like nova and horizon.
> 
>   Only adding 2 methods to nova.virt.ironic.driver.IronicDriver
> 
> - No additional nova/ironic service required but a tool 
> (ironic-console-server)
> 
> - No change required for pre-existing console drivers
> - Output console log files; users can show them by 'nova console-log'
> 
>   ex. 
> https://github.com/yosshy/wiki/wiki/image/ironic_console_on_horizon-22.png
> 
> 
> * Cons:
> - Need to bump API microversion/RPC for Ironic because it has no console 
> logging capability now.
> 
> 
> Regards,
> Akira
> 
> 
> 2016-04-13 17:47 GMT+09:00 Yuiko Takada:
> 
> 
>   Hi,
> 
>   I also want to discuss about it at summit session.
> 
> 
>   2016-04-13 0:41 GMT+09:00 Ruby Loo:
> 
> 
>   Yes, I think it would be good to have a summit session on that. 
> However, before the session, it would really
> be helpful if the folks with proposals got together and/or reviewed each 
> other's proposals, and summarized their findings.
> 
> 
>   I've summarized all of related proposals.
> 
>   (1)Add driver using Socat
>   https://review.openstack.org/#/c/293827/
> 
>   * Pros:
>   - There is no influence to other components
>   - Don't need to change any other Ironic drivers(like 
> IPMIShellinaboxConsole)
>   - Don't need to bump API microversion/RPC
> 
>   * Cons:
>   - Don't output log file
> 
>   (2)Add driver starting ironic-console-server
>   https://review.openstack.org/#/c/302291/
>   (There is no spec, yet)
> 
>   * Pros:
>   - There is no influence to other components
>   - Output log file
>   - Don't need to change any other Ironic drivers(like 
> IPMIShellinaboxConsole)
>   - No adding any Ironic services required, only add tools
> 
>   * Cons:
>   - Need to bump API microversion/RPC
> 
>   (3)Add a custom HTTP proxy to Nova
>   https://review.openstack.org/#/c/300582/
> 
>   * Pros:
>   - Don't need any change to Ironic API
> 
>   * Cons:
>   - Need Nova API changes(bump microversion)
>   - Need Horizon changes
>   - Don't output log file
> 
>   (4)Add Ironic-ipmiproxy server
>   https://review.openstack.org/#/c/296869/
> 
>   * Pros:
>   - There is no influence to other components
>   - Output log file
>   - IPMIShellinaboxConsole will be also available via Horizon
> 
>   * Cons:
>   - Need IPMIShellinaboxConsole changes?
>   - Need to bump API microversion/RPC
> 
>   If there is any mistake, please give me comment.
> 
> 
>   Best Regards,
>   Yuiko Takada
> 
>   2016-04-13 0:41 GMT+09:00 Ruby Loo:
> 
> 
>   Yes, I think it would be good to have a summit session on that. 
> However, before the session, it would really
> be helpful if the folks with proposals got together and/or reviewed each 
> other's proposals, and summarized their findings.
> You may find after reviewing the proposals, that eg only 2 are really 
> different. Or they several have merit because they are
> addressing slightly different issues. That would make it easier to 
> present/discuss/decide at the session.
> 
>   --ruby
> 
> 
> 
>   On 12 April 2016 at 09:17, Jim Rollenhagen
>    wrote:
> 
> 
>   On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu 
> wrote:
>   > Maybe we can continue the discussion here, as there's 
> no enough time in the
>  

Re: [openstack-dev] Wiki

2016-05-10 Thread Joshua Harlow

Thierry Carrez wrote:

Sean Dague wrote:

On 05/09/2016 06:53 PM, Monty Taylor wrote:

On 05/09/2016 05:45 PM, Robert Collins wrote:

IIRC mediawiki provides RSS of changes... maybe just using the wiki
more would be a good start, and have zero infra costs?


We'd actually like to start using the wiki less, per the most recent
summit. Also, the wiki currently has new accounts turned off (thanks
spammers) so if you don't have a wiki account now, you're not getting
one soon.


Hmm... that's unfortunate, as we were trying to get some of our less
ephemeral items out of random etherpads and into the wiki (which has the
value of being google indexed).


The Google indexing is also what makes the wiki so painful... After 6
years most of the content there is inaccurate or outdated. It's a
massive effort to clean it up without breaking the Google juice, and
nobody has the universal knowledge to determine if pages are still
accurate or not. We are bitten every day by newcomers finding wrong
information on the wiki and acting on it. It's getting worse every
day we keep on using it.

Also the Google juice is what made our wiki a target for spammers /
defacers. We don't have an army of maintainers like Wikipedia ready to
jump at any defacement, so the fully open nature of the wiki which makes
it so convenient (anyone can create or modify a page) is also its
major flaw (anyone can create or modify a page).

We moved most of the reference information out of the wiki to proper
documentation and peer-reviewed websites (security.o.o, governance.o.o,
releases.o.o...) but we still need somewhere to easily publish random
pages -- something between etherpad (too transient) and proper
documentation (too formal). Ideally the new tool would make it clear
that the page is not canonical information, so that we avoid the wiki
effect. Three options:

* Keep the current wiki to achieve that (valid option if we have a whole
team of wiki gardeners to weed out outdated pages and watch for
spam/defacement -- and history proved that we don't)

* Drop the current wiki and replace it by another lightweight
publication solution (if there is anything convenient)

* Deprecate the current wiki and start over with another wiki (with
stronger ACL support ?)



Would the previous topic (team blogs) be a good replacement for this? If
information on the wiki is project-specific, then why not just allow each
project to have a blog and/or wiki-blog combination, with the project that
owns the blog/wiki-blog being responsible for
maintaining it...


I don't know if any software solution exists for this, but I guess we 
are all brainstorming anyway :)


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] composable roles team

2016-05-10 Thread Dan Prince
On Fri, 2016-04-29 at 15:27 -0500, Emilien Macchi wrote:
> Hi,
> 
> One of the most urgent tasks we need to achieve in TripleO during
> Newton cycle is the composable roles support.
> So we decided to build a team that would focus on it during the next
> weeks.
> 
> We started this etherpad:
> https://etherpad.openstack.org/p/tripleo-composable-roles-work

Sorry I missed this. So there is an older etherpad where we are
actually maintaining stuff here now:

https://etherpad.openstack.org/p/tripleo-composable-services

Thanks,

Dan

> 
> So anyone can help or check where we are.
> We're pushing / going to push a lot of patches, we would appreciate
> some reviews and feedback.
> 
> Also, I would like to propose to -1 every patch that is not
> composable-role-helpful, it will help us to move forward. Our team
> will be available to help in the patches, so we can all converge
> together.
> 
> Any feedback is welcome, thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Team blogs

2016-05-10 Thread Joshua Harlow

Jeremy Stanley wrote:

On 2016-05-09 19:46:14 -0400 (-0400), Sean Dague wrote:

Honestly, I'm really liking that more of them are hitting the
mailing list proper this time around. Discoverability is key. The
mailing list is a shared medium, archived forever.


I feel the same (says the guy who is still in the process of
drafting his to send to the ML, hopefully later today). I'm not sure
what drives people to put these on random personal blogs instead,
but the "blog" of our contributor community is the openstack-dev
mailing list.


Understood (it's also why I sent the oslo one to the ML); my
thinking was just along the lines of something more project-focused (yes, you
could say ML tags are this) and a little more free-form. Perhaps
something that could, say, include diagrams and pictures; for example, to
explain how feature `XYZ` is done, a diagram explaining/showing the
components of `XYZ` can be very useful.


I guess since the wiki might be going away, perhaps these project blogs 
could be the replacement? Something perhaps like 
https://openstack-security.github.io/ (but say not on github); at least 
then it becomes the project's job to prune content and approve new 
content (via gerrit?) and IMHO would lead to less spam (although I do 
find it funny that when I have to edit the wiki it recently asks me 
questions like 'what is the first letter of this question' before 
saving, I guess that's for spam protection).


Seems like something like the following really wouldn't be that hard?

http://openstack.org/blog/oslo
http://openstack.org/blog/nova
http://openstack.org/blog/$project_here

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Bootstrapping new team for Requirements.

2016-05-10 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2016-05-07 11:02:23 -0400:
> Dirk, Haïkel, Igor, Alan, Tony, Ghe,
> 
> Please see brain dump here - 
> https://etherpad.openstack.org/p/requirements-tasks
> 
> Looking at time overlap, it seems that most of you are in one time
> range and Tony and I are outliers
> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160506&p1=43&p2=240&p3=195&p4=166&p5=83&p6=281&p7=141)
> 
> So one choice for time is 7:00 AM or 8:00 AM my time which will be
> 9:00/10:00 PM for Tony. Are there other options that anyone sees?
> Please let me know which days work as well.
> 
> dhellmann, sdague, markmcclain, ttx, lifeless,
> Since you are on the current requirements-core gerrit group, Can you
> please review the etherpad and add your thoughts/ideas/pointers to
> transfer knowledge to the new folks?

I've added a bunch of the todo items that came up in the release team
meetup Friday at the summit.

Doug

> 
> To be clear, we are not yet adding new folks to the gerrit group, At
> the moment, i am just getting everyone familiar and productive with
> what we do now and see who is still around doing stuff in a couple of
> months :)
> 
> Anyone else want to help, Jump in please!
> 
> Thanks,
> Dims
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Stepping down from puppet core

2016-05-10 Thread Emilien Macchi
On Tue, May 10, 2016 at 11:11 AM, Clayton O'Neill  wrote:
> I’d like to step down as a core reviewer for the OpenStack Puppet
> modules.  For the last cycle I’ve had very little time to spend
> reviewing patches, and I don’t expect that to change in the next
> cycle.  In addition, it used to be that I was contributing regularly
> because we were early upgraders and the modules always needed some
> work early in the cycle.  Under Emilien’s leadership this situation
> has changed significantly and I find that the puppet modules generally
> “just work” for us in most cases.

Well, thanks a lot for your work.
Your contribution as an operator is one of the reasons why Puppet
modules are used in production today.
Thank you for your time, for your feedback, and for your openness; it was
very appreciated.

> I intend to still contribute when I can and I’d like to thank
> everyone for the hard work for the last two cycles.  The OpenStack
> Puppet modules are really in great shape these days.

I hope so! Also feel free to kick our ass every time we do something
wrong. Your voice will still be listened to and your feedback
appreciated.

Thanks again,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #80

2016-05-10 Thread Emilien Macchi
We did our meeting, and you can read notes here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-05-10-15.01.html

Thanks!

On Mon, May 9, 2016 at 9:17 AM, Emilien Macchi  wrote:
> Hi,
>
> Tomorrow, we'll have our weekly meeting, I added a few topics:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160510
>
> Feel free to add more and also submit any outstanding bug or patch.
>
> Thanks,
>
> On Tue, May 3, 2016 at 7:58 AM, Emilien Macchi  wrote:
>> Hi,
>>
>> If you have any topic that you would like to discuss, please add it to
>> the topic list:
>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160503
>>
>> If we don't have topics, we'll probably cancel the meeting this week.
>> I'm currently working on a summary of what happened during the Summit
>> for Puppet OpenStack project.
>>
>> Thanks,
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] api-ref sprint today & wed

2016-05-10 Thread Sean Dague
On 05/09/2016 06:27 PM, Augustina Ragwitz wrote:
> Currently it's really hard to tell (at least to me) which files have
> patches against them and which don't. I've had to manually make a
> spreadsheet because it's not obvious to me, at a glance, from the
> current commit messages, and I've accidentally started work on several
> files that already have owners. Maybe if people could put the .inc
> filename in their commit message, or maybe we could agree on a
> consistent commit message for whichever phase we're on, it would be
> easier to tell, at a glance from the list, what's already being worked
> on. Other suggestions welcome, or if there's another list somewhere I
> don't know about, a link to that would be great.
> 
> This is the list I'm referring to:
> https://review.openstack.org/#/q/project:openstack/nova+file:api-ref+status:open

Also, while only a point in time, here are the currently open reviews,
and the files they touch:

Open reviews changing files
 - https://review.openstack.org/311070 -
[u'api-ref/source/os-floating-ip-pools.inc']
 - https://review.openstack.org/311727 -
[u'api-ref/source/servers-admin-action.inc']
 - https://review.openstack.org/313532 -
[u'api-ref/source/parameters.yaml', u'api-ref/source/servers.inc']
 - https://review.openstack.org/314085 - [u'api-ref/source/diagnostics.inc']
 - https://review.openstack.org/314101 -
[u'api-ref/source/extensions.inc', u'api-ref/source/parameters.yaml']
 - https://review.openstack.org/314133 - [u'api-ref/source/flavors.inc',
u'api-ref/source/parameters.yaml']
 - https://review.openstack.org/314198 - [u'api-ref/source/os-networks.inc']
 - https://review.openstack.org/314257 -
[u'api-ref/source/parameters.yaml',
u'api-ref/source/servers-action-console-output.inc']
 - https://review.openstack.org/314268 - [u'api-ref/source/images.inc']
 - https://review.openstack.org/314310 -
[u'api-ref/source/os-migrations.inc']
 - https://review.openstack.org/314320 - [u'api-ref/source/ips.inc']
 - https://review.openstack.org/314325 - [u'api-ref/source/os-volumes.inc']
 - https://review.openstack.org/314328 - [u'api-ref/source/ips.inc']
 - https://review.openstack.org/314502 -
[u'api-ref/source/os-keypairs.inc', u'api-ref/source/parameters.yaml']
 - https://review.openstack.org/314566 -
[u'api-ref/source/_static/api-site.css',
u'api-ref/source/parameters.yaml',
u'api-ref/source/servers-action-crash-dump.inc']
 - https://review.openstack.org/314629 -
[u'api-ref/source/servers-action-shelve.inc']

I've got some code to generate this, but it's a bit of a mess with hard
coded gerrit id/pass. I'll get it cleaned up and published by end of day
(hopefully).
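
Until that script is published, a rough equivalent can be sketched against Gerrit's anonymous REST API, with no hard-coded credentials; the output format here is illustrative rather than the burndown script itself:

# Sketch only: list open nova changes touching api-ref files and the files
# each change touches, via the anonymous Gerrit REST API.
import json

import requests

GERRIT = "https://review.openstack.org"
QUERY = "project:openstack/nova status:open file:^api-ref/.*"


def open_api_ref_changes():
    resp = requests.get(GERRIT + "/changes/",
                        params={"q": QUERY,
                                "o": ["CURRENT_REVISION", "CURRENT_FILES"]})
    resp.raise_for_status()
    # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI.
    for change in json.loads(resp.text.split("\n", 1)[1]):
        revision = change["revisions"][change["current_revision"]]
        files = [f for f in revision["files"] if f != "/COMMIT_MSG"]
        yield change["_number"], files


if __name__ == "__main__":
    for number, files in open_api_ref_changes():
        print(" - https://review.openstack.org/%s - %s" % (number, files))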

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStackClient] Meeting change reminder

2016-05-10 Thread Dean Troyer
This is a reminder that the OSC even week team meetings have changed time
to Thursdays at 13:00 UTC in #openstack-meeting-3.  See the eavesdrop
meeting page [0] for complete information.

dt

[0] http://eavesdrop.openstack.org/#OpenStackClient_Team_Meeting

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Midcycle: virtual or physical?

2016-05-10 Thread Loo, Ruby
The poll closes Sunday night/Monday morning, i.e., whenever Jim gets around to 
looking at the numbers Monday (May 16) morning, before our weekly ironic 
meeting :)

--ruby



On 2016-05-09, 6:24 PM, "Jim Rollenhagen"  wrote:

>Hey all,
>
>In this morning's meetings we discussed having a virtual midcycle again
>this cycle, versus a physical midcycle.
>
>Pros of virtual:
>
>* More people can attend (we'd be missing a significant portion of our
>  core team at a physical midcycle)
>* Lower cost for employers
>* It went very well last time!
>
>Cons of virtual:
>
>* No face-to-face non-work interaction (this is the main reason I hear)
>* Somewhat lower bandwidth
>* No whiteboards
>
>We seem to be about 50-50 split on this, so I'd like to ask for your
>preference. Here's a poll, please take it:
>
>http://doodle.com/poll/4vm5ea28t3qyn7bp
>
>Also note that if we do have a physical midcycle, that doesn't mean we
>can't also have some virtual meetups during the cycle :)
>
>// jim
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Easing contributions to central documentation

2016-05-10 Thread Matt Kassawara
Julien,

Project or framework... regardless of the word, consumers of OpenStack
(without additional knowledge) see it as a single entity. Anyway,
especially after implementing the big tent, the documentation team is not
large enough to assign one or more people to manage documentation in each
project repository (including bug triage and patch reviews) or attend
project meetings. As a result, the documentation team asks each project to
assign a liaison that advocates documentation with developers, assists
developers with contributing/maintaining documentation, and collaborates
with the documentation team. Collaboration includes bug triage, patch
reviews, attending meetings, and communicating via IRC and/or the mailing
list. Unfortunately, most projects do not collaborate effectively with the
documentation team which results in a disconnection between the
documentation team and projects/developers. Improving collaboration via
liaisons would resolve most problems.

On Tue, May 10, 2016 at 4:08 AM, Julien Danjou  wrote:

> On Mon, May 09 2016, Matt Kassawara wrote:
>
> > So, before developer frustrations drive some or all projects to move
> > their documentation in-tree, which negatively impacts the goal of
> > presenting a coherent product, I suggest establishing an agreement
> > between developers and the documentation team regarding the review
> > process.
>
> My 2c, but it's said all over the place that OpenStack is not a product,
> but a framework. So perhaps the goal you're pursuing is not working
> because it's not accessible by design?
>
> > 1) The documentation team should review the patch for compliance with
> > conventions (proper structure, format, grammar, spelling, etc.) and
> provide
> > feedback to the developer who updates the patch.
> > 2) The documentation team should modify the patch to make it compliant
> and
> > ask the developer for a final review to prior to merging it.
> > 3) The documentation team should only modify the patch to make it build
> (if
> > necessary) and quickly merge it with a documentation bug to resolve any
> > compliance problems in a future patch by the documentation team.
> >
> > What do you think?
>
> We, Telemetry, are moving our documentation in-tree and are applying a
> policy of "no doc, no merge" (same policy we had for unit tests).
> So until the doc team starts to help projects with that (proof-reading,
> pointing out missing doc update in patches, etc) and trying to be part
> of actual OpenStack projects, I don't think your goal will ever work.
>
> For example, we have an up-to-date documentation in Gnocchi since the
> beginning, that covers the whole project. It's probably not coherent
> with the rest of OpenStack in wording etc, but we'd be delighted to have
> some folks of the doc team help us with that.
>
> Cheers,
> --
> Julien Danjou
> /* Free Software hacker
>https://julien.danjou.info */
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Stepping down from puppet core

2016-05-10 Thread Clayton O'Neill
I’d like to step down as a core reviewer for the OpenStack Puppet
modules.  For the last cycle I’ve had very little time to spend
reviewing patches, and I don’t expect that to change in the next
cycle.  In addition, it used to be that I was contributing regularly
because we were early upgraders and the modules always needed some
work early in the cycle.  Under Emilien’s leadership this situation
has changed significantly and I find that the puppet modules generally
“just work” for us in most cases.

I intend to still contribute when I can and I’d like to thank
everyone for the hard work for the last two cycles.  The OpenStack
Puppet modules are really in great shape these days.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] api-ref sprint: Tuesday status

2016-05-10 Thread Sean Dague
On 05/09/2016 08:23 AM, Sean Dague wrote:
> There is a lot of work to be done to get the api-ref into a final state.
> 
> Review / fix existing patches -
> https://review.openstack.org/#/q/project:openstack/nova+file:api-ref+status:open
> shows patches not yet merged. Please review them, and if there are
> issues feel free to fix them.
> 
> Help create new API ref changes verifying some of the details -
> https://wiki.openstack.org/wiki/NovaAPIRef

We made some reasonable progress yesterday, including discovering things
like the completely unexposed standardized diagnostics infrastructure
(which will get reproposed as a microversion). The current burndown
state is here: http://burndown.dague.org

Thanks to the following folks for contributing so far:

Has proposed changes
 - Alex Xu
 - Anusha Unnam
 - Augustina Ragwitz
 - Ronald Bradford
 - Sarafraj Singh
 - Sean Dague
 - Sivasathurappan Radhakrishnan
 - Sujitha
 - jichenjc

Has had changes merged
 - Alex Xu
 - Ronald Bradford
 - Sean Dague
 - Sivasathurappan Radhakrishnan
 - jichenjc

Has reviewed changes
 - Alex Xu
 - Andrew Laski
 - John Garbutt
 - Ken'ichi Ohmichi
 - Matt Riedemann
 - Ronald Bradford
 - Sarafraj Singh
 - Sean Dague
 - jichenjc
 - melissaml
 - yejiawei

Although today officially is a break day from the sprint, I encourage
folks to carry on. There is a lot to be done here to get us in a sane place.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Watcher] Meeting time change

2016-05-10 Thread Antoine Cabot
Hi Watcher team,

As discussed in Austin, I would like to make our team meeting
more appropriate for people from Asia.

I suggest to keep the current time (14:00 UTC) on even weeks
and switch to 9:00 UTC for odd weeks. We can still meet on
#openstack-meeting-4 for both meetings.

Do you have any suggestion ?

Thank you,

Antoine
acabot

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [senlin] [keystone] [ceilometer] [telemetry] Questions about api-ref launchpad bugs

2016-05-10 Thread gordon chung


On 10/05/2016 9:36 AM, Anne Gentle wrote:
>
> It's a small set of files:
> https://github.com/openstack/api-site/tree/master/api-ref/source/telemetry/v2
> How about I ask someone to do the conversion and add it to
> https://github.com/openstack/ceilometer? I have someone in mind who's
> looking for a task. Let me know and I'll get her started.

i wouldn't mind this if it's free help :). Although we should probably 
add a disclaimer that things may be dropped as we work on streamlining 
the telemetry workflow, e.g. the Alarming API is only available via Aodh as 
of Mitaka.

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [QA] running Fuel tests using nodepool

2016-05-10 Thread Vladimir Eremin
Hi Jeremy,

Yep, I saw it. Unfortunately, because Fuel deployment scenarios are about 
setting up OVS too, it could be kinda freaky to provide overlay networking for 
OVS on OVS. That's why I was looking at other L2 overlays (kernel-space mcast 
VXLAN was in scope too).

But yes, we can still use this method (with l23network or multinode scripts for 
some cases).

-- 
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.



> On May 10, 2016, at 5:39 PM, Jeremy Stanley  wrote:
> 
> On 2016-05-10 15:54:34 +0300 (+0300), Vladimir Eremin wrote:
> [...]
>> 1. Automate overlay networking setup. I've used
>> https://www.tinc-vpn.org/  as a L2
>> switching overlay, but OpenVPN could be tool of choice. Action
>> items:
>>- overlay networking setup should be integrated in fuel-devops
> [...]
> 
> Just to be sure, you've seen the ovs_vxlan_bridge() implementation
> in devstack-gate where we set up an overlay L2 network using
> OVS/VXLAN? The same design also works fine with GRE (we used it for
> a while but ran into some service providers blocking IP protocol 47
> on their LANs).
> 
> http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/multinode_setup_info.txt
> http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n1050
> -- 
> Jeremy Stanley
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Smaug] Meeting time change

2016-05-10 Thread xiangxinyong
About the Smaug meeting time,


- 09:00 UTC for east, biweekly-odd


This time is very good for the eastern side of the globe.


Thanks saggi.


Best Regards,
xiangxinyong



-- Original --
From:  "Saggi Mizrahi";;
Date:  Tue, May 10, 2016 09:04 PM
To:  "openstack-dev@lists.openstack.org"; 
Cc:  "Eran Gampel"; "yinwei 
(E)"; 
Subject:  [openstack-dev] [Smaug] Meeting time change



  
Hi everyone,

We would like to make the Smaug meeting weekly instead of
biweekly, and arrange it so that one week is at a time preferable
for the eastern side of the globe and one week for
the western side of the globe.

The current time is Tuesdays at 14:00 UTC, which is 22:00 in China
and 07:00 PDT (if my calculations are correct).

I'm suggesting that we change it to:
- 15:00 UTC for west, biweekly-even
- 09:00 UTC for east, biweekly-odd

Are there any better suggestions?
Am I suggesting times that collide with other projects?
Please send approvals or suggestions, but remember to specify if
you are going to come to the east or west meeting.

Thank you all,
Let's build Smaug together!

-
 
 This email and any files transmitted and/or attachments with it are 
confidential and proprietary information of
 Toga Networks Ltd., and intended solely for the use of the individual or 
entity to whom they are addressed.
 If you have received this email in error please notify the system manager. 
This message contains confidential
 information of Toga Networks Ltd., and is intended only for the individual 
named. If you are not the named
 addressee you should not disseminate, distribute or copy this e-mail. Please 
notify the sender immediately
 by e-mail if you have received this e-mail by mistake and delete this e-mail 
from your system. If you are not 
 the intended recipient you are notified that disclosing, copying, distributing 
or taking any action in reliance on
 the contents of this information is strictly prohibited. 
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-10 Thread Ryan Moats


John McDowall  wrote on 05/09/2016 10:46:41
AM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" , "OpenStack
> Development Mailing List" 
> Date: 05/09/2016 10:46 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Thanks – let me try and get the code cleaned up and rebased. One
> area that I could use your insight on is the interface to
> networking-ovn and how it should look.
>
> Regards
>
> John

Looking at this, the initial code that I think should move over are
_create_ovn_vnf and _delete_ovn_vnf and maybe rename them to
create_vnf and delete_vnf.

What I haven't figured out at this point is:
1) Is the above enough?
2) Do we need to refactor some of OVNPlugin's calls to provide hooks for
the SFC
   driver to use for when the OVNPlugin inheritance goes away.
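
To help frame question 2, a bare-bones sketch of what the networking-ovn-facing surface could look like once the methods are moved and renamed as suggested above; the class, constructor, and argument names are assumptions for discussion, not existing networking-ovn code:

# Strawman only: no such class exists in networking-ovn today.
class OVNSfcClient(object):
    """Thin surface the networking-sfc driver would call into."""

    def __init__(self, nb_api):
        # nb_api would be the OVN northbound client networking-ovn already owns.
        self._nb_api = nb_api

    def create_vnf(self, context, port_chain, vnf_ports):
        # Renamed from _create_ovn_vnf: program the logical flows for one hop.
        raise NotImplementedError("design discussion placeholder")

    def delete_vnf(self, context, port_chain, vnf_ports):
        # Renamed from _delete_ovn_vnf: tear the hop back down.
        raise NotImplementedError("design discussion placeholder")

Whether these two calls are enough, or whether more of OVNPlugin needs to be refactored behind such a surface before the inheritance goes away, is exactly the open question.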

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >