Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-23 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2016-06-23 08:37:04 -0400:
> Excerpts from Doug Hellmann's message of 2016-06-13 15:11:17 -0400:
> > I'm trying to pull together some information about contributions
> > that OpenStack community members have made *upstream* of OpenStack,
> > via code, docs, bug reports, or anything else to dependencies that
> > we have.
> > 
> > If you've made a contribution of that sort, I would appreciate a
> > quick note.  Please reply off-list, there's no need to spam everyone,
> > and I'll post the summary if folks want to see it.
> > 
> > Thanks,
> > Doug
> > 
> 
> I've summarized the results of all of your responses (on and off
> list) on a blog post this morning [1]. I removed individual names
> because I was concentrating on the community as a whole, rather than
> individual contributions.
> 
> I'm sure there are projects not listed, either because I missed
> something in my summary or because someone didn't reply. Please feel
> free to leave a comment on the post with references to other projects.
> It's not necessary to link to specific commits or bugs or anything like
> that, unless there's something you would especially like to highlight.
> 
> Thank you for your input into the survey. I'm impressed with the
> breadth of the results. I'm very happy to know that our community,
> which so often seems to be focused on building new projects, also
> contributes to existing projects that we all rely on.
> 
> [1] 
> https://doughellmann.com/blog/2016/06/23/openstack-contributions-to-other-open-source-projects/

That is pretty cool.

I forgot to reply to your original request, but I did a lot of python3
porting on the pysaml2 project in support of Keystone's python3 port.

https://github.com/rohe/pysaml2/commits?author=SpamapS

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Change of default database collation

2016-06-23 Thread Jimmy McCrory
Hi all,

OpenStack-Ansible has been configuring the default database collation as
'utf8_unicode_ci'.
We've recently run into an issue with new deployments during cinder
database migrations where a foreign key column had a different collation
between the parent and child table [1].

That's since been fixed, and we're now looking at changing the default
collation to match the default of MySQL/MariaDB's utf8 character set to
avoid the possibility of this same discrepancy with new deployments[2].
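
(To make the failure mode concrete: the sketch below, which assumes a PyMySQL
connection with placeholder credentials and is not part of the proposed
patches, lists foreign key columns whose collation differs between the child
column and the referenced parent column.)

    import pymysql

    # Find FK columns whose collation differs from the referenced column's.
    QUERY = """
    SELECT kcu.table_name, kcu.column_name, kcu.referenced_table_name,
           child.collation_name, parent.collation_name
    FROM information_schema.key_column_usage kcu
    JOIN information_schema.columns child
      ON child.table_schema = kcu.table_schema
     AND child.table_name = kcu.table_name
     AND child.column_name = kcu.column_name
    JOIN information_schema.columns parent
      ON parent.table_schema = kcu.table_schema
     AND parent.table_name = kcu.referenced_table_name
     AND parent.column_name = kcu.referenced_column_name
    WHERE kcu.table_schema = %s
      AND kcu.referenced_table_name IS NOT NULL
      AND child.collation_name != parent.collation_name
    """

    conn = pymysql.connect(host='localhost', user='root', password='secret')
    with conn.cursor() as cur:
        cur.execute(QUERY, ('cinder',))
        for table, column, parent, child_coll, parent_coll in cur.fetchall():
            print('%s.%s (%s) references %s (%s)'
                  % (table, column, child_coll, parent, parent_coll))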

The question then becomes how best to handle upgrades from Mitaka to
Newton.
Any input for the current proposal[3] from anyone that may have experience
with any project's database migration scripts, or MySQL-based databases in
general, would be appreciated.

Also,
Have any of the other deployment projects been affected by this?
Are there any in-progress efforts to help further enforce a standard
character set and collation through DB migration scripts?
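
(For reference, a minimal sketch of what enforcing that in a migration could
look like, written here with Alembic; the table name is just an example and
this is not one of the proposed reviews.)

    from alembic import op

    def upgrade():
        # Convert existing rows and set the table default so new columns
        # pick up the intended charset/collation.
        op.execute("ALTER TABLE volumes CONVERT TO CHARACTER SET utf8 "
                   "COLLATE utf8_general_ci")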

Thanks

[1] https://bugs.launchpad.net/cinder/+bug/1594195
[2] https://review.openstack.org/#/c/331786/
[3] https://review.openstack.org/#/c/333733/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-23 Thread Preston L. Bannister
On 06/23/2016  Daniel Berrange wrote (lost attribution in thread):

> Our long term goal is that 100% of all network storage will be connected
> to directly by QEMU. We already have the ability to partially do this with
> iSCSI, but it is lacking support for multipath. As & when that gap is
> addressed though, we'll stop using the host OS for any iSCSI stuff.
>
> So if you're requiring access to host iSCSI volumes, it'll work in the
> short-medium term, but in the medium-long term we're not going to use
> that so plan accordingly.
>

On 06/23/2016 10:09 AM, Walter A. Boring IV wrote:

> We regularly fix issues with iSCSI attaches in the release cycles of
> OpenStack,
> because it's all done in python using existing linux packages.  How often
> are QEMU
> releases done and upgraded on customer deployments vs. python packages
> (os-brick)?
>
> I don't see a compelling reason for re-implementing the wheel,
> and it seems like a major step backwards.
>

On Thu, Jun 23, 2016 at 12:07:43PM -0600, Chris Friesen wrote:

> This is an interesting point.
>
> Unless there's a significant performance benefit to connecting
> directly from qemu, it seems to me that we would want to leverage
> the existing work done by the kernel and other "standard" iSCSI
> initators.
>

On Thu, Jun 23, 2016 at 1:28 PM, Sean McGinnis 
wrote:
>
> I'm curious to find out this as well. Is this for a performance gain? If
> so, do we have any metrics showing that gain is significant enough to
> warrant making a change like this?
>
> The host OS is still going to be involved. AFAIK, this just cuts out the
> software iSCSI initiator from the picture. So we would be moving from a
> piece of software dedicated to one specific functionality, to a
> different piece of software whose main reason for existence has nothing
> to do with IO path management.
>
> I'm not saying I'm completely opposed to this. If there is a reason for
> doing it then it could be worth it. But so far I haven't seen anything
> explaining why this would be better than what we have today.



First, I have not taken any measurements, so please ignore everything I
say. :)

Very generally, if you take out unnecessary layers, you can often improve
performance and reliability. Not in every case, but often.

Volume connections routed through the Linux kernel *might* lose performance
from the extra layer (measurements are needed), and they have to be managed.
That last point could easily be underestimated. Nova has to manage Linux's
knowledge of volume connections. In the strictest sense, the nova-compute
host Linux does not *need* to know about volumes attached to Nova instances.
The hairiest part of the problem: what to do when the nova-compute Linux
table of attached volumes gets out of sync? My guess is there are error
cases not yet well handled in Nova in this area, and that Nova could be
somewhat simpler if all volumes were directly attached to QEMU.

(I am cheating a bit when mentioning the out-of-sync case, as I got bit a
couple of times in testing. It happens.)
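
(To illustrate the two models being discussed, here is a sketch of the libvirt
disk XML for the same iSCSI volume in each case; the IQN and addresses are
made up, and this is not the exact XML Nova generates.)

    # 1. Host-managed: the kernel initiator logs in and exposes a block
    #    device, which libvirt then hands to QEMU.
    HOST_ATTACHED = """
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2016-06.org.example:volume-1-lun-1'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """

    # 2. Direct: QEMU's built-in initiator connects by itself; the host OS
    #    never sees the LUN and has no table of attachments to keep in sync.
    QEMU_DIRECT = """
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='iscsi' name='iqn.2016-06.org.example:volume-1/1'>
        <host name='192.0.2.10' port='3260'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """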

But ... as mentioned earlier, I suspect you cannot get to 100% direct to
QEMU if there is specialized hardware that has to tie into the nova-compute
Linux. It seems unlikely you would get consensus, as this impacts major
vendors. Which means you have to keep managing the host map of volumes,
which means you cannot simplify Nova. (If someone knows how to use the
specialized hardware with less footprint in the host Linux, this answer
could change.)

Where this will land, I do not know. Nor do I know the performance numbers.

Can OpenStack allow for specialized hardware without routing through the
host Linux? (Probably not, but I would be happy to be wrong.)

And again, as an outsider, I could be wrong about everything. :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 24 June 2016

2016-06-23 Thread Lana Brindley
Hi everyone,

We're starting to see a real rush of projects publishing their Install Guides 
this week, which is really exciting! We're also working on getting the index 
page on docs.openstack.org up and running, so we should have that ready to go 
up by the time I'm writing this newsletter next week. Well done to all the docs 
people, and the cross-project liaisons who have been working hard to make this 
happen. It's great to see it all start to come together :)

== Progress towards Newton ==

103 days to go!

Bugs closed so far: 207

Newton deliverables: 
https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
Feel free to add more detail and cross things off as they are achieved 
throughout the release.

Also, just a note that the CFP for Barcelona is open now, until 13 July. If you 
want to brainstorm some documentation-related ideas, please get in touch!

== Speciality Team Reports ==

'''HA Guide: Bogdan Dobrelya'''
No report this week.

'''Install Guide: Lana Brindley'''
Swift, Manila patches in progress. Petr is working on the index page: 
https://review.openstack.org/331704 Feedback requested! Next meeting: 5 July 
0600 UTC

'''Networking Guide: Edgar Magana'''
No report this week.

'''Security Guide: Nathaniel Dillon'''
No report this week.

'''User Guides: Joseph Robinson'''
No report this week.

'''Ops Guide: Shilla Saebi'''
Team is currently reviewing enterprise ops documentation to incorporate into 
the Ops Guide. 
Ops tasks are documented here: https://etherpad.openstack.org/p/ops-arch-tasks
OpenStack ops guide reorg in progress & documented here: 
https://etherpad.openstack.org/p/ops-guide-reorg
Members of the ops guide team are joining ops meetings to find volunteers

'''API Guide: Anne Gentle'''
Progress ongoing on navigation for multiple OpenStack APIs: 
https://review.openstack.org/#/c/329508
Working on lists of project's API references that don't use RST+YAML framework: 
http://lists.openstack.org/pipermail/openstack-docs/2016-June/008775.html

'''Config/CLI Ref: Tomoyuki Kato'''
Got some comments for improvements from Brian Rosmaita, Hemanth Makkapati and 
Richard Jones. Thank you!
Closed many bugs for Configuration Reference.
Updated openstack, glance, neutron-sanity-check, and trove-manage CLI reference.

'''Training labs: Pranav Salunke, Roger Luethi'''
The webpage is looking good and all the URLs point to the right links:
http://docs.openstack.org/training_labs/
Trying to finalize PXE support https://review.openstack.org/#/c/305991/

'''Training Guides: Matjaz Pancur'''
Italian translation of the Upstream training
Details about running a Lego session (https://review.openstack.org/#/c/325020/, 
https://review.openstack.org/#/c/330819/)

'''Hypervisor Tuning Guide: Blair Bethwaite'''
No report this week.

'''UX/UI Guidelines: Michael Tullis, Rodrigo Caballero'''
No report this week.

== Site Stats ==

The top five search terms on the site so far during June: snapshot, cinder, 
nova, security group, quota

== Doc team meeting ==

The US meeting was held this week, you can read the minutes here: 
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2016-06-22

Next meetings:
APAC: Wednesday 29 June, 00:30 UTC
US: Wednesday 6 July, 19:00 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#24_June_2016

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] Refactoring into common library and kuryr-libnetwork + Nested_VMs

2016-06-23 Thread Vikas Choudhary
Hi Team,

As already discussed with some teammates over IRC and internally, I
thought of bringing the discussion to the ML for more opinions.

My idea on repo structure is something similar to this:

kuryr
├── controller
│   ├── api (running on the controller node (cluster master or openstack
│   │        controller node), talking to other services (neutron))
│   │
│   ├── kubernetes-controller
│   │   └── watcher (for network related services making api calls)
│   │
│   └── any_other_coe_controller_capable_of_watching_events
│
└── driver
    ├── common (traffic tagging utilities and binding)
    ├── kubernetes (cni)
    ├── libnetwork (network and ipam driver) (for network related services
    │               making api calls)
    └── any_other_driver (calling the api for network related services if a
                          watcher is not supported)


Thoughts?


-Vikas




-- Forwarded message --
From: Vikas Choudhary 
Date: Thu, Jun 23, 2016 at 2:54 PM
Subject: Re: Kuryr Refactoring into common library and kuryr-libnetwork +
Nested_VMs
To: Antoni Segura Puimedon 


@Toni, can you please explain a bit how the roles regarding
vlan/segmentation id allocation, and tagging and untagging containers'
traffic, are divided among the entities you mentioned?

In my understanding, in the k8s case the API watcher has resource
translators, and these will be talking to neutron for port creation and ip
allocation. Then why, for the k8s case, are neutron-talking utilities
present in the common lib? Or in other words, which neutron apis will be
used from the common lib?

-Vikas

On Thu, Jun 23, 2016 at 2:22 PM, Antoni Segura Puimedon 
wrote:

>
>
> On Thu, Jun 23, 2016 at 7:28 AM, Irena Berezovsky 
> wrote:
>
>> Hi guys,
>> Just minor suggestion from my side. Please link all the refactoring
>> patches to the same launchpad bp/topic so it will be easy to trace the
>> relevant work.
>>
>> Vikas, Gal, let me know if you need some help.
>>
>> BR,
>> Irena
>>
>> On Thu, Jun 23, 2016 at 7:58 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> Hi Gal,
>>>
>>> Greeting of the day!!!
>>>
>>> I have been trying to reach you over IRC unsuccessfully for the last two
>>> days. So finally I thought of dropping an email.
>>>
>>> Since you have taken up the task of moving code to kuryr-libnetwork and
>>> I also have started working on refactoring/changes for nested-vm, seems
>>> there is some overlap. Therefore wanted to coordinate following two tasks:
>>>
>>> 1. Writing a common(COE agnostic) library , "Kuryr_api" or some other
>>> similar name, responsible for handling requests from kuryr-libnetwork and
>>> making requests to other OpenStack services.
>>>
>>> 2. Modify current kuryr controllers.py to make calls to common
>>> "Kuryr_api" and not to OpenStack services directly.
>>>
>>
> My idea was to leave:
>
> https://github.com/openstack/kuryr
>
> with a single package
>
> kuryr
> └── lib
> ├── binding
> │   └── __init__.py
> └── __init__.py
>
>
>  that would contain just a library with the common  bits like the
> controller, the binding, and utils to talk to neutron.
>
> Then, the other repos like openstack/kuryr-libnetwork and
> openstack/kuryr-kubernetes would have a package like the following:
>
> kuryr
> └── kubernetes
> ├── cni
> │   └── __init__.py
> ├── __init__.py
> └── watcher
> └── __init__.py
>
> This way, all would be inside the namespace Python package kuryr (read the
> first and second answers to
> http://stackoverflow.com/questions/1675734/how-do-i-create-a-namespace-package-in-python
>
>
>
>
>>> Shall i start working on both of these or you are already working on
>>> either one? Please suggest.
>>>
>>>
>>> -Vikas
>>>
>>>
>>>
>>
>
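
(A minimal sketch of the pkgutil-style namespace package that Toni's Stack
Overflow link describes: the kuryr/__init__.py in each repo would carry just
this line, so that kuryr.lib, kuryr.libnetwork and kuryr.kubernetes can be
installed from separate repos yet share the top-level "kuryr" package. This
shows the general technique, not necessarily the exact contents the repos
will end up with.)

    # kuryr/__init__.py in each repository
    __path__ = __import__('pkgutil').extend_path(__path__, __name__)

With that in place, an import like "from kuryr.lib import binding" works no
matter which distribution provided the kuryr.lib subpackage.
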
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][pci-passthrough] definitely specify VFIO driver as the host PCI driver for passthrough

2016-06-23 Thread Chen Fan

hi all,
 in openstack, we can use the pci passthrough feature now, refer to
 https://wiki.openstack.org/wiki/Pci_passthrough
 but we can't explicitly specify whether the host pci driver is the legacy
 KVM assignment driver or the newer VFIO driver. The VFIO driver is a safer,
 higher-performance user-space driver than the legacy kvm driver (pci-stub);
 the benefits relative to the kvm assignment driver are described at
 http://lwn.net/Articles/474088/.
 In addition, the VFIO driver provides GPU passthrough as primary card
 support. I think this is useful for further GPU passthrough support in
 openstack.

 Openstack relies on the libvirt nodedev device configuration to do pci
 passthrough. With managed mode, the configured device is automatically
 detached and re-attached with the KVM or VFIO driver, depending on the host
 driver module configuration, so right now we can't force the driver in
 openstack to VFIO mode. I think we need to add support for this in
 openstack to make pci passthrough more flexible.

 A simple idea is to add an option in nova.conf, e.g. HOST_PCI_MODEL =
 VFIO / KVM, to specify that the pci passthrough device driver should be the
 VFIO driver.
 any comments are welcome. :)
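
(A rough sketch of how such an option might be registered with oslo.config;
the option name, choices and default below are made up to match the proposal
and are not an existing Nova option.)

    from oslo_config import cfg

    pci_opts = [
        cfg.StrOpt('pci_passthrough_driver',
                   default='kvm',
                   choices=['kvm', 'vfio'],
                   help='Host driver used for PCI passthrough devices: the '
                        'legacy KVM assignment driver (pci-stub) or VFIO.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(pci_opts)

    def use_vfio():
        # True when the operator asked for VFIO-based passthrough.
        return CONF.pci_passthrough_driver == 'vfio'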

Thanks,
Chen



--
Sincerely,
Chen Fan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca][requirements] Kafka 1.x for Oslo.Messaging and Monasca

2016-06-23 Thread Tony Breeds
On Wed, Jun 22, 2016 at 05:34:27AM +, Keen, Joe wrote:
> Davanum,
>   We started work on getting Monasca into the global requirements with two
> reviews [1] [2] that add gate jobs and check requirements jobs for the
> Monasca repositories.  Some repositories are being adapted to use versions
> of libraries that OpenStack currently accepts [3] and we¹re looking at the
> libraries we use that are not currently part of OpenStack and seeing if
> they¹re worth trying to add to the global requirements.  We¹re hoping to
> be able to start adding the global requirements reviews within a week or
> two.
> 
> We definitely want to talk with the oslo.messaging team and explain the
> ways we use Kafka and what effects the move to the 1.x versions of the
> library has on us.  I¹ve attempted to contact the oslo.messaging team in
> the oslo IRC channel to see if we can talk about this at a weekly meeting
> but I wasn¹t able to connect with anyone.  Would you prefer that
> conversation happen on the mailing list here or could we add that topic to
> the next weekly meeting?
> 
> [1] https://review.openstack.org/#/c/316293/
> [2] https://review.openstack.org/#/c/323567/

These 2 are merged.

> [3] https://review.openstack.org/#/c/323598/

Taking a tangent here:

In 2014 [1] we added a cap to psutil because 2.x wasn't compatible with 1.x,
which is fine, but 2 years later we have 4.3.0 and, because of the cap, I'm
guessing we've done very little to work towards 4.3.0.

I've added an item for the requirements team to look at what's involved in
raising the minimum for psutil, but:
    Requirement: psutil<2.0.0,>=1.1.1 (used by 41 projects)
it won't happen soon.

Is psutil the last of the "old" libraries you need to deal with?

Getting back to the topic of kafka, what are the pain points involved in
working with the 2.x API?  Clearly we're going to need to get monasca and
oslo.messaging on a compatible page RSN or we'll end up delaying things until
Ocata :(

Yours Tony.

[1] https://review.openstack.org/#/c/81373/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [stable/mitaka] Error: Discovering versions from the identity service failed when creating the password plugin....

2016-06-23 Thread Tony Breeds
On Tue, Jun 21, 2016 at 10:27:28AM +0530, mohammad shahid wrote:
> Hi,
> 
> I am getting below error while starting openstack devstack with
> stable/mitaka release. can someone look at this problem ?

so I *think* you're trying to deploy stable/mitaka with the master version of
devstack.

Please make sure that you're running the stable/mitaka version of devstack.

git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
git checkout -b stable/mitaka -t origin/stable/mitaka

If that doesn't work please include the devstack SHA you are using along with
the trace and config.

Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]restore method while ceph backup backend is used

2016-06-23 Thread 刘庆
Hi all,
This is Liu Qing, new to this community. I have a question about the ceph
backup backend.
From cinder/backup/manager.py, the restore process is done by attaching
the original device (the restore destination) to the cinder service host.
In the ceph driver, full_restore will discard the additional space if the
restore destination (grown by a volume extend) is larger than the backup.
There are two ways to discard the unused space: a ceph discard, or writing
zeroes to the unused space. As the destination is attached to the host, it
is not recognized as a ceph volume, so ceph discard will never be used in a
volume restore, right? But ceph discard is much more efficient than writing
zeroes to the unused space. Is there any plan to use the ceph discard way?
Thanks.
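
(To make the two options concrete, here is a rough sketch using the
python-rbd bindings on one side and plain block-device writes on the other;
the pool, volume and size values are placeholders and this is not the actual
driver code.)

    import rbd

    CHUNK = 4 * 1024 * 1024  # 4 MiB

    def trim_tail_with_discard(ioctx, volume_name, backup_size, dest_size):
        # Efficient path: ask the cluster to deallocate the unused range.
        # Only possible when the destination is reachable as an RBD image.
        if dest_size <= backup_size:
            return
        with rbd.Image(ioctx, volume_name) as image:
            image.discard(backup_size, dest_size - backup_size)

    def trim_tail_with_zeroes(dev_path, backup_size, dest_size):
        # Fallback when the destination is only visible as a local block
        # device attached to the backup host: overwrite the tail with zeroes.
        with open(dev_path, 'rb+') as dev:
            dev.seek(backup_size)
            remaining = dest_size - backup_size
            while remaining > 0:
                n = min(CHUNK, remaining)
                dev.write(b'\0' * n)
                remaining -= n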

Liu Qing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][packaging] Normalizing requirements file

2016-06-23 Thread Tony Breeds
On Wed, Jun 22, 2016 at 03:23:43PM +1000, Tony Breeds wrote:
> On Wed, Jun 22, 2016 at 06:45:31AM +0200, Haïkel wrote:
> > Hi,
> > 
> > as a packager, I spend a lot of time to scrutinize the requirements
> > repo, and I find it easier to read if specifiers are ordered.
> > So in a quick glance, you can check which is the min version required
> > and max one without trying to search them among other specifiers.
> > I scripted a basic linter to do that (which also normalize comments to
> > PEP8 standards)
> > 
> > Initial review is here:
> > https://review.openstack.org/#/c/332623/
> > 
> > script is available here;
> > https://gist.github.com/hguemar/7a17bf93f6c8bd8ae5ec34bf9ab311a1
> > 
> > Your thoughts?
> 
> I'm fine with doing something like this.  I wrote [1] some time ago but didn't
> push on it as I needed to verify that this wouldn't create a "storm" of
> pointless updates that just reorder things in every project's
> *requirements.txt.

I think we need to pause on these 'normalizing' changes in g-r.  They're
generating whitespace-only reviews in many (possibly all) projects that have
managed requirements.

We need to do more testing and possibly make the bot smarter before we look at
this again.
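
(For illustration, a small sketch of the kind of reordering being discussed,
using the `packaging` library rather than Haïkel's script; comment handling
and the rest of the linting are left out.)

    from packaging.requirements import Requirement

    # Put the lower bound first, the upper bound last, e.g.
    # 'psutil<2.0.0,>=1.1.1' -> 'psutil>=1.1.1,<2.0.0'
    ORDER = {'>=': 0, '>': 1, '~=': 2, '==': 3, '!=': 4, '<': 5, '<=': 6}

    def normalize(line):
        req = Requirement(line)
        specs = sorted(req.specifier, key=lambda s: ORDER.get(s.operator, 7))
        extras = '[%s]' % ','.join(sorted(req.extras)) if req.extras else ''
        return req.name + extras + ','.join(str(s) for s in specs)

    print(normalize('psutil<2.0.0,>=1.1.1'))  # psutil>=1.1.1,<2.0.0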


Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][ironic] My thoughts on Kolla + BiFrost integration

2016-06-23 Thread Steven Dake (stdake)
Hey folks,

I created the following sequence diagram to show my thinking on Ironic 
integration.  I recognize some internals of the recently merged bifrost changes 
are not represented in this diagram.  I would like to see a bootstrap action do 
all of the necessary things to bring up BiFrost in a container using Sean's WIP 
Kolla patch followed by bare metal minimal OS load followed by Kolla dependency 
software (docker-engine, docker-py, and ntpd) loading and initialization.

This diagram expects ssh keys to be installed on the deployment targets via 
BiFrost.

https://creately.com/diagram/ipt09l352/ROMDJH4QY1Avy1RYhbMUDraaQ4%3D

Thoughts welcome, especially from folks in the Ironic community or Sean who is 
leading this work in Kolla.

Regards,
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Steve Martinelli
+1 on the idea of running functional tests to see the fallout. Right now
python-memcached is labelled as py3 compatible and in keystone our unit
tests pass, but I'm skeptical about its behavior in functional and tempest
tests.
On Jun 23, 2016 1:14 PM, "Sean Dague"  wrote:

So, given that everything in base IaaS is done besides Nova, and there
is some python 3 support in Devstack, before Newton is over one could
get a python 3 (except Nova) job running, and start seeing the fallout
of full stack testing. We could even prioritize functional changes in
Nova to get full stack python 3 working (a lot of what is holding Nova
back is actually unit tests that aren't python 3 clean).

That seems like the next logical step, and I think would help add
incentive to full stack testing to show this actually working outside of
just isolated test suites.

On 06/23/2016 12:58 PM, Davanum Srinivas wrote:
> +1 from me as well Doug! ("community to set a goal for Ocata to have Python
> 3 functional tests running for all projects.")
>
> -- Dims
>
> On Thu, Jun 23, 2016 at 12:11 PM, Doug Hellmann 
wrote:
>> Excerpts from Thomas Goirand's message of 2016-06-22 10:49:01 +0200:
>>> On 06/22/2016 09:18 AM, Victor Stinner wrote:
 Hi,

 Current status: only 3 projects are not ported yet to Python 3:

 * nova (76% done)
 * trove (42%)
 * swift (0%)

https://wiki.openstack.org/wiki/Python3

 Number of projects already ported:

 * 19 Oslo Libraries
 * 4 Development Tools
 * 22 OpenStack Clients
 * 6 OpenStack Libraries (os-brick, taskflow, glance_store, ...)
 * 12 OpenStack services approved by the TC
 * 17 OpenStack services (not approved by the TC)

 Raw total: 80 projects. In fact, 3 remaining projects out of 83 is only 4%,
 we are so close! ;-)

 The next steps are to port the 3 remaining projects and work on
 functional and integration tests on Python 3.

 Victor
>>>
>>> Hi Victor,
>>>
>>> Thanks a lot for your efforts on Py3.
>>>
>>> Do you think it looks like possible to have Nova ported to Py3 during
>>> the Newton cycle?
>>>
>>> Cheers,
>>>
>>> Thomas Goirand (zigo)
>>>
>>
>> I'd like for the community to set a goal for Ocata to have Python
>> 3 functional tests running for all projects.
>>
>> As Tony points out, it's a bit late to have this as a priority for
>> Newton, though work can and should continue. But given how close
>> we are to having the initial phase of the port done (thanks Victor!),
>> and how far we are from discussions of priorities for Ocata, it
>> seems very reasonable to set a community-wide goal for our next
>> release cycle.
>>
>> Thoughts?
>>
>> Doug
>>
>>
__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>


--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Gate troubleshooting howto

2016-06-23 Thread Jay Faulkner
I dropped this Friday as an option based on the results. If you're interested,
this will still be happening, but in mid-July.

Thanks,
Jay Faulkner
OSIC


> On Jun 22, 2016, at 12:39 PM, Jay Faulkner  wrote:
> 
> There was a request at the mid-cycle for a presentation on
> troubleshooting Ironic gate failures. I'd be willing to share some of my
> knowledge about this to interested folks.
> 
> I've created a doodle with a few possible times; note that one option is
> this Friday, but the others are in mid-July, as I'll be moving during that
> gap of time; so I can do it before or after the move.
> 
> Please vote here: http://doodle.com/poll/44whfnwkkm4vcgn4
> 
> 
> Thanks,
> Jay Faulkner
> OSIC
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-23 Thread Jay Pipes

On 06/22/2016 01:56 PM, Paul Michali wrote:

I did have a question about the current implementation as described by
292499, 324379, and 292500.

Looking at the code, when a NUMAPagesTopology object is created, a new
parameter is passed for the "reserved" pages. This reservation comes
from a dictionary, which is populated at LibvirtDriver init time, by
grabbing the multi-string configuration settings from nova.conf. Because
the object's API is changed, a version change is required.

Is it possible, instead of adding a new argument, to reduce the
"total" argument by the number of reserved pages from the config file
(Ian Wells suggested this to me on a patch I had)? This would prevent
the need to alter the object's API.  So, instead of:

 mempages = [
     objects.NUMAPagesTopology(
         size_kb=pages.size,
         total=pages.total,
         used=0,
         reserved=_get_reserved_memory_for_cell(
             self, cell.id, pages.size))
     for pages in cell.mempages]


Do something like this...

  mempages = [
      objects.NUMAPagesTopology(
          size_kb=pages.size,
          used=0,
          total=pages.total - _get_reserved_memory_for_cell(
              self, cell.id, pages.size))
      for pages in cell.mempages]

If we do this, would it avoid issues with back porting the change?


No, that would cause anyone who upgraded to Mitaka to immediately have 
the total column contain incorrect data... you would essentially have a 
double-reserve calculation.
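
(A quick illustration of the double-reserve problem, with made-up numbers:)

    total_reported = 1024   # hugepages reported by libvirt for the cell
    reserved = 256          # pages reserved via nova.conf

    # Merged approach: keep total as reported, track reserved separately
    # (hence the object version bump).
    available = total_reported - reserved            # 768

    # Proposed alternative: store total already reduced by reserved.
    stored_total = total_reported - reserved         # 768
    # Any code path that still subtracts reserved from total (for example
    # against data written before the upgrade) now reserves twice:
    double_reserved = stored_total - reserved        # 512 -- too low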


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-23 Thread Jay Pipes

On 06/22/2016 01:56 PM, Paul Michali wrote:

I did have a question about the current implementation as described by
292499, 324379, and 292500.

Looking at the code, when a NUMAPagesTopology object is created, a new
parameter is passed for the "reserved" pages. This reservation comes
from a dictionary, which is populated at LibvirtDriver init time, by
grabbing the multi-string configuration settings from nova.conf. Because
the object's API is changed, a version change is required.

Is it possible, instead of adding a new argument, to reduce the
"total" argument by the number of reserved pages from the config file
(Ian Wells suggested this to me on a patch I had)? This would prevent
the need to alter the object's API.  So, instead of:

 mempages = [
     objects.NUMAPagesTopology(
         size_kb=pages.size,
         total=pages.total,
         used=0,
         reserved=_get_reserved_memory_for_cell(
             self, cell.id, pages.size))
     for pages in cell.mempages]


Do something like this...

  mempages = [
      objects.NUMAPagesTopology(
          size_kb=pages.size,
          used=0,
          total=pages.total - _get_reserved_memory_for_cell(
              self, cell.id, pages.size))
      for pages in cell.mempages]

If we do this, would it avoid issues with back porting the change?

Thanks!

PCM


On Wed, Jun 15, 2016 at 5:52 PM Matt Riedemann
> wrote:

On 6/15/2016 3:10 PM, Paul Michali wrote:
 > Is the plan to back port that change to Mitaka?
 >
 > Thanks,
 >
 > PCM
 >
 >
 > On Wed, Jun 15, 2016 at 1:31 PM Matt Riedemann
 > 
>> wrote:
 >
 > On 6/14/2016 3:09 PM, Jay Pipes wrote:
 > >
 > > Yes. Code merged recently from Sahid does this:
 > >
 > > https://review.openstack.org/#/c/277422/
 > >
 > > Best,
 > > -jay
 > >
 >
 > That was actually reverted out of mitaka:
 >
 > https://review.openstack.org/#/c/292290/
 >
 > The feature change that got into newton was this:
 >
 > https://review.openstack.org/#/c/292499/
 >
 > Which was busted, and required:
 >
 > https://review.openstack.org/#/c/324379/
 >
 > Well, required as long as you want your compute service to
start. :)
 >
 > And no, we aren't backporting these, especially to liberty
which is
 > security / critical fix mode only now.
 >
 > --
 >
 > Thanks,
 >
 > Matt Riedemann
 >
 >
 >
 >
 >
 >
 >
 >

No, it's really a feature.

--

Thanks,

Matt Riedemann





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage][aodh] Notifications about aodh alarm state changes

2016-06-23 Thread Afek, Ifat (Nokia - IL)
Hi Julien, Gordon,

I understood that during the Aodh-Vitrage design session in Austin, you had a
discussion about Vitrage's need for notifications about Aodh alarm state
changes. Have you had a chance to think about it since, and to check how this
can be done?

Thanks,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] better solution for the non-ini format configure file

2016-06-23 Thread Steven Dake (stdake)
I looked more at this approach and am struggling to come up with a solution for:


https://github.com/openstack/kolla/blob/master/ansible/roles/nova/tasks/config.yml#L69-L85


That doesn't involve creating a task per file.


Any ideas?


Regards

-steve

From: Steven Dake
Reply-To: "openstack-dev@lists.openstack.org"
Date: Thursday, June 23, 2016 at 6:07 AM
To: "openstack-dev@lists.openstack.org"
Subject: Re: [openstack-dev] [kolla] better solution for the non-ini format
configure file

Looks like a really cool feature, and possibly a way to handle non-ini files,
such as policy.json files, which we just want to copy rather than override.

Kolla already has ini merging for ini files though, and the example you provide
is an ini file.  I like Kolla's ini merging, and it is sort of an external
interface, since operators have been using it, so to remove it would mean
following the deprecation cycle.  I do agree this would be fantastic for
straight copies of non-ini configuration files.

Regards
-steve

From: OpenStack Mailing List Archive
Reply-To: "openstack-dev@lists.openstack.org"
Date: Wednesday, June 22, 2016 at 8:47 PM
To: "openstack-dev@lists.openstack.org"
Subject: Re: [openstack-dev] [kolla] better solution for the non-ini format 
configure file

Link: https://openstack.nimeyo.com/83165/?show=88496#a88496
From: AndrewLiu


Recently, we found this feature of ansible:
http://docs.ansible.com/ansible/playbooks_loops.html#finding-first-matched-files

A specific path to a template file can be added in the ansible task.

If a user wants to customize a non-ini template file, the user can copy the
template file to the customization directory and modify the template file as
the user wants.

An example of how to modify the ansible task:

change from:

- name: Copying over horizon.conf
  template:
    src: "{{ item }}.conf.j2"
    dest: "{{ node_config_directory }}/{{ item }}/{{ item }}.conf"
  with_items:
    - "horizon"


to:

- name: Copying over horizon.conf
  template:
    src: "{{ item }}"
    dest: "{{ node_config_directory }}/horizon/horizon.conf"
  with_first_found:
    - "{{ node_custom_config }}/horizon.conf.j2"
    - "horizon.conf.j2"


But a convention of how to organize the directory structure of customization 
template files should be addressed.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas]HA for lbaas v2 agent

2016-06-23 Thread Doug Wiegley
As Assaf mentioned, namespace and octavia are two different lbaas drivers.  But 
within those:

- The lbaas plugin will schedule lbaas VIPs from all available lbaas-agents (I 
think it’s random.)  No, you can’t move them later, or override the scheduler.

- The lbaas agent is used by the octavia driver. Octavia is its own REST 
endpoint, and the lbaas driver for it just makes synchronous REST calls.

Thanks,
doug


> On Jun 23, 2016, at 3:33 PM, Assaf Muller  wrote:
> 
> On Thu, Jun 23, 2016 at 3:43 PM, Akshay Kumar Sanghai
>  wrote:
>> Thanks Assaf.
>> I have few questions for lbaas:
>> -  if i run agents on multiple nodes, will the request be distributed by
>> neutron-server?
>> - Does neutron lbaas agent forward the request to octavia-api or the
>> neutron-server?
> 
> The LBaaS v2 API has multiple implementations. One of which is based
> off haproxy and namespaces, known as the agent based implementation.
> Do you have neutron-lbaas-agent running on your network nodes? The
> second implementation is called Octavia and is based off service VMs
> instead of agents and namespaces. Octavia calls out to Nova to create
> VMs and inside those VMs is an agent that talks back to Octavia, and
> that creates an haproxy instance to perform the actual loadbalancing.
> The answer to both of your questions depends on which of these two
> implementations you're going with. There's a bunch of summit sessions
> about Octavia you can look in to.
> 
>> 
>> Thanks
>> Akshay
>> 
>> On Thu, Jun 23, 2016 at 1:00 AM, Assaf Muller  wrote:
>>> 
>>> On Wed, Jun 22, 2016 at 3:17 PM, Akshay Kumar Sanghai
>>>  wrote:
 Hi,
 I have a multinode openstack installation (3 controller, 3 network
 nodes,
 and some compute nodes).
 Like l3 agent, is high availability feature available for the lbaas v2
 agent?
>>> 
>>> It is not. Nir Magnezi is working on a couple of patches to implement
>>> a simplistic HA solution for LBaaS v2 with haproxy:
>>> https://review.openstack.org/#/c/28/
>>> https://review.openstack.org/#/c/327966/
>>> 
 
 Thanks
 Akshay
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas]HA for lbaas v2 agent

2016-06-23 Thread Assaf Muller
On Thu, Jun 23, 2016 at 3:43 PM, Akshay Kumar Sanghai
 wrote:
> Thanks Assaf.
> I have few questions for lbaas:
> -  if i run agents on multiple nodes, will the request be distributed by
> neutron-server?
> - Does neutron lbaas agent forward the request to octavia-api or the
> neutron-server?

The LBaaS v2 API has multiple implementations. One of which is based
off haproxy and namespaces, known as the agent based implementation.
Do you have neutron-lbaas-agent running on your network nodes? The
second implementation is called Octavia and is based off service VMs
instead of agents and namespaces. Octavia calls out to Nova to create
VMs and inside those VMs is an agent that talks back to Octavia, and
that creates an haproxy instance to perform the actual loadbalancing.
The answer to both of your questions depends on which of these two
implementations you're going with. There's a bunch of summit sessions
about Octavia you can look in to.

>
> Thanks
> Akshay
>
> On Thu, Jun 23, 2016 at 1:00 AM, Assaf Muller  wrote:
>>
>> On Wed, Jun 22, 2016 at 3:17 PM, Akshay Kumar Sanghai
>>  wrote:
>> > Hi,
>> > I have a multinode openstack installation (3 controller, 3 network
>> > nodes,
>> > and some compute nodes).
>> > Like l3 agent, is high availability feature available for the lbaas v2
>> > agent?
>>
>> It is not. Nir Magnezi is working on a couple of patches to implement
>> a simplistic HA solution for LBaaS v2 with haproxy:
>> https://review.openstack.org/#/c/28/
>> https://review.openstack.org/#/c/327966/
>>
>> >
>> > Thanks
>> > Akshay
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Clark Boylan
On Thu, Jun 23, 2016, at 02:15 PM, Doug Hellmann wrote:
> Excerpts from Thomas Goirand's message of 2016-06-23 23:04:28 +0200:
> > On 06/23/2016 06:11 PM, Doug Hellmann wrote:
> > > I'd like for the community to set a goal for Ocata to have Python
> > > 3 functional tests running for all projects.
> > > 
> > > As Tony points out, it's a bit late to have this as a priority for
> > > Newton, though work can and should continue. But given how close
> > > we are to having the initial phase of the port done (thanks Victor!),
> > > and how far we are from discussions of priorities for Ocata, it
> > > seems very reasonable to set a community-wide goal for our next
> > > release cycle.
> > > 
> > > Thoughts?
> > > 
> > > Doug
> > 
> > +1
> > 
> > Just think about it for a while. If we get Nova to work with Py3, and
> > everything else is working, including all functional tests in Tempest,
> > then after Ocata, we could even start to *REMOVE* Py2 support after
> > Ocata+1. That would be really awesome to stop all the compat layer
> > madness and use the new features available in Py3.
> 
> We'll need to get some input from other distros and from deployers
> before we decide on a timeline for dropping Python 2. For now, let's
> focus on making Python 3 work. Then we can all rejoice while having the
> discussion of how much longer to support Python 2. :-)
> 
> > 
> > I really would love to ship a full stack running Py3 for Debian Stretch.
> > However, for this, it'd be super helful to have as much visibility as
> > possible. Are we setting a hard deadline for the Otaca release? Or is
> > this just a goal we only "would like" to reach, but it's not really a
> > big deal if we don't reach it?
> 
> Let's see what PTLs have to say about planning, but I think if not
> Ocata then we'd want to set one for the P release. We're running
> out of supported lifetime for Python 2.7.

Keep in mind that there is interest in running OpenStack on PyPy which
is python 2.7. We don't have to continue supporting CPython 2.7
necessarily but we may want to support python 2.7 by way of PyPy.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Doug Hellmann
Excerpts from Thomas Goirand's message of 2016-06-23 23:04:28 +0200:
> On 06/23/2016 06:11 PM, Doug Hellmann wrote:
> > I'd like for the community to set a goal for Ocata to have Python
> > 3 functional tests running for all projects.
> > 
> > As Tony points out, it's a bit late to have this as a priority for
> > Newton, though work can and should continue. But given how close
> > we are to having the initial phase of the port done (thanks Victor!),
> > and how far we are from discussions of priorities for Ocata, it
> > seems very reasonable to set a community-wide goal for our next
> > release cycle.
> > 
> > Thoughts?
> > 
> > Doug
> 
> +1
> 
> Just think about it for a while. If we get Nova to work with Py3, and
> everything else is working, including all functional tests in Tempest,
> then after Ocata, we could even start to *REMOVE* Py2 support after
> Ocata+1. That would be really awesome to stop all the compat layer
> madness and use the new features available in Py3.

We'll need to get some input from other distros and from deployers
before we decide on a timeline for dropping Python 2. For now, let's
focus on making Python 3 work. Then we can all rejoice while having the
discussion of how much longer to support Python 2. :-)

> 
> I really would love to ship a full stack running Py3 for Debian Stretch.
> However, for this, it'd be super helpful to have as much visibility as
> possible. Are we setting a hard deadline for the Ocata release? Or is
> this just a goal we only "would like" to reach, but it's not really a
> big deal if we don't reach it?

Let's see what PTLs have to say about planning, but I think if not
Ocata then we'd want to set one for the P release. We're running
out of supported lifetime for Python 2.7.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Thomas Goirand
On 06/23/2016 06:11 PM, Doug Hellmann wrote:
> I'd like for the community to set a goal for Ocata to have Python
> 3 functional tests running for all projects.
> 
> As Tony points out, it's a bit late to have this as a priority for
> Newton, though work can and should continue. But given how close
> we are to having the initial phase of the port done (thanks Victor!),
> and how far we are from discussions of priorities for Ocata, it
> seems very reasonable to set a community-wide goal for our next
> release cycle.
> 
> Thoughts?
> 
> Doug

+1

Just think about it for a while. If we get Nova to work with Py3, and
everything else is working, including all functional tests in Tempest,
then after Ocata, we could even start to *REMOVE* Py2 support after
Ocata+1. That would be really awesome to stop all the compat layer
madness and use the new features available in Py3.

I really would love to ship a full stack running Py3 for Debian Stretch.
However, for this, it'd be super helpful to have as much visibility as
possible. Are we setting a hard deadline for the Ocata release? Or is
this just a goal we only "would like" to reach, but it's not really a
big deal if we don't reach it?

Cheers,

Thomas Goirand (zigo)

P.S: Again, thanks Victor for your awesome work.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-15, Jun 20-24

2016-06-23 Thread Doug Hellmann
Focus
-

Teams should be working on new feature development and bug fixes in this
period between the first and second milestones.

General Notes
-

The members of the release team will all be traveling next week.
This will result in delays in releases being processed, so please
plan accordingly.

Release Actions
---

Official independent projects should file information about historical
releases using the openstack/releases repository so the team pages
on releases.openstack.org are up to date.

This is also a good time to review stable/liberty and stable/mitaka
branches for needed releases.

Important Dates
---

Newton 2 milestone, July 14.

Newton release schedule: http://releases.openstack.org/newton/schedule.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Placement API WSGI code -- let's just use flask

2016-06-23 Thread Gregory Haynes
On Wed, Jun 22, 2016, at 09:07 AM, Chris Dent wrote:
> On Tue, 21 Jun 2016, Sylvain Bauza wrote:
> 
> > To be honest, Chris and you were saying that you don't like Flask, and I'm 
> > a 
> > bit agreeing with you. Why now it's a good possibility ?
> 
> As I said when I started the other version of this same thread: What is
> most important to me is generating a consensus that we can actually
> commit to. To build a _real_ consensus it is important to have
> strong opionions that are weakly held[1] otherwise we are not
> actually evaluating the options.
> 
> You are right: I don't like Flask. It claims to be a microframework
> but to me it is overweight. I do, however, like it more than the
> chaos that is the current Nova WSGI stack.

This seems to be a recurring complaint in this thread - has any
consideration been given to using werkzeug[1] directly (it's the library
underneath Flask)? IMO this isn't a big win because the extra stuff that
comes in with Flask shouldn't present additional problems for us, but if
that really is the sticking point then it might be worth a look.
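
(For the sake of discussion, a minimal sketch of what "werkzeug directly"
looks like; this is generic werkzeug usage, and the route name is made up,
not a proposal for how the placement API would actually be wired up.)

    from werkzeug.exceptions import NotFound
    from werkzeug.routing import Map, Rule
    from werkzeug.wrappers import Request, Response

    url_map = Map([
        Rule('/resource_providers', endpoint='list_resource_providers'),
    ])

    @Request.application
    def application(request):
        # WSGI app: route the request, return a canned JSON body.
        try:
            endpoint, args = url_map.bind_to_environ(request.environ).match()
        except NotFound:
            return Response('not found', status=404)
        return Response('{"resource_providers": []}',
                        content_type='application/json')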

> 
> Flask has a very strong community and it does lots of stuff well
> such that we, in OpenStack, could just stop worrying about it. That
> is one reasonable way to approach doing WSGI moving forward.
> 

++. If there are issues we hit in Flask because of the extra components
we are so worried about, then maybe we could work with them to resolve
those issues? I get the impression we are throwing out the baby with the
bathwater by avoiding it altogether due to this fear.

> > Why not Routes and Paste couldn't be using also ?
> 
> More on this elsewhere in the thread.
> 

Cheers,
Greg

1: https://github.com/pallets/werkzeug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-23 Thread Sean McGinnis
On Thu, Jun 23, 2016 at 12:07:43PM -0600, Chris Friesen wrote:
> On 06/23/2016 10:09 AM, Walter A. Boring IV wrote:
> >
> >volumes connected to QEMU instances eventually become directly connected?
> >
> >>Our long term goal is that 100% of all network storage will be connected
> >>to directly by QEMU. We already have the ability to partially do this with
> >>iSCSI, but it is lacking support for multipath. As & when that gap is
> >>addressed though, we'll stop using the host OS for any iSCSI stuff.
> >>
> >>So if you're requiring access to host iSCSI volumes, it'll work in the
> >>short-medium term, but in the medium-long term we're not going to use
> >>that so plan accordingly.
> >
> >What is the benefit of this largely monolithic approach?  It seems that
> >moving everything into QEMU is diametrically opposed to the unix model 
> >itself and
> >is just a re-implementation of what already exists in the linux world 
> >outside of
> >QEMU.
> >
> >Does QEMU support hardware initiators? iSER?
> >
> >We regularly fix issues with iSCSI attaches in the release cycles of 
> >OpenStack,
> >because it's all done in python using existing linux packages.  How often 
> >are QEMU
> >releases done and upgraded on customer deployments vs. python packages 
> >(os-brick)?
> >
> >I don't see a compelling reason for re-implementing the wheel,
> >and it seems like a major step backwards.
> 
> This is an interesting point.
> 
> Unless there's a significant performance benefit to connecting
> directly from qemu, it seems to me that we would want to leverage
> the existing work done by the kernel and other "standard" iSCSI
> initators.
> 
> Chris

I'm curious to find out this as well. Is this for a performance gain? If
so, do we have any metrics showing that gain is significant enough to
warrant making a change like this?

The host OS is still going to be involved. AFAIK, this just cuts out the
software iSCSI initiator from the picture. So we would be moving from a
piece of software dedicated to one specific functionality, to a
different piece of software whose main reason for existence has nothing
to do with IO path management.

I'm not saying I'm completely opposed to this. If there is a reason for
doing it then it could be worth it. But so far I haven't seen anything
explaining why this would be better than what we have today.

> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-23 Thread Sean McGinnis
On Thu, Jun 23, 2016 at 09:50:11AM -0700, Preston L. Bannister wrote:
> Daniel, Thanks. Looking for a sense of direction.
> 
> Clearly there is some range of opinion, as Walter indicates. :)
> 
> Not sure you can get to 100% direct connection to QEMU. When there is
> dedicated hardware to do off-board processing of the connection to storage,
> you might(?) be stuck routing through the nova-compute host Linux. (I am
> not an expert in this area, so I could easily be wrong.) This sort of
> hardware tends to be associated with higher-end "enterprise" storage and
> hosts (and higher cost). The storage guys are in the habit of calling these
> "HBA adaptors" (for high-bandwidth) - might just be a local thing.

nit: HBA == Host Bus Adapter

:)
> 
> I *suspect* that higher cost will drive most cloud deployments away from
> that sort of specialized hardware. In which case the direct-connect to QEMU
> model should (mostly?) work. (My non-expert guess.)
> 
> 
> 
> On Thu, Jun 23, 2016 at 9:09 AM, Walter A. Boring IV 
> wrote:
> 
> >
> > volumes connected to QEMU instances eventually become directly connected?
> >
> > Our long term goal is that 100% of all network storage will be connected
> >> to directly by QEMU. We already have the ability to partially do this with
> >> iSCSI, but it is lacking support for multipath. As & when that gap is
> >> addressed though, we'll stop using the host OS for any iSCSI stuff.
> >>
> >> So if you're requiring access to host iSCSI volumes, it'll work in the
> >> short-medium term, but in the medium-long term we're not going to use
> >> that so plan accordingly.
> >>
> >
> > What is the benefit of this largely monolithic approach?  It seems that
> > moving everything into QEMU is diametrically opposed to the unix model
> > itself and
> > is just a re-implementation of what already exists in the linux world
> > outside of QEMU.
> >
> > Does QEMU support hardware initiators? iSER?
> >
> > We regularly fix issues with iSCSI attaches in the release cycles of
> > OpenStack,
> > because it's all done in python using existing linux packages.  How often
> > are QEMU
> > releases done and upgraded on customer deployments vs. python packages
> > (os-brick)?
> >
> > I don't see a compelling reason for re-implementing the wheel,
> > and it seems like a major step backwards.
> >
> >
> >> Xiao's unanswered query (below) presents another question. Is this a
> >>> site-choice? Could I require my customers to configure their OpenStack
> >>> clouds to always route iSCSI connections through the nova-compute host?
> >>> (I
> >>> am not a fan of this approach, but I have to ask.)
> >>>
> >>
> 
> > In the short term that'll work, but long term we're not intending to
> >> support that once QEMU gains multi-path. There's no timeframe on when
> >> that will happen though.
> >>
> >>
> >> Regards,
> >> Daniel
> >>
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Sean McGinnis
On Thu, Jun 23, 2016 at 12:58:23PM -0400, Davanum Srinivas wrote:

+1 from me as well. I get questions on support for these often enough.

> +1 from me as well Doug! ("community to set a goal for Ocata to have Python
> 3 functional tests running for all projects.")
> 
> -- Dims
> 
> On Thu, Jun 23, 2016 at 12:11 PM, Doug Hellmann  wrote:
> > Excerpts from Thomas Goirand's message of 2016-06-22 10:49:01 +0200:
> >> On 06/22/2016 09:18 AM, Victor Stinner wrote:
> >> > Hi,
> >> >
> >> > Current status: only 3 projects are not ported yet to Python 3:
> >> >
> >> > * nova (76% done)
> >> > * trove (42%)
> >> > * swift (0%)
> >> >
> >> >https://wiki.openstack.org/wiki/Python3
> >> >
> >> > Number of projects already ported:
> >> >
> >> > * 19 Oslo Libraries
> >> > * 4 Development Tools
> >> > * 22 OpenStack Clients
> >> > * 6 OpenStack Libraries (os-brick, taskflow, glance_store, ...)
> >> > * 12 OpenStack services approved by the TC
> >> > * 17 OpenStack services (not approved by the TC)
> >> >
> >> > Raw total: 80 projects. In fact, 3 remaining projects on 83 is only 4%,
> >> > we are so close! ;-)
> >> >
> >> > The next steps are to port the 3 remaining projects and work on
> >> > functional and integration tests on Python 3.
> >> >
> >> > Victor
> >>
> >> Hi Victor,
> >>
> >> Thanks a lot for your efforts on Py3.
> >>
> >> Do you think it looks like possible to have Nova ported to Py3 during
> >> the Newton cycle?
> >>
> >> Cheers,
> >>
> >> Thomas Goirand (zigo)
> >>
> >
> > I'd like for the community to set a goal for Ocata to have Python
> > 3 functional tests running for all projects.
> >
> > As Tony points out, it's a bit late to have this as a priority for
> > Newton, though work can and should continue. But given how close
> > we are to having the initial phase of the port done (thanks Victor!),
> > and how far we are from discussions of priorities for Ocata, it
> > seems very reasonable to set a community-wide goal for our next
> > release cycle.
> >
> > Thoughts?
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] CI package build issues

2016-06-23 Thread John Trowbridge


On 06/23/2016 02:56 PM, Dan Prince wrote:
> After discovering some regressions today we found what we think is a
> package build issue in our CI environment which might be the cause of
> our issues:
> 
> https://bugs.launchpad.net/tripleo/+bug/1595660
> 
> Specifically, there is a case where DLRN might not be giving an error
> code if build failures occur, and thus our jobs don't get the updated
> package symlink and give a false positive.
> 
> Until we get this solved be careful when merging. You might look for
> 'packages not built correctly: not updating the consistent symlink' in
> the job output. I see over 200 of these in the last 24 hours:
> 

I updated the bug, but will reply here for completeness. The "not
updating the consistent symlink" message appears 100% of the time when
not building all packages in rdoinfo.

Instead what happened is we built HEAD of master instead of the refspec
from zuul.

http://logs.openstack.org/17/324117/22/check-tripleo/gate-tripleo-ci-centos-7-nonha/3758449/console.html#_2016-06-23_03_40_49_234238

c48410a05ec0ffd11c717bcf350badc9e5f0e910 is the commit it should have built.

4ef338574b1a7cef8b1b884d439556b24fb09718 was built instead.

So the logstash query we could use is instead:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22cleaning%20directory%20and%20cloning%20again%5C%22%20AND%20filename%3A%5C%22console.html%5C%22

I think https://review.rdoproject.org/r/1500 is the fix.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas]HA for lbaas v2 agent

2016-06-23 Thread Akshay Kumar Sanghai
Thanks Assaf.
I have few questions for lbaas:
- if I run agents on multiple nodes, will the requests be distributed by
neutron-server?
- Does the neutron lbaas agent forward the request to octavia-api or to the
neutron-server?

Thanks
Akshay

On Thu, Jun 23, 2016 at 1:00 AM, Assaf Muller  wrote:

> On Wed, Jun 22, 2016 at 3:17 PM, Akshay Kumar Sanghai
>  wrote:
> > Hi,
> > I have a multinode openstack installation (3 controller, 3 network nodes,
> > and some compute nodes).
> > Like l3 agent, is high availability feature available for the lbaas v2
> > agent?
>
> It is not. Nir Magnezi is working on a couple of patches to implement
> a simplistic HA solution for LBaaS v2 with haproxy:
> https://review.openstack.org/#/c/28/
> https://review.openstack.org/#/c/327966/
>
> >
> > Thanks
> > Akshay
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Ironic Newton midcycle sprint summary

2016-06-23 Thread Mathieu Mitchell

Dear group,

Please enjoy this midcycle sprint summary. You might have to put your 
Markdown glasses on for proper formatting :)



The midcycle sprint lasted three days. It was virtually held over 
Infra's conference system and IRC. The event was split into two different 
time slots. The first slot, from 15:00 to 20:00 UTC, was by far the most 
popular. A lot of topics were covered by the participants. The second 
slot, from 00:00 to 04:00 UTC, was much less popular, having a whopping 
peak participant count of four.



June 20 2016 15:00 to 20:00 UTC
---

Most of the group was present for this session. Missing were Devananda 
van der Veen (devananda) and Jay Faulkner (JayF). We decided to create 
an agenda for the upcoming sessions and reserve topics of interest to 
our missing members for the days where they would be present. Also, 
Ukraine was observing a national holiday, so some of our Ukrainian 
members were absent, too.


The session started with an overview of our Newton priorities. This was 
done using the new Ironic Newton Priorities Trello board [1].


[1] https://trello.com/b/ROTxmGIc/ironic-newton-priorities


### Ironic-Neutron integration

The first topic to be covered was the Ironic-Neutron integration. At the 
time of discussing that topic, most patches needed rebasing. However, 
the group still agreed on the following game plan:


  * Merge the devstack parts ASAP
  * Split portgroups support to the end of the patch chain so they can 
merge later. (done: Vladyslav Drok (vdrok) quickly posted a new revision 
for [1] and created [2]).

  * Merge everything but portgroups in server-side Ironic
  * Merge client patches in
  * Merge "Ironic: change flat network provider to 'flat'" in nova [3]
  * Finish portgroups
  * Merge "Replace vif_portgroup_id with vif_port_id" [4] (merged 
already, thanks Dmitry)


[1] https://review.openstack.org/#/c/206244/
[2] https://review.openstack.org/#/c/332177/
[3] https://review.openstack.org/#/c/297895/
[4] https://review.openstack.org/#/c/325197/


### Security groups for Bare metal deployments

Sukhdev Singh (Sukhdev) reports that full security group support will be 
available for bare metal deployments by leveraging the Neutron integration.


> There is minor work that is needed in the Ironic driver. Most of the 
ML2 drivers already know how to deal with Security Groups. Hence, this 
becomes a slam dunk - huge reward with little work.



### Future networking work

Up for review is the spec for VLAN-aware baremetal instances [1] by Sam 
Betts (sambetts). It has received comments and a few reviews, but more 
eyes would help get this through :)


Attach and detach are becoming first class citizens [2] and will be 
defined in network drivers, allowing for different vendors to implement 
their own logic. This will also support post-boot network attach/detach. 
Also, keep an eye open for network-aware scheduling in Nova.


[1] https://review.openstack.org/#/c/277853/
[2] https://review.openstack.org/#/c/317636/


### Driver composition

Big topic that has been in progress for officially more than a year 
(first patch set is dated June 4th, 2015!). We are finally reaching 
consensus. Most people are okay with the spec; the only pain point was 
whether to use the `driver` or the `hardware_type` field. Since hardware_type 
was introduced before dynamic drivers had default interfaces, most of the 
group agreed to drop it and simply keep the `driver` field.


Dmitry Tantsur (dtantsur) quickly posted a few new patch sets and the 
spec [1] is currently awaiting workflow.


[1] https://review.openstack.org/#/c/188370/


### Serial console spec

Up for review is the Nova-compatible serial console support [1]. The 
group had a few questions but none of the authors were present in the room.


One interesting question was whether this should depend on the driver 
composition spec. The answer was that, given the limited scope of the 
serial console in current deployments, simply one or two new drivers 
could be added to the matrix instead of duplicating all current drivers, 
which keeps this work from requiring the driver composition spec.


Everyone interested posted questions directly in the review for the 
authors to answer. Another point of interest was the full path to the 
socat binary, and the code behaving differently based on finding "socat" in 
said string.


[1] https://review.openstack.org/#/c/319505/


June 21 2016 00:00 to 04:00 UTC
---

A small number of participants attended the session.

An informal discussion took place and the following topics were discussed:

* Merge conflicts and their cause
* Feature enabling
* v2 API
* Multi-node devstack deployments

The session ended early at 01:20 UTC.


June 21 2016 15:00 to 20:00 UTC
---

### Versioning the ironic-python-agent API

Our first topic of the day was IPA API versioning. It was agreed that 
Ironic should work with N-1 and N+1 versions of 

[openstack-dev] [TripleO] CI package build issues

2016-06-23 Thread Dan Prince
After discovering some regressions today we found what we think is a
package build issue in our CI environment which might be the cause of
our issues:

https://bugs.launchpad.net/tripleo/+bug/1595660

Specifically, there is a case where DLRN might not be giving an error
code if build failures occur, and thus our jobs don't get the updated
package symlink and give a false positive.

Until we get this solved be careful when merging. You might look for
'packages not built correctly: not updating the consistent symlink' in
the job output. I see over 200 of these in the last 24 hours:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22not%20updating%20the%20consistent%20symlink%5C%22%20AND%20filename%3A%5C%22console.html%5C%22
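
If you want to spot-check a single job rather than go through logstash, a
simple grep of the saved console log works too (a sketch; it assumes you have
downloaded the job's console.html locally):

  grep -n "not updating the consistent symlink" console.html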

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][release] reno 1.8.0 release

2016-06-23 Thread no-reply
We are satisfied to announce the release of:

reno 1.8.0: RElease NOtes manager

With source available at:

http://git.openstack.org/cgit/openstack/reno

Please report issues through launchpad:

http://bugs.launchpad.net/reno

For more details, please see below.

Changes in reno 1.7.0..1.8.0


96b0641 ignore all coverage output files
2a86047 add warnings for malformated input
92ddd8f add API for writing the cache file
0c103c3 report extra files with warnings


Diffstat (except docs and test files)
-

.gitignore|  2 +-
reno/cache.py | 58 ++
reno/loader.py| 42 --
reno/scanner.py   | 11 +++---
6 files changed, 194 insertions(+), 25 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-23 Thread Chris Friesen

On 06/23/2016 10:09 AM, Walter A. Boring IV wrote:


volumes connected to QEMU instances eventually become directly connected?


Our long term goal is that 100% of all network storage will be connected
to directly by QEMU. We already have the ability to partially do this with
iSCSI, but it is lacking support for multipath. As & when that gap is
addressed though, we'll stop using the host OS for any iSCSI stuff.

So if you're requiring access to host iSCSI volumes, it'll work in the
short-medium term, but in the medium-long term we're not going to use
that so plan accordingly.


What is the benefit of this largely monolithic approach?  It seems that
moving everything into QEMU is diametrically opposed to the unix model itself 
and
is just a re-implementation of what already exists in the linux world outside of
QEMU.

Does QEMU support hardware initiators? iSER?

We regularly fix issues with iSCSI attaches in the release cycles of OpenStack,
because it's all done in python using existing linux packages.  How often are 
QEMU
releases done and upgraded on customer deployments vs. python packages 
(os-brick)?

I don't see a compelling reason for re-implementing the wheel,
and it seems like a major step backwards.


This is an interesting point.

Unless there's a significant performance benefit to connecting directly from 
qemu, it seems to me that we would want to leverage the existing work done by 
the kernel and other "standard" iSCSI initiators.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Sean Dague
So, given that the port is done for everything in the base IaaS layer besides
Nova, and there is some python 3 support in Devstack, before Newton is over one could
get a python 3 (except Nova) job running, and start seeing the fallout
of full stack testing. We could even prioritize functional changes in
Nova to get full stack python 3 working (a lot of what is holding Nova
back is actually unit tests that aren't python 3 clean).

That seems like the next logical step, and I think it would help add
incentive for full stack testing to show this actually working outside of
just isolated test suites.

On 06/23/2016 12:58 PM, Davanum Srinivas wrote:
> +1 from me as well Doug! ("community to set a goal for Ocata to have Python
> 3 functional tests running for all projects.")
> 
> -- Dims
> 
> On Thu, Jun 23, 2016 at 12:11 PM, Doug Hellmann  wrote:
>> Excerpts from Thomas Goirand's message of 2016-06-22 10:49:01 +0200:
>>> On 06/22/2016 09:18 AM, Victor Stinner wrote:
 Hi,

 Current status: only 3 projects are not ported yet to Python 3:

 * nova (76% done)
 * trove (42%)
 * swift (0%)

https://wiki.openstack.org/wiki/Python3

 Number of projects already ported:

 * 19 Oslo Libraries
 * 4 Development Tools
 * 22 OpenStack Clients
 * 6 OpenStack Libraries (os-brick, taskflow, glance_store, ...)
 * 12 OpenStack services approved by the TC
 * 17 OpenStack services (not approved by the TC)

 Raw total: 80 projects. In fact, 3 remaining projects on 83 is only 4%,
 we are so close! ;-)

 The next steps are to port the 3 remaining projects and work on
 functional and integration tests on Python 3.

 Victor
>>>
>>> Hi Victor,
>>>
>>> Thanks a lot for your efforts on Py3.
>>>
>>> Do you think it looks like possible to have Nova ported to Py3 during
>>> the Newton cycle?
>>>
>>> Cheers,
>>>
>>> Thomas Goirand (zigo)
>>>
>>
>> I'd like for the community to set a goal for Ocata to have Python
>> 3 functional tests running for all projects.
>>
>> As Tony points out, it's a bit late to have this as a priority for
>> Newton, though work can and should continue. But given how close
>> we are to having the initial phase of the port done (thanks Victor!),
>> and how far we are from discussions of priorities for Ocata, it
>> seems very reasonable to set a community-wide goal for our next
>> release cycle.
>>
>> Thoughts?
>>
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-23 Thread Doug Hellmann
Excerpts from Ihor Dvoretskyi's message of 2016-06-23 19:47:32 +0300:
> Doug, nice work, thanks for collecting these items.
> 
> I'd like to suggest adding Kubernetes as one of the container projects
> that is actively contributed to by people from the OpenStack world. Within the
> Kubernetes Community we have an OpenStack Special Interest Group [0], where
> the most active members are deeply involved in both OpenStack and Kubernetes
> [1].

Thanks for pointing that out. I've updated the list.

Doug

> 
> 0. https://github.com/kubernetes/community/tree/master/sig-openstack
> 1.
> https://github.com/kubernetes/community/blob/master/sig-openstack/SIG-members.md
> 
> On Thu, Jun 23, 2016 at 7:27 PM, Anita Kuno  wrote:
> 
> > On 06/23/2016 08:37 AM, Doug Hellmann wrote:
> > > Excerpts from Doug Hellmann's message of 2016-06-13 15:11:17 -0400:
> > >> I'm trying to pull together some information about contributions
> > >> that OpenStack community members have made *upstream* of OpenStack,
> > >> via code, docs, bug reports, or anything else to dependencies that
> > >> we have.
> > >>
> > >> If you've made a contribution of that sort, I would appreciate a
> > >> quick note.  Please reply off-list, there's no need to spam everyone,
> > >> and I'll post the summary if folks want to see it.
> > >>
> > >> Thanks,
> > >> Doug
> > >>
> > >
> > > I've summarized the results of all of your responses (on and off
> > > list) on a blog post this morning [1]. I removed individual names
> > > because I was concentrating on the community as a whole, rather than
> > > individual contributions.
> > >
> > > I'm sure there are projects not listed, either because I missed
> > > something in my summary or because someone didn't reply. Please feel
> > > free to leave a comment on the post with references to other projects.
> > > It's not necessary to link to specific commits or bugs or anything like
> > > that, unless there's something you would especially like to highlight.
> > >
> > > Thank you for your input into the survey. I'm impressed with the
> > > breadth of the results. I'm very happy to know that our community,
> > > which so often seems to be focused on building new projects, also
> > > contributes to existing projects that we all rely on.
> > >
> > > Doug
> > >
> > > [1]
> > https://doughellmann.com/blog/2016/06/23/openstack-contributions-to-other-open-source-projects/
> > >
> > >
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > Thanks for putting this together Doug that is an impressive list.
> >
> > It is interesting as I had not considered Zuul and Jenkins Job Builder
> > to be 'other open source tools' but you are correct, it is a way to look
> > at them.
> >
> > Thanks Doug,
> > Anita.
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Davanum Srinivas
+1 from me as well Doug! ("community to set a goal for Ocata to have Python
3 functional tests running for all projects.")

-- Dims

On Thu, Jun 23, 2016 at 12:11 PM, Doug Hellmann  wrote:
> Excerpts from Thomas Goirand's message of 2016-06-22 10:49:01 +0200:
>> On 06/22/2016 09:18 AM, Victor Stinner wrote:
>> > Hi,
>> >
>> > Current status: only 3 projects are not ported yet to Python 3:
>> >
>> > * nova (76% done)
>> > * trove (42%)
>> > * swift (0%)
>> >
>> >https://wiki.openstack.org/wiki/Python3
>> >
>> > Number of projects already ported:
>> >
>> > * 19 Oslo Libraries
>> > * 4 Development Tools
>> > * 22 OpenStack Clients
>> > * 6 OpenStack Libraries (os-brick, taskflow, glance_store, ...)
>> > * 12 OpenStack services approved by the TC
>> > * 17 OpenStack services (not approved by the TC)
>> >
>> > Raw total: 80 projects. In fact, 3 remaining projects on 83 is only 4%,
>> > we are so close! ;-)
>> >
>> > The next steps are to port the 3 remaining projects and work on
>> > functional and integration tests on Python 3.
>> >
>> > Victor
>>
>> Hi Victor,
>>
>> Thanks a lot for your efforts on Py3.
>>
>> Do you think it looks like possible to have Nova ported to Py3 during
>> the Newton cycle?
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
>
> I'd like for the community to set a goal for Ocata to have Python
> 3 functional tests running for all projects.
>
> As Tony points out, it's a bit late to have this as a priority for
> Newton, though work can and should continue. But given how close
> we are to having the initial phase of the port done (thanks Victor!),
> and how far we are from discussions of priorities for Ocata, it
> seems very reasonable to set a community-wide goal for our next
> release cycle.
>
> Thoughts?
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Monty Taylor
On 06/23/2016 11:11 AM, Doug Hellmann wrote:
> Excerpts from Thomas Goirand's message of 2016-06-22 10:49:01 +0200:
>> On 06/22/2016 09:18 AM, Victor Stinner wrote:
>>> Hi,
>>>
>>> Current status: only 3 projects are not ported yet to Python 3:
>>>
>>> * nova (76% done)
>>> * trove (42%)
>>> * swift (0%)
>>>
>>>https://wiki.openstack.org/wiki/Python3
>>>
>>> Number of projects already ported:
>>>
>>> * 19 Oslo Libraries
>>> * 4 Development Tools
>>> * 22 OpenStack Clients
>>> * 6 OpenStack Libraries (os-brick, taskflow, glance_store, ...)
>>> * 12 OpenStack services approved by the TC
>>> * 17 OpenStack services (not approved by the TC)
>>>
>>> Raw total: 80 projects. In fact, 3 remaining projects on 83 is only 4%,
>>> we are so close! ;-)
>>>
>>> The next steps are to port the 3 remaining projects and work on
>>> functional and integration tests on Python 3.
>>>
>>> Victor
>>
>> Hi Victor,
>>
>> Thanks a lot for your efforts on Py3.
>>
>> Do you think it looks like possible to have Nova ported to Py3 during
>> the Newton cycle?
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
> 
> I'd like for the community to set a goal for Ocata to have Python
> 3 functional tests running for all projects.
> 
> As Tony points out, it's a bit late to have this as a priority for
> Newton, though work can and should continue. But given how close
> we are to having the initial phase of the port done (thanks Victor!),
> and how far we are from discussions of priorities for Ocata, it
> seems very reasonable to set a community-wide goal for our next
> release cycle.

++


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-23 Thread Preston L. Bannister
Daniel, Thanks. Looking for a sense of direction.

Clearly there is some range of opinion, as Walter indicates. :)

Not sure you can get to 100% direct connection to QEMU. When there is
dedicated hardware to do off-board processing of the connection to storage,
you might(?) be stuck routing through the nova-compute host Linux. (I am
not an expert in this area, so I could easily be wrong.) This sort of
hardware tends to be associated with higher-end "enterprise" storage and
hosts (and higher cost). The storage guys are in the habit of calling these
"HBA adaptors" (for high-bandwidth) - might just be a local thing.

I *suspect* that higher cost will drive most cloud deployments away from
that sort of specialized hardware. In which case the direct-connect to QEMU
model should (mostly?) work. (My non-expert guess.)



On Thu, Jun 23, 2016 at 9:09 AM, Walter A. Boring IV 
wrote:

>
> volumes connected to QEMU instances eventually become directly connected?
>
> Our long term goal is that 100% of all network storage will be connected
>> to directly by QEMU. We already have the ability to partially do this with
>> iSCSI, but it is lacking support for multipath. As & when that gap is
>> addressed though, we'll stop using the host OS for any iSCSI stuff.
>>
>> So if you're requiring access to host iSCSI volumes, it'll work in the
>> short-medium term, but in the medium-long term we're not going to use
>> that so plan accordingly.
>>
>
> What is the benefit of this largely monolithic approach?  It seems that
> moving everything into QEMU is diametrically opposed to the unix model
> itself and
> is just a re-implementation of what already exists in the linux world
> outside of QEMU.
>
> Does QEMU support hardware initiators? iSER?
>
> We regularly fix issues with iSCSI attaches in the release cycles of
> OpenStack,
> because it's all done in python using existing linux packages.  How often
> are QEMU
> releases done and upgraded on customer deployments vs. python packages
> (os-brick)?
>
> I don't see a compelling reason for re-implementing the wheel,
> and it seems like a major step backwards.
>
>
>> Xiao's unanswered query (below) presents another question. Is this a
>>> site-choice? Could I require my customers to configure their OpenStack
>>> clouds to always route iSCSI connections through the nova-compute host?
>>> (I
>>> am not a fan of this approach, but I have to ask.)
>>>
>>

> In the short term that'll work, but long term we're not intending to
>> support that once QEMU gains multi-path. There's no timeframe on when
>> that will happen though.
>>
>>
>> Regards,
>> Daniel
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Placement API WSGI code -- let's just use flask

2016-06-23 Thread Sean M. Collins
Sean Dague wrote:
> If we look at the iaas base layer:
> 
> Keystone - custom WSGI with Routes / Paste
> Glance - WSME + Routes / Paste
> Cinder - custom WSGI with Routes / Paste
> Neutron - pecan + Routes / Paste
> Nova - custom WSGI with Routes / Paste

Neutron's pecan code is still fairly new. Deployments still use the old
code[1]. So, I don't know if Neutron is quite there yet on the pecan
front. :(
 
[1]: 
https://github.com/openstack/neutron/blob/b59bb0fcfa41963c0e2f7bcbf34b7e4f4ff5cd08/neutron/common/config.py#L174

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-23 Thread Ihor Dvoretskyi
Doug, nice work, thanks for collecting these items.

I'd like to suggest adding Kubernetes as one of the container projects
that is actively contributed to by people from the OpenStack world. Within the
Kubernetes Community we have an OpenStack Special Interest Group [0], where
the most active members are deeply involved in both OpenStack and Kubernetes
[1].

0. https://github.com/kubernetes/community/tree/master/sig-openstack
1.
https://github.com/kubernetes/community/blob/master/sig-openstack/SIG-members.md


On Thu, Jun 23, 2016 at 7:27 PM, Anita Kuno  wrote:

> On 06/23/2016 08:37 AM, Doug Hellmann wrote:
> > Excerpts from Doug Hellmann's message of 2016-06-13 15:11:17 -0400:
> >> I'm trying to pull together some information about contributions
> >> that OpenStack community members have made *upstream* of OpenStack,
> >> via code, docs, bug reports, or anything else to dependencies that
> >> we have.
> >>
> >> If you've made a contribution of that sort, I would appreciate a
> >> quick note.  Please reply off-list, there's no need to spam everyone,
> >> and I'll post the summary if folks want to see it.
> >>
> >> Thanks,
> >> Doug
> >>
> >
> > I've summarized the results of all of your responses (on and off
> > list) on a blog post this morning [1]. I removed individual names
> > because I was concentrating on the community as a whole, rather than
> > individual contributions.
> >
> > I'm sure there are projects not listed, either because I missed
> > something in my summary or because someone didn't reply. Please feel
> > free to leave a comment on the post with references to other projects.
> > It's not necessary to link to specific commits or bugs or anything like
> > that, unless there's something you would especially like to highlight.
> >
> > Thank you for your input into the survey. I'm impressed with the
> > breadth of the results. I'm very happy to know that our community,
> > which so often seems to be focused on building new projects, also
> > contributes to existing projects that we all rely on.
> >
> > Doug
> >
> > [1]
> https://doughellmann.com/blog/2016/06/23/openstack-contributions-to-other-open-source-projects/
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> Thanks for putting this together Doug that is an impressive list.
>
> It is interesting as I had not considered Zuul and Jenkins Job Builder
> to be 'other open source tools' but you are correct, it is a way to look
> at them.
>
> Thanks Doug,
> Anita.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,

Ihor Dvoretskyi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-23 Thread Anita Kuno
On 06/23/2016 08:37 AM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2016-06-13 15:11:17 -0400:
>> I'm trying to pull together some information about contributions
>> that OpenStack community members have made *upstream* of OpenStack,
>> via code, docs, bug reports, or anything else to dependencies that
>> we have.
>>
>> If you've made a contribution of that sort, I would appreciate a
>> quick note.  Please reply off-list, there's no need to spam everyone,
>> and I'll post the summary if folks want to see it.
>>
>> Thanks,
>> Doug
>>
> 
> I've summarized the results of all of your responses (on and off
> list) on a blog post this morning [1]. I removed individual names
> because I was concentrating on the community as a whole, rather than
> individual contributions.
> 
> I'm sure there are projects not listed, either because I missed
> something in my summary or because someone didn't reply. Please feel
> free to leave a comment on the post with references to other projects.
> It's not necessary to link to specific commits or bugs or anything like
> that, unless there's something you would especially like to highlight.
> 
> Thank you for your input into the survey. I'm impressed with the
> breadth of the results. I'm very happy to know that our community,
> which so often seems to be focused on building new projects, also
> contributes to existing projects that we all rely on.
> 
> Doug
> 
> [1] 
> https://doughellmann.com/blog/2016/06/23/openstack-contributions-to-other-open-source-projects/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Thanks for putting this together Doug that is an impressive list.

It is interesting as I had not considered Zuul and Jenkins Job Builder
to be 'other open source tools' but you are correct, it is a way to look
at them.

Thanks Doug,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-23 Thread Walter A. Boring IV


volumes connected to QEMU instances eventually become directly connected?


Our long term goal is that 100% of all network storage will be connected
to directly by QEMU. We already have the ability to partially do this with
iSCSI, but it is lacking support for multipath. As & when that gap is
addressed though, we'll stop using the host OS for any iSCSI stuff.

So if you're requiring access to host iSCSI volumes, it'll work in the
short-medium term, but in the medium-long term we're not going to use
that so plan accordingly.


What is the benefit of this largely monolithic approach?  It seems that
moving everything into QEMU is diametrically opposed to the unix model 
itself and
is just a re-implementation of what already exists in the linux world 
outside of QEMU.


Does QEMU support hardware initiators? iSER?

We regularly fix issues with iSCSI attaches in the release cycles of 
OpenStack,
because it's all done in python using existing linux packages.  How 
often are QEMU
releases done and upgraded on customer deployments vs. python packages 
(os-brick)?


I don't see a compelling reason for re-implementing the wheel,
and it seems like a major step backwards.




Xiao's unanswered query (below) presents another question. Is this a
site-choice? Could I require my customers to configure their OpenStack
clouds to always route iSCSI connections through the nova-compute host? (I
am not a fan of this approach, but I have to ask.)

In the short term that'll work, but long term we're not intending to
support that once QEMU gains multi-path. There's no timeframe on when
that will happen though.



Regards,
Daniel



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Doug Hellmann
Excerpts from Thomas Goirand's message of 2016-06-22 10:49:01 +0200:
> On 06/22/2016 09:18 AM, Victor Stinner wrote:
> > Hi,
> > 
> > Current status: only 3 projects are not ported yet to Python 3:
> > 
> > * nova (76% done)
> > * trove (42%)
> > * swift (0%)
> > 
> >https://wiki.openstack.org/wiki/Python3
> > 
> > Number of projects already ported:
> > 
> > * 19 Oslo Libraries
> > * 4 Development Tools
> > * 22 OpenStack Clients
> > * 6 OpenStack Libraries (os-brick, taskflow, glance_store, ...)
> > * 12 OpenStack services approved by the TC
> > * 17 OpenStack services (not approved by the TC)
> > 
> > Raw total: 80 projects. In fact, 3 remaining projects on 83 is only 4%,
> > we are so close! ;-)
> > 
> > The next steps are to port the 3 remaining projects and work on
> > functional and integration tests on Python 3.
> > 
> > Victor
> 
> Hi Victor,
> 
> Thanks a lot for your efforts on Py3.
> 
> Do you think it looks like possible to have Nova ported to Py3 during
> the Newton cycle?
> 
> Cheers,
> 
> Thomas Goirand (zigo)
> 

I'd like for the community to set a goal for Ocata to have Python
3 functional tests running for all projects.

As Tony points out, it's a bit late to have this as a priority for
Newton, though work can and should continue. But given how close
we are to having the initial phase of the port done (thanks Victor!),
and how far we are from discussions of priorities for Ocata, it
seems very reasonable to set a community-wide goal for our next
release cycle.

Thoughts?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack Neutron lbaas question

2016-06-23 Thread Doug Wiegley
Hi,

LBaaS is installed automatically if neutron is present. Then you just need to 
install a driver and get everything configured.  I’d start here:

http://docs.openstack.org/liberty/networking-guide/adv-config-lbaas.html 
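
As a rough sketch of what "install a driver and get everything configured"
tends to look like for the default haproxy driver (the class paths below are
the Liberty/Mitaka-era ones; double-check them against the guide above before
relying on them):

  # neutron.conf
  [DEFAULT]
  service_plugins = neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

  [service_providers]
  service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  # lbaas_agent.ini
  [DEFAULT]
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver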


Thanks,
doug


> On Jun 23, 2016, at 9:33 AM, zhihao wang  wrote:
> 
> 
> Dear Openstack Team:
> 
> I have some questions regarding openstack Mitaka Neutron lbaas.
> 
> For Neutron-lbaas installation, should I just follow this link?
> 
> https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun 
> 
> 
> My openstack has controller and compute nodes. It is not installed by 
> Devstack; it is manually installed in a production environment (one controller 
> and a few compute nodes).
>  
> Can I install Neutron-lbaas in a production environment? If I can, then what 
> is the documentation guide?
> Thanks
> wally
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ansible] Fail to install Openstack all in one

2016-06-23 Thread Alioune
Hi all,

I'm trying to install OpenStack all-in-one using ansible but I got the
error below.
Does anyone know how to solve this error?

Regards,

ERROR:
+ openstack-ansible --forks 1 openstack-hosts-setup.yml
Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e
@/etc/openstack_deploy/user_variables.yml "
ERROR: Inventory script (inventory/dynamic_inventory.py) had an execution
error: No user config loadaed
No openstack_user_config files are available in either the base location or
the conf.d directory

+ (( RETRY++ ))
+ (( 1 != 0 && RETRY < MAX_RETRIES ))
+ '[' 1 -gt 1 ']'
+ openstack-ansible --forks 1 openstack-hosts-setup.yml
Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e
@/etc/openstack_deploy/user_variables.yml "
ERROR: Inventory script (inventory/dynamic_inventory.py) had an execution
error: No user config loadaed
No openstack_user_config files are available in either the base location or
the conf.d directory

+ (( RETRY++ ))
+ (( 1 != 0 && RETRY < MAX_RETRIES ))
+ '[' 2 -gt 1 ']'
+ openstack-ansible --forks 1 openstack-hosts-setup.yml -
Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e
@/etc/openstack_deploy/user_variables.yml "
ERROR: Inventory script (inventory/dynamic_inventory.py) had an execution
error: No user config loadaed
No openstack_user_config files are available in either the base location or
the conf.d directory

+ (( RETRY++ ))
+ (( 1 != 0 && RETRY < MAX_RETRIES ))
+ '[' 3 -gt 1 ']'
+ openstack-ansible --forks 1 openstack-hosts-setup.yml -
Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e
@/etc/openstack_deploy/user_variables.yml "
ERROR: Inventory script (inventory/dynamic_inventory.py) had an execution
error: No user config loadaed
No openstack_user_config files are available in either the base location or
the conf.d directory
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Openstack Neutron lbaas question

2016-06-23 Thread zhihao wang




Dear Openstack Team:

I have some questions regarding openstack Mitaka Neutron lbaas.

For Neutron-lbaas installation, should I just follow this link?

https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun

My openstack has controller and compute nodes. It is not installed by Devstack;
it is manually installed in a production environment (one controller and a few
compute nodes).

Can I install Neutron-lbaas in a production environment? If I can, then what is
the documentation guide?

Thanks
wally

  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Deprecation of fuel-mirror tool

2016-06-23 Thread Roman Prykhodchenko
A big +1

> On 23 Jun 2016, at 14:59, Bulat Gaifullin  
> wrote:
> 
> Totally agree with this decision.
> 
> Vladimir, thank you for driving this activity.
> 
> Regards,
> Bulat Gaifullin
> Mirantis Inc.
> 
> 
> 
>> On 23 Jun 2016, at 13:31, Vladimir Kozhukalov > > wrote:
>> 
>> Dear colleagues.
>> 
>> I'd like to announce that fuel-mirror tool is not going to be a part of Fuel 
>> any more. Its functionality is to build/clone rpm/deb repos and modify Fuel 
>> releases repository lists (metadata).
>> 
>> Since Fuel 10.0 it is recommended to use other available tools for managing 
>> local deb/rpm repositories.
>> 
>> Packetary is a good example [0]. Packetary is ideal if one needs to create a 
>> partial mirror of a deb/rpm repository, i.e. a mirror that contains not all 
>> available packages but only a subset of them. To create a full mirror it 
>> is better to use debmirror, rsync, or any other tools that are available.
>> 
>> To modify releases repository lists one can use commands which are 
>> available by default on the Fuel admin node since Newton.
>> 
>> # list of available releases
>> fuel2 release list
>> # list of repositories for a release
>> fuel2 release repos list 
>> # save list of repositories for a release in yaml format
>> fuel2 release repos list  -f yaml | tee repos.yaml
>> # modify list of repositories
>> vim repos.yaml
>> # update list of repositories for a release from yaml file
>> fuel2 release repos update  -f repos.yaml
>> 
>> They are provided by python-fuelclient [1] package and were introduced by 
>> this [2] patch.
>> 
>> 
>> [0] https://wiki.openstack.org/wiki/Packetary 
>> 
>> [1] https://github.com/openstack/python-fuelclient 
>> 
>> [2] https://review.openstack.org/#/c/326435/ 
>> 
>> 
>> 
>> Vladimir Kozhukalov
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] pbr potential breaking change coming

2016-06-23 Thread Jeremy Stanley
On 2016-06-23 13:44:36 +0200 (+0200), Markus Zoeller wrote:
> Interesting, I didn't know that. It runs without warnings in Nova too,
> so hopefully everything's in good shape for the pbr release. :)

Note that there has been at least one past proposal[1] to add a
"docs" tox environment as mandated in the CTI[2] for official
projects in Python, but it was rejected over concerns it would
encourage projects to run arbitrary commands before/after the Sphinx
build rather than implementing proper plugins.

[1] https://review.openstack.org/119875
[2] 
http://governance.openstack.org/reference/cti/python_cti.html#specific-commands
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Weekly meeting for today canceled.

2016-06-23 Thread Andrew Woodward
There is nothing in the agenda for the meeting today, so the meeting is
canceled.

https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-06-23 Thread Sean Dague
On 06/23/2016 10:07 AM, Sean McGinnis wrote:
> On Thu, Jun 23, 2016 at 12:19:34AM +, Angus Lees wrote:
>> So how does rootwrap fit into the "theory of upgrade"?
>>
>> The doc talks about deprecating config, but is silent on when new required
>> config (rootwrap filters) should be installed.  By virtue of the way the
>> grenade code works (install N from scratch, attempt to run code from N+1),
>> we effectively have a policy that any new configs are installed at some
>> nebulous time *after* the N+1 code is deployed.  In particular, this means
>> a new rootwrap filter needs to be merged a whole release before that
>> rootwrap entry can be used - and anything else is treated as an "exception"
>> (see for example the nova from-* scripts which have basically updated
>> rootwrap for each release).
>>
>> --
>>
>> Stepping back, I feel like an "expand-contract" style upgrade process
>> involving rootwrap should look something like
>> 0. Update rootwrap to be the union of N and N+1 rootwrap filters,
>> 1. Rolling update from N to N+1 code,
>> 2. Remove N-only rootwrap entries.
>>
>> We could make that a bit easier for deployers by having a sensible
>> deprecation policy that requires our own rootwrap filters for each release
>> are already the union of this release and the last (which indeed is already
>> our policy aiui), and then it would just look like:
>> 0. Install rootwrap filters from N+1
>> 1. Rolling update to N+1 code
> 
> I think effectively this is what we've ended up with in the past.
> 
> We've had this issue for some time. There have been several releases
> where either Cinder drivers or os-brick changes have needed to add
> rootwrap changes. Theoretically we _should_ have hit these problems long
> ago.
> 
> I think the only reason it hasn't come up before is that these changes
> are usually for vendor storage backends. So they never got hit in
> grenade tests since those use LVM. We have third party CI, but grenade
> tests are not a part of that.
> 
> The switch to privsep now has really highlighted this gap. I think we
> need to make this implied constraint clear and have it documented. To
> upgrade we will need to make sure the rootwrap filters are in place
> _before_ we perform any upgrades.

Are we going to have to do this for every service individually as it
moves to privsep? Or is there a way we can do it common once, take the
hit, and everyone moves forward?

For instance, can we get oslo.rootwrap to make an exception, in code,
for privsep-helper? Thereby not having to touch a config file in etc to
roll forward.
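
For reference, the filter entry in question is tiny; a sketch of what projects
have been adding looks like this (the filters file name and location differ per
project, so treat this as illustrative):

  [Filters]
  # allow rootwrap to start the privsep daemon helper as root
  privsep-helper: CommandFilter, privsep-helper, root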

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [bigswitch] question about neutron bigswitch agent

2016-06-23 Thread Emilien Macchi
for the record: https://review.openstack.org/#/c/73/

On Wed, Jun 22, 2016 at 11:50 PM, Emilien Macchi  wrote:
> Ok I found what I wanted, for the record, only
> python-networking-bigswitch is required on the compute node, and not
> the restproxy options in neutron.
>
> On Wed, Jun 22, 2016 at 1:04 PM, Emilien Macchi  wrote:
>> Hey,
>>
>> I have a quick question for Bigswitch folks.
>> I'm currently working on TripleO composable service and refactoring a
>> bit of Neutron puppet code in TripleO Heat Templates.
>> I need to know whether, when you deploy a compute node, you actually need
>> the neutron plugin parameters (restproxy bits [1]).
>>
>> Looking at current THT master, it looks like compute environment does
>> not load the plugin, but only agent bits, but
>> puppet-neutron/agents/bigswitch has a require on the plugin, so I'm
>> really confused.
>>
>> Thanks for your help,
>>
>> [1] 
>> https://github.com/openstack/puppet-neutron/blob/master/manifests/plugins/ml2/bigswitch/restproxy.pp#L72-L84
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [javascript] remove IIFE in js-generator-openstack

2016-06-23 Thread Yujun Zhang
Hi, JavaScript Ninjas,

Currently, all scripts in js-generator-openstack are written in
IIFE form, which is not necessary for a nodejs project but introduces some
trouble for jsdoc.

What do you think about removing all the IIFE wrappers?

A partial modification can be previewed in
https://review.openstack.org/#/c/322881/

--
Yujun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.concurrency 2.6.1 release (liberty)

2016-06-23 Thread no-reply
We are gleeful to announce the release of:

oslo.concurrency 2.6.1: Oslo Concurrency library

This release is part of the liberty stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.concurrency

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

For more details, please see below.

Changes in oslo.concurrency 2.6.0..2.6.1


d65d931 processutils: add support for missing process limits
e33f64f Add prlimit parameter to execute()
306cf37 Updated from global requirements
d0de35f Updated from global requirements
725e2f4 Updated from global requirements
d65a8b8 Update .gitreview for stable/liberty
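
The headline change is the new prlimit support in processutils.execute(). A
minimal usage sketch (not taken from the release itself; check the published
ProcessLimits documentation for the full set of supported limits):

  from oslo_concurrency import processutils

  # cap the child's address space at 1 GiB before running the command
  limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024)
  out, err = processutils.execute('uname', '-a', prlimit=limits)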


Diffstat (except docs and test files)
-

.gitreview   |   3 +-
oslo_concurrency/prlimit.py  | 110 +
oslo_concurrency/processutils.py |  67 +++
requirements.txt |   6 +-
setup.py |   2 +-
6 files changed, 327 insertions(+), 5 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 9ac5e1c..5e66c50 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,2 +5,2 @@
-pbr<2.0,>=1.6
-Babel>=1.3
+pbr>=1.6
+Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD
@@ -10 +10 @@ oslo.i18n>=1.5.0 # Apache-2.0
-oslo.utils>=2.0.0 # Apache-2.0
+oslo.utils!=2.6.0,>=2.0.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-06-23 Thread Sean McGinnis
On Thu, Jun 23, 2016 at 12:19:34AM +, Angus Lees wrote:
> So how does rootwrap fit into the "theory of upgrade"?
> 
> The doc talks about deprecating config, but is silent on when new required
> config (rootwrap filters) should be installed.  By virtue of the way the
> grenade code works (install N from scratch, attempt to run code from N+1),
> we effectively have a policy that any new configs are installed at some
> nebulous time *after* the N+1 code is deployed.  In particular, this means
> a new rootwrap filter needs to be merged a whole release before that
> rootwrap entry can be used - and anything else is treated as an "exception"
> (see for example the nova from-* scripts which have basically updated
> rootwrap for each release).
> 
> --
> 
> Stepping back, I feel like an "expand-contract" style upgrade process
> involving rootwrap should look something like
> 0. Update rootwrap to be the union of N and N+1 rootwrap filters,
> 1. Rolling update from N to N+1 code,
> 2. Remove N-only rootwrap entries.
> 
> We could make that a bit easier for deployers by having a sensible
> deprecation policy that requires our own rootwrap filters for each release
> are already the union of this release and the last (which indeed is already
> our policy aiui), and then it would just look like:
> 0. Install rootwrap filters from N+1
> 1. Rolling update to N+1 code

I think effectively this is what we've ended up with in the past.

We've had this issue for some time. There have been several releases
where either Cinder drivers or os-brick changes have needed to add
rootwrap changes. Theoretically we _should_ have hit these problems long
ago.

I think the only reason it hasn't come up before is that these changes
are usually for vendor storage backends. So they never got hit in
grenade tests since those use LVM. We have third party CI, but grenade
tests are not a part of that.

The switch to privsep now has really highlighted this gap. I think we
need to make this implied constraint clear and have it documented. To
upgrade we will need to make sure the rootwrap filters are in place
_before_ we perform any upgrades.

> 
> --
> 
> So: We obviously need to update rootwrap filters at *some* point, and we
> should actually decide when that is.
> 
> We can stick with the current de-facto "config after code" grenade policy
> where we use the rootwrap filters from N for code from N+1, but this
> implies a 6-month lag on any new rootwrap-using code.  I hereby propose we
> choose a "config before code" where we use the rootwrap filters for N+1 for
> code from N+1.  This implies that upgrading the rootwrap filters is a
> necessary precondition for a code upgrade.

+1

> 
> In practice (for grenade) this just means copying any rootwrap filters from
> the new branch into place before attempting code upgrade.  I presume there
> would also be a corresponding ops docs impact - but I haven't seen our
> current published upgrade procedure.

I think we definitely should have this documented somewhere. Not that
most folks will read that documentation, but at least we have it out
there.

> 
> Thoughts?
> 
>  - Gus

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [storlets]Brand new documentation

2016-06-23 Thread eran

Brand new documentation for storlets is now available.
Compiled HTML format can be found here:
http://storlets.readthedocs.io/en/latest/

Major new sections are:
1. A use cases section
2. Detailed dev env installation instructions (a la SAIO), as well as
deployment instructions.


Thanks,
Eran



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Today's API-WG meeting is headless

2016-06-23 Thread Chris Dent


All three of the usual chairs of the API-WG meeting (at 1600 UTC
in #openstack-meeting-3) are travelling today and will not be in
attendance. Apologies for the late notice.

If you were hoping to bring an issue up at the meeting today, please
instead start an email thread tagged "[api]" so we can discuss it
asynchronously or, if it seems appropriate, create a bug[1].

If your heart was yearning to do some API-WG related thinking and
working during that hour, go to #openstack-meeting-3 at 1600 UTC and
find like-minded people to discuss the pending guidelines[2] or
write some new guidelines based on the bugs in [1].

Thanks!

[1] https://bugs.launchpad.net/openstack-api-wg

[2] https://review.openstack.org/#/q/project:openstack/api-wg+status:open

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] better solution for the non-ini format configure file

2016-06-23 Thread Steven Dake (stdake)
Looks like a really cool feature, and possibly a way to handle non-ini files,
such as policy.json files, which we just want to copy rather than override.

Kolla already has ini merging though, and the example you provide is an ini
file.  I like Kolla's ini merging, and it is sort of an external interface,
since operators have been using it, so removing it would mean following the
deprecation cycle.  I do agree this would be fantastic for straight copies of
non-ini configuration files.

Regards
-steve

From: OpenStack Mailing List Archive
Reply-To: "openstack-dev@lists.openstack.org"
Date: Wednesday, June 22, 2016 at 8:47 PM
To: "openstack-dev@lists.openstack.org"
Subject: Re: [openstack-dev] [kolla] better solution for the non-ini format
configure file

Link: https://openstack.nimeyo.com/83165/?show=88496#a88496
From: AndrewLiu


Recently, we found this feature of Ansible:
http://docs.ansible.com/ansible/playbooks_loops.html#finding-first-matched-files

A specific template file path can be added to the Ansible task.

If a user wants to customize a non-ini template file, they can copy the
template file to the customization directory and modify it as they wish.

An example of how to modify the ansible task:

change from:

- name: Copying over horizon.conf
  template:
    src: "{{ item }}.conf.j2"
    dest: "{{ node_config_directory }}/{{ item }}/{{ item }}.conf"
  with_items:
    - "horizon"


to:

- name: Copying over horizon.conf
  template:
    src: "{{ item }}"
    dest: "{{ node_config_directory }}/horizon/horizon.conf"
  with_first_found:
    - "{{ node_custom_config }}/horizon.conf.j2"
    - "horizon.conf.j2"


But a convention of how to organize the directory structure of customization 
template files should be addressed.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Deprecation of fuel-mirror tool

2016-06-23 Thread Bulat Gaifullin
Totally agree with this decision.

Vladimir, thank you for driving this activity.

Regards,
Bulat Gaifullin
Mirantis Inc.



> On 23 Jun 2016, at 13:31, Vladimir Kozhukalov wrote:
> 
> Dear colleagues.
> 
> I'd like to announce that fuel-mirror tool is not going to be a part of Fuel 
> any more. Its functionality is to build/clone rpm/deb repos and modify Fuel 
> releases repository lists (metadata). 
> 
> Since Fuel 10.0 it is recommended to use other available tools for managing 
> local deb/rpm repositories. 
> 
> Packetary is a good example [0]. Packetary is ideal if one needs to create a 
> partial mirror of a deb/rpm repository, i.e. mirror that contains not all 
> available packages but only a subset of packages. To create full mirror it is 
> better to use debmirror or rsync or any other tools that are available.
> 
> To modify a release's repository lists, one can use commands which are
> available by default on the Fuel admin node since Newton.
> 
> # list of available releases
> fuel2 release list
> # list of repositories for a release
> fuel2 release repos list 
> # save list of repositories for a release in yaml format
> fuel2 release repos list  -f yaml | tee repos.yaml
> # modify list of repositories
> vim repos.yaml
> # update list of repositories for a release from yaml file 
> fuel2 release repos update  -f repos.yaml
> 
> They are provided by python-fuelclient [1] package and were introduced by 
> this [2] patch. 
> 
> 
> [0] https://wiki.openstack.org/wiki/Packetary 
>  
> [1] https://github.com/openstack/python-fuelclient 
> 
> [2] https://review.openstack.org/#/c/326435/ 
> 
> 
> 
> Vladimir Kozhukalov
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-23 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-06-13 15:11:17 -0400:
> I'm trying to pull together some information about contributions
> that OpenStack community members have made *upstream* of OpenStack,
> via code, docs, bug reports, or anything else to dependencies that
> we have.
> 
> If you've made a contribution of that sort, I would appreciate a
> quick note.  Please reply off-list, there's no need to spam everyone,
> and I'll post the summary if folks want to see it.
> 
> Thanks,
> Doug
> 

I've summarized the results of all of your responses (on and off
list) on a blog post this morning [1]. I removed individual names
because I was concentrating on the community as a whole, rather than
individual contributions.

I'm sure there are projects not listed, either because I missed
something in my summary or because someone didn't reply. Please feel
free to leave a comment on the post with references to other projects.
It's not necessary to link to specific commits or bugs or anything like
that, unless there's something you would especially like to highlight.

Thank you for your input into the survey. I'm impressed with the
breadth of the results. I'm very happy to know that our community,
which so often seems to be focused on building new projects, also
contributes to existing projects that we all rely on.

Doug

[1] 
https://doughellmann.com/blog/2016/06/23/openstack-contributions-to-other-open-source-projects/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] libvirt driver: who should create the libvirt.xml file?

2016-06-23 Thread Markus Zoeller
On 23.06.2016 11:50, Daniel P. Berrange wrote:
> On Mon, Jun 20, 2016 at 05:47:57PM +0200, Markus Zoeller wrote:
>> While working on the change series to implement the virtlogd feature I
>> got feedback [1] to move code which creates parts of the libvirt.xml
>> file from the "driver" module into the "guest" module. I'm a bit
>> hesitant to do so as the responsibility of creating a valid libvirt.xml
>> file is then spread across 3 modules:
>> * driver.py
>> * guest.py
>> * designer.py
>> I'm only looking for a guideline here (The "driver" module is humongous
>> and I think it would be a good thing to have the "libvirt.xml" creation
>> code outside of it). Thoughts?
> 
> The designer.py file was created as a place which would ultimately hold
> all the XML generator logic.
> 
> Ultimately the "_get_guest_xml" (and everything it calls) from driver.py
> would move into the designer.py class. Before we could do that though, we
> needed to create the host.py + guest.py classes to isolate the libvirt
> API logic.
> 
> Now that the guest.py conversion/move is mostly done, we should be able
> to start moving the XML generation out of driver.py  and into designer.py
> 
> I would definitely *not* put XML generation code into guest.py
> 
> In terms of your immediate patch, I'd suggest just following current
> practice and putting your new code in driver.py.  We'll move everything
> over to designer.py at the same time, later on.
> 
> Regards,
> Daniel
> 

Cool, thanks! That was the guideline I hoped for. Count me in when the
move to "designer.py" gets tackled.

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] pbr potential breaking change coming

2016-06-23 Thread Markus Zoeller
On 23.06.2016 12:21, Andreas Jaeger wrote:
> On 06/23/2016 10:43 AM, Markus Zoeller wrote:
>> On 21.06.2016 15:01, Doug Hellmann wrote:
>>> A while back pbr had a feature that let projects pass "warnerror"
>>> through to Sphinx during documentation builds, causing any warnings in
>>> that build to be treated as an error and fail the build. This lets us
>>> avoid things like links to places that don't exist in the docs, bad but
>>> renderable rst, typos in directive or role names, etc.
>>>
>>> Somewhat more recently, but still a while ago, that feature "broke"
>>> with a Sphinx release that was not API compatible. Sachi King has
>>> fixed this issue within pbr, and so the next release of pbr will
>>> fix the broken behavior and start correctly passing warnerror again.
>>> That may result in doc builds breaking where they didn't before.
>>>
>>> The short-term solution is to turn off warnerrors (look in your
>>> setup.cfg), then fix the issues and turn it back on. Or you could
>>> preemptively fix any existing warnings in your doc builds before the
>>> release, but it's simple enough to turn off the feature if there isn't
>>> time.
>>>
>>> Josh, Sachi, & other Oslo folks, I think we should hold off on
>>> releasing until next week to give folks time. Is that OK?
>>>
>>> Doug
>>>
>>> PS - Thanks, Sachi, I know that bug wasn't a ton of fun to fix!
>> [...]
>>
>> Thanks for the heads up. I checked Nova, and every sphinx-build command
>> we use in our "tox.ini" file already uses the -W flag to treat warnings
>> as errors. I guess we're safe with that?
>>
> 
> Infra calls the docs job using
> tox -e venv -- python setup.py build_sphinx
> 
> And that's where the change comes in...
> 
> So, your docs environment is not used as is in the gate,
> 
> Andreas
> 

Interesting, I didn't know that. It runs without warnings in Nova too,
so hopefully everything's in a good shape for the pbr release. :)

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][openid][mistral] Enabling OpenID Connect authentication w/o federation

2016-06-23 Thread Renat Akhmerov
Hi,

I’m looking for some hints on how to enable authentication via the OpenID
Connect protocol, particularly in Mistral. Actually, the specific protocol is
not so important; I’m mostly interested in the conceptual vision here, and I’d
like to ask the community whether what we would like to do makes sense.

Problem statement

While there are people using Mistral as an OpenStack service with proper
Keystone authentication, some people want to be able to use it without
OpenStack at all, or in scenarios where OpenStack is just one of the things
that Mistral workflows interact with.

In one of our cases we want to use Mistral without OpenStack, but we still
want Mistral to perform authentication via OIDC. I’ve done some research on
what Keystone already has that could help us do that, and I found a group of
plugins for OIDC authentication flows under [1]. The problem I see with these
plugins for my particular case is that I still have to properly install
Keystone and configure it for Federation, since the plugins rely on
Federation. That feels like a redundant, time-consuming step. A normal flow
for these plugins is to first get a so-called unscoped token via OIDC and then
request a scoped token from Keystone via its Federation API. I think I
understand why it works this way; it’s well documented in the Keystone docs.
Briefly, Keystone is needed to get user info, the list of available resources
and so on, since the OIDC server does not provide that and only works as an
identity provider.

What ideally I'd like to do is to avoid installing and configuring Keystone at 
all. 

Possible solution

What I’m thinking about is: would it be OK to just create a set of new
authentication plugins under the keystoneauth project that would do the same
as the existing ones but without getting a Keystone scoped token? That way we
could still take advantage of the existing keystoneauth plugin framework
without having to install and configure the Keystone service. I realize that
we’ll lose some capabilities that Keystone provides, but for many cases it
would be enough to authenticate on the client and then validate the token from
the HTTP headers against the OIDC server on the service side. One more thing
to do here is to fill in the tenant/project, but that could be extracted from
the token.
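
To make that concrete, here is a minimal sketch of what such a plugin could
look like. It assumes keystoneauth1's BaseAuthPlugin interface and a standard
OIDC password-grant token endpoint; the class name, endpoint and scope
handling are illustrative assumptions, not existing code:

# Rough sketch only: authenticate against the OIDC provider directly and use
# the access token as-is, without exchanging it for a Keystone token.
from keystoneauth1 import plugin


class DirectOIDCPassword(plugin.BaseAuthPlugin):

    def __init__(self, token_endpoint, client_id, client_secret,
                 username, password):
        self.token_endpoint = token_endpoint
        self.client_id = client_id
        self.client_secret = client_secret
        self.username = username
        self.password = password

    def get_token(self, session, **kwargs):
        # OAuth 2.0 "password" grant against the OIDC provider.
        resp = session.post(self.token_endpoint,
                            authenticated=False,
                            data={'grant_type': 'password',
                                  'client_id': self.client_id,
                                  'client_secret': self.client_secret,
                                  'username': self.username,
                                  'password': self.password,
                                  'scope': 'openid profile'})
        return resp.json()['access_token']

    def get_headers(self, session, **kwargs):
        return {'X-Auth-Token': self.get_token(session)}

The service side would then validate the token from the HTTP headers against
the OIDC provider's introspection or userinfo endpoint; that part is left out
here.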


Questions

1. Would this new plugin have a right to be part of the keystoneauth project
even though the Keystone service is not involved at all? The alternative is
just to teach Mistral to do authentication without using the keystone client
at all, but IMO the advantage of having such a plugin (a group of plugins,
actually) is that someone else could reuse it.
2. Is there any existing code that we could reuse to solve this problem? Maybe
what I’m describing is already solved by someone.
3. Can you please point to some examples of how to switch between
authentication plugins in both the client and the service for some OpenStack
services? I read the docs and looked at the code, but it’s still not clear to
me how best to implement support for different plugins on the client side; I’m
looking for best practices. The server side seems OK because we use
keystonemiddleware, which can dynamically load a plugin by name and use the
relevant config options just by specifying the “auth_plugin” property.
4. What may be some other caveats in the solution I described?


Thanks

[1] 
https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/identity/v3/oidc.py
 



Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Deprecation of fuel-mirror tool

2016-06-23 Thread Vladimir Kozhukalov
Dear colleagues.

I'd like to announce that fuel-mirror tool is not going to be a part of
Fuel any more. Its functionality is to build/clone rpm/deb repos and modify
Fuel releases repository lists (metadata).

Since Fuel 10.0 it is recommended to use other available tools for managing
local deb/rpm repositories.

Packetary is a good example [0]. Packetary is ideal if one needs to create
a partial mirror of a deb/rpm repository, i.e. mirror that contains not all
available packages but only a subset of packages. To create full mirror it
is better to use debmirror or rsync or any other tools that are available.

To modify a release's repository lists, one can use commands which are
available by default on the Fuel admin node since Newton.

# list of available releases
fuel2 release list
# list of repositories for a release
fuel2 release repos list 
# save list of repositories for a release in yaml format
fuel2 release repos list  -f yaml | tee repos.yaml
# modify list of repositories
vim repos.yaml
# update list of repositories for a release from yaml file
fuel2 release repos update  -f repos.yaml

They are provided by python-fuelclient [1] package and were introduced by
this [2] patch.


[0] https://wiki.openstack.org/wiki/Packetary
[1] https://github.com/openstack/python-fuelclient
[2] https://review.openstack.org/#/c/326435/


Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] pbr potential breaking change coming

2016-06-23 Thread Andreas Jaeger

On 06/23/2016 10:43 AM, Markus Zoeller wrote:

On 21.06.2016 15:01, Doug Hellmann wrote:

A while back pbr had a feature that let projects pass "warnerror"
through to Sphinx during documentation builds, causing any warnings in
that build to be treated as an error and fail the build. This lets us
avoid things like links to places that don't exist in the docs, bad but
renderable rst, typos in directive or role names, etc.

Somewhat more recently, but still a while ago, that feature "broke"
with a Sphinx release that was not API compatible. Sachi King has
fixed this issue within pbr, and so the next release of pbr will
fix the broken behavior and start correctly passing warnerror again.
That may result in doc builds breaking where they didn't before.

The short-term solution is to turn off warnerrors (look in your
setup.cfg), then fix the issues and turn it back on. Or you could
preemptively fix any existing warnings in your doc builds before the
release, but it's simple enough to turn off the feature if there isn't
time.

Josh, Sachi, & other Oslo folks, I think we should hold off on
releasing until next week to give folks time. Is that OK?

Doug

PS - Thanks, Sachi, I know that bug wasn't a ton of fun to fix!

[...]

Thanks for the heads up. I checked Nova, and every sphinx-build command
we use in our "tox.ini" file already uses the -W flag to treat warnings
as errors. I guess we're safe with that?



Infra calls the docs job using
tox -e venv -- python setup.py build_sphinx

And that's where the change comes in...

So, your docs environment is not used as is in the gate,

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] libvirt driver: who should create the libvirt.xml file?

2016-06-23 Thread Daniel P. Berrange
On Mon, Jun 20, 2016 at 05:47:57PM +0200, Markus Zoeller wrote:
> While working on the change series to implement the virtlogd feature I
> got feedback [1] to move code which creates parts of the libvirt.xml
> file from the "driver" module into the "guest" module. I'm a bit
> hesitant to do so as the responsibility of creating a valid libvirt.xml
> file is then spread across 3 modules:
> * driver.py
> * guest.py
> * designer.py
> I'm only looking for a guideline here (The "driver" module is humongous
> and I think it would be a good thing to have the "libvirt.xml" creation
> code outside of it). Thoughts?

The designer.py file was created as a place which would ultimately hold
all the XML generator logic.

Ultimately the "_get_guest_xml" (and everything it calls) from driver.py
would move into the designer.py class. Before we could do that though, we
needed to create the host.py + guest.py classes to isolate the libvirt
API logic.

Now that the guest.py conversion/move is mostly done, we should be able
to start moving the XML generation out of driver.py  and into designer.py

I would definitely *not* put XML generation code into guest.py

In terms of your immediate patch, I'd suggest just following current
practice and putting your new code in driver.py.  We'll move everything
over to designer.py at the same time, later on.
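
To illustrate the direction (a sketch only; the helper below is hypothetical
and not something that exists in designer.py today), the end state is that
driver.py asks a designer helper for ready-made config, e.g. for the
virtlogd-style serial console case:

# Hypothetical designer.py-style helper; name and signature are illustrative.
from nova.virt.libvirt import config as vconfig

def set_file_backed_serial_console(guest_cfg, log_path):
    # Build the device config here instead of inline in driver.py.
    console = vconfig.LibvirtConfigGuestSerial()
    console.type = "file"
    console.source_path = log_path
    guest_cfg.add_device(console)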

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [bilean] Bilean Team IRC Meeting

2016-06-23 Thread lvdongbing

Hello everyone,


The first IRC meeting of the Bilean team will be Thursday, June 23rd at
14:00 UTC


in the #openstack-meeting-3 channel, everybody is welcome to join.

The agenda can be found here:

https://wiki.openstack.org/wiki/Meetings/BileanAgenda

And you can figure out the time in your timezone via the following URL:

http://www.timeanddate.com/worldclock/fixedtime.html?hour=14=0=0


Regards

lvdongbing




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Tony Breeds
On Thu, Jun 23, 2016 at 10:08:22AM +0200, Sylvain Bauza wrote:
> 
> 
> On 23/06/2016 02:42, Tony Breeds wrote:
> > On Wed, Jun 22, 2016 at 12:13:21PM +0200, Victor Stinner wrote:
> > > On 22/06/2016 10:49, Thomas Goirand wrote:
> > > > Do you think it looks like possible to have Nova ported to Py3 during
> > > > the Newton cycle?
> > > It doesn't depend on me: I'm sending patches, and then I have to wait for
> > > reviews. The question is more how to accelerate reviews.
> > Clearly I'm far from authoritative, but given how close we are to R-14 which is
> > the Nova non-priority feature freeze[1] and the python3 port isn't listed as
> > a priority[2] I'd guess that this won't land in Newton.
> > 
> > [1] 
> > http://releases.openstack.org/newton/schedule.html#nova-non-priority-feature-freeze
> > [2] 
> > http://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html
> 
> Well, IIRC we discussed in the previous year on some of those blueprints
> (including the Py3 effort) that are not really features (rather refactoring
> items) and which shouldn't be hit by the non-priority feature freeze.
> That doesn't mean we could merge those anytime of course, but I don't think
> we would procedurally -2 them.

Sure, I didn't mean to imply a procedural -2; I meant to convey that at this
point in the cycle, review focus shifts to priority items.  So getting
attention for this effort will be hard.

We did discuss that this (py3 support) was a great background task but we also
noted that those tasks would have a cut-off point.  I *thought* that was the
Nova non-priority feature freeze but I can't find that written anywhere.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-23 Thread Daniel P. Berrange
On Wed, Jun 15, 2016 at 04:59:39PM -0700, Preston L. Bannister wrote:
> QEMU has the ability to directly connect to iSCSI volumes. Running the
> iSCSI connections through the nova-compute host *seems* somewhat
> inefficient.
> 
> There is a spec/blueprint and implementation that landed in Kilo:
> 
> https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html
> https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator
> 
> From looking at the OpenStack Nova sources ... I am not entirely clear on
> when this behavior is invoked (just for Ceph?), and how it might change in
> future.
> 
> Looking for a general sense where this is headed. (If anyone knows...)
> 
> If there is some problem with QEMU and directly attached iSCSI volumes,
> that would explain why this is not the default. Or is this simple inertia?
> 
> 
> I have a concrete concern. I work for a company (EMC) that offers backup
> products, and we now have backup for instances in OpenStack. To make this
> efficient, we need to collect changed-block information from instances.
> 
> 1)  We could put an intercept in the Linux kernel of the nova-compute host
> to track writes at the block layer. This has the merit of working for
> containers, and potentially bare-metal instance deployments. But is not
> guaranteed for instances, if the iSCSI volumes are directly attached to
> QEMU.
> 
> 2)  We could use the QEMU support for incremental backup (first bit landed
> in QEMU 2.4). This has the merit of working with any storage, by only for
> virtual machines under QEMU.
> 
> As our customers are (so far) only asking about virtual machine backup. I
> long ago settled on (2) as most promising.
> 
> What I cannot clearly determine is where (1) will fail. Will all iSCSI
> volumes connected to QEMU instances eventually become directly connected?

Our long term goal is that 100% of all network storage will be connected
to directly by QEMU. We already have the ability to partially do this with
iSCSI, but it is lacking support for multipath. As & when that gap is
addressed though, we'll stop using the host OS for any iSCSI stuff.

So if you require access to host iSCSI volumes, it'll work in the
short-to-medium term, but in the medium-to-long term we're not going to use
that, so plan accordingly.
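
For what it's worth, the difference is visible in the generated guest config:
a host-attached volume shows up as a local block device, while a
QEMU-attached one shows up as a network disk. A rough sketch using nova's
libvirt config objects (attribute names as I recall them; treat this as
illustrative rather than authoritative, and the IQN/IP values are examples):

# Illustrative only: the two shapes an iSCSI volume can take.
from nova.virt.libvirt import config as vconfig

# Host-attached: the node's iSCSI initiator logs in and exposes /dev/sdX,
# which is handed to QEMU as a plain block device.
host_attached = vconfig.LibvirtConfigGuestDisk()
host_attached.source_type = "block"
host_attached.source_path = "/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2010-10.org.example:vol1-lun-0"
host_attached.target_dev = "vdb"
host_attached.target_bus = "virtio"

# QEMU-attached: no host block device at all; QEMU's built-in initiator
# connects to the target itself.
qemu_attached = vconfig.LibvirtConfigGuestDisk()
qemu_attached.source_type = "network"
qemu_attached.source_protocol = "iscsi"
qemu_attached.source_name = "iqn.2010-10.org.example:vol1/0"
qemu_attached.source_hosts = ["192.0.2.10"]
qemu_attached.source_ports = ["3260"]
qemu_attached.target_dev = "vdb"
qemu_attached.target_bus = "virtio"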

> Xiao's unanswered query (below) presents another question. Is this a
> site-choice? Could I require my customers to configure their OpenStack
> clouds to always route iSCSI connections through the nova-compute host? (I
> am not a fan of this approach, but I have to ask.)

In the short term that'll work, but long term we're not intending to
support that once QEMU gains multi-path. There's no timeframe on when
that will happen though.



Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] pbr potential breaking change coming

2016-06-23 Thread Markus Zoeller
On 21.06.2016 15:01, Doug Hellmann wrote:
> A while back pbr had a feature that let projects pass "warnerror"
> through to Sphinx during documentation builds, causing any warnings in
> that build to be treated as an error and fail the build. This lets us
> avoid things like links to places that don't exist in the docs, bad but
> renderable rst, typos in directive or role names, etc.
> 
> Somewhat more recently, but still a while ago, that feature "broke"
> with a Sphinx release that was not API compatible. Sachi King has
> fixed this issue within pbr, and so the next release of pbr will
> fix the broken behavior and start correctly passing warnerror again.
> That may result in doc builds breaking where they didn't before.
> 
> The short-term solution is to turn off warnerrors (look in your
> setup.cfg), then fix the issues and turn it back on. Or you could
> preemptively fix any existing warnings in your doc builds before the
> release, but it's simple enough to turn off the feature if there isn't
> time.
> 
> Josh, Sachi, & other Oslo folks, I think we should hold off on
> releasing until next week to give folks time. Is that OK?
> 
> Doug
> 
> PS - Thanks, Sachi, I know that bug wasn't a ton of fun to fix!
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Thanks for the heads up. I checked Nova, and every sphinx-build command
we use in our "tox.ini" file already uses the -W flag to treat warnings
as errors. I guess we're safe with that?

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][general] Multiple implementation of /usr/bin/foo stored at the same location, leading to conflicts

2016-06-23 Thread Thomas Goirand
On 06/22/2016 06:36 PM, Nate Johnston wrote:
> On Wed, Jun 22, 2016 at 05:55:48PM +0200, Thomas Goirand wrote:
>  
>> Does neutron-fwaas also use "neutron" as launchpad package for bug report?
>  
> Yes we do, just add the 'fwaas' tag to the bug.  Thanks!
> 
> --N.

Here's the neutron-fwaas bug then:
https://bugs.launchpad.net/neutron/+bug/1595440

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] RE: [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Amrith Kumar
Victor,

To echo what Sylvain is saying, I think Trove has talked about python 3 for a 
while now and while it isn’t listed as a blueprint for Newton, I think we’re 
far enough along that we should see if we can get this completed for the Newton 
cycle.

Thanks for all your work on this, and also to Abhishek Kekane and Wang Bo who 
have contributed patches to Trove related to python 3.

I’ll post this subject for discussion at the next Trove weekly meeting.

-amrith

From: Sylvain Bauza [mailto:sba...@redhat.com]
Sent: Thursday, June 23, 2016 4:08 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [all] Status of the OpenStack port to Python 3


On 23/06/2016 02:42, Tony Breeds wrote:

On Wed, Jun 22, 2016 at 12:13:21PM +0200, Victor Stinner wrote:

On 22/06/2016 10:49, Thomas Goirand wrote:

Do you think it looks like possible to have Nova ported to Py3 during
the Newton cycle?

It doesn't depend on me: I'm sending patches, and then I have to wait for
reviews. The question is more how to accelerate reviews.

Clearly I'm far from authoritative, but given how close we are to R-14 which is
the Nova non-priority feature freeze[1] and the python3 port isn't listed as
a priority[2] I'd guess that this won't land in Newton.

[1] http://releases.openstack.org/newton/schedule.html#nova-non-priority-feature-freeze
[2] http://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html

Well, IIRC we discussed in the previous year on some of those blueprints
(including the Py3 effort) that are not really features (rather refactoring
items) and which shouldn't be hit by the non-priority feature freeze.
That doesn't mean we could merge those anytime of course, but I don't think we
would procedurally -2 them.

My .02€
-Sylvain

Yours Tony.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-23 Thread Sylvain Bauza



On 23/06/2016 02:42, Tony Breeds wrote:

On Wed, Jun 22, 2016 at 12:13:21PM +0200, Victor Stinner wrote:

On 22/06/2016 10:49, Thomas Goirand wrote:

Do you think it looks like possible to have Nova ported to Py3 during
the Newton cycle?

It doesn't depend on me: I'm sending patches, and then I have to wait for
reviews. The question is more how to accelerate reviews.

Clearly I'm far from authoritative, but given how close we are to R-14 which is
the Nova non-priority feature freeze[1] and the python3 port isn't listed as
a priority[2] I'd guess that this won't land in Newton.

[1] 
http://releases.openstack.org/newton/schedule.html#nova-non-priority-feature-freeze
[2] 
http://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html


Well, IIRC we discussed in the previous year on some of those blueprints 
(including the Py3 effort) that are not really features (rather 
refactoring items) and which shouldn't be hit by the non-priority 
feature freeze.
That doesn't mean we could merge those anytime of course, but I don't 
think we would procedurally -2 them.


My .02€
-Sylvain


Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] L3HA problem

2016-06-23 Thread Anna Kamyshnikova
Version 1.2.13 is reliable.

On Wed, Jun 22, 2016 at 8:40 PM, Assaf Muller  wrote:

> On Wed, Jun 22, 2016 at 12:02 PM, fabrice grelaud
>  wrote:
> >
> > On 22 June 2016 at 17:35, fabrice grelaud
> > wrote:
> >
> >
> > On 22 June 2016 at 15:45, Assaf Muller wrote:
> >
> > On Wed, Jun 22, 2016 at 9:24 AM, fabrice grelaud
> >  wrote:
> >
> > Hi,
> >
> > we deployed our openstack infrastructure with your « exciting » project
> > openstack-ansible (mitaka 13.1.2) but we have some problems with L3HA
> after
> > create router.
> >
> > Our infra (closer to the doc):
> > 3 controllers nodes (with bond0 (br-mgmt, br-storage), bond1 (br-vxlan,
> > br-vlan))
> > 2 compute nodes (same for network)
> >
> > We create an external network (vlan type), an internal network (vxlan
> type)
> > and a router connected to both networks.
> > And when we launch an instance (cirros), we can’t receive an ip on the
> vm.
> >
> > We have:
> >
> > root@p-osinfra03-utility-container-783041da:~# neutron
> > l3-agent-list-hosting-router router-bim
> >
> +--+---++---+--+
> > | id   | host
> > | admin_state_up | alive | ha_state |
> >
> +--+---++---+--+
> > | 3c7918e5-3ad6-4f82-a81b-700790e3c016 |
> > p-osinfra01-neutron-agents-container-f1ab9c14 | True   | :-)   |
> > active   |
> > | f2bf385a-f210-4dbc-8d7d-4b7b845c09b0 |
> > p-osinfra02-neutron-agents-container-48142ffe | True   | :-)   |
> > active   |
> > | 55350fac-16aa-488e-91fd-a7db38179c62 |
> > p-osinfra03-neutron-agents-container-2f6557f0 | True   | :-)   |
> > active   |
> >
> +--+---++---+—+
> >
> > I know, i got a problem now because i should have :-) active, :-)
> standby,
> > :-) standby… Snif...
> >
> > root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns
> > qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6
> > qdhcp-0ba266fb-15c4-4566-ae88-92d4c8fd2036
> >
> > root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns exec
> > qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6 ip a sh
> > 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
> > default
> >link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >inet 127.0.0.1/8 scope host lo
> >   valid_lft forever preferred_lft forever
> >inet6 ::1/128 scope host
> >   valid_lft forever preferred_lft forever
> > 2: ha-4a5f0287-91@if6:  mtu 1450 qdisc
> > pfifo_fast state UP group default qlen 1000
> >link/ether fa:16:3e:c2:67:a9 brd ff:ff:ff:ff:ff:ff
> >inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4a5f0287-91
> >   valid_lft forever preferred_lft forever
> >inet 169.254.0.1/24 scope global ha-4a5f0287-91
> >   valid_lft forever preferred_lft forever
> >inet6 fe80::f816:3eff:fec2:67a9/64 scope link
> >   valid_lft forever preferred_lft forever
> > 3: qr-44804d69-88@if9:  mtu 1450 qdisc
> > pfifo_fast state UP group default qlen 1000
> >link/ether fa:16:3e:a5:8c:f2 brd ff:ff:ff:ff:ff:ff
> >inet 192.168.100.254/24 scope global qr-44804d69-88
> >   valid_lft forever preferred_lft forever
> >inet6 fe80::f816:3eff:fea5:8cf2/64 scope link
> >   valid_lft forever preferred_lft forever
> > 4: qg-c5c7378e-1d@if12:  mtu 1500 qdisc
> > pfifo_fast state UP group default qlen 1000
> >link/ether fa:16:3e:b6:4c:97 brd ff:ff:ff:ff:ff:ff
> >inet 147.210.240.11/23 scope global qg-c5c7378e-1d
> >   valid_lft forever preferred_lft forever
> >inet 147.210.240.12/32 scope global qg-c5c7378e-1d
> >   valid_lft forever preferred_lft forever
> >inet6 fe80::f816:3eff:feb6:4c97/64 scope link
> >   valid_lft forever preferred_lft forever
> >
> > Same result on infra02 and infra03, qr and qg interfaces have the same
> ip,
> > and ha interfaces the address 169.254.0.1.
> >
> > If we stop 2 neutron agent containers (p-osinfra02, p-osinfra03) and we
> > restart the first (p-osinfra01), we can reboot the instance and we got an
> > ip, a floating ip and we can access by ssh from internet to the vm.
> (Note:
> > after few time, we loss our connectivity too).
> >
> > But if we restart the two containers, we got a ha_state to « standby »
> until
> > the three become « active » and finally we have the problem again.
> >
> > The three routers on infra 01/02/03 are seen as master.
> >
> > If we ping from our instance to the router (internal network
> 192.168.100.4
> > to 192.168.100.254) we can see some ARP Request
> > ARP,