Re: [openstack-dev] [Security] Introducing Killick PKI

2015-10-12 Thread Clark, Robert Graham
> -Original Message-
> From: Adam Young [mailto:ayo...@redhat.com]
> Sent: 12 October 2015 02:24
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Security] Introducing Killick PKI
> 
> On 10/11/2015 06:50 PM, Robert Collins wrote:
> > On 9 October 2015 at 06:47, Adam Young  wrote:
> >> On 10/08/2015 12:50 PM, Chivers, Doug wrote:
> >>> Hi All,
> >>>
> >>> At a previous OpenStack Security Project IRC meeting, we briefly discussed
> >>> a lightweight traditional PKI using the Anchor validation functionality, 
> >>> for
> >>> use in internal deployments, as an alternative to things like MS ADCS. To
> >>> take this further, I have drafted a spec, which is in the security-specs
> >>> repo, and would appreciate feedback:
> >>>
> >>> https://review.openstack.org/#/c/231955/
> >>>
> >>> Regards
> >>>
> >>> Doug
> >> How is this better than Dogtag/FreeIPA?
> > DogTag is Tomcat, yeah? That's not exactly trivial to deploy - the spec
> > specifically calls out the desire to have a low-admin-overhead
> > solution. Perhaps DogTag/FreeIPA are that in the context of a RHEL
> > environment? I see that the dogtag-pki packages in Debian are up to
> > date - perhaps more discussion w/ops is needed?
> 
> Tomcat is trivial to deploy; it is in all the major distributions
> already. Dogtag is slightly more complex because it does things right
> WRT security hardening the Tomcat instance.  But the process is
> automated as part of the Dogtag code base.
> 
> A better bet is using Dogtag as installed with FreeIPA. It is supported
> in both Debian based and RPM based distributions.  The dev team is
> primarily Red Hat, with an Ubuntu packager dealing with the headaches of
> getting it installed there.  There is someone working on SuSE already as
> well.  FreeIPA gets us Dogtag, as well as Kerberos for Symmetric Key.
> 
> We have a demo of Using Kerberos to authenticate and encrypt the
> messaging backend (AMQP 1.0 Driver with Proton) and also for auth on all
> of the Web services.  I'll be one of the people demoing it at the Red
> Hat booth at Tokyo if you want to see it and ask questions directly.
> 
> For Self Signed certificates, we can use certmonger and the self-signed
> backend; we should be using Certmonger as the cert management client no
> matter what.  There was a Certmonger- Barbican plugin underway, but I do
> not know the status of it.
> 
> 
> Let's not reinvent this; the security and cryptography focused people on
> OpenStack are already spread thin. Let's focus on reusing pre-existing
> solutions.
> 
> 
> 

There's very little out there in terms of easy-to-use, easy-to-deploy and scalable PKI 
systems. ADCS is very tightly coupled to Windows, EJBCA is clunky, pyCA isn't 
supported anymore afaik, and my personal experience with Dogtag (YMMV of course) 
is that it was difficult to set up and maintain. Now, that was some time ago, 
when the available documentation didn't match the shipping version and 
Ubuntu support wasn't a thing, so I'm sure it's moved on and is possibly 
great now - but that's no reason not to have a crack at making something better 
(for some personal interpretation of "better").

Reinvention can be good; after all, if it weren't, OpenStack probably wouldn't be 
a thing.

-Rob



Re: [openstack-dev] [Zaqar] [Horizon] UI for pools and flavors

2015-10-12 Thread Flavio Percoco

erm, now it has the subject change!

On 12/10/15 16:25 +0900, Flavio Percoco wrote:

On 10/10/15 21:07 +0530, Shifali Agrawal wrote:

Greetings!

I have prepared mock-ups[1],[2] for building the Zaqar UI, at present focusing only on
bringing pools and flavors onto the dashboard. I am sharing two mock-ups for the same
purpose - allowing all operations (CRUD) related to them.

It would be great to know the views of Zaqar developers/users on whether the design is
satisfactory to them or whether they want some amendments. Also let me know if any
information about pools/flavors is missing and needs to be added.

The first mockup[1] shows pools information by default and will show flavors
if the user clicks on the flavors button in the second menu bar at the top.

[1]: http://tinyurl.com/o2b9q6r
[2]: http://tinyurl.com/pqwgwhl



wow

Thanks a lot for working on this! I think I'm good with the above but
I'll take a better look.

I'm adding `[horizon]` to the subject to get more folks' attention.

Flavio

--
@flaper87
Flavio Percoco








--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Neutron][db]Neutron db revision fails

2015-10-12 Thread Zhi Chang
Ok. Everything looks right when I run the command "neutron-db-manage --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini 
revision -m "Just For Test" --autogenerate"


Output:
  Running revision for neutron ...
No handlers could be found for logger "neutron.quota"
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  OK



Where is the new migration script?


Thx 
Zhi Chang
 
 
-- Original --
From:  "Anna Kamyshnikova";
Date:  Mon, Oct 12, 2015 03:19 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [Neutron][db]Neutron db revision fails

 
You should use: neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For 
Test" --autogenerate. Make changes in the db models and then run this command - 
the migration will be generated automatically.

On Mon, Oct 12, 2015 at 7:02 AM, Zhi Chang  wrote:
Thanks for your reply. What should I do if I want to create a new migration 
script?


Thanks
Zhi Chang
 
 
-- Original --
From:  "Vikram Choudhary";
Date:  Mon, Oct 12, 2015 12:22 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [Neutron][db]Neutron db revision fails

 

Hi Zhi,
 
We already have a defect logged for this issue.
 
https://bugs.launchpad.net/neutron/+bug/1503342
 
Thanks
 Vikram
 On Oct 12, 2015 8:10 AM, "Zhi Chang"  wrote:
Hi, everyone.
I installed a devstack from the latest code. But there is an error when I 
create a db migration script. My migration shell command is:
"neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For Test""
The error shows:
  Running revision for neutron ...
  FAILED: Multiple heads are present; please specify the head revision on 
which the new revision should be based, or perform a merge.


Is my method wrong? Could someone help me?


Thx
Zhi Chang


 

 




 





-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


[openstack-dev] [Openstack] [openstack] [murano] Deployment of Environment gives [yaql.exceptions.YaqlExecutionException]: Unable to run values error

2015-10-12 Thread Sumanth Sathyanarayana
Hi,

I am trying to add a simple app like MySql into a new environment from the
murano dashboard; when I try to deploy the environment I get the error:
"[yaql.exceptions.YaqlExecutionException]: Unable to run values"

I saw that this error/bug was already reported some time back and made sure
that my code has the changes mentioned in:
https://review.openstack.org/#/c/119072/1/meta/io.murano/Classes/resources/Instance.yaml
- Bug#1364446
&
https://bugs.launchpad.net/murano/+bug/1359225

But I am still getting the error even after restarting the murano-engine
and murano-api services. If anyone has any suggestion on how to deploy a
new environment, it would be very helpful.

Thanks & Best Regards
Sumanth


Re: [openstack-dev] [TripleO] Auto-abandon bot

2015-10-12 Thread marios
On 10/10/15 01:10, Ben Nemec wrote:
> Hi OoOers,
> 
> As discussed in the meeting a week or two ago, we would like to bring
> back the auto-abandon functionality for old, unloved gerrit reviews.
> I've got a first implementation of a tool to do that:
> https://github.com/cybertron/tripleo-auto-abandon
> 
> It currently follows these rules for determining what would be abandoned:
> 
> Never abandoned:
> -WIP patches are never abandoned
> -Approved patches are never abandoned
> -Patches with no feedback are never abandoned
> -Patches with negative feedback, followed by any sort of non-negative
> comment are never abandoned (this is to allow committers to respond to
> reviewer comments)
> -Patches that get restored after a first abandonment are not abandoned
> again, unless a new patch set is pushed and also receives negative feedback.
> 
> Candidates for abandonment:
> -Patches with negative feedback that has not been responded to in over a
> month.
> -Patches that are failing CI for over a month on the same patch set
> (regardless of any followup comments - the intent is that patches
> expected to fail CI should be marked WIP).
> 
> My intent with this can be summed up as "when in doubt, leave it open".
>  I'm open to discussion on any of the points above though.  I expect
> that at least the current message for abandonment needs tweaking before
> this gets run for real.
> 
> I'm a little torn on whether this should be run under my account or a
> dedicated bot account.  On the one hand, I don't really want to end up
> subscribed to every dead change, but on the other this is only supposed
> to run on changes that are unlikely to be resurrected, so that should
> limit the review spam.
> 
> Anyway, please take a look and let me know what you think.  Thanks.

Hi Ben, I had a quick poke at the repo (pull request #1 \o/) - thanks very
much for getting this started. Regardless of where/how this ends up
running, I'd say we don't need to (and shouldn't) start another discussion
on the criteria: we've spoken about it a couple of times recently in
tripleo meetings without much pushback (plus let's see what reaction there
is to this thread). [1][2] are the earlier discussions I could quickly
find; there are probably more :) - imo/fwiw what you have above sounds
sane enough, and tbh in other projects this just gets done manually, with
a pretty wide net and ultimately not much drama.

thanks, marios

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044869.html
[2]http://lists.openstack.org/pipermail/openstack-dev/2014-September/046199.html
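
As a rough sketch of how the criteria Ben lists above could be expressed as
a single predicate (this is not the actual tool - the review-info layout,
field names and the one-month threshold are hypothetical, and the
restore-after-abandon rule is left out for brevity):

import datetime

STALE = datetime.timedelta(days=31)

def is_abandon_candidate(review, now):
    # Never abandon WIP, approved, or comment-free reviews.
    if review['wip'] or review['approved'] or not review['comments']:
        return False
    # Only feedback on the latest patch set counts.
    comments = [c for c in review['comments']
                if c['patch_set'] == review['latest_patch_set']]
    if not comments:
        return False
    # Failing CI for over a month on this patch set, regardless of follow-ups.
    ci = [c for c in comments if c['is_ci']]
    if ci and ci[-1]['vote'] < 0 and (now - ci[-1]['timestamp']) > STALE:
        return True
    # Negative feedback with no non-negative response for over a month.
    last = comments[-1]
    return last['vote'] < 0 and (now - last['timestamp']) > STALE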


> 
> -Ben
> 
> For the curious, this is the list of patches that would currently be
> abandoned by the tool:
> 
> Abandoning https://review.openstack.org/192521 - Add centos7 test
> Abandoning https://review.openstack.org/168002 - Allow dib to be lauched
> from venv
> Abandoning https://review.openstack.org/180807 - Warn when silently
> ignoring executable files
> Abandoning https://review.openstack.org/91376 - RabbitMQ: VHost support
> Abandoning https://review.openstack.org/112870 - Adding configuration
> options for stunnel.
> Abandoning https://review.openstack.org/217511 - Fix "pkg-map failed"
> issue building IPA ramdisk
> Abandoning https://review.openstack.org/176060 - Introduce Overcloud Log
> Aggregation
> Abandoning https://review.openstack.org/141380 - Add --force-yes option
> for install-packages
> Abandoning https://review.openstack.org/149433 - Double quote to prevent
> globbing and word splitting in os-db-create
> Abandoning https://review.openstack.org/204639 - Perform a booting test
> for our images
> Abandoning https://review.openstack.org/102304 - Configures keystone
> with apache
> Abandoning https://review.openstack.org/214771 - Ramdisk should consider
> the size unit when inspecting the amount of RAM
> Abandoning https://review.openstack.org/87223 - Install the "classic"
> icinga interface
> Abandoning https://review.openstack.org/89744 - configure keystone with
> apache
> Abandoning https://review.openstack.org/176057 - Introduce Elements for
> Log Aggregation
> Abandoning https://review.openstack.org/153747 - Fail job if SELinux
> denials are found
> Abandoning https://review.openstack.org/179229 - Document how to use
> network isolation/static IPs
> Abandoning https://review.openstack.org/109651 - Add explicit
> configuraton parameters for DB pool size
> Abandoning https://review.openstack.org/189026 - shorter sleeps if
> metadata changes are detected
> Abandoning https://review.openstack.org/139627 - Nothing to see here
> Abandoning https://review.openstack.org/117887 - Support Debian distro
> for haproxy iptables
> Abandoning https://review.openstack.org/113823 - Allow single node
> mariadb clusters to restart
> Abandoning https://review.openstack.org/110906 - Install pkg-config to
> use ceilometer-agent-compute
> Abandoning https://review.openstack.org/86580 - Add support for
> specifying swift ring directory range
> Abandoning 

Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-12 Thread Germy Lure
Hi Kevin,

Thank you for your response. Periodic data checking is a popular and
effective method to sync info. But there is no such feature in Neutron,
right? Will the community merge it soon? And can we consider it together
with the agent-style mechanism?

A vendor-specific extension, or a periodic task coded privately by a vendor, is
not a good solution, I think. It means that Neutron-Server could not
integrate with multiple vendors' controllers, and even the controllers of
those vendors that introduced this extension or task could not integrate
with a standard community Neutron-Server.
That is just the tip of the iceberg. Many other problems result from this,
such as bug fixing, upgrades, patching, etc.
But wait, is it a vendor-specific feature? Of course not. All software
systems need data checking.

Many thanks.
Germy


On Sun, Oct 11, 2015 at 4:28 PM, Kevin Benton  wrote:

> You can have a periodic task that asks your backend if it needs sync info.
> Another option is to define a vendor-specific extension that makes it easy
> to retrieve all info in one call via the HTTP API.
>
> On Sat, Oct 10, 2015 at 2:24 AM, Germy Lure  wrote:
>
>> Hi all,
>>
>> After restarting, Agents load data from Neutron via RPC. What about 3-rd
>> controller? They only can re-gather data via NBI. Right?
>>
>> Is it possible to provide some mechanism for those controllers and agents
>> to sync data? or something else I missed?
>>
>> Thanks
>> Germy
>>
>>
>>
>>
>
>
> --
> Kevin Benton
>
>
>


[openstack-dev] [nova-scheduler] Scheduler sub-group meeting - Agenda 10/12

2015-10-12 Thread Dugger, Donald D
Meeting on #openstack-meeting-alt at 1400 UTC (8:00AM MDT)



1) Mitaka planning

2) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [Zaqar][Horizon] UI for pools and flavors

2015-10-12 Thread Matthias Runge

On 12/10/15 09:25, Flavio Percoco wrote:
> On 10/10/15 21:07 +0530, Shifali Agrawal wrote:
>> Greetings!
>> 
>> I have prepared mock-ups[1],[2] to build Zaqar UI, at present
>> focusing only to bring pools and flavors on dashboard. Sharing
>> two mock-ups for same purpose - allowing all operations related
>> to them(CRUD).
>> 
>> It will be great to know the views of zaqar developers/users if
>> the design is satisfactory to them or they want some amendments.
>> Also let me know if any information of pools/flavors is missing
>> and need to be added.
>> 
>> In first mockup[1] showing pools information by default and will
>> show flavors if user click on flavors button present on top
>> second menu bar.
>> 
>> [1]: http://tinyurl.com/o2b9q6r [2]: http://tinyurl.com/pqwgwhl

> I'm adding `[horizon]` to the subject to get more folks'
> attention.
> 
Thank you for bringing this up!

Maybe it's necessary to give a brief bit of context for those mockups?
What is a pool here? Why is a pool weighted (and how), and aren't a
pool and a pool group somehow the same thing?
If you have pools, what is the intention of flavors then?

Just to ask the obvious questions here

Matthias




[openstack-dev] [Neutron][Ironic] Testing Ironic multi-tenancy feature

2015-10-12 Thread Pavlo Shchelokovskyy
Hi all,

we would like to start preliminary testing of the Ironic multi-tenant network
setup which is supported by Neutron in Liberty according to [1]. In the
Neutron design, integration with network equipment is done via ML2
plugins. We are looking for plugins and network equipment that can work
with such an Ironic multi-tenant setup. Could the community recommend a pair of
hardware switch / corresponding Neutron plugin that already supports this
functionality?

[1]
https://blueprints.launchpad.net/neutron/+spec/neutron-ironic-integration

Best regards,
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com


Re: [openstack-dev] [Zaqar] UI for pools and flavors

2015-10-12 Thread Flavio Percoco

On 10/10/15 21:07 +0530, Shifali Agrawal wrote:

Greetings!

I have prepared mock-ups[1],[2] for building the Zaqar UI, at present focusing only on
bringing pools and flavors onto the dashboard. I am sharing two mock-ups for the same
purpose - allowing all operations (CRUD) related to them.

It would be great to know the views of Zaqar developers/users on whether the design is
satisfactory to them or whether they want some amendments. Also let me know if any
information about pools/flavors is missing and needs to be added.

The first mockup[1] shows pools information by default and will show flavors
if the user clicks on the flavors button in the second menu bar at the top.

[1]: http://tinyurl.com/o2b9q6r
[2]: http://tinyurl.com/pqwgwhl



wow

Thanks a lot for working on this! I think I'm good with the above but
I'll take a better look.

I'm adding `[horizon]` to the subject to get more folks' attention.

Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [fuel] fuel-python lead candidacy

2015-10-12 Thread Igor Kalnitsky
Hey everyone,

I'd like to announce my candidacy for fuel-python component lead position.

I've been working on the Nailgun project for a year and a half now, as one
of the regular contributors. Last fall I became a core reviewer, and since
then I've been doing my best to help others land their patches and to
keep the quality of our code base high.

Last release cycle I worked mostly on the fuel plugin framework, and with
my fellow companions we managed to release a new version with a
bunch of new features [1]. I also have a deep understanding of our
master node upgrade architecture and of how the fuel components interact. I
believe that would help me make the right decisions and improve our
project's quality and flexibility.

Some of my goals for the following release cycle are to fix our old
tech debt and improve performance at scale. That includes:

* Removing the non-caching query and using SQLAlchemy's default one.
* Reducing the number of SQL queries for the most common operations, such
as getting a list of nodes (it takes quite some time at scale) and
running deployment (serialization takes ~10 minutes for 200 nodes).

Also, it would be great to get rid of duplicated logic that was
introduced in Network Templates [2], since it's painful to support
both network branches.

Thanks,
Igor

[1] https://wiki.openstack.org/wiki/Fuel/Plugins#7.0_features
[2] 
http://specs.fuel-infra.org/fuel-specs-master/specs/7.0/networking-templates.html



[openstack-dev] network interface order

2015-10-12 Thread Yaron Illouz
Hi 

 

I have a VM with 3 network interfaces. I create it through a heat template.

I want the interfaces to be in a specific order in the VM (eth0 a_net,
eth1 b_net, eth2 c_net). How can I do this? I am using the Juno version.

With the following configuration I get: eth0 a_net, eth2 b_net, eth1 c_net

 

 

  vMyNode:
    type: OS::Nova::Server
    properties:
      flavor: m1.flavor
      image: vMyNode
      key_name: HPG9KeyPair
      name: Appliance
      networks:
        - port: {get_resource: a_net_port}
        - port: {get_resource: b_net_port}
        - port: {get_resource: c_net_port}



[openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team

2015-10-12 Thread Sergey Lukjanov
Hi folks,

I'd like to propose Vitaly Gridnev as a member of the Sahara core reviewer
team.

Vitaly has been contributing to Sahara for a long time and is doing a great job
reviewing and improving Sahara. Here are the statistics for reviews
[0][1][2] and commits [3].

Existing Sahara core reviewers, please vote +1/-1 for the addition of
Vitaly to the core reviewer team.

Thanks.

[0]
https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
[1] http://stackalytics.com/report/contribution/sahara-group/180
[2] http://stackalytics.com/?metric=marks_id=vgridnev
[3]
https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-12 Thread Cory Benfield

> On 12 Oct 2015, at 01:16, Robert Collins  wrote:
> To sum up the thread, it sounds to me like a viable way-forward is:
> 
> - get distros to fixup their requests Python dependencies (and
> hopefully they can update that in stable releases).
> - fix the existing known bugs in pip where such accurate dependencies
> are violated by some operations.

Agreed.

And we’re taking this pretty seriously at requests: we got in touch with our 
downstream unbundlers at Debian and Fedora asking them to make sure they 
populate setup.py correctly. Fedora has created updates for F21, F22, and F23:

- https://bodhi.fedoraproject.org/updates/FEDORA-2015-20de3774f4
- https://bodhi.fedoraproject.org/updates/FEDORA-2015-1f580ccfa4
- https://bodhi.fedoraproject.org/updates/FEDORA-2015-d7c710a812
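
For the curious, "populate setup.py correctly" here boils down to the
unbundled downstream package declaring the dependencies it no longer
vendors - roughly along these lines (a hypothetical sketch, not the actual
Fedora/Debian patch; the version number is illustrative only):

# setup.py of a downstream, unbundled python-requests build -- hypothetical
# sketch, not the actual Fedora/Debian change.
from setuptools import setup

setup(
    name='requests',
    version='2.7.0',
    packages=['requests'],
    # Once urllib3 and chardet are unbundled, they have to appear in the
    # dist metadata so pip and friends see accurate dependencies.
    install_requires=[
        'urllib3',
        'chardet',
    ],
)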

We’ve also heard positive noises from our Debian packager, though I don’t yet 
have a link to any change there.

If Ubuntu has a separate downstream packager from Debian, we don’t have a 
relationship with them, so it’s harder for us to effect change there.

This should mean that we’re only missing the pip fix.

Cory




Re: [openstack-dev] Different OpenStack components

2015-10-12 Thread Thierry Carrez
Jeremy Stanley wrote:
> On 2015-10-09 23:13:48 + (+), Fox, Kevin M wrote:
>> On 2015-10-09 22:20:43 + (+), Amrith Kumar wrote:
>> [...]
>>> A google search produced this as result #2.
>>>
>>> http://governance.openstack.org/reference/projects/index.html
>>>
>>> Looks pretty complete to me.
>>
>> The official list is
>> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
> 
> They're both official. The former is automatically rendered and
> published from the latter at every update by a CI job.

Also worth noting that the Foundation is working on a new version of the
"software" section of the website for the Liberty release, which should
make it more convenient to discover all the projects under the OpenStack
umbrella.

Regards,

-- 
Thierry Carrez (ttx)



[openstack-dev] [cinder] gates broken by WebOb 1.5 release

2015-10-12 Thread Victor Stinner

Hi,

WebOb 1.5 was released yesterday. test_misc of Cinder started failing 
with this release. I wrote this simple fix which should be enough to 
repair it:


https://review.openstack.org/233528
"Fix test_misc for WebOb 1.5"

 class ConvertedException(webob.exc.WSGIHTTPException):
-    def __init__(self, code=0, title="", explanation=""):
+    def __init__(self, code=500, title="", explanation=""):
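
For context: WSGIHTTPException derives its HTTP status from self.code, and
WebOb 1.5 appears to be stricter about what counts as a valid status, so a
default of 0 no longer works. A minimal sketch of the class shape after the
fix (abbreviated, not the full cinder code):

import webob.exc

# Abbreviated sketch of the fixed class, not the full cinder code; the point
# is simply that `code` now defaults to a real HTTP status (500) instead of 0.
class ConvertedException(webob.exc.WSGIHTTPException):
    def __init__(self, code=500, title="", explanation=""):
        self.code = code
        self.title = title
        self.explanation = explanation
        super(ConvertedException, self).__init__()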

Victor



[openstack-dev] ERROR : openstack Forbidden (HTTP 403)

2015-10-12 Thread Dhvanan Shah
Hi,

I am getting this error while installing OpenStack on CentOS.
ERROR : Error appeared during Puppet run: 10.16.37.221_keystone.pp
Error: Could not prefetch keystone_service provider 'openstack': Execution
of '/usr/bin/openstack service list --quiet --format csv --long' returned
1: ERROR: openstack Forbidden (HTTP 403)

I've checked the permissions of the executable files and they are not
the problem. So I'm not sure why I'm forbidden from executing this. Could
use some help!

-- 
Dhvanan Shah


Re: [openstack-dev] [Neutron][db]Neutron db revision fails

2015-10-12 Thread Anna Kamyshnikova
Have you made changes in the db models? The result should be:

akamyshnikova@akamyshnikova:/opt/stack/neutron$
.tox/py27/bin/neutron-db-manage --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini revision -m 'test'
--autogenerate
  Running revision for neutron ...
No handlers could be found for logger "neutron.quota"
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate.compare] Detected added column 'flavors.test'
  Generating
/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/mitaka/expand/7a277e40970_test.py
... done
  OK

If this does not help, make sure that all migrations are applied, and try to
recreate the database: drop it, then run upgrade head and revision -m 'test'
--autogenerate one more time.
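
For reference, the "Detected added column 'flavors.test'" line above comes
from a change like the following in the db models - a throwaway, hypothetical
edit purely so that autogenerate has a schema difference to detect (the model
shown is heavily simplified, not the real Neutron one):

# Hypothetical, throwaway edit to a Neutron db model (heavily simplified --
# not the real model definition) so that autogenerate detects a difference.
import sqlalchemy as sa

from neutron.db import model_base


class Flavor(model_base.BASEV2):
    __tablename__ = 'flavors'

    id = sa.Column(sa.String(36), primary_key=True)
    name = sa.Column(sa.String(255))
    # New column: shows up as "Detected added column 'flavors.test'" when
    # running `neutron-db-manage ... revision -m 'test' --autogenerate`.
    test = sa.Column(sa.String(255))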

On Mon, Oct 12, 2015 at 10:32 AM, Zhi Chang 
wrote:

> Ok. Everything looks right when I run command "neutron-db-manage
> --config-file /etc/neutron/neutron.conf --config-file
> /etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For Test"
> --autogenerate"
>
> Output:
>   Running revision for neutron ...
> No handlers could be found for logger "neutron.quota"
> INFO  [alembic.runtime.migration] Context impl MySQLImpl.
> INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
>   OK
>
> Where is the new migration script?
>
> Thx
> Zhi Chang
>
>
> -- Original --
> *From: * "Anna Kamyshnikova";
> *Date: * Mon, Oct 12, 2015 03:19 PM
> *To: * "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] [Neutron][db]Neutron db revision fails
>
> You should use neutron-db-manage --config-file /etc/neutron/neutron.conf
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For
> Test"" --autogenerate. Make changes in db models and then run this command
> - migration will be generated automatically.
>
> On Mon, Oct 12, 2015 at 7:02 AM, Zhi Chang 
> wrote:
>
>> Thanks for your reply. What should I do if I want to create a new
>> migration script?
>>
>> Thanks
>> Zhi Chang
>>
>>
>> -- Original --
>> *From: * "Vikram Choudhary";
>> *Date: * Mon, Oct 12, 2015 12:22 PM
>> *To: * "OpenStack Development Mailing List (not for usage questions)"<
>> openstack-dev@lists.openstack.org>;
>> *Subject: * Re: [openstack-dev] [Neutron][db]Neutron db revision fails
>>
>>
>> Hi Zhi,
>>
>> We already have a defect logged for this issue.
>>
>> https://bugs.launchpad.net/neutron/+bug/1503342
>>
>> Thanks
>> Vikram
>> On Oct 12, 2015 8:10 AM, "Zhi Chang"  wrote:
>>
>>> Hi, everyone.
>>> I install a devstack from the latest code. But there is an error
>>> when I create a db migration script. My migration shell is:
>>> "neutron-db-manage --config-file /etc/neutron/neutron.conf
>>> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For
>>> Test""
>>> The error shows:
>>>   Running revision for neutron ...
>>>   FAILED: Multiple heads are present; please specify the head
>>> revision on which the new revision should be based, or perform a merge.
>>>
>>> Does my method wrong? Could someone helps me?
>>>
>>> Thx
>>> Zhi Chang
>>>
>>>
>>>
>>>
>>
>>
>
>
> --
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>
>
>


-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Thierry Carrez
Adam Lawson wrote:
> I have a quick question: how is Amazon doing this? When choosing a next
> path forward that reliably scales, would be interesting to know how this
> is already being done.

Well, those who know probably would be sued if they told.

Since they have a limited set of instance types and very limited
placement options, my bet would be that they do flavor-based scheduling
("let compute nodes grab node reservation requests directly
out of flavor based queues based on their own current observation of
their ability to service it" in Clint's own words).

This is the most efficient way to scale: you no longer rely on a
specific scheduler trying to keep an up-to-date view of your compute
nodes' resource availability. As long as you are ready to abandon fancy
placement features, you can get simple, reliable and scalable
(non-)scheduling.
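
A toy sketch of that queue-per-flavor idea, just to make the shape of it
concrete (the data structures and names here are made up for illustration -
this is not Nova code, and a real deployment would use a shared message
queue rather than an in-process deque):

import collections

# Made-up illustration of "compute nodes grab reservation requests out of
# flavor-based queues based on their own view of capacity" -- not Nova code.
FLAVOR_RAM_MB = {'m1.small': 2048, 'm1.large': 8192}
flavor_queues = {name: collections.deque() for name in FLAVOR_RAM_MB}


def request_instance(flavor, instance_id):
    # "Scheduling" is just enqueueing a reservation on the flavor's queue.
    flavor_queues[flavor].append(instance_id)


class ComputeNode(object):
    def __init__(self, name, free_ram_mb):
        self.name = name
        self.free_ram_mb = free_ram_mb

    def poll(self):
        # Claim only what this node's own current view says it can service.
        for flavor, ram_mb in FLAVOR_RAM_MB.items():
            while self.free_ram_mb >= ram_mb and flavor_queues[flavor]:
                instance_id = flavor_queues[flavor].popleft()
                self.free_ram_mb -= ram_mb
                print('%s booting %s (%s)' % (self.name, instance_id, flavor))


request_instance('m1.small', 'instance-0001')
request_instance('m1.large', 'instance-0002')
ComputeNode('compute-1', free_ram_mb=16384).poll()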

Personally as we explore the options we have in that space, I'd like to
consider options that still enable us to plug such a no-scheduler
solution without too much trouble. Just for those of us who are ready to
make that trade-off :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron]Anyone tried to mix-use openstack components or projects?

2015-10-12 Thread Germy Lure
Thank you, Kevin.
So the community just divided the whole of OpenStack into separate
sub-projects (Nova, Neutron, etc.), but whether those modules can work
together across different versions is not taken into account. Yes?

If so, is it technically possible to keep them compatible with each other?
How about just N+1? And how about just within Neutron?

Germy
.

On Sun, Oct 11, 2015 at 4:33 PM, Kevin Benton  wrote:

> For the particular Nova Neutron example, the Neutron Kilo API should still
> be compatible with the calls Havana Nova makes. I think you will need to
> disable the Nova callbacks on the Neutron side because the Havana version
> wasn't expecting them.
>
> I've tried out many N+1 combinations (e.g. Icehouse + Juno, Juno + Kilo)
> but I haven't tried a gap that big.
>
> Cheers,
> Kevin Benton
>
> On Sat, Oct 10, 2015 at 1:50 AM, Germy Lure  wrote:
>
>> Hi all,
>>
>> As you know, openstack projects are developed separately. And
>> theoretically, people can create networks with Neutron in Kilo version for
>> Nova in Havana version.
>>
>> Did Anyone tried it?
>> Do we have some pages to show what combination can work together?
>>
>> Thanks.
>> Germy
>> .
>>
>>
>>
>
>
> --
> Kevin Benton
>
>
>


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-12 Thread Lana Brindley

On 09/10/15 19:42, Thierry Carrez wrote:
> Hello everyone,
> 
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
> 
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
> 
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
> 
> #success [Your message here]
> 
> The openstackstatus bot will take that and record it on this wiki page:
> 
> https://wiki.openstack.org/wiki/Successes
> 
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we recently added there).
> 
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
> 
> So... please use #success liberally and record little everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.
> 


I love this, nice work! (And I see we already have a docs success listed. Yay!)

L

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Thierry Carrez
Clint Byrum wrote:
> Excerpts from Joshua Harlow's message of 2015-10-10 17:43:40 -0700:
>> I'm curious is there any more detail about #1 below anywhere online?
>>
>> Does cassandra use some features of the JVM that the openJDK version 
>> doesn't support? Something else?
> 
> This about sums it up:
> 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StartupChecks.java#L153-L155
> 
> // There is essentially no QA done on OpenJDK builds, and
> // clusters running OpenJDK have seen many heap and load issues.
> logger.warn("OpenJDK is not recommended. Please upgrade to the newest 
> Oracle Java release");

Or:
https://twitter.com/mipsytipsy/status/596697501991702528

This is one of the reasons I'm generally negative about Java solutions
(Cassandra or Zookeeper): the free software JVM is still not on par with
the non-free one, so we indirectly force our users to use a non-free
dependency. I've been there before often enough to hear "did you
reproduce that bug under the {Sun,Oracle} JVM" quite a few times.

When a Java solution is the only solution for a problem space, that
might still be a good trade-off (compared to reinventing the wheel, for
example); but to share state or distribute locks, there are some pretty
good other options out there that don't suffer from the same fundamental
problem...

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-12 Thread Dmitry Tantsur

On 10/09/2015 05:41 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:58 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.


As I'm leaving for the weekend, I'll post my findings here.

I was not able to spot what writes these files (in my case it was named
33). I also was not able to reproduce it on my regular devstack
environment.

I've posted a temporary patch https://review.openstack.org/#/c/233017/
so that we're able to track where and when these files appear. Right now
I only understood that they really appear during the devstack run, not
earlier.


So, no file seems to be created, so it looks like a problem in devstack: 
https://review.openstack.org/#/c/233584/








This is not guarunteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean












Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-12 Thread Thierry Carrez
Mike Spreitzer wrote:
> Great.  I am about to contribute one myself.  Lucky I noticed this
> email.  How will the word get out to those who did not?  How about a
> pointer to instructions on the Successes page?

Yes, we need to document that in various places (infra guide, project
team guide, the wiki page itself). I'll take care of it.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [nova][mistral] Automatic evacuation as a long running task

2015-10-12 Thread Nikola Đipanov
On 10/06/2015 04:34 PM, Matthew Booth wrote:
> Hi, Roman,
> 
> Evacuated has been on my radar for a while and this post has prodded me
> to take a look at the code. I think it's worth starting by explaining
> the problems in the current solution. Nova client is currently
> responsible for doing this evacuate. It does:
>



> 
> I believe we can solve this problem, but I think that without fixing
> single-instance evacuate we're just pushing the problem around (or
> creating new places for it to live). I would base the robustness of my
> implementation on a single principal:
> 
>   An instance has a single owner, which is exclusively responsible for
> rebuilding it.
> 
> In outline, I would redefine the evacuate process to do:
> 
> API:
> 1. Call the scheduler to get a destination for the evacuate if none was
> given.
> 2. Atomically update instance.host to this destination, and task state
> to rebuilding.
> 

We can't do this because of resource tracking - the host switch has to
be done after the claim is done which can happen only on the target
compute, otherwise we don't track the resources properly (*).

That does not invalidate your more general point which is that we need a
way to make sure that started evacuations can be picked up and resumed
in case of any failures along the way (even a rebuild failure of the
target host that may have failed during the process).

There is some work that dansmith did [1], and I later built upon some of that
work [2]. I think our assumption was that we would use the migration record
for this, which _I think_ gives us all the stuff you talk about further
below, apart of course from there being a need for an external task to
actually see the evacuation through to the end. I think this is in-line
with most HA design proposals, where we make sure our control plane is
redundant while we really don't care about individual compute nodes
(apart from the instances they host).

I am also not sure that leaving the actual building of the instance up
to a periodic task is a good choice if we want to minimize downtime,
which seems to me to be the point of the instance HA proposals.

N.

(*) We could "solve" this by checking instance.task_state, for example, but
IMHO we shouldn't go down that route as it becomes way more difficult to
reason about resource tracking once you introduce one more free variable.

[1]
https://github.com/openstack/nova/blob/02b7e64b29dd707c637ea7026d337e5cb196f337/nova/compute/api.py#L3303
[2]
https://github.com/openstack/nova/blob/02b7e64b29dd707c637ea7026d337e5cb196f337/nova/compute/manager.py#L2702
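
To make the periodic-task idea in Matt's proposal (quoted below) concrete,
here is a toy sketch of the recovery loop - everything in it (the instance
records, the in-progress set, the rebuild call) is simplified and
hypothetical, not actual Nova code:

REBUILDING = 'rebuilding'


class ComputeHost(object):
    # Toy sketch of the compute-side recovery loop -- not Nova code.
    def __init__(self, host, instances):
        self.host = host
        self.instances = instances        # stands in for a DB query
        self.rebuilds_in_progress = set()

    def rebuild_instance(self, instance):
        print('rebuilding %s on %s' % (instance['uuid'], self.host))

    def poll_pending_evacuations(self):
        # Run periodically: resume any rebuild assigned to us but not running.
        for instance in self.instances:
            if instance['host'] != self.host:
                continue                  # not owned by this compute
            if instance['task_state'] != REBUILDING:
                continue                  # nothing to recover
            if instance['uuid'] in self.rebuilds_in_progress:
                continue                  # a rebuild is already underway
            self.rebuilds_in_progress.add(instance['uuid'])
            self.rebuild_instance(instance)


# The API step atomically set host/task_state; the target compute then
# converges on its own even if the original rebuild message was lost.
instances = [{'uuid': 'abc', 'host': 'compute-2', 'task_state': REBUILDING}]
ComputeHost('compute-2', instances).poll_pending_evacuations()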

> Compute:
> 3. Rebuild the instance.
> 
> This would be supported by a periodic task on the compute host which
> looks for rebuilding instances assigned to this host which aren't
> currently rebuilding, and kicks off a rebuild for them. This would cover
> the compute going down during a rebuild, or the api going down before
> messaging the compute.
> 
> Implementing this gives us several things:
> 
> 1. The list instances, evacuate all instances process becomes
> idempotent, because as soon as the evacuate is initiated, the instance
> is removed from the source host.
> 2. We get automatic recovery of failure of the target compute. Because
> we atomically moved the instance to the target compute immediately, if
> the target compute also has to be evacuated, our instance won't fall
> through the gap.
> 3. We don't need an additional place for the code to run, because it
> will run on the compute. All the work has to be done by the compute
> anyway. By farming the evacuates out directly and immediately to the
> target compute we reduce both overhead and complexity.
> 
> The coordination becomes very simple. If we've run the nova client
> evacuation anywhere at least once, the actual evacuations are now
> Sombody Else's Problem (to quote h2g2), and will complete eventually. As
> evacuation in any case involves a forced change of owner it requires
> fencing of the source and implies an external agent such as pacemaker.
> The nova client evacuation can run in pacemaker.
> 
> Matt
> 
> On Fri, Oct 2, 2015 at 2:05 PM, Roman Dobosz  > wrote:
> 
> Hi all,
> 
> The case of automatic evacuation (or resurrection currently), is a topic
> which surfaces once in a while, but it isn't yet fully supported by
> OpenStack and/or by the cluster services. There was some attempts to
> bring the feature into OpenStack, however it turns out it cannot be
> easily integrated with. On the other hand evacuation may be executed
> from the outside using Nova client or Nova API calls for evacuation
> initiation.
> 
> I did some research regarding the ways how it could be designed, based
> on Russel Bryant blog post[1] as a starting point. Apart from it, I've
> also taken high availability and reliability into consideration when
> designing the solution.
> 
> Together with coworker, we did first 

Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-12 Thread Salvatore Orlando
Inline,
Salvatore

On 12 October 2015 at 09:29, Germy Lure  wrote:

> Hi Kevin,
>
> *Thank you for your response. Periodic data checking is a popular and
> effective method to sync info. But there is no such a feature in Neutron.
> Right? Will the community merge it soon? And can we consider it with
> agent-style mechanism together?*
>
> Vendor-specific extension or coding a periodic task private by vendor is
> not a good solution, I think. Because it means that Neutron-Sever could not
> integrate with multiple vendors' controller and even the controller of
> those vendors that introduced this extension or task could not integrate
> with a standard community Neutron-Server.
>

I am not sure what issue you are seeing here or what you are
advocating for.
If you're asking for a generic interface for synchronising the neutron
database with a backend, that could be implemented, but it would still be
up to plugin and driver maintainers to use that interface.
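
For what it's worth, such a generic hook could look roughly like the sketch
below - the backend client, the interval and all the names are made up for
illustration; this is not an existing Neutron interface:

import threading

# Made-up sketch of a periodic Neutron-DB-to-backend consistency check; the
# backend client and every name here are hypothetical, not a real interface.


class BackendSyncMixin(object):
    # A plugin/mechanism driver maintainer would mix this in and provide a
    # backend client plus a function returning the current Neutron DB view.

    SYNC_INTERVAL = 300  # seconds

    def start_backend_sync(self, backend_client, db_snapshot_func):
        self._backend = backend_client
        self._snapshot = db_snapshot_func
        self._schedule_next_sync()

    def _schedule_next_sync(self):
        timer = threading.Timer(self.SYNC_INTERVAL, self._sync_once)
        timer.daemon = True
        timer.start()

    def _sync_once(self):
        try:
            # Ask the backend whether it lost state (e.g. it just restarted).
            if self._backend.needs_full_sync():
                # Push the current Neutron view (networks, subnets, ports).
                self._backend.apply_full_state(self._snapshot())
        finally:
            self._schedule_next_sync()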



> That is just the tip of the iceberg. Many of the other problems resulting,
> such as fixing bugs,upgrade,patch and etc.
> But wait, is it a vendor-specific feature? Of course not. All software
> systems need data checking.
>

If you have something in mind I'd like to understand more about your use
case (I got the issue, I want to understand what you're trying to achieve),
and how you think you could possibly implement it.

>
> Many thanks.
> Germy
>
>
> On Sun, Oct 11, 2015 at 4:28 PM, Kevin Benton  wrote:
>
>> You can have a periodic task that asks your backend if it needs sync info.
>> Another option is to define a vendor-specific extension that makes it
>> easy to retrieve all info in one call via the HTTP API.
>>
>> On Sat, Oct 10, 2015 at 2:24 AM, Germy Lure  wrote:
>>
>>> Hi all,
>>>
>>> After restarting, Agents load data from Neutron via RPC. What about 3-rd
>>> controller? They only can re-gather data via NBI. Right?
>>>
>>> Is it possible to provide some mechanism for those controllers and
>>> agents to sync data? or something else I missed?
>>>
>>> Thanks
>>> Germy
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Kevin Benton
>>
>>
>>
>
>
>


Re: [openstack-dev] [devstack] [neutron] A larger batch of questions about configuring DevStack to use Neutron

2015-10-12 Thread Sean M. Collins
On Thu, Oct 08, 2015 at 01:47:31PM EDT, Mike Spreitzer wrote:
> ..
> > > In the section 
> > > http://docs.openstack.org/developer/devstack/guides/
> > neutron.html#using-neutron-with-a-single-interface
> > > there is a helpful display of localrc contents.  It says, among other 
> > > things,
> > > 
> > >OVS_PHYSICAL_BRIDGE=br-ex
> > >PUBLIC_BRIDGE=br-ex
> > > 
> > > In the next top-level section, 
> > > http://docs.openstack.org/developer/devstack/guides/
> > neutron.html#using-neutron-with-multiple-interfaces
> > > , there is no display of revised localrc contents and no mention of 
> > > changing either bridge setting.  That is an oversight, right?
> > 
> > No, this is deliberate. Each section is meant to be independent, since
> > each networking configuration and corresponding DevStack configuration
> > is different. Of course, this may need to be explicitly stated in the
> > guide, so there is always room for improvement.
> 
> I am not quite sure I understand your answer.  Is the intent that I can 
> read only one section, ignore all the others, and that will tell me how to 
> use DevStack to produce that section's configuration?  If so then it would 
> be good if each section had a display of all the necessary localrc 
> contents.

Agreed. This is a failure on my part, because I was pasting in only
parts of my localrc (since it came out of a live lab environment). I've
started pushing patches to correct this.


> I have started over, from exactly the picture drawn at the start of the 
> doc.  That has produced a configuration that mostly works.  However, I 
> tried creating a VM connected to the public network, and that failed for 
> lack of a Neutron DHCP server there.

The public network is used for floating IPs. The L3 agent creates NAT
rules to intercept traffic on the public network and NAT it back to a
guest instance that has the floating IP allocated to it.

As for the behavior when a guest is directly attached to the public
network: I sort of forget what happens exactly, but I do know from
experience that it doesn't work - most likely because the network is not
configured as a flat network. It will not receive a DHCP lease from the
external router.

> I am going to work out how to change 
> that, and am willing to contribute an update to this doc.  Would you want 
> that in this section --- in which case this section needs to specify that 
> the provider DOES NOT already have DHCP service on the hardware network 
> --- or as a new section?

No, I think we should maybe have a warning or something that the
external network will be used for Floating IPs, and that guest instances
should not be directly attached to that network.

> > > 
> > > Looking at 
> > > http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html(or
> , in 
> > > former days, the doc now preserved at 
> > > http://docs.ocselected.org/openstack-manuals/kilo/networking-
> > guide/content/under_the_hood_openvswitch.html
> > > ) I see the name br-ex used for $PUBLIC_BRIDGE--- not 
> $OVS_PHYSICAL_BRIDGE
> > > , right?  Wouldn't it be less confusing if 
> > > http://docs.openstack.org/developer/devstack/guides/neutron.htmlused a 
> 
> > > name other than "br-ex" for the exhibited commands that apply to 
> > > $OVS_PHYSICAL_BRIDGE?
> > 
> > No, this is deliberate - br-ex is the bridge that is used for external
> > network traffic - such as floating IPs and public IP address ranges. On
> > the network node, a physical interface is attached to br-ex so that
> > traffic will flow.
> > 
> > PUBLIC_BRIDGE is a carryover from DevStack's Nova-Network support and is
> > used in some places, with OVS_PHYSICAL_BRIDGE being used by DevStack's
> > Neutron support, for the Open vSwitch driver specifically. They are two
> > variables that for the most part serve the same purpose. Frankly,
> > DevStack has a lot of problems with configuration knobs, and
> > PUBLIC_BRIDGE and OVS_PHYSICAL_BRIDGE is just a symptom.
> 
> Ah, thanks, that helps.  But I am still confused.  When using Neutron with 
> two interfaces, there will be a bridge for each.

There shouldn't be. I'm pushing patches to the multiple-interface
section that include output from the ovs-vsctl commands; hopefully
they'll clarify things.


> I have learned that 
> DevStack will automatically create one bridge, and seen that it is named 
> "br-ex" when following the instructions in the single-interface section. 
> Now in the multiple-interface section I see that I should create a bridge 
> named "br-ex" before invoking DevStack.

It should still create the bridge br-ex automatically. Depending on your
configuration, DevStack should also attach the interface to the bridge
automatically.
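
For reference, a minimal localrc fragment for this kind of setup would look
roughly like the following - treat it as a sketch: the interface name and the
address ranges are placeholders, not values from my lab:

  # hypothetical localrc fragment - adjust interface and ranges to your setup
  PUBLIC_INTERFACE=eth1            # NIC that carries external traffic
  OVS_PHYSICAL_BRIDGE=br-ex        # DevStack creates this bridge and plugs the NIC in
  PUBLIC_BRIDGE=br-ex
  FLOATING_RANGE=203.0.113.0/24    # external network / floating IP pool
  PUBLIC_NETWORK_GATEWAY=203.0.113.1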

> I suppose DevStack will create 
> the other bridge needed, but suspect I am missing a clue about what to 
> tell DevStack so that it makes a bridge with the right name and connected 
> to the right interface.

I don't believe a second bridge is required.

> Good.  I was afraid 

Re: [openstack-dev] [openstack-announce] [release][oslo] oslo.log release 1.12.0 (mitaka)

2015-10-12 Thread Julien Danjou
On Mon, Oct 12 2015, dava...@gmail.com wrote:

> We are amped to announce the release of:
>
> oslo.log 1.12.0: oslo.log library
>

[…]

This version breaks Gnocchi (and probably all other projects) on
platforms that are not Linux, for no good reason. I'm trying to push a
fix here:

  https://review.openstack.org/#/c/233708/

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][db]Neutron db revision fails

2015-10-12 Thread Anna Kamyshnikova
You should use neutron-db-manage --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For
Test" --autogenerate. Make your changes in the db models and then run this
command - the migration will be generated automatically.
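
Spelled out as a copy-pastable sketch (config file paths taken from your
original command - adjust them to your deployment):

  # 1. make your changes to the db models, then:
  neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
      revision -m "Just For Test" --autogenerate
  # alembic generates the migration script from the model diff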

On Mon, Oct 12, 2015 at 7:02 AM, Zhi Chang  wrote:

> Thanks for your reply. What should I do if I want to create a new
> migration script?
>
> Thanks
> Zhi Chang
>
>
> -- Original --
> *From: * "Vikram Choudhary";
> *Date: * Mon, Oct 12, 2015 12:22 PM
> *To: * "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] [Neutron][db]Neutron db revision fails
>
>
> Hi Zhi,
>
> We already have a defect logged for this issue.
>
> https://bugs.launchpad.net/neutron/+bug/1503342
>
> Thanks
> Vikram
> On Oct 12, 2015 8:10 AM, "Zhi Chang"  wrote:
>
>> Hi, everyone.
>> I install a devstack from the latest code. But there is an error when
>> I create a db migration script. My migration shell is:
>> "neutron-db-manage --config-file /etc/neutron/neutron.conf
>> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For
>> Test""
>> The error shows:
>>   Running revision for neutron ...
>>   FAILED: Multiple heads are present; please specify the head
>> revision on which the new revision should be based, or perform a merge.
>>
>> Is my method wrong? Could someone help me?
>>
>> Thx
>> Zhi Chang
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] novaclient functional test failures

2015-10-12 Thread Kevin L. Mitchell
Functional tests on novaclient (gate-novaclient-dsvm-functional) have
started failing consistently.  The test failures all seem to be for an
HTTP 300, which leads me to suspect a problem with the test environment,
rather than with the tests in novaclient.  Anyone have any insights as
to how to address the problem?
-- 
Kevin L. Mitchell 
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [xen] [ovs]: how to handle the ports when there are multiple ports existing for a single VM vif

2015-10-12 Thread Jianghua Wang
Hi guys,
   I'm working on bug #1268955, which is caused by the neutron ovs agent/plugin 
not processing ports correctly when multiple ports exist for a single VM vif. I 
originally identified two potential solutions, but one of them requires a 
non-trivial change and the other may result in a race condition. So I'm posting 
it here to seek help. Please let me know if you have any comments or advice. 
Thanks in advance.

Bug description:

When the guest VM is running in HVM mode, neutron doesn't set the vlan tag 
on the proper port, so the guest VM loses network connectivity.

Problem analysis:
When the VM is in HVM mode, ovs ends up with two ports and two interfaces for a 
single vif inside the VM. If the domID is x, one port/interface is named tapx.0, 
which is the qemu-emulated NIC used when no PV drivers are installed; the other 
is named vifx.0, which is the xen network frontend NIC used when the VM has PV 
drivers installed. Depending on whether the PV drivers are present, either 
port/interface may be used. But the current ovs agent/plugin uses the VM's vif 
id (iface-id) to identify the port, so depending on the order of the ports 
retrieved from ovs, only one port will be processed by neutron. The network 
problem then occurs when the port actually in use is not the one that neutron 
processed (e.g. set the vlan tag on).



Two of my potential solutions:

1.  Configure both ports, regardless of which one ends up being used, so that both 
have the same configuration. This should resolve the problem, but the existing 
code uses the iface-id as the key for each port, and both tapx.0 and vifx.0 have 
the same iface-id. With this solution I have to change the data structure to 
hold both ports and update the related functions, and the required changes are 
spread across many places, so it will take much more effort than the second 
option. I also have a concern that configuring the inactive port may cause 
issues, although I can't point to a concrete one currently.



2.  When there are multiple candidates, ovs sets the "iface-status" field to 
active for the one currently in effect and marks the others inactive. So the 
other solution is to return only the active port. If a switchover happens in a 
later phase, treat that port as updated and configure the newly chosen port 
accordingly. This ensures the active port is always configured properly, and 
the needed change is very limited. Please see the draft patch set for this 
solution: https://review.openstack.org/#/c/233498/
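
For anyone who wants to look at the two ports side by side, something along
these lines should show which one ovs currently marks active (commands written
from memory, so please double-check; tapx.0/vifx.0 stand for the pair belonging
to the domain in question):

  # list name + external_ids (iface-id, iface-status, ...) for all interfaces
  ovs-vsctl --columns=name,external_ids list Interface
  # or inspect the specific pair for domain x
  ovs-vsctl get Interface tapx.0 external_ids
  ovs-vsctl get Interface vifx.0 external_ids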



But the problem is that it introduces a race condition. E.g. the tag is first 
set on tapx.0 and the guest VM gets connectivity via tapx.0; then the PV driver 
is loaded, so the active port switches to vifx.0; but depending on the neutron 
agent polling interval, vifx.0 may not be tagged for a while, and during that 
period the connection is lost.


Could you share your insights? Thanks a lot.

B.R.
Jianghua Wang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Joshua Harlow

Thierry Carrez wrote:

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2015-10-10 17:43:40 -0700:

I'm curious is there any more detail about #1 below anywhere online?

Does cassandra use some features of the JVM that the openJDK version
doesn't support? Something else?

This about sums it up:

https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StartupChecks.java#L153-L155

 // There is essentially no QA done on OpenJDK builds, and
 // clusters running OpenJDK have seen many heap and load issues.
 logger.warn("OpenJDK is not recommended. Please upgrade to the newest Oracle 
Java release");


Or:
https://twitter.com/mipsytipsy/status/596697501991702528

This is one of the reasons I'm generally negative about Java solutions
(Cassandra or Zookeeper): the free software JVM is still not on par with
the non-free one, so we indirectly force our users to use a non-free
dependency. I've been there before often enough to hear "did you
reproduce that bug under the {Sun,Oracle} JVM" quite a few times.


I'd be happy to 'fight' for (and even fix) any issues found with 
zookeeper + openjdk if needed; hopefully that twitter posting ended up 
as a bug filed at https://issues.apache.org/jira/browse/ZOOKEEPER/ 
and hopefully things are getting fixed...




When the Java solution is the only solution for a problem space that
might still be a good trade-off (compared to reinventing the wheel for
example), but to share state or distribute locks, there are some pretty
good other options out there that don't suffer from the same fundamental
problem...



IMHO it's the only 'mature' solution so far; but of course maturity is a 
relative thing (look at the project age, version number of zookeeper vs 
etcd, consul for a general idea around this); in general I'd really like 
the TC and the foundation to help make the right decision here, because 
this kind of choice affects the long-term future (and health) of 
openstack as a whole (or I believe it does).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] DevStack switching from MySQL-python to PyMySQL

2015-10-12 Thread Jesse Pretorius
On 15 June 2015 at 12:30, Sean Dague  wrote:

>
> As a heads up for where we stand. The switch was flipped, but a lot of
> neutron jobs (rally & tempest) went into a pretty high failure rate
> after it was (all the other high volume jobs seemed fine).
>
> We reverted the change here to unwedge things -
> https://review.openstack.org/#/c/191010/
>
> After a long conversation with Henry and Armando we came up with a new
> plan, because we want the driver switch, and we want to figure out why
> it causes a high Neutron failure rate, but we don't want to block
> everything.
>
> https://review.openstack.org/#/c/191121/ - make the default Neutron jobs
> set some safe defaults (which are different than non Neutron job
> defaults), but add a flag to make it possible to expose these issues.
>
> Then add new non-voting check jobs to Neutron queue to expose these
> issues - https://review.openstack.org/#/c/191141/. Hopefully allowing
> interested parties to get to the bottom of these issues around the db
> layer. It's in the check queue instead of the experimental queue to get
> enough volume to figure out the pattern for the failures, because they
> aren't 100%, and they seem to move around a bit.
>
> Once https://review.openstack.org/#/c/191121/ is landed we'll revert
> revert - https://review.openstack.org/#/c/191113/ and get everything
> else back onto pymysql.


Did this ever progress through to completion?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] novaclient functional test failures

2015-10-12 Thread Sean Dague
On 10/12/2015 11:54 AM, Kevin L. Mitchell wrote:
> Functional tests on novaclient (gate-novaclient-dsvm-functional) have
> started failing consistently.  The test failures all seem to be for an
> HTTP 300, which leads me to suspect a problem with the test environment,
> rather than with the tests in novaclient.  Anyone have any insights as
> to how to address the problem?

Do you have a link to failed jobs? Or a bug to start accumulating that in?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Auto-abandon bot

2015-10-12 Thread Ben Nemec
On 10/10/2015 08:12 AM, Jeremy Stanley wrote:
> On 2015-10-09 17:10:15 -0500 (-0500), Ben Nemec wrote:
>> As discussed in the meeting a week or two ago, we would like to bring
>> back the auto-abandon functionality for old, unloved gerrit reviews.
> [...]
>> -WIP patches are never abandoned
> [...]
>> -Patches that are failing CI for over a month on the same patch set
>> (regardless of any followup comments - the intent is that patches
>> expected to fail CI should be marked WIP).
> [...]
> 
> Have you considered the possibility of switching stale changes to
> WIP instead of abandoning them?


That might be a valid alternative.  I'll bring it up in the meeting
tomorrow.

> 
> I usually have somewhere around 50-100 open changes submitted (often
> more), and for some of those I might miss failures or negative
> review comments for a month or so at a time depending on what else I
> have going on. It's very easy to lose track of a change if someone
> abandons it for me.

I have to admit I had missed this point in previous discussions on the
topic, and it is a valid concern IMHO.

As an alternative to just setting WIP on old changes, we could
significantly increase the abandon timeout.  I re-ran the tool with some
different timeouts to get more data points.  Here's what I found:

Days - # of changes to be abandoned
31 - 36
90 - 29
180 - 20
365(!) - 12

So even setting the timeout to 1 year, which I think we can all agree is
safe to call abandoned, we'd catch 12 changes.  Hence the desire for a
cleanup bot. :-)

I kind of like 90 or 180 as a happy medium.  It would still clear a
non-trivial amount of patches, but I think after six months of neglect
we can safely call a patch abandoned by the submitter.  I'm inclined to
say 90 is safe too, but I'm okay with 180 and I do want to err on the
side of leaving things open so that would work for me.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] DevStack switching from MySQL-python to PyMySQL

2015-10-12 Thread Sean Dague
On 10/12/2015 12:03 PM, Jesse Pretorius wrote:
> On 15 June 2015 at 12:30, Sean Dague  > wrote:
> 
> 
> As a heads up for where we stand. The switch was flipped, but a lot of
> neutron jobs (rally & tempest) went into a pretty high failure rate
> after it was (all the other high volume jobs seemed fine).
> 
> We reverted the change here to unwedge things -
> https://review.openstack.org/#/c/191010/
> 
> After a long conversation with Henry and Armando we came up with a new
> plan, because we want the driver switch, and we want to figure out why
> it causes a high Neutron failure rate, but we don't want to block
> everything.
> 
> https://review.openstack.org/#/c/191121/ - make the default Neutron jobs
> set some safe defaults (which are different than non Neutron job
> defaults), but add a flag to make it possible to expose these issues.
> 
> Then add new non-voting check jobs to Neutron queue to expose these
> issues - https://review.openstack.org/#/c/191141/. Hopefully allowing
> interested parties to get to the bottom of these issues around the db
> layer. It's in the check queue instead of the experimental queue to get
> enough volume to figure out the pattern for the failures, because they
> aren't 100%, and they seem to move around a bit.
> 
> Once https://review.openstack.org/#/c/191121/ is landed we'll revert
> revert - https://review.openstack.org/#/c/191113/ and get everything
> else back onto pymysql.
> 
> 
> Did this ever progress through to completion?

Yes, the last patch was landed on June 16th. We've been running on
pymysql for most of the liberty development cycle.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Clint Byrum
Excerpts from Thomas Goirand's message of 2015-10-12 05:57:26 -0700:
> On 10/11/2015 02:53 AM, Davanum Srinivas wrote:
> > Thomas,
> > 
> > i am curious as well. AFAIK, cassandra works well with OpenJDK. Can you
> > please elaborate what your concerns are for #1?
> > 
> > Thanks,
> > Dims
> 
> s/works well/works/
> 
> Upstream doesn't test against OpenJDK, and they close bugs without
> fixing them when they only affect OpenJDK and aren't grave. I know this
> from one of the Cassandra upstream developers, who is also a Debian
> developer. Because of this state of things, he gave up on packaging
> Cassandra in Debian (and for other reasons too, like not having enough
> time to work on the packaging).
> 
> I trust what this Debian developer told me. If I remember correctly,
> it's Eric Evans  (ie, the author of the ITP at
> https://bugs.debian.org/585905) that I'm talking about.
> 

Indeed, I once took a crack at packaging it for Debian/Ubuntu too.
There's a reason 'apt-cache search cassandra' returns 0 results on Debian
and Ubuntu.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About availability zones

2015-10-12 Thread Lingxian Kong
Hi, Sylvain Nova guys,

Our team has also argued about this deeply, and no consensus was reached.

if instance.az is designed just to reflect what was requested for the instance
at boot time, then it doesn't make much sense after the vm has been
live-migrated with a specified destination several times, right?
Then, how can a user get the actual az for that vm? The workaround I can
think of is that the user queries the host the vm is running on and then
queries the az that host belongs to - is this the recommended way?
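
In CLI terms, the workaround I mean would be roughly the following (a rough
sketch from memory; INSTANCE_UUID is a placeholder and the field names are as
I recall them, so double-check before relying on it):

  # 1. find the host the VM is currently running on (and its recorded AZ)
  nova show INSTANCE_UUID | grep -E 'OS-EXT-SRV-ATTR:host|OS-EXT-AZ:availability_zone'
  # 2. map that host back to the availability zone it actually sits in (admin)
  nova availability-zone-list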

On Wed, Sep 23, 2015 at 4:49 PM, Zhenyu Zheng  wrote:
> Hi,
>
> Thanks for the reply. One possible use case is that the user wants to
> live-migrate to az2, so he specified host2. As we didn't update
> instance.az, if the user live-migrates again without specifying a destination
> host, the instance will migrate back to az1, which might be different from what
> the user expects. Any thoughts about this?
>
> BR,
>
> Zheng
>
> On Wed, Sep 23, 2015 at 4:30 PM, Sylvain Bauza  wrote:
>>
>>
>>
>> Le 23/09/2015 05:24, Zhenyu Zheng a écrit :
>>
>> Hi, all
>>
>> I have a question about availability zones when performing live-migration.
>>
>> Currently, when performing a live-migration, the AZ of the instance is not
>> updated. In a use case like this:
>> Instance_1 is on host1, which is in az1, and we live-migrate it to host2
>> (providing host2 in the API request), which is in az2. The operation will
>> succeed, but the availability zone stored for instance1 is still az1, which
>> may cause an inconsistency between the az data stored in the instance db and
>> the actual az. I think updating the az information in the instance using the
>> host's az can solve this.
>>
>>
>> Well, no. Instance.AZ is only a reflection of what the user asked for, not
>> the current AZ of the host the instance belongs to. In other words,
>> instance.az is set once and forever by taking the --az hint from the API
>> request and persisting it in the DB.
>>
>> That means that if you want to create a new VM without explicitly
>> specifying one AZ in the CLI, it will take the default value of
>> CONF.default_schedule_az which is None (unless you modify that flag).
>>
>> Consequently, when it goes to the scheduler, the AZFilter will not
>> check the related AZs on any host because you didn't ask for an AZ. That
>> means that the instance is considered "AZ-free".
>>
>> Now, when live-migrating, *if you specify a destination*, you totally
>> bypass the scheduler and thus the AZFilter. By doing that, you can put your
>> instance to another host without really checking the AZ.
>>
>> That said, if you *don't specify a destination*, then the scheduler will
>> be called and will enforce the instance.az field with regards to the host
>> AZ. That should still work (again, depending on whether you explicitly set
>> an AZ at the boot time)
>>
>> To be clear, there is no reason to update that instance AZ field. We could,
>> though, consider it a new "request" field that could potentially be moved to
>> the RequestSpec object, but for the moment this is a bit too early since we
>> don't really use that new RequestSpec object yet.
>>
>>
>>
>> Also, I have heard from my colleague that in the future we are planning to
>> use the host az information for instances. I couldn't find information about
>> this - could anyone provide me some details about it if that's true?
>>
>>
>> See my point above, I'd rather prefer to fix how live-migrations check the
>> scheduler (and not bypass it when specifying a destination) and possibly
>> move the instance AZ field to the RequestSpec object once that object is
>> persisted, but I don't think we should check the host instead of the
>> instance in the AZFilter.
>>
>>
>> I assume all of that can be very confusing and mostly tribal knowledge,
>> that's why we need to document that properly and one first shot is
>> https://review.openstack.org/#/c/223802/
>>
>> -Sylvain
>>
>> Thanks,
>>
>> Best Regards,
>>
>> Zheng
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards!
---
Lingxian Kong

__

Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Thomas Goirand
On 10/11/2015 02:53 AM, Davanum Srinivas wrote:
> Thomas,
> 
> i am curious as well. AFAIK, cassandra works well with OpenJDK. Can you
> please elaborate what your concerns are for #1?
> 
> Thanks,
> Dims

s/works well/works/

Upstream doesn't test against OpenJDK, and they close bugs without
fixing them when they only affect OpenJDK and aren't grave. I know this
from one of the Cassandra upstream developers, who is also a Debian
developer. Because of this state of things, he gave up on packaging
Cassandra in Debian (and for other reasons too, like not having enough
time to work on the packaging).

I trust what this Debian developer told me. If I remember correctly,
it's Eric Evans  (ie, the author of the ITP at
https://bugs.debian.org/585905) that I'm talking about.

On 10/12/2015 01:19 AM, Amrith Kumar wrote:
> This is not a requirement by any means. See [3].
>
http://stackoverflow.com/questions/21487354/does-latest-cassandra-support-openjdk

A *hard* requirement, probably not. But this doesn't mean that Cassandra
works *well* on OpenJDK.

Anyway, I'd prefer if nobody trusted me, and that this was seriously
checked.

On 10/11/2015 08:53 AM, Clint Byrum wrote:
>
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StartupChecks.java#L153-L155
>
> // There is essentially no QA done on OpenJDK builds, and
> // clusters running OpenJDK have seen many heap and load issues.
> logger.warn("OpenJDK is not recommended. Please upgrade to the
> newest Oracle Java release");

Ah! Thanks for finding out the details I was missing... :)
With these kinds of problems, I don't think anyone would want to take the
responsibility of uploading Cassandra to either Debian or Ubuntu. At least
*I* wouldn't.

That being said, maybe the issue that Clint quoted is fixable. Probably
the issue is that nobody really cares... (yet?)

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting - 10/12/2015

2015-10-12 Thread Renat Akhmerov
Hi,

This is a reminder that we’ll have a team meeting today at #openstack-meeting 
at 16.00 UTC.

Agenda: 
Review action items
Current status (progress, issues, roadblocks, further plans)
Official Liberty Release health
Bugfixing progress
Documentation progress
https://review.openstack.org/232507 
UI progress
Open discussion



Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ERROR : openstack Forbidden (HTTP 403)

2015-10-12 Thread Dhvanan Shah
Resolved it.
The no_proxy env var needs to be set if your computer is located behind an
authenticating proxy infrastructure.
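
For the record, what fixed it for me was along these lines (the host list and
the answer file name are specific to my setup - adjust them before re-running):

  export no_proxy=localhost,127.0.0.1,10.16.37.221
  export NO_PROXY=$no_proxy
  packstack --answer-file=ANSWER_FILE   # re-run the install with your own answer file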

Source :
https://ask.openstack.org/en/question/67203/kilo-deployment-using-packstack-fails-with-403-error-on-usrbinopenstack-service-list/

On Mon, Oct 12, 2015 at 3:12 PM, Dhvanan Shah  wrote:

> Hi,
>
> I am getting this error while installing Openstack on Centos.
> ERROR : Error appeared during Puppet run: 10.16.37.221_keystone.pp
> Error: Could not prefetch keystone_service provider 'openstack': Execution
> of '/usr/bin/openstack service list --quiet --format csv --long' returned
> 1: ERROR: openstack Forbidden (HTTP 403)
>
> I've checked the permissions of the executable files and they are not
> the problem. So I'm not sure why I'm forbidden from executing this. Could
> use some help!
>
> --
> Dhvanan Shah
>



-- 
Dhvanan Shah
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [inspector] Ideas for summit discussions

2015-10-12 Thread Dmitry Tantsur

Hi inspectors! :)

We don't have a proper design session in Tokyo, but I hope it won't 
prevent us from having an informal one, probably on Friday morning 
during the contributor meetup. I'm collecting the ideas of what we could 
discuss, so please feel free to jump in:

https://etherpad.openstack.org/p/mitaka-ironic-inspector

Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Meter-list with multiple filters in simple query is not working

2015-10-12 Thread Eoghan Glynn


> Hi
> 
> Can anyone please help me with how to specify a simple query with multiple values
> for a query field in a Ceilometer meter-list request? I need to fetch meters
> that belong to more than one project id. I have tried the following query
> format, but only the last query value (in this case, project_id=
> d41cdd2ade394e599b40b9b50d9cd623) is used for filtering. Any help is
> appreciated here.
> 
> curl -H 'X-Auth-Token:'
> 'http://localhost:8777/v2/meters?q.field=project_id&q.op=eq&q.value=f28d2e522e1f466a95194c10869acd0c&q.field=project_id&q.op=eq&q.value=d41cdd2ade394e599b40b9b50d9cd623'
> 
> Thanks
> Srikanth

By "not working" you mean "not doing what you (incorrectly) expect it to do".

Your query asks for samples with a project_id set to *both* f28d.. *and* d41c..
The result is empty, as a sample can't be associated with two project_ids.

The ceilometer simple query API combines all filters using logical AND.

Seems like you want logical OR here, which is possible to express via complex
queries:

  http://docs.openstack.org/developer/ceilometer/webapi/v2.html#complex-query
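
An OR over your two project ids would look something like the following. This
is an untested sketch ($TOKEN is a placeholder), and note that the complex
query endpoint operates on samples rather than on the meter list:

  curl -X POST http://localhost:8777/v2/query/samples \
    -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' \
    -d '{"filter": "{\"or\": [{\"=\": {\"project_id\": \"f28d2e522e1f466a95194c10869acd0c\"}}, {\"=\": {\"project_id\": \"d41cdd2ade394e599b40b9b50d9cd623\"}}]}"}'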

Cheers,
Eoghan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Jean-Daniel Bonnetot
Hi everyone,

What do you think about this proposal ?
http://www.slideshare.net/viggates/openstack-india-meetupscheduler

It seems they found a real solution for a scaling scheduler.
The good idea is to move the intelligence onto the compute nodes.
Synchronisation is only needed for anti-affinity and similar constraints, which 
can be managed in another way.

—
Jean-Daniel Bonnetot
http://www.ovh.com
@pilgrimstack



> Le 12 oct. 2015 à 12:30, Thierry Carrez  a écrit :
> 
> Clint Byrum wrote:
>> Excerpts from Joshua Harlow's message of 2015-10-10 17:43:40 -0700:
>>> I'm curious is there any more detail about #1 below anywhere online?
>>> 
>>> Does cassandra use some features of the JVM that the openJDK version 
>>> doesn't support? Something else?
>> 
>> This about sums it up:
>> 
>> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StartupChecks.java#L153-L155
>> 
>>// There is essentially no QA done on OpenJDK builds, and
>>// clusters running OpenJDK have seen many heap and load issues.
>>logger.warn("OpenJDK is not recommended. Please upgrade to the newest 
>> Oracle Java release");
> 
> Or:
> https://twitter.com/mipsytipsy/status/596697501991702528
> 
> This is one of the reasons I'm generally negative about Java solutions
> (Cassandra or Zookeeper): the free software JVM is still not on par with
> the non-free one, so we indirectly force our users to use a non-free
> dependency. I've been there before often enough to hear "did you
> reproduce that bug under the {Sun,Oracle} JVM" quite a few times.
> 
> When the Java solution is the only solution for a problem space that
> might still be a good trade-off (compared to reinventing the wheel for
> example), but to share state or distribute locks, there are some pretty
> good other options out there that don't suffer from the same fundamental
> problem...
> 
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] 7.0.0 (liberty) release: which modules?

2015-10-12 Thread Alexey Deryugin
Hi, Emilien!

I've pushed some basic functional tests for Murano. Here's a review link:
https://review.openstack.org/#/c/233591/.

Also I'll try to add more functional tests as soon as all the patches are
merged to puppet-murano.

-- 
Kind Regards,
Alexey Deryugin,
Intern Deployment Engineer,
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Ironic] Testing Ironic multi-tenancy feature

2015-10-12 Thread Jim Rollenhagen
On Mon, Oct 12, 2015 at 07:45:55AM +, Pavlo Shchelokovskyy wrote:
> Hi all,
> 
> we would like to start preliminary testing of the Ironic multi-tenant network
> setup which is supported by Neutron in Liberty according to [1]. According
> to Neutron's design, integration with network equipment is done via ML2
> plugins. We are looking for plugins and network equipment that can work
> with such an Ironic multi-tenant setup. Could the community recommend a
> hardware switch and corresponding Neutron plugin pair that already supports
> this functionality?

So, this functionality wasn't finished during Liberty on the Ironic side
(which is the larger part of the change). Most of the code is completed
and in testing and review. If you'd like to help test, please do jump
into our weekly meeting on the topic (link below).

Some references:
https://blueprints.launchpad.net/ironic/+spec/network-provider
https://blueprints.launchpad.net/ironic/+spec/ironic-ml2-integration
https://wiki.openstack.org/wiki/Meetings/Ironic-neutron

To answer your actual question, I know that HP and Arista are both
working on (or have completed) Neutron ML2 mechanisms that support this
on some of their gear. I don't know specifics on that, but hopefully
they can chime in, and hopefully there's other vendors working on this
as well. :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team

2015-10-12 Thread michael mccune
i'm +1 for this, Vitaly has been doing a great job contributing code and 
reviews to the project.


mike

On 10/12/2015 07:19 AM, Sergey Lukjanov wrote:

Hi folks,

I'd like to propose Vitaly Gridnev as a member of the Sahara core
reviewer team.

Vitaly has been contributing to Sahara for a long time and doing a great job
on reviewing and improving Sahara. Here are the statistics for reviews
[0][1][2] and commits [3].

Existing Sahara core reviewers, please vote +1/-1 for the addition of
Vitaly to the core reviewer team.

Thanks.

[0]
https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
[1] http://stackalytics.com/report/contribution/sahara-group/180
[2] http://stackalytics.com/?metric=marks_id=vgridnev
[3]
https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z

--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-12 Thread Sean Dague
On 10/09/2015 07:14 PM, Clint Byrum wrote:

> I don't think we're suggesting that we abandon the current one. We don't
> break userspace!
> 
> However, replacing the underpinnings of the current one with the new one,
> and leaving the current one as a compatibility layer _is_ a way to get
> progress on the new one without shafting users. So I think considerable
> consideration should be given to an approach where we limit working on
> the core of the current solution, and replace that core with the new
> solution + compatibility layer.
> 
>> And, as I've definitely discovered through this process the Service
>> Catalog today has been fluid enough that where it is used, and what
>> people rely on in it, isn't always clear all at once. For instance,
>> tenant_ids in urls are very surface features in Nova (we don't rely on
>> it, we're using the context), don't exist at all in most new services,
>> and are very corely embedded in Swift. This is part of what has also
>> required the service catalog to be embedded in the Token, which causes token
>> bloat, and has led to other features that try to shrink the catalog by
>> filtering it by what a user is allowed. Which in turn ended up being
>> used by Horizon to populate the feature matrix users see.
>>
>> So we're pulling on a thread, and we have to do that really carefully.
>>
>> I think the important thing is to focus on what we have in 6 months
>> doesn't break current users / applications, and is incrementally closer
>> to our end game. That's the lens I'm going to keep putting on this one.
>>
> 
> Right, so adopting a new catalog type that we see as the future, and
> making it the backend for the current solution, is the route I'd like
> to work toward. If we get the groundwork laid for that, but we don't
> make any user-visible improvement in 6 months, is that a failure or a win?

I consider it a fail. The issues with the service catalog today aren't
backend issues, they are front-end issues: how that data is
represented and consumed. Changes in that representation will require
applications to adapt, so that is the long pole in the tent.

I feel pretty strongly we have to start on the UX, and let backends
match once we get better interaction. That also lets you see whether or
not you are getting any uptake on the new approach before you go and
spend a ton of time retooling the backend.

-Sean


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] gates broken by WebOb 1.5 release

2015-10-12 Thread Victor Stinner

Le 12/10/2015 10:43, Victor Stinner a écrit :

WebOb 1.5 was released yesterday. test_misc of Cinder starts failing
with this release. I wrote this simple fix which should be enough to
repair it:

https://review.openstack.org/233528
"Fix test_misc for WebOb 1.5"


FYI my fix was merged in Cinder. You may have to recheck your changes.

The bug report: https://bugs.launchpad.net/bugs/1505153

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2015-10-12 Thread Alex Xu
Hi,

We have weekly Nova API meeting this week. The meeting is being held
Tuesday UTC1200.

In other timezones the meeting is at:

EST 08:00 (Tue)
Japan 21:00 (Tue)
China 20:00 (Tue)
United Kingdom 13:00 (Tue)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-12 Thread Thomas Goirand
On 10/12/2015 02:16 AM, Robert Collins wrote:
> On 10 October 2015 at 02:58, Cory Benfield  wrote:
>>
>>> On 9 Oct 2015, at 14:40, William M Edmonds  wrote:
>>>
>>> Cory Benfield  writes:
> The problem that occurs is the result of a few interacting things:
>  - requests has very very specific versions of urllib3 it works with.
> So specific they aren't always released yet.

 This should no longer be true. Our downstream redistributors pointed out to
 us that this was making their lives harder than they needed to be, so it's
 now our policy to only update to actual release versions of urllib3.
>>>
>>> That's great... except that I'm confused as to why requests would continue 
>>> to repackage urllib3 if that's the case. Why not just prereq the version of 
>>> urllib3 that it needs? I thought the one and only answer to that question 
>>> had been so that requests could package non-standard versions.
>>>
>>
>> That is not and was never the only reason for vendoring urllib3. However, 
>> and I cannot stress this enough, the decision to vendor urllib3 is *not 
>> going to be changed on this thread*. If and when it changes, it will be by 
>> consensus decision from the requests maintenance team, which we do not have 
>> at this time.
>>
>> Further, as I pointed out to Donald Stufft on IRC, if requests unbundled 
>> urllib3 *today* that would not fix the problem. The reason is that we’d 
>> specify our urllib3 dependency as: urllib3>=1.12,<1.13. This dependency note 
>> would still cause exactly the problem observed in this thread.
> 
> Actually, that would fix the problem (in conjunction with a fix to
> https://github.com/pypa/pip/issues/2687 - which is conceptually
> trivial once 988 is fixed).
> 
>> As you correctly identify in your subsequent email, William, the core 
>> problem is mixing of packages from distributions and PyPI. This happens with 
>> any tool with external dependencies: if you subsequently install a different 
>> version of a dependency using a packaging tool that is not aware of some of 
>> the dependency tree, it is entirely plausible that an incompatible version 
>> will be installed. It’s not hard to trigger this kind of thing on Ubuntu. 
>> IMO, what OpenStack needs is a decision about where it’s getting its 
>> packages from, and then to refuse to mix the two.
> 
> We can't do that for all our downstreams. Further, Ubuntu preserve
> dependency information - I think a key underlying issue is that they
> don't fix up the dependency data for requests when they alter it. I've
> filed https://bugs.launchpad.net/ubuntu/+source/python-requests/+bug/1505039
> to complement the one filed on Fedora earlier in this thread.

Well, since Ubuntu uses the Debian package simply by syncing from
Debian, I would suggest filing a bug against the Debian package instead
[1].

> Obviously a trivial workaround is to always use virtualenvs and not
> system-site-packages.

or opposite way... always use system-site-packages! :)

Has the infra team ever thought about doing that for (at least) all of
the 3rd party libs we use? I'd love to work closer with the infra team
to provide them with missing packages they would need, and I'm sure my
RPM buddy Haikel would too. This also would help getting our
openstack/{deb,rpm}- projects up to speed as well.

> To sum up the thread, it sounds to me like a viable way-forward is:
> 
>  - get distros to fixup their requests Python dependencies (and
> hopefully they can update that in stable releases).

The maintainer for both urllib3 and requests is a single person, so I'm
quite sure he is aware of the issue and makes sure that both packages
are compatible. Otherwise, opening bugs in Debian [1] to have him fix it
would obviously work. I don't believe that adding version upper bounds
in the deb package would solve anything; we just need to avoid having
incompatible versions co-exist in the distro at any given moment.

Cheers,

Thomas Goirand (zigo)

[1] https://www.debian.org/Bugs/Reporting (note: I know Robert knows how
to file a bug in Debian, so this is obviously not for him that I'm
giving this URL... :) )


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cross-Project track topic proposals

2015-10-12 Thread Flavio Percoco

On 23/09/15 10:45 +0200, Flavio Percoco wrote:

Greetings,

The community is in the process of collecting topics for the
cross-project track that we'll have at the Mitaka summit.

The good ol' OSDREG has been set up[0] to help collect these topics
and we'd like to encourage the community to propose sessions there.

During the TC meeting last night, it was pointed out that some people
have already proposed sessions on this[1] etherpad. I'd also like to
ask these folks to move these proposals to OSDREG as that's the tool
the cross-project track committee will be using as a reference for
proposals.

The deadline for proposing topics for the cross-project track is
October 9th. That will leave the committee roughly 2 weeks to review
the proposals and schedule them before the summit.


As mentioned in this thread, the proposal period ended on October 9th
and we'll now proceed to vote and organize the proposals. There will
be a meeting[0] to discuss the topics and start voting and shifting
them around.

[0] http://lists.openstack.org/pipermail/openstack-tc/2015-October/001024.html

Cheers,
Flavio


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-12 Thread Thomas Goirand
Note: it's not my intention to restart a flame war about the vendorizing of
urllib3 in requests; however, I can't let you write wrong things, and
the point of this message is only that - not discussing whether requests
should stop vendorizing (I gave up long ago on any attempt to
convince upstream about this...).

On 10/09/2015 09:21 AM, Cory Benfield wrote:
> Additionally, getting *all* of
> Fedora/Debian/Ubuntu on board with not unbundling requests is about as likely
> as hell freezing over.

Debian + Ubuntu is only a single maintainer (Daniele Tricoli, plus the
Debian Python Module Team), and he will (for very valid reasons) not
want to do it. I support his decision.

The issue here isn't downstream distros though, but an upstream who
don't want to care about downstream. We're only trying to deal with
this, let's not spread the propaganda that the issue is in downstream
distros: it is *not*.

On 10/09/2015 03:58 PM, Cory Benfield wrote:
> As you correctly identify in your subsequent email, William, the
> core problem is mixing of packages from distributions and PyPI.

It's not. The core problem is vendorizing. It would work perfectly to do
such mix-up otherwise (see below).

On 10/09/2015 03:58 PM, Cory Benfield wrote:
> This happens with any tool with external dependencies: if you
> subsequently install a different version of a dependency using a
> packaging tool that is not aware of some of the dependency tree, it
> is entirely plausible that an incompatible version will be installed.
> It’s not hard to trigger this kind of thing on Ubuntu. IMO, what
> OpenStack needs is a decision about where it’s getting its packages
> from, and then to refuse to mix the two.

I regret saying it in such a direct way, but this is simply false.
Pip-installed packages co-exist very easily with system-installed
packages, because there's a central registry composed of a collection
of egg-info files. The virtualenv version simply takes precedence over
the system version. If no package does vendorizing, mixing system
packages and a virtualenv with pip install works perfectly. There's only
one issue with this: Python 3 and namespaces. This is, by the way, one of
the reasons we removed the namespaces from the Oslo libs: it didn't work
well with Python 3 and namespaces when running tests for distributions.
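
A quick way to see the coexistence for yourself (requests is just an example
package here):

  virtualenv --system-site-packages venv
  ./venv/bin/pip install -U requests      # the venv gets its own copy
  ./venv/bin/python -c 'import requests; print(requests.__version__)'   # venv copy wins
  python -c 'import requests; print(requests.__version__)'              # distro copy untouched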

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-12 Thread Sean Dague
On 10/12/2015 07:12 AM, Dmitry Tantsur wrote:
> On 10/09/2015 05:41 PM, Dmitry Tantsur wrote:
>> On 10/09/2015 12:58 PM, Dmitry Tantsur wrote:
>>> On 10/09/2015 12:35 PM, Sean Dague wrote:
  From now until the removal of devstack extras.d support I'm going to
 send a weekly email of jobs that may break. A warning was added that we
 can track in logstash.

 Here are the top 25 jobs (by volume) that are currently tripping the
 warning:

 gate-murano-devstack-dsvm
 gate-cue-integration-dsvm-rabbitmq
 gate-murano-congress-devstack-dsvm
 gate-solum-devstack-dsvm-centos7
 gate-rally-dsvm-murano-task
 gate-congress-dsvm-api
 gate-tempest-dsvm-ironic-agent_ssh
 gate-solum-devstack-dsvm
 gate-tempest-dsvm-ironic-pxe_ipa-nv
 gate-ironic-inspector-dsvm-nv
 gate-tempest-dsvm-ironic-pxe_ssh
 gate-tempest-dsvm-ironic-parallel-nv
 gate-tempest-dsvm-ironic-pxe_ipa
 gate-designate-dsvm-powerdns
 gate-python-barbicanclient-devstack-dsvm
 gate-tempest-dsvm-ironic-pxe_ssh-postgres
 gate-rally-dsvm-designate-designate
 gate-tempest-dsvm-ironic-pxe_ssh-dib
 gate-tempest-dsvm-ironic-agent_ssh-src
 gate-tempest-dsvm-ironic-pxe_ipa-src
 gate-muranoclient-dsvm-functional
 gate-designate-dsvm-bind9
 gate-tempest-dsvm-python-ironicclient-src
 gate-python-ironic-inspector-client-dsvm
 gate-tempest-dsvm-ironic-lib-src-nv

 (You can view this query with http://goo.gl/6p8lvn)

 The ironic jobs are surprising, as something is crudding up extras.d
 with a file named 23, which isn't currently run. Eventual removal of
 that directory is going to potentially make those jobs fail, so someone
 more familiar with it should look into it.
>>>
>>> Thanks for noticing, looking now.
>>
>> As I'm leaving for the weekend, I'll post my findings here.
>>
>> I was not able to spot what writes these files (in my case it was named
>> 33). I also was not able to reproduce it on my regular devstack
>> environment.
>>
>> I've posted a temporary patch https://review.openstack.org/#/c/233017/
>> so that we're able to track where and when these files appear. So far I
>> have only established that they really appear during the devstack run, not
>> earlier.
> 
> So, no file seems to be created, so it looks like a problem in devstack:
> https://review.openstack.org/#/c/233584/

Oh, good catch. We can fix it by making i a local instead, though; I'll
spin a patch for that.
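
For anyone not following: the gotcha is the usual bash one - a loop variable
inside a sourced function is global unless declared local. A made-up
illustration (not the actual devstack code):

  leak()  { for i in 23 33; do :; done; }           # "i" escapes the function
  fixed() { local i; for i in 23 33; do :; done; }  # "i" stays local
  leak;  echo "i=$i"           # prints i=33
  unset i
  fixed; echo "i=${i:-unset}"  # prints i=unset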

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team

2015-10-12 Thread Sergey Reshetnyak
+1
Great work, Vitaly!

--
Sergey Reshetnyak

2015-10-12 14:19 GMT+03:00 Sergey Lukjanov :

> Hi folks,
>
> I'd like to propose Vitaly Gridnev as a member of the Sahara core reviewer
> team.
>
> Vitaly has been contributing to Sahara for a long time and doing a great job
> on reviewing and improving Sahara. Here are the statistics for reviews
> [0][1][2] and commits [3].
>
> Existing Sahara core reviewers, please vote +1/-1 for the addition of
> Vitaly to the core reviewer team.
>
> Thanks.
>
> [0]
> https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
> [1] http://stackalytics.com/report/contribution/sahara-group/180
> [2] http://stackalytics.com/?metric=marks_id=vgridnev
> [3]
> https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-12 Thread Thomas Goirand
On 10/09/2015 02:39 AM, Robert Collins wrote:
> - get the distros to stop un-vendoring urllib3

I'm not the package maintainer of requests, but I know that Barry
Warsaw, the actual maintainer, will not want to do that.

The other solutions which you didn't mention:

1- Stop using a vendorized version of requests, and fork that project to
make it use external dependencies as it should have from the start.

2- Convince upstream to stop vendorizing urllib3 and work more with
upstream of urllib3 to have them release when they need it.

3- Always use the distro version of requests, never the one from venv

While 1- is not realistic (unless someone volunteers), and we already
tried 2- without luck, 3- can happen easily.

BTW, the same applies to tablib, which is in an even more horrible state
that makes it impossible to package with Py3 support. But tablib could
be removed from our (build-)dependency list if someone cares about
re-writing cliff-tablib, which IMO wouldn't be that much work. Doug, how
many beers shall I offer you for that work? :)

Just my 2 cents, hoping to provide some contribution to this discussion,
Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][Horizon] UI for pools and flavors

2015-10-12 Thread Fei Long Wang



On 12/10/15 20:36, Matthias Runge wrote:


On 12/10/15 09:25, Flavio Percoco wrote:

On 10/10/15 21:07 +0530, Shifali Agrawal wrote:

Greetings!

I have prepared mock-ups[1],[2] to build Zaqar UI, at present
focusing only to bring pools and flavors on dashboard. Sharing
two mock-ups for same purpose - allowing all operations related
to them(CRUD).

It will be great to know the views of zaqar developers/users if
the design is satisfactory to them or they want some amendments.
Also let me know if any information of pools/flavors is missing
and need to be added.

In first mockup[1] showing pools information by default and will
show flavors if user click on flavors button present on top
second menu bar.

[1]: http://tinyurl.com/o2b9q6r [2]: http://tinyurl.com/pqwgwhl

I'm adding `[horizon]` to the subject to get more folks'
attention.


Thank you for bringing this up!

maybe it's necessary to give a brief context for those mockups?
+1. For now, it would be nice if we could figure out where to host the 
source code: a new dashboard project, or in Horizon.

What is a pool here?
A pool is like a container used to organize queues/messages. An admin can 
create many different pools based on their capabilities.

  why is a pool weighted (or how)
When there are many pools, the pool with the higher weight will be selected 
for storing posted messages.

and isn't a
pool and a pool group somehow the same?

A pool group is more like a container used to organize a group of pools.

If you have pools, what is the intention of flavors then?
An admin can create a flavor based on a pool group's capabilities. A user can 
then create a queue with that flavor to take advantage of the pool group. For 
example, say there are two pool groups: one named 'fast storage', which 
contains pools pool_A and pool_B, and one named 'reliable storage', which 
contains pool_C and pool_D. The admin can then create a flavor named 
'fast_storage' backed by the first group. When a user creates a new queue and 
sets its flavor to 'fast_storage', the messages posted to that queue will be 
stored in pool_A and pool_B.

Just to ask the obvious questions here

Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #55

2015-10-12 Thread Emilien Macchi
Hello!

Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
in #openstack-meeting-4:

https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151013

Feel free to add any items you'd like to discuss.
If our schedule allows it, we'll do bug triage during the meeting.

Regards,
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-12 Thread Thomas Goirand
On 10/09/2015 03:39 PM, William M Edmonds wrote:
> When you're using a distro, you're always going to have to worry about
> someone pip installing something that conflicts with the rpm, no?

The point of this thread is: no, you don't. You only do if some (bad)
upstream decides to vendorize.

> Unless the distros have a way to put in protection against
> this, preventing pip install of something that is already installed by RPM?

Well, the point of pip is that you can use it to install a package
which may already be installed in the system when you want another
version, and then the two co-exist without an issue.

I don't think we want to remove this feature (or at least, we'd need to
have a kind of global switch for that if we were to implement it in pip).

>>  - make sure none of our testing environments include distro
>> requests packages.
> 
> It's not like requests is an unusual package for someone to have
> installed from their distro in a base OS image. So when they take that
> base OS and go to setup OpenStack, they'll be hitting this case, whether
> we tested it or not. So while not testing this case seems nice from a
> development perspective, it doesn't seem to fit real-world usage. I
> don't think it would make operators very happy.

This thread has nothing to do with operators. Operators typically
install from distro packages only (unless they do what Helion does,
which is pretty rare...) and wouldn't be affected by this problem.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-12 Thread Jeremy Stanley
On 2015-10-12 15:40:48 +0200 (+0200), Thomas Goirand wrote:
[...]
> Has the infra team ever thought about doing that for (at least) all of
> the 3rd party libs we use? I'd love to work closer with the infra team
> to provide them with missing packages they would need, and I'm sure my
> RPM buddy Haikel would too. This also would help getting our
> openstack/{deb,rpm}- projects up to speed as well.
[...]

Long ago there was an idea that we might somehow be able to
near-instantly package anything on PyPI and serve RPMs and DEBs of
it up to CI jobs, but doing that would be a pretty massive (and in
my opinion very error-prone) undertaking.

Right now we can take advantage of the fact that the Python
ecosystem uploads new releases to PyPI as their primary distribution
channel. By using pip to install dependencies from PyPI in our CI
system, we can pretty instantly test compatibility of our software
with new releases of dependencies (much faster than they can get
properly packaged in distros), and easily test different versions by
proposing changes to the openstack/requirements repository.

The only way I see this changing is if authors of Python libraries
switch to packaging their own software for major distributions and
have a path to get them included by those distributions
near-instantly. Also, distros would have to cease caring about
reducing the number of concurrent versions of libraries they provide
(and I posit that as soon as debian/sid ships a DEB for every
version ever released for packages like python-requests and
python-urllib3, apt-get will begin to exhibit similar dependency
resolution challenges).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team

2015-10-12 Thread Ethan Gafford
I'm a very hearty +1 to this; Vitaly's a critical driver of both reviews and 
features in Sahara.

Cheers,
Ethan

- Original Message -
From: "michael mccune" 
To: openstack-dev@lists.openstack.org
Sent: Monday, October 12, 2015 8:49:18 AM
Subject: Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer 
team

i'm +1 for this, Vitaly has been doing a great job contributing code and 
reviews to the project.

mike

On 10/12/2015 07:19 AM, Sergey Lukjanov wrote:
> Hi folks,
>
> I'd like to propose Vitaly Gridnev as a member of the Sahara core
> reviewer team.
>
> Vitaly has been contributing to Sahara for a long time and doing a great job of
> reviewing and improving Sahara. Here are the statistics for reviews
> [0][1][2] and commits [3].
>
> Existing Sahara core reviewers, please vote +1/-1 for the addition of
> Vitaly to the core reviewer team.
>
> Thanks.
>
> [0]
> https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
> [1] http://stackalytics.com/report/contribution/sahara-group/180
> [2] http://stackalytics.com/?metric=marks&user_id=vgridnev
> [3]
> https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cross-project meeting, Tue Oct 13th, 21:00 UTC

2015-10-12 Thread Emilien Macchi
Dear PTLs, cross-project liaisons, and interested team members,

We'll have a cross-project meeting tomorrow at 21:00 UTC, with the
following agenda:

https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda

   - Review past action items
   - Team announcements (horizontal, vertical, diagonal)
   - New API Guidelines ready for cross project review
  - This topic is only added for CPL visibility. Comments should go
  into the review linked to below.
  - #link https://review.openstack.org/#/c/214817/
  - #link https://review.openstack.org/#/c/221163/
   - Skip the meetings for the next two weeks?
   - Open discussion

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

See you there,
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][ec2-api][nova] Future of Tempest ec2 tests

2015-10-12 Thread Alexandre Levine

Hi Matthew,

I'd like to provide you with some info in regard to standalone EC2 API part.
I can't claim a clear understanding of tempest development directions, but
here are the things to take into account about the EC2 API:


1. The standalone EC2 API carries all of its tests, including the previous
tempest ones, as in-project functional tests.
2. As far as I recall, all of the original tempest EC2 API tests are
covered (not true for all S3 tests, though).
3. We're trying to use RefStack for checking EC2 API compatibility as
well, and RefStack works with tempest tests, so we'd have to somehow
connect our tests with RefStack.


Most probably I'll be in Tokyo, if you want to discuss it in more detail.

Best regards,
  Alex Levine

On 10/12/15 5:41 PM, Matthew Treinish wrote:

Hi everyone,

I think it's time we revisit having ec2 api tests in the tempest tree. They have
always been something that was really out of scope for tempest, since tempest is
about using and testing the OpenStack APIs, not OpenStack's implementation of
amazon's APIs. We only kept these tests around because nova was still providing
and "supporting" the ec2 implementation in tree. However, with nova deprecating
their ec2 api implementation and the start of the standalone ec2-api project as
well as the start of the tempest external test plugin interface I think mitaka
is the time to drop the in-tree tempest support for ec2 and move it to an
external plugin.
  
I'd like to have the tempest tests fully moved to an external plugin and out of

the tempest tree by mitaka-1. To that end I started the work of creating a
plugin:

https://github.com/mtreinish/tempest_ec2

and pushed up patches to remove ec2 from tempest:

https://review.openstack.org/222737

and a test patch using the plugin:

https://review.openstack.org/222740
(look at the periodic-tempest-dsvm-all-master results)

The plugin doesn't quite work yet, since I just copy and pasted the test code,
so there are a couple things that still need to be fixed. But I got all of it
importing correctly and it attempts to run all the tests.

But, I really need someone to take things over, since I have limited time to
continue pushing this, and definitely don't have the throughput to continue
maintaining the plugin after we get things working. What I have up on github
should be a good starting point that's easy to take over.

Thanks,

Matthew Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team

2015-10-12 Thread Alexander Ignatov
+1. Thank you for your contributions, Vitaly!

Regards,
Alexander Ignatov



> On 12 Oct 2015, at 17:06, Ethan Gafford  wrote:
> 
> I'm a very hearty +1 to this; Vitaly's a critical driver of both reviews and 
> features in Sahara.
> 
> Cheers,
> Ethan
> 
> - Original Message -
> From: "michael mccune" 
> To: openstack-dev@lists.openstack.org
> Sent: Monday, October 12, 2015 8:49:18 AM
> Subject: Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core 
> reviewer team
> 
> i'm +1 for this, Vitaly has been doing a great job contributing code and 
> reviews to the project.
> 
> mike
> 
> On 10/12/2015 07:19 AM, Sergey Lukjanov wrote:
>> Hi folks,
>> 
>> I'd like to propose Vitaly Gridnev as a member of the Sahara core
>> reviewer team.
>> 
>> Vitaly has been contributing to Sahara for a long time and doing a great job of
>> reviewing and improving Sahara. Here are the statistics for reviews
>> [0][1][2] and commits [3].
>> 
>> Existing Sahara core reviewers, please vote +1/-1 for the addition of
>> Vitaly to the core reviewer team.
>> 
>> Thanks.
>> 
>> [0]
>> https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>> [1] http://stackalytics.com/report/contribution/sahara-group/180
>> [2] http://stackalytics.com/?metric=marks&user_id=vgridnev
>> [3]
>> https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>> 
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][ec2-api][nova] Future of Tempest ec2 tests

2015-10-12 Thread Matthew Treinish
Hi everyone,

I think it's time we revisit having ec2 api tests in the tempest tree. They have
always been something that was really out of scope for tempest, since tempest is
about using and testing the OpenStack APIs, not OpenStack's implementation of
amazon's APIs. We only kept these tests around because nova was still providing
and "supporting" the ec2 implementation in tree. However, with nova deprecating
their ec2 api implementation and the start of the standalone ec2-api project as
well as the start of the tempest external test plugin interface I think mitaka
is the time to drop the in-tree tempest support for ec2 and move it to an
external plugin.
 
I'd like to have the tempest tests fully moved to an external plugin and out of
the tempest tree by mitaka-1. To that end I started the work of creating a
plugin:

https://github.com/mtreinish/tempest_ec2

and pushed up patches to remove ec2 from tempest:

https://review.openstack.org/222737

and a test patch using the plugin:

https://review.openstack.org/222740
(look at the periodic-tempest-dsvm-all-master results)

The plugin doesn't quite work yet, since I just copy and pasted the test code,
so there are a couple things that still need to be fixed. But I got all of it
importing correctly and it attempts to run all the tests.
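
For whoever picks this up, the shape of the plugin hook is roughly the
following. This is only a sketch of the tempest external plugin interface as I
understand it (the class name, test directory and option handling are
placeholders, not code from the tempest_ec2 repo), so check it against the
tempest plugin documentation before relying on it.

import os

from tempest.test_discover import plugins


class Ec2TempestPlugin(plugins.TempestPlugin):
    """Hook the ec2 tests into tempest's plugin-based test discovery."""

    def load_tests(self):
        # Return (full path to the tests, base path of the package) so that
        # tempest's discovery can find the ec2 test modules.
        base_path = os.path.split(os.path.dirname(
            os.path.abspath(__file__)))[0]
        test_dir = "tempest_ec2/tests"
        return os.path.join(base_path, test_dir), base_path

    def register_opts(self, conf):
        # Register any ec2-specific options with the tempest config object.
        pass

    def get_opt_lists(self):
        # Expose option groups so they show up in config generation.
        return []

The class is then advertised through a 'tempest.test_plugins' entry point in
the plugin project's setup.cfg, which is what lets tempest discover and run
the tests without them living in the tempest tree.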

But, I really need someone to take things over, since I have limited time to
continue pushing this, and definitely don't have the throughput to continue
maintaining the plugin after we get things working. What I have up on github
should be a good starting point that's easy to take over.

Thanks,

Matthew Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Anyone tried to mix-use openstack components or projects?

2015-10-12 Thread Salvatore Orlando
Inline,
Salvatore

On 12 October 2015 at 10:23, Germy Lure  wrote:

> Thank you, Kevin.
> So the community just divided the whole openstack into separate
> sub-projects (Nova, Neutron, etc.), but it wasn't taken into account whether
> those modules can work together with different versions. Yes?
>

The developer community has been addressing this by ensuring, to some
extent, backward compatibility between the APIs used for communicating
across services. This is what allows a component at version X to operate
with another component at version Y.

In the case of Neutron and Nova, this is only done with REST over HTTP.
Other projects also use RPC over AMQP.
Neutron has strived to be backward compatible since the v2 API was introduced
in Folsom. Therefore you should be able to run Neutron Kilo with Nova
Havana; as Kevin noted, you might want to disable notifications on the
Neutron side as the nova extension that processes them does not exist in
Havana.
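
(For the record, and as an assumption based on the options available in
current Neutron rather than anything spelled out in this thread, disabling
those notifications usually comes down to setting, in neutron.conf:

    notify_nova_on_port_status_changes = False
    notify_nova_on_port_data_changes = False

so that Neutron stops calling the Nova extension mentioned above.)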



>
> If so, is it possible to keep being compatible with each other in
> technology? How about just N+1? And how about just in Neutron?
>

While it is surely possible, enforcing this, as far as I can tell, is not a
requirement for Openstack projects. Indeed, it is not something which is
tested in the gate. It would be interesting to have it as a part of a
rolling upgrade test for an OpenStack cloud, where, for instance, you first
upgrade the networking service and then the compute service. But beyond
that I do not think the upstream developer community should provide any
additional guarantee, notwithstanding guarantees on API backward
compatibility.


> Germy
> .
>
> On Sun, Oct 11, 2015 at 4:33 PM, Kevin Benton  wrote:
>
>> For the particular Nova Neutron example, the Neutron Kilo API should
>> still be compatible with the calls Havana Nova makes. I think you will need
>> to disable the Nova callbacks on the Neutron side because the Havana
>> version wasn't expecting them.
>>
>> I've tried out many N+1 combinations (e.g. Icehouse + Juno, Juno + Kilo)
>> but I haven't tried a gap that big.
>>
>> Cheers,
>> Kevin Benton
>>
>> On Sat, Oct 10, 2015 at 1:50 AM, Germy Lure  wrote:
>>
>>> Hi all,
>>>
>>> As you know, openstack projects are developed separately. And
>>> theoretically, people can create networks with Neutron in Kilo version for
>>> Nova in Havana version.
>>>
>>> Did Anyone tried it?
>>> Do we have some pages to show what combination can work together?
>>>
>>> Thanks.
>>> Germy
>>> .
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Kevin Benton
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Reminder: Team meeting on Monday at 2100 UTC

2015-10-12 Thread Armando M.
A kind reminder for today's meeting.

Please add agenda items to the meeting here [1].

See you all in a few hours!

Armando

[1] https://wiki.openstack.org/wiki/Network/Meetings
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Monty Taylor

On 10/12/2015 12:43 PM, Clint Byrum wrote:

Excerpts from Thomas Goirand's message of 2015-10-12 05:57:26 -0700:

On 10/11/2015 02:53 AM, Davanum Srinivas wrote:

Thomas,

i am curious as well. AFAIK, cassandra works well with OpenJDK. Can you
please elaborate what your concerns are for #1?

Thanks,
Dims


s/works well/works/

Upstream doesn't test against OpenJDK, and they close bugs without
fixing them when it only affects OpenJDK and it isn't grave. I know this
from one of the upstream from Cassandra, who is also a Debian developer.
Because of this state of things, he gave up on packaging Cassandra in
Debian (and for other reasons too, like not having enough time to work
on the packaging).

I trust what this Debian developer told me. If I remember correctly,
it's Eric Evans  (ie, the author of the ITP at
https://bugs.debian.org/585905) that I'm talking about.



Indeed, I once took a crack at packaging it for Debian/Ubuntu too.
There's a reason 'apt-cache search cassandra' returns 0 results on Debian
and Ubuntu.


There is a different reason too - which is that (at least at one point 
in the past) upstream expressed frustration with the idea of distro 
packages of Cassandra because it led to people coming to them with 
complaints about the software which had been fixed in newer versions but 
which, because of distro support policies, were not present in the 
user's software version. (I can sympathize)


I think they've been an excellent case study in how there is sometimes an 
impedance mismatch between the value that distros provide and the needs of 
particular communities. That's not a negative thought towards either of 
them - just that the mismatch isn't limited to them.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Design Summit schedule - update

2015-10-12 Thread Armando M.
Hi folks,

As some of you may know, this summit we have 12 fishbowl sessions between
Wednesday and Thursday, and a full day on Friday for team get-together.

We broke down the 12 sessions in three separate tracks:

   - https://etherpad.openstack.org/p/mitaka-neutron-core-track
   - https://etherpad.openstack.org/p/mitaka-neutron-next-track
   - https://etherpad.openstack.org/p/mitaka-neutron-labs-track

Each track has its own theme and more details can be found on their
respective etherpads. Each session has a chair, and we'll work together to
prepare the content of the etherpad also based on the input provided in:

https://etherpad.openstack.org/p/neutron-mitaka-designsummit

The Friday's etherpad is available here:

   - https://etherpad.openstack.org/p/mitaka-neutron-unplugged-track
   

If you are planning to gather people together, please add your name and
topic to the list so that people can sign-up for attendance.

This summit, as at any other summit, we'll have a lightning talks session:

   - https://etherpad.openstack.org/p/mitaka-neutron-labs-lighting-talks

Put your ideas down and 6 topics will be selected for the 5-minute talks.

Cheers,
Armando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday October 13th at 19:00 UTC

2015-10-12 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday October 13th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-10-06-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-10-06-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-10-06-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2015-10-12 08:35:20 -0700:
> Thierry Carrez wrote:
> > Clint Byrum wrote:
> >> Excerpts from Joshua Harlow's message of 2015-10-10 17:43:40 -0700:
> >>> I'm curious is there any more detail about #1 below anywhere online?
> >>>
> >>> Does cassandra use some features of the JVM that the openJDK version
> >>> doesn't support? Something else?
> >> This about sums it up:
> >>
> >> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StartupChecks.java#L153-L155
> >>
> >>  // There is essentially no QA done on OpenJDK builds, and
> >>  // clusters running OpenJDK have seen many heap and load issues.
> >>  logger.warn("OpenJDK is not recommended. Please upgrade to the newest 
> >> Oracle Java release");
> >
> > Or:
> > https://twitter.com/mipsytipsy/status/596697501991702528
> >
> > This is one of the reasons I'm generally negative about Java solutions
> > (Cassandra or Zookeeper): the free software JVM is still not on par with
> > the non-free one, so we indirectly force our users to use a non-free
> > dependency. I've been there before often enough to hear "did you
> > reproduce that bug under the {Sun,Oracle} JVM" quite a few times.
> 
> I'd be happy to 'fight' for (and even fix) any issues found with 
> zookeeper + openjdk if needed; that twitter posting hopefully ended up 
> in a bug being filed at https://issues.apache.org/jira/browse/ZOOKEEPER/ 
> and hopefully things getting fixed...
> 
> >
> > When the Java solution is the only solution for a problem space that
> > might still be a good trade-off (compared to reinventing the wheel for
> > example), but to share state or distribute locks, there are some pretty
> > good other options out there that don't suffer from the same fundamental
> > problem...
> >
> 
> IMHO it's the only 'mature' solution so far; but of course maturity is a 
> relative thing (look at the project age, version number of zookeeper vs 
> etcd, consul for a general idea around this); in general I'd really like 
> the TC and the foundation to help make the right decision here, because 
> this kind of choice affects the long-term future (and health) of 
> openstack as a whole (or I believe it does).
> 

Zookeeper sits in a very different space from Cassandra. I have had good
success with it on OpenJDK as well.

That said, we need to maybe go through some feature/risk matrices and
compare to etcd and Consul (this might be good to do as part of filling
out the DLM spec). The jvm issue goes away with both of those, but then
we get to deal with Go issues.

Also, ZK has one other advantage over those: It is already in Debian and
Ubuntu, making access for developers much easier.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Clint Byrum
Excerpts from Boris Pavlovic's message of 2015-10-11 01:14:08 -0700:
> Clint,
> 
> There are many PROS and CONS in both of approaches.
> 
> Reinventing wheel (in this case it's quite simple task) and it gives more
> flexibility and doesn't require
> usage of ZK/Consul (which will simplify integration of it with current
> system)
> 
> Using ZK/Consul for POC may save a lot of time and as well we are
> delegating part of work
> to other communities (which may lead in better supported/working code).
> 
> By the way some of the parts (like sync of schedulers) stuck on review in
> Nova project.
> 
> Basically for POC we can use anything and using ZK/Consul may reduce
> resources for development
> which is good.
> 

Awesome, I think we are aligned.

So, let's try and come up with a set of next steps to see a POC.

1) Let's try and get some numbers at the upper bounds of the current
scheduler with one and multiple schedulers. We can actually turn this
into a gate test harness, as we don't _actually_ care about the vms,
so this is an excellent use for the fake virt driver. In addition to
"where it breaks", I'd also like to see graphs of what it does to the
database and MQ bus. This aligns with the performance discussions that
will be happening as a sub-group of the large operators group, so I
think we can gather support for such an effort there.

2) Let's resolve which backend thing to use in the DLM spec. I have a
strong desire to consider the needs of DLM and the needs of scheduling
together. If the DLM discussion is tied, or nearly tied, on a few
choices, but one of the choices is better for the scheduler, it may
help the discussion. It may also hurt if one is more desirable for DLM,
and one is more desirable for scheduling. My gut says that they'll all
be suitable for both of these tasks, and it will boil down to binary
access and operator preference.

3) POC goes to the first person with free time. It's been my experience
that people come free at somewhat unexpected intervals, and I don't
want anyone to wait too long for consensus. So if anyone who agrees
with this direction gets time, I say, write a spec, get it out there,
and experiment with code.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Boris Pavlovic's message of 2015-10-11 01:14:08 -0700:

Clint,

There are many PROS and CONS in both of approaches.

Reinventing wheel (in this case it's quite simple task) and it gives more
flexibility and doesn't require
usage of ZK/Consul (which will simplify integration of it with current
system)

Using ZK/Consul for POC may save a lot of time and as well we are
delegating part of work
to other communities (which may lead in better supported/working code).

By the way some of the parts (like sync of schedulers) stuck on review in
Nova project.

Basically for POC we can use anything and using ZK/Consul may reduce
resources for development
which is good.



Awesome, I think we are aligned.

So, let's try and come up with a set of next steps to see a POC.

1) Let's try and get some numbers at the upper bounds of the current
scheduler with one and multiple schedulers. We can actually turn this
into a gate test harness, as we don't _actually_ care about the vms,
so this is an excellent use for the fake virt driver. In addition to
"where it breaks", I'd also like to see graphs of what it does to the
database and MQ bus. This aligns with the performance discussions that
will be happening as a sub-group of the large operators group, so I
think we can gather support for such an effort there.


Just a related thought/question. It really seems we (as a community) 
need some kind of scale testing ground. Internally at yahoo we were/are 
going to use a 200 hypervisor cluster for some of this and then expand 
that into 200 * X by using nested virtualization and/or fake drivers and 
such. But this is a 'lab' that not everyone can have, and therefore 
isn't suited toward community work IMHO. Has there been any thought on 
such a 'lab' that is directly in the community, perhaps trystack.org can 
be this? (users get free VMs, but then we can tell them this area is a 
lab, so don't expect things to always work, free isn't free after all...)


With such a lab, there could be these kinds of experiments, graphs, 
tweaks and such...




2) Let's resolve which backend thing to use in the DLM spec. I have a
strong desire to consider the needs of DLM and the needs of scheduling
together. If the DLM discussion is tied, or nearly tied, on a few
choices, but one of the choices is better for the scheduler, it may
help the discussion. It may also hurt if one is more desirable for DLM,
and one is more desirable for scheduling. My gut says that they'll all
be suitable for both of these tasks, and it will boil down to binary
access and operator preference.

3) POC goes to the first person with free time. It's been my experience
that people come free at somewhat unexpected intervals, and I don't
want anyone to wait too long for consensus. So if anyone who agrees
with this direction gets time, I say, write a spec, get it out there,
and experiment with code.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes - 10/12/2015

2015-10-12 Thread Renat Akhmerov
Hi,

Thanks for joining team meeting today.

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-10-12-16.00.html
 

Meeting log: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-10-12-16.00.log.html
 


See you next Monday at the same time.

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] OpenStack Ironic dinner

2015-10-12 Thread Lucas Alvares Gomes
Since many of us are going to be in Tokyo for the summit, who's interested
in an OpenStack Ironic dinner?

Let's first decide what day of the week suits most people, so please vote
here: http://doodle.com/poll/2nqbemmrdd9a5ypd

If you have any food requirements or restrictions, please add them to the
"comments" section in the doodle poll.

Cheers,
Lucas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Alec Hothan (ahothan)





On 10/10/15, 11:35 PM, "Clint Byrum"  wrote:

>Excerpts from Alec Hothan (ahothan)'s message of 2015-10-09 21:19:14 -0700:
>> 
>> On 10/9/15, 6:29 PM, "Clint Byrum"  wrote:
>> 
>> >Excerpts from Chris Friesen's message of 2015-10-09 17:33:38 -0700:
>> >> On 10/09/2015 03:36 PM, Ian Wells wrote:
>> >> > On 9 October 2015 at 12:50, Chris Friesen > >> > > wrote:
>> >> >
>> >> > Has anybody looked at why 1 instance is too slow and what it would 
>> >> > take to
>> >> >
>> >> > make 1 scheduler instance work fast enough? This does not 
>> >> > preclude the
>> >> > use of
>> >> > concurrency for finer grain tasks in the background.
>> >> >
>> >> >
>> >> > Currently we pull data on all (!) of the compute nodes out of the 
>> >> > database
>> >> > via a series of RPC calls, then evaluate the various filters in 
>> >> > python code.
>> >> >
>> >> >
>> >> > I'll say again: the database seems to me to be the problem here.  Not to
>> >> > mention, you've just explained that they are in practice holding all 
>> >> > the data in
>> >> > memory in order to do the work so the benefit we're getting here is 
>> >> > really a
>> >> > N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
>> >> > secondary, in fact), and that without incremental updates to the 
>> >> > receivers.
>> >> 
>> >> I don't see any reason why you couldn't have an in-memory scheduler.
>> >> 
>> >> Currently the database serves as the persistant storage for the resource 
>> >> usage, 
>> >> so if we take it out of the picture I imagine you'd want to have some way 
>> >> of 
>> >> querying the compute nodes for their current state when the scheduler 
>> >> first 
>> >> starts up.
>> >> 
>> >> I think the current code uses the fact that objects are remotable via the 
>> >> conductor, so changing that to do explicit posts to a known scheduler 
>> >> topic 
>> >> would take some work.
>> >> 
>> >
>> >Funny enough, I think thats exactly what Josh's "just use Zookeeper"
>> >message is about. Except in memory, it is "in an observable storage
>> >location".
>> >
>> >Instead of having the scheduler do all of the compute node inspection
>> >and querying though, you have the nodes push their stats into something
>> >like Zookeeper or consul, and then have schedulers watch those stats
>> >for changes to keep their in-memory version of the data up to date. So
>> >when you bring a new one online, you don't have to query all the nodes,
>> >you just scrape the data store, which all of these stores (etcd, consul,
>> >ZK) are built to support atomically querying and watching at the same
>> >time, so you can have a reasonable expectation of correctness.
>> >
>> >Even if you figured out how to make the in-memory scheduler crazy fast,
>> >There's still value in concurrency for other reasons. No matter how
>> >fast you make the scheduler, you'll be slave to the response time of
>> >a single scheduling request. If you take 1ms to schedule each node
>> >(including just reading the request and pushing out your scheduling
>> >result!) you will never achieve greater than 1000/s. 1ms is way lower
>> >than it's going to take just to shove a tiny message into RabbitMQ or
>> >even 0mq.
>> 
>> That is not what I have seen, measurements that I did or done by others show 
>> between 5000 and 1 send *per sec* (depending on mirroring, up to 1KB msg 
>> size) using oslo messaging/kombu over rabbitMQ.
>
>You're quoting throughput of RabbitMQ, but how many threads were
>involved? An in-memory scheduler that was multi-threaded would need to
>implement synchronization at a fairly granular level to use the same
>in-memory store, and we're right back to the extreme need for efficient
>concurrency in the design, though with much better latency on the
>synchronization.

These were single-threaded tests and you're correct that if you had multiple 
threads trying to send something you'd have some inefficiency.
However I'd question the likelihood of that happening as it is very likely that 
most of the cpu time will be spent outside of oslo messaging code.

Furthermore, Python does not need multiple threads to go faster. As a matter of 
fact, for in-memory operations, it could end up being slower because of the 
inherent design of the interpreter (and there are many independent measurements 
that have shown it).


>
>> And this is unmodified/highly unoptimized oslo messaging code.
>> If you remove the oslo messaging layer, you get 25000 to 45000 msg/sec with 
>> kombu/rabbitMQ (which shows how inefficient is oslo messaging layer itself)
>> 
>> > So I'm pretty sure this is o-k for small clouds, but would be
>> >a disaster for a large, busy cloud.
>> 
>> It all depends on how many sched/sec for the "large busy cloud"...
>> 
>
>I think there are two interesting things to discern. Of course, the
>exact rate would be great to have as a target, 

Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Joshua Harlow

Alec Hothan (ahothan) wrote:





On 10/10/15, 11:35 PM, "Clint Byrum"  wrote:


Excerpts from Alec Hothan (ahothan)'s message of 2015-10-09 21:19:14 -0700:

On 10/9/15, 6:29 PM, "Clint Byrum"  wrote:


Excerpts from Chris Friesen's message of 2015-10-09 17:33:38 -0700:

On 10/09/2015 03:36 PM, Ian Wells wrote:

On 9 October 2015 at 12:50, Chris Friesen>  wrote:

 Has anybody looked at why 1 instance is too slow and what it would take to

 make 1 scheduler instance work fast enough? This does not preclude the
 use of
 concurrency for finer grain tasks in the background.


 Currently we pull data on all (!) of the compute nodes out of the database
 via a series of RPC calls, then evaluate the various filters in python 
code.


I'll say again: the database seems to me to be the problem here.  Not to
mention, you've just explained that they are in practice holding all the data in
memory in order to do the work so the benefit we're getting here is really a
N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
secondary, in fact), and that without incremental updates to the receivers.

I don't see any reason why you couldn't have an in-memory scheduler.

Currently the database serves as the persistant storage for the resource usage,
so if we take it out of the picture I imagine you'd want to have some way of
querying the compute nodes for their current state when the scheduler first
starts up.

I think the current code uses the fact that objects are remotable via the
conductor, so changing that to do explicit posts to a known scheduler topic
would take some work.


Funny enough, I think thats exactly what Josh's "just use Zookeeper"
message is about. Except in memory, it is "in an observable storage
location".

Instead of having the scheduler do all of the compute node inspection
and querying though, you have the nodes push their stats into something
like Zookeeper or consul, and then have schedulers watch those stats
for changes to keep their in-memory version of the data up to date. So
when you bring a new one online, you don't have to query all the nodes,
you just scrape the data store, which all of these stores (etcd, consul,
ZK) are built to support atomically querying and watching at the same
time, so you can have a reasonable expectation of correctness.
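
To make that concrete, a minimal sketch of the push-and-watch pattern using
ZooKeeper via the kazoo library might look like the following (the znode
paths and the stats payload are purely illustrative, not an agreed design):

import json

from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

# Compute node side: publish stats under an ephemeral znode so the entry
# disappears automatically if the node goes away.
stats = {"free_ram_mb": 16384, "free_disk_gb": 250, "vcpus_used": 4}
zk.ensure_path("/compute_nodes")
zk.create("/compute_nodes/node-1", json.dumps(stats).encode("utf-8"),
          ephemeral=True, makepath=True)

# Scheduler side: refresh an in-memory cache whenever the set of nodes
# changes; a per-node DataWatch could additionally track stat updates.
node_cache = {}

@zk.ChildrenWatch("/compute_nodes")
def update_node_list(children):
    for node in children:
        data, _stat = zk.get("/compute_nodes/%s" % node)
        node_cache[node] = json.loads(data.decode("utf-8"))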

Even if you figured out how to make the in-memory scheduler crazy fast,
There's still value in concurrency for other reasons. No matter how
fast you make the scheduler, you'll be slave to the response time of
a single scheduling request. If you take 1ms to schedule each node
(including just reading the request and pushing out your scheduling
result!) you will never achieve greater than 1000/s. 1ms is way lower
than it's going to take just to shove a tiny message into RabbitMQ or
even 0mq.

That is not what I have seen, measurements that I did or done by others show 
between 5000 and 1 send *per sec* (depending on mirroring, up to 1KB msg 
size) using oslo messaging/kombu over rabbitMQ.

You're quoting throughput of RabbitMQ, but how many threads were
involved? An in-memory scheduler that was multi-threaded would need to
implement synchronization at a fairly granular level to use the same
in-memory store, and we're right back to the extreme need for efficient
concurrency in the design, though with much better latency on the
synchronization.


These were single-threaded tests and you're correct that if you had multiple 
threads trying to send something you'd have some inefficiency.
However I'd question the likelihood of that happening as it is very likely that 
most of the cpu time will be spent outside of oslo messaging code.

Furthermore, Python does not need multiple threads to go faster. As a matter of 
fact, for in-memory operations, it could end up being slower because of the 
inherent design of the interpreter (and there are many independent measurements 
that have shown it).



And this is unmodified/highly unoptimized oslo messaging code.
If you remove the oslo messaging layer, you get 25000 to 45000 msg/sec with 
kombu/rabbitMQ (which shows how inefficient is oslo messaging layer itself)
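
For anyone who wants to reproduce that kind of measurement, a rough
single-threaded publish micro-benchmark with raw kombu looks something like
the sketch below (broker URL, exchange name and message size are arbitrary,
and the resulting rate obviously depends heavily on the environment and
broker configuration):

import time

from kombu import Connection, Exchange, Producer

N = 10000
payload = {"data": "x" * 1024}   # roughly 1KB per message

with Connection("amqp://guest:guest@localhost:5672//") as conn:
    channel = conn.channel()
    exchange = Exchange("bench", type="direct", durable=False)
    producer = Producer(channel, exchange=exchange, routing_key="bench")
    start = time.time()
    for _ in range(N):
        producer.publish(payload)
    elapsed = time.time() - start
    print("%d msgs in %.2fs -> %.0f msg/s" % (N, elapsed, N / elapsed))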


So I'm pretty sure this is o-k for small clouds, but would be
a disaster for a large, busy cloud.

It all depends on how many sched/sec for the "large busy cloud"...


I think there are two interesting things to discern. Of course, the
exact rate would be great to have as a target, but operational security
and just plain secrecy of business models will probably prevent us from
getting at many of these requirements.


I don't think that is the case. We have no visibility because nobody has really 
thought about these numbers. Ops should be ok to provide some rough requirement 
numbers if asked (everybody is in the same boat).



The second is the complexity model of scaling. We can just think about
the 

Re: [openstack-dev] [Ceilometer] Meter-list with multiple filters in simple query is not working

2015-10-12 Thread Srikanth Vavilapalli
Hi 

Thanks for your reply. 

Yes, I need the logical OR operation here. 

It seems the simple query is not doing a logical "AND" operation either,
because if it were, the output should be an empty list in my example. However,
the output is a list of meters with the project_id value d41cd...*. If I
interchange the project_id field values in my query, the resulting output is a
list of meters with the project_id value f28d2...*, i.e. the API is using the
last q.field value when filtering on the "project_id" field.

I have seen that the complex query format supports additional operations
like logical "OR", but my understanding from the documentation was that these
complex queries are supported only for Samples, Alarms and Alarms History, but
not for "Meters". Nevertheless I still tried this complex query with /v2/meters
by following the /v2/query/samples example and it returned 404 Not Found. Please
correct me if I am doing something wrong here.

curl -X POST -H 'X-Auth-Token:eed134b917914fd987e50d8ec1651213' -H 
'Content-Type: application/json' -d '{"filter" : "{\"or\":[{\"=\": 
{\"project_id\": \"f28d2e522e1f466a95194c10869acd0c\"}},{\"=\": 
{\"project_id\": \"d41cdd2ade394e599b40b9b50d9cd623\"}}]}}' 
"http://10.11.10.1:8777/v2/query/meters;

Thanks
Srikanth
  

-Original Message-
From: Eoghan Glynn [mailto:egl...@redhat.com] 
Sent: Monday, October 12, 2015 4:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Meter-list with multiple filters in 
simple query is not working



> Hi
> 
> Can anyone plz help me on how to specify a simple query with multiple 
> values for a query field in a Ceilometer meter-list request? I need to 
> fetch meters that belongs to more than one project id. I have tried 
> the following query format, but only the last query value (in this 
> case, project_id=
> d41cdd2ade394e599b40b9b50d9cd623) is used for filtering. Any help is 
> appreciated here.
> 
> curl -H 'X-Auth-Token:'
> http://localhost:8777/v2/meters?q.field=project_id&q.op=eq&q.value=f28
> d2e522e1f466a95194c10869acd0c&q.field=project_id&q.op=eq&q.value=d41cd
> d2ade394e599b40b9b50d9cd623
> 
> Thanks
> Srikanth

By "not working" you mean "not doing what you (incorrectly) expect it to do"

Your query asks for samples with a project_id set to *both* f28d.. *and* d41c..
The result is empty, as a sample can't be associated with two project_ids.

The ceilometer simple query API combines all filters using logical AND.

Seems like you want logical OR here, which is possible to express via complex
queries:

  http://docs.openstack.org/developer/ceilometer/webapi/v2.html#complex-query

Cheers,
Eoghan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Monty Taylor

On 10/12/2015 02:45 PM, Joshua Harlow wrote:

Alec Hothan (ahothan) wrote:





On 10/10/15, 11:35 PM, "Clint Byrum"  wrote:


Excerpts from Alec Hothan (ahothan)'s message of 2015-10-09 21:19:14
-0700:

On 10/9/15, 6:29 PM, "Clint Byrum"  wrote:


Excerpts from Chris Friesen's message of 2015-10-09 17:33:38 -0700:

On 10/09/2015 03:36 PM, Ian Wells wrote:

On 9 October 2015 at 12:50, Chris
Friesen>  wrote:

 Has anybody looked at why 1 instance is too slow and what it
would take to

 make 1 scheduler instance work fast enough? This does
not preclude the
 use of
 concurrency for finer grain tasks in the background.


 Currently we pull data on all (!) of the compute nodes out
of the database
 via a series of RPC calls, then evaluate the various filters
in python code.


I'll say again: the database seems to me to be the problem here.
Not to
mention, you've just explained that they are in practice holding
all the data in
memory in order to do the work so the benefit we're getting here
is really a
N-to-1-to-M pattern with a DB in the middle (the store-to-DB is
rather
secondary, in fact), and that without incremental updates to the
receivers.

I don't see any reason why you couldn't have an in-memory scheduler.

Currently the database serves as the persistant storage for the
resource usage,
so if we take it out of the picture I imagine you'd want to have
some way of
querying the compute nodes for their current state when the
scheduler first
starts up.

I think the current code uses the fact that objects are remotable
via the
conductor, so changing that to do explicit posts to a known
scheduler topic
would take some work.


Funny enough, I think thats exactly what Josh's "just use Zookeeper"
message is about. Except in memory, it is "in an observable storage
location".

Instead of having the scheduler do all of the compute node inspection
and querying though, you have the nodes push their stats into
something
like Zookeeper or consul, and then have schedulers watch those stats
for changes to keep their in-memory version of the data up to date. So
when you bring a new one online, you don't have to query all the
nodes,
you just scrape the data store, which all of these stores (etcd,
consul,
ZK) are built to support atomically querying and watching at the same
time, so you can have a reasonable expectation of correctness.

Even if you figured out how to make the in-memory scheduler crazy
fast,
There's still value in concurrency for other reasons. No matter how
fast you make the scheduler, you'll be slave to the response time of
a single scheduling request. If you take 1ms to schedule each node
(including just reading the request and pushing out your scheduling
result!) you will never achieve greater than 1000/s. 1ms is way lower
than it's going to take just to shove a tiny message into RabbitMQ or
even 0mq.

That is not what I have seen, measurements that I did or done by
others show between 5000 and 1 send *per sec* (depending on
mirroring, up to 1KB msg size) using oslo messaging/kombu over
rabbitMQ.

You're quoting throughput of RabbitMQ, but how many threads were
involved? An in-memory scheduler that was multi-threaded would need to
implement synchronization at a fairly granular level to use the same
in-memory store, and we're right back to the extreme need for efficient
concurrency in the design, though with much better latency on the
synchronization.


These were single-threaded tests and you're correct that if you had
multiple threads trying to send something you'd have some inefficiency.
However I'd question the likelihood of that happening as it is very
likely that most of the cpu time will be spent outside of oslo
messaging code.

Furthermore, Python does not need multiple threads to go faster. As a
matter of fact, for in-memory operations, it could end up being slower
because of the inherent design of the interpreter (and there are many
independent measurements that have shown it).



And this is unmodified/highly unoptimized oslo messaging code.
If you remove the oslo messaging layer, you get 25000 to 45000
msg/sec with kombu/rabbitMQ (which shows how inefficient is oslo
messaging layer itself)


So I'm pretty sure this is o-k for small clouds, but would be
a disaster for a large, busy cloud.

It all depends on how many sched/sec for the "large busy cloud"...


I think there are two interesting things to discern. Of course, the
exact rate would be great to have as a target, but operational security
and just plain secrecy of business models will probably prevent us from
getting at many of these requirements.


I don't think that is the case. We have no visibility because nobody
has really thought about these numbers. Ops should be ok to provide
some rough requirement numbers if asked (everybody is in the same boat).



The second is the complexity model of 

Re: [openstack-dev] [nova] novaclient functional test failures

2015-10-12 Thread Kevin L. Mitchell
On Mon, 2015-10-12 at 12:16 -0400, Sean Dague wrote:
> On 10/12/2015 11:54 AM, Kevin L. Mitchell wrote:
> > Functional tests on novaclient (gate-novaclient-dsvm-functional) have
> > started failing consistently.  The test failures all seem to be for an
> > HTTP 300, which leads me to suspect a problem with the test environment,
> > rather than with the tests in novaclient.  Anyone have any insights as
> > to how to address the problem?
> 
> Do you have a link to failed jobs? Or a bug to start accumulating that in?

And I thought about including such a link right after I hit "Send".
Figures :)

http://logs.openstack.org/77/232677/1/check/gate-novaclient-dsvm-functional/2de31bc/
(For review https://review.openstack.org/232677)

http://logs.openstack.org/99/232899/1/check/gate-novaclient-dsvm-functional/6d6dd1d/
(For review https://review.openstack.org/232899)

The first review does some stuff with functional tests, but the second
is a simple global-requirements update, and both have the same failure
signature.
-- 
Kevin L. Mitchell 
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-12 Thread Kevin Benton
>*But there is no such a feature in Neutron. Right? Will the community
merge it soon? And can we consider it with agent-style mechanism together?*

The agents have their own mechanisms for getting information from the
server. The community has no plans to merge a feature that is going to be
different for almost every vendor.

We tried to come up with some common syncing stuff in the recent ML2
meeting, but the various backends had different methods of detecting when they
were out of sync with Neutron (e.g. headers in hashes, recording errors,
etc.), all of which depended on the capabilities of the backend. Then the
sync method itself was different between backends (sending deltas, sending
the entire state, sending a replay log, etc.).

About the only thing they have in common is that they need a way to detect if
they are out of sync and they need a method to sync. So that's two abstract
methods, and we likely can't even agree on when they should be called.
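
Purely for illustration (this is not a proposed Neutron interface, and the
names are made up), that common surface really is about this small:

    import abc

    class BackendSync(abc.ABC):
        """Hypothetical two-method contract for an ML2 backend."""

        @abc.abstractmethod
        def is_out_of_sync(self):
            """Return True if the backend's view has diverged from Neutron's."""

        @abc.abstractmethod
        def resync(self):
            """Reconcile with Neutron however the backend supports it
            (deltas, full state, replay log, ...)."""

When to call these - on a timer, on error, on connection recovery - is
exactly the part nobody agrees on.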

Echoing Salvatore's comments, what is it that you want to see?

On Mon, Oct 12, 2015 at 12:29 AM, Germy Lure  wrote:

> Hi Kevin,
>
> *Thank you for your response. Periodic data checking is a popular and
> effective method to sync info. But there is no such a feature in Neutron.
> Right? Will the community merge it soon? And can we consider it with
> agent-style mechanism together?*
>
> Vendor-specific extensions or vendor-private periodic tasks are not a good
> solution, I think, because it means that the Neutron-Server could not
> integrate with multiple vendors' controllers, and even the controller of
> those vendors that introduced this extension or task could not integrate
> with a standard community Neutron-Server.
> That is just the tip of the iceberg; many other problems follow from it,
> such as bug fixing, upgrades, patching, etc.
> But wait, is it a vendor-specific feature? Of course not. All software
> systems need data checking.
>
> Many thanks.
> Germy
>
>
> On Sun, Oct 11, 2015 at 4:28 PM, Kevin Benton  wrote:
>
>> You can have a periodic task that asks your backend if it needs sync info.
>> Another option is to define a vendor-specific extension that makes it
>> easy to retrieve all info in one call via the HTTP API.
>>
>> On Sat, Oct 10, 2015 at 2:24 AM, Germy Lure  wrote:
>>
>>> Hi all,
>>>
>>> After restarting, agents load data from Neutron via RPC. What about 3rd-party
>>> controllers? They can only re-gather data via the NBI, right?
>>>
>>> Is it possible to provide some mechanism for those controllers and
>>> agents to sync data? or something else I missed?
>>>
>>> Thanks
>>> Germy
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Kevin Benton
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] A larger batch of questions about configuring DevStack to use Neutron

2015-10-12 Thread Mike Spreitzer
Thanks, I will review soon.  BTW, I am interested in creating a config in which 
a Compute Instance can be created on an external network.

Thanks,
Mike




   Sean M. Collins --- Re: [openstack-dev] [devstack] [neutron] A larger batch
   of questions about configuring DevStack to use Neutron ---
   From: "Sean M. Collins"
   To: "OpenStack Development Mailing List (not for usage questions)"
   Date: Mon, Oct 12, 2015 11:34
   Subject: Re: [openstack-dev] [devstack] [neutron] A larger batch of
   questions about configuring DevStack to use Neutron

On Thu, Oct 08, 2015 at 01:47:31PM EDT, Mike Spreitzer wrote:
> ..
> > > In the section
> > > http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
> > > there is a helpful display of localrc contents. It says, among other
> > > things,
> > >
> > > OVS_PHYSICAL_BRIDGE=br-ex
> > > PUBLIC_BRIDGE=br-ex
> > >
> > > In the next top-level section,
> > > http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-multiple-interfaces
> > > , there is no display of revised localrc contents and no mention of
> > > changing either bridge setting. That is an oversight, right?
> >
> > No, this is deliberate. Each section is meant to be independent, since
> > each networking configuration and corresponding DevStack configuration
> > is different. Of course, this may need to be explicitly stated in the
> > guide, so there is always room for improvement.
>
> I am not quite sure I understand your answer. Is the intent that I can
> read only one section, ignore all the others, and that will tell me how to
> use DevStack to produce that section's configuration? If so then it would
> be good if each section had a display of all the necessary localrc
> contents.

Agreed. This is a failure on my part, because I was pasting in only parts
of my localrc (since it came out of a live lab environment). I've started
pushing patches to correct this.

> I have started over, from exactly the picture drawn at the start of the
> doc. That has produced a configuration that mostly works. However, I
> tried creating a VM connected to the public network, and that failed for
> lack of a Neutron DHCP server there.

The public network is used for floating IPs. The L3 agent creates NAT rules
to intercept traffic on the public network and NAT it back to a guest
instance that has the floating IP allocated to it. The behavior for when a
guest is directly attached to the public network, I sort of forget what
happens exactly but I do know that it doesn't work from experience - most
likely because the network is not configured as a flat network. It will not
receive a DHCP lease from the external router.

> I am going to work out how to change
> that, and am willing to contribute an update to this doc. Would you want
> that in this section --- in which case this section needs to specify that
> the provider DOES NOT already have DHCP service on the hardware network
> --- or as a new section?

No, I think we should maybe have a warning or something that the external
network will be used for Floating IPs, and that guest instances should not
be directly attached to that network.

> > > Looking at
> > > http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html
> > > (or, in former days, the doc now preserved at
> > > http://docs.ocselected.org/openstack-manuals/kilo/networking-guide/content/under_the_hood_openvswitch.html
> > > ) I see the name br-ex used for $PUBLIC_BRIDGE --- not
> > > $OVS_PHYSICAL_BRIDGE, right? Wouldn't it be less confusing if
> > > http://docs.openstack.org/developer/devstack/guides/neutron.html used a
> > > name other than "br-ex" for the exhibited commands that apply to
> > > $OVS_PHYSICAL_BRIDGE?
> >
> > No, this is deliberate - br-ex is the bridge that is used for external
> > network traffic - such as floating IPs and public IP address ranges. On
> > the network node, a physical interface is attached to br-ex so that
> > traffic will flow.
> >
> > PUBLIC_BRIDGE is a carryover from DevStack's Nova-Network support and is
> > used in some places, with OVS_PHYSICAL_BRIDGE being used by DevStack's
> > Neutron support, for the Open vSwitch driver specifically. They are two
> > variables that for the most part serve the same purpose. Frankly,
> > DevStack has a lot of problems with configuration knobs, and
> > PUBLIC_BRIDGE and OVS_PHYSICAL_BRIDGE is just a symptom.
>
> Ah, thanks, that helps. But I am still confused. When using Neutron with
> two interfaces, there will be a bridge for each.

There shouldn't be. I'm pushing patches in the multiple interface section
that includes output from the ovs-vsctl commands, hopefully it'll clarify
things.

> I have learned that
> DevStack will automatically create one bridge, and seen that it is named
> "br-ex" when following the instructions in the

Re: [openstack-dev] [devstack] [neutron] A larger batch of questions about configuring DevStack to use Neutron

2015-10-12 Thread Sean M. Collins
On Mon, Oct 12, 2015 at 04:48:09PM EDT, Mike Spreitzer wrote:
> Thanks, i will review soon.  BTW, i am interested in creating a config in 
> which a Compute Instance can be created on an external network.
> 

That would be with the provider networking extension. I've only done it
with a dedicated interface (so two total interfaces).

The Q_USE_PROVIDERNET_FOR_PUBLIC switch configures part of what is
required, but there may be other pieces if you only have one physical
interface.
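
For reference, the sort of localrc fragment involved would look roughly
like this; treat it as an illustrative sketch using only the variables
already mentioned in this thread, not a complete or verified configuration:

    Q_USE_PROVIDERNET_FOR_PUBLIC=True
    OVS_PHYSICAL_BRIDGE=br-ex
    PUBLIC_BRIDGE=br-ex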

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-12 Thread Steve Baker

On 13/10/15 02:05, Thomas Goirand wrote:


BTW, the same applies for tablib, which is in an even more horrible state
that makes it impossible to package with Py3 support. But tablib could
be removed from our (build-)dependency list, if someone cares about
re-writing cliff-tablib, which IMO wouldn't be that much work. Doug, how
many beers shall I offer you for that work? :)

Regarding tablib, cliff has had its own table formatter for some time, 
and now has its own json and yaml formatters. I believe the only tablib 
formatter left is the HTML one, which likely wouldn't be missed if it 
was just dropped (or it could be simply reimplemented inside cliff).
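
If someone did want to reimplement it inside cliff, it is a small amount of
code. A rough sketch against cliff's list-formatter interface as I understand
it (treat the class/method signatures as an approximation, not a verified
API):

    from cliff.formatters import base

    class HTMLFormatter(base.ListFormatter):

        def add_argument_group(self, parser):
            pass  # no formatter-specific options

        def emit_list(self, column_names, data, stdout, parsed_args):
            # Write a single table: one header row, then one row per item.
            stdout.write('<table>\n<tr>')
            for name in column_names:
                stdout.write('<th>%s</th>' % name)
            stdout.write('</tr>\n')
            for row in data:
                stdout.write('<tr>')
                for value in row:
                    stdout.write('<td>%s</td>' % value)
                stdout.write('</tr>\n')
            stdout.write('</table>\n')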


If the cliff deb depends on cliff-tablib I would recommend removing that 
dependency and just stop packaging cliff-tablib.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Ironic] Testing Ironic multi-tenancy feature

2015-10-12 Thread Sukhdev Kapur
Hi Pavlo,

This feature spans three projects: Neutron, Nova, and Ironic. The code for
the Neutron modification was merged in Liberty, but,
because of the timeline/logistics, we missed the deadline to merge the Nova
and Ironic code in Liberty. Most of the patches are ready and the
functionality is being tested very actively.

We are looking at merging all of this in early part of Mitaka (M1
hopefully).

This feature will be available on all Arista Switches and some HP switches
as well. Arista's ML2 driver will work for both virtual and baremetal
deployments - hence, no special configuration or setup requirements.
If you are looking for Arista HW to test this feature, please ping me
directly (IRC: Sukhdev, sukh...@arista.com), and I can put you in touch with
the right folks who can provide the HW.
Additionally, we have an early version of the Ironic documentation. I will have
the Neutron-side documentation ready real soon, which will help with the
deployment setup.

Lots of information is available in the links provided by Jim. We meet
every Monday at 9AM on #openstack-meeting-4. Feel free to stop by with
any specific questions; the team would be happy to help answer them.

Hope this helps.

-Sukhdev




On Mon, Oct 12, 2015 at 12:45 AM, Pavlo Shchelokovskyy <
pshchelokovs...@mirantis.com> wrote:

> Hi all,
>
> we would like to start preliminary testing of Ironic multi-tenant network
> setup which is supported by Neutron in Liberty according to [1]. According
> to Neutron design integration with network equipment is done via ML2
> plugins. We are looking for plugins and network equipment that can work
> with such Ironic multi-tenant setup. Could community recommend a pair of
> hardware switch/corresponding Neutron plugin that already supports this
> functionality?
>
> [1]
> https://blueprints.launchpad.net/neutron/+spec/neutron-ironic-integration
>
> Best regards,
> --
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Alec Hothan (ahothan)





On 10/12/15, 11:45 AM, "Joshua Harlow"  wrote:

>Alec Hothan (ahothan) wrote:
>>
>>
>>
>>
>> On 10/10/15, 11:35 PM, "Clint Byrum"  wrote:
>>
>>> Excerpts from Alec Hothan (ahothan)'s message of 2015-10-09 21:19:14 -0700:
 On 10/9/15, 6:29 PM, "Clint Byrum"  wrote:

> Excerpts from Chris Friesen's message of 2015-10-09 17:33:38 -0700:
>> On 10/09/2015 03:36 PM, Ian Wells wrote:
>>> On 9 October 2015 at 12:50, Chris Friesen>> >  wrote:
>>>
>>>  Has anybody looked at why 1 instance is too slow and what it would 
>>> take to
>>>
>>>  make 1 scheduler instance work fast enough? This does not 
>>> preclude the
>>>  use of
>>>  concurrency for finer grain tasks in the background.
>>>
>>>
>>>  Currently we pull data on all (!) of the compute nodes out of the 
>>> database
>>>  via a series of RPC calls, then evaluate the various filters in 
>>> python code.
>>>
>>>
>>> I'll say again: the database seems to me to be the problem here.  Not to
>>> mention, you've just explained that they are in practice holding all 
>>> the data in
>>> memory in order to do the work so the benefit we're getting here is 
>>> really a
>>> N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
>>> secondary, in fact), and that without incremental updates to the 
>>> receivers.
>> I don't see any reason why you couldn't have an in-memory scheduler.
>>
>> Currently the database serves as the persistent storage for the resource 
>> usage,
>> so if we take it out of the picture I imagine you'd want to have some 
>> way of
>> querying the compute nodes for their current state when the scheduler 
>> first
>> starts up.
>>
>> I think the current code uses the fact that objects are remotable via the
>> conductor, so changing that to do explicit posts to a known scheduler 
>> topic
>> would take some work.
>>
> Funny enough, I think thats exactly what Josh's "just use Zookeeper"
> message is about. Except in memory, it is "in an observable storage
> location".
>
> Instead of having the scheduler do all of the compute node inspection
> and querying though, you have the nodes push their stats into something
> like Zookeeper or consul, and then have schedulers watch those stats
> for changes to keep their in-memory version of the data up to date. So
> when you bring a new one online, you don't have to query all the nodes,
> you just scrape the data store, which all of these stores (etcd, consul,
> ZK) are built to support atomically querying and watching at the same
> time, so you can have a reasonable expectation of correctness.
>
> Even if you figured out how to make the in-memory scheduler crazy fast,
> There's still value in concurrency for other reasons. No matter how
> fast you make the scheduler, you'll be slave to the response time of
> a single scheduling request. If you take 1ms to schedule each node
> (including just reading the request and pushing out your scheduling
> result!) you will never achieve greater than 1000/s. 1ms is way lower
> than it's going to take just to shove a tiny message into RabbitMQ or
> even 0mq.
 That is not what I have seen. Measurements that I did, or that were done by others, 
 show between 5,000 and 10,000 sends *per sec* (depending on mirroring, up to 
 1KB msg size) using oslo messaging/kombu over rabbitMQ.
>>> You're quoting throughput of RabbitMQ, but how many threads were
>>> involved? An in-memory scheduler that was multi-threaded would need to
>>> implement synchronization at a fairly granular level to use the same
>>> in-memory store, and we're right back to the extreme need for efficient
>>> concurrency in the design, though with much better latency on the
>>> synchronization.
>>
>> These were single-threaded tests and you're correct that if you had multiple 
>> threads trying to send something you'd have some inefficiency.
>> However I'd question the likelihood of that happening as it is very likely 
>> that most of the cpu time will be spent outside of oslo messaging code.
>>
>> Furthermore, Python does not need multiple threads to go faster. As a matter 
>> of fact, for in-memory operations, it could end up being slower because of 
>> the inherent design of the interpreter (and there are many independent 
>> measurements that have shown it).
>>
>>
 And this is unmodified/highly unoptimized oslo messaging code.
 If you remove the oslo messaging layer, you get 25000 to 45000 msg/sec 
 with kombu/rabbitMQ (which shows how inefficient the oslo messaging layer 
 itself is)

> So I'm pretty sure this is o-k for small clouds, but would be
> a 

Re: [openstack-dev] Scheduler proposal

2015-10-12 Thread Joshua Harlow

Alec Hothan (ahothan) wrote:





On 10/12/15, 11:45 AM, "Joshua Harlow"  wrote:


Alec Hothan (ahothan) wrote:




On 10/10/15, 11:35 PM, "Clint Byrum"   wrote:


Excerpts from Alec Hothan (ahothan)'s message of 2015-10-09 21:19:14 -0700:

On 10/9/15, 6:29 PM, "Clint Byrum"   wrote:


Excerpts from Chris Friesen's message of 2015-10-09 17:33:38 -0700:

On 10/09/2015 03:36 PM, Ian Wells wrote:

On 9 October 2015 at 12:50, Chris Friesen>   wrote:

  Has anybody looked at why 1 instance is too slow and what it would take to

  make 1 scheduler instance work fast enough? This does not preclude the
  use of
  concurrency for finer grain tasks in the background.


  Currently we pull data on all (!) of the compute nodes out of the database
  via a series of RPC calls, then evaluate the various filters in python 
code.


I'll say again: the database seems to me to be the problem here.  Not to
mention, you've just explained that they are in practice holding all the data in
memory in order to do the work so the benefit we're getting here is really a
N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
secondary, in fact), and that without incremental updates to the receivers.

I don't see any reason why you couldn't have an in-memory scheduler.

Currently the database serves as the persistent storage for the resource usage,
so if we take it out of the picture I imagine you'd want to have some way of
querying the compute nodes for their current state when the scheduler first
starts up.

I think the current code uses the fact that objects are remotable via the
conductor, so changing that to do explicit posts to a known scheduler topic
would take some work.


Funny enough, I think thats exactly what Josh's "just use Zookeeper"
message is about. Except in memory, it is "in an observable storage
location".

Instead of having the scheduler do all of the compute node inspection
and querying though, you have the nodes push their stats into something
like Zookeeper or consul, and then have schedulers watch those stats
for changes to keep their in-memory version of the data up to date. So
when you bring a new one online, you don't have to query all the nodes,
you just scrape the data store, which all of these stores (etcd, consul,
ZK) are built to support atomically querying and watching at the same
time, so you can have a reasonable expectation of correctness.
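
(A minimal sketch of that push-and-watch pattern using the kazoo Zookeeper
client; the znode layout and the stats payload are invented for illustration:)

    import json

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    # Compute-node side: publish current stats as an ephemeral znode.
    stats = {'free_ram_mb': 2048, 'free_disk_gb': 100, 'vcpus_free': 4}
    zk.ensure_path('/compute_nodes')
    zk.create('/compute_nodes/node-1', json.dumps(stats).encode('utf-8'),
              ephemeral=True)

    # Scheduler side: keep an in-memory view current via watches.
    node_state = {}

    @zk.ChildrenWatch('/compute_nodes')
    def update_nodes(children):
        for host in children:
            data, _stat = zk.get('/compute_nodes/%s' % host)
            node_state[host] = json.loads(data.decode('utf-8'))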

Even if you figured out how to make the in-memory scheduler crazy fast,
There's still value in concurrency for other reasons. No matter how
fast you make the scheduler, you'll be slave to the response time of
a single scheduling request. If you take 1ms to schedule each node
(including just reading the request and pushing out your scheduling
result!) you will never achieve greater than 1000/s. 1ms is way lower
than it's going to take just to shove a tiny message into RabbitMQ or
even 0mq.

That is not what I have seen. Measurements that I did, or that were done by others, show 
between 5,000 and 10,000 sends *per sec* (depending on mirroring, up to 1KB msg 
size) using oslo messaging/kombu over rabbitMQ.

You're quoting throughput of RabbitMQ, but how many threads were
involved? An in-memory scheduler that was multi-threaded would need to
implement synchronization at a fairly granular level to use the same
in-memory store, and we're right back to the extreme need for efficient
concurrency in the design, though with much better latency on the
synchronization.

These were single-threaded tests and you're correct that if you had multiple 
threads trying to send something you'd have some inefficiency.
However I'd question the likelihood of that happening as it is very likely that 
most of the cpu time will be spent outside of oslo messaging code.

Furthermore, Python does not need multiple threads to go faster. As a matter of 
fact, for in-memory operations, it could end up being slower because of the 
inherent design of the interpreter (and there are many independent measurements 
that have shown it).



And this is unmodified/highly unoptimized oslo messaging code.
If you remove the oslo messaging layer, you get 25000 to 45000 msg/sec with 
kombu/rabbitMQ (which shows how inefficient the oslo messaging layer itself is)


So I'm pretty sure this is o-k for small clouds, but would be
a disaster for a large, busy cloud.

It all depends on how many sched/sec for the "large busy cloud"...


I think there are two interesting things to discern. Of course, the
exact rate would be great to have as a target, but operational security
and just plain secrecy of business models will probably prevent us from
getting at many of these requirements.

I don't think that is the case. We have no visibility because nobody has really 
thought about these numbers. Ops should be ok to provide some rough requirement 
numbers if asked 

Re: [openstack-dev] [nova] novaclient functional test failures

2015-10-12 Thread melanie witt
On Oct 12, 2015, at 12:14, Kevin L. Mitchell  
wrote:

> http://logs.openstack.org/77/232677/1/check/gate-novaclient-dsvm-functional/2de31bc/
> (For review https://review.openstack.org/232677)
> 
> http://logs.openstack.org/99/232899/1/check/gate-novaclient-dsvm-functional/6d6dd1d/
> (For review https://review.openstack.org/232899)
> 
> The first review does some stuff with functional tests, but the second
> is a simple global-requirements update, and both have the same failure
> signature.

I noticed in the trace, novaclient is calling a function for keystone v1 auth 
[1][2]. It had been calling v2 auth in the past and I think this commit [3] in 
devstack that writes the clouds.yaml specifying v3 as the identity API version 
is probably responsible for the change in behavior. It used to use the 
$IDENTITY_API_VERSION variable. The patch merged on Oct 7, and in logstash I 
find the failures start on Oct 8 [4].

Novaclient looks for "v2.0" in the auth url and creates a request based on 
that. If it doesn't find "v2.0" it falls back to generating a v1 request. And it 
doesn't yet have a function for generating a v3 request.
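
In other words, the sniffing is roughly equivalent to this (a sketch, not
the actual novaclient code - see [2] for the real thing):

    def pick_auth_version(auth_url):
        if 'v2.0' in auth_url:
            return 'v2.0'
        # No v3 handling yet, so a .../v3 URL falls through to the v1 path.
        return 'v1'

    assert pick_auth_version('http://keystone:5000/v2.0') == 'v2.0'
    assert pick_auth_version('http://keystone:5000/v3') == 'v1'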

-melanie (irc: melwitt)

[1] 
http://logs.openstack.org/99/232899/1/check/gate-novaclient-dsvm-functional/6d6dd1d/console.html.gz#_2015-10-09_16_01_24_088
[2] 
https://github.com/openstack/python-novaclient/blob/147a1a6ee421f9a45a562f013e233d29d43258e4/novaclient/client.py#L601-L622
[3] https://review.openstack.org/#/c/220846/
[4] http://goo.gl/5GxiiF



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Open a new project in OpenStack or Stackforge ?

2015-10-12 Thread Clark Boylan
On Mon, Oct 12, 2015, at 05:09 PM, joehuang wrote:
> Hello, 
> 
> Could you or someone else pls. help us to open a new project? We learned
> that Stackforge will be retired soon, so shall we open this new project
> under OpenStack directly instead? How to do it ?
> 
> A new project, a standalone new service Kingbird
> (https://launchpad.net/kingbird ) for distributed multi-region OpenStack
> cloud environment, is planned to start in OpenStack. Kingbird is
> initiated from OPNFV multisite project, and Dimitri is preparing for the
> repository, gerrit, CI.
> 
Yes, you should create it under openstack/ instead. The process is the
same as it was when adding a project to stackforge/; you just change the
namespace. Documentation is at
http://docs.openstack.org/infra/manual/creators.html.
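
For what it's worth, the new-project step there mostly boils down to adding
an entry to gerrit/projects.yaml in openstack-infra/project-config, roughly
of this shape (from memory, and the description text is just a placeholder;
follow the creators guide above for the authoritative format):

    - project: openstack/kingbird
      description: Service for distributed multi-region OpenStack clouds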

Hope this helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Cross-project track schedule draft: Feedback needed

2015-10-12 Thread Flavio Percoco

Greetings,

We have a draft schedule for the cross-project track for the Mitaka
summit[0] (find it at the bottom of the etherpad). Yay!

We would like to get feedback from folks that have proposed these
sessions - hopefully you're all cc'd - or other folks that may be
co-moderating them. If there are any conflicts for you in the current
schedule, please, do let us know and we'll re-arange as possible.

Feedback from anyone is obviously welcomed but keep in mind that this
is a process that will hardly make everyone happy.

Thanks,
Flavio

[0] https://etherpad.openstack.org/p/mitaka-cross-project-session-planning

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed solution to "Admin" ness improperly scoped:

2015-10-12 Thread Shinobu Kinjo
Just a question:
Will the scopes of non-admin users' projects be within the admin-scoped project?

Shinobu

- Original Message -
From: "Adam Young" 
To: "OpenStack Development Mailing List" 
Sent: Monday, October 12, 2015 3:38:01 AM
Subject: [openstack-dev] Proposed solution to "Admin" ness improperly scoped:

https://bugs.launchpad.net/keystone/+bug/968696/comments/39

1. Add a config value ADMIN_PROJECT_ID
2. In token creation, if ADMIN_PROJECT_ID is not None: only add the 
admin role to the token if the id of the scoped project == ADMIN_PROJECT_ID

Does this work?
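
A rough sketch of the proposed check in pseudo-Python (not actual Keystone
code; the config access and role format are stand-ins):

    def filter_roles_for_token(roles, scoped_project_id, admin_project_id):
        if admin_project_id is None:
            return roles  # behave exactly as today
        if scoped_project_id == admin_project_id:
            return roles
        # Strip 'admin' from tokens scoped to any other project.
        return [r for r in roles if r['name'] != 'admin']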

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cross-project track schedule draft: Feedback needed

2015-10-12 Thread Flavio Percoco

cc'ing w/o ' (facepalm)

On 13/10/15 07:05 +0900, Flavio Percoco wrote:

Greetings,

We have a draft schedule for the cross-project track for the Mitaka
summit[0] (find it at the bottom of the etherpad). Yay!

We would like to get feedback from folks that have proposed these
sessions - hopefully you're all cc'd - or other folks that may be
co-moderating them. If there are any conflicts for you in the current
schedule, please do let us know and we'll re-arrange where possible.

Feedback from anyone is obviously welcomed but keep in mind that this
is a process that will hardly make everyone happy.

Thanks,
Flavio

[0] https://etherpad.openstack.org/p/mitaka-cross-project-session-planning

--
@flaper87
Flavio Percoco





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] CI jobs blocked on oslo.messaging and webob releases

2015-10-12 Thread Matt Riedemann
Most people are probably already aware of this, but given we have two 
changes that now rely on each other to land, things are a bit complicated 
to track, so I figured I'd dump the info here.


We have two bugs introduced by dependent library releases:

1. oslo.messaging 2.6.0: https://bugs.launchpad.net/cinder/+bug/1505295

There is already an exclusion for that in g-r on master and 
cherry-picked to stable/liberty:


https://review.openstack.org/#/q/I47ab12f719fba41c2f0c03047b05eb28f4423682,n,z

There is a nova change which has the same block and depends on a grenade 
change, but that was failing due to...


2. webob 1.5.0: https://bugs.launchpad.net/cinder/+bug/1505153

We have a patch to fix the actual nova code bug for that:

https://review.openstack.org/#/c/233845/

But it won't pass tests because of the oslo.messaging change, so we have 
co-dependent failures.


--

So in order to avoid squashing those changes (which is kind of messy, 
especially when doing the backports to stable/liberty and kilo for the 
webob one), we've got a multi-part fix.


1. Block oslo.messaging 2.6.0 and webob 1.5.0 in a single nova change:

https://review.openstack.org/#/c/233772/

2. That depends on a g-r cap on webob<1.5.0:

https://review.openstack.org/#/c/233851/

3. Once the nova change 233772 and the webob fix 
https://review.openstack.org/#/c/233845/ lands, we can unpin webob in 
g-r again.
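
For reference, the exclusions described above are roughly of this shape in
global-requirements (the real lines, including their minimum-version
specifiers, are in the linked reviews):

    oslo.messaging!=2.6.0   # skip the release that introduced bug 1505295
    WebOb<1.5.0             # temporary cap until https://review.openstack.org/#/c/233845/ lands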


--

I think this requires backports to stable/liberty which probably means a 
liberty-rc3 for nova.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][cli][openstackclient] conflict in nova flavor and zaqar flavor

2015-10-12 Thread Victoria Martínez de la Cruz
HI all,

Thanks for your feedback. We discussed this topic in this week's weekly
meeting, and we came to the conclusion that it would be better to use
"pool-flavor" instead of creating a namespace for Zaqar only (by prefixing
everything with the "message" key).

So, the commands would look like:

openstack pool-flavor create
openstack pool-flavor get
openstack pool-flavor delete
openstack pool-flavor update
openstack pool-flavor list

Best,

Victoria

2015-10-10 10:10 GMT-03:00 Shifali Agrawal :

> All right, thanks for responses, will code accordingly :)
>
> On Wed, Oct 7, 2015 at 9:31 PM, Doug Hellmann 
> wrote:
>
>> Excerpts from Steve Martinelli's message of 2015-10-06 16:09:32 -0400:
>> >
>> > Using `message flavor` works for me, and having two words is just fine.
>>
>> It might even be good to change "flavor" to "server flavor" (keeping
>> flavor as a backwards-compatible alias, of course).
>>
>> Doug
>>
>> >
>> > I'm in the process of collecting all of the existing "object" works are
>> > putting them online, there's a lot of them. Hopefully this will reduce
>> the
>> > collisions in the future.
>> >
>> > Thanks,
>> >
>> > Steve Martinelli
>> > OpenStack Keystone Core
>> >
>> >
>> >
>> > From:Shifali Agrawal 
>> > To:openstack-dev@lists.openstack.org
>> > Date:2015/10/06 03:40 PM
>> > Subject:[openstack-dev] [Zaqar][cli][openstack-client] conflict in
>> nova
>> > flavor and zaqar flavor
>> >
>> >
>> >
>> > Greetings,
>> >
>> > I am implementing cli commands for Zaqar flavors, the command should be
>> > like:
>> >
>> > "openstack flavor "
>> >
>> > But there is already same command present for Nova flavors. After
>> > discussing with Zaqar devs we thought to change all zaqar commands such
>> > that they include `message` word after openstack, thus above Zaqar
>> flavor
>> > command will become:
>> >
>> > "openstack message flavor "
>> >
>> > Does openstack-client devs have something to say for this? Or they also
>> > feel its good to move with adding `message` word to all Zaqar cli
>> > commands?
>> >
>> > Already existing Zaqar commands will work with get a deprecation
>> > message/warning and also I will implement them all to work with
>> `message`
>> > word, and all new commands will be implement so that they work only with
>> > `message` word.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-12 Thread Joshua Harlow

Jeremy Stanley wrote:

On 2015-10-12 15:40:48 +0200 (+0200), Thomas Goirand wrote:
[...]

Has the infra team ever thought about doing that for (at least) all of
the 3rd party libs we use? I'd love to work closer with the infra team
to provide them with missing packages they would need, and I'm sure my
RPM buddy Haikel would too. This also would help getting our
openstack/{deb,rpm}- projects up to speed as well.

[...]

Long ago there was an idea that we might somehow be able to
near-instantly package anything on PyPI and serve RPMs and DEBs of
it up to CI jobs, but doing that would be a pretty massive (and in
my opinion very error-prone) undertaking.


Anvil gets somewhat far on this; although it does not support DEBs, it 
does build its best attempt at RPMs, building them automatically and 
turning git repos of projects into RPMs.


http://anvil.readthedocs.org/en/latest/topics/summary.html (hopefully 
the existence of this is not news to folks).


A log of this in action (very verbose) is at:

http://logs.openstack.org/40/225240/4/check/gate-anvil-rpms-dsvm-devstack-centos7/0eea2a9/console.html

Feel free to ask questions as to how this is being done on 
#openstack-anvil or find me on other channels... I believe it is also 
one of the goals of the rpm and packaging teams to do something similar, 
although that might be a ways off?




Right now we can take advantage of the fact that the Python
ecosystem uploads new releases to PyPI as their primary distribution
channel. By using pip to install dependencies from PyPI in our CI
system, we can pretty instantly test compatibility of our software
with new releases of dependencies (much faster than they can get
properly packaged in distros), and easily test different versions by
proposing changes to the openstack/requirements repository.

The only way I see this changing is if authors of Python libraries
switch to packaging their own software for major distributions and
have a pass to get them included by those distributions
near-instantly. Also, distros would have to cease caring about
reducing the number of concurrent versions of libraries they provide
(and I posit that as soon as debian/sid ships a DEB for every
version ever released for packages like python-requests and
python-urllib3, apt-get will begin to exhibit similar dependency
resolution challenges).


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][cli][openstackclient] conflict in nova flavor and zaqar flavor

2015-10-12 Thread Dean Troyer
On Mon, Oct 12, 2015 at 5:25 PM, Victoria Martínez de la Cruz <
victo...@vmartinezdelacruz.com> wrote:

> So, this commands would look like
>
> openstack pool-flavor create
> openstack pool-flavor get
> openstack pool-flavor delete
> openstack pool-flavor update
> openstack pool-flavor list
>

I would strongly suggest leaving the dash out of the resource name:

openstack pool flavor create
etc

Multiple word names have been supported for a long time and the only other
plugin I know that has them has a bug against it to remove the dash.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Oslo survey

2015-10-12 Thread Joshua Harlow

Hi OpenStack devs,

The Oslo team would like to collect feedback and information
from its user base in order to help us plan and strategize around
Mitaka deliverables and to help improve itself and the projects it
maintains.

To do so, we have developed a survey that we think will be helpful in
making the Oslo team and its projects better. It should take about 15
minutes to complete.

https://www.surveymonkey.com/r/QKMVDG6

On behalf of the Oslo community, we thank you for the time you will
spend in helping us understand your needs.

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Open a new project in OpenStack or Stackforge ?

2015-10-12 Thread joehuang
Hello, 

Could you or someone else pls. help us to open a new project? We learned that 
Stackforge will be retired soon, so shall we open this new project under 
OpenStack directly instead? How to do it ?

A new project, a standalone new service Kingbird 
(https://launchpad.net/kingbird ) for distributed multi-region OpenStack cloud 
environment, is planned to start in OpenStack. Kingbird is initiated from OPNFV 
multisite project, and Dimitri is preparing for the repository, gerrit, CI.

Best Regards
Chaoyi Huang ( Joe Huang )


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] CI jobs blocked on oslo.messaging and webob releases

2015-10-12 Thread Davanum Srinivas
Matt,

oslo.messaging 2.6.1 has unclogged the pipe -
http://lists.openstack.org/pipermail/openstack-announce/2015-October/000712.html

Thanks,
Dims

On Mon, Oct 12, 2015 at 4:12 PM, Matt Riedemann 
wrote:

> Most people are probably already aware of this but given we have two
> changes that now rely on each other to land things are a bit complicated
> for tracking, so I figured I'd dump the info here.
>
> We have to bugs introduced with dependent library releases:
>
> 1. oslo.messaging 2.6.0: https://bugs.launchpad.net/cinder/+bug/1505295
>
> There is already an exclusion for that in g-r on master and cherry-picked
> to stable/liberty:
>
>
> https://review.openstack.org/#/q/I47ab12f719fba41c2f0c03047b05eb28f4423682,n,z
>
> There is a nova change which has the same block and depends on a grenade
> change, but that was failing due to...
>
> 2. webob 1.5.0: https://bugs.launchpad.net/cinder/+bug/1505153
>
> We have a patch to fix the actual nova code bug for that:
>
> https://review.openstack.org/#/c/233845/
>
> But it won't pass tests because of the oslo.messaging change, so we have
> co-dependent failures.
>
> --
>
> So in order to avoid squashing those changes (which is kind of messy,
> especially when doing the backports to stable/liberty and kilo for the
> webob one), we've got a multi-part fix.
>
> 1. Block oslo.messaging 2.6.0 and webob 1.5.0 in a single nova change:
>
> https://review.openstack.org/#/c/233772/
>
> 2. That depends on a g-r cap on webob<1.5.0:
>
> https://review.openstack.org/#/c/233851/
>
> 3. Once the nova change 233772 and the webob fix
> https://review.openstack.org/#/c/233845/ lands, we can unpin webob in g-r
> again.
>
> --
>
> I think this requires backports to stable/liberty which probably means a
> liberty-rc3 for nova.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

