Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Asselin, Ramy
Hi Tang,

Please use a new thread for this new question.  I'd like to keep the current 
thread focused on "How to set a proxy for zuul".

Ramy

From: Tang Chen [mailto:tangc...@cn.fujitsu.com]
Sent: Tuesday, July 21, 2015 12:23 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [CI] How to set a proxy for zuul.

Hi Asselin, Abhishek,

I got some problems when I was trying to write a jenkins job.

I found that when zuul received the notification from gerrit, jenkins didn't 
run the test.

I added something to noop-check-communication in /etc/jenkins_jobs/config/examples.yaml; 
I just touched a file under /tmp.

- job-template:
    name: 'noop-check-communication'
    node: '{node}'

    builders:
      - shell: |
          #!/bin/bash -xe
          touch /tmp/noop-check-communication  # I added something here.
          echo Hello world, this is the {vendor} Testing System
      - link-logs  # In macros.yaml from os-ext-testing

And I flushed the jobs using jenkins-jobs --flush-cache update /etc/jenkins_jobs/config/.
I can build the job in the jenkins web UI, and the file was touched.


But when I send a patch, the file is not touched, even though the CI itself
really works; I can see it on the web site (https://review.openstack.org/#/c/203941/).

What do you think of this?
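One thing worth checking (my guess, not something confirmed in this thread): the Zuul-triggered build runs on the Jenkins slave selected by node: '{node}', so the touched file would land in /tmp on that slave, not necessarily on the machine you inspected. A small debugging sketch for the shell builder that records where the build actually ran:

```shell
#!/bin/bash -xe
# Debugging sketch: print the host the build executes on, so you know
# which machine's /tmp to inspect afterwards.
echo "Running on host: $(hostname)"
touch /tmp/noop-check-communication
# List the file with its timestamp so the build log itself proves creation.
ls -l /tmp/noop-check-communication
```

If the hostname in the build log is not the machine you checked, that would explain the "missing" file.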


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing with SSH timeout.

2015-07-21 Thread Asselin, Ramy
Why do you want to stop those downloads? The purpose is to set up your VM so 
that it has the latest code in the git repos and so that each project has any 
custom refs, such as the patch under test.

Also, this is a different problem than the subject, so should be a new thread.

Ramy

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Tuesday, July 21, 2015 2:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-in...@lists.openstack.org
Subject: Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests 
failing with SSH timeout.

Also, can't we stop downloading all those projects and instead include only the 
ones we need in DevStack using the ENABLED_SERVICES parameter, as we usually do 
when installing devstack?

On Tue, Jul 21, 2015 at 11:18 AM, Abhishek Shrivastava 
abhis...@cloudbyte.com wrote:
Hi Ramy,


  *   The project list is mentioned in the devstack-vm-gate-wrap script [1].
  *   The projects are downloaded by the functions.sh script, in the setup_workspace function [2].
[1] 
https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate-wrap.sh#L35
[2] 
https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L416

On Mon, Jul 20, 2015 at 7:14 PM, Asselin, Ramy 
ramy.asse...@hp.com wrote:
Is this to optimize the performance of the job? Can you provide a link to where 
the downloading is occurring that you’d like to restrict?

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Sunday, July 19, 2015 10:53 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests 
failing with SSH timeout.

This is ok for the services it will install, but how can we also restrict the 
downloading of all the projects (i.e., download only the required projects)?

On Sun, Jul 19, 2015 at 11:39 PM, Asselin, Ramy 
ramy.asse...@hp.com wrote:
There are two ways that I know of to customize what services are run:

1.  Set up your own feature matrix [1]

2.  Override enabled services [2]

Option 2 is probably what you’re looking for.

[1] 
http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n152
[2] 
http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate.sh#n76
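As a sketch of option 2 (the service names below are illustrative assumptions; the authoritative toggles live in the scripts linked above), the override boils down to exporting a list before devstack-gate runs:

```shell
#!/bin/bash -xe
# Sketch: restrict devstack to the services a storage-driver CI needs.
# The service list here is an assumption; trim or extend it for your setup.
export OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit,g-api,g-reg,n-api,n-cpu,n-sch,c-api,c-sch,c-vol,tempest
echo "Enabled services: $OVERRIDE_ENABLED_SERVICES"
# ...then invoke the devstack-gate wrap script as usual in your job.
```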

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Sunday, July 19, 2015 10:37 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests 
failing with SSH timeout.

Hi Ramy,

Thanks for the suggestion. One more thing I need to ask: since I have set up 
one more CI, is there any way to decide dynamically that only the required 
projects get downloaded and installed during the devstack installation? I 
don't see anything in the devstack-gate scripts that would make this scenario 
achievable.

On Sun, Jul 19, 2015 at 8:38 PM, Asselin, Ramy 
ramy.asse...@hp.com wrote:
Just the export I mentioned:
export DEVSTACK_GATE_NEUTRON=1
Devstack-gate scripts will do the right thing when it sees that set. You can 
see plenty of examples here [1].

Ramy

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n467

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Sunday, July 19, 2015 2:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests 
failing with SSH timeout.

Hi Ramy,

Thanks for the suggestion, but since I am not currently including the neutron 
project, will downloading and enabling it require any additional configuration 
in devstack-gate?

On Sat, Jul 18, 2015 at 11:41 PM, Asselin, Ramy 
ramy.asse...@hp.com wrote:
We ran into this issue as well. I never found the root cause, but I found a 
work-around: use Neutron networking instead of the default nova-network.

If you're using devstack-gate, it's as simple as:
export DEVSTACK_GATE_NEUTRON=1

Then run the job as usual.
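Concretely, the relevant part of the job script would look something like this (a minimal sketch; DEVSTACK_GATE_NEUTRON is the variable from this thread, DEVSTACK_GATE_TEMPEST is the usual companion toggle):

```shell
#!/bin/bash -xe
# Minimal sketch of a devstack-gate job's environment setup.
export DEVSTACK_GATE_NEUTRON=1   # use Neutron instead of nova-network
export DEVSTACK_GATE_TEMPEST=1   # run tempest once the cloud is stacked
echo "neutron=$DEVSTACK_GATE_NEUTRON tempest=$DEVSTACK_GATE_TEMPEST"
# ...the job then clones devstack-gate and runs its wrap script as usual.
```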

Ramy

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Friday, July 17, 2015 9:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing 
with SSH timeout.

Hi Folks,

In my CI I have seen the following tempest test failures for the past couple of days.

• tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario [361.274316s] ... FAILED
• tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern [320.122458s] ... FAILED

[openstack-dev] [nova] v3 Cleanup coming

2015-07-21 Thread Ed Leafe

One of the tasks we have now that the 2.1 microversion API has become
the default is cleaning up the old references to 'v3', and moving all
the v2 code to a separate directory. This is needed because the
current arrangement is confusing, especially to developers new to the
code.

There is a patch that starts the process [1], but since it touches a
whole lot of test code, when it merges it will throw most patches into
a merge conflict. The same thing will happen with each subsequent
patch. Rather than inflict multiple painful merge conflicts, it would
be better to merge all the changes at once and only require one rebase.

Since we have a bunch of the development team together for the
mid-cycle, we're planning on working on this tomorrow, and merging the
patches together. So consider this a courtesy notice that we're gonna
break your stuff. :)

[1] https://review.openstack.org/193725

-- 
Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] backlog of requirements changes

2015-07-21 Thread Robert Collins
There seems to be quite a backlog in openstack/requirements.

http://russellbryant.net/openstack-stats/requirements-reviewers-30.txt

new changes are arriving at roughly 10x the rate at which cores are
managing to review them.

This worries me, and I'd like to help.

How can I best do so?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-21 Thread Neil Jerram
On 21/07/15 01:47, Assaf Muller wrote:

 - Original Message -
 I'm looking for feedback from anyone interested but, in particular, I'd
 like feedback from the following people for varying perspectives:
 Mark McClain (proposed alternate), John Belamaric (IPAM), Ryan Tidwell
 (BGP), Neil Jerram (L3 networks), Aaron Rosen (help understand
 multi-provider networks) and you if you're reading this list of names
 and thinking he forgot me!

 We have been struggling to develop a way to model a network which is
 composed of disjoint L2 networks connected by routers.  The intent of
 this email is to describe the two proposals and request input on the
 two in an attempt to choose a direction forward.  But, first:
 requirements.

 Requirements:

 The network should appear to end users as a single network choice.
 They should not be burdened with choosing between segments.  It might
 interest them that L2 communications may not work between instances on
 this network but that is all.  This has been requested by numerous
 operators [1][4].  It can be useful for external networks and provider
 networks.
 I think that [1] and [4] are conflating the problem statement with the
 proposed solutions, and lacking some lower level details regarding the
 problem statement, which makes it a lot harder to engage in a discussion.

 I'm looking at [4]:
 What I don't see explicitly mentioned is: Does the same CIDR extend across 
 racks,
 or would each rack get its own CIDR(s)?

I think it's the latter, i.e. what you call option (1) below.

  I understand this can differ according to
 the architectural choices you make in your data center, and that changes the
 choices we'd need to make to Neutron in order to satisfy that requirement.

 To clarify, option (1) means that a subnet is contained to a rack. Option (2)
 means that a subnet may span across racks. I don't think we need to change 
 the network/subnet
 model at all to satisfy case (1). Each rack would have its own network/subnet
 (Or perhaps multiple, if more than a single VLAN or other characteristic is 
 desired).
 Each network would be tagged with an AZ (This ties in nicely to the already 
 proposed Neutron AZ spec),
 and the Nova scheduler would become aware of Neutron network AZs. In this 
 model
 you don't want to connect to a network, you want Nova to schedule the VM and 
 then have Nova choose
 the network on that rack. If you want more than a single network in a rack, 
 then there's
 some difference between those networks that could be expressed in tags 
 (Think: Network flavors),
 such as the security zone. You'd then specify a tag that should be satisfied 
 by the
 network that the VM ends up connecting to, so that the tag may be added to 
 the list
 of Nova scheduler filters. Again, this goes back to keeping the Neutron 
 network and subnet
 just as they are but doing some work with AZs, tagging and the Nova scheduler.
 We've known that the Nova scheduler must become Network aware for the past 
 few years,
 perhaps it's time to finally tackle that.

Interesting.  Perhaps we can do something along those lines that will
fly without lots of change in Nova/Neutron interactions:

- allow a Neutron network to have tags associated with it

- when launching a set of VMs, allow specifying a network tag, instead
of a specific network name/ID, with the meaning that each VM can attach
to any network that has that tag.

Longer term the Nova scheduler could become tag-aware, as you suggest,
but until then I think what will happen is that

- Nova will choose a host independently of the network tag

- if it isn't possible for Neutron to bind a port on that host to a
network with the requested tag, it will bounce back to Nova, and Nova
will try the next available host (?)

So, inefficient, but kind of already working.

Effectively, with this model, the tag is replacing Carl's front
network, which nicely side-steps any confusion about which network ID a
port is expected to have.

Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-21 Thread Salvatore Orlando
A few comments inline.

Generally speaking the only thing I'd like to remark is that this use case
makes sense independently of whether you are using overlay, or any other
SDN solution (whatever SDN means to you).

Also, please note that this thread is now split in two - there's a new
branch starting with Ian's post. So perhaps let's make two threads.

On 21 July 2015 at 14:21, Neil Jerram neil.jer...@metaswitch.com wrote:

 On 20/07/15 18:36, Carl Baldwin wrote:
  I'm looking for feedback from anyone interested but, in particular, I'd
  like feedback from the following people for varying perspectives:
  Mark McClain (proposed alternate), John Belamaric (IPAM), Ryan Tidwell
  (BGP), Neil Jerram (L3 networks), Aaron Rosen (help understand
  multi-provider networks) and you if you're reading this list of names
  and thinking he forgot me!
 
  We have been struggling to develop a way to model a network which is
  composed of disjoint L2 networks connected by routers.  The intent of
  this email is to describe the two proposals and request input on the
  two in an attempt to choose a direction forward.  But, first:
  requirements.
 
  Requirements:
 
  The network should appear to end users as a single network choice.
  They should not be burdened with choosing between segments.  It might
  interest them that L2 communications may not work between instances on
  this network but that is all.


It is however important to ensure services like DHCP keep working as usual.
Treating segments as logical networks in their own right is the simplest
solution to achieve this imho.


 This has been requested by numerous
  operators [1][4].  It can be useful for external networks and provider
  networks.
 
  The model needs to be flexible enough to support two distinct types of
  addresses:  1) address blocks which are statically bound to a single
  segment and 2) address blocks which are mobile across segments using
  some sort of dynamic routing capability like BGP or programmatically
  injecting routes in to the infrastructure's routers with a plugin.

 FWIW, I hadn't previously realized (2) here.


A mobile address block translates to a subnet whose network association
might change.
Achieving mobile address blocks does not seem simple to me at all. Route
injection (boring) and BGP might solve the networking aspect of the
problem, but we'd also need coordination with the compute service to ensure
that all the workloads using addresses from the mobile block migrate as well;
unless I've misunderstood the way these mobile address blocks work, I
struggle to see this as a requirement.



 
  Overlay networks are not the answer to this.  The goal of this effort
  is to scale very large networks with many connected ports by doing L3
  routing (e.g. to the top of rack) instead of using a large continuous
  L2 fabric.


As a side note, I find it interesting that overlays were indeed proposed as a
solution to avoid hybrid L2/L3 networks or having to span VLANs across the
core and aggregation layers.


 Also, the operators interested in this work do not want
  the complexity of overlay networks [4].
 
  Proposal 1:
 
  We refined this model [2] at the Neutron mid-cycle a couple of weeks
  ago.  This proposal has already resonated reasonably with operators,
  especially those from GoDaddy who attended the Neutron sprint.  Some
  key parts of this proposal are:
 
  1.  The routed super network is called a front network.  The segments
  are called back(ing) networks.
  2.  Backing networks are modeled as admin-owned private provider
  networks but otherwise are full-blown Neutron networks.
  3.  The front network is marked with a new provider type.
  4.  A Neutron router is created to link the backing networks with
  internal ports.  It represents the collective routing ability of the
  underlying infrastructure.
  5.  Backing networks are associated with a subset of hosts.
  6.  Ports created on the front network must have a host binding and
  are actually created on a backing network when all is said and done.
  They carry the ID of the backing network in the DB.


While the logical model and workflow you describe here makes sense, I have
the impression that:
1) The front network is not a neutron logical network, because it does not
really behave like a network; the only exception is that you can pass its
id to the nova API. To reinforce this, consider that the front network
basically has no ports.
2) From a topological perspective, the front network kind of behaves like
an external network; but it isn't. The front network is not really a common
gateway for all backing networks; it is more like a label attached to the
router which interconnects all the backing networks.
3) More on topology: how can we know that all these segments will always be
connected by a single logical router? Using static routes (or, if one day
BGP becomes a thing), it is already possible to implement multi-segment
networks with L3 connectivity using multiple logical 

[openstack-dev] [security] [docs] Security Guide Freeze and RST migration

2015-07-21 Thread Dillon, Nathaniel
All,

The OpenStack Security Guide is migrating to RST format [1] and with help from 
the docs team we hope to have this completed shortly. We will therefore be 
entering a freeze on all changes coming into the Security Guide until the 
migration is complete, and all future changes will be in the much easier RST 
format.

Progress can be tracked on the etherpad [2]; specific questions can be asked in 
reply to this message or during the Security Guide weekly meeting [3]. An 
announcement will be made when the migration is complete.

Thanks,

Nathaniel

[1] https://bugs.launchpad.net/openstack-manuals/+bug/1463111
[2] https://etherpad.openstack.org/p/sec-guide-rst
[3] https://wiki.openstack.org/wiki/Documentation/SecurityGuide

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-21 Thread Carl Baldwin
On Jul 20, 2015 4:26 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 There are two routed network models:

 - I give my VM an address that bears no relation to its location and
ensure the routed fabric routes packets there - this is very much the
routing protocol method for doing things where I have injected a route into
the network and it needs to propagate.  It's also pretty useless because
there are too many host routes in any reasonably sized cloud.

 - I give my VM an address that is based on its location, which only
becomes apparent at binding time.  This means that the semantics of a port
changes - a port has no address of any meaning until binding, because its
location is related to what it does - and it leaves open questions about
what to do when you migrate.

 Now, you seem to generally be thinking in terms of the latter model,
particularly since the provider network model you're talking about fits
there.  But then you say:

Actually, both.  For example, GoDaddy assigns each VM an IP from the
location-based address blocks and optionally one from the routed,
location-agnostic ones.  I would also like to assign router ports out of the
location-based blocks, which could host floating IPs from the other blocks.

 On 20 July 2015 at 10:33, Carl Baldwin c...@ecbaldwin.net wrote:

 When creating a
 port, the binding information would be sent to the IPAM system and the
 system would choose an appropriate address block for the allocation.

Implicit in both is a need to provide at least a hint at host binding, or to
delay address assignment until binding.  I didn't mention it because my
email was already long.
This has been discussed, but it applies equally to both proposals.

 No, it wouldn't, because creating and binding a port are separate
operations.  I can't give the port a location-specific address on creation
- not until it's bound, in fact, which happens much later.

 On proposal 1: consider the cost of adding a datamodel to Neutron.  It
has to be respected by all developers, it frequently has to be deployed by
all operators, and every future change has to align with it.  Plus either
it has to be generic or optional, and if optional it's a burden to some
proportion of Neutron developers and users.  I accept proposal 1 is easy,
but it's not universally applicable.  It doesn't work with Neil Jerram's
plans, it doesn't work with multiple interfaces per host, and it doesn't
work with the IPv6 routed-network model I worked on.

Please be more specific.  I'm not following your argument here.  My
proposal doesn't really add much new data model.

We've discussed this with Neil at length.  I haven't been able to reconcile
our respective approaches in to one model that works for both of us and
still provides value.  The routed segments model needs to somehow handle
the L2 details of the underlying network.  Neil's model confines L2 to the
port and routes to it.  The two models can't just be squished together
unless I'm missing something.

Could you provide some links so that I can brush up on your ipv6 routed
network model?  I'd like to consider it but I don't know much about it.

 Given that, I wonder whether proposal 2 could be rephrased.

 1: some network types don't allow unbound ports to have addresses, they
just get placeholder addresses for each subnet until they're bound
 2: 'subnets' on these networks are more special than subnets on other
networks.  (More accurately, they don't use subnets.  It's a shame subnets
are core Neutron, because they're pretty horrible and yet hard to replace.)
 3: there's an independent (in an extension?  In another API endpoint?)
datamodel that the network points to and that IPAM refers to to find a port
an address.  Bonus, people who aren't using funky network types can disable
this extension.
 4: when the port is bound, the IPAM is referred to, and it's told the
binding information of the port.
 5: when binding the port, once IPAM has returned its address, the network
controller probably does stuff with that address when it completes the
binding (like initialising routing).
 6: live migration either has to renumber a port or forward old traffic to
the new address via route injection.  This is an open question now, so I'm
mentioning it rather than solving it.

I left out the migration issue from my email also because it also affects
both proposals equally.

 In fact, adding that hook to IPAM at binding plus setting aside a 'not
set' IP address might be all you need to do to make it possible.  The IPAM
needs data to work out what an address is, but that doesn't have to take
the form of existing Neutron constructs.

What about the L2 network for each segment?  I suggested creating provider
networks for these.  Do you have a different suggestion?

What about distinguishing the bound address blocks from the mobile address
blocks?  For example, the address blocks bound to the segments could be
from a private space. A router port may get an address from this private
space and 

Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Fox, Kevin M
Why not stackforge?

Thanks,
Kevin


From: Hayes, Graham
Sent: Tuesday, July 21, 2015 11:53:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

Hi All,

I have created a github org and 2 repos for us to get started in.

https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
as the main code repo.

There are 2 teams (team being GitHub's name for groups): gslb-core and gslb-admin.

Core has read/write access to the repos, and admin can add/remove
projects.

I also created https://github.com/gslb/gslb-specs which will
automatically publish to https://gslb-specs.readthedocs.org

There is also a launchpad project https://launchpad.net/gslb with 2
teams:

gslb-drivers - people who can target bugs / bps
gslb-core - the maintainers of the project

All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB

So, next question - who should be in what groups? I am open to
suggestions... should it be an item for discussion next week?

Thanks,

Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Tang Chen

hi Asselin,

On 07/21/2015 11:52 PM, Asselin, Ramy wrote:


Tang,

#openstack-meeting is for people to come together for officially 
scheduled meetings [1]


Did you try to join the 3rd party meeting on Monday [2]? I was 
chairing the meeting but did not see you. That would be a great forum 
to ask these questions.




Sorry, I joined #openstack-meeting yesterday.

Will try to join 3rd party meeting next Monday.

Otherwise, you can ask in #openstack-infra. If you do ask, remember to 
stay logged in, otherwise you'll miss any responses & people are not 
likely to respond to your question if they see you're not logged in 
(because you'll miss the response). :)




OK, Thanks. :)


Ramy

[1] https://wiki.openstack.org/wiki/Meetings

[2] http://eavesdrop.openstack.org/#Third_Party_Meeting

From: Tang Chen [mailto:tangc...@cn.fujitsu.com]
Sent: Tuesday, July 21, 2015 12:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [CI] How to set a proxy for zuul.

On 07/21/2015 03:35 PM, Abhishek Shrivastava wrote:

Hi Tang,

Can you please send me the whole job snippet you wrote.


In /etc/jenkins_jobs/config/examples.yaml, that's all:

- job-template:
    name: 'noop-check-communication'
    node: '{node}'

    builders:
      - shell: |
          #!/bin/bash -xe
          touch /tmp/noop-check-communication
          echo Hello world, this is the {vendor} Testing System
      - link-logs  # In macros.yaml from os-ext-testing

#    publishers:
#      - devstack-logs  # In macros.yaml from os-ext-testing
#      - console-log  # In macros.yaml from os-ext-testing



noop-check-communication was set up by default. I didn't change 
anything else.



BTW, I tried to ask this in #openstack-meeting IRC.
But no one seems to be active. :)

Thanks.



On Tue, Jul 21, 2015 at 12:52 PM, Tang Chen
tangc...@cn.fujitsu.com wrote:

Hi Asselin, Abhishek,

I got some problems when I was trying to write a jenkins job.

I found that when zuul received the notification from gerrit,
jenkins didn't run the test.

I added something to noop-check-communication in
/etc/jenkins_jobs/config/ examples.yaml,
just touched a file under /tmp.

- job-template:
    name: 'noop-check-communication'
    node: '{node}'

    builders:
      - shell: |
          #!/bin/bash -xe
          touch /tmp/noop-check-communication  # I added something here.
          echo Hello world, this is the {vendor} Testing System
      - link-logs  # In macros.yaml from os-ext-testing

And I flushed the jobs using jenkins-jobs --flush-cache update
/etc/jenkins_jobs/config/.
I can build the job in the jenkins web UI, and the file was touched.


But when I send a patch, the file is not touched, even though the CI
itself really works; I can see it on the web site
(https://review.openstack.org/#/c/203941/).

What do you think of this?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com




__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Nested Quota Driver and policy.json changes

2015-07-21 Thread Vilobh Meshram
Hi,

While developing the Nested Quota Driver for Cinder, the following
restrictions apply when performing show/update/delete:

1. show: Only a user who is an admin, or an admin in the parent or root
project, should be able to view the quota of the leaf projects.

2. update: Only a user who is an admin in the parent or root project should
be able to perform an update.

3. delete: Only a user who is an admin in the parent or root project should
be able to perform a delete.

In order to get the parent information or the child list in a nested
hierarchy, calls need to be made to keystone. So, as part of these changes,
do we want to introduce 2 new roles in cinder, one for project_admin and
one for root_admin, so that the token can be scoped at the project/root
level and only the operations permissible at the respective levels, as
described above, are allowed?

For example  :-

A
 |
B
 |
C

cinder quota-update C (should only be permissible from B or A)

This can be achieved either by:
1. Introducing a project_admin or cloud_admin rule in policy.json and later
populating [1] with the respective targets [2][3]. This minimises code
changes and gives operators the freedom to modify policy.json and tune the
behaviour accordingly.
2. Not introducing these 2 roles in policy.json, and instead handling this
with additional code changes and extra logic; but with this option we can
go at most 1 level up the hierarchy, since fetching further parents would
require a keystone call.

I need opinions on which option would be more helpful in the long term.

-Vilobh
[1]
https://github.com/openstack/cinder/blob/master/cinder/api/contrib/quotas.py#L33
[2]
https://github.com/openstack/cinder/blob/master/cinder/api/extensions.py#L379
[3]
https://github.com/openstack/cinder/blob/master/cinder/api/contrib/quotas.py#L109
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting Agenda for July 21

2015-07-21 Thread Matt Fischer
Thanks everyone for attending.

The minutes are here:

http://eavesdrop.openstack.org/meetings/puppet/2015/puppet.2015-07-21-14.59.html


Please be sure to work on the mid-cycle planning over the next couple of
weeks too:

https://etherpad.openstack.org/p/puppet-liberty-mid-cycle

On Mon, Jul 20, 2015 at 4:17 PM, Matt Fischer m...@mattfischer.com wrote:

 A late notice but here's the agenda for tomorrow's meeting. Emilien is out
 so I will be running it. There's not a big agenda so if you have bugs you'd
 like to go into please bring them.

 https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150721

 See you tomorrow

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Asselin, Ramy
Tang,

#openstack-meeting is for people to come together for officially scheduled 
meetings [1]
Did you try to join the 3rd party meeting on Monday[2]? I was chairing the 
meeting but did not see you. That would be a great forum to ask these questions.

Otherwise, you can ask in #openstack-infra. If you do ask, remember to stay 
logged in, otherwise you'll miss any responses  people are not likely to 
respond to your question if they see you're not logged in (because you'll miss 
the response).

Ramy

[1] https://wiki.openstack.org/wiki/Meetings
[2] http://eavesdrop.openstack.org/#Third_Party_Meeting



From: Tang Chen [mailto:tangc...@cn.fujitsu.com]
Sent: Tuesday, July 21, 2015 12:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [CI] How to set a proxy for zuul.


On 07/21/2015 03:35 PM, Abhishek Shrivastava wrote:
Hi Tang,

Can you please send me the whole job snippet you wrote.

In /etc/jenkins_jobs/config/examples.yaml, that's all.

- job-template:
    name: 'noop-check-communication'
    node: '{node}'

    builders:
      - shell: |
          #!/bin/bash -xe
          touch /tmp/noop-check-communication
          echo Hello world, this is the {vendor} Testing System
      - link-logs  # In macros.yaml from os-ext-testing

#    publishers:
#      - devstack-logs  # In macros.yaml from os-ext-testing
#      - console-log  # In macros.yaml from os-ext-testing



noop-check-communication was set up by default. I didn't change anything else.


BTW, I tried to ask this in #openstack-meeting IRC.
But no one seems to be active. :)

Thanks.




On Tue, Jul 21, 2015 at 12:52 PM, Tang Chen 
tangc...@cn.fujitsu.commailto:tangc...@cn.fujitsu.com wrote:
Hi Asselin, Abhishek,

I got some problems when I was trying to write a jenkins job.

I found that when zuul received the notification from gerrit, jenkins didn't 
run the test.

I added something to noop-check-communication in
/etc/jenkins_jobs/config/examples.yaml -
it just touches a file under /tmp.

- job-template:
    name: 'noop-check-communication'
    node: '{node}'

    builders:
      - shell: |
          #!/bin/bash -xe
          touch /tmp/noop-check-communication  # I added something here.
          echo Hello world, this is the {vendor} Testing System
      - link-logs  # In macros.yaml from os-ext-testing

And I flushed the jobs using jenkins-jobs --flush-cache update
/etc/jenkins_jobs/config/.
I can build the job in the Jenkins web UI, and the file was touched.


But when I send a patch, the file is not touched, even though the CI really works.
I can see it on the web site. ( https://review.openstack.org/#/c/203941/ )

What do you think of this?
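For reference, JJB expands job-template placeholders such as {vendor} and {node} with Python-style {brace} substitution when it generates the job XML. A minimal runnable sketch of that expansion (the vendor value here is made up):

```python
# Minimal sketch of how Jenkins Job Builder fills in job-template
# placeholders such as {vendor}: it is essentially str.format() applied
# to the template text. The vendor name below is illustrative only.
template = """#!/bin/bash -xe
touch /tmp/noop-check-communication
echo Hello world, this is the {vendor} Testing System
"""

def render(template, **params):
    """Substitute JJB-style {placeholder} variables with concrete values."""
    return template.format(**params)

rendered = render(template, vendor="Example")
print(rendered)
```

If the rendered builder looks right but the file still does not appear, one possible explanation is that the job ran on the configured '{node}' slave, so /tmp/noop-check-communication would be touched on that slave rather than on the machine being checked.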



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks  Regards,
Abhishek
Cloudbyte Inc.http://www.cloudbyte.com




__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribemailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] backlog of requirements changes

2015-07-21 Thread Dave Walker
On 21 July 2015 at 16:17, Robert Collins robe...@robertcollins.net wrote:
 There seems to be quite a backlog in openstack/requirements.

 http://russellbryant.net/openstack-stats/requirements-reviewers-30.txt

 there are roughly 10x as many new changes as there are reviews the cores are
 managing to do.

 This worries me, and I'd like to help.

 How can I best do so?


It seems that last year I was silently dropped from requirements-core,
and so I've been less interested in providing trivial +1s, which IMO
rarely add much value to this project. Looking at the stats, I'm
still healthily represented.

If I'm re-added, I'll gladly help more with reviews.

Thanks

--
Kind Regards,
Dave Walker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Service Chain project IRC meeting minutes - 07/16/2015

2015-07-21 Thread Cathy Zhang
Hi Wei,

Yes, we will have the project meeting on IRC every week. I will send out a
cancellation notice to openstack-dev if a meeting is canceled.
You are welcome to join!

Cathy

From: Vikram Choudhary [mailto:viks...@gmail.com]
Sent: Tuesday, July 21, 2015 3:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Service Chain project IRC meeting 
minutes - 07/16/2015

Hi Wei,

It happens every week unless cancelled by the chair.

Thanks
Vikram

On Tue, Jul 21, 2015 at 3:10 PM, Damon Wang 
damon.dev...@gmail.commailto:damon.dev...@gmail.com wrote:
Hi,

Will the service chaining project meeting be held this week? I'd like to join
:-D

Wei Wang

2015-07-17 2:09 GMT+08:00 Cathy Zhang 
cathy.h.zh...@huawei.commailto:cathy.h.zh...@huawei.com:
Hi Everyone,

Thanks for joining the service chaining project meeting on 7/16/2015. Here is 
the link to the meeting logs:
http://eavesdrop.openstack.org/meetings/service_chaining/2015/

Thanks,
Cathy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] backlog of requirements changes

2015-07-21 Thread Sean Dague
On 07/21/2015 12:01 PM, Joshua Harlow wrote:
 Count me in too,
 
 I can help, although not exactly sure what helping entails (requirements
 reviews don't exactly feel complex enough to really need tons of
 reviewers; it's not like they are some complex algorithm that may be
 buggy...).
 
 Robert Collins wrote:
 There seems to be quite a backlog in openstack/requirements.

 http://russellbryant.net/openstack-stats/requirements-reviewers-30.txt

 there are roughly 10x as many new changes as there are reviews the cores are
 managing to do.

 This worries me, and I'd like to help.

 How can I best do so?

 -Rob

I'd say review everything outstanding. And we can expand the group over
time.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [requirements] backlog of requirements changes

2015-07-21 Thread Joshua Harlow

Count me in too,

I can help, although not exactly sure what helping entails (requirements
reviews don't exactly feel complex enough to really need tons of
reviewers; it's not like they are some complex algorithm that may be
buggy...).


Robert Collins wrote:

There seems to be quite a backlog in openstack/requirements.

http://russellbryant.net/openstack-stats/requirements-reviewers-30.txt

there are roughly 10x as many new changes as there are reviews the cores are
managing to do.

This worries me, and I'd like to help.

How can I best do so?

-Rob



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] backlog of requirements changes

2015-07-21 Thread Sean Dague
On 07/21/2015 10:17 AM, Robert Collins wrote:
 There seems to be quite a backlog in openstack/requirements.
 
 http://russellbryant.net/openstack-stats/requirements-reviewers-30.txt
 
 there are roughly 10x as many new changes as there are reviews the cores are
 managing to do.
 
 This worries me, and I'd like to help.
 
 How can I best do so?

There are currently only about 20 changes that are in a positive state -
https://review.openstack.org/#/q/requirements+status:open+label:Verified%253E%253D1%252Cjenkins+NOT+label:Workflow%253C%253D-1+NOT+label:Code-Review%253C%253D-1,n,z

I just did a quick run through, typically I don't look at this more than
once a week because they are usually not time critical unless there is a
gate break.

-Sean

-- 
Sean Dague
http://dague.net





[openstack-dev] New gate blocker: 1476770

2015-07-21 Thread Matt Riedemann
Grenade is blowing up on an AttributeError in Glance; it just started 
today, so it must be a new library release.


Bug is tracked here:

https://bugs.launchpad.net/glance/+bug/1476770

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Zabbix in deployment tasks

2015-07-21 Thread Stanislaw Bogatkin
Actually, I didn't participate in that process much - I just reviewed the
plugin a couple of times, and as far as I know we had commits that deleted
Zabbix from current Fuel.
There is bug about that: https://bugs.launchpad.net/fuel/+bug/1455664
There is a review: https://review.openstack.org/#/c/182615/

Seems that it should be resolved and merged to have zabbix code actually
deleted from current master.

On Thu, Jul 16, 2015 at 1:29 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 I thought it was done...
 Stas - do you know anything about it?

 On Thu, Jul 16, 2015 at 9:18 AM Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 Hi,

 Working on granular deployment, I realized we still call zabbix.pp in
 deployment tasks. As far as I know zabbix was moved to plugin. Should we
 remove zabbix from
 1. Deployment graph
 2. fixtures
 3. Tests
 4. Any other places

 Are we going to clean up zabbix code as part of migration to plugin?

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Mike Scherbakov
 #mihgen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Question about unique default hostname for node

2015-07-21 Thread Sergii Golovatiuk
node-uuid is terrible from a UX perspective. Ask support people whether
they are comfortable ssh'ing into such nodes or reading the name out in a phone
conversation with a customer. If we cannot validate the FQDN of the hostname, I
would slip this feature to the next release, where we can pay more attention to
the details.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Zabbix in deployment tasks

2015-07-21 Thread Sergii Golovatiuk
https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/zabbix/tasks.yaml

As far as I can see, Zabbix is still present in the deployment graph, so it's
a bug ;(

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Tang Chen


On 07/21/2015 11:42 PM, Asselin, Ramy wrote:


Hi Tang,

Please use a new thread for this new question.  I'd like to keep the 
current thread focused on How to set a proxy for zuul.




Sure, will start a new thread if I cannot get it through.

Thanks. :)



Ramy

*From:*Tang Chen [mailto:tangc...@cn.fujitsu.com]
*Sent:* Tuesday, July 21, 2015 12:23 AM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] [CI] How to set a proxy for zuul.

Hi Asselin, Abhishek,

I got some problems when I was trying to write a jenkins job.

I found that when zuul received the notification from gerrit, jenkins 
didn't run the test.


I added something to noop-check-communication in 
/etc/jenkins_jobs/config/examples.yaml,

just touched a file under /tmp.

- job-template:
    name: 'noop-check-communication'
    node: '{node}'

    builders:
      - shell: |
          #!/bin/bash -xe
          touch /tmp/noop-check-communication  # I added something here.
          echo Hello world, this is the {vendor} Testing System
      - link-logs  # In macros.yaml from os-ext-testing

And I flushed the jobs, using jenkins-jobs --flush-cache update 
/etc/jenkins_jobs/config/.

I can build the job in jenkins web UI, and the file was touched.


But when I send a patch, the file is not touched, even though the CI really works.
I can see it on the web site. ( https://review.openstack.org/#/c/203941/ )

What do you think of this?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [CI] Jenkins jobs are not executed when setting up a new CI system.

2015-07-21 Thread Tang Chen

Hi all,

When I send a patch to gerrit, my zuul is notified, but jenkins jobs are 
not run.


My CI always reports the following error:

Merge Failed.

This change was unable to be automatically merged with the current state of the 
repository. Please rebase your change and upload a new patchset.

I think, because the patch cannot be merged, the jobs are not run.

Referring to https://www.mediawiki.org/wiki/Gerrit/Advanced_usage, I did update
my master branch and made sure it is up-to-date. But it doesn't work. And other
CIs from other companies didn't report this error.


And also, when zuul tries to get the patch from gerrit, it executes:

gerrit query --format json --all-approvals --comments --commit-message 
--current-patch-set --dependencies --files --patch-sets --submit-records 204337


When I try to execute it myself, it reports: Permission denied (publickey).

I updated my ssh key, and uploaded the new public key to gerrit, but it doesn't 
work.


Does anyone have any idea what's going on here ?

Thanks.
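On the query side: gerrit query --format json emits one JSON object per line, followed by a final stats row, which is roughly how zuul consumes it. A small runnable sketch of parsing that output (the sample text below is invented for illustration; a real run goes over SSH with the zuul user's key, which is why a missing or wrong key yields "Permission denied (publickey)"):

```python
import json

# Hedged sketch: parse the line-oriented JSON that `gerrit query
# --format json` produces. Each line is one JSON object; the final line
# is a "stats" row. The sample output below is invented.
sample_output = """\
{"project": "openstack-dev/sandbox", "number": "204337", "status": "NEW"}
{"type": "stats", "rowCount": 1, "runTimeMilliseconds": 12}
"""

def parse_query(raw):
    """Split gerrit query output into change records and the stats row."""
    rows = [json.loads(line) for line in raw.splitlines() if line.strip()]
    changes = [r for r in rows if r.get("type") != "stats"]
    stats = next((r for r in rows if r.get("type") == "stats"), None)
    return changes, stats

changes, stats = parse_query(sample_output)
print(changes[0]["number"], stats["rowCount"])
```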





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Abandon changesets which hang for a while without updates

2015-07-21 Thread Mike Scherbakov
Folks,
I'll push DevOps to run the script. However, what we need is to just go
ahead and clean up: manually abandon what is no longer relevant, and provide a
comment.

Please start with your patches.

On Thu, Jul 16, 2015 at 11:08 PM Oleg Gelbukh ogelb...@mirantis.com wrote:

 Nicely put, Doug, you gave me laughs :)

 I can't see how a CR could hang for a month without anyone paying
 attention if it's worth merging. If this really happens (which I'm not aware
 of), auto-abandon definitely won't make things any worse.

 --
 Best regards,
 Oleg Gelbukh

 On Fri, Jul 17, 2015 at 6:10 AM, Doug Wiegley 
 doug...@parksidesoftware.com wrote:

 Just adding an experience from another project, Neutron.

 We had similar debates, and prepping for the long apocalyptic winter of
 changeset death, Kyle decimated the world and ran the abandon script. The
 debates were far more intense than the reality, and my large stockpile of
 Rad-X and Nuka Cola went to waste.

 Every few weeks, I get a few emails of things being abandoned. And if I
 care about something, mine or not, I click through and tap ‘Restore’. If
 one person in the entire community can’t be bothered to click one button,
 I’m not sure how it’d ever be kept up-to-date, much less merge.

 Thanks,
 doug


 On Jul 16, 2015, at 8:36 PM, Dmitry Borodaenko dborodae...@mirantis.com
 wrote:

 I'm with Stanislaw on this one: abandoning reviews just to make numbers
 *look* better will accomplish nothing.

 The only benefit I can see is cleaning up reviews that we *know* don't
 need to be considered, so that it's easier for reviewers to find the
 reviews that still need attention. I don't see this as that much of a
 problem, finding stuff to review in Fuel Review Inbox [0] is not hard at
 all.

 [0] https://wiki.openstack.org/wiki/Fuel#Development_related_links

 And the state of our review backlog is such that it's not safe to
 auto-abandon reviews without looking at them, and if a contributor has
 spent time looking at a review, abandoning it manually is one click away.

 If we do go with setting up an auto-abandon rule, it should be extremely
 conservative, for example: CR has a negative vote from a core reviewer AND
 there were no comments or positive votes from anyone after that AND it has
 not been touched in any way for 2 months.

 On Wed, Jul 15, 2015 at 5:10 PM Mike Scherbakov mscherba...@mirantis.com
 wrote:

 Folks,
 let's execute here. Numbers are still large. Did we have a chance to
 look over the whole queue?

 Can we go ahead and abandon changes having -1 or -2 from reviewers for
 over a month or so?
 I'm all for just following standard OpenStack process [1], and then
 change it only if there is good reason for it.

 [1] https://wiki.openstack.org/wiki/Puppet#Patch_abandonment_policy


 On Thu, Jul 9, 2015 at 6:27 PM Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Two weeks seems too short to me. We can easily be in a situation where a fix
 for a medium bug is done, but SCF starts. And the gap between SCF and release
 can easily be more than a month. So, two months seems okay to me if we're
 speaking about forcibly applying auto-abandon by majority vote. And I'm
 personally against such an innovation at all.

 On Thu, Jul 9, 2015 at 5:37 PM, Davanum Srinivas dava...@gmail.com
 wrote:

 That's a very good plan (Initial feedback/triage) Mike.

 thanks,
 dims

 On Thu, Jul 9, 2015 at 3:23 PM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
  +1 for just reusing existing script, and adjust it on the way. No
 need to
  immediately switch from infinite time to a couple of weeks, we can
 always
  adjust it later. But 1-2 month should be a good start already.
 
  Our current stats [1] look just terrible. Before we enable an
 auto-abandon,
  we need to go every single patch first, and review it / provide
 comment to
  authors. The idea is not to abandon some good patches, and not to
 offend
  contributors...
 
  Let's think how we can approach it. Should we have core reviewers to
 check
  their corresponding components?
 
  [1] http://stackalytics.com/report/reviews/fuel-group/open
 
  On Wed, Jul 8, 2015 at 1:13 PM Sean M. Collins s...@coreitpro.com
 wrote:
 
  Let's keep it at 4 weeks without comment, and Jenkins failed -
 similar
  to the script that Kyle Mestery uses for Neutron. In fact, we could
  actually just use his script ;)
 
 
 
 https://github.com/openstack/neutron/blob/master/tools/abandon_old_reviews.sh
  --
  Sean M. Collins
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  --
  Mike Scherbakov
  #mihgen
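For reference, the selection logic behind a script like the one linked above presumably boils down to "old enough and carrying a negative review"; a small runnable sketch of that filter (the field names are illustrative, not the real Gerrit schema):

```python
from datetime import datetime, timedelta

# Hedged sketch of auto-abandon selection: keep only changes whose last
# update is older than a cutoff and that carry a negative review.
# Field names here are illustrative, not the actual Gerrit API schema.
def stale_changes(changes, days=60, now=None):
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    return [c for c in changes
            if c["last_updated"] < cutoff and c["code_review"] < 0]

now = datetime(2015, 7, 21)
changes = [
    {"id": "I1", "last_updated": datetime(2015, 3, 1), "code_review": -1},
    {"id": "I2", "last_updated": datetime(2015, 7, 1), "code_review": -1},
    {"id": "I3", "last_updated": datetime(2015, 2, 1), "code_review": +1},
]
print([c["id"] for c in stale_changes(changes, days=60, now=now)])
```

Making the cutoff and the negative-vote condition explicit parameters like this is also what makes the "2 weeks vs. 2 months" debate above a one-line configuration change.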
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 

Re: [openstack-dev] [nova] v3 Cleanup coming

2015-07-21 Thread Alex Xu
Cool! Thanks for the work!

2015-07-21 22:51 GMT+08:00 Ed Leafe e...@leafe.com:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 One of the tasks we have now that the 2.1 microversion API has become
 the default is cleaning up the old references to 'v3', and moving all
 the v2 code to a separate directory. This is needed because the
 current arrangement is confusing, especially to developers new to the
 code.

 There is a patch that starts the process [1], but since it touches a
 whole lot of test code, when it merges it will throw most patches into
 a merge conflict. The same thing will happen with each subsequent
 patch. Rather that inflict multiple painful merge conflicts, it would
 be better to merge all the changes at once, and only require one rebase.

 Since we have a bunch of the development team together for the
 mid-cycle, we're planning on working on this tomorrow, and merging the
 patches together. So consider this a courtesy notice that we're gonna
 break your stuff. :)

 [1] https://review.openstack.org/193725

 - --

 - -- Ed Leafe
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2
 Comment: GPGTools - https://gpgtools.org
 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

 iQIcBAEBCgAGBQJVrlx9AAoJEKMgtcocwZqL9SQP/3JMFtpOPNRlndap3pZqfpqB
 hdvYr36Qq9Yy3CNSGPQQ3a3Pj06/AVWsugtfnL5lMh7fkRmaTbCSMs76fKPyVlj1
 MJTBPM29scKBiX81H1hl8IBHCzPz1BCgyFIqZn9hUfThbI9IcEZED/lchDF4iG1o
 IMMre0Tda0QYtqAJCkLdRoeK0z1EKl2+BJMCet0OdwbfoHemGdGWLXa9IKmZSdkv
 I3nznygIeWFrQ3NzDyh6TCV2rTLbyQL3hB+LU5nMglK1wEjG+DqE52lKAAeWBoIU
 5o+9lQMV7AFbIK/Zq56iGdhe+Oj7vZLlC/rbmcftiVswg6+iCazSHlk9KKFqu0Sy
 Lx4AIaSXeetZjt/LKSHfMyjopXoA3RhrGbEc0FTNzJLYF7IB2N/Ji5tvGTwrGZIh
 uACJHkOZRJfrJDsgnx1xKnzTwreXp90Z6l0wnEBjDZLPpPggo+sYoHn34IOVZ2eO
 SYqI4GEYe1mKu3ZzHq2w9geH2EK1xKm3zEasuh9cYeuZcA7VsaGwrKbmx4vuPuSy
 a5v3ZYZeE6n083F9KcxkJDTuUqvgGe6887i6y+HA5WItbgSp1vq45OxSe4yp16F7
 kGfk7HfLZHE5U6u8qx1fZdM7qY6+pJ1wFFnM7TRlQ4ld1QTFIaBd3gRQkBQO9RnL
 Y977GST224iGvKYtTQGu
 =N/Y2
 -END PGP SIGNATURE-

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][barbican] Setting a live debug session time

2015-07-21 Thread Ade Lee
So, as discussed on #irc, I plan to:

1. Check with folks who are running in a devstack environment as to
where/how their barbican.conf file is configured.

2. Will keep you updated as to the progress of dogtag packaging in
Ubuntu/Debian.  Currently, there are a couple of bugs due to changes in
tomcat.  These should be resolved this week - and Dogtag will be back in
sid.

3. Will send you the script that is used to configure Barbican with Dogtag
in the Barbican Dogtag gate.

Ade

On Tue, 2015-07-21 at 09:05 +0900, Madhuri wrote:
 Hi Alee,
 
 Thank you for showing up for help.
 
 The proposed timing suits me. It would be 10:30 am JST for me.
 
 I am madhuri on #freenode.
 Will we be discussing on #openstack-containers?
 
 Sdake,
 Thank you for setting up this.
 
 Regards,
 Madhuri
 
 
 On Mon, Jul 20, 2015 at 11:26 PM, Ade Lee a...@redhat.com wrote:
 Madhuri,
 
 I understand that you are somewhere in APAC.  Perhaps it would
 be best
 to set up a debugging session on Tuesday night  -- at 9:30 pm
 EST
 
 This would correspond to 01:30:00 a.m. GMT (Wednesday), which
 should
 correspond to sometime in the morning for you.
 
 We can start with the initial goal of getting the snake oil
 plugin
 working for you, and then see where things are going wrong in
 the Dogtag
 install.
 
 Will that work for you?  What is your IRC nick?
 Ade
 
 (ps. I am alee on #freenode and can be found on either
 #openstack-barbican or #dogtag-pki)
 
  01:30:00 a.m. Tuesday July 21, 2015 in GMT
 On Fri, 2015-07-17 at 14:39 +, Steven Dake (stdake) wrote:
  Madhuri,
 
 
  Alee is in EST timezone (gmt-5 IIRC).  Alee will help you
 get barbican
  rolling.  Can you two folks set up a time to chat on irc on
 Monday or
  tuesday?
 
 
  Thanks
  -steve
 
 
 
 
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Nested Quota Driver and policy.json changes

2015-07-21 Thread Vilobh Meshram
Hi,

While developing the Nested Quota Driver for Cinder, the following
restrictions apply when performing show/update/delete:

1. show: only a user who is an admin of the project, of its parent, or of the
root project should be able to view the quota of the leaf projects.

2. update: only an admin of the parent or of the root project should be
able to perform an update.

3. delete: only an admin of the parent or of the root project should be
able to perform a delete.

In order to get the parent information or the child list in a nested
hierarchy, calls need to be made to Keystone. So, as part of these changes, do
we want to introduce two new roles in Cinder, one for project_admin and one for
root_admin, so that the token can be scoped at the project/root level and only
the operations permissible at the respective levels, as described above, can
be allowed?

For example  :-

A
 |
B
 |
C

cinder quota-update C (should only be permissible from B or A)

This can be achieved either by:
1. Introducing a project_admin or cloud_admin rule in policy.json and later
populating [1] with the respective targets [2][3]. This minimises code changes
and gives operators the freedom to modify policy.json and tune the behavior
accordingly.
2. Not introducing these two roles in policy.json, and instead handling this
with additional code changes and extra logic. With this option we can go at
most one level up the hierarchy, since fetching further parents requires a
Keystone call per level.

Opinions on which option is more helpful in the long term are welcome.

-Vilobh
[1]
https://github.com/openstack/cinder/blob/master/cinder/api/contrib/quotas.py#L33
[2]
https://github.com/openstack/cinder/blob/master/cinder/api/extensions.py#L379
[3]
https://github.com/openstack/cinder/blob/master/cinder/api/contrib/quotas.py#L109
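To make option 2 concrete: a minimal, hypothetical sketch of the permission check it implies, where get_parent() stands in for the Keystone lookup (one call per hierarchy level, which is exactly the cost noted above):

```python
# Hedged sketch of the parent-chain check behind option 2. PARENTS and
# get_parent() are stand-ins for Keystone project data and lookups; the
# A -> B -> C hierarchy matches the example in the message above.
PARENTS = {"C": "B", "B": "A", "A": None}

def get_parent(project):
    """Hypothetical stand-in for a Keystone parent-project lookup."""
    return PARENTS.get(project)

def may_update_quota(caller_admin_of, target):
    """True if the caller is an admin of any ancestor of `target`."""
    parent = get_parent(target)
    while parent is not None:
        if parent in caller_admin_of:
            return True
        parent = get_parent(parent)  # one Keystone round-trip per level
    return False

# quota-update on C is allowed from A or B, but not from C itself.
print(may_update_quota({"A"}, "C"), may_update_quota({"C"}, "C"))
```

Option 1 moves this same decision into policy.json rules evaluated per request, which is why it needs far less code but pushes the hierarchy knowledge into the policy targets.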
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-21 Thread Ian Wells
On 21 July 2015 at 07:52, Carl Baldwin c...@ecbaldwin.net wrote:

  Now, you seem to generally be thinking in terms of the latter model,
 particularly since the provider network model you're talking about fits
 there.  But then you say:

 Actually, both.  For example, GoDaddy assigns each VM an IP from the
 location-based address blocks and optionally one from the routed,
 location-agnostic ones.  I would also like to assign router ports out of the
 location-based blocks, which could host floating IPs from the other blocks.

Well, routed IPs that are not location-specific are no different to normal
ones, are they?  Why do they need special work that changes the API?

  On 20 July 2015 at 10:33, Carl Baldwin c...@ecbaldwin.net wrote:
 
  When creating a
  port, the binding information would be sent to the IPAM system and the
  system would choose an appropriate address block for the allocation.

 Implicit in both is a need to provide at least a hint at host binding.
 Or, delay address assignment until binding.  I didn't mention it because my
 email was already long.
 This is something we discussed, but it applies equally to both proposals.

No, it doesn't - if the IP address is routed and not relevant to the
location of the host then yes, you would want to *inject a route* at
binding, but you wouldn't want to delay address assignment till binding
because it's location-agnostic.

  No, it wouldn't, because creating and binding a port are separate
 operations.  I can't give the port a location-specific address on creation
 - not until it's bound, in fact, which happens much later.
 
  On proposal 1: consider the cost of adding a datamodel to Neutron.  It
 has to be respected by all developers, it frequently has to be deployed by
 all operators, and every future change has to align with it.  Plus either
 it has to be generic or optional, and if optional it's a burden to some
 proportion of Neutron developers and users.  I accept proposal 1 is easy,
 but it's not universally applicable.  It doesn't work with Neil Jerram's
 plans, it doesn't work with multiple interfaces per host, and it doesn't
 work with the IPv6 routed-network model I worked on.

 Please be more specific.  I'm not following your argument here.  My
 proposal doesn't really add much new data model.

My point is that there's a whole bunch of work there to solve the question
of 'how do I allocate addresses to a port when addresses are location
specific' that assumes that there's one model for location specific
addresses that is a bunch of segments with each host on one segment.  I can
break this model easily.  Per the previous IPv6 proposal, I might choose my
address with more care than just by its location, to contain extra
information I care about.  I might have multiple segments connected to one
host where either segment will do and the scheduler should choose the most
useful one.

If this whole model is built using reusable-ish concepts like networks, and
adds a field to ports, then basically it ends up in, or significantly
affecting, the model of core Neutron.  Every Neutron developer to come will
have to read it, understand it, and not break it.  Depending on how it's
implemented, every operator that comes along will have to deploy it and may
be affected by bugs in it (though that depends on precisely how much ends
up as an extension).

If we find a more general purpose interface - and per above, mostly the
interface is 'sometimes I want to pick my address only at binding' plus
'IPAM and address assignment is more complex than the subnet model we have
today' then potentially these datamodels can be specific to IPAM - and not
general purpose 'we have these objects around already' things we're reusing
- and with a clean interface the models may not even be present as code
into a deployed system, which is the best proof they are not introducing
bugs.

Every bit of cruft we write, we have to carry.  It makes more sense to make
the core extensible for this case, in my mind, than it does to introduce it
into the core.

 We've discussed this with Neil at length.  I haven't been able to
 reconcile our respective approaches in to one model that works for both of
 us and still provides value.

QED.


 Could you provide some links so that I can brush up on your ipv6 routed
 network model?  I'd like to consider it but I don't know much about it.


The best writeup I have is
http://datatracker.ietf.org/doc/draft-baker-openstack-ipv6-model/?include_text=1
(don't judge it by the place it was filed). But the concept was that (a)
VMs received v6 addresses, (b) they were location specific, (c) each had
their own L2 segment (per Neil's idea, and really the ultimate use of this
model), and (d) there was information in the address additional to just its
location and the entropy of choosing a random address.


  1: some network types don't allow unbound ports to have addresses, they
 just get placeholder addresses for each subnet until they're bound
  2: 'subnets' 

Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-21 Thread John Belamaric
Wow, a lot to digest in these threads. Let me summarize my understanding of 
the two proposals; let me know whether I get this right. There are a couple of 
problems that need to be solved:

 a. Scheduling based on host reachability to the segments
 b. Floating IP functionality across the segments. I am not sure I am clear on 
this one but it sounds like you want the routers attached to the segments to 
advertise routes to the specific floating IPs. Presumably then they would do 
NAT or the instance would assign both the fixed IP and the floating IP to its 
interface?

In Proposal 1, (a) is solved by associating segments to the front network via a 
router - that association is used to provide a single hook into the existing 
API that limits the scope of segment selection to those associated with the 
front network. (b) is solved by tying the floating IP ranges to the same front 
network and managing the reachability with dynamic routing.

In Proposal 2, (a) is solved by tagging each network with some meta-data that 
the IPAM system uses to make a selection. This implies an IP allocation request 
that passes something other than a network/port to the IPAM subsystem. This is 
fine from the IPAM point of view, but there is no corresponding API for this 
right now. To solve (b) either the IPAM system has to publish the routes or the 
higher level management has to ALSO be aware of the mappings (rather than just 
IPAM).

To throw some fuel on the fire, I would argue also that (a) is not sufficient 
and address availability needs to be considered as well (as described in [1]). 
Selecting a host based on reachability alone will fail when addresses are 
exhausted. Similarly, with (b) I think there needs to be consideration during 
association of a floating IP to the effect on routing. That is, rather than a 
huge number of host routes it would be ideal to allocate the floating IPs in 
blocks that can be associated with the backing networks (though we would want 
to be able to split these blocks as small as a /32 if necessary - but avoid 
it/optimize as much as possible).
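To make the interplay of reachability and address availability concrete, here is a toy sketch (hypothetical; the SEGMENTS table and pick_segment are invented for illustration and are not Neutron or IPAM code) of choosing a backing segment for a port, given the host it will be bound to:

```python
# Hypothetical sketch: select an address block that is both reachable from
# the target host AND still has free addresses. Illustrates why scheduling
# on reachability alone fails once a segment's addresses are exhausted.
from ipaddress import ip_network

# Toy IPAM state: each backing segment maps to the hosts that can reach it,
# its CIDR, and a count of already-allocated addresses.
SEGMENTS = {
    "segment-1": {"hosts": {"host-a", "host-b"}, "cidr": "10.0.1.0/29", "used": 6},
    "segment-2": {"hosts": {"host-c"}, "cidr": "10.0.2.0/24", "used": 10},
}

def pick_segment(host):
    """Return a segment reachable from `host` that still has free addresses."""
    for name, seg in SEGMENTS.items():
        net = ip_network(seg["cidr"])
        # usable = total minus network/broadcast minus already allocated
        free = net.num_addresses - 2 - seg["used"]
        if host in seg["hosts"] and free > 0:
            return name
    raise RuntimeError("no reachable segment with free addresses for %s" % host)

print(pick_segment("host-c"))  # segment-2
```

With segment-1 fully allocated (a /29 has only 6 usable addresses), a request for host-a fails even though the segment is reachable, which is the case reachability-only selection gets wrong.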

In fact, I think that these proposals are more or less the same - it's just in 
#1 the meta-data used to tie the backing networks together is another network. 
This allows it to fit in neatly with the existing APIs. You would still need to 
implement something prior to IPAM or within IPAM that would select the 
appropriate backing network.

As a (gulp) third alternative, we should consider that the front network here 
is in essence a layer 3 domain, and we have modeled layer 3 domains as address 
scopes in Liberty. The user is essentially saying "give me an address that is 
routable in this scope" - they don't care which actual subnet it gets allocated 
on. This is conceptually more in-line with [2] - modeling L3 domain separately 
from the existing Neutron concept of a network being a broadcast domain.

Fundamentally, however we associate the segments together, this comes down to a 
scheduling problem. Nova needs to be able to incorporate data from Neutron in 
its scheduling decision. Rather than solving this with a single piece of 
meta-data like network_id as described in proposal 1, it probably makes more 
sense to build out the general concept of utilizing network data for nova 
scheduling. We could still model this as in #1, or using address scopes, or 
some arbitrary data as in #2. But the harder problem to solve is the 
scheduling, not how we tag these things to inform that scheduling.

The optimization of routing for floating IPs is also a scheduling problem, 
though one that would require a lot more changes to how FIP are allocated and 
associated to solve.

John

[1] https://review.openstack.org/#/c/180803/
[2] https://bugs.launchpad.net/neutron/+bug/1458890/comments/7




On Jul 21, 2015, at 10:52 AM, Carl Baldwin c...@ecbaldwin.net wrote:


On Jul 20, 2015 4:26 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 There are two routed network models:

 - I give my VM an address that bears no relation to its location and ensure 
 the routed fabric routes packets there - this is very much the routing 
 protocol method for doing things where I have injected a route into the 
 network and it needs to propagate.  It's also pretty useless because there 
 are too many host routes in any reasonable sized cloud.

 - I give my VM an address that is based on its location, which only becomes 
 apparent at binding time.  This means that the semantics of a port changes - 
 a port has no address of any meaning until binding, because its location is 
 related to what it does - and it leaves open questions about what to do when 
 you migrate.

 Now, you seem to generally be thinking in terms of the latter model, 
 particularly since the provider network model you're talking about fits 
 there.  But then you say:

Actually, both.  For example, GoDaddy assigns each vm an ip from the location 
based address 

Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Gandhi, Kunal
+1. I think the group discussion should be one of the items on the meeting 
agenda for next week.

Regards
Kunal

 On Jul 21, 2015, at 12:21 PM, Susanne Balle sleipnir...@gmail.com wrote:
 
 correction: I think that discussing who should be in what group at next 
 week's meeting make sense. Susanne
 
 On Tue, Jul 21, 2015 at 3:18 PM, Susanne Balle sleipnir...@gmail.com wrote:
 cool! thanks. I will request to be added to the correct groups.
 
 Susanne
 
 On Tue, Jul 21, 2015 at 2:53 PM, Hayes, Graham graham.ha...@hp.com wrote:
 Hi All,
 
 I have created a github org and 2 repos for us to get started in.
 
 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.
 
 There is 2 teams (Github's name for groups) gslb-core, and gslb-admin.
 
 Core have read/write access to the repos, and admin can add / remove
 projects.
 
 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org
 
 There is also a launchpad project https://launchpad.net/gslb with 2
 teams:
 
 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project
 
 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB
 
 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?
 
 Thanks,
 
 Graham
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 



[openstack-dev] [TripleO] Moving instack upstream

2015-07-21 Thread Derek Higgins

Hi All,
   Something we discussed at the summit was to switch the focus of 
tripleo's deployment method to instack, using images built 
with tripleo-puppet-elements. Up to now all the instack work has been 
done downstream of tripleo as part of rdo. Having parts of our 
deployment story outside of upstream gives us problems mainly because it 
becomes very difficult to CI what we expect deployers to use while we're 
developing the upstream parts.


Essentially what I'm talking about here is pulling instack-undercloud 
upstream along with a few of its dependency projects (instack, 
tripleo-common, tuskar-ui-extras etc..) into tripleo and using them in 
our CI in place of devtest.


Getting our CI working with instack is close to working but has taken 
longer than I expected because of various complications and distractions 
but I hope to have something over the next few days that we can use to 
replace devtest in CI, in a lot of ways this will start out by taking a 
step backwards but we should finish up in a better place where we will 
be developing (and running CI on) what we expect deployers to use.


Once I have something that works I think it makes sense to drop the jobs 
undercloud-precise-nonha and overcloud-precise-nonha, while switching 
overcloud-f21-nonha to use instack. This has a few effects that need to 
be called out:


1. We will no longer be running CI on (and as a result not supporting) 
most of the bash based elements
2. We will no longer be running CI on (and as a result not supporting) 
ubuntu


Should anybody come along in the future interested in either of these 
things (and prepared to put the time in) we can pick them back up again. 
In fact the move to puppet element based images should mean we can more 
easily add in extra distros in the future.


3. While we find our feet we should remove all tripleo-ci jobs from 
non-tripleo projects. Once we're confident with it we can explore adding 
our jobs back into other projects again.


Nothing has changed yet. In order to check we're all on the same page, 
these are the high-level details of how I see things proceeding, so shout 
now if I got anything wrong or you disagree.


Sorry for not sending this out sooner for those of you who weren't at 
the summit,

Derek.



Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Gandhi, Kunal
I have created a doc in ether pad to get a voting started on the project name. 
Please feel free to suggest names or vote on existing names.

https://etherpad.openstack.org/p/GSLB_project_name_vote 
https://etherpad.openstack.org/p/GSLB_project_name_vote

Regards
Kunal

 On Jul 21, 2015, at 12:36 PM, Hayes, Graham graham.ha...@hp.com wrote:
 
 I forgot to add there is also an IRC channel created -
 #openstack-gslb
 
 I also have one item for the agenda that we should think about over the
 week - do we want to have a project name?
 
 If we do we should add it soon, so we can name projects/repo etc correctly.
 
 - Graham
 
 On 21/07/15 20:02, Hayes, Graham wrote:
 Hi All,
 
 I have created a github org and 2 repos for us to get started in.
 
 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.
 
 There is 2 teams (Github's name for groups) gslb-core, and gslb-admin.
 
 Core have read/write access to the repos, and admin can add / remove
 projects.
 
 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org
 
 There is also a launchpad project https://launchpad.net/gslb with 2
 teams:
 
 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project
 
 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB
 
 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?
 
 Thanks,
 
 Graham
 
 
 
 



Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Hayes, Graham
Yup - we are waiting for the experimental projects spec to merge in
openstack governance - we have a patch ready and waiting[1].

This is just to allow us to get moving.

- 1 https://review.openstack.org/#/c/201683/

On 21/07/15 20:56, Clint Byrum wrote:
 Perhaps I missed a discussion: You seem to be doing all the things that
 an OpenStack project team does. Is there some reason you aren't just
 creating an OpenStack project team?
 
 http://governance.openstack.org/reference/new-projects-requirements.html
 
 https://wiki.openstack.org/wiki/Governance/NewProjectTeams
 
 ?
 
 Excerpts from Hayes, Graham's message of 2015-07-21 11:53:35 -0700:
 Hi All,

 I have created a github org and 2 repos for us to get started in.

 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.

 There is 2 teams (Github's name for groups) gslb-core, and gslb-admin.

 Core have read/write access to the repos, and admin can add / remove
 projects.

 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org

 There is also a launchpad project https://launchpad.net/gslb with 2
 teams:

 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project

 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB

 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?

 Thanks,

 Graham

 
 




Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal withHyper

2015-07-21 Thread Hongbin Lu
Adrian,

I definitely agree with #1 - #5. I am just trying to understand the nova virt 
driver for hyper approach. As Peng mentioned, hyper is a hypervisor-based 
substitute for container, but magnum is not making a special virt driver for 
container host creation (Instead, magnum leverages the existing virt driver to 
do that, such as libvirt and ironic). What I don’t understand is why we need a 
dedicated nova-hyper virt driver for host creation, but we are not doing the 
same for docker host creation. What makes hyper special so that we have to make 
a virt driver for it? Or am I missing something?

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: July-19-15 11:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal withHyper

Peng,

You are not the first to think this way, and it's one of the reasons we did not 
integrate Containers with OpenStack in a meaningful way a full year earlier. 
Please pay close attention.

1) OpenStack's key influencers care about two personas: 1.1) Cloud Operators 
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea will 
get killed. Operators matter.

2) Cloud Operators need a consistent way to bill for the IaaS services they 
provide. Nova emits all of the RPC messages needed to do this. Having a second 
nova that does this slightly differently is a really annoying problem that will 
make Operators hate the software. It's better to use nova, have things work 
consistently, and plug in virt drivers to it.

3) Creation of a host is only part of the problem. That's the easy part. Nova 
also does a bunch of other things too. For example, say you want to live 
migrate a guest from one host to another. There is already functionality in 
Nova for doing that.

4) Resources need to be capacity managed. We call this scheduling. Nova has a 
pluggable scheduler to help with the placement of guests on hosts. Magnum will 
not.

5) Hosts in a cloud need to integrate with a number of other services, such as 
an image service, messaging, networking, storage, etc. If you only think in 
terms of host creation, and do something without nova, then you need to 
re-integrate with all of these things.

Now, I probably left out examples of lots of other things that Nova does. What 
I have mentioned is enough to make my point that there are a lot of things that 
Magnum is intentionally NOT doing that we expect to get from Nova, and I will 
block all code that gratuitously duplicates functionality that I believe 
belongs in Nova. I promised our community I would not duplicate existing 
functionality without a very good reason, and I will keep that promise.

Let's find a good way to fit Hyper with OpenStack in a way that best leverages 
what exists today, and is least likely to be rejected. Please note that the 
proposal needs to be changed from where it is today to achieve this fit.

My first suggestion is to find a way to make a nova virt driver for Hyper, which 
could allow it to be used with all of our current Bay types in Magnum.

Thanks,

Adrian


 Original message 
From: Peng Zhao p...@hyper.sh
Date: 07/19/2015 5:36 AM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal withHyper
Thanks Jay.

Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I 
just think bay isn't a must in this case, and we don't need nova to provision 
BM hosts, which makes things more complicated imo.

Peng


-- Original --
From:  Jay Lau jay.lau@gmail.com;
Date:  Sun, Jul 19, 2015 10:36 AM
To:  OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org;
Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper

Hong Bin,
I had some online discussion with Peng; it seems hyper is now integrating with 
Kubernetes and also has plans to integrate with mesos for scheduling. Once the 
mesos integration is finished, we can treat mesos+hyper as another kind of bay.
Thanks

2015-07-19 4:15 GMT+08:00 Hongbin Lu hongbin...@huawei.com:
Peng,

Several questions Here. You mentioned that HyperStack is a single big “bay”. 
Then, who is doing the multi-host scheduling, Hyper or something else? Were you 
suggesting to integrate Hyper with Magnum directly? Or you were suggesting to 
integrate Hyper with Magnum indirectly (i.e. through k8s, mesos and/or Nova)?

Best regards,
Hongbin

From: Peng Zhao [mailto:p...@hyper.sh]
Sent: July-17-15 12:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 

[openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Hayes, Graham
Hi All,

I have created a github org and 2 repos for us to get started in.

https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
as the main code repo.

There are 2 teams (Github's name for groups): gslb-core and gslb-admin.

Core have read/write access to the repos, and admin can add / remove
projects.

I also created https://github.com/gslb/gslb-specs which will
automatically publish to https://gslb-specs.readthedocs.org

There is also a launchpad project https://launchpad.net/gslb with 2
teams:

gslb-drivers - people who can target bugs / bps
gslb-core - the maintainers of the project

All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB

So, next question - who should be in what groups? I am open to
suggestions... should it be an item for discussion next week?

Thanks,

Graham



Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Susanne Balle
correction: I think that discussing who should be in what group at next
week's meeting makes sense. Susanne

On Tue, Jul 21, 2015 at 3:18 PM, Susanne Balle sleipnir...@gmail.com
wrote:

 cool! thanks. I will request to be added to the correct groups.

 Susanne

 On Tue, Jul 21, 2015 at 2:53 PM, Hayes, Graham graham.ha...@hp.com
 wrote:

 Hi All,

 I have created a github org and 2 repos for us to get started in.

 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.

 There is 2 teams (Github's name for groups) gslb-core, and gslb-admin.

 Core have read/write access to the repos, and admin can add / remove
 projects.

 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org

 There is also a launchpad project https://launchpad.net/gslb with 2
 teams:

 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project

 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB

 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?

 Thanks,

 Graham






Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Clint Byrum
Perhaps I missed a discussion: You seem to be doing all the things that
an OpenStack project team does. Is there some reason you aren't just
creating an OpenStack project team?

http://governance.openstack.org/reference/new-projects-requirements.html

https://wiki.openstack.org/wiki/Governance/NewProjectTeams

?

Excerpts from Hayes, Graham's message of 2015-07-21 11:53:35 -0700:
 Hi All,
 
 I have created a github org and 2 repos for us to get started in.
 
 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.
 
 There is 2 teams (Github's name for groups) gslb-core, and gslb-admin.
 
 Core have read/write access to the repos, and admin can add / remove
 projects.
 
 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org
 
 There is also a launchpad project https://launchpad.net/gslb with 2
 teams:
 
 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project
 
 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB
 
 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?
 
 Thanks,
 
 Graham
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] backlog of requirements changes

2015-07-21 Thread Doug Hellmann
Excerpts from Robert Collins's message of 2015-07-22 03:17:53 +1200:
 There seem to be quite a backlog in openstack/requirements.
 
 http://russellbryant.net/openstack-stats/requirements-reviewers-30.txt
 
 there are roughly 10x new changes to the number of reviews cores are
 managing to do.
 
 This worries me, and I'd like to help.
 
 How can I best do so?
 
 -Rob
 

I've noticed that most of the proposed changes recently would somehow
break the constraints list.

I added https://review.openstack.org/204181, which I think fixes the
test to prevent projects from being out of the allowed range.

I've also added https://review.openstack.org/204198 to encourage
contributors adding new requirements to list them in
upper-constraints.txt. The latter might be something we plan to rely on
having happen automatically, but until we have that automation in place
it seems reasonable to do it manually.
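As a rough illustration of that range check (a simplified sketch, not the actual test code under review; in_range and its mini specifier grammar are invented here, handle only numeric versions, and ignore the full PEP 440 semantics pip applies):

```python
# Sketch: verify a version pinned in upper-constraints.txt still falls
# inside the range declared in the requirements list. Hypothetical helper,
# supporting only a small subset of pip's specifier syntax.
import re

def parse_version(v):
    """'1.11.0' -> (1, 11, 0); numeric dotted versions only."""
    return tuple(int(p) for p in v.split("."))

def in_range(pinned, spec):
    """spec like '>=1.2,!=1.3.0' (a simplified subset of pip specifiers)."""
    for clause in spec.split(","):
        op, ver = re.match(r"(>=|<=|==|!=|>|<)(.+)", clause.strip()).groups()
        a, b = parse_version(pinned), parse_version(ver)
        ok = {">=": a >= b, "<=": a <= b, "==": a == b,
              "!=": a != b, ">": a > b, "<": a < b}[op]
        if not ok:
            return False
    return True

print(in_range("2.0.0", ">=1.11.0"))        # True: pin satisfies the range
print(in_range("1.3.0", ">=1.2,!=1.3.0"))   # False: pin hits the exclusion
```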

Doug



Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Susanne Balle
cool! thanks. I will request to be added to the correct groups.

Susanne

On Tue, Jul 21, 2015 at 2:53 PM, Hayes, Graham graham.ha...@hp.com wrote:

 Hi All,

 I have created a github org and 2 repos for us to get started in.

 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.

 There is 2 teams (Github's name for groups) gslb-core, and gslb-admin.

 Core have read/write access to the repos, and admin can add / remove
 projects.

 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org

 There is also a launchpad project https://launchpad.net/gslb with 2
 teams:

 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project

 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB

 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?

 Thanks,

 Graham




Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Hayes, Graham
I forgot to add there is also an IRC channel created -
#openstack-gslb

I also have one item for the agenda that we should think about over the
week - do we want to have a project name?

If we do we should add it soon, so we can name projects/repo etc correctly.

- Graham

On 21/07/15 20:02, Hayes, Graham wrote:
 Hi All,
 
 I have created a github org and 2 repos for us to get started in.
 
 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.
 
 There is 2 teams (Github's name for groups) gslb-core, and gslb-admin.
 
 Core have read/write access to the repos, and admin can add / remove
 projects.
 
 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org
 
 There is also a launchpad project https://launchpad.net/gslb with 2
 teams:
 
 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project
 
 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB
 
 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?
 
 Thanks,
 
 Graham
 
 




Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-21 Thread Luse, Paul E
I was about to ask that very same thing. At the same time, can you indicate 
whether you’ve seen errors in any logs? If so, please provide those as 
well.  I’m hoping you just didn’t delete the hashes.pkl file though ☺

-Paul

From: Clay Gerrard [mailto:clay.gerr...@gmail.com]
Sent: Tuesday, July 21, 2015 2:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor 
doesn't work

How did you delete one data fragment?

Like replication, the EC consistency engine uses some sub-directory hashing to 
accelerate replication requests in a consistent system - so if you just rm a 
file down in a hashdir somewhere you also need to delete the hashes.pkl up in 
the part dir (or call the invalidate_hash method like PUT, DELETE, POST, and 
quarantine do).

Every so often someone discusses the idea of having the auditor invalidate a 
hash after long enough or take some action on empty hashdirs (mind the 
races!) - but it's really only an issue when someone deletes something by hand 
so we normally manage to get distracted with other things.

-Clay

On Tue, Jul 21, 2015 at 1:38 PM, Changbin Liu 
changbin@gmail.com wrote:
Folks,

To test the latest feature of Swift erasure coding, I followed this document 
(http://docs.openstack.org/developer/swift/overview_erasure_code.html) to 
deploy a simple cluster. I used Swift 2.3.0.

I am glad that operations like object PUT/GET/DELETE worked fine. I can see 
that objects were correctly encoded/uploaded and downloaded at proxy and object 
servers.

However, I noticed that swift-object-reconstructor seemed not to work as 
expected. Here is my setup: my cluster has three object servers, and I use this 
policy:

[storage-policy:1]
policy_type = erasure_coding
name = jerasure-rs-vand-2-1
ec_type = jerasure_rs_vand
ec_num_data_fragments = 2
ec_num_parity_fragments = 1
ec_object_segment_size = 1048576

After I uploaded one object, I verified that: there was one data fragment on 
each of two object servers, and one parity fragment on the third object server. 
However, when I deleted one data fragment, no matter how long I waited, it 
never got repaired, i.e., the deleted data fragment was never regenerated by 
the swift-object-reconstructor process.

My question: is swift-object-reconstructor supposed to be NOT WORKING given 
the current implementation status? Or, is there any configuration I missed in 
setting up swift-object-reconstructor?

Thanks

Changbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] [stable] Freeze exception

2015-07-21 Thread Shiva Prasad Rao (shivrao)
Hello,

I would like to request a freeze exception for the following bug:
https://bugs.launchpad.net/horizon/+bug/1475190

Here is the patch:
https://review.openstack.org/#/c/202836/

This patch is not required in master, as the network profile feature is 
supported in neutron for the Liberty release. Please consider this bug fix for 
stable/kilo, as the bug currently breaks the network-create workflow when 
profile_support is set to ‘cisco’ in horizon.

Thanks,
Shiva Prasad Rao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Unable to store the secret when Barbican was Integrated with SafeNet HSM

2015-07-21 Thread Asha Seshagiri
Hi John ,

Thanks for providing the solution.
It's a bug in the Barbican code; it works without passing the length.
I will raise the bug and fix it.

[root@HSM-Client bin]# python pkcs11-key-generation --library-path
'/usr/lib/libCryptoki2_64.so'  --passphrase 'test123' --slot-id  1 mkek
--label 'an_mkek'
Verified label !
MKEK successfully generated!

[root@HSM-Client bin]# python pkcs11-key-generation --library-path
'/usr/lib/libCryptoki2_64.so' --passphrase 'test123' --slot-id 1 hmac
--label 'my_hmac_label'
HMAC successfully generated!

Thanks and Regards,
Asha Seshagiri

On Mon, Jul 20, 2015 at 2:05 PM, John Vrbanac john.vrba...@rackspace.com
wrote:

  Hmm... This error is usually because one of the parameters is
 an incorrect type. I'm wondering if the length is coming through as a
 string instead of an integer. As the length defaults to 32, try not
 specifying the length parameter. If that works, we need to report a defect
 to make sure that it's properly converted to an integer.
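
 The string-vs-int pitfall described here can be sketched in isolation (a
 hedged illustration with hypothetical option names, not Barbican's actual
 parser):

```python
import argparse


def build_parser():
    """Minimal sketch: coerce --length to an int at parse time.

    Without type=int, argparse passes the value through as the string
    "32", and a PKCS#11 attribute template built from a string length
    can be rejected by the HSM.
    """
    parser = argparse.ArgumentParser(prog="pkcs11-key-generation")
    parser.add_argument("--length", type=int, default=32,
                        help="key length in bytes (coerced to int)")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args(["--length", "32"])
    print(type(args.length).__name__)  # prints: int
```

 If the real script stores the raw string instead, converting it with int()
 before building the attribute template would be the equivalent defensive fix.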


 John Vrbanac
  --
 *From:* Asha Seshagiri asha.seshag...@gmail.com
 *Sent:* Monday, July 20, 2015 10:30 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Reller, Nathan S.
 *Subject:* Re: [openstack-dev] Barbican : Unable to store the secret when
 Barbican was Integrated with SafeNet HSM

   Hi  John ,

  Thanks a lot John for your response.
  I tried executing the script with the following options before, but
 it seems it did not work. Hence I tried with the curly braces.

  Please find other options below :

 [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 '/usr/lib/libCryptoki2_64.so' --passphrase 'test123' --slot-id 1 mkek
 --length 32 --label 'an_mkek'
 HSM returned response code: 0x13L CKR_ATTRIBUTE_VALUE_INVALID
 [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 /usr/lib/libCryptoki2_64.so  --passphrase test123  --slot-id 1  mkek
 --length 32 --label an_mkek
 HSM returned response code: 0x13L CKR_ATTRIBUTE_VALUE_INVALID


  It would be of great help if I could get the syntax for running the script.

  Thanks and Regards,
  Asha  Seshagiri

 On Sun, Jul 19, 2015 at 6:25 PM, John Vrbanac john.vrba...@rackspace.com
 wrote:

  Don't include the curly brackets on the script arguments. The
 documentation is just using them to indicate that those are placeholders
 for real values.


 John Vrbanac
  --
 *From:* Asha Seshagiri asha.seshag...@gmail.com
 *Sent:* Sunday, July 19, 2015 2:15 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Reller, Nathan S.
 *Subject:* Re: [openstack-dev] Barbican : Unable to store the secret
 when Barbican was Integrated with SafeNet HSM

Hi John ,

  Thanks  for pointing me to the right script.
 I appreciate your help .

  I tried running the script with the following command :

  [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 {/usr/lib/libCryptoki2_64.so} --passphrase {test123} --slot-id 1  mkek
 --length 32 --label 'an_mkek'
 Traceback (most recent call last):
   File "pkcs11-key-generation", line 120, in <module>
     main()
   File "pkcs11-key-generation", line 115, in main
     kg = KeyGenerator()
   File "pkcs11-key-generation", line 38, in __init__
     ffi=ffi
   File "/root/barbican/barbican/plugin/crypto/pkcs11.py", line 315, in
 __init__
     self.lib = self.ffi.dlopen(library_path)
   File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 127, in
 dlopen
     lib, function_cache = _make_ffi_library(self, name, flags)
   File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 572, in
 _make_ffi_library
     backendlib = _load_backend_lib(backend, libname, flags)
   File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 561, in
 _load_backend_lib
     return backend.load_library(name, flags)
 *OSError: cannot load library {/usr/lib/libCryptoki2_64.so}:
 {/usr/lib/libCryptoki2_64.so}: cannot open shared object file: No such file
 or directory*

 *Unable to run the script since the library libCryptoki2_64.so cannot be
 opened.*

  Tried the following solution:

 - vi /etc/ld.so.conf
 - Added both paths of libCryptoki2_64.so, found with the command
   find / -name libCryptoki2_64.so, to the /etc/ld.so.conf file:
   - /usr/safenet/lunaclient/lib/libCryptoki2_64.so
   - /usr/lib/libCryptoki2_64.so
 - sudo ldconfig
 - ldconfig -p

 But the above solution failed and I am getting the same error.

  Any help would be highly appreciated.
 Thanks in advance!

  Thanks and Regards,
 Asha Seshagiri

 On Sat, Jul 18, 2015 at 11:12 PM, John Vrbanac 
 john.vrba...@rackspace.com wrote:

  Asha,

 It looks like you don't have your mkek label correctly configured. Make
 sure that the mkek_label and hmac_label values in your config correctly
 reflect the keys that you've generated on your HSM.

 The plugin will cache the key handle to the mkek and hmac when the
 plugin starts, so if it cannot find them, it'll 

Re: [openstack-dev] New gate blocker: 1476770

2015-07-21 Thread Matt Riedemann



On 7/21/2015 12:58 PM, Matt Riedemann wrote:

Grenade is blowing up on an AttributeError in glance, just started
blowing up today so it must be a new library release.

Bug is tracked here:

https://bugs.launchpad.net/glance/+bug/1476770



We're waiting to see if https://review.openstack.org/#/c/204194/ proves 
the fix which is to cap urllib3 in stable/kilo.


Given the check queue and that change hasn't even started running test 
jobs yet, please hold off on rechecking stuff so this gets a chance.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Question about unique default hostname for node

2015-07-21 Thread Andrew Woodward
On Tue, Jul 21, 2015 at 5:38 AM Fedor Zhadaev fzhad...@mirantis.com wrote:

 Hi all,

 The next issue was found during implementation
 https://blueprints.launchpad.net/fuel/+spec/node-naming :

   A user may change a node hostname to any other, including a default-like
  'node-{№}', where № may be bigger than the maximum node ID existing at that
  moment.
    Later, when a node with ID == № is created, its default name
  'node-{ID}' will break hostname uniqueness.

  To avoid this, it was decided to generate another default hostname in such
  situations.

  The current solution is to generate the hostname '*node-{UUID}*'. It works,
  but may look terrible.

  There are a few other possible solutions:

 - Use the '*node-{ID}-{#}*' format, where *{#}* is chosen in a loop until
 the first unique value is found.
 - Use some unique value shorter than a UUID (for example, the number of
 microseconds from the current timestamp)

 I think the only solution here is to a) ensure that every hostname is
unique or refuse to update the value, or b) in cases where the user wants to
use our format, allow only node-{ID}, where ID must be equal to this
node's ID. We don't need to come up with some scheme to rescue the
format. We do however need some value/method that will make it reset back
to the default.
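
The loop-till-unique variant suggested in the list above can be sketched like
this (a hedged illustration, not Fuel's actual naming code):

```python
def default_hostname(node_id, taken):
    """Pick a default hostname for a node, avoiding collisions.

    Tries 'node-{ID}' first; on a collision falls back to
    'node-{ID}-{#}', incrementing # until a free name is found.
    `taken` is the set of hostnames already in use.
    """
    name = "node-%d" % node_id
    suffix = 1
    while name in taken:
        name = "node-%d-%d" % (node_id, suffix)
        suffix += 1
    return name


print(default_hostname(5, {"node-5"}))  # prints: node-5-1
```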


 Please share you opinion - what is better?

 Also you can propose your own solutions.

 --
 Kind Regards,
 Fedor Zhadaev
 Junior Software Engineer, Mirantis Inc.
 Skype: zhadaevfm
 E-mail: fzhad...@mirantis.com
  __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
--
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Barbican : Regarding the use of HMAC in HSM

2015-07-21 Thread Asha Seshagiri
Hi All  ,

Would like to understand the usage of HMAC in an HSM. From Barbican, we send
the request to generate the MKEK and HMAC.
What is the relation between the HMAC, MKEK and KEK?

I would need help in understanding HMAC.

-- 
*Thanks and Regards,*
*Asha Seshagiri*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-21 Thread John Dickinson
Yes, it's supposed to work, but you've run into some errors we've been finding 
and fixing. Right now the top priority for the Swift dev community is to take 
care of the outstanding EC issues and make a release.

The list of the known EC bugs right now is 
https://bugs.launchpad.net/swift/+bugs?field.tag=ec. You'll see that nearly all 
of them are handled, and the rest are being worked on. We will have them fixed 
and a new Swift release ASAP.

Specifically, I think you were hitting bug 
https://bugs.launchpad.net/swift/+bug/1469094 (or maybe 
https://bugs.launchpad.net/swift/+bug/1452619).

I'm so happy you're trying out erasure codes in Swift! That's exactly what we 
need to happen. As the docs say, it's still a beta feature. Please let us 
know what you find. Bug reports are very helpful, but even mailing list posts 
or dropping in the #openstack-swift channel in IRC is appreciated.

--John




 On Jul 21, 2015, at 1:38 PM, Changbin Liu changbin@gmail.com wrote:
 
 Folks,
 
 To test the latest feature of Swift erasure coding, I followed this document 
 (http://docs.openstack.org/developer/swift/overview_erasure_code.html) to 
 deploy a simple cluster. I used Swift 2.3.0.
 
 I am glad that operations like object PUT/GET/DELETE worked fine. I can see 
 that objects were correctly encoded/uploaded and downloaded at proxy and 
 object servers.
 
 However, I noticed that swift-object-reconstructor seemed not to work as 
 expected. Here is my setup: my cluster has three object servers, and I use 
 this policy:
 
 [storage-policy:1]
 policy_type = erasure_coding
 name = jerasure-rs-vand-2-1
 ec_type = jerasure_rs_vand
 ec_num_data_fragments = 2
 ec_num_parity_fragments = 1
 ec_object_segment_size = 1048576
 
 After I uploaded one object, I verified that: there was one data fragment on 
 each of two object servers, and one parity fragment on the third object 
 server. However, when I deleted one data fragment, no matter how long I 
 waited, it never got repaired, i.e., the deleted data fragment was never 
 regenerated by the swift-object-reconstructor process.
 
 My question: is swift-object-reconstructor supposed to be NOT WORKING given 
 the current implementation status? Or, is there any configuration I missed in 
 setting up swift-object-reconstructor?
 
 Thanks
 
 Changbin
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-21 Thread Changbin Liu
Folks,

To test the latest feature of Swift erasure coding, I followed this
document (
http://docs.openstack.org/developer/swift/overview_erasure_code.html) to
deploy a simple cluster. I used Swift 2.3.0.

I am glad that operations like object PUT/GET/DELETE worked fine. I can see
that objects were correctly encoded/uploaded and downloaded at proxy and
object servers.

However, I noticed that swift-object-reconstructor seemed not to work as
expected. Here is my setup: my cluster has three object servers, and I use
this policy:

[storage-policy:1]
policy_type = erasure_coding
name = jerasure-rs-vand-2-1
ec_type = jerasure_rs_vand
ec_num_data_fragments = 2
ec_num_parity_fragments = 1
ec_object_segment_size = 1048576

After I uploaded one object, I verified that: there was one data fragment
on each of two object servers, and one parity fragment on the third object
server. However, when I deleted one data fragment, no matter how long I
waited, it never got repaired, i.e., the deleted data fragment was never
regenerated by the swift-object-reconstructor process.

My question: is swift-object-reconstructor supposed to be NOT WORKING
given the current implementation status? Or, is there any configuration I
missed in setting up swift-object-reconstructor?

Thanks

Changbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-21 Thread Clay Gerrard
How did you delete the data fragment?

Like replication, the EC consistency engine uses some sub-directory hashing
to accelerate replication requests in a consistent system - so if you just
rm a file down in a hashdir somewhere, you also need to delete the
hashes.pkl up in the part dir (or call the invalidate_hash method like PUT,
DELETE, POST, and quarantine do).
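
For hand-testing only, the delete-plus-invalidate step described above might
look like this (a hedged sketch assuming the usual
<partition>/<suffix>/<hashdir>/<fragment> on-disk layout; real deployments
should let invalidate_hash run via normal API operations):

```python
import os


def delete_fragment_and_invalidate(fragment_path):
    """Remove an EC fragment by hand and drop the cached hashes.

    Swift caches per-suffix hashes in <part_dir>/hashes.pkl; deleting
    a fragment without removing that cache hides the change from the
    reconstructor, so the fragment is never rebuilt.
    """
    os.remove(fragment_path)
    hashdir = os.path.dirname(fragment_path)   # .../<hashdir>
    suffix_dir = os.path.dirname(hashdir)      # .../<suffix>
    part_dir = os.path.dirname(suffix_dir)     # .../<partition>
    hashes_pkl = os.path.join(part_dir, "hashes.pkl")
    if os.path.exists(hashes_pkl):
        os.remove(hashes_pkl)  # force a rehash of the whole partition
```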

Every so often someone discusses the idea of having the auditor invalidate
a hash after long enough or take some action on empty hashdirs (mind the
races!) - but it's really only an issue when someone deletes something by
hand, so we normally manage to get distracted with other things.

-Clay

On Tue, Jul 21, 2015 at 1:38 PM, Changbin Liu changbin@gmail.com
wrote:

 Folks,

 To test the latest feature of Swift erasure coding, I followed this
 document (
 http://docs.openstack.org/developer/swift/overview_erasure_code.html) to
 deploy a simple cluster. I used Swift 2.3.0.

 I am glad that operations like object PUT/GET/DELETE worked fine. I can
 see that objects were correctly encoded/uploaded and downloaded at proxy
 and object servers.

 However, I noticed that swift-object-reconstructor seemed not to work as
 expected. Here is my setup: my cluster has three object servers, and I use
 this policy:

 [storage-policy:1]
 policy_type = erasure_coding
 name = jerasure-rs-vand-2-1
 ec_type = jerasure_rs_vand
 ec_num_data_fragments = 2
 ec_num_parity_fragments = 1
 ec_object_segment_size = 1048576

 After I uploaded one object, I verified that: there was one data fragment
 on each of two object servers, and one parity fragment on the third object
 server. However, when I deleted one data fragment, no matter how long I
 waited, it never got repaired, i.e., the deleted data fragment was never
 regenerated by the swift-object-reconstructor process.

 My question: is swift-object-reconstructor supposed to be NOT WORKING
 given the current implementation status? Or, is there any configuration I
 missed in setting up swift-object-reconstructor?

 Thanks

 Changbin

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New gate blocker: 1476770

2015-07-21 Thread Matt Riedemann



On 7/21/2015 3:24 PM, Matt Riedemann wrote:



On 7/21/2015 12:58 PM, Matt Riedemann wrote:

Grenade is blowing up on an AttributeError in glance, just started
blowing up today so it must be a new library release.

Bug is tracked here:

https://bugs.launchpad.net/glance/+bug/1476770



We're waiting to see if https://review.openstack.org/#/c/204194/ proves
the fix which is to cap urllib3 in stable/kilo.

Given the check queue and that change hasn't even started running test
jobs yet, please hold off on rechecking stuff so this gets a chance.



The grenade test failed because oslo.vmware on stable/kilo has urllib3 
uncapped, so it still pulls in 1.11 and we fail.


So we need to merge the g-r cap on stable/kilo [1] which will sync to 
oslo.vmware on stable/kilo and we'll merge that, and then need to 
release oslo.vmware on stable/kilo and we should all be hunky dory until 
the next library release breaks the gate on Wednesday.


[1] https://review.openstack.org/#/c/204193/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Denys Klepikov for fuel-docs core

2015-07-21 Thread Dmitry Borodaenko
Looks like we have a consensus, I've added Denys to the fuel-docs-core
group.

Congratulations Denys, please keep up the good work!

On Thu, Jul 16, 2015 at 11:30 AM Mike Scherbakov mscherba...@mirantis.com
wrote:

 +1

 On Thu, Jul 16, 2015 at 8:40 AM Miroslav Anashkin manash...@mirantis.com
 wrote:

 +1

 --

 *Kind Regards*

 *Miroslav Anashkin**L2 support engineer**,*
 *Mirantis Inc.*
 *+7(495)640-4944 (office receptionist)*
 *+1(650)587-5200 (office receptionist, call from US)*
 *35b, Bld. 3, Vorontsovskaya St.*
 *Moscow**, Russia, 109147.*

 www.mirantis.com

 manash...@mirantis.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Mike Scherbakov
 #mihgen
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [VXLAN] patch to use per-VNI multicast group addresses

2015-07-21 Thread John Nielsen
I may be in a small minority since I a) use VXLAN, b) don’t hate multicast and 
c) use linuxbridge instead of OVS. However I thought I’d share this patch in 
case I’m not alone.

If you assume the use of multicast, VXLAN works quite nicely to isolate L2 
domains AND to prevent delivery of unwanted broadcast/unknown/multicast packets 
to VTEPs that don’t need them. However, the latter only holds up if each VXLAN 
VNI uses its own unique multicast group address. Currently, you have to either 
disable multicast (and use l2_population or similar) or use only a single group 
address for ALL VNIs (and force every single VTEP to receive every BUM packet 
from every network). For my usage, this patch seems simpler.

Feedback is very welcome. In particular I’d like to know if anyone else finds 
this useful and if so, what (if any) changes might be required to get it 
committed. Thanks!

JN


commit 17c32a9ad07911f3b4148e96cbcae88720eef322
Author: John Nielsen j...@jnielsen.net
Date:   Tue Jul 21 16:13:42 2015 -0600

Add a boolean option, vxlan_group_auto, which if enabled will compute
a unique multicast group address for each VXLAN VNI. Since VNIs
are 24 bits, they map nicely to the 239.0.0.0/8 site-local multicast
range. Eight bits of the VNI are used for the second, third and fourth
octets (with 239 always as the first octet).

Using this option allows VTEPs to receive BUM datagrams via multicast,
but only for those VNIs in which they participate. In other words, it is
an alternative to the l2_population extension and driver for environments
where both multicast and linuxbridge are used.

If the option is True then multicast groups are computed as described
above. If the option is False then the previous behavior is used
(either a single multicast group is defined by vxlan_group or multicast
is disabled).
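
The VNI-to-group mapping the commit message describes can be sketched
standalone (a hedged illustration, not the patch itself; note that masking
each octet with 0xFF keeps every component in 0-255 for any 24-bit VNI):

```python
def vni_to_mcast_group(vni):
    """Map a 24-bit VXLAN VNI to a unique 239.0.0.0/8 multicast group.

    The three low-order octets of the group address carry the three
    bytes of the VNI, so each VNI gets its own group and VTEPs only
    receive BUM traffic for VNIs they have joined.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VXLAN VNI must fit in 24 bits: %r" % vni)
    return "239.%d.%d.%d" % (
        (vni >> 16) & 0xFF,  # high byte of the VNI
        (vni >> 8) & 0xFF,   # middle byte
        vni & 0xFF,          # low byte
    )


print(vni_to_mcast_group(0x123456))  # prints: 239.18.52.86
```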

diff --git a/etc/neutron/plugins/ml2/linuxbridge_agent.ini 
b/etc/neutron/plugins/ml2/linuxbridge_agent.ini
index d1a01ba..03578ad 100644
--- a/etc/neutron/plugins/ml2/linuxbridge_agent.ini
+++ b/etc/neutron/plugins/ml2/linuxbridge_agent.ini
@@ -25,6 +25,10 @@
 # This group must be the same on all the agents.
 # vxlan_group = 224.0.0.1
 #
+# (BoolOpt) Derive a unique 239.x.x.x multicast group for each vxlan VNI.
+# If this option is true, the setting of vxlan_group is ignored.
+# vxlan_group_auto = False
+#
 # (StrOpt) Local IP address to use for VXLAN endpoints (required)
 # local_ip =
 #
diff --git a/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py 
b/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
index 6f15236..b4805d5 100644
--- a/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
+++ b/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
@@ -31,6 +31,9 @@ vxlan_opts = [
                help=_("TOS for vxlan interface protocol packets.")),
     cfg.StrOpt('vxlan_group', default=DEFAULT_VXLAN_GROUP,
                help=_("Multicast group for vxlan interface.")),
+    cfg.BoolOpt('vxlan_group_auto', default=False,
+                help=_("Derive a unique 239.x.x.x multicast group for each "
+                       "vxlan VNI")),
     cfg.IPOpt('local_ip', version=4,
               help=_("Local IP address of the VXLAN endpoints.")),
     cfg.BoolOpt('l2_population', default=False,
diff --git 
a/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py 
b/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
index 61627eb..a0efde1 100644
--- a/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
+++ b/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
@@ -127,6 +127,14 @@ class LinuxBridgeManager(object):
             LOG.warning(_LW("Invalid Segmentation ID: %s, will lead to "
                             "incorrect vxlan device name"), segmentation_id)
 
+    def get_vxlan_group(self, segmentation_id):
+        if cfg.CONF.VXLAN.vxlan_group_auto:
+            return ("239." +
+                    str(segmentation_id >> 16) + "." +
+                    str(segmentation_id >> 8 % 256) + "." +
+                    str(segmentation_id % 256))
+        return cfg.CONF.VXLAN.vxlan_group
+
     def get_all_neutron_bridges(self):
         neutron_bridge_list = []
         bridge_list = os.listdir(BRIDGE_FS)
@@ -240,7 +248,7 @@ class LinuxBridgeManager(object):
                        'segmentation_id': segmentation_id})
         args = {'dev': self.local_int}
         if self.vxlan_mode == lconst.VXLAN_MCAST:
-            args['group'] = cfg.CONF.VXLAN.vxlan_group
+            args['group'] = self.get_vxlan_group(segmentation_id)
         if cfg.CONF.VXLAN.ttl:
             args['ttl'] = cfg.CONF.VXLAN.ttl
         if cfg.CONF.VXLAN.tos:
@@ -553,9 +561,10 @@ class LinuxBridgeManager(object):
         self.delete_vxlan(test_iface)
 
     def vxlan_mcast_supported(self):
-        if not 

Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-21 Thread Ian Wells
On 21 July 2015 at 12:11, John Belamaric jbelama...@infoblox.com wrote:

  Wow, a lot to digest in these threads. If I can summarize my
 understanding of the two proposals, let me know whether I get this right.
 There are a couple of problems that need to be solved:

   a. Scheduling based on host reachability to the segments


So actually this is something Assaf and I were debating on IRC, and I think
it depends what you're aiming for.

Imagine you have connectivity for a 'network' to every host, but that
connectivity only works if you get a host-specific address because the
address range is different per host.  This seems to be the use case we come
back to.  (There's a corner case of this where the network is not available
on every host and that gets you different requirements, but for now, this.)

You *can* use the current mechanism: allocate address, schedule, run -
providing your scheduler respects the address you've been allocated and
puts you on a host that can reach this address.  This is a silly approach.
You can't tell when getting the address (for a port that is entirely
disassociated from the VM it's going to be attached to, via Neutron when
most of the scheduling constraints live in Nova) that the address is on a
machine that can even run the VM.

You can delay address allocation - then the machine can be scheduled
anywhere because the address it has is not a constraint.  This saves any
change to scheduling at all - normal scheduling rules apply, excepting the
case where addresses are exhausted on that machine, and in that case we'd
probably use the retry mechanism as a fallback to find a better place until
someone works out it's not really just a *nova* scheduler.


  b. Floating IP functionality across the segments. I am not sure I am
 clear on this one but it sounds like you want the routers attached to the
 segments to advertise routes to the specific floating IPs. Presumably then
 they would do NAT or the instance would assign both the fixed IP and the
 floating IP to its interface?


That's the summary.  And I don't think anyone is clear on this and I also
don't know that anyone has specifically requested this.

In Proposal 1, (a) is solved by associating segments to the front network
 via a router - that association is used to provide a single hook into the
 existing API that limits the scope of segment selection to those associated
 with the front network. (b) is solved by tying the floating IP ranges to
 the same front network and managing the reachability with dynamic routing.

  In Proposal 2, (a) is solved by tagging each network with some meta-data
 that the IPAM system uses to make a selection.


The distinction is actually pretty small.  The same backing data exists for
the IPAM to use - the difference is only that in (1) it's there as a misuse
of networks and in (2) it's not specified.


 This implies an IP allocation request that passes something other than a
 network/port to the IPAM subsystem.


This is where I started - there is nothing to pass when I run 'neutron
port-create' except for a network and this is where address allocation
happens today.  We need a mechanism to defer address allocation and
indicate that the port has no address right now.


 This fine from the IPAM point of view but there is no corresponding API
 for this right now. To solve (b) either the IPAM system has to publish the
 routes


It needs to ensure there's enough information on the port that the network
controller can push the routes, is the way I think of it.


 or the higher level management has to ALSO be aware of the mappings
 (rather than just IPAM).

  To throw some fuel on the fire, I would argue also that (a) is not
 sufficient and address availability needs to be considered as well (as
 described in [1]). Selecting a host based on reachability alone will fail
 when addresses are exhausted. Similarly, with (b) I think there needs to be
 consideration during association of a floating IP to the effect on routing.
 That is, rather than a huge number of host routes it would be ideal to
 allocate the floating IPs in blocks that can be associated with the backing
 networks (though we would want to be able to split these blocks as small as
 a /32 if necessary - but avoid it/optimize as much as possible).


Again - the scheduler is simplistic and nova-centric as things stand, and I
think we all recognise this.  The current fallbacks work, but they're
fallbacks.

In fact, I think that these proposals are more or less the same - it's just
 in #1 the meta-data used to tie the backing networks together is another
 network.


Yup.


 This allows it to fit in neatly with the existing APIs. You would still
 need to implement something prior to IPAM or within IPAM that would select
 the appropriate backing network.

  As a (gulp) third alternative, we should consider that the front network
 here is in essence a layer 3 domain, and we have modeled layer 3 domains as
 address scopes in Liberty. The user is 

Re: [openstack-dev] Barbican : Unable to store the secret when Barbican was Integrated with SafeNet HSM

2015-07-21 Thread Asha Seshagiri
Hi John ,

One quick question:

When Barbican is integrated with an HSM, we send the order request to
generate a symmetric key.
The request goes to the HSM, which generates the symmetric key, i.e. the
secret. Then the secret is wrapped with the KEKs and sent back to
Barbican.

The key requested through the order resource is never persisted in the HSM.

Please correct me if I am wrong.

Thanks and  Regards,
Asha Seshagiri




On Tue, Jul 21, 2015 at 3:04 PM, Asha Seshagiri asha.seshag...@gmail.com
wrote:

 Hi John ,

 Thanks for providing the solution.
 It's a bug in the Barbican code; it works without passing the length.
 I will raise the bug and fix it.

 [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 '/usr/lib/libCryptoki2_64.so'  --passphrase 'test123' --slot-id  1 mkek
 --label 'an_mkek'
 Verified label !
 MKEK successfully generated!

 [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 '/usr/lib/libCryptoki2_64.so' --passphrase 'test123' --slot-id 1 hmac
 --label 'my_hmac_label'
 HMAC successfully generated!

 Thanks and Regards,
 Asha Seshagiri

 On Mon, Jul 20, 2015 at 2:05 PM, John Vrbanac john.vrba...@rackspace.com
 wrote:

  Hmm... This error is usually because one of the parameters is
 an incorrect type. I'm wondering if the length is coming through as a
 string instead of an integer. As the length defaults to 32, try not
 specifying the length parameter. If that works, we need to report a defect
 to make sure that it's properly converted to an integer.
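
 A minimal sketch of the coercion John suspects is missing. The option name
 and default mirror the pkcs11-key-generation flags shown in this thread, but
 the parser itself is hypothetical, not Barbican's actual code:

```python
import argparse

# Hypothetical sketch: declaring --length with type=int means the value
# reaches the PKCS#11 layer as an integer, never as the string "32".
parser = argparse.ArgumentParser(description='pkcs11 key generation sketch')
parser.add_argument('--length', type=int, default=32,
                    help='key length in bytes (coerced to int)')

args = parser.parse_args(['--length', '32'])
print(type(args.length).__name__)  # int
```

 Without `type=int`, argparse hands the raw string through, which is the
 kind of wrong-typed attribute that can trigger CKR_ATTRIBUTE_VALUE_INVALID.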


 John Vrbanac
  --
 *From:* Asha Seshagiri asha.seshag...@gmail.com
 *Sent:* Monday, July 20, 2015 10:30 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Reller, Nathan S.
 *Subject:* Re: [openstack-dev] Barbican : Unable to store the secret
 when Barbican was Integrated with SafeNet HSM

   Hi  John ,

  Thanks a lot John for your response.
  I tried executing the script with the following options before, but
 it seems it did not work. Hence I tried with the curly braces.

  Please find other options below :

 [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 '/usr/lib/libCryptoki2_64.so' --passphrase 'test123' --slot-id 1 mkek
 --length 32 --label 'an_mkek'
 HSM returned response code: 0x13L CKR_ATTRIBUTE_VALUE_INVALID
 [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 /usr/lib/libCryptoki2_64.so  --passphrase test123  --slot-id 1  mkek
 --length 32 --label an_mkek
 HSM returned response code: 0x13L CKR_ATTRIBUTE_VALUE_INVALID


  It would be of great help if I could get the syntax for running the script.

  Thanks and Regards,
  Asha  Seshagiri

 On Sun, Jul 19, 2015 at 6:25 PM, John Vrbanac john.vrba...@rackspace.com
  wrote:

  Don't include the curly brackets on the script arguments. The
 documentation is just using them to indicate that those are placeholders
 for real values.


 John Vrbanac
  --
 *From:* Asha Seshagiri asha.seshag...@gmail.com
 *Sent:* Sunday, July 19, 2015 2:15 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Reller, Nathan S.
 *Subject:* Re: [openstack-dev] Barbican : Unable to store the secret
 when Barbican was Integrated with SafeNet HSM

Hi John ,

  Thanks  for pointing me to the right script.
 I appreciate your help .

  I tried running the script with the following command :

  [root@HSM-Client bin]# python pkcs11-key-generation --library-path
 {/usr/lib/libCryptoki2_64.so} --passphrase {test123} --slot-id 1  mkek
 --length 32 --label 'an_mkek'
 Traceback (most recent call last):
   File "pkcs11-key-generation", line 120, in <module>
     main()
   File "pkcs11-key-generation", line 115, in main
     kg = KeyGenerator()
   File "pkcs11-key-generation", line 38, in __init__
     ffi=ffi
   File "/root/barbican/barbican/plugin/crypto/pkcs11.py", line 315, in __init__
     self.lib = self.ffi.dlopen(library_path)
   File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 127, in dlopen
     lib, function_cache = _make_ffi_library(self, name, flags)
   File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 572, in _make_ffi_library
     backendlib = _load_backend_lib(backend, libname, flags)
   File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 561, in _load_backend_lib
     return backend.load_library(name, flags)
 *OSError: cannot load library {/usr/lib/libCryptoki2_64.so}:
 {/usr/lib/libCryptoki2_64.so}: cannot open shared object file: No such file
 or directory*

 *Unable to run the script since the library libCryptoki2_64.so cannot be
 opened.*

  Tried the following solution:

    - vi /etc/ld.so.conf
    - Added to /etc/ld.so.conf both paths of libCryptoki2_64.so,
      found with the command find / -name libCryptoki2_64.so:
      - /usr/safenet/lunaclient/lib/libCryptoki2_64.so
      - /usr/lib/libCryptoki2_64.so
    - sudo ldconfig
    - ldconfig -p

 But the above solution failed and I am getting 
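
A quick way to tell whether a given library path is loadable at all,
independent of Barbican, is to try dlopen-ing it directly. This is only a
local sanity check; the helper name is mine, and it assumes ctypes fails the
same way cffi's dlopen does for a bad path:

```python
import ctypes
import os

def can_dlopen(path):
    """Return True if the shared library at `path` can be dlopen-ed.

    A local sanity check before pointing Barbican's pkcs11 plugin at an
    HSM library; assumes ctypes fails the same way cffi's dlopen does.
    """
    if not os.path.isfile(path):
        return False  # e.g. literal curly braces left in the path
    try:
        ctypes.CDLL(path)
        return True
    except OSError:
        return False

# The curly braces copied verbatim from the documentation make the path
# invalid, which is exactly the OSError seen in the traceback above:
print(can_dlopen('{/usr/lib/libCryptoki2_64.so}'))  # False
```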

Re: [openstack-dev] [keystone] token revocation woes

2015-07-21 Thread Matt Fischer
Dolph,

Excuse the delayed reply, was waiting for a brilliant solution from
someone. Without one, personally I'd prefer the cronjob as it seems to be
the type of thing cron was designed for. That will be a painful change as
people now rely on this behavior so I don't know if it's feasible. I will be
setting up monitoring for the revocation count and alerting me if it
crosses probably 500 or so. If the problem gets worse then I think a custom
no-op or sql driver is the next step.

Thanks.


On Wed, Jul 15, 2015 at 4:00 PM, Dolph Mathews dolph.math...@gmail.com
wrote:



 On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer m...@mattfischer.com
 wrote:

 I'm having some issues with keystone revocation events. The bottom line
 is that due to the way keystone handles the clean-up of these events[1],
 having more than a few leads to:

  - bad performance, up to 2x slower token validation with about 600
 events based on my perf measurements.
  - database deadlocks, which cause API calls to fail, more likely with
 more events it seems

 I am seeing this behavior in code from trunk on June 11 using Fernet
 tokens, but the token backend does not seem to make a difference.

 Here's what happens to the db in terms of deadlock:
 2015-07-15 21:25:41.082 31800 TRACE keystone.common.wsgi DBDeadlock:
 (OperationalError) (1213, 'Deadlock found when trying to get lock; try
 restarting transaction') 'DELETE FROM revocation_event WHERE
 revocation_event.revoked_at < %s' (datetime.datetime(2015, 7, 15, 18, 55,
 41, 55186),)

 When this starts happening, I just go truncate the table, but this is not
 ideal. If [1] is really true then the design is not great, it sounds like
 keystone is doing a revocation event clean-up on every token validation
 call. Reading and deleting/locking from my db cluster is not something I
 want to do on every validate call.


 Unfortunately, that's *exactly* what keystone is doing. Adam and I had a
 conversation about this problem in Vancouver which directly resulted in
 opening the bug referenced on the operator list:

   https://bugs.launchpad.net/keystone/+bug/1456797

 Neither of us remembered the actual implemented behavior, which is what
 you've run into and Deepti verified in the bug's comments.



 So, can I turn off token revocation for now? I didn't see an obvious no-op
 driver.


 Not sure how, other than writing your own no-op driver, or perhaps an
 extended driver that doesn't try to clean the table on every read?


 And in the long-run can this be fixed? I'd rather do almost anything
 else, including writing a cronjob than what happens now.


 If anyone has a better solution than the current one, that's also better
 than requiring a cron job on something like keystone-manage
 revocation_flush I'd love to hear it.
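
 For illustration, the "no-op driver" idea discussed above might look roughly
 like this. The method names are assumptions loosely modeled on a revocation
 driver interface, not Keystone's actual contract, so check the
 keystone revoke driver base class in your tree before attempting anything
 similar:

```python
class NoopRevokeDriver(object):
    """Accept revocation calls but never persist or consult events.

    Trades revocation enforcement for validation performance: with no
    stored events, there is nothing to read, lock, or clean up on
    every token validation call.
    """

    def list_events(self, last_fetch=None):
        return []          # no events -> nothing to check on validation

    def revoke(self, event):
        pass               # drop the event instead of writing to the DB

driver = NoopRevokeDriver()
driver.revoke(object())          # accepted, silently discarded
print(driver.list_events())      # []
```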


 [1] -
 http://lists.openstack.org/pipermail/openstack-operators/2015-June/007210.html

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [VXLAN] patch to use per-VNI multicast group addresses

2015-07-21 Thread Ian Wells
It is useful, yes; and posting diffs on the mailing list is not the way to
get them reviewed and approved.  If you can get this on gerrit it will get
a proper review, and I would certainly like to see something like this
incorporated.

On 21 July 2015 at 15:41, John Nielsen li...@jnielsen.net wrote:

 I may be in a small minority since I a) use VXLAN, b) don’t hate multicast
 and c) use linuxbridge instead of OVS. However I thought I’d share this
 patch in case I’m not alone.

 If you assume the use of multicast, VXLAN works quite nicely to isolate L2
 domains AND to prevent delivery of unwanted broadcast/unknown/multicast
 packets to VTEPs that don’t need them. However, the latter only holds up if
 each VXLAN VNI uses its own unique multicast group address. Currently, you
 have to either disable multicast (and use l2_population or similar) or use
 only a single group address for ALL VNIs (and force every single VTEP to
 receive every BUM packet from every network). For my usage, this patch
 seems simpler.

 Feedback is very welcome. In particular I’d like to know if anyone else
 finds this useful and if so, what (if any) changes might be required to get
 it committed. Thanks!

 JN


 commit 17c32a9ad07911f3b4148e96cbcae88720eef322
 Author: John Nielsen j...@jnielsen.net
 Date:   Tue Jul 21 16:13:42 2015 -0600

 Add a boolean option, vxlan_group_auto, which if enabled will compute
 a unique multicast group address group for each VXLAN VNI. Since VNIs
 are 24 bits, they map nicely to the 239.0.0.0/8 site-local multicast
 range. Eight bits of the VNI are used for the second, third and fourth
 octets (with 239 always as the first octet).

 Using this option allows VTEPs to receive BUM datagrams via multicast,
 but only for those VNIs in which they participate. In other words, it
 is
 an alternative to the l2_population extension and driver for
 environments
 where both multicast and linuxbridge are used.

 If the option is True then multicast groups are computed as described
 above. If the option is False then the previous behavior is used
 (either a single multicast group is defined by vxlan_group or multicast
 is disabled).
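
 The VNI-to-group arithmetic described above can be sketched in plain Python
 (the helper name is hypothetical; the actual patch adds an equivalent
 get_vxlan_group method to LinuxBridgeManager):

```python
def vni_to_group(vni):
    """Map a 24-bit VXLAN VNI onto a unique group in 239.0.0.0/8.

    The second, third and fourth octets carry the high, middle and low
    bytes of the VNI, so every VNI gets its own multicast group address.
    """
    assert 0 <= vni <= 0xFFFFFF, "VNI must fit in 24 bits"
    return "239.%d.%d.%d" % (vni >> 16, (vni >> 8) % 256, vni % 256)

# A VTEP then only joins the groups for the VNIs it actually hosts:
print(vni_to_group(1))         # 239.0.0.1
print(vni_to_group(0x123456))  # 239.18.52.86
```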

 diff --git a/etc/neutron/plugins/ml2/linuxbridge_agent.ini
 b/etc/neutron/plugins/ml2/linuxbridge_agent.ini
 index d1a01ba..03578ad 100644
 --- a/etc/neutron/plugins/ml2/linuxbridge_agent.ini
 +++ b/etc/neutron/plugins/ml2/linuxbridge_agent.ini
 @@ -25,6 +25,10 @@
  # This group must be the same on all the agents.
  # vxlan_group = 224.0.0.1
  #
 +# (BoolOpt) Derive a unique 239.x.x.x multicast group for each vxlan VNI.
 +# If this option is true, the setting of vxlan_group is ignored.
 +# vxlan_group_auto = False
 +#
  # (StrOpt) Local IP address to use for VXLAN endpoints (required)
  # local_ip =
  #
 diff --git
 a/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
 b/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
 index 6f15236..b4805d5 100644
 --- a/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
 +++ b/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
 @@ -31,6 +31,9 @@ vxlan_opts = [
             help=_("TOS for vxlan interface protocol packets.")),
      cfg.StrOpt('vxlan_group', default=DEFAULT_VXLAN_GROUP,
                 help=_("Multicast group for vxlan interface.")),
 +    cfg.BoolOpt('vxlan_group_auto', default=False,
 +                help=_("Derive a unique 239.x.x.x multicast group for "
 +                       "each vxlan VNI")),
      cfg.IPOpt('local_ip', version=4,
                help=_("Local IP address of the VXLAN endpoints.")),
  cfg.BoolOpt('l2_population', default=False,
 diff --git
 a/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
 b/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
 index 61627eb..a0efde1 100644
 ---
 a/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
 +++
 b/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
 @@ -127,6 +127,14 @@ class LinuxBridgeManager(object):
          LOG.warning(_LW("Invalid Segmentation ID: %s, will lead to "
                          "incorrect vxlan device name"),
                      segmentation_id)

 +    def get_vxlan_group(self, segmentation_id):
 +        if cfg.CONF.VXLAN.vxlan_group_auto:
 +            return ("239." +
 +                    str(segmentation_id >> 16) + "." +
 +                    str((segmentation_id >> 8) % 256) + "." +
 +                    str(segmentation_id % 256))
 +        return cfg.CONF.VXLAN.vxlan_group
 +
  def get_all_neutron_bridges(self):
  neutron_bridge_list = []
  bridge_list = os.listdir(BRIDGE_FS)
 @@ -240,7 +248,7 @@ class LinuxBridgeManager(object):
 'segmentation_id': segmentation_id})
  args = {'dev': self.local_int}
  if self.vxlan_mode == lconst.VXLAN_MCAST:
 -args['group'] = 

Re: [openstack-dev] [keystone] token revocation woes

2015-07-21 Thread Dolph Mathews
Well, you might be in luck! Morgan Fainberg actually implemented an
improvement that was apparently documented by Adam Young way back in March:

  https://bugs.launchpad.net/keystone/+bug/1287757

There's a link to the stable/kilo backport in comment #2 - I'd be eager to
hear how it performs for you!

On Tue, Jul 21, 2015 at 5:58 PM, Matt Fischer m...@mattfischer.com wrote:

 Dolph,

 Excuse the delayed reply, was waiting for a brilliant solution from
 someone. Without one, personally I'd prefer the cronjob as it seems to be
 the type of thing cron was designed for. That will be a painful change as
 people now rely on this behavior so I don't know if it's feasible. I will be
 setting up monitoring for the revocation count and alerting me if it
 crosses probably 500 or so. If the problem gets worse then I think a custom
 no-op or sql driver is the next step.

 Thanks.


 On Wed, Jul 15, 2015 at 4:00 PM, Dolph Mathews dolph.math...@gmail.com
 wrote:



 On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer m...@mattfischer.com
 wrote:

 I'm having some issues with keystone revocation events. The bottom line
 is that due to the way keystone handles the clean-up of these events[1],
 having more than a few leads to:

  - bad performance, up to 2x slower token validation with about 600
 events based on my perf measurements.
  - database deadlocks, which cause API calls to fail, more likely with
 more events it seems

 I am seeing this behavior in code from trunk on June 11 using Fernet
 tokens, but the token backend does not seem to make a difference.

 Here's what happens to the db in terms of deadlock:
 2015-07-15 21:25:41.082 31800 TRACE keystone.common.wsgi DBDeadlock:
 (OperationalError) (1213, 'Deadlock found when trying to get lock; try
 restarting transaction') 'DELETE FROM revocation_event WHERE
 revocation_event.revoked_at < %s' (datetime.datetime(2015, 7, 15, 18, 55,
 41, 55186),)

 When this starts happening, I just go truncate the table, but this is
 not ideal. If [1] is really true then the design is not great, it sounds
 like keystone is doing a revocation event clean-up on every token
 validation call. Reading and deleting/locking from my db cluster is not
 something I want to do on every validate call.


 Unfortunately, that's *exactly* what keystone is doing. Adam and I had a
 conversation about this problem in Vancouver which directly resulted in
 opening the bug referenced on the operator list:

   https://bugs.launchpad.net/keystone/+bug/1456797

 Neither of us remembered the actual implemented behavior, which is what
 you've run into and Deepti verified in the bug's comments.



 So, can I turn off token revocation for now? I didn't see an obvious
 no-op driver.


 Not sure how, other than writing your own no-op driver, or perhaps an
 extended driver that doesn't try to clean the table on every read?


 And in the long-run can this be fixed? I'd rather do almost anything
 else, including writing a cronjob than what happens now.


 If anyone has a better solution than the current one, that's also better
 than requiring a cron job on something like keystone-manage
 revocation_flush I'd love to hear it.


 [1] -
 http://lists.openstack.org/pipermail/openstack-operators/2015-June/007210.html


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaasv2]How to configure lbaasv2 in devstack

2015-07-21 Thread Eichberger, German
Please make sure to check the previous discussion about the effort Vivek is 
leading [1]

[1] https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg58328.html



From: jiangshan0...@139.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, July 20, 2015 at 11:06 PM
To: Yingjun Li liyingjun1...@gmail.com,
OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaasv2]How to configure lbaasv2 in 
devstack

Thanks a lot.


jiangshan0...@139.com

From: Yingjun Li liyingjun1...@gmail.com
Date: 2015-07-21 10:07
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaasv2]How to configure lbaasv2 in 
devstack

Currently horizon doesn't support LBaaS v2; there is a related blueprint, but
it hasn't been implemented yet:
https://blueprints.launchpad.net/horizon/+spec/lbaas-v2-panel

2015-07-21 9:49 GMT+08:00 jiangshan0...@139.com:
Hi all,

 I have configured these lines in my devstack localrc

# Load the external LBaaS plugin.
enable_plugin neutron-lbaas 
https://git.openstack.org/openstack/neutron-lbaas

## Neutron - Load Balancing
ENABLED_SERVICES+=,q-lbaasv2

# Horizon (Dashboard UI) - (always use the trunk)
ENABLED_SERVICES+=,horizon

# Neutron - Networking Service
# If Neutron is not declared the old good nova-network will be used
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron


And I can use lbaasv2 through the CLI, but I do not have the load
balancer pages in the dashboard (the other pages, like routers and networks, are all right).

Is there anything wrong in my configuration? Or maybe some 
configuration need to be done in horizon to use lbaasv2?

Thanks a lot for your help!




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][blueprint] magnum-service-list

2015-07-21 Thread SURO

Hi all,
[special attention: Jay Lau]

The bp[1] registered, asks for the following implementation -

 * 'magnum service-list' should be similar to 'nova service-list'
 * 'magnum service-list' should be moved to be ' magnum
   k8s-service-list'. Also similar holds true for 'pod-list'/'rc-list'

As I dug some details, I find -

 * 'magnum service-list' fetches data from OpenStack DB[2], instead of
   the COE endpoint. So technically it is not k8s-specific. magnum is
   serving data for objects modeled as 'service', just the way we are
   catering for 'magnum container-list' in case of swarm bay.
 * If magnum provides a way to get the COE endpoint details, users can
   use native tools to fetch the status of the COE-specific objects,
   viz. 'kubectl get services' here.
  * nova has a lot more backend services, e.g. cert, scheduler,
    consoleauth, compute etc., compared to magnum's conductor only.
    Also, not all the APIs have this 'service-list' available.

With these arguments in view, can we have some more 
explanation/clarification in favor of the ask in the blueprint?


[1] - https://blueprints.launchpad.net/magnum/+spec/magnum-service-list
[2] - 
https://github.com/openstack/magnum/blob/master/magnum/objects/service.py#L114


--
Regards,
SURO
irc//freenode: suro-patz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Abhishek Shrivastava
Hi Tang,

Can you please send me the whole job snippet you wrote.

On Tue, Jul 21, 2015 at 12:52 PM, Tang Chen tangc...@cn.fujitsu.com wrote:

  Hi Asselin, Abhishek,

 I got some problems when I was trying to write a jenkins job.

 I found that when zuul received the notification from gerrit, jenkins
 didn't run the test.

 I added something to noop-check-communication in /etc/jenkins_jobs/config/
 examples.yaml,
 just touched a file under /tmp.

 - job-template:
 name: 'noop-check-communication'
 node: '{node}'

 builders:
   - shell: |
   #!/bin/bash -xe
   touch
 /tmp/noop-check-communication# I added
 something here.
  echo "Hello world, this is the {vendor} Testing System"
   - link-logs  # In macros.yaml from os-ext-testing

 And I flushed the jobs, using jenkins-jobs --flush-cache update
 /etc/jenkins_jobs/config/.
 I can build the job in jenkins web UI, and the file was touched.


 But when I send a patch, the file was not touched. But CI really works.
 I can see it on the web site. ( https://review.openstack.org/#/c/203941/)

 What do you think of this?




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 


*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. http://www.cloudbyte.com*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Tang Chen


On 07/21/2015 03:35 PM, Abhishek Shrivastava wrote:

Hi Tang,

Can you please send me the whole job snippet you wrote.


In /etc/jenkins_jobs/config/ examples.yaml, that's all.

- job-template:
name: 'noop-check-communication'
node: '{node}'

builders:
  - shell: |
  #!/bin/bash -xe
  touch /tmp/noop-check-communication
  echo "Hello world, this is the {vendor} Testing System"
  - link-logs  # In macros.yaml from os-ext-testing

#publishers:
#  - devstack-logs  # In macros.yaml from os-ext-testing
#  - console-log  # In macros.yaml from os-ext-testing



noop-check-communication was set up by default. I didn't change anything
else.



BTW, I tried to ask this in #openstack-meeting IRC.
But no one seems to be active. :)

Thanks.




On Tue, Jul 21, 2015 at 12:52 PM, Tang Chen tangc...@cn.fujitsu.com wrote:


Hi Asselin, Abhishek,

I got some problems when I was trying to write a jenkins job.

I found that when zuul received the notification from gerrit,
jenkins didn't run the test.

I added something to noop-check-communication in
/etc/jenkins_jobs/config/ examples.yaml,
just touched a file under /tmp.

- job-template:
name: 'noop-check-communication'
node: '{node}'

builders:
  - shell: |
  #!/bin/bash -xe
  touch /tmp/noop-check-communication # I added something
here.
  echo "Hello world, this is the {vendor} Testing System"
  - link-logs  # In macros.yaml from os-ext-testing

And I flushed the jobs, using jenkins-jobs --flush-cache update
/etc/jenkins_jobs/config/.
I can build the job in jenkins web UI, and the file was touched.


But when I send a patch, the file was not touched. But CI really
works.
I can see it on the web site. (
https://review.openstack.org/#/c/203941/)

What do you think of this?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. http://www.cloudbyte.com*


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaasv2]How to configure lbaasv2 in devstack

2015-07-21 Thread jiangshan0...@139.com
Currently horizon doesn't support LBaaS v2; there is a related blueprint,
but it hasn't been implemented yet:
https://blueprints.launchpad.net/horizon/+spec/lbaas-v2-panel

2015-07-21 9:49 GMT+08:00 jiangshan0...@139.com jiangshan0...@139.com:

 Hi all,

  I have configured these lines in my devstack localrc

 # Load the external LBaaS plugin.
 enable_plugin neutron-lbaas
 https://git.openstack.org/openstack/neutron-lbaas

 ## Neutron - Load Balancing
 ENABLED_SERVICES+=,q-lbaasv2

 # Horizon (Dashboard UI) - (always use the trunk)
 ENABLED_SERVICES+=,horizon

 # Neutron - Networking Service
 # If Neutron is not declared the old good nova-network will be used
 ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron


 And I can use lbaasv2 through the CLI, but I do not have the
 load balancer pages in the dashboard (the other pages, like routers and
 networks, are all right).

 Is there anything wrong in my configuration? Or maybe some
 configuration need to be done in horizon to use lbaasv2?

 Thanks a lot for your help!

 --


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][puppet] The state of collaboration: 5 weeks

2015-07-21 Thread Dmitry Borodaenko

On Mon, Jul 20, 2015 at 12:24:13PM -0600, Matt Fischer wrote:

Dmitry and I chatted some this morning and I'd at least like to address
those issues so that we can resolve them and move on to the other
discussions. I am not speaking for Emilien here, just jumping in as a core
and trying to help resolve some of this.

As Emilien notes, many of us do not work full-time on puppet code, so
sometimes reviews can slip. IRC messages can also be missed especially
during summer months with many of us on vacations. For that reason, Dmitry
and the Fuel team will be bringing reviews to the weekly meeting to call
out ones that need or are overdue for attention. I believe that missed,
overlooked, ignored, or forgotten reviews are not a new issue for an
OpenStack project, but hopefully this solution will work to improve
throughput. This solution applies to anyone who is possibly not getting
enough attention on the review.

Our review dashboard is here: https://goo.gl/bSYJt8


After looking carefully at this dashboard's definition, I think I've 
figured out what happened to https://review.openstack.org/190548 -- the 
dashboard's foreach statement excludes all reviews with a -2 code review 
vote, so once that review received a -2 from Emilien, it would no longer 
show up in anyone's inbox, including Emilien's.


To mitigate that, I've proposed a change for the dashboard that adds a 
Disagreement section including reviews with both positive and negative 
votes:


https://review.openstack.org/203903

The rationale is that if, despite someone voting a patch set (or 
the whole review) down, other reviewers did not revoke (or kept adding) 
positive votes, further discussion of that commit is needed. It would 
still be possible to exclude a commit from all sections of the 
dashboard, if necessary (e.g. author is MIA) by abandoning it.


Reviewing that list once a week might be a good way to put stuck reviews 
on the meeting's agenda, and would benefit all contributors, not only 
Fuel developers.


Thoughts?

--
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Tang Chen

Hi Asselin, Abhishek,

I got some problems when I was trying to write a jenkins job.

I found that when zuul received the notification from gerrit, jenkins 
didn't run the test.


I added something to noop-check-communication in 
/etc/jenkins_jobs/config/ examples.yaml,

just touched a file under /tmp.

- job-template:
name: 'noop-check-communication'
node: '{node}'

builders:
  - shell: |
  #!/bin/bash -xe
  touch /tmp/noop-check-communication # I added something here.
  echo "Hello world, this is the {vendor} Testing System"
  - link-logs  # In macros.yaml from os-ext-testing

And I flushed the jobs, using jenkins-jobs --flush-cache update 
/etc/jenkins_jobs/config/.

I can build the job in jenkins web UI, and the file was touched.


But when I send a patch, the file was not touched. But CI really works.
I can see it on the web site. ( https://review.openstack.org/#/c/203941/)

What do you think of this?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugins] Ability of plugin to provision switches before node deployment

2015-07-21 Thread Evgeniy L
Hi Steven, thank you for letting us know about your use case,

The problem is that since the 6.1 release we run part of the
network verification tasks before deployment [1]. From the code
it can be seen that this check is hardcoded [2] to run before we
send any deployment tasks, so we should figure out a way to fix it.

Possible options:

1. run network verification as a part of deployment, somewhere
in the pre_deployment stage, between pre_deployment/4999 and
pre_deployment/6000 [3]; but this doesn't cover the case when the
user just wants to run network verification from the Network tab

2. introduce a new stage, pre_network_verification, which will
run before we start network verification, so you will be able
to configure switches

Also I would like to note that the role-as-a-plugin feature, which
will allow changing the core's deployment graph, will not help
here [4], because network verification is not a part of the graph.

Thanks,

[1] https://bugs.launchpad.net/fuel/+bug/1439686
[2]
https://github.com/stackforge/fuel-web/commit/c4594fc2461f1cf66e580a07d32a869c3f25678d
[3] https://wiki.openstack.org/wiki/Fuel/Plugins#stage_parameter
[4]
https://github.com/stackforge/fuel-specs/blob/master/specs/7.0/role-as-a-plugin.rst

On Tue, Jul 21, 2015 at 12:49 AM, Steven Kath k...@linux.com wrote:

 Hi,

 I'm hoping to design a FUEL plugin which can provision a switch or set
 of switches according to the Network Settings specified when first
 configuring an environment in FUEL.

 We have puppet manifests which allow us to configure every aspect
 of our switches, including the plumbing of VLANs. It would be possible
 for us to configure the VLANs as specified in the environment's
 Network Settings prior to, or as part of, the Verify Networks stage.

 As far as I can tell, there aren't any FUEL plugin hooks this early in
 the provisioning process. I can't find where to initiate any
 configuration of a remote device before validating the network
 settings. I've looked at a number of the networking plugins for FUEL
 and they all seem to be focused on adding overlays, pre-supposing a
 static underlay network configuration which the plugins never
 manipulate.

 Can anyone confirm whether the current FUEL capabilities, or planned
 FUEL 7.0 functionality, would allow for this sort of pre-deployment
 network configuration?

 Are there any relevant documentation sections or plugin examples
 which I might have overlooked?

 Thanks!
 - Steven

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Abhishek Shrivastava
If you want to create a new job, refer to *dsvm-cinder-driver.yaml.sample*.
Regarding *noop-check-communication*, it's just for testing the setup the
first time; don't modify it.

On Tue, Jul 21, 2015 at 1:19 PM, Tang Chen tangc...@cn.fujitsu.com wrote:


 On 07/21/2015 03:35 PM, Abhishek Shrivastava wrote:

  Hi Tang,

  Can you please send me the whole job snippet you wrote.


 In /etc/jenkins_jobs/config/ examples.yaml, that's all.

 - job-template:
 name: 'noop-check-communication'
 node: '{node}'

 builders:
   - shell: |
   #!/bin/bash -xe
   touch /tmp/noop-check-communication
   echo Hello world, this is the {vendor} Testing System
   - link-logs  # In macros.yaml from os-ext-testing

 #publishers:
 #  - devstack-logs  # In macros.yaml from os-ext-testing
 #  - console-log  # In macros.yaml from os-ext-testing



 noop-check-communication was setup by default. I didn't change anything
 else.


 BTW, I tried to ask this in #openstack-meeting IRC.
 But no one seems to be active. :)

 Thanks.



 On Tue, Jul 21, 2015 at 12:52 PM, Tang Chen tangc...@cn.fujitsu.com
 wrote:

  Hi Asselin, Abhishek,

 I got some problems when I was trying to write a jenkins job.

 I found that when zuul received the notification from gerrit, jenkins
 didn't run the test.

 I added something to noop-check-communication in
 /etc/jenkins_jobs/config/ examples.yaml,
 just touched a file under /tmp.

 - job-template:
 name: 'noop-check-communication'
 node: '{node}'

 builders:
   - shell: |
   #!/bin/bash -xe
    touch /tmp/noop-check-communication    # I added something here.
   echo Hello world, this is the {vendor} Testing System
   - link-logs  # In macros.yaml from os-ext-testing

 And I flushed the jobs, using jenkins-jobs --flush-cache update
 /etc/jenkins_jobs/config/.
 I can build the job in jenkins web UI, and the file was touched.


 But when I send a patch, the file was not touched. But CI really works.
 I can see it on the web site. ( https://review.openstack.org/#/c/203941/)

 What do you think of this?




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --


*Thanks & Regards,*
 *Abhishek*
  *Cloudbyte Inc. http://www.cloudbyte.com*


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 


*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. http://www.cloudbyte.com*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][puppet] The state of collaboration: 5 weeks

2015-07-21 Thread Dmitry Borodaenko

On Sun, Jul 19, 2015 at 09:42:20PM -0400, Emilien Macchi wrote:

I'm currently in holidays but I could not resist to take some time and
reply.


Thanks for taking the time, and apologies for distracting you from your
vacation!



Please read again my first comment on the Governance patch:

They are making progress and I'm sure they are doing their best.


Ok, I'm happy to drop this topic and focus on the positive message here! 
One way or the other, commits are a better way to measure progress than 
emails :)



Andrew took the best metric in your advantage... the review metric,
which is something given to anyone having signed the CLA.


The review metric is just what Stackalytics shows by default; if Andrew were 
shopping for the best metric, he'd have used patch sets.



I would rather focus on patchset, commits, bug fix, IRC, ML etc... which
really show collaboration in a group.
And I honestly think it's making progress, even though the numbers.


Patch set count is naturally the first metric to show progress. Fuel 
team's bug triage process is already fairly mature, so I'm not 
surprised there's noticeable contribution in that area, too.


Keeping that up and pushing for more IRC & ML participation should help
improve the patch set to commit ratio, which would make Fuel team more 
productive and would improve the mutual trust, helping Puppet OpenStack 
core reviewers trust that Fuel developers will not mess things up for 
them, and helping Fuel developers trust that their commits will not get 
stuck on review.


I think this should be our primary focus of improvement in the near 
term. Once there's evidence that Fuel developers can produce quality 
patches that can be landed relatively quickly, next level would be

improving our +1/-1 ratio and reviewing specs for blueprints.


[3] https://review.openstack.org/198119

(...)
Even before looking into the review comments, I could see a technical 
reason for abandoning the commit: if there is a bug in a component, 
fixing that bug in the package is preferable to fixing it in puppet, 
because it allows anybody to benefit from the fix, not just the 
people deploying that package with puppet. 


You are not providing official Ubuntu packaging, but your own packages
mainly used by Fuel, while Puppet OpenStack modules are widely used by
OpenStack community.


The only reason we've been doing our own packaging is lack of an 
official cross-distro upstream. We've already opened access to all our 
packaging code on review.fuel-infra.org, and want to move our packaging 
work directly upstream as soon as it becomes possible:


http://lists.openstack.org/pipermail/openstack-dev/2015-July/069377.html


Fixing that bug in Fuel packaging is the shortest & easiest way for you
to fix that,


It's neither shortest nor easiest: it's 26 changed lines in 4 files, vs 
4 changed lines in 2 files, and it was created only after we were told 
in the puppet-horizon review that packaging is the right place to fix 
it, so it was additional work on top of a workaround that we already 
had.



while we are really doing something wrong in puppet-horizon
about the 'compress' option.
So Fuel is now fixed and puppet-horizon broken.


No, puppet-horizon simply lacks a workaround for broken horizon 
packages, and that workaround also was reverted from fuel-library. 
Mirantis may be the first to fix their packages, but if other distros 
such as RDO or Ubuntu have the same problem in their packages, they 
should fix it on their end, instead of relying on puppet, chef, ansible, 
and other deployment tools to implement workarounds for their bugs.


It's a general "right tools for the job" problem. You can do a lot of 
things with Puppet, but it doesn't mean that you should use Puppet for 
everything. Managing options in a service's config files is Puppet's 
job. Replacing package's init script, tweaking its directory structure, 
creating symlinks, compiling binaries from code and so on should all be 
done while you're building a package, not after you have installed it.



[4] https://review.openstack.org/190548

Here's what I see in this review:

a) The Fuel team has spent more than a month (since June 11) trying to 
land this patch. 

b) 2 out of 5 negative reviews are from Fuel team, proving that Fuel 
developers are not rubberstamping each other's commits as was 
implied by Emilien's comments on the TC review above. 


c) There was one patch set that was pushed 3 business days after a negative
review, all other patch sets (11 total) were pushed no later than next day
after a negative review.

All in all, I see great commitment and effort on behalf of the Fuel team,
eventually rewarded with a +2 from Michael Chapman.

On the same day (June 30), Emilien votes -2 for a correction in a 
comment, and walks away from the review for good. 18 days and 4 patch 
sets and 2 outstanding +1's later, the review remains blocked by that 
-2. Reaching out on #puppet-openstack [5] didn't help, either. 


[5] 

Re: [openstack-dev] [designate] The designate API service is stopped

2015-07-21 Thread Jaime Fernández
I confirm that it happened again. I started all the designate processes and
after an hour (approximately), the designate-api process died with the same
stack trace.

Restarting the designate-api does not help because API requests are not
replied any more (timeouts):
2015-07-21 10:12:53.463 4403 ERROR designate.api.middleware
[req-281e9665-c49b-43e0-a5d0-9a48e5f52aa1 noauth-user noauth-project - - -]
Timed out waiting for a reply to message ID 19cda11c089f43e2a43a75a5851926c8

When the designate-api process dies, I need to restart all the designate
processes. Then the api works correctly until the process dies again.

I consider the hex dump normal for qpid (it is debug level), although I
noticed it was different with rabbitmq.

We have deployed designate on an Ubuntu host, as the installation instructions
recommended, and I don't think there is any security issue stopping the
service. In fact, the trace is really strange because the API was already
bound on port 9001. Our OpenStack platform is supported by Red Hat, and
that's why we need to integrate with qpid.

I will try a couple of different scenarios:
a) Use a qpid local instance (instead of OST qpid instance)
b) Use a rabbitmq local instance
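As a side note, the suggestion in the quoted thread below that another process may already hold port 9001 is easy to rule out with a quick check (a generic sketch, not designate code; the helper name is made up):

```python
import errno
import socket

def port_in_use(port, host='0.0.0.0'):
    # Try to bind the port ourselves: EADDRINUSE is the same "can't
    # bind" failure the API hits, meaning another process (e.g. a
    # leftover designate-api) is already listening on that port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
    except OSError as e:
        if e.errno == errno.EADDRINUSE:
            return True
        raise
    finally:
        s.close()
    return False
```

If `port_in_use(9001)` is False right before the API starts and the bind still fails, the problem is elsewhere (e.g. permissions or SELinux, as Kiall suggests).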



On Mon, Jul 20, 2015 at 5:20 PM, Kiall Mac Innes ki...@macinnes.ie wrote:

 Side Question: Is is normal for QPid to log all the
 \x0f\x01\x00\x14\x00\x01\x00\x00\x00\x00\x00's etc?

 I'm guessing that, since you're using qpid, you're also on RedHat. Could
 RH's SELinux policies be preventing the service from binding to tcp/9001?

 If you start as root, do you see similar issues?

 Thanks,
 Kiall

 On 20/07/15 15:48, Jaime Fernández wrote:
  Hi Tim,
 
  I only start one api process. In fact, when I say that the api process
  dies, I don't have any designate-api process and there is no process
  listening on the 9001 port.
 
  When I started all the designate processes, the API worked correctly
  because I had tested it. But after some inactivity period (a couple of
  hours, or a day), then the designate-api process died. It is not
  possible that the process has been restarted during this time.
 
  I've just started the process again and now it works. I will check if it
  dies again and report it.
 
 
  Thanks
 
  On Mon, Jul 20, 2015 at 4:24 PM, Tim Simmons tim.simm...@rackspace.com wrote:
 
  Jaime,
 
 
  Usually that's the error you see if you're trying to start up
  multiple API processes. They all try and bind to port 9001, so that
  error is saying the API can't bind. So something else (I suspect
  another designate-api process, or some other type of API) is already
  listening on that port.
 
 
  Hope that helps,
 
  Tim Simmons
 
 
 
  
  *From:* Jaime Fernández jjja...@gmail.com
 
  *Sent:* Monday, July 20, 2015 8:54 AM
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Subject:* [openstack-dev] [designate] The designate API service is
  stopped
 
  I've followed instructions to install Designate in Dev environment:
 
 http://docs.openstack.org/developer/designate/install/ubuntu-dev.html
 
  I've made some slight modifications to use qpid (instead of
  rabbitmq) and to integrate with Infoblox.
 
  What I've seen is that designate-api process dies (the other
  processes are running correctly). I'm not sure if the problem could
  be a network issue between designate-api and qpid.
 
  Here it is the output for last traces of designate-api process:
 
  2015-07-20 14:43:37.728 727 DEBUG qpid.messaging.io.raw [-]
  READ[3f383f8]:
  '\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00'
  readable
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:411
  2015-07-20 14:43:37.729 727 DEBUG qpid.messaging.io.ops [-]
  RCVD[3f383f8]: ConnectionHeartbeat() write
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:651
  2015-07-20 14:43:37.729 727 DEBUG qpid.messaging.io.ops [-]
  SENT[3f383f8]: ConnectionHeartbeat() write_op
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:683
  2015-07-20 14:43:37.730 727 DEBUG qpid.messaging.io.raw [-]
  SENT[3f383f8]:
  '\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00'
  writeable
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:475
  Traceback (most recent call last):
File
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/hubs/hub.py,
  line 457, in fire_timers
  timer()
File
 
  
 

[openstack-dev] [fuel][plugin] Plugin depends on another plugin

2015-07-21 Thread Daniel Depaoli
Hi all! I'm writing a fuel plugin that depends on another plugin; in
particular, one plugin installs node-js and the other plugin installs a
piece of software that uses nodejs.
What I did was add a condition in environment_config.yaml:
```
restrictions:
  - condition: settings:fuel-plugin-node-js.metadata.enabled == false
    action: disable
    message: Node JS must be present and enabled
```
This works if fuel-plugin-node-js is present, but doesn't work otherwise.
So I tried with:
```
- condition: settings:fuel-plugin-node-js
    and settings:fuel-plugin-node-js.metadata.enabled == false
```
but with the same result: it works only if the first plugin is present.

Can you help me?

-- 

Daniel Depaoli
CREATE-NET Research Center
Smart Infrastructures Area
Junior Research Engineer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugin] Plugin depends on another plugin

2015-07-21 Thread Vitaly Kramskikh
Hi, currently it's not possible to handle cases like this. The expression
parser by default expects every key in the expression to exist, otherwise
it throws an error. But it also supports a non-strict mode, in which
non-existent keys are treated as a null value. We can add support for
enabling this mode in 7.0, so it will look like this:

restrictions:
- condition: settings:fuel-plugin-node-js == null or
settings:fuel-plugin-node-js.metadata.enabled == false
  strict: false
  action: disable
  message: Node JS must be present and enabled

Will this work for you?
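The difference between strict and non-strict evaluation can be sketched with a toy resolver (hypothetical code, not the actual Fuel expression parser):

```python
def lookup(settings, path, strict=True):
    # Resolve a dotted path like "fuel-plugin-node-js.metadata.enabled".
    # Strict mode raises on a missing key (the current 6.1 behavior);
    # non-strict mode treats it as null, so "== null" checks can work
    # even when the plugin isn't installed at all.
    current = settings
    for key in path.split('.'):
        if isinstance(current, dict) and key in current:
            current = current[key]
        elif strict:
            raise KeyError(path)
        else:
            return None
    return current
```

With strict mode off, a condition like `settings:fuel-plugin-node-js == null` simply evaluates to true when the plugin is absent instead of blowing up.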

2015-07-21 11:30 GMT+03:00 Daniel Depaoli daniel.depa...@create-net.org:

 Hi all! I'm writing a fuel plugin that depends on another plugin, in
 particular one plugin install node-js and the other plugin install a
 software that uses nodejs.
 What i did is to add a condition in environment_config.yaml:
 ```
 *restrictions:*
 *- condition: settings:fuel-plugin-node-js.metadata.enabled ==
 false*
 *action: disable*
 *message: Node JS must be present and enabled*
 *```*
 This work if fuel-plugin-node-js is present, but doesn't work otherwise.
 So I tried with:
 ```
 *- condition: settings:fuel-plugin-node-js
 and settings:fuel-plugin-node-js.metadata.enabled == false*
 *```*
 but with the same result: it works only if the first plugin is present.

 Can you help me?

 --
 
 Daniel Depaoli
 CREATE-NET Research Center
 Smart Infrastructures Area
 Junior Research Engineer
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Service Chain project IRC meeting minutes - 07/16/2015

2015-07-21 Thread Vikram Choudhary
Hi Wei,

It happens every week unless cancelled by the chair.

Thanks
Vikram

On Tue, Jul 21, 2015 at 3:10 PM, Damon Wang damon.dev...@gmail.com wrote:

 Hi,

 Will the service chaining project meeting be held this week? I'd like to
 join :-D

 Wei Wang

 2015-07-17 2:09 GMT+08:00 Cathy Zhang cathy.h.zh...@huawei.com:

  Hi Everyone,



 Thanks for joining the service chaining project meeting on 7/16/2015.
 Here is the link to the meeting logs:

 http://eavesdrop.openstack.org/meetings/service_chaining/2015/



 Thanks,

 Cathy

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] How should oslo.service handle SIGINT?

2015-07-21 Thread Elena Ezhova
@Dims,

The main advantage of having a handler is cleaner output when a
service is killed: there wouldn't be a stack trace with a
KeyboardInterrupt exception, just messages like the ones we have in nova
now (nova's Switch to oslo.service patch hasn't merged yet):

2015-07-21 10:46:06.765 INFO nova.openstack.common.service
[req-b65dd702-4407-4442-941b-bdb45d935cfa None None] Caught SIGINT,
stopping children
2015-07-21 10:46:06.779 INFO nova.openstack.common.service
[req-b65dd702-4407-4442-941b-bdb45d935cfa None None] Waiting on 2 children
to exit
2015-07-21 10:46:06.823 INFO nova.openstack.common.service
[req-b65dd702-4407-4442-941b-bdb45d935cfa None None] Child 14490 killed by
signal 15
2015-07-21 10:46:06.830 INFO nova.openstack.common.service
[req-b65dd702-4407-4442-941b-bdb45d935cfa None None] Child 14491 killed by
signal 15

That looks cleaner than a KeyboardInterrupt:
http://paste.openstack.org/show/395226/


Elena
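For reference, the removed behavior amounts to installing a handler along these lines (a minimal generic sketch of the pattern, not the actual oslo-incubator code; process management is heavily simplified and the helper names are made up):

```python
import os
import signal
import sys

def make_sigint_handler(children):
    # On SIGINT: terminate child workers with SIGTERM, reap them, and
    # exit cleanly, instead of letting KeyboardInterrupt unwind the
    # stack and print a traceback.
    def handler(signum, frame):
        sys.stderr.write("Caught SIGINT, stopping children\n")
        for pid in children:
            os.kill(pid, signal.SIGTERM)
        for pid in children:
            os.waitpid(pid, 0)
        sys.exit(0)
    return handler

def fork_sleeper():
    # Hypothetical worker process: just sleeps until terminated.
    pid = os.fork()
    if pid == 0:
        import time
        while True:
            time.sleep(60)
    return pid
```

Installing it with `signal.signal(signal.SIGINT, make_sigint_handler(children))` turns Ctrl-C into the clean "Caught SIGINT, stopping children" shutdown shown in the log above, rather than a KeyboardInterrupt traceback.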




On Mon, Jul 20, 2015 at 8:22 PM, Davanum Srinivas dava...@gmail.com wrote:

 @Elena,

 What are the pro's and con's of having a handler as this was the
 existing behavior for all the projects that picked up code from
 oslo-incubator for this behavior?

 -- dims

 On Mon, Jul 20, 2015 at 1:12 PM, Elena Ezhova eezh...@mirantis.com
 wrote:
  Hi!
 
  Not so long ago oslo.service had a handler for SIGINT and on receiving
 this
  signal a process killed all children with SIGTERM and exited.
  Change [1], that added graceful shutdown on SIGTERM, removed all SIGINT
  handlers and currently all services that consume oslo.service generate
  KeyboardInterrupt exceptions on receiving SIGINT.
 
  My question is whether such behavior is what users and operators expect
 or
  having a handler is still the preferred way?
 
 
  Thanks, Elena
 
  [1]
 
 https://github.com/openstack/oslo.service/commit/fa9aa6b665f75e610f2b91a7d310f6499bd71770
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Davanum Srinivas :: https://twitter.com/dims

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3: 5 more projects with a py34 voting gate, only 4 remaining

2015-07-21 Thread Kiall Mac Innes
You can (nearly) add Designate to this list :)

Pradeep has been doing a great job getting the codebase py3 compatible!

Thanks,
Kiall

On 17/07/15 12:32, Victor Stinner wrote:
 Hi,
 
 We are close to having a voting py34 gate on all OpenStack libraries and
 applications. I just made the py34 gate voting for the 5 following
 projects:
 
 * keystone
 * heat
 * glance_store: Glance library (py34 is already voting in Glance)
 * os-brick: Cinder library (py34 is already voting in Cinder)
 * sqlalchemy-migrate
 
 
 A voting py34 gate means that we cannot reintroduce Python 3 regressions
 in the code tested by tox -e py34. Currently, only a small subset of
 test suites is executed on Python 3.4, but the subset is growing
 constantly and it already helps to detect various kinds of Python 3 issues.
 
 Sirushti Murugesan (who is porting Heat to Python 3) and I proposed a
 talk Python 3 is coming! for the next OpenStack Summit in Tokyo. We
 will explain the plan to port OpenStack to Python 3 in depth.
 
 
 There are only 4 remaining projects without py34 voting gate:
 
 (1) swift: I sent patches, see the Fix tox -e py34 patch:
 
 https://review.openstack.org/#/q/project:openstack/swift+branch:master+topic:py3,n,z
 
 
 
 (2) horizon: I sent patches:
 
 https://review.openstack.org/#/q/topic:bp/porting-python3+project:openstack/horizon,n,z
 
 
 
 (3) keystonemiddleware: blocked by python-memcached, I sent a pull
 request 3 months ago and I'm still waiting...
 
 https://github.com/linsomniac/python-memcached/pull/67
 
 I may fork the project if the maintainer never replies. Read the current
 thread [all] Non-responsive upstream libraries (python34 specifically)
 on openstack-dev.
 
 
 (4) python-savannaclient: We don't have enough tests to ensure that the
 savanna client works correctly on py33, so it's kind of a premature step.
 We already have py33 and pypy jobs in experimental pipeline. This
 client can be ported later.
 
 
 Note: The py34 gate of oslo.messaging is currently non voting because of
 a bug in Python 3.4.0, fix not backported to Ubuntu Trusty LTS yet:
 
 https://bugs.launchpad.net/ubuntu/+source/python3.4/+bug/1367907
 
 The bug was fixed in Python 3.4 in May 2014 and was reported to Ubuntu
 in September 2014.
 
 Victor
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugin] Plugin depends on another plugin

2015-07-21 Thread Vitaly Kramskikh
Daniel,

Yes, it doesn't work in 6.1 release. My question is: are you OK if we
support your case in 7.0 using the approach I described?

2015-07-21 14:13 GMT+03:00 Daniel Depaoli daniel.depa...@create-net.org:



 On Tue, Jul 21, 2015 at 12:02 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:

 Hi, currently it's not possible to handle cases like this. The expression
 parser by default expects every key in the expression to exist, otherwise
 it throws an error. But it also supports non-strict mode, in which
 non-existent keys are treated as null value. We can add support for
 enabling this mode in 7.0, so it will look like this:

 restrictions:
 - condition: settings:fuel-plugin-node-js == null or
 settings:fuel-plugin-node-js.metadata.enabled == false
   strict: false
   action: disable
   message: Node JS must be present and enabled

 Will this work for you?


 No, this solution unfortunately doesn't work if the nodejs plugin is not
 present. But thanks anyway!


 2015-07-21 11:30 GMT+03:00 Daniel Depaoli daniel.depa...@create-net.org
 :

 Hi all! I'm writing a fuel plugin that depends on another plugin, in
 particular one plugin install node-js and the other plugin install a
 software that uses nodejs.
 What i did is to add a condition in environment_config.yaml:
 ```
 *restrictions:*
 *- condition: settings:fuel-plugin-node-js.metadata.enabled ==
 false*
 *action: disable*
 *message: Node JS must be present and enabled*
 *```*
 This work if fuel-plugin-node-js is present, but doesn't work otherwise.
 So I tried with:
 ```
 *- condition: settings:fuel-plugin-node-js
 and settings:fuel-plugin-node-js.metadata.enabled == false*
 *```*
 but with the same result: it works only if the first plugin is present.

 Can you help me?

 --
 
 Daniel Depaoli
 CREATE-NET Research Center
 Smart Infrastructures Area
 Junior Research Engineer
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 
 Daniel Depaoli
 CREATE-NET Research Center
 Smart Infrastructures Area
 Junior Research Engineer
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][kilo][glance] freeze exceptions

2015-07-21 Thread Alan Pevec
 We have been waiting for the python-cinderclient stable/kilo release for a
 couple of weeks to be able to merge the glance_store stable/kilo backports. Namely:

 https://review.openstack.org/#/q/status:open+project:openstack/glance_store+branch:stable/kilo,n,z

 As Alan blocked them all, I’d like to ask everyone hold your horses with the
 2015.1.1 release until cinder gets their client released so we can fix the
 glance store for the release.

This was actually a freeze-script misfire: it matched the glance substring.
glance_store was not part of point releases, just as clients and Oslo
libs are not, so I'll remove my -2s.

BTW, what is the estimate for the cinderclient stable/kilo release?

Cheers,
Alan
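The substring pitfall behind the misfire is easy to demonstrate (illustrative Python only, not the actual freeze script; the project list is made up):

```python
# 'glance' as a substring also matches glance_store and
# python-glanceclient, which are released independently of the
# server and shouldn't have been caught by the freeze.
projects = ['openstack/glance', 'openstack/glance_store',
            'openstack/python-glanceclient', 'openstack/cinder']

matched_by_substring = [p for p in projects if 'glance' in p]
matched_exactly = [p for p in projects if p.split('/')[-1] == 'glance']
```

Matching on the full project name rather than a substring avoids sweeping up the libraries and clients.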

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3: 5 more projects with a py34 voting gate, only 4 remaining

2015-07-21 Thread Victor Stinner
Hi,

Could you please modify the wiki page yourself to add Designate? I don't want 
to be the only one maintaining this wiki page ;-)

https://wiki.openstack.org/wiki/Python3

Victor

- Original Message -
 From: Kiall Mac Innes ki...@macinnes.ie
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, July 21, 2015 12:51:03 PM
 Subject: Re: [openstack-dev] Python 3: 5 more projects with a py34 voting 
 gate, only 4 remaining
 
 You can (nearly) add Designate to this list :)
 
 Pradeep has been doing a great job getting the codebase py3 compatible!
 
 Thanks,
 Kiall
 
 On 17/07/15 12:32, Victor Stinner wrote:
  Hi,
  
  We are close to having a voting py34 gate on all OpenStack libraries and
  applications. I just made the py34 gate voting for the 5 following
  projects:
  
  * keystone
  * heat
  * glance_store: Glance library (py34 is already voting in Glance)
  * os-brick: Cinder library (py34 is already voting in Cinder)
  * sqlalchemy-migrate
  
  
  A voting py34 gate means that we cannot reintroduce Python 3 regressions
  in the code tested by tox -e py34. Currently, only a small subset of
  test suites is executed on Python 3.4, but the subset is growing
  constantly and it already helps to detect various kinds of Python 3 issues.
  
  Sirushti Murugesan (who is porting Heat to Python 3) and me proposed a
  talk Python 3 is coming! to the next OpenStack Summit at Tokyo. We
  will explain the plan to port OpenStack to Python in depth.
  
  
  There are only 4 remaining projects without py34 voting gate:
  
  (1) swift: I sent patches, see the Fix tox -e py34 patch:
  
  https://review.openstack.org/#/q/project:openstack/swift+branch:master+topic:py3,n,z
  
  
  
  (2) horizon: I sent patches:
  
  https://review.openstack.org/#/q/topic:bp/porting-python3+project:openstack/horizon,n,z
  
  
  
  (3) keystonemiddleware: blocked by python-memcached, I sent a pull
  request 3 months ago and I'm still waiting...
  
  https://github.com/linsomniac/python-memcached/pull/67
  
  I may fork the project if the maintainer never reply. Read the current
  thread [all] Non-responsive upstream libraries (python34 specifically)
  on openstack-dev.
  
  
  (4) python-savannaclient: We don't have enough tests to ensure that the
  savanna client works correctly on py33, so it's kind of a premature step.
  We already have py33 and pypy jobs in experimental pipeline. This
  client can be ported later.
  
  
  Note: The py34 gate of oslo.messaging is currently non voting because of
  a bug in Python 3.4.0, fix not backported to Ubuntu Trusty LTS yet:
  
  https://bugs.launchpad.net/ubuntu/+source/python3.4/+bug/1367907
  
  The bug was fixed in Python 3.4 in May 2014 and was reported to Ubuntu
  in September 2014.
  
  Victor
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] How should oslo.service handle SIGINT?

2015-07-21 Thread Davanum Srinivas
Elena,

The cleaner output was the norm before oslo.service was merged and if
there are no downsides, then we should fix oslo.service to do the
same.

thanks,
dims

On Tue, Jul 21, 2015 at 6:49 AM, Elena Ezhova eezh...@mirantis.com wrote:
 @Dims,

 The main advantage of having a handler is a more clean output in case a
 service is killed, which means there wouldn't be a stacktrace with
 KeyboardInterrupt exception, just messages like we have in nova now (nova
 Switch to oslo.service patch hadn't merged yet):

 2015-07-21 10:46:06.765 INFO nova.openstack.common.service
 [req-b65dd702-4407-4442-941b-bdb45d935cfa None None] Caught SIGINT, stopping
 children
 2015-07-21 10:46:06.779 INFO nova.openstack.common.service
 [req-b65dd702-4407-4442-941b-bdb45d935cfa None None] Waiting on 2 children
 to exit
 2015-07-21 10:46:06.823 INFO nova.openstack.common.service
 [req-b65dd702-4407-4442-941b-bdb45d935cfa None None] Child 14490 killed by
 signal 15
 2015-07-21 10:46:06.830 INFO nova.openstack.common.service
 [req-b65dd702-4407-4442-941b-bdb45d935cfa None None] Child 14491 killed by
 signal 15

 That looks cleaner than a KeyboardInterrupt:
 http://paste.openstack.org/show/395226/
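For illustration, a minimal sketch of the handler behaviour described above (the helper name is hypothetical, not oslo.service's actual API):

```python
import signal
import sys

def install_sigint_handler(log=print):
    """Install a SIGINT handler that exits cleanly instead of letting
    KeyboardInterrupt unwind with a full stacktrace (hypothetical
    helper, sketching the old oslo-incubator behaviour)."""
    def _handler(signum, frame):
        log("Caught SIGINT, stopping children")
        # A real service launcher would SIGTERM its children here
        # before exiting; this sketch just exits with a clean status.
        sys.exit(1)
    signal.signal(signal.SIGINT, _handler)
```

With this installed, Ctrl-C produces a single log line and a clean exit rather than a traceback.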


 Elena




 On Mon, Jul 20, 2015 at 8:22 PM, Davanum Srinivas dava...@gmail.com wrote:

 @Elena,

 What are the pro's and con's of having a handler as this was the
 existing behavior for all the projects that picked up code from
 oslo-incubator for this behavior?

 -- dims

 On Mon, Jul 20, 2015 at 1:12 PM, Elena Ezhova eezh...@mirantis.com
 wrote:
  Hi!
 
  Not so long ago oslo.service had a handler for SIGINT and on receiving
  this
  signal a process killed all children with SIGTERM and exited.
  Change [1], that added graceful shutdown on SIGTERM, removed all SIGINT
  handlers and currently all services that consume oslo.service generate
  KeyboardInterrupt exceptions on receiving SIGINT.
 
  My question is whether such behavior is what users and operators expect
  or
  having a handler is still the preferred way?
 
 
  Thanks, Elena
 
  [1]
 
  https://github.com/openstack/oslo.service/commit/fa9aa6b665f75e610f2b91a7d310f6499bd71770
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Davanum Srinivas :: https://twitter.com/dims

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Device names supplied to the boot request

2015-07-21 Thread Daniel P. Berrange
On Fri, Jul 17, 2015 at 08:12:18AM +0300, Feodor Tersin wrote:
 The third reason is that at least xen supports them. I know that vdr
 becomes /dev/xvdr, but:
 1) I believe the main aim of specifying a device name is to easily
 distinguish among devices. The last char is enough for that, and xen
 supports that.
 2) Xen users may use this now.
 3) We don't know about device name support in VMWare and other hypervisors.
 4) AWS in its turn also does not warrant that the selected name will be
 the same as the requested one [3].
 So i think Nova could look at a specified device name as a hint:
 if the hypervisor in use can follow it, even partially, Nova honours it.
 In particular libvirt could sort devices by device name.

If the libvirt XML contains device names, then in fact the *only*
thing that libvirt uses those names for is to represent the relative
sort ordering of the disk devices attached to the same bus.

So if you have two virtio-blk devices with names /dev/vda and
/dev/vdb, libvirt will maintain that relative ordering, in so
much that the /dev/vda will get a PCI device address that is
numerically lower than the PCI device address of the device
with /dev/vdb. Now by convention Linux currently enumerates
block devices in PCI address order, so by luck they will
*probably* end up being named /dev/vda and /dev/vdb in the
guest, but the virtio protocol makes no guarantees about this.

Things get more complicated though when you have a mixture of
block device buses eg virtio-blk and virtio-scsi, or plain
scsi, or IDE, etc., as there's no guarantee about which bus
type will get enumerated first - in fact it's entirely possible
for them to be enumerated in parallel. Throw in hotplug & unplug and
the mess becomes even worse, e.g. start with 3 disks vda, vdb
and vdc, now unplug vdb and reboot the guest OS. What used
to be vdc will now appear as vdb in the guest.

NB, none of this is a limitation imposed by libvirt - it is
all just an artifact of the hypervisor virtual hardware model.
Xen paravirt disk is the only case I know of where it is
possible to have the guest OS honour requested device name,
but even that's not guaranteed to be honoured by the guest.

 And obviously i agree with the aim of your patch which persists a
 true device name. I believe an operator must see true names to
 have an easy way to associate devices visible from inside guest
 OS with Cinder volumes and their attributes (like volume type,
 multiattach ability and so on).

There is really no such thing as true device names. What Nikola's
patch persists is just the libvirt auto-assigned names - the only
thing these tell you is the relative ordering of the devices within
a specific bus. The guest OS is no more likely to honour the libvirt
assigned names, than the user assigned names.

So if you are trying to identify what the guest OS sees as devices
names, you are out of luck in every way.

If users want a reliable way to identify devices, there are really
only two options that work in general

 - Disk serial numbers (they're not actually numbers, they are
   arbitrary 32-byte strings). Nova auto-assigns these for cinder
   volumes, but it's kind of dumb as the values nova assigns are
   unique but longer than 32 bytes, so they get truncated to a
   value which is possibly no longer unique!

   We also don't set the serial for non-cinder volumes. We should
   probably define some sensible generic naming scheme we can use
   for recording serial numbers for all types of disk and apply
   that across all hypervisor drivers in Nova

 - Device address information. eg what PCI device the disk is
   associated with. The users don't get to choose the PCI device
   address though, so its not immediately useful. I have a blueprint
   proposed to make this useful though, by allowing an arbitrary
   string tag to be associated with any type of guest device
   (eg NICs as well as disk), and then expose metadata to the guest
   OS associating the tags with the device address info. This will
   let admins reliably associated devices with specific roles

https://review.openstack.org/#/c/195662/
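The truncation problem Daniel mentions is easy to demonstrate (a sketch of the issue, not Nova's actual code path):

```python
import uuid

SERIAL_MAX_LEN = 32  # disk serial strings are capped at 32 bytes

def disk_serial(volume_id):
    """Sketch of the issue: a volume UUID rendered as a string is 36
    characters, so capping it at 32 silently drops the tail and the
    result is no longer guaranteed to be unique."""
    return str(volume_id)[:SERIAL_MAX_LEN]

volume = uuid.uuid4()
serial = disk_serial(volume)
# len(str(volume)) is 36, while len(serial) is only 32
```

Two distinct volume UUIDs that share their first 32 characters would collide after truncation, which is exactly the uniqueness problem described above.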


All this is to say, I think Nikola's proposed patch is fine. The intended
use case for supplying device names is already broken in all cases, with
the possible exception of Xen when using Xen paravirt disks and Linux guests.
There are far better ways to allow users to identify disks, which we
should prioritize.

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [fuel][plugin] Plugin depends on another plugin

2015-07-21 Thread Daniel Depaoli
On Tue, Jul 21, 2015 at 12:02 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:

 Hi, currently it's not possible to handle cases like this. The expression
 parser by default expects every key in the expression to exist, otherwise
 it throws an error. But it also supports non-strict mode, in which
 non-existent keys are treated as null value. We can add support for
 enabling this mode in 7.0, so it will look like this:

 restrictions:
 - condition: settings:fuel-plugin-node-js == null or
 settings:fuel-plugin-node-js.metadata.enabled == false
   strict: false
   action: disable
   message: Node JS must be present and enabled

 Will this work for you?


No this solution unfortunately doesn't work if the nodejs plugin is not
present. But thanks anyway!
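For reference, the non-strict mode Vitaly proposes could be sketched as follows (illustrative code, not Fuel's actual expression parser):

```python
def lookup(settings, path, strict=True):
    """Resolve a dotted path such as
    'fuel-plugin-node-js.metadata.enabled' against a nested dict.
    In strict mode a missing key is an error; in non-strict mode it
    evaluates to None, which is what makes a condition like
    "settings:fuel-plugin-node-js == null or ..." usable even when
    the plugin is not installed."""
    value = settings
    for key in path.split('.'):
        if not isinstance(value, dict) or key not in value:
            if strict:
                raise KeyError(path)
            return None
        value = value[key]
    return value

settings = {}  # the node-js plugin is not installed
enabled = lookup(settings, 'fuel-plugin-node-js.metadata.enabled',
                 strict=False)
disable_plugin = enabled is None or enabled is False
```

In strict mode the same lookup raises, which matches the error Daniel hit when the dependency plugin is absent.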


 2015-07-21 11:30 GMT+03:00 Daniel Depaoli daniel.depa...@create-net.org:

 Hi all! I'm writing a fuel plugin that depends on another plugin, in
 particular one plugin install node-js and the other plugin install a
 software that uses nodejs.
 What i did is to add a condition in environment_config.yaml:
 ```
 *restrictions:*
 *- condition: settings:fuel-plugin-node-js.metadata.enabled ==
 false*
 *action: disable*
 *message: Node JS must be present and enabled*
 *```*
 This work if fuel-plugin-node-js is present, but doesn't work otherwise.
 So I tried with:
 ```
 *- condition: settings:fuel-plugin-node-js
 and settings:fuel-plugin-node-js.metadata.enabled == false*
 *```*
 but with the same result: it works only if the first plugin is present.

 Can you help me?

 --
 
 Daniel Depaoli
 CREATE-NET Research Center
 Smart Infrastructures Area
 Junior Research Engineer
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Daniel Depaoli
CREATE-NET Research Center
Smart Infrastructures Area
Junior Research Engineer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] The designate API service is stopped

2015-07-21 Thread Jaime Fernández
Hi Kiall,

It's a bit strange because only designate-api dies but designate-sink is
also integrated with qpid and survives.

These issues are a bit difficult to debug because they are not deterministic. What I've
just tested is using a local qpid instance and it looks like the
designate-api is not killed any more (however, it's a short period of
time). We are going to move the host where the designate components are
installed into the same VLAN as the rest of OST, just to check whether it's a
rare issue with the network.

Before testing with Rabbit, as you recommended, we are testing with qpid in
the same VLAN (just to discard the network issue).

I will give you info about my progress.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugin] Plugin depends on another plugin

2015-07-21 Thread Daniel Depaoli
Yes, it should resolve my case and, in general, any case of depending on
something that is not installed.

On Tue, Jul 21, 2015 at 1:23 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:

 Daniel,

 Yes, it doesn't work in 6.1 release. My question is: are you OK if we
 support your case in 7.0 using the approach I described?

 2015-07-21 14:13 GMT+03:00 Daniel Depaoli daniel.depa...@create-net.org:



 On Tue, Jul 21, 2015 at 12:02 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:

 Hi, currently it's not possible to handle cases like this. The
 expression parser by default expects every key in the expression to exist,
 otherwise it throws an error. But it also supports non-strict mode, in
 which non-existent keys are treated as null value. We can add support for
 enabling this mode in 7.0, so it will look like this:

 restrictions:
 - condition: settings:fuel-plugin-node-js == null or
 settings:fuel-plugin-node-js.metadata.enabled == false
   strict: false
   action: disable
   message: Node JS must be present and enabled

 Will this work for you?


 No this solution unfortunately doesn't work if the nodejs plugin is not
 present. But thanks anyway!


 2015-07-21 11:30 GMT+03:00 Daniel Depaoli daniel.depa...@create-net.org
 :

 Hi all! I'm writing a fuel plugin that depends on another plugin, in
 particular one plugin install node-js and the other plugin install a
 software that uses nodejs.
 What i did is to add a condition in environment_config.yaml:
 ```
 *restrictions:*
 *- condition: settings:fuel-plugin-node-js.metadata.enabled ==
 false*
 *action: disable*
 *message: Node JS must be present and enabled*
 *```*
 This work if fuel-plugin-node-js is present, but doesn't work otherwise.
 So I tried with:
 ```
 *- condition: settings:fuel-plugin-node-js
 and settings:fuel-plugin-node-js.metadata.enabled == false*
 *```*
 but with the same result: it works only if the first plugin is present.

 Can you help me?

 --
 
 Daniel Depaoli
 CREATE-NET Research Center
 Smart Infrastructures Area
 Junior Research Engineer
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 
 Daniel Depaoli
 CREATE-NET Research Center
 Smart Infrastructures Area
 Junior Research Engineer
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Daniel Depaoli
CREATE-NET Research Center
Smart Infrastructures Area
Junior Research Engineer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] What to do with reservation check in node update API?

2015-07-21 Thread Dmitry Tantsur

Hi folks!

If you're not aware already, I'm working on solving "node is locked"
problems breaking users (and tracking it at
https://etherpad.openstack.org/p/ironic-locking-reform). We have retries
in place in the client, but we all agree that it's not the eventual solution.


One of the things we've figured out is that we actually have server-side 
retries - in task_manager.acquire. They're nice and configurable. Alas, 
we have one place that checks reservations without task_manager: 
https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L401-L403 
(note that this check is actually racy)


I'd like to ask your opinions on how to solve this. I have 3 ideas:
1. Just implement retries at the API level (possibly splitting a common
function out of task_manager).
2. Move the update to the conductor instead of doing it directly in the API.
3. Don't check the reservation when updating a node. At all.

Ideas?
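Option 1 could look roughly like this (illustrative names, not Ironic's actual API; it mirrors the configurable retries already done in task_manager.acquire):

```python
import time

class NodeLocked(Exception):
    """Stand-in for Ironic's NodeLocked exception."""

def with_reservation_retries(check, attempts=3, interval=0.1):
    """Sketch of API-level retries: rerun a racy reservation check a
    few times with a delay, instead of failing the node update on the
    first NodeLocked, the same way task_manager.acquire retries."""
    for attempt in range(attempts):
        try:
            return check()
        except NodeLocked:
            if attempt == attempts - 1:
                raise  # out of retries, surface the lock to the caller
            time.sleep(interval)
```

The attempts/interval pair would map onto the same config options the conductor-side retries already use.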

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][kilo][glance] freeze exceptions

2015-07-21 Thread Kuvaja, Erno
Hi all,

We have been waiting for the python-cinderclient stable/kilo release for a couple
of weeks to be able to merge the glance_store stable/kilo backports. Namely:
https://review.openstack.org/#/q/status:open+project:openstack/glance_store+branch:stable/kilo,n,z

As Alan blocked them all, I'd like to ask everyone to hold your horses with the
2015.1.1 release until cinder gets their client released, so we can fix
glance_store for the release.

Thanks,
Erno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] The designate API service is stopped

2015-07-21 Thread Kiall Mac Innes
Inline.

Thanks,
Kiall

On 21/07/15 09:24, Jaime Fernández wrote:
 I confirm that it happened again. I started all the designate processes
 and after an hour (approximately), the designate-api process died with
 the same stack trace.
 
 Restarting the designate-api does not help because API requests are not
 replied any more (timeouts):
 2015-07-21 10:12:53.463 4403 ERROR designate.api.middleware
 [req-281e9665-c49b-43e0-a5d0-9a48e5f52aa1 noauth-user noauth-project - -
 -] Timed out waiting for a reply to message ID
 19cda11c089f43e2a43a75a5851926c8

This suggests the API service is back up and running, but either your
message broker (qpid), or the target service (designate-central) is down.

 
 When the designate-api process dies, I need to restart all the designate
 processes. Then the api works correctly until the process dies again.

This leads me to believe qpid is somehow responsible, but, I don't have
any concrete reasons to believe that. Simply a gut feeling! If all
services need to be restarted, then it's most likely a shared resource
that's failing, mis-configured, or somehow incompatible with Designate.
I've never seen anything like this before :/

 
 I consider the hex dump normal for qpid (it is debug level),
 although I noticed it was different in rabbitmq.
 
 We have deployed designate in an Ubuntu host, as installation
 instructions recommended, and I don't think there is any security issue
 to stop the service. In fact, the trace is really strange because the
 api was already bound on port 9001. Our Openstack platform is supported
 by RedHat and that's why we need to integrate with qpid.
 
 I will try a couple of different scenarios:
 a) Use a qpid local instance (instead of OST qpid instance)
 b) Use a rabbitmq local instance

Can you try with RabbitMQ? It's really the only one we test, but it's
all hidden behind the oslo.messaging library, so I can't think of any
good reason why one would work and the other doesn't. If this works,
I'll test out with qpid and see if there's anything obvious.

 
 On Mon, Jul 20, 2015 at 5:20 PM, Kiall Mac Innes ki...@macinnes.ie
 mailto:ki...@macinnes.ie wrote:
 
 Side Question: Is it normal for QPid to log all the
 \x0f\x01\x00\x14\x00\x01\x00\x00\x00\x00\x00's etc?
 
 I'm guessing that, since you're using qpid, you're also on RedHat. Could
 RH's SELinux policies be preventing the service from binding to
 tcp/9001?
 
 If you start as root, do you see similar issues?
 
 Thanks,
 Kiall
 
 On 20/07/15 15:48, Jaime Fernández wrote:
  Hi Tim,
 
  I only start one api process. In fact, when I say that the api process
  dies, I don't have any designate-api process and there is no process
  listening on the 9001 port.
 
  When I started all the designate processes, the API worked correctly
  because I had tested it. But after some inactivity period (a couple of
  hours, or a day), the designate-api process died. It is not
  possible that the process has been restarted during this time.
 
  I've just started the process again and now it works. I will check if it
  dies again and report it.
 
 
  Thanks
 
  On Mon, Jul 20, 2015 at 4:24 PM, Tim Simmons tim.simm...@rackspace.com 
 mailto:tim.simm...@rackspace.com
  mailto:tim.simm...@rackspace.com mailto:tim.simm...@rackspace.com 
 wrote:
 
  Jaime,
 
 
  Usually that's the error you see if you're trying to start up
  multiple API processes. They all try and bind to port 9001, so that
  error is saying the API can't bind. So something else (I suspect
  another designate-api process, or some other type of API) is already
  listening on that port.
 
 
  Hope that helps,
 
  Tim Simmons
 
 

  
  *From:* Jaime Fernández jjja...@gmail.com
 mailto:jjja...@gmail.com mailto:jjja...@gmail.com
 mailto:jjja...@gmail.com
  *Sent:* Monday, July 20, 2015 8:54 AM
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Subject:* [openstack-dev] [designate] The designate API
 service is
  stopped
 
  I've followed instructions to install Designate in Dev environment:
  
 http://docs.openstack.org/developer/designate/install/ubuntu-dev.html
 
  I've made some slight modifications to use qpid (instead of
  rabbitmq) and to integrate with Infoblox.
 
  What I've seen is that designate-api process dies (the other
  processes are running correctly). I'm not sure if the problem
 could
  be a network issue between designate-api and qpid.
 
  Here it is the output for last traces of designate-api process:
 
  2015-07-20 14:43:37.728 727 DEBUG qpid.messaging.io.raw [-]
  

Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Abhishek Shrivastava
It's just about what kind of file you are creating, as a bare touch
/tmp/noop-check-communication doesn't make much sense to it.

On Tue, Jul 21, 2015 at 2:48 PM, Tang Chen tangc...@cn.fujitsu.com wrote:


 On 07/21/2015 04:05 PM, Abhishek Shrivastava wrote:

  If you want to create a new job then refer to
 *dsvm-cinder-driver.yaml.sample*; regarding
 *noop-check-communication*, it's just for testing the first time - don't
 modify it.


 Well, I understand. But I don't think this little change will cause a
 problem.

 I think, if noop-check-communication was executed, the file should be
 created under /tmp.

 I just asked the  same question in IRC meeting, but not resolved yet.

 Thanks. :)


  On Tue, Jul 21, 2015 at 1:19 PM, Tang Chen tangc...@cn.fujitsu.com
 wrote:


 On 07/21/2015 03:35 PM, Abhishek Shrivastava wrote:

  Hi Tang,

  Can you please send me the whole job snippet you wrote.


  In /etc/jenkins_jobs/config/ examples.yaml, that's all.

 - job-template:
 name: 'noop-check-communication'
 node: '{node}'

 builders:
   - shell: |
   #!/bin/bash -xe
   touch /tmp/noop-check-communication
echo Hello world, this is the {vendor} Testing System
   - link-logs  # In macros.yaml from os-ext-testing

  #publishers:
 #  - devstack-logs  # In macros.yaml from os-ext-testing
 #  - console-log  # In macros.yaml from os-ext-testing
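As background, job-templates like the one above are expanded by substituting parameters such as {node} and {vendor}; a much-simplified sketch of what jenkins-job-builder does (real JJB also handles defaults and macros like link-logs):

```python
def expand_template(template, params):
    """Simplified sketch of job-template expansion: every {name}
    placeholder is filled from the project's parameters. Real
    jenkins-job-builder does much more, but the substitution idea
    is the same."""
    return template.format(**params)

shell_builder = (
    "#!/bin/bash -xe\n"
    "touch /tmp/noop-check-communication\n"
    "echo Hello world, this is the {vendor} Testing System\n"
)
rendered = expand_template(shell_builder, {"vendor": "example"})
```

If the rendered shell step still contains an unexpanded {vendor}, the job definition was never regenerated, which is one thing worth checking after a jenkins-jobs update.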



 noop-check-communication was setup by default. I didn't change anything
 else.


 BTW, I tried to ask this in #openstack-meeting IRC.
 But no one seems to be active. :)

 Thanks.



 On Tue, Jul 21, 2015 at 12:52 PM, Tang Chen tangc...@cn.fujitsu.com
 wrote:

  Hi Asselin, Abhishek,

 I got some problems when I was trying to write a jenkins job.

 I found that when zuul received the notification from gerrit, jenkins
 didn't run the test.

 I added something to noop-check-communication in
 /etc/jenkins_jobs/config/ examples.yaml,
 just touched a file under /tmp.

 - job-template:
 name: 'noop-check-communication'
 node: '{node}'

 builders:
   - shell: |
   #!/bin/bash -xe
   touch
 /tmp/noop-check-communication# I added
 something here.
   echo Hello world, this is the {vendor} Testing System
   - link-logs  # In macros.yaml from os-ext-testing

 And I flushed the jobs, using jenkins-jobs --flush-cache update
 /etc/jenkins_jobs/config/.
 I can build the job in jenkins web UI, and the file was touched.


 But when I send a patch, the file was not touched. But CI really works.
 I can see it on the web site. ( https://review.openstack.org/#/c/203941/
 )

 How do you think of this ?





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --


 *Thanks & Regards,*
 *Abhishek*
  *Cloudbyte Inc. http://www.cloudbyte.com*


  __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --


 *Thanks & Regards,*
 *Abhishek*
  *Cloudbyte Inc. http://www.cloudbyte.com*


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 


*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. http://www.cloudbyte.com*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] ceph gate unstable

2015-07-21 Thread Gary Kotton
Hi,
It seems like the gating is failing with ceph tests:
http://logs.openstack.org/29/199129/6/check/gate-tempest-dsvm-full-ceph/bfe423d/logs/testr_results.html.gz
Is anyone looking into this?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ceph gate unstable

2015-07-21 Thread Andreas Jaeger

On 07/21/2015 10:51 AM, Gary Kotton wrote:

Hi,
It seems like the gating is failing with ceph tests:
http://logs.openstack.org/29/199129/6/check/gate-tempest-dsvm-full-ceph/bfe423d/logs/testr_results.html.gz
Is anyone looking into this?


https://review.openstack.org/#/c/203845/ is a band-aid,

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing with SSH timeout.

2015-07-21 Thread Abhishek Shrivastava
Also, can't we stop downloading all those projects and instead include
them in DevStack using the ENABLED_SERVICES parameter, like we usually
do while installing devstack?

On Tue, Jul 21, 2015 at 11:18 AM, Abhishek Shrivastava 
abhis...@cloudbyte.com wrote:

 Hi Ramy,


- The project list is mentioned in the *devstack-vm-gate-wrap*
script[1].
- Downloaded using *functions.sh* script using *setup workspace*
function[2]

 [1]
 https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate-wrap.sh#L35
 [2]
 https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L416

 On Mon, Jul 20, 2015 at 7:14 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Is this to optimize the performance of the job? Can you provide a link
 to where the downloading is occurring that you’d like to restrict?



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Sunday, July 19, 2015 10:53 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest
 tests failing with SSH timeout.



 This is ok for the services it will install, but how can we also restrict
 the downloading of all the projects(i.e; downloading only required
 projects) ?



 On Sun, Jul 19, 2015 at 11:39 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  There are two ways that I know of to customize what services are run:

 1.  Setup your own feature matrix [1]

 2.  Override enabled services [2]



 Option 2 is probably what you’re looking for.



 [1]
 http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n152

 [2]
 http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate.sh#n76



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Sunday, July 19, 2015 10:37 AM


 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest
 tests failing with SSH timeout.



 Hi Ramy,



 Thanks for the suggestion. One more thing I need to ask: since I have set
 up one more CI, is there any way to decide dynamically that only the required
 projects get downloaded and installed during the devstack installation? I see
 nothing in the devstack-gate scripts that would achieve this scenario.



 On Sun, Jul 19, 2015 at 8:38 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Just the export I mentioned:

 export DEVSTACK_GATE_NEUTRON=1

 Devstack-gate scripts will do the right thing when it sees that set. You
 can see plenty of examples here [1].



 Ramy



 [1]
 http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n467



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Sunday, July 19, 2015 2:24 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest
 tests failing with SSH timeout.



 Hi Ramy,



 Thanks for the suggestion but since I am not including the neutron
 project, so downloading and including it will require any additional
 configuration in devstack-gate or not?



 On Sat, Jul 18, 2015 at 11:41 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  We ran into this issue as well. I never found the root cause, but I
 found a work-around: Use neutron-networking instead of the default
 nova-networking.



 If you’re using devstack-gate, it’s as simple as:

 export DEVSTACK_GATE_NEUTRON=1



 Then run the job as usual.



 Ramy



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Friday, July 17, 2015 9:15 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [openstack-infra] [CI] [tempest] Tempest
 tests failing with SSH timeout.



 Hi Folks,



 In my CI I see the following tempest tests failure for a past couple of
 days.

 ·
 tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
  [361.274316s] ... FAILED

 ·
 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
  [320.122458s] ... FAILED

 ·
 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  [317.399342s] ... FAILED

 ·
 tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes
  [257.858272s] ... FAILED

  The failure logs are always the same every time, i.e;



  *03:34:09* 2015-07-17 03:21:13,256 9505 ERROR
 [tempest.scenario.manager] (TestVolumeBootPattern:test_volume_boot_pattern) 
 Initializing SSH connection to 172.24.5.1 failed. Error: Connection to the 
 172.24.5.1 via SSH timed out.

 *03:34:09* User: cirros, Password: None

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
 Traceback (most recent call last):

 *03:34:09* 

Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-21 Thread Tang Chen


On 07/21/2015 04:05 PM, Abhishek Shrivastava wrote:
If you want to create a new job then refer to
*dsvm-cinder-driver.yaml.sample*; regarding
*noop-check-communication*, it's just for testing the first time - don't
modify it.




Well, I understand. But I don't think this little change will cause a 
problem.


I think that if noop-check-communication had been executed, the file should 
have been created under /tmp.


I just asked the same question in the IRC meeting, but it is not resolved yet.

Thanks. :)

On Tue, Jul 21, 2015 at 1:19 PM, Tang Chen tangc...@cn.fujitsu.com wrote:



On 07/21/2015 03:35 PM, Abhishek Shrivastava wrote:

Hi Tang,

Can you please send me the whole job snippet you wrote.


In /etc/jenkins_jobs/config/ examples.yaml, that's all.

- job-template:
name: 'noop-check-communication'
node: '{node}'

builders:
  - shell: |
  #!/bin/bash -xe
  touch /tmp/noop-check-communication
  echo Hello world, this is the {vendor} Testing System
  - link-logs  # In macros.yaml from os-ext-testing

#publishers:
#  - devstack-logs  # In macros.yaml from os-ext-testing
#  - console-log  # In macros.yaml from os-ext-testing



noop-check-communication was set up by default. I didn't change
    anything else.


BTW, I tried to ask this in #openstack-meeting IRC.
But no one seems to be active. :)

Thanks.




On Tue, Jul 21, 2015 at 12:52 PM, Tang Chen tangc...@cn.fujitsu.com wrote:

Hi Asselin, Abhishek,

I got some problems when I was trying to write a jenkins job.

I found that when zuul received the notification from gerrit,
jenkins didn't run the test.

I added something to noop-check-communication in
/etc/jenkins_jobs/config/ examples.yaml,
just touched a file under /tmp.

- job-template:
name: 'noop-check-communication'
node: '{node}'

builders:
  - shell: |
  #!/bin/bash -xe
  touch /tmp/noop-check-communication # I added
something here.
  echo Hello world, this is the {vendor} Testing System
  - link-logs  # In macros.yaml from os-ext-testing

And I flushed the jobs, using jenkins-jobs --flush-cache
update /etc/jenkins_jobs/config/.
I can build the job in jenkins web UI, and the file was touched.


But when I send a patch, the file was not touched. But CI
really works.
I can see it on the web site. (
https://review.openstack.org/#/c/203941/)

What do you think of this?





__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks & Regards,
Abhishek
Cloudbyte Inc. (http://www.cloudbyte.com)









-- 
Thanks & Regards,
Abhishek
Cloudbyte Inc. (http://www.cloudbyte.com)






Re: [openstack-dev] [neutron] Service Chain project IRC meeting minutes - 07/16/2015

2015-07-21 Thread Damon Wang
Hi,

Will the service chaining project meeting be held this week? I'd like to
join :-D

Wei Wang

2015-07-17 2:09 GMT+08:00 Cathy Zhang cathy.h.zh...@huawei.com:

  Hi Everyone,



 Thanks for joining the service chaining project meeting on 7/16/2015.
 Here is the link to the meeting logs:

 http://eavesdrop.openstack.org/meetings/service_chaining/2015/



 Thanks,

 Cathy



Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-21 Thread Neil Jerram
On 20/07/15 18:36, Carl Baldwin wrote:
 I'm looking for feedback from anyone interested but, in particular, I'd
 like feedback from the following people for varying perspectives:
 Mark McClain (proposed alternate), John Belamaric (IPAM), Ryan Tidwell
 (BGP), Neil Jerram (L3 networks), Aaron Rosen (help understand
 multi-provider networks) and you if you're reading this list of names
 and thinking he forgot me!

 We have been struggling to develop a way to model a network which is
 composed of disjoint L2 networks connected by routers.  The intent of
 this email is to describe the two proposals and request input on the
 two in an attempt to choose a direction forward. But, first:
 requirements.

 Requirements:

 The network should appear to end users as a single network choice.
 They should not be burdened with choosing between segments.  It might
 interest them that L2 communications may not work between instances on
 this network but that is all.  This has been requested by numerous
 operators [1][4].  It can be useful for external networks and provider
 networks.

 The model needs to be flexible enough to support two distinct types of
 addresses:  1) address blocks which are statically bound to a single
 segment and 2) address blocks which are mobile across segments using
 some sort of dynamic routing capability like BGP or programmatically
 injecting routes in to the infrastructure's routers with a plugin.

FWIW, I hadn't previously realized (2) here.


 Overlay networks are not the answer to this.  The goal of this effort
 is to scale very large networks with many connected ports by doing L3
 routing (e.g. to the top of rack) instead of using a large continuous
 L2 fabric.  Also, the operators interested in this work do not want
 the complexity of overlay networks [4].

 Proposal 1:

 We refined this model [2] at the Neutron mid-cycle a couple of weeks
 ago.  This proposal has already resonated reasonably with operators,
 especially those from GoDaddy who attended the Neutron sprint.  Some
 key parts of this proposal are:

 1.  The routed super network is called a front network.  The segments
 are called back(ing) networks.
 2.  Backing networks are modeled as admin-owned private provider
 networks but otherwise are full-blown Neutron networks.
 3.  The front network is marked with a new provider type.
 4.  A Neutron router is created to link the backing networks with
 internal ports.  It represents the collective routing ability of the
 underlying infrastructure.
 5.  Backing networks are associated with a subset of hosts.
 6.  Ports created on the front network must have a host binding and
 are actually created on a backing network when all is said and done.
 They carry the ID of the backing network in the DB.

 Using Neutron networks to model the segments allows us to fully
 specify the details of each network using the regular Neutron model.
 They could be heterogeneous or homogeneous, it doesn't matter.
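As an aside, the relationships in Proposal 1 (items 1-6 above) can be sketched roughly as follows. This is illustrative Python only, not Neutron's actual data model; every class, field, and function name here is invented for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class BackingNetwork:
    """Admin-owned provider network (item 2), serving a subset of hosts (item 5)."""
    net_id: str
    hosts: frozenset

@dataclass
class FrontNetwork:
    """User-visible routed network, marked with a new provider type (item 3)."""
    net_id: str
    provider_type: str = "routed"          # hypothetical type name
    backing: list = field(default_factory=list)

def create_port(front, host):
    """Item 6: a port on the front network must carry a host binding and is
    actually realized on the backing network serving that host; the backing
    network ID is what gets stored in the DB."""
    for bn in front.backing:
        if host in bn.hosts:
            return {"network_id": front.net_id,
                    "backing_network_id": bn.net_id,
                    "binding_host": host}
    raise LookupError("no backing network covers host %r" % host)
```

So create_port(front, "compute-03") would land the port on whichever backing segment serves that host's rack, while the user only ever named the front network.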

You've probably seen Robert Kukura's comment on the related bug at
https://bugs.launchpad.net/neutron/+bug/1458890/comments/30, and there
is a useful detailed description of how the multiprovider extension
works at
https://bugs.launchpad.net/openstack-api-site/+bug/1242019/comments/3. 
I believe it is correct to say that using multiprovider would be an
effective substitute for using multiple backing networks with different
{network_type, physical_network, segmentation_id}, and that logically
multiprovider is aiming to describe the same thing as this email thread
is, i.e. non-overlay mapping onto a physical network composed of
multiple segments.

However, I believe multiprovider does not (per se) address the IP
addressing requirement(s) of the multi-segment scenario.


 This proposal offers a clear separation between the statically bound
 and the mobile address blocks by associating the former with the
 backing networks and the latter with the front network.  The mobile
 addresses are modeled just like floating IPs are today but are
 implemented by some plugin code (possibly without NAT).

Couldn't the mobile addresses be _exactly_ like floating IPs already
are?  Why is anything different from floating IPs needed here?


 This proposal also provides some advantages for integrating dynamic
 routing.  Since each backing network will, by necessity, have a
 corresponding router in the infrastructure, the relationship between
 dynamic routing speaker, router, and network is clear in the model:
 network - speaker - router.

I'm not sure exactly what you mean here by 'dynamic routing', but I
think this touches on a key point: can IP routing happen anywhere in a
Neutron network, without being explicitly represented by a router object
in the model?

I think the answer to that should be yes.  It clearly already is in the
underlay if you are using tunnels - the tunnel between two compute hosts
may require multiple IP hops across the fabric.  At the network level
that Neutron networks currently model, the answer 

Re: [openstack-dev] [Ironic] What to do with reservation check in node update API?

2015-07-21 Thread Lucas Alvares Gomes
Hi,

 So, it looks like the only reason we check the reservation field here is
 because we want to return a 409 for node is locked rather than a 400,
 right? do_node_deploy and such will raise a NodeLocked, which should do
 the same as this check. It's unclear to me why we can't just remove this
 check and let the conductor deal with it.


Looking at the code this assumption seems to be correct. If the RPC
methods raise NodeLocked we will return 409 as expected.

Cheers,
Lucas



Re: [openstack-dev] [Ironic] What to do with reservation check in node update API?

2015-07-21 Thread Lucas Alvares Gomes
Hi,


 Another question folks: while the problem above is valid and should be
 solved, I was actually keeping in mind another one:
 https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L1052-L1057

 This is also not retried, and it prevents updating during power operations,
 which I'm not sure is a correct thing to do. WDYT about dropping it?

Not retried on the API side, you mean?



Re: [openstack-dev] [Ironic] What to do with reservation check in node update API?

2015-07-21 Thread Dmitry Tantsur

On 07/21/2015 03:26 PM, Lucas Alvares Gomes wrote:

Hi,


So, it looks like the only reason we check the reservation field here is
because we want to return a 409 for node is locked rather than a 400,
right? do_node_deploy and such will raise a NodeLocked, which should do
the same as this check. It's unclear to me why we can't just remove this
check and let the conductor deal with it.



Looking at the code this assumption seems to be correct. If the RPC
methods raise NodeLocked we will return 409 as expected.


It's a bit more tricky, please see my patch: 
https://review.openstack.org/#/c/204081/




Cheers,
Lucas



Re: [openstack-dev] [Ironic] What to do with reservation check in node update API?

2015-07-21 Thread Dmitry Tantsur

On 07/21/2015 03:32 PM, Lucas Alvares Gomes wrote:

Hi,



Another question folks: while the problem above is valid and should be
solved, I was actually keeping in mind another one:
https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L1052-L1057

This is also not retried, and it prevents updating during power operations,
which I'm not sure is a correct thing to do. WDYT about dropping it?


Not retried on the API side, you mean?


Yep, we have to rely on client retry here.





[openstack-dev] [nova] [stable] Freeze exception

2015-07-21 Thread Sergey Nikitin
Hello

I'd like to ask for a freeze exception for the following bug fix:

https://review.openstack.org/#/c/198385/

bug: https://bugs.launchpad.net/nova/+bug/1443970

merged bug fix in master: https://review.openstack.org/#/c/173362/

Incorrect usage of the argument 'dhcp_server' may be the cause of some problems
when using nova-network. Please consider this bug fix to be a part of
Kilo.


Re: [openstack-dev] [Fuel] Question about unique default hostname for node

2015-07-21 Thread Igor Kalnitsky
Hi Fedor,

 Use the 'node-{ID}-{#}' format, where {#} we'll choose in a loop until the
 first unique one.

I don't like this approach for many reasons. Here are some of them:

* With a loop you're going to perform N SQL queries to check for
uniqueness; that's bad design and it'd be better to avoid it.
* What if a user wants to use exactly the same scheme? For instance, he
uses the first number as a rack ID and the second one as a node ID in
that rack. The proposed approach will implicitly set a wrong
hostname, and detecting it may be a challenge.

 Use some unique value, shorter than a UUID (for example, the number of
 microseconds from the current timestamp)

Usually it's an incredibly rare situation, so solving it by adding a UUID
is more than enough. Moreover, it will catch the attention of the cloud
operator, so he will easily detect the conflicting node and fix its name
as he wishes. And that's probably what he wants to do almost all the
time: fix its hostname. No one wants to work with names like node-1,
node-2, node-2-2, node-3, because what is node-2-2? Does it store
some backup or what? It's confusing.

So I think the current approach, node-{UUID}, is the way we should go.
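To make the trade-off concrete, here is a rough sketch of both strategies. The function names are invented for illustration and this is not Fuel's actual code; against a real database, each membership check in the loop variant is a separate query, which is the first objection above:

```python
import uuid

def uuid_fallback_hostname(node_id, taken):
    # Current approach: one uniqueness check, then fall back to node-{UUID}.
    name = "node-%d" % node_id
    return name if name not in taken else "node-%s" % uuid.uuid4()

def suffix_loop_hostname(node_id, taken):
    # Rejected alternative: probe node-{ID}-{#} until a free name is found.
    # Each `in taken` check stands in for one SQL uniqueness query.
    name = "node-%d" % node_id
    suffix = 0
    while name in taken:
        suffix += 1
        name = "node-%d-%d" % (node_id, suffix)
    return name
```

For example, with "node-3" and "node-3-1" already claimed, the loop variant needs three lookups to settle on "node-3-2", while the UUID variant needs one lookup before falling back.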

Thanks,
Igor

On Tue, Jul 21, 2015 at 3:38 PM, Fedor Zhadaev fzhad...@mirantis.com wrote:
 Hi all,

 The next issue was found during implementation
 https://blueprints.launchpad.net/fuel/+spec/node-naming :

   User may change the node hostname to any other, including a default-like
 'node-{№}', where № may be bigger than the maximum node ID existing at that
 moment.
   Later, when a node with ID == № is created, its default name 'node-{ID}'
 will break hostname uniqueness.

 To avoid this, it was decided to generate another default hostname in such
 a situation.

 The current solution is to generate the hostname 'node-{UUID}'. It works, but
 may look terrible.

 There are a few other possible solutions:

 Use the 'node-{ID}-{#}' format, where {#} we'll choose in a loop until the
 first unique one.
 Use some unique value, shorter than a UUID (for example, the number of
 microseconds from the current timestamp)

 Please share your opinion: which is better?

 You can also propose your own solutions.

 --
 Kind Regards,
 Fedor Zhadaev
 Junior Software Engineer, Mirantis Inc.
 Skype: zhadaevfm
 E-mail: fzhad...@mirantis.com



Re: [openstack-dev] [Ironic] What to do with reservation check in node update API?

2015-07-21 Thread Dmitry Tantsur

On 07/21/2015 02:24 PM, Dmitry Tantsur wrote:

Hi folks!

If you're not aware already, I'm working on solving the "node is locked"
problems breaking users (and tracking it at
https://etherpad.openstack.org/p/ironic-locking-reform). We have retries
in place in the client, but we all agree that it's not the eventual solution.

One of the things we've figured out is that we actually have server-side
retries - in task_manager.acquire. They're nice and configurable. Alas,
we have one place that checks reservations without task_manager:
https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L401-L403
(note that this check is actually racy)

I'd like to ask your opinions on how to solve it? I have 3 ideas:
1. Just implement retries on API level (possibly split away a common
function from task_manager).
2. Move update to conductor instead of doing it directly in API.
3. Don't check reservation when updating node. At all.


Another question folks: while the problem above is valid and should be 
solved, I was actually keeping in mind another one:

https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L1052-L1057

This is also not retried, and it prevents updating during power 
operations, which I'm not sure is a correct thing to do. WDYT about 
dropping it?


And with check on provision state, the options are the same 1,2,3 as 
above: move to conductor, reimplement retries, just ignore.
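For reference, option 1 (reimplementing retries at the API level) would amount to something like the sketch below. The decorator, its parameters, and the exception class are invented stand-ins for illustration, not Ironic's task_manager code:

```python
import time

class NodeLocked(Exception):
    """Stand-in for ironic.common.exception.NodeLocked."""

def retry_on_node_locked(attempts=3, delay=0.1):
    # Retry a callable a few times when the node is locked, mirroring the
    # configurable server-side retries task_manager.acquire already has.
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except NodeLocked:
                    if attempt == attempts - 1:
                        raise          # give up; the API maps this to 409
                    time.sleep(delay)
        return wrapper
    return decorator
```

Splitting such a helper out of task_manager would let the API checks and the conductor share one retry configuration instead of relying on client-side retries.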




Ideas?



[openstack-dev] [Fuel] Question about unique default hostname for node

2015-07-21 Thread Fedor Zhadaev
Hi all,

The next issue was found during implementation
https://blueprints.launchpad.net/fuel/+spec/node-naming :

  A user may change the node hostname to any other, including a default-like
'node-{№}', where № may be bigger than the maximum node ID existing at that
moment.
  Later, when a node with ID == № is created, its default name
'node-{ID}' will break hostname uniqueness.

To avoid this, it was decided to generate another default hostname in such a
situation.

The current solution is to generate the hostname '*node-{UUID}*'. It works, but
may look terrible.

There are a few other possible solutions:

   - Use the '*node-{ID}-{#}*' format, where *{#}* we'll choose in a loop
   until the first unique one.
   - Use some unique value, shorter than a UUID (for example, the number of
   microseconds from the current timestamp)

Please share your opinion: which is better?

You can also propose your own solutions.

-- 
Kind Regards,
Fedor Zhadaev
Junior Software Engineer, Mirantis Inc.
Skype: zhadaevfm
E-mail: fzhad...@mirantis.com

