Re: [openstack-dev] [heat] agenda for OpenStack Heat meeting 2014-06-18 20:00 UTC - corrections to meeting minutes

2014-06-18 Thread Mike Spreitzer
Mike Spreitzer/Watson/IBM@IBMUS wrote on 06/18/2014 05:00:57 PM:
...
> 
http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-06-18-20.00.html 


I found two goofups so far.  One is that the following was not recorded in 
the official outline (#agreed only really works for chairs):

20:10:12  #agreed heat-slow job will remain non-voting until 
current issues are fixed

The other concerns the dates for the mid-cycle meet-up in August.  The 
agreement was on Monday--Wednesday, which are the 18th through the 20th. I 
incorrectly recorded that as

20:35:47  #agreed have the second meetup, Aug 19--21

when in fact the correct statement is

20:35:47  #agreed have the second meetup, Aug 18--20

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-18 Thread henry hly
The OVS agent manipulates not only the OVS flow table but also the Linux
network stack, which is not so easily replaced by a pure OpenFlow controller
today. The fastpath/slowpath separation sounds good, but it is a real
nightmare for applications with many concurrent connections if we push L4
flows into OVS (in our testing, the vswitchd daemon always stopped working in
this case).

Someday, when OVS can handle all the L2-L4 rules in the kernel without
involving the userspace classifier, a pure OpenFlow controller will be able
to replace the agent-based solution. OVS hooking into netfilter conntrack may
come this year, but that is not enough yet.


On Wed, Jun 18, 2014 at 12:56 AM, Armando M.  wrote:

> just a provocative thought: If we used the ovsdb connection instead, do we
> really need an L2 agent :P?
>
>
> On 17 June 2014 18:38, Kyle Mestery  wrote:
>
>> Another area of improvement for the agent would be to move away from
>> executing CLIs for port commands and instead use OVSDB. Terry Wilson
>> and I talked about this, and re-writing ovs_lib to use an OVSDB
>> connection instead of the CLI methods would be a huge improvement
>> here. I'm not sure if Terry was going to move forward with this, but
>> I'd be in favor of this for Juno if he or someone else wants to move
>> in this direction.
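For a sense of what "use OVSDB instead of the CLI" means at the wire level, here is a rough sketch of the RFC 7047 `transact` request that an `ovs-vsctl add-port` boils down to. The socket path and helper names are illustrative only; a real replacement for ovs_lib would use a proper OVSDB IDL client rather than hand-rolled JSON-RPC.

```python
import json
import socket

OVSDB_SOCK = "/var/run/openvswitch/db.sock"  # typical Unix socket path

def make_add_port_transact(bridge, port, txn_id=0):
    """Build an RFC 7047 'transact' request that inserts a Port/Interface
    pair and links it into the bridge -- roughly what `ovs-vsctl add-port`
    does through the CLI."""
    return {
        "method": "transact",
        "id": txn_id,
        "params": [
            "Open_vSwitch",
            {"op": "insert", "table": "Interface",
             "row": {"name": port}, "uuid-name": "iface"},
            {"op": "insert", "table": "Port",
             "row": {"name": port,
                     "interfaces": ["named-uuid", "iface"]},
             "uuid-name": "port"},
            {"op": "mutate", "table": "Bridge",
             "where": [["name", "==", bridge]],
             "mutations": [["ports", "insert",
                            ["set", [["named-uuid", "port"]]]]]},
        ],
    }

def send_transact(request, sock_path=OVSDB_SOCK):
    """Send the request over the OVSDB Unix socket.  Requires a running
    ovsdb-server, so it is not exercised here."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        return json.loads(s.recv(65536))
```

Compared with forking `ovs-vsctl` per port operation, a persistent OVSDB connection avoids process-spawn overhead and allows batching several operations into one transaction.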
>>
>> Thanks,
>> Kyle
>>
>> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
>> wrote:
>> > We've started doing this in a slightly more reasonable way for icehouse.
>> > What we've done is:
>> > - remove unnecessary notification from the server
>> > - process all port-related events, whether triggered via RPC or via the
>> > monitor, in one place
>> >
>> > Obviously there is always a lot of room for improvement, and I agree
>> > something along the lines of what Zang suggests would be more
>> > maintainable and ensure faster event processing, as well as making it
>> > easier to have some form of reliability on event processing.
>> >
>> > I was considering doing something for the ovs-agent again in Juno, but
>> > since we're moving towards a unified agent, I think any new "big" ticket
>> > should address this effort.
>> >
>> > Salvatore
>> >
>> >
>> > On 17 June 2014 13:31, Zang MingJie  wrote:
>> >>
>> >> Hi:
>> >>
>> >> Awesome! We are currently suffering from lots of bugs in the
>> >> ovs-agent, and we also intend to rebuild a more stable, flexible
>> >> agent.
>> >>
>> >> Based on our experience with ovs-agent bugs, I think concurrency is
>> >> another very important problem: the agent receives lots of events
>> >> from different greenlets -- the RPC, the OVS monitor, and the main
>> >> loop. I'd suggest serializing all events into a queue and processing
>> >> them in a dedicated thread. The thread checks the events one by one,
>> >> in order, resolves what has changed, and applies the corresponding
>> >> changes. If any error occurs in the thread, it discards the event
>> >> currently being processed and performs a fresh-start event, which
>> >> resets everything and then applies the correct settings.
>> >>
>> >> The threading model is so important, and may prevent tons of bugs in
>> >> future development, that we should describe it clearly in the
>> >> architecture.
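The serialized event loop described above can be sketched in a few lines. Names are illustrative, not Neutron's, and the real agent would use eventlet greenthreads rather than the stdlib primitives used here:

```python
import queue
import threading

class EventProcessor:
    """Process agent events one by one in a single worker thread: RPC,
    OVSDB-monitor, and polling producers only enqueue; any error triggers
    a full resync instead of leaving state half-applied."""

    def __init__(self, apply_event, resync):
        self._events = queue.Queue()
        self._apply = apply_event      # apply one change
        self._resync = resync          # reset everything, reapply settings
        self._worker = threading.Thread(target=self._loop, daemon=True)

    def start(self):
        self._worker.start()

    def submit(self, event):
        """Called from any producer thread/greenlet."""
        self._events.put(event)

    def stop(self):
        """Enqueue the shutdown sentinel and wait for the worker."""
        self._events.put(None)
        self._worker.join()

    def _loop(self):
        while True:
            event = self._events.get()
            if event is None:          # shutdown sentinel
                return
            try:
                self._apply(event)
            except Exception:
                # discard the failing event and do a fresh start
                self._resync()
```

The key property is that only one thread ever touches the datapath state, so ordering is preserved and error handling has a single, well-defined recovery path.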
>> >>
>> >>
>> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
>> >> wrote:
>> >> > Following the discussions in the ML2 subgroup weekly meetings, I have
>> >> > added
>> >> > more information on the etherpad [1] describing the proposed
>> >> > architecture
>> >> > for modular L2 agents. I have also posted some code fragments at [2]
>> >> > sketching the implementation of the proposed architecture. Please
>> have a
>> >> > look when you get a chance and let us know if you have any comments.
>> >> >
>> >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
>> >> > [2] https://review.openstack.org/#/c/99187/
>> >> >
>> >> >
>> >>
>> >
>> >
>> >
>> >
>>
>>
>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-06-18 Thread henry hly
We have done some tests but got a different result: performance is nearly
the same with empty vs. 5k rules in iptables, but there is a huge gap
between enabling and disabling the iptables hook on the Linux bridge.
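(The "iptables hook on the Linux bridge" here is presumably the br_netfilter hook, controlled by sysctls along these lines; when enabled, every bridged packet traverses iptables, which is where the per-packet cost comes from.)

```ini
# /etc/sysctl.conf -- knobs controlling whether bridged traffic
# is passed through iptables/ip6tables at all
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```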


On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang  wrote:

> Now I don't have accurate test data yet, but I can confirm the following
> points:
> 1. On a compute node, the iptables chain of a VM is linear, and iptables
> filters it rule by rule. If a VM is in the default security group and that
> group has many members, the chain grows with each member; once an ipset
> chain is set up, there is not much difference between filtering one member
> and many.
> 2. When the iptables rule set is very large, the probability that
> iptables-save fails to save the rules is very high.
>
>
>
>
>
> At 2014-06-19 10:55:56, "Kevin Benton"  wrote:
>
> This sounds like a good idea to handle some of the performance issues
> until the ovs firewall can be implemented down the line.
> Do you have any performance comparisons?
> On Jun 18, 2014 7:46 PM, "shihanzhang"  wrote:
>
>> Hello all,
>>
>> Now in neutron, iptables is used to implement security groups, but the
>> performance of this implementation is very poor; there is a bug:
>> https://bugs.launchpad.net/neutron/+bug/1302272 reflecting this problem.
>> In his test, with default security groups (which have a remote security
>> group), beyond 250-300 VMs there were around 6k iptables rules on every
>> compute node. Although his patch can reduce the processing time, it
>> doesn't solve the problem fundamentally. I have submitted a BP to solve
>> this problem:
>> https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
>> 
>> Is anyone else interested in this?
>>
>>
>>
>>
>>
>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2014-06-18 Thread Rafi Khardalian
I am concerned about how block migration functions when Cinder volumes are
attached to an instance being migrated.  We noticed some unexpected
behavior recently, whereby attached generic NFS-based volumes would become
entirely unsparse over the course of a migration.  After spending some time
reviewing the code paths in Nova, I'm more concerned that this was actually
a minor symptom of a much more significant issue.

For those unfamiliar, NFS-based volumes are simply RAW files residing on an
NFS mount.  From Libvirt's perspective, these volumes look no different
than root or ephemeral disks.  We are currently not filtering out volumes
whatsoever when making the request into Libvirt to perform the migration.
Libvirt simply receives an additional flag (VIR_MIGRATE_NON_SHARED_INC)
when a block migration is requested, which applies to the entire migration
process and is not differentiated on a per-disk basis. Nova has numerous
guards to prevent a block-based migration from being allowed if the
instance's disks exist on the destination; yet volumes remain attached, and
within the defined XML, during a block migration.

Unless Libvirt has a lot more logic around this than I am led to believe,
this seems like a recipe for corruption.  It seems as though this would
also impact any type of volume attached to an instance (iSCSI, RBD, etc.),
NFS just happens to be what we were testing.  If I am wrong and someone can
correct my understanding, I would really appreciate it.  Otherwise, I'm
surprised we haven't had more reports of issues when block migrations are
used in conjunction with any attached volumes.

I have ideas on how we can address the issue if we can reach some consensus
that the issue is valid, but we'll discuss those if/when we get to that
point.
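To make the shape of one possible fix concrete: before requesting the migration, the volume-backed disks could be stripped from the domain XML handed to the destination (newer libvirt also grew a parameter for selecting which disks to copy). This is purely a sketch under assumed inputs, not Nova's code or Rafi's proposal; `volume_target_devs` would come from the instance's block device mappings.

```python
import xml.etree.ElementTree as ET

def strip_volume_disks(domain_xml, volume_target_devs):
    """Return a copy of the domain XML with Cinder-volume disks removed,
    so a VIR_MIGRATE_NON_SHARED_INC migration only copies the root and
    ephemeral disks.  `volume_target_devs` is e.g. {'vdb'}."""
    root = ET.fromstring(domain_xml)
    devices = root.find("devices")
    for disk in list(devices.findall("disk")):
        target = disk.find("target")
        if target is not None and target.get("dev") in volume_target_devs:
            devices.remove(disk)
    return ET.tostring(root, encoding="unicode")
```

Whether libvirt would then actually skip copying those disks is exactly the open question raised above, so this only illustrates where such filtering would plug in.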

Regards,
Rafi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug squashing day on June, 17

2014-06-18 Thread Roman Podoliaka
Hi guys,

Dmitry, I have nothing against using 'also affects', but
unfortunately, it seems that Launchpad's advanced search doesn't allow
filtering by affected project :( (my use case is being able to list
only bugs affecting Nova in MOS, and since we deploy stable
releases rather than trunk, upstream Nova bugs aren't always
applicable or simply have lower priority for us).

Mike, cool, I didn't know https://launchpad.net/mos existed!  I'm all
for using it rather than spamming you guys with purely MOS/OS bugs :)
So we should probably ask QAs to start filing those against MOS now.
But per-project tags can still be useful given Launchpad's advanced
search limitations.

Thanks,
Roman

On Thu, Jun 19, 2014 at 5:29 AM, Mike Scherbakov
 wrote:
> Actually I agree on tagging bugs as Roman suggests.
> If no one is against it, we can create official tags for every project
> (nova, neutron, etc.) - as long as it simplifies life and is easy to use,
> I'm all for it.
>
>
> On Thu, Jun 19, 2014 at 6:26 AM, Mike Scherbakov 
> wrote:
>>
>> +1 to this approach.
>> Actually we've just created a separate LP project for MOS:
>> https://launchpad.net/mos,
>> and all bugs related to openstack / linux code (not Fuel) should be
>> tracked there.
>> I still think that we should also add other OpenStack projects by
>> clicking on "also affects" where possible.
>>
>>
>> On Thu, Jun 19, 2014 at 1:30 AM, Dmitry Borodaenko
>>  wrote:
>>>
>>> Roman,
>>>
>>> What do you think about adding OS projects to the bug as "also
>>> affects"? That allows tracking the upstream and downstream state of the
>>> bug separately while maintaining visibility of both on the same page.
>>> The only downside is spamming the bug with comments related to
>>> different projects, but I think it's a reasonable trade-off; you can't
>>> have too much information about a bug :)
>>>
>>> -DmitryB
>>>
>>>
>>> On Wed, Jun 18, 2014 at 2:04 AM, Roman Podoliaka
>>>  wrote:

 Hi Fuelers,

 Not directly related to bug squashing day, but something to keep in
 mind.

 AFAIU, both MOS and Fuel bugs are currently tracked under
 https://bugs.launchpad.net/fuel/ Launchpad project page. Most bugs
 filed there are probably deployment-specific, but still I bet there are
 a lot of bugs in OS projects that you run into. If you could tag those
 using OS project names (e.g. you already have a 'neutron' tag, but
 not 'nova' one) when triaging new bugs, that would greatly help us to
 find and fix them in both MOS and upstream projects.

 Thanks,
 Roman

 On Wed, Jun 18, 2014 at 8:04 AM, Mike Scherbakov
  wrote:
 > Fuelers,
 > please pay attention to stalled in progress bugs too - those which are
 > In
 > progress for more than a week. See [1].
 >
 >
 > [1]
 >
 > https://bugs.launchpad.net/fuel/+bugs?field.searchtext=&orderby=date_last_updated&search=Search&field.status%3Alist=INPROGRESS&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
 >
 >
 > On Wed, Jun 18, 2014 at 8:43 AM, Mike Scherbakov
 > 
 > wrote:
 >>
 >> Thanks for participation, folks.
 >> Current count:
 >> New - 12
 >> Incomplete - 30
 >> Confirmed / Triaged / in progress for 5.1 - 368
 >>
 >> I've not logged how many bugs we had, but calculated that 26 bugs
 >> were
 >> filed over last 24 hours.
 >>
 >> Overall, seems to be we did a good job in triaging, but results for
 >> fixing
 >> bugs are not that impressive. I'm inclined to think about another
 >> run, let's
 >> say, next Tuesday.
 >>
 >>
 >>
 >> On Tue, Jun 17, 2014 at 7:12 AM, Mike Scherbakov
 >>  wrote:
 >>>
 >>> Current count:
 >>> New - 56
 >>> Incomplete - 48
 >>> Confirmed/Triaged/In progress for 5.1 - 331
 >>>
 >>> Let's squash as many as we can!
 >>>
 >>>
 >>> On Mon, Jun 16, 2014 at 6:16 AM, Mike Scherbakov
 >>>  wrote:
 
  Fuelers,
  as we discussed during last IRC meeting, I'm scheduling bug
  squashing
  day on Tuesday, June 17th.
 
  I'd like to propose the following order of bugs processing:
 
  Confirm / triage bugs in New status, assigning them to yourself to
  avoid
  the situation when a few people work on same bug
  Review bugs in Incomplete status, move them to Confirmed / Triaged
  or
  close as Invalid.
  Follow https://wiki.openstack.org/wiki/BugTriage for the rest (

Re: [openstack-dev] [neutron]Performance of security group

2014-06-18 Thread shihanzhang
Now I don't have accurate test data yet, but I can confirm the following points:
1. On a compute node, the iptables chain of a VM is linear, and iptables filters
it rule by rule. If a VM is in the default security group and that group has
many members, the chain grows with each member; once an ipset chain is set up,
there is not much difference between filtering one member and many.
2. When the iptables rule set is very large, the probability that iptables-save
fails to save the rules is very high.
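The trade-off in point 1 is easy to see by generating the rules both ways (the chain and set names below are made up for illustration): the per-member variant grows the iptables chain linearly, while the ipset variant keeps the chain at a single rule no matter how many members the group has.

```python
def linear_rules(chain, member_ips):
    """One iptables rule per remote-group member: the chain the bug
    describes, traversed linearly for every packet."""
    return ["-A %s -s %s/32 -j RETURN" % (chain, ip) for ip in member_ips]

def ipset_rules(chain, set_name, member_ips):
    """With ipset, membership moves into a kernel hash set; the iptables
    chain needs a single '-m set' rule, and adding members no longer
    grows the chain."""
    ipset_cmds = (["ipset create %s hash:ip" % set_name] +
                  ["ipset add %s %s" % (set_name, ip) for ip in member_ips])
    iptables = ["-A %s -m set --match-set %s src -j RETURN"
                % (chain, set_name)]
    return ipset_cmds, iptables
```

For a 250-member group this is 250 chain rules versus one, which also shrinks the rule set that iptables-save has to serialize (point 2).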






At 2014-06-19 10:55:56, "Kevin Benton"  wrote:


This sounds like a good idea to handle some of the performance issues until the
ovs firewall can be implemented down the line.
Do you have any performance comparisons?

On Jun 18, 2014 7:46 PM, "shihanzhang"  wrote:

Hello all,


Now in neutron, iptables is used to implement security groups, but the
performance of this implementation is very poor; there is a bug:
https://bugs.launchpad.net/neutron/+bug/1302272 reflecting this problem. In
his test, with default security groups (which have a remote security group),
beyond 250-300 VMs there were around 6k iptables rules on every compute node.
Although his patch can reduce the processing time, it doesn't solve the
problem fundamentally. I have submitted a BP to solve this problem:
https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
Is anyone else interested in this?




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] UI for cinder volume type extra specs

2014-06-18 Thread Gary Smith
During our weekly horizon meeting this Tuesday, I requested that you 
review the horizon change https://review.openstack.org/#/c/64103/, which 
added support for volume type extra specs into horizon.  This 
functionality is desired by the cinder team, and it will also be a model 
for the upcoming development of the UI for cinder qos spec support
(https://blueprints.launchpad.net/horizon/+spec/cinder-qos-specs).


It was suggested during the meeting to evaluate and possibly incorporate 
a new key/value widget (https://review.openstack.org/#/c/99761/) into 
the above change. Since this new widget has not yet landed and still has 
some other issues to address, I recommend that we not couple these two 
changes together.  Later, when the new widget lands, a separate change 
can be created to incorporate it into the several places in the UI that 
deal with metadata / extra specs.


So, I would request that you please review the change 
https://review.openstack.org/#/c/64103/ at your earliest convenience and 
give your feedback to the developer.  Thanks!



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-06-18 Thread Kevin Benton
This sounds like a good idea to handle some of the performance issues until
the ovs firewall can be implemented down the line.
Do you have any performance comparisons?
On Jun 18, 2014 7:46 PM, "shihanzhang"  wrote:

> Hello all,
>
> Now in neutron, iptables is used to implement security groups, but the
> performance of this implementation is very poor; there is a bug:
> https://bugs.launchpad.net/neutron/+bug/1302272 reflecting this problem.
> In his test, with default security groups (which have a remote security
> group), beyond 250-300 VMs there were around 6k iptables rules on every
> compute node. Although his patch can reduce the processing time, it
> doesn't solve the problem fundamentally. I have submitted a BP to solve
> this problem:
> https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
> 
> Is anyone else interested in this?
>
>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron]Performance of security group

2014-06-18 Thread shihanzhang
Hello all,


Now in neutron, iptables is used to implement security groups, but the
performance of this implementation is very poor; there is a bug:
https://bugs.launchpad.net/neutron/+bug/1302272 reflecting this problem. In
his test, with default security groups (which have a remote security group),
beyond 250-300 VMs there were around 6k iptables rules on every compute node.
Although his patch can reduce the processing time, it doesn't solve the
problem fundamentally. I have submitted a BP to solve this problem:
https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
Is anyone else interested in this?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug squashing day on June, 17

2014-06-18 Thread Mike Scherbakov
Actually I agree on tagging bugs as Roman suggests.
If no one is against it, we can create official tags for every project
(nova, neutron, etc.) - as long as it simplifies life and is easy to use,
I'm all for it.


On Thu, Jun 19, 2014 at 6:26 AM, Mike Scherbakov 
wrote:

> +1 to this approach.
> Actually we've just created a separate LP project for MOS:
> https://launchpad.net/mos,
> and all bugs related to openstack / linux code (not Fuel) should be
> tracked there.
> I still think that we should also add other OpenStack projects by
> clicking on "also affects" where possible.
>
>
> On Thu, Jun 19, 2014 at 1:30 AM, Dmitry Borodaenko <
> dborodae...@mirantis.com> wrote:
>
>> Roman,
>>
>> What do you think about adding OS projects to the bug as "also
>> affects"? That allows tracking the upstream and downstream state of the
>> bug separately while maintaining visibility of both on the same page.
>> The only downside is spamming the bug with comments related to different
>> projects, but I think it's a reasonable trade-off; you can't have too
>> much information about a bug :)
>>
>> -DmitryB
>>
>>
>> On Wed, Jun 18, 2014 at 2:04 AM, Roman Podoliaka > > wrote:
>>
>>> Hi Fuelers,
>>>
>>> Not directly related to bug squashing day, but something to keep in mind.
>>>
>>> AFAIU, both MOS and Fuel bugs are currently tracked under
>>> https://bugs.launchpad.net/fuel/ Launchpad project page. Most bugs
>>> filed there are probably deployment-specific, but still I bet there are
>>> a lot of bugs in OS projects that you run into. If you could tag those
>>> using OS project names (e.g. you already have a 'neutron' tag, but
>>> not 'nova' one) when triaging new bugs, that would greatly help us to
>>> find and fix them in both MOS and upstream projects.
>>>
>>> Thanks,
>>> Roman
>>>
>>> On Wed, Jun 18, 2014 at 8:04 AM, Mike Scherbakov
>>>  wrote:
>>> > Fuelers,
>>> > please pay attention to stalled in progress bugs too - those which are
>>> In
>>> > progress for more than a week. See [1].
>>> >
>>> >
>>> > [1]
>>> >
>>> https://bugs.launchpad.net/fuel/+bugs?field.searchtext=&orderby=date_last_updated&search=Search&field.status%3Alist=INPROGRESS&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
>>> >
>>> >
>>> > On Wed, Jun 18, 2014 at 8:43 AM, Mike Scherbakov <
>>> mscherba...@mirantis.com>
>>> > wrote:
>>> >>
>>> >> Thanks for participation, folks.
>>> >> Current count:
>>> >> New - 12
>>> >> Incomplete - 30
>>> >> Confirmed / Triaged / in progress for 5.1 - 368
>>> >>
>>> >> I've not logged how many bugs we had, but calculated that 26 bugs were
>>> >> filed over last 24 hours.
>>> >>
>>> >> Overall, seems to be we did a good job in triaging, but results for
>>> fixing
>>> >> bugs are not that impressive. I'm inclined to think about another
>>> run, let's
>>> >> say, next Tuesday.
>>> >>
>>> >>
>>> >>
>>> >> On Tue, Jun 17, 2014 at 7:12 AM, Mike Scherbakov
>>> >>  wrote:
>>> >>>
>>> >>> Current count:
>>> >>> New - 56
>>> >>> Incomplete - 48
>>> >>> Confirmed/Triaged/In progress for 5.1 - 331
>>> >>>
>>> >>> Let's squash as many as we can!
>>> >>>
>>> >>>
>>> >>> On Mon, Jun 16, 2014 at 6:16 AM, Mike Scherbakov
>>> >>>  wrote:
>>> 
>>>  Fuelers,
>>>  as we discussed during last IRC meeting, I'm scheduling bug
>>> squashing
>>>  day on Tuesday, June 17th.
>>> 
>>>  I'd like to propose the following order of bugs processing:
>>> 
>>>  Confirm / triage bugs in New status, assigning them to yourself to
>>> avoid
>>>  the situation when a few people work on same bug
>>>  Review bugs in Incomplete status, move them to Confirmed / Triaged
>>> or
>>>  close as Invalid.
>>>  Follow https://wiki.openstack.org/wiki/BugTriage for the rest
>>> (this is
>>>  MUST read for those who have not done it yet)
>>> 
>>>  When we are more or less done with triaging, we can start proposing
>>>  fixes for bugs. I suggest to extensively use #fuel-dev IRC for
>>>  synchronization, and while someone fixes some bugs - the other one
>>> can
>>>  participate in review of fixes. Don't hesitate to ask for code
>>> reviews.
>>> 
>>>  Regards,
>>>  --
>>>  Mike Scherbakov
>>>  #mihgen
>>> 
>>> >>>
>>> >>>
>>> >>>
>>> >>> --
>>> >>> Mike Scherbakov
>>> >>> #mihgen
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Mike Scherbakov
>>> >> #mihgen
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Mike Scherbakov
>>> > #mihgen
>>> >
>>> >

Re: [openstack-dev] [Fuel] Bug squashing day on June, 17

2014-06-18 Thread Mike Scherbakov
+1 to this approach.
Actually we've just created a separate LP project for MOS:
https://launchpad.net/mos,
and all bugs related to openstack / linux code (not Fuel) should be
tracked there.
I still think that we should also add other OpenStack projects by
clicking on "also affects" where possible.


On Thu, Jun 19, 2014 at 1:30 AM, Dmitry Borodaenko  wrote:

> Roman,
>
> What do you think about adding OS projects to the bug as "also affects"?
> That allows tracking the upstream and downstream state of the bug
> separately while maintaining visibility of both on the same page. The only
> downside is spamming the bug with comments related to different projects,
> but I think it's a reasonable trade-off; you can't have too much
> information about a bug :)
>
> -DmitryB
>
>
> On Wed, Jun 18, 2014 at 2:04 AM, Roman Podoliaka 
> wrote:
>
>> Hi Fuelers,
>>
>> Not directly related to bug squashing day, but something to keep in mind.
>>
>> AFAIU, both MOS and Fuel bugs are currently tracked under
>> https://bugs.launchpad.net/fuel/ Launchpad project page. Most bugs
>> filed there are probably deployment-specific, but still I bet there are
>> a lot of bugs in OS projects that you run into. If you could tag those
>> using OS project names (e.g. you already have a 'neutron' tag, but
>> not 'nova' one) when triaging new bugs, that would greatly help us to
>> find and fix them in both MOS and upstream projects.
>>
>> Thanks,
>> Roman
>>
>> On Wed, Jun 18, 2014 at 8:04 AM, Mike Scherbakov
>>  wrote:
>> > Fuelers,
>> > please pay attention to stalled in progress bugs too - those which are
>> In
>> > progress for more than a week. See [1].
>> >
>> >
>> > [1]
>> >
>> https://bugs.launchpad.net/fuel/+bugs?field.searchtext=&orderby=date_last_updated&search=Search&field.status%3Alist=INPROGRESS&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
>> >
>> >
>> > On Wed, Jun 18, 2014 at 8:43 AM, Mike Scherbakov <
>> mscherba...@mirantis.com>
>> > wrote:
>> >>
>> >> Thanks for participation, folks.
>> >> Current count:
>> >> New - 12
>> >> Incomplete - 30
>> >> Confirmed / Triaged / in progress for 5.1 - 368
>> >>
>> >> I've not logged how many bugs we had, but calculated that 26 bugs were
>> >> filed over last 24 hours.
>> >>
>> >> Overall, seems to be we did a good job in triaging, but results for
>> fixing
>> >> bugs are not that impressive. I'm inclined to think about another run,
>> let's
>> >> say, next Tuesday.
>> >>
>> >>
>> >>
>> >> On Tue, Jun 17, 2014 at 7:12 AM, Mike Scherbakov
>> >>  wrote:
>> >>>
>> >>> Current count:
>> >>> New - 56
>> >>> Incomplete - 48
>> >>> Confirmed/Triaged/In progress for 5.1 - 331
>> >>>
>> >>> Let's squash as many as we can!
>> >>>
>> >>>
>> >>> On Mon, Jun 16, 2014 at 6:16 AM, Mike Scherbakov
>> >>>  wrote:
>> 
>>  Fuelers,
>>  as we discussed during last IRC meeting, I'm scheduling bug squashing
>>  day on Tuesday, June 17th.
>> 
>>  I'd like to propose the following order of bugs processing:
>> 
>>  Confirm / triage bugs in New status, assigning them to yourself to
>> avoid
>>  the situation when a few people work on same bug
>>  Review bugs in Incomplete status, move them to Confirmed / Triaged or
>>  close as Invalid.
>>  Follow https://wiki.openstack.org/wiki/BugTriage for the rest (this
>> is
>>  MUST read for those who have not done it yet)
>> 
>>  When we are more or less done with triaging, we can start proposing
>>  fixes for bugs. I suggest to extensively use #fuel-dev IRC for
>>  synchronization, and while someone fixes some bugs - the other one
>> can
>>  participate in review of fixes. Don't hesitate to ask for code
>> reviews.
>> 
>>  Regards,
>>  --
>>  Mike Scherbakov
>>  #mihgen
>> 
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Mike Scherbakov
>> >>> #mihgen
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Mike Scherbakov
>> >> #mihgen
>> >>
>> >
>> >
>> >
>> > --
>> > Mike Scherbakov
>> > #mihgen
>> >
>> >
>> >
>>
>>
>
>
>
> --
> Dmitry Borodaenko
>

Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-18 Thread Isaku Yamahata
What's the progress by Terry Wilson?
If not much, I'm willing to file blueprint/spec and drive it.

thanks,

On Wed, Jun 18, 2014 at 07:00:59PM +0900,
Isaku Yamahata  wrote:

> Hi. Ryu provides ovs_vsctl.py library which is python equivalent to
> ovs-vsctl command. It speaks the OVSDB protocol.
> https://github.com/osrg/ryu/blob/master/ryu/lib/ovs/vsctl.py
> 
> So with the library, it's a mostly mechanical change to convert
> ovs_lib.py, I think.
> I'm not aware other similar library written in python.
> 
> thanks,
> Isaku Yamahata
> 
> 
> On Tue, Jun 17, 2014 at 11:38:36AM -0500,
> Kyle Mestery  wrote:
> 
> > Another area of improvement for the agent would be to move away from
> > executing CLIs for port commands and instead use OVSDB. Terry Wilson
> > and I talked about this, and re-writing ovs_lib to use an OVSDB
> > connection instead of the CLI methods would be a huge improvement
> > here. I'm not sure if Terry was going to move forward with this, but
> > I'd be in favor of this for Juno if he or someone else wants to move
> > in this direction.
> > 
> > Thanks,
> > Kyle
> > 
> > On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando  
> > wrote:
> > > We've started doing this in a slightly more reasonable way for icehouse.
> > > What we've done is:
> > > - remove unnecessary notification from the server
> > > - process all port-related events, either trigger via RPC or via monitor 
> > > in
> > > one place
> > >
> > > Obviously there is always a lot of room for improvement, and I agree
> > > something along the lines of what Zang suggests would be more maintainable
> > > and ensure faster event processing as well as making it easier to have 
> > > some
> > > form of reliability on event processing.
> > >
> > > I was considering doing something for the ovs-agent again in Juno, but 
> > > since
> > > we've moving towards a unified agent, I think any new "big" ticket should
> > > address this effort.
> > >
> > > Salvatore
> > >
> > >
> > > On 17 June 2014 13:31, Zang MingJie  wrote:
> > >>
> > >> Hi:
> > >>
> > >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
> > >> intent to rebuild a more stable flexible agent.
> > >>
> > >> Taking the experience of the ovs-agent bugs, I think the concurrency
> > >> problem is also a very important one: the agent gets lots of events
> > >> from different greenlets (the RPC, the OVS monitor, or the main loop).
> > >> I'd suggest serializing all events into a queue, then processing them
> > >> in a dedicated thread. The thread checks the events one by one, in
> > >> order, resolves what has changed, and then applies the corresponding
> > >> changes. If any error occurs in the thread, it discards the event
> > >> currently being processed and does a fresh-start event, which resets
> > >> everything and then applies the correct settings.
> > >>
> > >> The threading model is so important, and may prevent tons of bugs in
> > >> future development, that we should describe it clearly in the
> > >> architecture.
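
The serialized event loop described above might be sketched like this (a minimal, illustrative sketch; names such as EventProcessor and the handler signatures are assumptions, not actual Neutron agent code):

```python
import queue
import threading

class EventProcessor:
    """Serialize events from many producers into one worker thread.

    On any error while handling an event, the current event is discarded
    and a full resync callback runs, as suggested for the modular L2 agent.
    """

    def __init__(self, handlers, resync):
        self._queue = queue.Queue()
        self._handlers = handlers   # e.g. {'port_update': fn, ...}
        self._resync = resync       # full "fresh start" callback
        self._worker = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._worker.start()

    def submit(self, event_type, payload):
        # Called from RPC greenlets, the OVSDB monitor, the main loop...
        self._queue.put((event_type, payload))

    def stop(self):
        # Sentinel tells the worker to drain the queue and exit.
        self._queue.put(None)
        self._worker.join()

    def _run(self):
        while True:
            item = self._queue.get()
            if item is None:
                return
            event_type, payload = item
            try:
                self._handlers[event_type](payload)
            except Exception:
                # Discard the failing event and reset everything.
                self._resync()
```

A real resync handler would re-read OVS state and reapply flows; here it is just a callback to keep the sketch small.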
> > >>
> > >>
> > >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> > >> wrote:
> > >> > Following the discussions in the ML2 subgroup weekly meetings, I have
> > >> > added
> > >> > more information on the etherpad [1] describing the proposed
> > >> > architecture
> > >> > for modular L2 agents. I have also posted some code fragments at [2]
> > >> > sketching the implementation of the proposed architecture. Please have 
> > >> > a
> > >> > look when you get a chance and let us know if you have any comments.
> > >> >
> > >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
> > >> > [2] https://review.openstack.org/#/c/99187/
> > >> >
> > >> >
> > >> > ___
> > >> > OpenStack-dev mailing list
> > >> > OpenStack-dev@lists.openstack.org
> > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >> >
> > >>
> > >
> > >
> > >
> > >
> > 
> 
> -- 
> Isaku Yamahata 

-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-18 Thread wu jiang
Congratulation!


On Wed, Jun 18, 2014 at 7:07 PM, Kenichi Oomichi 
wrote:

>
> > -Original Message-
> > From: Michael Still [mailto:mi...@stillhq.com]
> > Sent: Wednesday, June 18, 2014 7:54 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for
> nova-core
> >
> > Kenichi has now been added to the nova-core group in gerrit. Welcome
> aboard!
>
> Thank you for the many +1s; I'm glad to join the nova-core group :-)
> I am going to work hard to keep development going smoothly.
>
>
> Thanks
> Ken'ichi Ohmichi
>
> ---
>
> > On Tue, Jun 17, 2014 at 6:18 PM, Michael Still 
> wrote:
> > > Hi. I'm going to let this sit for another 24 hours, and then we'll
> > > declare it closed.
> > >
> > > Cheers,
> > > Michael
> > >
> > > On Tue, Jun 17, 2014 at 6:16 AM, Mark McLoughlin 
> wrote:
> > >> On Sat, 2014-06-14 at 08:40 +1000, Michael Still wrote:
> > >>> Greetings,
> > >>>
> > >>> I would like to nominate Ken'ichi Ohmichi for the nova-core team.
> > >>>
> > >>> Ken'ichi has been involved with nova for a long time now.  His
> reviews
> > >>> on API changes are excellent, and he's been part of the team that has
> > >>> driven the new API work we've seen in recent cycles forward. Ken'ichi
> > >>> has also been reviewing other parts of the code base, and I think his
> > >>> reviews are detailed and helpful.
> > >>
> > >> +1, great to see Ken'ichi join the team
> > >>
> > >> Mark.
> > >>
> > >>
> > >
> > >
> > >
> > > --
> > > Rackspace Australia
> >
> >
> >
> > --
> > Rackspace Australia
> >
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Joe Gordon
On Wed, Jun 18, 2014 at 5:19 PM, Clint Byrum  wrote:

> Excerpts from Duncan Thomas's message of 2014-06-17 03:56:10 -0700:
> > A far more effective way to reduce the load of trivial review issues
> > on core reviewers is for non-core reviewers to get in there first,
> > spot the problems and add a -1 - the trivial issues are then hopefully
> > fixed up before a core reviewer even looks at the patch.
> >
> > The fundamental problem with review is that there are more people
> > submitting than doing regular reviews. If you want the review queue to
> > shrink, do five reviews for every one you submit. A -1 from a
> > non-core (followed by a +1 when all the issues are fixed) is far,
> > far, far more useful in general than a +1 on a new patch.
> >
>
> Perhaps we should incentivize having a good "reviews to patches" ratio
> somehow. There are probably quite a few people who are not ever going to
> be core reviewers, but who don't mind doing a few reviews per day.
>
>
Perhaps we can add that to
http://stackalytics.com/report/contribution/nova-group/30


> I can think of a few ways, but one way is to make that a real statistic
> (brace yourselves for the warnings of "gaming the system") and then give
> the top 10 non-core reviews to patches ratios a shout out each release.
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Tracking bug and patch statuses

2014-06-18 Thread Joe Gordon
On Sat, Jun 7, 2014 at 11:39 AM, Matt Riedemann 
wrote:

>
>
> On 6/6/2014 1:40 AM, Joe Gordon wrote:
>
>> Hi All,
>>
>> In the nova meeting this week, we discussed some of the shortcomings of
>> our recent bug day, one of the ideas that was brought up was to do a
>> better job of keeping track of stale bugs (assigned but not worked on)
>> [0]. To that end I put something together based on what infra uses for
>> their bug days to go through all the open bugs in a project and list the
>> related gerrit patches and their state [1].
>>
>> I ran this on nova [2] (just the first 750 bugs or so) and
>> python-novaclient [3].  From the looks of it we can be doing a much
>> better job of keeping bug states in sync with patches etc.
>>
>> [0]
>> http://eavesdrop.openstack.org/meetings/nova/2014/nova.
>> 2014-06-05-21.01.log.html
>> [1] https://github.com/jogo/openstack-infra-scripts
>> [2] http://paste.openstack.org/show/83055/
>> [3] http://paste.openstack.org/show/83057
>>
>>
>>
>>
> Can you paste 2 and 3 somewhere besides p.o.o?  That doesn't seem to work
> anymore.


Sorry for the delayed response here is a sample for nova (first 100 bugs or
so, as all those API calls to launchpad can be slow)

https://etherpad.openstack.org/p/eEYO2Fdsuv


>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2014-06-17 03:56:10 -0700:
> A far more effective way to reduce the load of trivial review issues
> on core reviewers is for non-core reviewers to get in there first,
> spot the problems and add a -1 - the trivial issues are then hopefully
> fixed up before a core reviewer even looks at the patch.
> 
> The fundamental problem with review is that there are more people
> submitting than doing regular reviews. If you want the review queue to
> shrink, do five reviews for every one you submit. A -1 from a
> none-core (followed by a +1 when all the issues are fixed) is far,
> far, far more useful in general than a +1 on a new patch.
> 

Perhaps we should incentivize having a good "reviews to patches" ratio
somehow. There are probably quite a few people who are not ever going to
be core reviewers, but who don't mind doing a few reviews per day.

I can think of a few ways, but one way is to make that a real statistic
(brace yourselves for the warnings of "gaming the system") and then give
the top 10 non-core reviews to patches ratios a shout out each release.
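
As a back-of-the-envelope illustration, the statistic above could be computed like this (the data shape and field names are assumptions; real numbers would come from Gerrit's query API or Stackalytics):

```python
def review_to_patch_ratio(reviews, patches):
    """Reviews-per-patch ratio; contributors who only review rank by
    their raw review count rather than dividing by zero."""
    if patches == 0:
        return float(reviews)
    return reviews / patches

def top_reviewers(stats, n=10):
    """stats maps name -> (reviews, patches); return the top-n names
    by reviews-to-patches ratio, e.g. for a per-release shout-out."""
    return sorted(stats,
                  key=lambda name: review_to_patch_ratio(*stats[name]),
                  reverse=True)[:n]
```

Gaming the metric (drive-by +1s) would still need to be filtered out, e.g. by only counting reviews with comments.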

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PyCon AU OpenStack Miniconf – Call for proposals now open!

2014-06-18 Thread Joshua Hesketh

Hello everybody,

Just a quick reminder that the call for proposals closes at the end of 
Friday for the OpenStack miniconf in Brisbane, Australia.


http://openstack.pycon-au.org

Cheers,
Josh

On 5/27/14 5:33 PM, Joshua Hesketh wrote:

The OpenStack miniconf organisers for PyCon AU are pleased to announce
their call for proposals is now open!

The OpenStack miniconf is a one day conference held on Friday the 1st of
August 2014 in Brisbane before PyCon Australia. The day is dedicated to
talks related to the OpenStack project and we welcome proposals of all
kinds, from all kinds of speakers - first-time through to
super-experienced. The miniconf is a community conference and we are
eager to hear from anyone in the community.

Presentation subjects may range from reports on OpenStack; technical,
community, infrastructure or code talks/discussions; academic or
commercial applications; or even tutorials and case studies. If a
presentation is interesting and useful to the OpenStack community, it
will be considered for inclusion. We also welcome talks that have been
given previously in different events - e.g. talks from the OpenStack
Summit in Atlanta will be considered for inclusion.

The deadline for proposals is the 20th of June. If you submitted
OpenStack related talks to the main programme we encourage you to
re-submit to the miniconf.

If you have friends or colleagues who have something valuable to
contribute, twist their arms to tell us about it! Please also forward
this Call for Proposals to anyone that you feel may be interested.

To send in your submissions please visit http://openstack.pycon-au.org
See you in Brisbane in August!





--
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] locked instances and snaphot

2014-06-18 Thread Duncan Thomas
Duncan Thomas
On Jun 18, 2014 9:51 PM, "Jay Pipes"  wrote:

> VMs should be cattle, not pets, but yes, a locked instance should be able
to be snapshotted, for sure, IMO.

Shooting all your cattle by accident is bad y'know, and as a cattle
farmer it will probably put you out of business... The effort you've put into
raising them has a non-zero cost, and if you keep using them for target
practice then some other farmer is going to be selling cheaper beef than
you...
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Driver] Delete snapshot

2014-06-18 Thread Mike Perez
On 10:20 Wed 18 Jun , Amit Das wrote:
> Implementation issues: if the Cinder driver throws an exception, the snapshot
> will have error_deleting status and will not be usable. If the Cinder driver
> logs the error silently, then OpenStack will probably mark the snapshot as
> deleted.
> 
> What is the appropriate procedure that needs to be followed for the above
> use case?

I'm not sure what "Openstack will probably mark the snapshot as deleted" means.
If a snapshot gets marked with error_deleting, we don't know what state the
snapshot is in because it could've been a delete that partially finished. You
should leave the cinder volume manager to handle this. It's up to the driver to
say the delete finished or failed, that's it.
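
In other words, the driver's only job is to raise on failure; the manager maps that outcome to a status. A rough sketch of the contract (illustrative names, not the actual Cinder volume manager code):

```python
class SnapshotDeleteError(Exception):
    """Raised by the driver when a backend delete fails."""

class MyDriver:
    """Illustrative driver skeleton, not a real Cinder driver."""

    def __init__(self, backend):
        self.backend = backend

    def delete_snapshot(self, snapshot):
        # Let failures propagate: the volume manager, not the driver,
        # decides what status the snapshot ends up in.
        if not self.backend.delete(snapshot['id']):
            raise SnapshotDeleteError(snapshot['id'])

def manager_delete_snapshot(driver, snapshot):
    """Mimics the manager's handling: 'deleted' on success,
    'error_deleting' on any driver exception."""
    try:
        driver.delete_snapshot(snapshot)
    except Exception:
        snapshot['status'] = 'error_deleting'
    else:
        snapshot['status'] = 'deleted'
    return snapshot['status']
```

Swallowing the exception inside the driver would make the manager report success for a snapshot that may still exist on the backend, which is exactly the ambiguity described above.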

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-18 Thread Stephen Balukoff
So... what I'm hearing here is that we might want to support both a
'hostname' and 'order' attribute. Though exact behavior from vendor to
vendor when there is name overlap is not likely to be consistent.

Note that while we have seen that corner case, it is unusual... so I'm not
against having slightly different behavior when there's name overlap from
vendor to vendor.

Stephen


On Wed, Jun 18, 2014 at 2:15 PM, Samuel Bercovici 
wrote:

>  Hi Stephen,
>
>
>
>  Radware Alteon extracts the hostname information and the subjectAltName
> entries from the certificate information.
>
> It then does:
>
> 1.  Check for an exact match between the name in the HTTPS handshake
> and the ones extracted from the certificates; if there is more than a
> single match, the 1st one in the order will be used
>
> 2.  If no match was found, then try to match using the regexp hostname;
> if you have multiple matches, the 1st one will be used
>
> 3.  If no match was found, then try to match using subjectAltName. If
> you have multiple matches, the 1st one will be used
>
> 4.  If there is still no match, use the default certificate
>
>
>
> -Sam.
>
>
>
>
>
>
>
>
>
> *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
> *Sent:* Thursday, June 19, 2014 12:03 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document
> on Gerrit
>
>
>
> Hi Evg,
>
>
>
> I do not think stunnel supports an "ordered list" without hostnames. Since
> we're talking about making the reference implementation use stunnel for TLS
> termination, then this seems like it's important to support its behavioral
> model.
>
>
>
> It is possible to extract hostnames from the CN and x509v3 Subject
> Alternative Names in the certs, but, as has been discussed previously,
> these can overlap, and it's not always reliable to rely on this data from
> the certs themselves. So, while I have nothing against having an ordered
> certificate list, stunnel won't use the order here, and stunnel will likely
> have unexpected behavior if hostnames are duplicated.
>
>
>
> Would it work for Radware to simply order the (unique) hostnames
> alphabetically, and put any wildcard certificates at the end of the list?
>
>
>
> Also, while I'm loathe to ask for details on a proprietary system: How
> does Radware do SNI *without* hostnames? Isn't that entirely the point of
> SNI? Client sends a hostname, and server responds with the certificate that
> applies to that hostname?
>
>
>
> Thanks,
>
> Stephen
>
>
>
> On Wed, Jun 18, 2014 at 8:00 AM, Evgeny Fedoruk 
> wrote:
>
> Hi Stephen,
> Regarding your comment related to SNI list management and behavior in the
> RST document:
>
> I understand the need to explicitly specify specific certificates for
> specific hostnames.
> However, we need to deliver the lowest common denominator for this
> feature, one which every vendor is able to support.
> In this case, specifying a hostname for a certificate will not be
> supported by Radware.
> The original proposal with an ordered certificates list may be the lowest
> common denominator for all vendors, and we should find out if this is the
> case.
> If not, managing a simple non-ordered list will probably be the lowest
> common denominator.
>
> With the proposed flavors framework considered, extra SNI management
> capabilities may be represented for providers,
> but meanwhile we should agree on a proposal that can be implemented by all
> vendors.
> What are your thoughts on this?
>
> Regarding the SNIPolicy, I agree and will change the document accordingly.
>
> Thanks,
> Evg
>
>
>
>
>
>
> -Original Message-
> From: Evgeny Fedoruk
> Sent: Sunday, June 15, 2014 1:55 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on
> Gerrit
>
> Hi All,
>
> The document was updated and ready for next review round.
> Main things that were changed:
> 1. Comments were addressed
> 2. No back-end re-encryption supported
> 3. Intermediate certificates chain supported
> *Opened question: Should chain be stored in same TLS container of
> the certificate?
>
> Please review
> Regards,
> Evgeny
>
>
> -Original Message-
> From: Douglas Mendizabal [mailto:douglas.mendiza...@rackspace.com]
> Sent: Wednesday, June 11, 2014 10:22 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on
> Gerrit
>
> Hi Doug,
>
>
> Barbican does guarantee the integrity and availability of the secret,
> unless the owner of the secret deletes it from Barbican.  We’re not
> encouraging that you store a shadow-copy of the secret either.  This was
> proposed by the LBaaS team as a possible workaround for your use case.
> Our recommendation was that there are two options for dealing with Secrets
> being deleted from under you:
>
> If you want to control the lifecycle of the secret so that you can pr

Re: [openstack-dev] [devstack] [zmq] [oslo.messaging] Running devstack with zeromq

2014-06-18 Thread Ben Nemec
On 06/18/2014 05:45 AM, Elena Ezhova wrote:
> Hello!
> 
> I have been exploring bugs connected with using devstack with zmq [1], [2],
> [3] and experimenting with various configurations in an attempt to make zmq
> work with projects which have moved to oslo.messaging. It turned out that
> there is a number of things to fix.
> 
> Firstly, even though nova currently uses oslo.messaging, devstack still
> uses nova-rpc-zmq-receiver instead of oslo-messaging-zmq-receiver when
> starting zeromq receiver.
> 
> Secondly, the default matchmaker for zmq is always set as MatchmakerRedis
> (which currently does not work either) and there is no opportunity to
> specify anything else (e.g. MatchmakerRing) using devstack. If there was an
> option to use MatchmakerRing, it would have been possible to create a
> configuration file matchmaker_ring.json in etc/oslo/ directory and write
> there all key-value pairs needed by zmq.
> 
> So I wonder whether it is something the community is interested in and, if
> yes, are there any recommendations concerning possible implementation?

I can't speak to the specific implementation, but if we're going to keep
the zmq driver in oslo.messaging then IMHO it should be usable with
devstack, so +1 to making that work.
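
For reference, a minimal matchmaker_ring.json along the lines Elena describes might look like this (topic and host names are purely illustrative; the real ones depend on the deployed services):

```json
{
    "scheduler": ["controller-1"],
    "conductor": ["controller-1"],
    "compute": ["compute-1", "compute-2"]
}
```

Each key is an RPC topic and each value the list of hosts serving it; devstack would also need to point rpc_zmq_matchmaker at the ring matchmaker class for this file to be consulted.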

> 
> 
> Thanks,
> Elena
> 
> [1] - https://bugs.launchpad.net/devstack/+bug/1279739
> [2] - https://bugs.launchpad.net/neutron/+bug/1298803
> [3] - https://bugs.launchpad.net/oslo.messaging/+bug/1290772
> 
> 
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-18 Thread Chris Friesen
The output of "nova list" and "nova show" reflects the current status in 
the database, not the actual state on the compute node.


If the instances in question are on a compute node that is currently 
"down", then the information is stale and possibly incorrect.  Would 
there be any benefit in adding some sort of indication of this in the 
"nova list" output?  Or do we expect the end-user to check "nova 
service-list" (or other health-monitoring mechanisms) to see if the 
compute node is "up" before relying on the output of "nova list"?
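
One possible shape for such an indication, following the same heartbeat-age logic nova's servicegroup API already uses internally (the 'stale' column and the 60-second default are illustrative assumptions):

```python
import datetime

SERVICE_DOWN_TIME = 60  # seconds; mirrors nova's service_down_time default

def service_is_up(last_heartbeat, now=None):
    """A service counts as 'up' if its last periodic report is recent."""
    now = now or datetime.datetime.utcnow()
    return (now - last_heartbeat).total_seconds() <= SERVICE_DOWN_TIME

def annotate_stale(instances, heartbeats, now=None):
    """Tag each instance row whose compute host looks down, so that
    'nova list' output could flag possibly-stale database state."""
    for inst in instances:
        inst['stale'] = not service_is_up(heartbeats[inst['host']], now)
    return instances
```

The end-user would then see at a glance which rows might not reflect reality, without cross-referencing "nova service-list" themselves.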


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] All Clear on the western front (i.e. gate)

2014-06-18 Thread Anita Kuno
On 06/18/2014 05:16 PM, Jay Pipes wrote:
> On 06/18/2014 04:38 PM, Davanum Srinivas wrote:
>> w00t! thanks for the hard work everyone.
> 
> Indeed, thank you to all involved. Much appreciated. I hope in the
> future I can better help with gate fixing.
> 
> Best,
> -jay
We need to get working on that cloning program. We need a few more yous
and a couple additional Salvatores. :D

And yes, awesome work on the gate all those who made this a priority.

Anita.
> 
>> -- dims
>>
>> On Wed, Jun 18, 2014 at 7:17 AM, Sean Dague  wrote:
>>> I realized that folks may have been waiting for an 'all clear' on the
>>> gate situation. It was a tiring couple of weeks, so took a little while
>>> to get there.
>>>
>>> Due to a huge amount of effort, but a bunch of different people, a ton
>>> of bugs were squashed to get the gate back to a high pass rate -
>>> https://etherpad.openstack.org/p/gatetriage-june2014
>>>
>>> Then jeblair came back from vacation and quickly sorted out a nodepool
>>> bug that was starving our capacity, so now we aren't leaking deleted
>>> nodes the same way.
>>>
>>> With both those, our capacity for changes goes way up. Because we have
>>> more workers available at any time, and less round tripping on race
>>> bugs. We also dropped the Nova v3 tests, which shaved 8 minutes (on
>>> average) off of Tempest runs. Again, increasing throughput by getting
>>> nodes back into the pool faster.
>>>
>>> The net of all these changes is that yesterday we merged 117 patches -
>>> https://github.com/openstack/openstack/graphs/commit-activity (not a
>>> record, that's 147 in one day, but definitely a top merge day).
>>>
>>> So if you were holding off on reviews / code changes because of the
>>> state of things, you can stop now. And given the system is pretty
>>> healthy, now is actually a pretty good time to put and keep it under
>>> load to help evaluate where we stand.
>>>
>>> Thanks all,
>>>
>>>  -Sean
>>>
>>> -- 
>>> Sean Dague
>>> http://dague.net
>>>
>>>
>>>
>>
>>
>>
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug squashing day on June, 17

2014-06-18 Thread Dmitry Borodaenko
Roman,

What do you think about adding OS projects to the bug as "also affects"?
That allows tracking the upstream and downstream state of the bug separately
while maintaining visibility of both on the same page. The only downside is
spamming the bug with comments related to different projects, but I think
it's a reasonable trade-off; you can't have too much information about a
bug :)

-DmitryB


On Wed, Jun 18, 2014 at 2:04 AM, Roman Podoliaka 
wrote:

> Hi Fuelers,
>
> Not directly related to bug squashing day, but something to keep in mind.
>
> AFAIU, both MOS and Fuel bugs are currently tracked under
> https://bugs.launchpad.net/fuel/ Launchpad project page. Most bugs
> filed there are probably deployment-specific, but still I bet there are
> a lot of bugs in OS projects you run into. If you could tag those
> using OS projects names (e.g. you already have the 'neutron' tag, but
> not 'nova' one) when triaging new bugs, that would greatly help us to
> find and fix them in both MOS and upstream projects.
>
> Thanks,
> Roman
>
> On Wed, Jun 18, 2014 at 8:04 AM, Mike Scherbakov
>  wrote:
> > Fuelers,
> > please pay attention to stalled in progress bugs too - those which are In
> > progress for more than a week. See [1].
> >
> >
> > [1]
> >
> https://bugs.launchpad.net/fuel/+bugs?field.searchtext=&orderby=date_last_updated&search=Search&field.status%3Alist=INPROGRESS&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
> >
> >
> > On Wed, Jun 18, 2014 at 8:43 AM, Mike Scherbakov <
> mscherba...@mirantis.com>
> > wrote:
> >>
> >> Thanks for participation, folks.
> >> Current count:
> >> New - 12
> >> Incomplete - 30
> >> Confirmed / Triaged / in progress for 5.1 - 368
> >>
> >> I've not logged how many bugs we had, but calculated that 26 bugs were
> >> filed over last 24 hours.
> >>
> >> Overall, seems to be we did a good job in triaging, but results for
> fixing
> >> bugs are not that impressive. I'm inclined to think about another run,
> let's
> >> say, next Tuesday.
> >>
> >>
> >>
> >> On Tue, Jun 17, 2014 at 7:12 AM, Mike Scherbakov
> >>  wrote:
> >>>
> >>> Current count:
> >>> New - 56
> >>> Incomplete - 48
> >>> Confirmed/Triaged/In progress for 5.1 - 331
> >>>
> >>> Let's squash as many as we can!
> >>>
> >>>
> >>> On Mon, Jun 16, 2014 at 6:16 AM, Mike Scherbakov
> >>>  wrote:
> 
>  Fuelers,
>  as we discussed during last IRC meeting, I'm scheduling bug squashing
>  day on Tuesday, June 17th.
> 
>  I'd like to propose the following order of bugs processing:
> 
>  Confirm / triage bugs in New status, assigning them to yourself to
> avoid
>  the situation when a few people work on same bug
>  Review bugs in Incomplete status, move them to Confirmed / Triaged or
>  close as Invalid.
>  Follow https://wiki.openstack.org/wiki/BugTriage for the rest (this
> is
>  MUST read for those who have not done it yet)
> 
>  When we are more or less done with triaging, we can start proposing
>  fixes for bugs. I suggest to extensively use #fuel-dev IRC for
>  synchronization, and while someone fixes some bugs - the other one can
>  participate in review of fixes. Don't hesitate to ask for code
> reviews.
> 
>  Regards,
>  --
>  Mike Scherbakov
>  #mihgen
> 
> >>>
> >>>
> >>>
> >>> --
> >>> Mike Scherbakov
> >>> #mihgen
> >>>
> >>
> >>
> >>
> >> --
> >> Mike Scherbakov
> >> #mihgen
> >>
> >
> >
> >
> > --
> > Mike Scherbakov
> > #mihgen
> >
> >
> >
>
>



-- 
Dmitry Borodaenko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] All Clear on the western front (i.e. gate)

2014-06-18 Thread Jay Pipes

On 06/18/2014 04:38 PM, Davanum Srinivas wrote:

w00t! thanks for the hard work everyone.


Indeed, thank you to all involved. Much appreciated. I hope in the 
future I can better help with gate fixing.


Best,
-jay


-- dims

On Wed, Jun 18, 2014 at 7:17 AM, Sean Dague  wrote:

I realized that folks may have been waiting for an 'all clear' on the
gate situation. It was a tiring couple of weeks, so took a little while
to get there.

Due to a huge amount of effort, but a bunch of different people, a ton
of bugs were squashed to get the gate back to a high pass rate -
https://etherpad.openstack.org/p/gatetriage-june2014

Then jeblair came back from vacation and quickly sorted out a nodepool
bug that was starving our capacity, so now we aren't leaking deleted
nodes the same way.

With both those, our capacity for changes goes way up. Because we have
more workers available at any time, and less round tripping on race
bugs. We also dropped the Nova v3 tests, which shaved 8 minutes (on
average) off of Tempest runs. Again, increasing throughput by getting
nodes back into the pool faster.

The net of all these changes is that yesterday we merged 117 patches -
https://github.com/openstack/openstack/graphs/commit-activity (not a
record, that's 147 in one day, but definitely a top merge day).

So if you were holding off on reviews / code changes because of the
state of things, you can stop now. And given the system is pretty
healthy, now is actually a pretty good time to put and keep it under
load to help evaluate where we stand.

Thanks all,

 -Sean

--
Sean Dague
http://dague.net










___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-18 Thread Samuel Bercovici
Hi Stephen,

Radware Alteon extracts the hostname information and the subjectAltName 
entries from the certificate information.
It then does:

1.  Check for an exact match between the name in the HTTPS handshake and 
the ones extracted from the certificates; if there is more than a single 
match, the 1st one in the order will be used

2.  If no match was found, then try to match using the regexp hostname; if 
you have multiple matches, the 1st one will be used

3.  If no match was found, then try to match using subjectAltName. If you 
have multiple matches, the 1st one will be used

4.  If there is still no match, use the default certificate
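
The four-step fallback above can be expressed compactly; a sketch (the dictionary keys and the use of fnmatch-style wildcards for the "regexp hostname" step are assumptions, not the Alteon implementation):

```python
import fnmatch

def select_certificate(sni_hostname, certs, default):
    """Pick a certificate for an SNI hostname, in the priority order
    described above: exact hostname match, then wildcard hostname,
    then subjectAltName, then the default certificate.

    'certs' is an ordered list of dicts; within each step, the first
    match in list order wins.
    """
    # 1. Exact match against the certificate hostname (CN).
    for cert in certs:
        if cert['hostname'] == sni_hostname:
            return cert
    # 2. Wildcard hostname match, e.g. '*.example.com'.
    for cert in certs:
        if fnmatch.fnmatch(sni_hostname, cert['hostname']):
            return cert
    # 3. subjectAltName match (exact or wildcard).
    for cert in certs:
        if any(fnmatch.fnmatch(sni_hostname, alt)
               for alt in cert.get('alt_names', ())):
            return cert
    # 4. Fall back to the default certificate.
    return default
```

Note that an exact entry always beats a wildcard one regardless of list order, which is why name overlap between certificates is mostly harmless under this scheme.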

-Sam.




From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Thursday, June 19, 2014 12:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

Hi Evg,

I do not think stunnel supports an "ordered list" without hostnames. Since 
we're talking about making the reference implementation use stunnel for TLS 
termination, then this seems like it's important to support its behavioral 
model.

It is possible to extract hostnames from the CN and x509v3 Subject Alternative 
Names in the certs, but, as has been discussed previously, these can overlap, 
and it's not always reliable to rely on this data from the certs themselves. 
So, while I have nothing against having an ordered certificate list, stunnel 
won't use the order here, and stunnel will likely have unexpected behavior if 
hostnames are duplicated.
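
For what it's worth, stunnel's SNI support is configured per hostname pattern, which is why the hostname-keyed model maps onto it naturally. A sketch (syntax per stunnel's sni service option; service names, hostnames, and paths are illustrative):

```ini
; default service: serves this cert when no SNI pattern matches
[https]
accept  = 443
connect = 8080
cert    = /etc/stunnel/default.pem

; exact server_name match
[https-www]
sni  = https:www.example.com
cert = /etc/stunnel/www.example.com.pem

; wildcard server_name match
[https-wildcard]
sni  = https:*.example.org
cert = /etc/stunnel/wildcard.example.org.pem
```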

Would it work for Radware to simply order the (unique) hostnames 
alphabetically, and put any wildcard certificates at the end of the list?

Also, while I'm loathe to ask for details on a proprietary system: How does 
Radware do SNI *without* hostnames? Isn't that entirely the point of SNI? 
Client sends a hostname, and server responds with the certificate that applies 
to that hostname?

Thanks,
Stephen

On Wed, Jun 18, 2014 at 8:00 AM, Evgeny Fedoruk 
mailto:evge...@radware.com>> wrote:
Hi Stephen,
Regarding your comment related to SNI list management and behavior in the RST 
document:

I understand the need to explicitly specify specific certificates for specific 
hostnames.
However, we need to deliver the lowest common denominator for this feature, 
one which every vendor is able to support.
In this case, specifying a hostname for a certificate will not be supported 
by Radware.
The original proposal with an ordered certificates list may be the lowest 
common denominator for all vendors, and we should find out if this is the case.
If not, managing a simple non-ordered list will probably be the lowest common 
denominator.

With the proposed flavors framework considered, extra SNI management 
capabilities may be represented for providers,
but meanwhile we should agree on a proposal that can be implemented by all 
vendors.
What are your thoughts on this?

Regarding the SNIPolicy, I agree and will change the document accordingly.

Thanks,
Evg





-Original Message-
From: Evgeny Fedoruk
Sent: Sunday, June 15, 2014 1:55 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

Hi All,

The document was updated and is ready for the next review round.
Main things that were changed:
1. Comments were addressed
2. No back-end re-encryption supported
3. Intermediate certificates chain supported
*Open question: Should the chain be stored in the same TLS container as the 
certificate?

Please review
Regards,
Evgeny


-Original Message-
From: Douglas Mendizabal 
[mailto:douglas.mendiza...@rackspace.com]
Sent: Wednesday, June 11, 2014 10:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

Hi Doug,


Barbican does guarantee the integrity and availability of the secret, unless 
the owner of the secret deletes it from Barbican.  We’re not encouraging that 
you store a shadow-copy of the secret either.  This was proposed by the LBaaS 
team as a possible workaround for your use case.
Our recommendation was that there are two options for dealing with Secrets 
being deleted from under you:

If you want to control the lifecycle of the secret so that you can prevent the 
user from deleting the secret, then the secret should be owned by LBaaS, not by 
the user.  You can achieve this by asking the user to upload the secret via 
LBaaS api, and then use Barbican on the back end to store the secret under the 
LBaaS tenant.

If you want the user to own and manage their secret in Barbican, then you have 
to deal with the situation where the user deletes a secret and it is no longer 
available to LBaaS.  This is a situation you would have to deal with even with 
a reference-counting and force-deleting Barbican, so I don’t think you really 
gain anything from all the complexity you’re proposing to add to Barbican.

Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-18 Thread Stephen Balukoff
Hi Evg,

I do not think stunnel supports an "ordered list" without hostnames. Since
we're talking about making the reference implementation use stunnel for TLS
termination, then this seems like it's important to support its behavioral
model.

It is possible to extract hostnames from the CN and x509v3 Subject
Alternative Names in the certs, but, as has been discussed previously,
these can overlap, and it's not always reliable to rely on this data from
the certs themselves. So, while I have nothing against having an ordered
certificate list, stunnel won't use the order here, and stunnel will likely
have unexpected behavior if hostnames are duplicated.

Would it work for Radware to simply order the (unique) hostnames
alphabetically, and put any wildcard certificates at the end of the list?
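
The ordering Stephen suggests is straightforward to express (a sketch; the
hostname set would come from the CN/subjectAltName extraction he describes):

```python
def order_hostnames(hostnames):
    """De-duplicate hostnames, sort alphabetically, wildcards last."""
    unique = set(hostnames)
    exact = sorted(h for h in unique if not h.startswith("*."))
    wildcards = sorted(h for h in unique if h.startswith("*."))
    return exact + wildcards
```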

Also, while I'm loath to ask for details on a proprietary system: How does
Radware do SNI *without* hostnames? Isn't that entirely the point of SNI?
Client sends a hostname, and server responds with the certificate that
applies to that hostname?

Thanks,
Stephen


On Wed, Jun 18, 2014 at 8:00 AM, Evgeny Fedoruk  wrote:

> Hi Stephen,
> Regarding your comment related to SNI list management and behavior in the
> RST document:
>
> I understand the need to explicitly specify certificates for
> specific hostnames.
> However, we need to deliver the lowest common denominator for this
> feature, one which every vendor is able to support.
> In this case, specifying a hostname per certificate will not be supported
> by Radware.
> The original proposal with an ordered certificate list may be the lowest
> common denominator for all vendors, and we should find out if this is the
> case.
> If not, managing a simple unordered list will probably be the lowest
> common denominator.
>
> With the proposed flavors framework considered, extra SNI management
> capabilities may be exposed by providers,
> but meanwhile we should agree on a proposal that can be implemented by all
> vendors.
> What are your thoughts on this?
>
> Regarding the SNIPolicy, I agree and will change the document accordingly.
>
> Thanks,
> Evg
>
>
>
>
>
> -Original Message-
> From: Evgeny Fedoruk
> Sent: Sunday, June 15, 2014 1:55 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on
> Gerrit
>
> Hi All,
>
> The document was updated and is ready for the next review round.
> Main things that were changed:
> 1. Comments were addressed
> 2. No back-end re-encryption supported
> 3. Intermediate certificates chain supported
> *Open question: Should the chain be stored in the same TLS container as
> the certificate?
>
> Please review
> Regards,
> Evgeny
>
>
> -Original Message-
> From: Douglas Mendizabal [mailto:douglas.mendiza...@rackspace.com]
> Sent: Wednesday, June 11, 2014 10:22 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on
> Gerrit
>
> Hi Doug,
>
>
> Barbican does guarantee the integrity and availability of the secret,
> unless the owner of the secret deletes it from Barbican.  We’re not
> encouraging that you store a shadow-copy of the secret either.  This was
> proposed by the LBaaS team as a possible workaround for your use case.
> Our recommendation was that there are two options for dealing with Secrets
> being deleted from under you:
>
> If you want to control the lifecycle of the secret so that you can prevent
> the user from deleting the secret, then the secret should be owned by
> LBaaS, not by the user.  You can achieve this by asking the user to upload
> the secret via LBaaS api, and then use Barbican on the back end to store
> the secret under the LBaaS tenant.
>
> If you want the user to own and manage their secret in Barbican, then you
> have to deal with the situation where the user deletes a secret and it is
> no longer available to LBaaS.  This is a situation you would have to deal
> with even with a reference-counting and force-deleting Barbican, so I don’t
> think you really gain anything from all the complexity you’re proposing to
> add to Barbican.
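
Option two above amounts to treating the secret as something that can vanish at
any time. A minimal sketch of that defensive pattern (the dict-like
`secret_store` interface and `SecretUnavailable` name here are hypothetical, not
the actual barbicanclient API):

```python
class SecretUnavailable(Exception):
    """Raised when a user has deleted a secret LBaaS still references."""


def load_certificate(secret_store, secret_ref):
    """Fetch a TLS secret, surfacing deletion as a distinct error.

    The caller can then mark the listener as ERROR instead of crashing.
    """
    try:
        return secret_store[secret_ref]
    except KeyError:
        raise SecretUnavailable(secret_ref)
```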
>
> -Douglas M.
>
>
>
> On 6/11/14, 12:57 PM, "Doug Wiegley"  wrote:
>
> >There are other fundamental things about secrets, like relying on their
> >presence, and not encouraging a proliferation of a dozen
> >mini-secret-stores everywhere to get around that fact, which makes it
>less secret.  Have you considered a "force" delete flag, required if
>some service is using the secret, sort of "rm" vs "rm -f", to avoid the
> >obvious foot-shooting use cases, but still allowing the user to nuke it
> >if necessary?
> >
> >Thanks,
> >Doug
> >
> >
> >On 6/11/14, 11:43 AM, "Clark, Robert Graham"  wrote:
> >
> >>Users have to be able to delete their secrets from Barbican, it's a
> >>fundamental key-management requirement.
> >>
> >>> -Original Message-
> >>> From: Eichberger, German
> >>> Sent: 11 June 2014 17

Re: [openstack-dev] [heat] agenda for OpenStack Heat meeting 2014-06-18 20:00 UTC

2014-06-18 Thread Mike Spreitzer
A good time was had by all.

http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-06-18-20.00.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-18 Thread Jay Pipes

On 06/17/2014 05:42 PM, Daniel P. Berrange wrote:

On Tue, Jun 17, 2014 at 04:32:36PM +0100, Pádraig Brady wrote:

On 06/13/2014 02:22 PM, Day, Phil wrote:

I guess the question I’m really asking here is:  “Since we know resize down 
won’t work in all cases,
and the failure if it does occur will be hard for the user to detect,
should we just block it at the API layer and be consistent across all 
Hypervisors ?”


+1

There is an existing libvirt blueprint:
   https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down
which I've never been in favor of:
   https://bugs.launchpad.net/nova/+bug/1270238/comments/1


All of the functionality around resizing VMs to match a different
flavour seem to be a recipe for unleashing a torrent of unfixable
bugs, whether resizing disks, adding CPUs, RAM or any other aspect.


+1

I'm of the opinion that we should plan to rip resize functionality out 
of (the next major version of) the Compute API and have a *single*, 
*consistent* API for migrating resources. No more "API extension X for 
migrating this kind of thing, and API extension Y for this kind of 
thing, and API extension Z for migrating /live/ this type of thing."


There should be One "move" API to Rule Them All, IMHO.

Best,
-jay




Re: [openstack-dev] [nova] locked instances and snaphot

2014-06-18 Thread Jay Pipes

On 06/18/2014 01:15 PM, Day, Phil wrote:

-Original Message- From: Ahmed RAHAL
[mailto:ara...@iweb.com] Sent: 18 June 2014 01:21 To:
openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
[nova] locked instances and snaphot

Hi there,

Le 2014-06-16 15:28, melanie witt a écrit :

Hi all,


[...]


During the patch review, a reviewer raised a concern about the
purpose of instance locking and whether prevention of snapshot
while an instance is locked is appropriate. From what we
understand, instance lock is meant to prevent unwanted
modification of an instance. Is snapshotting considered a logical
modification of an instance? That is, if an instance is locked to
a user, they take a snapshot, create another instance using that
snapshot, and modify the instance, have they essentially modified
the original locked instance?

I wanted to get input from the ML on whether it makes sense to
disallow snapshot an instance is locked.


Beyond 'preventing accidental change to the instance', locking
could be seen as 'preventing any operation' to the instance. If I,
as a user, lock an instance, it certainly only prevents me from
accidentally deleting the VM. As I can unlock whenever I need to,
there seems to be no other use case (chmod-like).


It blocks any operation that would change the instance's
state:  Delete, stop, start, reboot, rebuild, resize, shelve, pause,
resume, etc.

In keeping with that I don't see why it should block a snapshot, and
having to unlock it to take a snapshot doesn't feel good either.
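
Nova enforces this with a decorator on state-changing API actions; the sketch
below is a simplified illustration of that idea, not nova's actual code (the
dict-shaped instance is an assumption):

```python
import functools


class InstanceIsLocked(Exception):
    pass


def check_instance_lock(fn):
    """Reject the action if the instance is locked."""
    @functools.wraps(fn)
    def wrapper(instance, *args, **kwargs):
        if instance.get("locked"):
            raise InstanceIsLocked(instance["uuid"])
        return fn(instance, *args, **kwargs)
    return wrapper


@check_instance_lock
def reboot(instance):
    return "rebooting"


# Per the argument above, snapshot simply would not carry the decorator.
def snapshot(instance):
    return "snapshot-taken"
```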


VMs should be cattle, not pets, but yes, a locked instance should be 
able to be snapshotted, for sure, IMO.



If I, as an admin, lock an instance, I am preventing operations on
a VM and am preventing an ordinary user from overriding the lock.


The driver for doing this as an admin is slightly different - it's to
stop the user from changing the state of an instance rather than a
protection.   A couple of use cases: - if you want to migrate a VM
and the user is running a continual sequence of say reboot commands
at it putting an admin lock in place gives you a way to break into
that cycle. - There are a few security cases where we need to take
over control of an instance, and make sure it doesn't get deleted by
the user


But the user would still be able to SSH into their instance and do:

shutdown -r now

Best,
-jay


This is a form of authority enforcing that maybe should prevent
even snapshots to be taken off that VM. The thing is that enforcing
this beyond the limits of nova is AFAIK not there, so
cloning/snapshotting cinder volumes will still be feasible.
Enforcing it only in nova as a kind of 'security feature' may
become misleading.

The more I think about it, the more I get to think that locking is
just there to avoid mistakes, not voluntary misbehaviour.

--

Ahmed





[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-06-18 Thread Brian Haley
The Neutron L3 Subteam will meet tomorrow at the regular time in
#openstack-meeting-3.  The agenda [1] is posted, please update as needed.

I'll be standing in for Carl as he's on vacation this week.

Brian Haley

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda



Re: [openstack-dev] [OpenStack-Dev][Cinder] Review days? (open to ANYBODY and EVERYBODY)

2014-06-18 Thread Erlon Cruz
Great John,

I'll push fellows here to help. Couldn't join today but I'll put it in my
schedule for the next weeks. Do you plan to hold this review day every
Wednesday?

Erlon


On Mon, Jun 16, 2014 at 9:31 AM, Kerr, Andrew 
wrote:

> +1
>
> Andrew Kerr
>
>
> On 6/13/14, 10:30 AM, "Duncan Thomas"  wrote:
>
> >Same as Jay, for much the same reasons. Having a fixed calendar time
> >makes it easy for me to put up a 'do not disturb' sign.
> >
> >On 13 June 2014 05:10, Jay Bryant  wrote:
> >> John,
> >>
> >> +2
> >>
> >> I am guilty of falling behind on reviews. Pulled in to a lot of other
> >>stuff
> >> since the summit ... and before.
> >>
> >> Having prescribed time on my calendar is a good idea.  Just put it on my
> >> calendar.
> >>
> >> Jay
> >>
> >> On Jun 12, 2014 10:49 PM, "John Griffith" 
> >> wrote:
> >>>
> >>> Hey Everyone,
> >>>
> >>> So I've been noticing some issues with regards to reviews in Cinder
> >>> lately, namely we're not keeping up very well.  Most of this is a math
> >>> problem (submitters >> reviewers).  We're up around 200+ patches in the
> >>> queue, and a large number of them have no negative feedback but have
> >>>just
> >>> been waiting patiently (some > 2 months).
> >>>
> >>> Growth is good, new contributors are FANTASTIC... but stale
> >>>submissions in
> >>> the queue are BAD, and I hate for people interested in contributing to
> >>> become discouraged and just go away (almost as much as I hate emails
> >>>asking
> >>> me to review patches).
> >>>
> >>> I'd like to propose we consider one or two review days a week for a
> >>>while
> >>> to try and work on our backlog.  I'd like to propose that on these
> >>>days we
> >>> make an attempt to NOT propose new code (or at least limit it to
> >>>bug-fixes
> >>> [real bugs, not features disguised as bugs]) and have an agreement from
> >>> folks to focus on actually doing reviews and using IRC to collaborate
> >>> together and knock some of these out.
> >>>
> >>> We did this sort of thing over a virtual meetup and it was really
> >>> effective, I'd like to see if we can't do something for a brief
> >>>duration
> >>> over IRC.
> >>>
> >>> I'm thinking we give it a test run, set aside a few hours next Wed
> >>>morning
> >>> to start (coinciding with our Cinder weekly meeting since many folks
> >>>around
> >>> that morning across TZ's etc) where we all dedicate some time prior to
> >>>the
> >>> meeting to focus exclusively on helping each other get some reviews
> >>>knocked
> >>> out.  As a reminder Cinder weekly meeting is 16:00 UTC.
> >>>
> >>> Let me know what you all think, and keep in mind this is NOT limited to
> >>> just current regular "Block-Heads" but anybody in the OpenStack
> >>>community
> >>> that's willing to help out and of course new reviewers are MORE than
> >>> welcome.
> >>>
> >>> Thanks,
> >>> John
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> >--
> >Duncan Thomas
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing

2014-06-18 Thread Kashyap Chamarthy
On Sat, Jun 14, 2014 at 09:39:50PM -0700, Sukhdev Kapur wrote:
> Oppss...sorry wrong link... please use this
> http://paste.openstack.org/show/84073/.

Just a friendly note -- pastebins usually expire, and if someone looks
at the archives years later, there'll be nothing to look at.

It's more useful to reproduce the content in plain text in email.
 

-- 
/kashyap



Re: [openstack-dev] OpenStack Designate DNSaaS

2014-06-18 Thread ESWARAN, PK
-Original Message-
From: SCOLLARD, JAMES 
Sent: Wednesday, June 18, 2014 02:25 PM
To: O'CONNELL, MATT; Joe Mcbride; OpenStack Development Mailing List (not for 
usage questions); MIDGLEY, BRAD
Cc: 'raymond.h...@accenture.com' (raymond.h...@accenture.com); BOEHMER, JEFF; 
MICHALEK, KEN; BISHOP, JAMES; ESWARAN, PK; SHANGHAVI, PRAFUL B; JALLI, RANDEEP; 
PACHECO, RODOLFO J; O'KEEFE, TIMOTHY J
Subject: RE: [openstack-dev] OpenStack Designate DNSaaS

Agreed.  If the BIND9 zone transfer from Designate to external is not fixed 
already it will be soon.

Thanks.

-Original Message-
From: O'CONNELL, MATT 
Sent: Wednesday, June 18, 2014 1:51 PM
To: Joe Mcbride; OpenStack Development Mailing List (not for usage questions); 
MIDGLEY, BRAD
Cc: 'raymond.h...@accenture.com' (raymond.h...@accenture.com); BOEHMER, JEFF; 
MICHALEK, KEN; BISHOP, JAMES; ESWARAN, PK; SHANGHAVI, PRAFUL B; JALLI, RANDEEP; 
PACHECO, RODOLFO J; SCOLLARD, JAMES; O'KEEFE, TIMOTHY J
Subject: RE: [openstack-dev] OpenStack Designate DNSaaS

I think the thinking was that the AT&T DNS backend was more desirable (hardened, 
load balanced, global, configurable,
non-prototype, 30 second updates), so the idea was to go around the Designate 
DNS function and 
call the AT&T DNS API and use Designate for customer and other non-DNS 
function points. I think that is the plan for CORA. 

I would think there are other enterprise customers like AT&T that already have 
global systems
that would just need hooks from OpenStack to their current systems,
and OpenStack could just be the framework for those calls, unless they don't 
already have a solution.

Matt

-Original Message-
From: Joe Mcbride [mailto:jmcbr...@rackspace.com] 
Sent: Wednesday, June 18, 2014 11:00 AM
To: OpenStack Development Mailing List (not for usage questions); MIDGLEY, BRAD
Cc: 'raymond.h...@accenture.com' (raymond.h...@accenture.com); O'CONNELL, MATT; 
BOEHMER, JEFF; MICHALEK, KEN; BISHOP, JAMES; ESWARAN, PK; SHANGHAVI, PRAFUL B; 
JALLI, RANDEEP; PACHECO, RODOLFO J; SCOLLARD, JAMES
Subject: Re: [openstack-dev] OpenStack Designate DNSaaS

Brad,
It seems to me you have a classic migration problem on your hands.
Assuming your ideal end state is to completely migrate to Designate and
minimize customizations (which is definitely preferable for the long
term), your strategy is the real challenge.

One approach is to deploy Designate and put all new domains and/or tenants
there.  Over time, migrate domains and tenants over. Your consumers will
naturally want the new benefits and hopefully facilitate the changes.

Another is to have an old fashioned "cut over". There is considerable risk
which can be mitigated if your domains don't change often and your
consumer base is aligned.

Alternatively, you can run both in parallel and sync between them at the
database layer until you can ultimately switch users over.  This approach
is only recommended if you have to support an older API on the previous
system. You can also synchronize to the same set of name servers if you
can not change them. The problem with this approach is you will write a
lot of throwaway code.

SOME QUESTIONS:
- Is there an API available to your current system for your consumers? If
no, that greatly simplifies things.
- Can you easily change your name servers with your registrars?
- Why bother keeping the old system around?


On 6/17/14, 4:38 PM, "Hayes, Graham"  wrote:

>Unfortunately #1 is not a real option - designate needs the storage layer
>to operate.
>
>I would guess #2 or #3 would be the more feasible options.
>
>Graham
>
>
>"MIDGLEY, BRAD"  wrote:
>
>
>
>PK,
>
>I'd agree with pursuing #1 or with a simple reference implementation like
>minidns if ripping it out is disruptive.
>
>Brad
>
>_
>From: ESWARAN, PK
>Sent: Tuesday, June 17, 2014 1:26 PM
>To: openstack-dev@lists.openstack.org; Graham Hayes
>(graham.ha...@hp.com); Kiall Mac Innes (ki...@hp.com)
>Cc: PACHECO, RODOLFO J; JALLI, RANDEEP; O'CONNELL, MATT; MICHALEK, KEN;
>MIDGLEY, BRAD; BISHOP, JAMES; 'raymond.h...@accenture.com'
>(raymond.h...@accenture.com); BOEHMER, JEFF; SCOLLARD, JAMES; SHANGHAVI,
>PRAFUL B
>Subject: OpenStack Designate DNSaaS
>
>
>Dear OpenStack Dev Team:
>I exchanged a few thoughts on Designate DNSaaS with Graham Hayes
>of HP and he advised me to send this out to a larger DEV audience. Your
>feedback will help the Designte DNSaaS project and also the AT&T Cloud
>group.
>
>AT&T Cloud group is investigating "Designate DNSaaS" and we would like to
>indulge in this emerging technology. We could embrace and also
>participate into this new OpenStack technology. I am also copying a few
>of the AT&T team members.
>
>In AT&T in the near term, we would like to use Designate DNSaaS as a
>frontend while retaining the current AT&T backend DNS infrastructure in
>place. The main issue in this is the role of "Designate Database" as
>MASTER database of record for all Designate DNS provisionin

Re: [openstack-dev] OpenStack Designate DNSaaS

2014-06-18 Thread Joe Mcbride

On 6/18/14, 12:51 PM, "O'CONNELL, MATT"  wrote:

>I think the thinking was the AT&T DNS backend was more desirable
>(hardened, load balanced, global, configurable,
>non-prototype, 30 second updates), so the idea was to go around the
>Designate DNS function and
>call the ATT DNS API and use the Designate for customer and other non DNS
>function points. I think that is the plan for CORA.

I definitely agree that is a plausible approach and Designate should be
able to accommodate customers with a running set of authoritative name
servers (assuming supported makes/versions of name servers) as you
describe.

OP’s challenge is greater than keeping their current backend: they have a
"database of record" to manage.
> In AT&T, we already have a "DNS bind provisioner and a database of
> record" that keeps our DNS authoritative servers updated.


My feeling is that eliminating that database would be a better long term
solution as it minimizes upgrade complexity.



[openstack-dev] [nova] Set compute_node:hypervisor_nodename as unique and not null

2014-06-18 Thread Manickam, Kanagaraj
Hi,

This mail is regarding a required model change in nova. Please find more 
details below:

As we know, the Nova DB has the table "compute_nodes" for modelling hypervisors, 
and it uses the "hypervisor_hostname" field to represent the hypervisor name.
This field is significant in the os-hypervisor extension API, which uses it to 
uniquely identify the hypervisor.

Consider the case where a given environment has more than one hypervisor 
(KVM, ESX, Xen, etc.) with the same hostname: os-hypervisor, and thereby the 
Horizon Hypervisor panel and the nova hypervisors-servers command, will fail.
There is a defect (https://bugs.launchpad.net/nova/+bug/1329261) already filed 
on the VMware VC driver to address this issue, by making sure that a unique 
value is generated for the VC driver's hypervisor.  But it is good to fix this 
at the model level as well, by making the "hypervisor_hostname" field always 
unique. A bug https://bugs.launchpad.net/nova/+bug/1329299 is filed for the same.
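
What a model-level fix buys can be shown with a plain SQLite table (illustrative
only -- nova's real schema differs, and soft-deleted rows mean a naive unique
constraint would also need to account for the `deleted` column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE compute_nodes (
        id INTEGER PRIMARY KEY,
        hypervisor_hostname TEXT NOT NULL,
        UNIQUE (hypervisor_hostname)
    )
""")
conn.execute(
    "INSERT INTO compute_nodes (hypervisor_hostname) VALUES ('node-1')")
try:
    # A second hypervisor reporting the same hostname is now rejected
    conn.execute(
        "INSERT INTO compute_nodes (hypervisor_hostname) VALUES ('node-1')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```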

Before fixing this bug, I would like to get the opinion of the community. 
Could you please help here!

Regards
Kanagaraj M


Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing

2014-06-18 Thread Salvatore Orlando
When I get these errors I try to import the modules for which I get an
import error in a python shell.
This sometimes gives a fairly explanatory error message.
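
That check is easy to script as well (the tempest module path in the comment is
just an example):

```python
import importlib
import traceback


def probe(module_name):
    """Import a module and print the full traceback if it fails."""
    try:
        importlib.import_module(module_name)
        print(module_name, "imports cleanly")
        return True
    except Exception:
        traceback.print_exc()
        return False


# e.g. probe("tempest.api.network.test_networks")
```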

That will hopefully help.
Also - consider submitting your question to ask.openstack.org, where a lot
more experienced operators and developers might have an answer to your
problem.

Salvatore


On 18 June 2014 18:54, Luke Gorrie  wrote:

> On 18 June 2014 18:24, Salvatore Orlando  wrote:
>
>> it seems something is not quite right with your tempest environment - you
>> have import errors at startup [1]
>> This might be happening because of missing dependencies, or, if you have
>> applied some custom patches to tempest trunk, possibly those are causing
>> some problems.
>>
>
> Interesting. I'm using a fresh checkout of tempest from master on github
> and running "pip install -r requirements.txt" in tempest/. Any ideas on
> what I should check next?
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing

2014-06-18 Thread Luke Gorrie
On 18 June 2014 18:24, Salvatore Orlando  wrote:

> it seems something is not quite right with your tempest environment - you
> have import errors at startup [1]
> This might be happening because of missing dependencies, or, if you have
> applied some custom patches to tempest trunk, possibly those are causing
> some problems.
>

Interesting. I'm using a fresh checkout of tempest from master on github
and running "pip install -r requirements.txt" in tempest/. Any ideas on
what I should check next?


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Martin Geisler
Chris Friesen  writes:

> On 06/18/2014 08:35 AM, Duncan Thomas wrote:
>> On 18 June 2014 15:28, Matthew Booth  wrote:
>>> The answer is not always more
>>> review: there are other tools in the box. Imagine we spent 50% of the
>>> time we spend on review writing tempest tests instead.
>>
>> Or we push the work off of core into the wider community and require
>> 100% unit test coverage of every change *and* record the tempest
>> coverage of any changed lines so that the reviewer can gauge better
>> what the risks are?
>
> 100% coverage is not realistic.

I was thinking the same, but there are actually some non-trivial
projects that have 100% code coverage in their tests. These are two
large projects that I know of:

* http://www.pylonsproject.org/projects/pyramid/about: "Every release of
  Pyramid has 100% statement coverage via unit tests"

* http://www.sqlite.org/testing.html: "100% branch test coverage in an
  as-deployed configuration"

Whether 100% test coverage is worth it is another question. People
sometimes confuse "100% test coverage" with "100% bug free", which is
just wrong.
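
Martin's caveat is easy to demonstrate: a test suite can execute every
statement and still miss a bug.

```python
def safe_ratio(a, b):
    return a / b  # a single statement, so any one call "covers" it fully


# One passing test yields 100% statement coverage of safe_ratio...
assert safe_ratio(6, 3) == 2.0

# ...yet the division-by-zero bug is still there.
try:
    safe_ratio(1, 0)
    bug_found = False
except ZeroDivisionError:
    bug_found = True
```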

-- 
Martin Geisler

http://google.com/+MartinGeisler




Re: [openstack-dev] [nova] Reducing quota below utilisation

2014-06-18 Thread Vishvananda Ishaya

On Jun 17, 2014, at 12:53 PM, Jan van Eldik  wrote:

> Just
> 
> On 06/17/2014 08:18 PM, Tim Bell wrote:
>> We have some projects which are dynamically creating VMs up to their
>> quota. Under some circumstances, as cloud administrators, we would like
>> these projects to shrink and make room for other higher priority work.
>> 
>> We had investigated setting the project quota below the current
>> utilisation (i.e. effectively delete only, no create). This will
>> eventually match the desired level of VMs as the dynamic workload leads
>> to old VMs being deleted and new ones cannot be created.
>> 
>> However, OpenStack does not allow a quota to be set to below the current
>> usage.
> 
> Just to add that "nova help quota-update" suggests that the "--force" option 
> should do the trick:
> 
>  --force   Whether force update the quota even if the
>already used and reserved exceeds the new quota
> 
> However, when trying to lower the quota below the current usage value,
> we get:
> 
> 
> $ nova absolute-limits --te $ID|grep -i core
> | totalCoresUsed  | 11|
> | maxTotalCores   | 20|
> $ nova quota-update --cores 2 $ID
> ERROR: Quota value 2 for cores are greater than already used and reserved 11 
> (HTTP 400) (Request-ID: req-c1dd6add-772c-4cd5-9a13-c33940698f93)
> $ nova quota-update --cores 2 --force $ID
> ERROR: Quota limit must greater than 11. (HTTP 400) (Request-ID: 
> req-cfc58810-35af-46a3-b554-59d34c647e40)
> 
> Am I misunderstanding what "--force" does?

That was my understanding of force as well. This looks like a bug to me.
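
The --force semantics everyone here expects would look something like this (a
sketch of the intended check, not nova's actual quota code):

```python
def validate_quota_limit(new_limit, in_use, reserved, force=False):
    """Validate a quota update; skip the usage check only when forced."""
    if new_limit < -1:
        raise ValueError("limit must be -1 (unlimited) or >= 0")
    if not force and 0 <= new_limit < in_use + reserved:
        raise ValueError(
            "quota %d is below current usage %d; use force to override"
            % (new_limit, in_use + reserved))
    return new_limit
```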

Vish

> 
> BTW: I believe the first error message is wrong, and will propose
> a patch.
> 
>   cheers, Jan
> 
> 
> 
>> 
>> This seems a little restrictive … any thoughts from others ?
>> 
>> Tim
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing

2014-06-18 Thread Salvatore Orlando
Hi Luke,

it seems something is not quite right with your tempest environment - you
have import errors at startup [1]
This might be happening because of missing dependencies, or, if you have
applied some custom patches to tempest trunk, possibly those are causing
some problems.

Salvatore

[1] http://paste.openstack.org/show/84406/


On 18 June 2014 18:01, Luke Gorrie  wrote:

> On 18 June 2014 15:48, Salvatore Orlando  wrote:
>
>> Hi Luke,
>>
>> That kind of message usually shows up in unit tests job when there is
>> some syntax error or circular import. But I think that it's not your case.
>> Usually you see an "import error" message towards the end of the
>> "garbage".
>>
>> If you can point me to a failing log of your CI I can have a look at it
>> and see if I can help you.
>>
>
> Thanks, Salvatore!
>
> I have a log here: http://88.198.8.227:81/html/ci-logs/problem-1.log
>
> and on this machine I can reproduce the problem using the steps in the bug
> that I referenced above:
>
> ci@egg:/tmp$ mkdir bug
> ci@egg:/tmp$ cd !$
> cd bug
> ci@egg:/tmp/bug$ git clone https://github.com/openstack/tempest.git
> Cloning into 'tempest'...
> remote: Reusing existing pack: 35264, done.
> remote: Counting objects: 229, done.
> remote: Compressing objects: 100% (221/221), done.
> remote: Total 35493 (delta 105), reused 25 (delta 8)
> Receiving objects: 100% (35493/35493), 8.51 MiB | 1.61 MiB/s, done.
> Resolving deltas: 100% (25835/25835), done.
> Checking connectivity... done.
> ci@egg:/tmp/bug$ cd tempest
> ci@egg:/tmp/bug/tempest$ sudo pip install -r requirements.txt
> [60+ lines of pip "Requirement already satisfied" output snipped; the
> full output appears in the original message later in this digest]

[openstack-dev] [Congress] Use Cases

2014-06-18 Thread Tim Hinrichs
Hi all,

We’ve been working on developing a list of use cases for Congress, which we’ve 
also started voting on so as to prioritize our efforts.  Please feel free take 
a look, leave feedback, add to the list, vote, whatever.

https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit

If you add a use case, please use the one called “Public/private networks with 
group membership” as a guide.  The policy doesn’t need to be written in the 
policy language syntax, but it needs to be precise enough for us to figure out 
if it can be written in that language.

For those of you who have already contributed use cases, could you take a look 
at “Public/private networks with group membership” to see what info I believe 
we need for each use case?  And then if your use case is missing something, 
could you add it (or leave a note explaining why the info is 
unnecessary/unknown)?

Thanks,
Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] [Murano] Follow up on cross-project session

2014-06-18 Thread Ruslan Kamaldinov
Hi Thierry!

On Wed, Jun 18, 2014 at 1:14 PM, Thierry Carrez  wrote:
> So to take a practical example, Murano lets you pick (using UI or CLI) a
> wordpress package (which requires a DB) and compose it with a mysql
> package (which provides a DB), and will deploy that composition using
> Heat ? And additionally, it provides package-publisher-friendly features
> like certification, licensing and billing?

That's a correct example.

> Does that mean, to come back to my example above, that we could substitute a 
> Trove resource to the mysql package?
Yes. Catalog may have several packages for MySQL implementations:
MySQL Galera, single node, Trove-based. All of them can return a
database connection string which will be used by wordpress.

> or put a Neutron LBaaS load balancer on top ?
It should be possible if application can run behind a load balancer
and there is an LBaaS resource in Heat.

> or publish a DNS entry via Designate ?
Should be possible, given that the underlying infrastructure and catalog
provide the needed resources. As Murano uses Heat under the hood,
whatever is possible in Heat will be possible in Murano. Once
Designate resource is available in Heat, application developers can
use it.


Thanks,
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing

2014-06-18 Thread Luke Gorrie
On 18 June 2014 15:48, Salvatore Orlando  wrote:

> Hi Luke,
>
> That kind of message usually shows up in unit tests job when there is some
> syntax error or circular import. But I think that it's not your case.
> Usually you see an "import error" message towards the end of the "garbage".
>
> If you can point me to a failing log of your CI I can have a look at it
> and see if I can help you.
>

Thanks, Salvatore!

I have a log here: http://88.198.8.227:81/html/ci-logs/problem-1.log

and on this machine I can reproduce the problem using the steps in the bug
that I referenced above:

ci@egg:/tmp$ mkdir bug
ci@egg:/tmp$ cd !$
cd bug
ci@egg:/tmp/bug$ git clone https://github.com/openstack/tempest.git
Cloning into 'tempest'...
remote: Reusing existing pack: 35264, done.
remote: Counting objects: 229, done.
remote: Compressing objects: 100% (221/221), done.
remote: Total 35493 (delta 105), reused 25 (delta 8)
Receiving objects: 100% (35493/35493), 8.51 MiB | 1.61 MiB/s, done.
Resolving deltas: 100% (25835/25835), done.
Checking connectivity... done.
ci@egg:/tmp/bug$ cd tempest
ci@egg:/tmp/bug/tempest$ sudo pip install -r requirements.txt
Requirement already satisfied (use --upgrade to upgrade):
pbr>=0.6,!=0.7,<1.0 in /usr/local/lib/python2.7/dist-packages (from -r
requirements.txt (line 1))
Requirement already satisfied (use --upgrade to upgrade): anyjson>=0.3.3 in
/usr/lib/python2.7/dist-packages (from -r requirements.txt (line 2))
Requirement already satisfied (use --upgrade to upgrade): httplib2>=0.7.5
in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line
3))
Requirement already satisfied (use --upgrade to upgrade):
jsonschema>=2.0.0,<3.0.0 in /usr/local/lib/python2.7/dist-packages (from -r
requirements.txt (line 4))
Requirement already satisfied (use --upgrade to upgrade): testtools>=0.9.34
in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line
5))
Requirement already satisfied (use --upgrade to upgrade): lxml>=2.3 in
/usr/lib/python2.7/dist-packages (from -r requirements.txt (line 6))
Requirement already satisfied (use --upgrade to upgrade):
boto>=2.12.0,!=2.13.0 in /usr/local/lib/python2.7/dist-packages (from -r
requirements.txt (line 7))
Requirement already satisfied (use --upgrade to upgrade): paramiko>=1.13.0
in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line
8))
Requirement already satisfied (use --upgrade to upgrade): netaddr>=0.7.6 in
/usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 9))
Requirement already satisfied (use --upgrade to upgrade):
python-glanceclient>=0.9.0 in /opt/stack/python-glanceclient (from -r
requirements.txt (line 10))
Requirement already satisfied (use --upgrade to upgrade):
python-keystoneclient>=0.8.0 in /opt/stack/python-keystoneclient (from -r
requirements.txt (line 11))
Requirement already satisfied (use --upgrade to upgrade):
python-novaclient>=2.17.0 in /opt/stack/python-novaclient (from -r
requirements.txt (line 12))
Requirement already satisfied (use --upgrade to upgrade):
python-neutronclient>=2.3.4,<3 in /opt/stack/python-neutronclient (from -r
requirements.txt (line 13))
Requirement already satisfied (use --upgrade to upgrade):
python-cinderclient>=1.0.6 in /opt/stack/python-cinderclient (from -r
requirements.txt (line 14))
Requirement already satisfied (use --upgrade to upgrade):
python-heatclient>=0.2.9 in /opt/stack/python-heatclient (from -r
requirements.txt (line 15))
Requirement already satisfied (use --upgrade to upgrade):
python-ironicclient in /usr/local/lib/python2.7/dist-packages (from -r
requirements.txt (line 16))
Requirement already satisfied (use --upgrade to upgrade):
python-saharaclient>=0.6.0 in /usr/local/lib/python2.7/dist-packages (from
-r requirements.txt (line 17))
Requirement already satisfied (use --upgrade to upgrade):
python-swiftclient>=2.0.2 in /opt/stack/python-swiftclient (from -r
requirements.txt (line 18))
Requirement already satisfied (use --upgrade to upgrade):
testresources>=0.2.4 in /usr/local/lib/python2.7/dist-packages (from -r
requirements.txt (line 19))
Requirement already satisfied (use --upgrade to upgrade):
testrepository>=0.0.18 in /usr/local/lib/python2.7/dist-packages (from -r
requirements.txt (line 20))
Requirement already satisfied (use --upgrade to upgrade):
oslo.config>=1.2.0 in /usr/local/lib/python2.7/dist-packages (from -r
requirements.txt (line 21))
Requirement already satisfied (use --upgrade to upgrade): six>=1.6.0 in
/usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 22))
Requirement already satisfied (use --upgrade to upgrade): iso8601>=0.1.9 in
/usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 23))
Requirement already satisfied (use --upgrade to upgrade): fixtures>=0.3.14
in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line
24))
Requirement already satisfied (use --upgrade to upgrade):
testscenarios>=0.4 in /usr/local/lib/python2.7/dist-packages (from -r
requiremen

Re: [openstack-dev] [glance] Unifying configuration file

2014-06-18 Thread Mark Washenberger
On Tue, Jun 17, 2014 at 8:57 AM, Arnaud Legendre 
wrote:

> All the things that you mention here seem to be technical difficulties.
> I don't think technical difficulties should drive the experience of the
> user.
> Also, Zhi Yan seems to be able to make that happen :)
>

+1


>
> Thanks,
> Arnaud
>
> - Original Message -
> From: "Julien Danjou" 
> To: "Arnaud Legendre" 
> Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Sent: Tuesday, June 17, 2014 8:43:38 AM
> Subject: Re: [openstack-dev] [glance] Unifying configuration file
>
> On Tue, Jun 17 2014, Arnaud Legendre wrote:
>
> > @ZhiYan: I don't like the idea of removing the sample configuration
> file(s)
> > from the git repository. Many people do not want to have to checkout the
> > entire codebase and tox every time they have to verify a variable name
> in a
> > configuration file. I know many people who were really frustrated where
> they
> > realized that the sample config file was gone from the Nova repo.
> > However, I agree with the fact that it would be better if the sample was
> > 100% accurate: so the way I would love to see this working is to generate
> > the sample file every time there is a config change (this being totally
> > automated (maybe at the gate level...)).
>
> You're a bit late on this. :)
> So what I did these last months (year?) in several project, is to check
> at gate time the configuration file that is automatically generated
> against what's in the patches.
> That turned out to be a real problem because sometimes some options
> changes from the eternal module we rely on (e.g. keystone authtoken or
> oslo.messaging). In the end many projects (like Nova) disabled this
> check altogether, and therefore removed the generated configuration file
> from the git repository.
>
> > @Julien: I would be interested to understand the value that you see of
> > having only one config file? At this point, I don't see why managing one
> > file is more complicated than managing several files especially when they
> > are organized by categories. Also, scrolling through the registry settings
> > every time I want to modify an api setting seems to add some overhead.
>
> Because there's no way to automatically generate several configuration
> files with each its own set of options using oslo.config.
>
> Glance is (one of?) the last project in OpenStack to manually write its
> sample configuration file, which are not up to date obviously.
>
> So really this is mainly about following what every other projects did
> the last year(s).
>
> --
> Julien Danjou
> -- Free Software hacker
> --
> http://julien.danjou.info/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unifying configuration file

2014-06-18 Thread Mark McLoughlin
On Wed, 2014-06-18 at 09:29 -0400, Doug Hellmann wrote:
> On Wed, Jun 18, 2014 at 1:58 AM, Mark McLoughlin  wrote:
> > Hey
> >
> > On Tue, 2014-06-17 at 17:43 +0200, Julien Danjou wrote:
> >> On Tue, Jun 17 2014, Arnaud Legendre wrote:
> >> > @Julien: I would be interested to understand the value that you see of
> >> > having only one config file? At this point, I don't see why managing one
> >> > file is more complicated than managing several files especially when they
> >> > are organized by categories. Also, scrolling through the registry settings
> >> > every time I want to modify an api setting seems to add some overhead.
> >>
> >> Because there's no way to automatically generate several configuration
> >> files with each its own set of options using oslo.config.
> >
> > I think that's a failing of oslo.config, though. Glance's layout of
> > config files is useful and intuitive.
> 
> The config generator lets you specify the modules, libraries, and
> files to be used to generate a config file. It even has a way to
> specify which files to ignore. So I think we have everything we need
> in the config generator, but we need to run it more than once, with
> different inputs, to generate multiple files.

Yep, except the magic way we troll through the code, loading modules,
introspecting what config options were registered, etc. will likely make
this a frustrating experience to get right.

I took a little time to hack up a much more simple and explicit approach
to config file generation and posted a draft here:

  https://review.openstack.org/100946

The docstring at the top of the file explains the approach:

  https://review.openstack.org/#/c/100946/1/oslo/config/generator.py

Thanks,
Mark.
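The explicit-generator idea amounts to walking a known set of registered options and emitting commented-out defaults, once per output file. A toy sketch of that core step (illustrative data structures, not the actual oslo generator):

```python
def emit_sample(section, opts):
    # opts: {name: (default, help_text)} -- a stand-in for options
    # introspected from an explicitly listed set of modules.
    lines = ['[%s]' % section]
    for name in sorted(opts):
        default, help_text = opts[name]
        lines.append('# %s' % help_text)
        lines.append('#%s = %s' % (name, default))
    return '\n'.join(lines)
```

Running this once per (file, option-set) pair is what would let glance keep its per-service sample files while still generating them automatically.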


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-06-18 Thread Doug Hellmann
Thanks for the background. I think it has come up a couple of times
recently, so if we're converging on a solution that's good.

Doug

On Wed, Jun 18, 2014 at 10:52 AM, Davanum Srinivas  wrote:
> Doug,
>
> For the record, yes this came up before in
> https://review.openstack.org/#/c/59994. Gary and I talked about
> $imagecache.image_cache_subdirectory_name when discussing about that
> review.
>
> -- dims
>
> On Wed, Jun 18, 2014 at 10:34 AM, Gary Kotton  wrote:
>>
>>
>> On 6/18/14, 4:19 PM, "Doug Hellmann"  wrote:
>>
>>>On Wed, Jun 18, 2014 at 4:47 AM, Gary Kotton  wrote:
 Hi,
 I have encountered a problem with string substitution with the nova
 configuration file. The motivation was to move all of the glance
settings to
 their own section (https://review.openstack.org/#/c/100567/). The
 glance_api_servers had default setting that uses the current
glance_host and
 the glance port. This is a problem when we move to the 'glance' section.
 First and foremost I think that we need to decide on how we should
denote
 the string substitutions for group variables and then we can dive into
 implementation details. Does anyone have any thoughts on this?

 My thinking is that we should use a format of $<group>.<option>. An
 example is below.

 Original code:

 cfg.ListOpt('glance_api_servers',
             default=['$glance_host:$glance_port'],
             help='A list of the glance api servers available to nova. '
                  'Prefix with https:// for ssl-based glance api servers. '
                  '([hostname|ip]:port)'),

 Proposed change (in the glance section):

 cfg.ListOpt('api_servers',
             default=['$glance.host:$glance.port'],
             help='A list of the glance api servers available to nova. '
                  'Prefix with https:// for ssl-based glance api servers. '
                  '([hostname|ip]:port)',
             deprecated_group='DEFAULT',
             deprecated_name='glance_api_servers'),

 This would require some preprocessing on the oslo.cfg side to be able to
 understand that $glance is the specific group and host is the requested
 value in the group.

 Thanks
 Gary
>>>
>>>Do we need to set the variable off somehow to allow substitutions that
>>>need the literal '.' after a variable? How often is that likely to
>>>come up?
>>
>>
>> To be honest I think that this is a real edge case. I had a chat with
>> markmc on IRC and he suggested a different approach, which I liked,
>> regarding the specific patch. That is, to set the default to None and when
>> the data is accessed to check if it is None. If so then provide the
>> default values.
>>
>> We may still nonetheless need something like this in the future.
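The None-default approach described above looks roughly like this (a sketch with a plain dict standing in for the oslo.config object; names are illustrative):

```python
def get_api_servers(conf):
    # The option defaults to None; the glance_host/glance_port fallback
    # is computed only when the value is actually read.
    servers = conf.get('api_servers')
    if servers is None:
        return ['%(host)s:%(port)s' % conf]
    return servers
```

The trade-off, as noted below, is that the effective default is no longer visible in the option declaration and has to be documented separately.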
>>
>>>
>>>Doug
>>>


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>___
>>>OpenStack-dev mailing list
>>>OpenStack-dev@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: http://davanum.wordpress.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Chris Friesen

On 06/18/2014 08:35 AM, Duncan Thomas wrote:

On 18 June 2014 15:28, Matthew Booth  wrote:

The answer is not always more
review: there are other tools in the box. Imagine we spent 50% of the
time we spend on review writing tempest tests instead.


Or we push the work off of core into the wider community and require
100% unit test coverage of every change *and* record the tempest
coverage of any changed lines so that the reviewer can gauge better
what the risks are?


100% coverage is not realistic.

How would you handle bugfixes that depend on specific databases?

How would you handle bugs in the unit tests themselves? (Like bug 1298690,
where the sqlite database used for unit tests handles regexp()
differently than either mysql or postgres.)
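For reference, the sqlite regexp() gap can at least be made deterministic in unit tests by registering a Python implementation: sqlite has no built-in REGEXP and rewrites "X REGEXP Y" as a call to a user function named regexp. A minimal sketch:

```python
import re
import sqlite3

def regexp(pattern, value):
    # sqlite translates "X REGEXP Y" into regexp(Y, X),
    # so the pattern arrives first.
    return re.search(pattern, value or '') is not None

conn = sqlite3.connect(':memory:')
conn.create_function('REGEXP', 2, regexp)
```

This still won't match mysql's regexp dialect exactly, which is Chris's point: passing unit tests against sqlite proves less than it appears to.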


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-06-18 Thread Doug Hellmann
On Wed, Jun 18, 2014 at 10:34 AM, Gary Kotton  wrote:
>
>
> On 6/18/14, 4:19 PM, "Doug Hellmann"  wrote:
>
>>On Wed, Jun 18, 2014 at 4:47 AM, Gary Kotton  wrote:
>>> Hi,
>>> I have encountered a problem with string substitution with the nova
>>> configuration file. The motivation was to move all of the glance
>>>settings to
>>> their own section (https://review.openstack.org/#/c/100567/). The
>>> glance_api_servers had default setting that uses the current
>>>glance_host and
>>> the glance port. This is a problem when we move to the 'glance' section.
>>> First and foremost I think that we need to decide on how we should
>>>denote
>>> the string substitutions for group variables and then we can dive into
>>> implementation details. Does anyone have any thoughts on this?
>>>
>>> My thinking is that we should use a format of $<group>.<option>. An
>>> example is below.
>>>
>>> Original code:
>>>
>>> cfg.ListOpt('glance_api_servers',
>>>             default=['$glance_host:$glance_port'],
>>>             help='A list of the glance api servers available to nova. '
>>>                  'Prefix with https:// for ssl-based glance api servers. '
>>>                  '([hostname|ip]:port)'),
>>>
>>> Proposed change (in the glance section):
>>>
>>> cfg.ListOpt('api_servers',
>>>             default=['$glance.host:$glance.port'],
>>>             help='A list of the glance api servers available to nova. '
>>>                  'Prefix with https:// for ssl-based glance api servers. '
>>>                  '([hostname|ip]:port)',
>>>             deprecated_group='DEFAULT',
>>>             deprecated_name='glance_api_servers'),
>>>
>>> This would require some preprocessing on the oslo.cfg side to be able to
>>> understand that $glance is the specific group and host is the requested
>>> value in the group.
>>>
>>> Thanks
>>> Gary
>>
>>Do we need to set the variable off somehow to allow substitutions that
>>need the literal '.' after a variable? How often is that likely to
>>come up?
>
>
> To be honest I think that this is a real edge case. I had a chat with
> markmc on IRC and he suggested a different approach, which I liked,
> regarding the specific patch. That is, to set the default to None and when
> the data is accessed to check if it is None. If so then provide the
> default values.

That sounds like a good solution, too, although we should be careful
about how we document the default value for that option.

Doug
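Gary's $group.option syntax could be prototyped with a small resolver before touching oslo.cfg internals. A sketch (conf is a plain nested dict here, not an oslo.config object):

```python
import re

def interpolate(value, conf):
    # Resolve '$group.option' references, e.g. '$glance.host'.
    def repl(match):
        group, option = match.group(1), match.group(2)
        return str(conf[group][option])
    return re.sub(r'\$(\w+)\.(\w+)', repl, value)
```

Note the edge case raised earlier in the thread: a literal '.' right after a plain '$var' would be swallowed by this pattern, which is why some escaping or delimiter syntax might eventually be needed.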

>
> We may still nonetheless need something like this in the future.
>
>>
>>Doug
>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-18 Thread Evgeny Fedoruk
Hi Stephen,
Regarding your comment related to SNI list management and behavior in the RST 
document:

I understand the need to explicitly specify specific certificates for specific 
hostnames. 
However, we need to deliver the lowest common denominator for this feature,
one which every vendor is able to support.
In that case, specifying a hostname per certificate will not be supported by
Radware.
The original proposal with an ordered certificates list may be the lowest
common denominator for all vendors, and we should find out if this is the case.
If not, managing a simple non-ordered list will probably be the lowest common
denominator.

With the proposed flavors framework taken into account, extra SNI management
capabilities may be exposed by providers, but meanwhile we should agree on a
proposal that can be implemented by all vendors.
What are your thoughts on this?

Regarding the SNIPolicy, I agree and will change the document accordingly.

Thanks,
Evg
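As a concrete illustration of the ordered-list behavior under discussion (a sketch with illustrative names, not the proposed API): the first certificate whose name pattern matches the SNI hostname wins, and the first entry acts as the default.

```python
import fnmatch

def pick_cert(sni_hostname, ordered_certs):
    # ordered_certs: list of (name_pattern, cert_id), in priority order.
    for pattern, cert_id in ordered_certs:
        if fnmatch.fnmatch(sni_hostname, pattern):
            return cert_id
    # No match (or no SNI): fall back to the first certificate.
    return ordered_certs[0][1] if ordered_certs else None
```

Whether hostname matching is even part of the lowest common denominator is exactly the open question above; a pure ordered list without patterns would simplify this further.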





-Original Message-
From: Evgeny Fedoruk 
Sent: Sunday, June 15, 2014 1:55 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

Hi All,

The document was updated and ready for next review round.
Main things that were changed:
1. Comments were addressed
2. No back-end re-encryption supported
3. Intermediate certificates chain supported
*Open question: Should the chain be stored in the same TLS container as the
certificate?

Please review
Regards,
Evgeny


-Original Message-
From: Douglas Mendizabal [mailto:douglas.mendiza...@rackspace.com]
Sent: Wednesday, June 11, 2014 10:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

Hi Doug,


Barbican does guarantee the integrity and availability of the secret, unless 
the owner of the secret deletes it from Barbican.  We’re not encouraging that 
you store a shadow-copy of the secret either.  This was proposed by the LBaaS 
team as a possible workaround for your use case.
Our recommendation was that there are two options for dealing with Secrets 
being deleted from under you:

If you want to control the lifecycle of the secret so that you can prevent the 
user from deleting the secret, then the secret should be owned by LBaaS, not by 
the user.  You can achieve this by asking the user to upload the secret via 
LBaaS api, and then use Barbican on the back end to store the secret under the 
LBaaS tenant.

If you want the user to own and manage their secret in Barbican, then you have 
to deal with the situation where the user deletes a secret and it is no longer 
available to LBaaS.  This is a situation you would have to deal with even with 
a reference-counting and force-deleting Barbican, so I don’t think you really 
gain anything from all the complexity you’re proposing to add to Barbican.

-Douglas M.



On 6/11/14, 12:57 PM, "Doug Wiegley"  wrote:

>There are other fundamental things about secrets, like relying on their 
>presence, and not encouraging a proliferation of a dozen 
>mini-secret-stores everywhere to get around that fact, which makes it 
>less secret.  Have you considered a "force" delete flag, required if
>some service is using the secret, sort of "rm" vs "rm -f", to avoid the
>obvious foot-shooting use cases, but still allowing the user to nuke it 
>if necessary?
>
>Thanks,
>Doug
>
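The "rm" vs "rm -f" idea sketched in code (purely illustrative; not a Barbican API, and the reference-counting store is an assumption of the proposal):

```python
def delete_secret(secrets, refs, secret_id, force=False):
    # refs: {secret_id: set(consumers)} -- services registered as users
    # of the secret, e.g. a load balancer holding a TLS certificate.
    if refs.get(secret_id) and not force:
        raise RuntimeError('secret %s is in use; delete with force=True'
                           % secret_id)
    secrets.pop(secret_id, None)
```

This preserves the key-management requirement that owners can always delete, while making accidental deletion of an in-use secret harder.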
>
>On 6/11/14, 11:43 AM, "Clark, Robert Graham"  wrote:
>
>>Users have to be able to delete their secrets from Barbican, it's a 
>>fundamental key-management requirement.
>>
>>> -Original Message-
>>> From: Eichberger, German
>>> Sent: 11 June 2014 17:43
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST 
>>> document on Gerrit
>>> 
>>> Sorry, I am late to the party. Holding the shadow copy in the backend
>>> is a fine solution.
>>> 
>>> Also, if containers are immutable can they be deleted at all? Can we make a
>>> requirement that a user can't delete a container in Barbican?
>>> 
>>> German
>>> 
>>> -Original Message-
>>> From: Eichberger, German
>>> Sent: Wednesday, June 11, 2014 9:32 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST 
>>> document on Gerrit
>>> 
>>> Hi,
>>> 
>>> I think the previous solution is easier for a user to understand. If
>>> the referenced container got tampered with or deleted, we throw an error
>>> but keep existing load balancers intact.
>>> 
>>> With the shadow container we get additional complexity and the user might
>>> be confused where the values are coming from.
>>> 
>>> German
>>> 
>>> -Original Message-
>>> From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
>>> Sent: Tuesday, June 10, 2014 12:18 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re

Re: [openstack-dev] OpenStack Designate DNSaaS

2014-06-18 Thread Joe Mcbride
Brad,
It seems to me you have a classic migration problem on your hands.
Assuming your ideal end state is to completely migrate to Designate and
minimize customizations (which is definitely preferable for the long
term), your strategy is the real challenge.

One approach is to deploy Designate and put all new domains and/or tenants
there.  Over time, migrate domains and tenants over. Your consumers will
naturally want the new benefits and hopefully facilitate the changes.

Another is to have an old fashioned "cut over". There is considerable risk,
which can be mitigated if your domains don't change often and your
consumer base is aligned.

Alternatively, you can run both in parallel and sync between them at the
database layer until you can ultimately switch users over.  This approach
is only recommended if you have to support an older API on the previous
system. You can also synchronize to the same set of name servers if you
can not change them. The problem with this approach is you will write a
lot of throw-away code.

SOME QUESTIONS:
- Is there an API available to your current system for your consumers? If
no, that greatly simplifies things.
- Can you easily change your name servers with your registrars?
- Why bother keeping the old system around?


On 6/17/14, 4:38 PM, "Hayes, Graham"  wrote:

>Unfortunately #1 is not a real option - designate needs the storage layer
>to operate.
>
>I would guess #2 or #3 would be the more feasible options.
>
>Graham
>
>
>"MIDGLEY, BRAD"  wrote:
>
>
>
>PK,
>
>I'd agree with pursuing #1 or with a simple reference implementation like
>minidns if ripping it out is disruptive.
>
>Brad
>
>_
>From: ESWARAN, PK
>Sent: Tuesday, June 17, 2014 1:26 PM
>To: openstack-dev@lists.openstack.org; Graham Hayes
>(graham.ha...@hp.com); Kiall Mac Innes (ki...@hp.com)
>Cc: PACHECO, RODOLFO J; JALLI, RANDEEP; O'CONNELL, MATT; MICHALEK, KEN;
>MIDGLEY, BRAD; BISHOP, JAMES; 'raymond.h...@accenture.com'
>(raymond.h...@accenture.com); BOEHMER, JEFF; SCOLLARD, JAMES; SHANGHAVI,
>PRAFUL B
>Subject: OpenStack Designate DNSaaS
>
>
>Dear OpenStack Dev Team:
>I exchanged a few thoughts on Designate DNSaaS with Graham Hayes
>of HP and he advised me to send this out to a larger DEV audience. Your
>feedback will help the Designate DNSaaS project and also the AT&T Cloud
>group.
>
>AT&T Cloud group is investigating "Designate DNSaaS" and we would like to
>indulge in this emerging technology. We could embrace and also
>participate into this new OpenStack technology. I am also copying a few
>of the AT&T team members.
>
>In AT&T in the near term, we would like to use Designate DNSaaS as a
>frontend while retaining the current AT&T backend DNS infrastructure in
>place. The main issue in this is the role of ³Designate Database² as
>MASTER database of record for all Designate DNS provisioning. This
>Designate role is useful when there is a deployment of new DNS system and
>auth servers. But …
>
>In AT&T, we already have a "DNS bind provisioner and a database of
>record" that keeps our DNS authoritative servers updated. Having a dual
>DNS Master Databases of record will be a synchronizing nightmare and it
>will be a difficult, time consuming process to reposition our existing
>BIND auth servers DIRECTLY under Designate.
>
>Please also note that the existing AT&T "DNS bind provisioner and
>a database of record" handles multiple services and multiple customers
>within each service while validating the rightful ownership prior to any
>DNS manipulations.
>
>In the near term we would like to engage in a LESS-THAN-fork-lift
>effort so that we can enjoy the major functions of "Designate API",
>"Designate Central" and "Designate Sink".
>
>So the question is what are our alternatives without a fork-lift
>effort?
>
>Knowing that the Designate storage layer is pluggable, I was
>looking at the following possibilities and your feedback could help us.
>Thank you in advance.
>
>
>  1.  Disable the storage layer entirely while keeping it for minimal
>essential Designate functions.
>  2.  Writing a storage plugin to our existing DB may be one
>consideration, but this would require a fork-lift effort since the
>current system supports multiple services and customers. This would take
>a longer timeframe than what we want to ensure that things are OK.
>  3.  Writing a storage-plugin-backend to a system-process that in turn
>will manage our existing DB, seems to be a good possibility but this is
>not yet clear.
>  4.  ?? Possibly other with your feedback. Thanks.
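>
>Option 3 could be sketched roughly as below. Everything in this sketch —
>the base class, the method names, and the backend API — is invented for
>illustration only; the real pluggable storage driver interface must be
>taken from the Designate source tree.

```python
class StorageBase(object):
    """Illustrative stand-in for Designate's pluggable storage base class."""


class BridgeStorage(StorageBase):
    """Forward storage calls to an existing provisioning system.

    `backend` is assumed to wrap the existing "DNS bind provisioner and
    database of record", which stays the single source of truth; Designate
    never becomes a second master database.
    """

    def __init__(self, backend):
        self.backend = backend

    def create_domain(self, context, domain):
        # Let the existing system validate rightful ownership, as it
        # already does today, before any DNS manipulation.
        self.backend.validate_owner(domain['tenant_id'], domain['name'])
        return self.backend.add_zone(domain['name'])

    def delete_domain(self, context, domain_name):
        return self.backend.remove_zone(domain_name)
```

With a driver of this shape, the Designate API/Central/Sink pieces stay in
front while the existing database of record keeps doing the provisioning.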
>
>
>P.K. Eswaran
>AT&T Information Technology Operations
>Middletown, NJ
>Room C1-2B06
>pk.eswa...@att.com
>pe3...@att.com
>(732) 420-2175
>
>
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-06-18 Thread Davanum Srinivas
Doug,

For the record, yes this came up before in
https://review.openstack.org/#/c/59994. Gary and I talked about
$imagecache.image_cache_subdirectory_name when discussing about that
review.

-- dims

On Wed, Jun 18, 2014 at 10:34 AM, Gary Kotton  wrote:
>
>
> On 6/18/14, 4:19 PM, "Doug Hellmann"  wrote:
>
>>On Wed, Jun 18, 2014 at 4:47 AM, Gary Kotton  wrote:
>>> Hi,
>>> I have encountered a problem with string substitution with the nova
>>> configuration file. The motivation was to move all of the glance
>>>settings to
>>> their own section (https://review.openstack.org/#/c/100567/). The
>>> glance_api_servers had default setting that uses the current
>>>glance_host and
>>> the glance port. This is a problem when we move to the 'glance' section.
>>> First and foremost I think that we need to decide on how we should
>>>denote
>>> the string substitutions for group variables and then we can dive into
>>> implementation details. Does anyone have any thoughts on this?
>>>
>>> My thinking is that we should use a format of $<group>.<option>. An
>>> example is below.
>>>
>>> Original code:
>>>
>>> cfg.ListOpt('glance_api_servers',
>>> default=['$glance_host:$glance_port'],
>>> help='A list of the glance api servers available to
>>>nova. '
>>>  'Prefix with https:// for ssl-based glance api servers. '
>>>  '([hostname|ip]:port)'),
>>>
>>> Proposed change (in the glance section):
>>> cfg.ListOpt('api_servers',
>>> default=['$glance.host:$glance.port'],
>>> help='A list of the glance api servers available to
>>>nova. '
>>>  'Prefix with https:// for ssl-based glance api servers. '
>>>  '([hostname|ip]:port)',
>>> deprecated_group='DEFAULT',
>>>
>>> deprecated_name='glance_api_servers'),
>>>
>>> This would require some preprocessing on the oslo.cfg side to be able to
>>> understand that $glance is the specific group and host is the requested
>>> value in the group.
>>>
>>> Thanks
>>> Gary
>>
>>Do we need to set the variable off somehow to allow substitutions that
>>need the literal '.' after a variable? How often is that likely to
>>come up?
>
>
> To be honest I think that this is a real edge case. I had a chat with
> markmc on IRC and he suggested a different approach, which I liked,
> regarding the specific patch. That is, to set the default to None and when
> the data is accessed to check if it is None. If so then provide the
> default values.
>
> We may still nonetheless need something like this in the future.
>
>>
>>Doug
>>
>>>
>>>
>>
>
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] An alternative approach to enforcing "expected election behaviour"

2014-06-18 Thread Eoghan Glynn

> James E. Blair wrote:
> > I think our recent experience has shown that the fundamental problem is
> > that not all of the members of our community knew what kind of behavior
> > we expected around elections.  That's understandable -- we had hardly
> > articulated it.  I think the best solution to that is therefore to
> > articulate and communicate that.
> > 
> > I believe Anita's proposal starts off by doing a very good job of
> > exactly that, so I would like to see a final resolution based on that
> > approach with very similar text to what she has proposed.  That
> > statement of expected behavior should then be communicated by election
> > officials to all participants in announcements related to all elections.
> > Those two simple acts will, I believe, suffice to address the problem we
> > have seen.
> > 
> > I do agree that a heavy bureaucracy is not necessary for this.  Our
> > community has a Code of Conduct established and administered by the
> > Foundation.  I think we should focus on minimizing additional process
> > and instead try to make this effort slot into the existing framework as
> > easily as possible by expecting the election officials to forward
> > potential violations to the Foundation's Executive Director (or
> > delegate) to handle as they would any other potential CoC violation.
>
> Thierry Carrez wrote:
> +1
> 
> The community code of conduct states:
> 
> """Respect the election process. Members should not attempt to
> manipulate election results. Open debate is welcome, but vote trading,
> ballot stuffing and other forms of abuse are not acceptable."""
> 
> Maybe just clarifying what we mean by "open debate" and giving examples
> of what we would consider "other forms of abuse" in the context of the
> TC elections is actually sufficient. Then voters can judge abuse on
> their own in their vote (reputational pressure) *and* we have an
> established process (the alleged violation of the community code of
> conduct) to escalate to in case we really need to (institutional pressure).
> 
> I think the first part of Anita's draft captures that very well, so
> maybe that's all we need. I really think that documenting and better
> communicating expectations will actually avoid problems in the future.

Absolutely agreed with jeblair and ttx here, that communicating
expectations clearly is the key.

However I think that the choice between relying on reputational
versus institutional pressure for enforcement should be an
either-or proposition, in order for it to be most effective.

The potential problem with reputational being the default, then
falling back to institutional pressure in extremis, is that folks
may read something into the absence of an escalation.

e.g. "the openstack officialdom doesn't seem to have done
  anything about this practice, so it must be ok"

IMHO reliance on the wisdom-of-crowds for enforcement works best
when the crowd explicitly knows that it's the last line of defense.

Cheers,
Eoghan



Re: [openstack-dev] [all] All Clear on the western front (i.e. gate)

2014-06-18 Thread Davanum Srinivas
w00t! thanks for the hard work everyone.

-- dims

On Wed, Jun 18, 2014 at 7:17 AM, Sean Dague  wrote:
> I realized that folks may have been waiting for an 'all clear' on the
> gate situation. It was a tiring couple of weeks, so took a little while
> to get there.
>
> Due to a huge amount of effort, but a bunch of different people, a ton
> of bugs were squashed to get the gate back to a high pass rate -
> https://etherpad.openstack.org/p/gatetriage-june2014
>
> Then jeblair came back from vacation and quickly sorted out a nodepool
> bug that was starving our capacity, so now we aren't leaking deleted
> nodes the same way.
>
> With both those, our capacity for changes goes way up. Because we have
> more workers available at any time, and less round tripping on race
> bugs. We also dropped the Nova v3 tests, which shaved 8 minutes (on
> average) off of Tempest runs. Again, increasing throughput by getting
> nodes back into the pool faster.
>
> The net of all these changes is that yesterday we merged 117 patches -
> https://github.com/openstack/openstack/graphs/commit-activity (not a
> record, that's 147 in one day, but definitely a top merge day).
>
> So if you were holding off on reviews / code changes because of the
> state of things, you can stop now. And given the system is pretty
> healthy, now is actually a pretty good time to put and keep it under
> load to help evaluate where we stand.
>
> Thanks all,
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-06-18 Thread Gary Kotton


On 6/18/14, 4:19 PM, "Doug Hellmann"  wrote:

>On Wed, Jun 18, 2014 at 4:47 AM, Gary Kotton  wrote:
>> Hi,
>> I have encountered a problem with string substitution with the nova
>> configuration file. The motivation was to move all of the glance
>>settings to
>> their own section (https://review.openstack.org/#/c/100567/). The
>> glance_api_servers had default setting that uses the current
>>glance_host and
>> the glance port. This is a problem when we move to the 'glance' section.
>> First and foremost I think that we need to decide on how we should
>>denote
>> the string substitutions for group variables and then we can dive into
>> implementation details. Does anyone have any thoughts on this?
>>
>> My thinking is that we should use a format of $<group>.<option>. An
>> example is below.
>>
>> Original code:
>>
>> cfg.ListOpt('glance_api_servers',
>> default=['$glance_host:$glance_port'],
>> help='A list of the glance api servers available to
>>nova. '
>>  'Prefix with https:// for ssl-based glance api servers. '
>>  '([hostname|ip]:port)'),
>>
>> Proposed change (in the glance section):
>> cfg.ListOpt('api_servers',
>> default=['$glance.host:$glance.port'],
>> help='A list of the glance api servers available to
>>nova. '
>>  'Prefix with https:// for ssl-based glance api servers. '
>>  '([hostname|ip]:port)',
>> deprecated_group='DEFAULT',
>>
>> deprecated_name='glance_api_servers'),
>>
>> This would require some preprocessing on the oslo.cfg side to be able to
>> understand that $glance is the specific group and host is the requested
>> value in the group.
>>
>> Thanks
>> Gary
>
>Do we need to set the variable off somehow to allow substitutions that
>need the literal '.' after a variable? How often is that likely to
>come up?


To be honest I think that this is a real edge case. I had a chat with
markmc on IRC and he suggested a different approach, which I liked,
regarding the specific patch. That is, to set the default to None and when
the data is accessed to check if it is None. If so then provide the
default values.

We may still nonetheless need something like this in the future.
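
The None-default approach can be sketched in plain Python (a stand-in for
the real oslo.config machinery; the group layout and the names `host`,
`port` and `api_servers` are illustrative):

```python
# Stand-in for an oslo.config option group; in real code this would be
# CONF.glance with options registered via oslo.config (names illustrative).
class GlanceGroup:
    host = '10.0.0.1'
    port = 9292
    api_servers = None  # no static default; resolved at access time


def get_api_servers(group):
    """Return the configured api_servers, or derive the default lazily.

    Because the option defaults to None, no cross-group string
    substitution is needed in the option definition itself.
    """
    if group.api_servers is not None:
        return group.api_servers
    return ['%s:%s' % (group.host, group.port)]


print(get_api_servers(GlanceGroup()))  # ['10.0.0.1:9292']
```

The substitution question then disappears for this case, because the
default is computed from live config values at access time rather than
declared as a template string.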

>
>Doug
>
>>
>>
>




Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Duncan Thomas
On 18 June 2014 15:28, Matthew Booth  wrote:
> On 18/06/14 13:31, Sean Dague wrote:
>> Even with 2 +2s you do the wrong thing. Yesterday we landed
>> baremetal tests that broke ironic. It has a ton of +1s from people
>> that have been working on those tests.
>
> This is slightly off topic, but think about that for a moment: the
> patch had a ton of peer review and 2 +2s from core reviewers, and it
> still broke. Review has significant benefits, but also a large cost,
> and it doesn't have all the answers. The answer is not always more
> review: there are other tools in the box. Imagine we spent 50% of the
> time we spend on review writing tempest tests instead.

Or we push the work off of core into the wider community and require
100% unit test coverage of every change *and* record the tempest
coverage of any changed lines so that the reviewer can gauge better
what the risks are?



Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Matthew Booth

On 18/06/14 13:31, Sean Dague wrote:
> On 06/18/2014 08:26 AM, Duncan Thomas wrote:
>> On 18 June 2014 10:04, Thierry Carrez 
>> wrote:
>> 
>>> As an aside, we don't really need two core reviewers to bless a
>>> trivial change: one could be considered sufficient. So a patch
>>> marked as trivial which has a number of +1s could be +2/APRVed
>>> directly by a core reviewer.
>>> 
>>> That would slightly reduce load on core reviewers, although I
>>> suspect most of the time is spent on complex patches, and
>>> trivial patches do not take that much time to process (or could
>>> even be seen as a nice break from more complex patch
>>> reviewing).
>> 
>> 
>> I think removing the need for two +2s is higher risk that you
>> think - the definition of 'trivial' gets stretched and stretched
>> over time because it allows people to get patches in
>> quicker/easier and we end up in a mess. I'm all for adding the
>> tag, but reducing the review requirements is, in my view,
>> dangerous. If a change is truly trivial then it is only going to
>> take moments for the second core to review it, so the saving
>> really is negligible compared to the risk.
> 
> Agreed.
> 
> Even with 2 +2s you do the wrong thing. Yesterday we landed
> baremetal tests that broke ironic. It has a ton of +1s from people
> that have been working on those tests.

This is slightly off topic, but think about that for a moment: the
patch had a ton of peer review and 2 +2s from core reviewers, and it
still broke. Review has significant benefits, but also a large cost,
and it doesn't have all the answers. The answer is not always more
review: there are other tools in the box. Imagine we spent 50% of the
time we spend on review writing tempest tests instead.

Matt

-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing

2014-06-18 Thread Salvatore Orlando
Hi Luke,

That kind of message usually shows up in unit tests job when there is some
syntax error or circular import. But I think that it's not your case.
Usually you see an "import error" message towards the end of the "garbage".

If you can point me to a failing log of your CI I can have a look at it and
see if I can help you.

Salvatore


On 18 June 2014 11:09, Luke Gorrie  wrote:

> On 17 June 2014 09:55, Luke Gorrie  wrote:
>
>> I have a problem that appeared at the same time and may be related?
>> "testr list-tests" in the tempest directory is failing with an obscure
>> error message. Seems to be exactly the situation described here:
>> https://bugs.launchpad.net/subunit/+bug/1278539
>>
>> Any tips?
>>
>
> How should I go about getting help with this? Mailing list + IRC is not
> getting anybody's attention and I want to get my CI back online.
>
>
>


Re: [openstack-dev] [qa] Clarification of policy for qa-specs around adding new tests

2014-06-18 Thread Matthew Treinish
On Tue, Jun 17, 2014 at 01:45:55AM +, Kenichi Oomichi wrote:
> 
> > -Original Message-
> > From: Matthew Treinish [mailto:mtrein...@kortar.org]
> > Sent: Monday, June 16, 2014 11:58 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [qa] Clarification of policy for qa-specs 
> > around adding new tests
> > 
> > On Mon, Jun 16, 2014 at 10:46:51AM -0400, David Kranz wrote:
> > > I have been reviewing some of these specs and sense a lack of clarity 
> > > around
> > > what is expected. In the pre-qa-specs world we did not want tempest
> > > blueprints to be used by projects to track their tempest test submissions
> > > because the core review team did not want to have to spend a lot of time
> > > dealing with that. We said that each project could have one tempest
> > > blueprint that would point to some other place (project blueprints,
> > > spreadsheet, etherpad, etc.) that would track specific tests to be added.
> > > I'm not sure what aspect of the new qa-spec process would make us feel
> > > differently about this. Has this policy changed? We should spell out the
> > > expectation in any event. I will update the README when we have a
> > > conclusion.
> > >
> > 
> > The policy has not changed. There should be 1 BP (or maybe 2 or 3 if they 
> > want
> > to split the effort a bit more granularly for tracking different classes of
> > tests, but still 1 BP series) for improving project tests. For individual 
> > tests
> > part of a bigger effort should be tracked outside of the Tempest LP. IMO 
> > after
> > it's approved the spec/BP for tracking test additions is only really useful 
> > to
> > have a unified topic to use for review classification.
> 
> +1 to use a single blueprint for adding new tests of each project.
> The unified topic of each project would be useful to get each project
> reviewers' effort on the Tempest tests reviews.
> To add new tests, do we need to have qa-specs, or is it OK to have
> blueprints only?
> 

So I've been asking all the new BPs for project testing being opened this cycle
to have a spec too. My feeling is that we should only have one process for doing
BPs/specs that way we get all the artifacts in the same place. It should also
hopefully get everyone more involved with the qa-specs workflow.

The specs for adding project test should be pretty simple, they just basically
need to outline what project is going to be tested, what types of tests are
going to be worked on (API, CLI, etc.) and how the test development is going
to be tracked. (etherpad, google doc, etc.)

-Matt Treinish





[openstack-dev] RESEND: OpenStack Designate DNSaaS

2014-06-18 Thread ESWARAN, PK
Dear OpenStack Dev Team:
I am resending this "OpenStack Designate DNSaaS" e-mail once 
again after my registration.
The original message is at the bottom of this thread and was 
sent on Tuesday, June 17, 2014 03:26 PM.

Thanks in advance for your feedback.

P.K. Eswaran
AT&T Information Technology Operations
Middletown, NJ
Room C1-2B06
pk.eswa...@att.com
pe3...@att.com
(732) 420-2175
---

-Original Message-
From: Hayes, Graham [mailto:graham.ha...@hp.com]
Sent: Tuesday, June 17, 2014 05:38 PM
To: MIDGLEY, BRAD
Cc: ESWARAN, PK; openstack-dev@lists.openstack.org; Mac Innes, Kiall; PACHECO, 
RODOLFO J; JALLI, RANDEEP; O'CONNELL, MATT; MICHALEK, KEN; BISHOP, JAMES; 
'raymond.h...@accenture.com' (raymond.h...@accenture.com); BOEHMER, JEFF; 
SCOLLARD, JAMES; SHANGHAVI, PRAFUL B
Subject: RE: OpenStack Designate DNSaaS



Unfortunately #1 is not a real option - designate needs the storage layer to 
operate.



I would guess #2 or #3 would be the more feasible options.



Graham


---



"MIDGLEY, BRAD" <bm9...@att.com> wrote:







PK,



I'd agree with pursuing #1 or with a simple reference implementation like 
minidns if ripping it out is disruptive.



Brad

From: ESWARAN, PK
Sent: Tuesday, June 17, 2014 03:26 PM
To: openstack-dev@lists.openstack.org; Graham Hayes (graham.ha...@hp.com); 
Kiall Mac Innes (ki...@hp.com)
Cc: PACHECO, RODOLFO J; JALLI, RANDEEP; O'CONNELL, MATT; MICHALEK, KEN; 
MIDGLEY, BRAD; BISHOP, JAMES; 'raymond.h...@accenture.com' 
(raymond.h...@accenture.com); BOEHMER, JEFF; SCOLLARD, JAMES; SHANGHAVI, PRAFUL 
B
Subject: OpenStack Designate DNSaaS

Dear OpenStack Dev Team:
I exchanged a few thoughts on Designate DNSaaS with Graham Hayes of HP 
and he advised me to send this out to a larger DEV audience. Your feedback will 
help the Designate DNSaaS project and also the AT&T Cloud group.

AT&T Cloud group is investigating "Designate DNSaaS" and we would like to 
indulge in this emerging technology. We could embrace and also participate into 
this new OpenStack technology. I am also copying a few of the AT&T team members.

In AT&T in the near term, we would like to use Designate DNSaaS as a frontend 
while retaining the current AT&T backend DNS infrastructure in place. The main 
issue in this is the role of "Designate Database" as MASTER database of record 
for all Designate DNS provisioning. This Designate role is useful when there is 
a deployment of new DNS system and auth servers. But …

In AT&T, we already have a "DNS bind provisioner and a database of record" that 
keeps our DNS authoritative servers updated. Having a dual DNS Master Databases 
of record will be a synchronizing nightmare and it will be a difficult, time 
consuming process to reposition our existing BIND auth servers DIRECTLY under 
Designate.

Please also note that the existing AT&T "DNS bind provisioner and a 
database of record" handles multiple services and multiple customers within 
each service while validating the rightful ownership prior to any DNS 
manipulations.

In the near term we would like to engage in a LESS-THAN-fork-lift 
effort so that we can enjoy the major functions of "Designate API", "Designate 
Central" and "Designate Sink".

So the question is what are our alternatives without a fork-lift effort?

Knowing that the Designate storage layer is pluggable, I was looking at 
the following possibilities and your feedback could help us. Thank you in 
advance.

  1.  Disable the storage layer entirely while keeping it for minimal essential 
Designate functions.
  2.  Writing a storage plugin to our existing DB may be one consideration, but 
this would require a fork-lift effort since the current system supports 
multiple services and customers. This would take a longer timeframe than what 
we want to ensure that things are OK.
  3.  Writing a storage-plugin-backend to a system-process that in turn will 
manage our existing DB, seems to be a good possibility but this is not yet 
clear.
  4.  ?? Possibly other with your feedback. Thanks.

P.K. Eswaran
AT&T Information Technology Operations
Middletown, NJ
Room C1-2B06
pk.eswa...@att.com
pe3...@att.com
(732) 420-2175





Re: [openstack-dev] [glance] Unifying configuration file

2014-06-18 Thread Doug Hellmann
On Wed, Jun 18, 2014 at 1:58 AM, Mark McLoughlin  wrote:
> Hey
>
> On Tue, 2014-06-17 at 17:43 +0200, Julien Danjou wrote:
>> On Tue, Jun 17 2014, Arnaud Legendre wrote:
>>
>> > @ZhiYan: I don't like the idea of removing the sample configuration file(s)
>> > from the git repository. Many people do not want to have to checkout the
>> > entire codebase and tox every time they have to verify a variable name in a
>> > configuration file. I know many people who were really frustrated when
>> > they realized that the sample config file was gone from the Nova repo.
>> > However, I agree with the fact that it would be better if the sample was
>> > 100% accurate: so the way I would love to see this working is to generate
>> > the sample file every time there is a config change (this being totally
>> > automated (maybe at the gate level...)).
>>
>> You're a bit late on this. :)
>> So what I did these last months (year?) in several project, is to check
>> at gate time the configuration file that is automatically generated
>> against what's in the patches.
>> That turned out to be a real problem because sometimes some options
>> change in the external modules we rely on (e.g. keystone authtoken or
>> oslo.messaging). In the end many projects (like Nova) disabled this
>> check altogether, and therefore removed the generated configuration file
>> from the git repository.
>
> For those that casually want to refer to the sample config, what would
> help if there was Jenkins jobs to publish the generated sample config
> file somewhere.

We talked at one point about having it added to one of the doc builds.
Since an accurate file requires having all of the dependencies for the
app installed, it might be easiest to do it in the developer doc build
where that will already be the case. Ultimately we would want it in
the config guide as well.

>
> For people installing the software, it would probably be nice if pbr
> added 'python setup.py sample_config' or something.
>
>> > @Julien: I would be interested to understand the value that you see of
>> > having only one config file? At this point, I don't see why managing one
>> > file is more complicated than managing several files especially when they
>> > are organized by categories. Also, scrolling through the registry settings
>> > every time I want to modify an api setting seem to add some overhead.
>>
>> Because there's no way to automatically generate several configuration
>> files with each its own set of options using oslo.config.
>
> I think that's a failing of oslo.config, though. Glance's layout of
> config files is useful and intuitive.

The config generator lets you specify the modules, libraries, and
files to be used to generate a config file. It even has a way to
specify which files to ignore. So I think we have everything we need
in the config generator, but we need to run it more than once, with
different inputs, to generate multiple files.
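
As a point of reference, the multiple-run approach can be driven by one
small generator config per output file, along the lines of the fragment
below (the namespace names are illustrative and depend on the
`oslo.config.opts` entry points each project registers):

```ini
# etc/oslo-config-generator/glance-api.conf (illustrative)
[DEFAULT]
output_file = etc/glance/glance-api.conf.sample
namespace = glance.api
namespace = oslo.messaging
```

Running the generator once per such file, e.g.
`oslo-config-generator --config-file etc/oslo-config-generator/glance-api.conf`,
then yields one sample file per service, preserving Glance's split layout.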

Doug

>
>> Glance is (one of?) the last project in OpenStack to manually write its
>> sample configuration file, which are not up to date obviously.
>
> Neutron too, but not split out per-service. I don't find Neutron's
> config file layout as intuitive.
>
>> So really this is mainly about following what every other projects did
>> the last year(s).
>
> There's a balance here between what makes technical sense and what helps
> users. If Glance has support for generating a unified config file while
> also manually maintaining the split configs, I think that's a fine
> compromise.
>
> Mark.
>
>



[openstack-dev] [QA] Meeting Thursday June 19th at 17:00 UTC

2014-06-18 Thread Matthew Treinish
Hi Everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
this Thursday, June 19th at 17:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

-Matt Treinish


pgp3Ab1l3jS2P.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-06-18 Thread Doug Hellmann
On Wed, Jun 18, 2014 at 4:47 AM, Gary Kotton  wrote:
> Hi,
> I have encountered a problem with string substitution with the nova
> configuration file. The motivation was to move all of the glance settings to
> their own section (https://review.openstack.org/#/c/100567/). The
> glance_api_servers had default setting that uses the current glance_host and
> the glance port. This is a problem when we move to the ‘glance’ section.
> First and foremost I think that we need to decide on how we should denote
> the string substitutions for group variables and then we can dive into
> implementation details. Does anyone have any thoughts on this?
>
> My thinking is that we should use a format of $group.option. An
> example is below.
>
> Original code:
>
> cfg.ListOpt('glance_api_servers',
> default=['$glance_host:$glance_port'],
> help='A list of the glance api servers available to nova. '
>  'Prefix with https:// for ssl-based glance api servers. '
>  '([hostname|ip]:port)'),
>
> Proposed change (in the glance section):
> cfg.ListOpt('api_servers',
> default=['$glance.host:$glance.port'],
> help='A list of the glance api servers available to nova. '
>  'Prefix with https:// for ssl-based glance api servers. '
>  '([hostname|ip]:port)',
> deprecated_group='DEFAULT',
>
> deprecated_name='glance_api_servers'),
>
> This would require some preprocessing on the oslo.cfg side to be able to
> understand that $glance is the specific group and host is the requested
> value in the group.
>
> Thanks
> Gary

Do we need to set the variable off somehow to allow substitutions that
need the literal '.' after a variable? How often is that likely to
come up?
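Doug's concern can be made concrete with a small parsing sketch (this is not oslo.config's actual parser — just an illustration of why a bare $group.option form is ambiguous while a braced ${group.option} form is not):

```python
import re

PLAIN = re.compile(r'\$(\w+)\.(\w+)')        # bare form: ambiguous
BRACED = re.compile(r'\$\{(\w+)\.(\w+)\}')   # braced form: unambiguous

def parse_plain(value):
    # naive reading of "$group.option" in a default value
    m = PLAIN.search(value)
    return m.groups() if m else None

print(parse_plain('$glance.host:$glance.port'))   # ('glance', 'host') -- intended
print(parse_plain('$host.example.com'))           # ('host', 'example') -- misparse!
print(BRACED.search('${glance.host}:9292').groups())  # ('glance', 'host')
```

So a literal '.' right after a plain variable would indeed need the variable set off somehow, e.g. with braces.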

Doug

>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Xen docs haven't been updated in two years

2014-06-18 Thread Tom Fifield

Hi all,

Almost a year ago a blueprint entitled "Re-document Xen integration with 
OpenStack" was created, because at that time, the Xen documentation 
hadn't been touched for more than a year.


We also added a prominent warning on the lead page that the "...section 
is low quality, and contains out of date information ..." and that help 
is being sought.


So far, no-one has come forward.

It's a shame, because the fine folks at VMware and the communities for KVM
and Hyper-V have been good about helping the docs team with the technical
bits that aren't pure OpenStack.



Is anyone out there developing or using XenServer with OpenStack?


Please consider signing up for the blueprint
(https://blueprints.launchpad.net/openstack-manuals/+spec/redocument-xen),
posting on the docs mailing list
(http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs) or
wandering into #openstack-doc on freenode for assistance.



Regards,


Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Sean Dague
On 06/18/2014 08:26 AM, Duncan Thomas wrote:
> On 18 June 2014 10:04, Thierry Carrez  wrote:
> 
>> As an aside, we don't really need two core reviewers to bless a trivial
>> change: one could be considered sufficient. So a patch marked as trivial
>> which has a number of +1s could be +2/APRVed directly by a core reviewer.
>>
>> That would slightly reduce load on core reviewers, although I suspect
>> most of the time is spent on complex patches, and trivial patches do not
>> take that much time to process (or could even be seen as a nice break
>> from more complex patch reviewing).
> 
> 
> I think removing the need for two +2s is higher risk than you think -
> the definition of 'trivial' gets stretched and stretched over time
> because it allows people to get patches in quicker/easier and we end
> up in a mess. I'm all for adding the tag, but reducing the review
> requirements is, in my view, dangerous. If a change is truly trivial
> then it is only going to take moments for the second core to review
> it, so the saving really is negligible compared to the risk.

Agreed.

Even with 2 +2s you do the wrong thing. Yesterday we landed baremetal
tests that broke ironic. It has a ton of +1s from people that have been
working on those tests.

People throw +1s around with 'please do this thing', and miss the part
about 'and this current way of doing this thing is actually the correct
way to do it'.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Duncan Thomas
On 18 June 2014 10:04, Thierry Carrez  wrote:

> As an aside, we don't really need two core reviewers to bless a trivial
> change: one could be considered sufficient. So a patch marked as trivial
> which has a number of +1s could be +2/APRVed directly by a core reviewer.
>
> That would slightly reduce load on core reviewers, although I suspect
> most of the time is spent on complex patches, and trivial patches do not
> take that much time to process (or could even be seen as a nice break
> from more complex patch reviewing).


I think removing the need for two +2s is higher risk than you think -
the definition of 'trivial' gets stretched and stretched over time
because it allows people to get patches in quicker/easier and we end
up in a mess. I'm all for adding the tag, but reducing the review
requirements is, in my view, dangerous. If a change is truly trivial
then it is only going to take moments for the second core to review
it, so the saving really is negligible compared to the risk.

-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting June 19 1800 UTC

2014-06-18 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140619T18

P.S. I'll be in flight in this time, so, Alex Ignatov will chair the
meeting instead of me.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hypervisors allow disk reduction as part of resize ?

2014-06-18 Thread Day, Phil
> -Original Message-
> From: Richard W.M. Jones [mailto:rjo...@redhat.com]
> Sent: 18 June 2014 12:32
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Do any hypervisors allow disk reduction
> as part of resize ?
> 
> On Wed, Jun 18, 2014 at 11:05:01AM +, Day, Phil wrote:
> > > -Original Message-
> > > From: Russell Bryant [mailto:rbry...@redhat.com]
> > > Sent: 17 June 2014 15:57
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [nova] Do any hypervisors allow disk
> > > reduction as part of resize ?
> > >
> > > On 06/17/2014 10:43 AM, Richard W.M. Jones wrote:
> > > > On Fri, Jun 13, 2014 at 06:12:16AM -0400, Aryeh Friedman wrote:
> > > >> Theoretically impossible to reduce disk unless you have some
> > > >> really nasty guest additions.
> > > >
> > > > True for live resizing.
> > > >
> > > > For "dead" resizing, libguestfs + virt-resize can do it.  Although
> > > > I wouldn't necessarily recommend it.  In almost all cases where
> > > > someone wants to shrink a disk, IMHO it is better to sparsify it instead
> (ie.
> > > > virt-sparsify).
> > >
> > > FWIW, the resize operation in OpenStack is a dead one.
> > >
> > Dead as in "not supported in V3" ?
> 
> "dead" as in not live resizing, ie. it happens only on offline disk images.
> 
> Rich.
> 
Ah, thanks.  I was thinking of "dead" as in "it is an ex-operation, it has 
ceased to be, ..." ;-)

There seems to be a consensus towards this being treated as an error - so I'll 
raise a spec.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hypervisors allow disk reduction as part of resize ?

2014-06-18 Thread Richard W.M. Jones
On Wed, Jun 18, 2014 at 11:05:01AM +, Day, Phil wrote:
> > -Original Message-
> > From: Russell Bryant [mailto:rbry...@redhat.com]
> > Sent: 17 June 2014 15:57
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [nova] Do any hypervisors allow disk reduction
> > as part of resize ?
> > 
> > On 06/17/2014 10:43 AM, Richard W.M. Jones wrote:
> > > On Fri, Jun 13, 2014 at 06:12:16AM -0400, Aryeh Friedman wrote:
> > >> Theoretically impossible to reduce disk unless you have some really
> > >> nasty guest additions.
> > >
> > > True for live resizing.
> > >
> > > For "dead" resizing, libguestfs + virt-resize can do it.  Although I
> > > wouldn't necessarily recommend it.  In almost all cases where someone
> > > wants to shrink a disk, IMHO it is better to sparsify it instead (ie.
> > > virt-sparsify).
> > 
> > FWIW, the resize operation in OpenStack is a dead one.
> > 
> Dead as in "not supported in V3" ?

"dead" as in not live resizing, ie. it happens only on offline disk images.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DevStack hacks requirements.txt et al

2014-06-18 Thread Sean Dague
On 06/18/2014 01:57 AM, Clark Boylan wrote:
> On Tue, Jun 17, 2014 at 10:33 PM, Mike Spreitzer  wrote:
>> I have noticed that lately DevStack has been hacking requirements.txt in
>> most projects and test-requirements.txt in many.  Why is this being done?
>>
>> Thanks,
>> Mike
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> This is being done to ensure that all projects are tested with a
> common reproducible dependency list. Without doing this every project
> can install different versions of things based on the way pip resolves
> dependencies. It isn't intuitive and can cause us to not actually work
> with the dependencies advertised in requirements.txt.
> 
> For example since we don't make requirements syncs happen in lockstep
> project A may depend on an older version of some dependency that is
> shared with project B. Pip will then install this older version for us
> and this older version is what gets tested. Then we update the
> dependency in project A and everything breaks because we actually
> needed the older version. It is also possible that project B
> absolutely requires the newer version that we have stated in
> requirements.txt (perhaps this is why requirements.txt says what it
> says), but if it gets the old version from project A now project B is
> broken.

Global requirements is about a year old, and was a direct reaction to
project growth to the point that we could no longer get all the projects
to sync requirements in a reasonable time frame.

Before we did it we would regularly install / uninstall python-keystone
client 6 times over the course of a devstack run, based on conflicting
project requirements.

It was *awesome*... (not)

The end result is you may or may not be testing with what you thought
you were supposed to be, and if a project used entry point plugins and
was installed at the wrong time, would explode on start, as the
requirements were changed under it.
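The order-dependence described above can be toy-modelled; this is a deliberately naive resolver (not pip's real algorithm) in which the first version spec seen for a package wins, the way installs effectively behaved at the time:

```python
def naive_install(projects):
    # install projects in order; the first requirement seen for a
    # package 'wins', later conflicting specs are silently ignored
    installed = {}
    for project, requires in projects:
        for pkg, version in requires.items():
            installed.setdefault(pkg, version)
    return installed

a_first = [('A', {'libfoo': '1.0'}), ('B', {'libfoo': '2.0'})]
b_first = [('B', {'libfoo': '2.0'}), ('A', {'libfoo': '1.0'})]
print(naive_install(a_first))  # {'libfoo': '1.0'} -- B runs against the old lib
print(naive_install(b_first))  # {'libfoo': '2.0'} -- A runs against the new lib
```

Pinning one global requirements list removes the dependence on install order entirely.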

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Sean Dague
On 06/18/2014 05:24 AM, Steven Hardy wrote:
> On Wed, Jun 18, 2014 at 11:04:15AM +0200, Thierry Carrez wrote:
>> Russell Bryant wrote:
>>> On 06/17/2014 08:20 AM, Daniel P. Berrange wrote:
 On Tue, Jun 17, 2014 at 01:12:45PM +0100, Matthew Booth wrote:
> On 17/06/14 12:36, Sean Dague wrote:
>> It could go in the commit message:
>>
>> TrivialFix
>>
>> Then could be queried with - 
>> https://review.openstack.org/#/q/message:TrivialFix,n,z
>>
>> If a reviewer felt it wasn't a trivial fix, they could just edit
>> the commit message inline to drop it out.

 Yes, that would be a workable idea.

> +1. If possible I'd update the query to filter out anything with a -1.
>
> Where do we document these things? I'd be happy to propose a docs update.

 Lets see if any other nova cores dissent, but then can add it to these 2
 wiki pages

   https://wiki.openstack.org/wiki/ReviewChecklist
   
 https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references
>>>
>>> Seems reasonable to me.
>>>
>>> Of course, I just hope it doesn't put reviewers in a mode of only
>>> looking for the trivial stuff and helping less with the big stuff.
>>
>> As an aside, we don't really need two core reviewers to bless a trivial
>> change: one could be considered sufficient. So a patch marked as trivial
>> which has a number of +1s could be +2/APRVed directly by a core reviewer.
>>
>> That would slightly reduce load on core reviewers, although I suspect
>> most of the time is spent on complex patches, and trivial patches do not
>> take that much time to process (or could even be seen as a nice break
>> from more complex patch reviewing).
> 
> Agreed, I think this actually would help improve velocity in many cases,
> provided there was sufficient common understanding of what constitutes a
> trivial patch.
> 
> The other situation this may make sense is when a non-trivial change has
> already been widely reviewed and approved then needs a last-minute minor
> rebase to resolve a simple merge conflict (e.g due to all these trivial
> patches suddenly landing really fast.. ;)
> 
> We discussed this in the Heat team a while back and agreed that having one
> core re-review and approve the patch was sufficient, and basically that
> folks could use their discretion in this situation.

Honestly, I think this is already culture in most projects. There is a
reason that we enforce 2 +2 in culture and not in code, as it allows for
fast approve with rationale. Previously approved with a merge conflict
typically falls into this category. Every group does this a little
differently, but at the end of the day the system does lean on us being
reasonable humans, which is typically a solid assumption.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Sean Dague
On 06/18/2014 04:46 AM, Daniel P. Berrange wrote:
> On Tue, Jun 17, 2014 at 12:55:26PM -0400, Russell Bryant wrote:
>> On 06/17/2014 12:22 PM, Joe Gordon wrote:
>>>
>>>
>>>
>>> On Tue, Jun 17, 2014 at 3:56 AM, Duncan Thomas wrote:
>>>
>>> A far more effective way to reduce the load of trivial review issues
>>> on core reviewers is for none-core reviewers to get in there first,
>>> spot the problems and add a -1 - the trivial issues are then hopefully
>>> fixed up before a core reviewer even looks at the patch.
>>>
>>> The fundamental problem with review is that there are more people
>>> submitting than doing regular reviews. If you want the review queue to
>>> shrink, do five reviews for every one you submit. A -1 from a
>>> none-core (followed by a +1 when all the issues are fixed) is far,
>>> far, far more useful in general than a +1 on a new patch.
>>>
>>>
>>> ++
>>>
>>> I think this thread is trying to optimize for the wrong types of
>>> patches.  We shouldn't be focusing on making trivial patches land
>>> faster, but rather more important changes such as bugs and blueprints.
>>> As some simple code motion won't directly fix any users issue such as
>>> bugs or missing features.
>>
>> In fact, landing easier and less important changes causes churn in the
>> code base, which can make the more important bugs and blueprints even
>> *harder* to get done.
> 
> None the less I think it is worthwhile having a way to tag trivial
> bugs so we can easily identify them. IMHO if there's a way we can
> improve turnaround time on such bugs it is worth it, if only to
> stop authors getting depressed with the wait for trivial/obvious
> fixes.

I'm definitely on that side of the fence. Actual trivial bug fix changes
(where we're talking about a couple of targeted lines) aren't very
dangerous from a merge conflict stand point.

It's the cross cutting stuff like hacking/pep8 clean ups that wreck us
on merge conflicts.

And I do agree that turn around time is important, especially for new
developers. Because you have context on an issue, and if it takes weeks
for people to get to it, by the time they do you are off to something else.

Honestly, what I really wish gerrit would do is sort based on # of lines
changed, from small to large. That would provide a good feedback loop to
make things reviewable chunks.
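That wished-for ordering is just a sort on total lines touched, smallest first (the field names here are illustrative, not Gerrit's actual API):

```python
changes = [
    {'id': 'Iaaa', 'insertions': 300, 'deletions': 120},  # big refactor
    {'id': 'Ibbb', 'insertions': 4, 'deletions': 1},      # trivial fix
    {'id': 'Iccc', 'insertions': 40, 'deletions': 10},    # medium change
]
# smallest patches surface first in the review queue
by_size = sorted(changes, key=lambda c: c['insertions'] + c['deletions'])
print([c['id'] for c in by_size])  # ['Ibbb', 'Iccc', 'Iaaa']
```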

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] All Clear on the western front (i.e. gate)

2014-06-18 Thread Sean Dague
I realized that folks may have been waiting for an 'all clear' on the
gate situation. It was a tiring couple of weeks, so took a little while
to get there.

Due to a huge amount of effort, but a bunch of different people, a ton
of bugs were squashed to get the gate back to a high pass rate -
https://etherpad.openstack.org/p/gatetriage-june2014

Then jeblair came back from vacation and quickly sorted out a nodepool
bug that was starving our capacity, so now we aren't leaking deleted
nodes the same way.

With both those, our capacity for changes goes way up. Because we have
more workers available at any time, and less round tripping on race
bugs. We also dropped the Nova v3 tests, which shaved 8 minutes (on
average) off of Tempest runs. Again, increasing throughput by getting
nodes back into the pool faster.

The net of all these changes is that yesterday we merged 117 patches -
https://github.com/openstack/openstack/graphs/commit-activity (not a
record, that's 147 in one day, but definitely a top merge day).

So if you were holding off on reviews / code changes because of the
state of things, you can stop now. And given the system is pretty
healthy, now is actually a pretty good time to put and keep it under
load to help evaluate where we stand.

Thanks all,

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] locked instances and snaphot

2014-06-18 Thread Day, Phil
> -Original Message-
> From: Ahmed RAHAL [mailto:ara...@iweb.com]
> Sent: 18 June 2014 01:21
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] locked instances and snaphot
> 
> Hi there,
> 
> Le 2014-06-16 15:28, melanie witt a écrit :
> > Hi all,
> >
> [...]
> >
> > During the patch review, a reviewer raised a concern about the purpose
> > of instance locking and whether prevention of snapshot while an
> > instance is locked is appropriate. From what we understand, instance
> > lock is meant to prevent unwanted modification of an instance. Is
> > snapshotting considered a logical modification of an instance? That
> > is, if an instance is locked to a user, they take a snapshot, create
> > another instance using that snapshot, and modify the instance, have
> > they essentially modified the original locked instance?
> >
> > I wanted to get input from the ML on whether it makes sense to
> > disallow snapshot an instance is locked.
> 
> Beyond 'preventing accidental change to the instance', locking could be seen
> as 'preventing any operation' to the instance.
> If I, as a user, lock an instance, it certainly only prevents me from 
> accidentally
> deleting the VM. As I can unlock whenever I need to, there seems to be no
> other use case (chmod-like).

It blocks any operation that would change the instance's state:
delete, stop, start, reboot, rebuild, resize, shelve, pause, resume, etc.

In keeping with that I don't see why it should block a snapshot, and having to 
unlock it to take a snapshot doesn't feel good either. 


> If I, as an admin, lock an instance, I am preventing operations on a VM and
> am preventing an ordinary user from overriding the lock.

The driver for doing this as an admin is slightly different - it's to stop the
user from changing the state of an instance, rather than a protection.   A
couple of use cases:
- If you want to migrate a VM and the user is running a continual
sequence of, say, reboot commands at it, putting an admin lock in place gives
you a way to break into that cycle.
- There are a few security cases where we need to take over control of
an instance and make sure it doesn't get deleted by the user.

> 
> This is a form of authority enforcing that maybe should prevent even
> snapshots to be taken off that VM. The thing is that enforcing this beyond
> the limits of nova is AFAIK not there, so cloning/snapshotting cinder volumes
> will still be feasible.
> Enforcing it only in nova as a kind of 'security feature' may become
> misleading.
> 
> The more I think about it, the more I get to think that locking is just there 
> to
> avoid mistakes, not voluntary misbehaviour.
> 
> --
> 
> Ahmed
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron ML2 plugin is failing to configure vif on the instances when two mechanism drivers for different hypervisors are configured in the ml2_conf.ini

2014-06-18 Thread Srivastava, Abhishek
It was a configuration issue. I was able to resolve it.

From: Srivastava, Abhishek
Sent: Wednesday, June 18, 2014 1:39 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Neutron ML2 plugin is failing to configure vif on the 
instances when two mechanism drivers for different hypervisors are configured 
in the ml2_conf.ini

Hi,

I am trying out Icehouse on Ubuntu 14.04. My controller has two compute nodes
attached: one Hyper-V and the other KVM. The Hyper-V node runs the
hyperv_neutron agent as its L2 agent, and the KVM node runs the OVS agent.

The ML2 plugin's conf.ini gives you the option to provide both hyperv and
openvswitch, separated by a comma, as mechanism drivers on the controller
node, but it seems that only one of them takes effect at a time. When the
configuration is 'mechanism_drivers = hyperv, openvswitch', only Hyper-V nova
instances get the port attached to the vNIC while KVM instances fail; when
the configuration is 'mechanism_drivers = openvswitch, hyperv', only KVM nova
instances get the port attached to the vNIC while Hyper-V instances don't get
the network adapter configured.
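For reference, the configuration being described would look roughly like this in ml2_conf.ini (a sketch of the setup above; driver alias names as shipped with Icehouse ML2, all other options omitted):

```ini
[ml2]
# Both drivers listed; per the report above, only the first entry
# appeared to take effect until the configuration issue was resolved.
mechanism_drivers = openvswitch,hyperv
```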

Please help me resolve this issue.

Thanks in advance.

Regards,
Abhishek

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Meeting notes on LSI (17.06)

2014-06-18 Thread Nikolay Markov
Hi colleagues,

We had a meeting with LSI guys yesterday, discussing status of this feature
as a possible Fuel/Nailgun plugin. There was no final decision, but there
are two possible ways to simplify our integration:

- The LSI team can continue modifying Fuel itself, but with a strong focus on
eventually moving this code to a separate plugin. The approach, hooks and
points of interaction can be done according to the pull request on the plugin
implementation (https://review.openstack.org/#/c/97827/)
- We have some issues right now with UI plugins, so we can do it this way:
UI part of LSI may be merged directly into Fuel (until we stabilize UI
plugins and move this code out), and all Python business logic can be
implemented as a separate Python package (this will also help us in adding
some particular hooks which can be useful in cases like this).

As for me, the second approach is better in some ways, but in this case we
can't call LSI a "plugin", because it will be something intermediate.
Nevertheless, full plugin support is declared for Fuel 6.0, so it seems
like a good idea to take some steps towards it already.

-- 
Best regards,
Nick Markov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-18 Thread Kenichi Oomichi

> -Original Message-
> From: Michael Still [mailto:mi...@stillhq.com]
> Sent: Wednesday, June 18, 2014 7:54 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core
> 
> Kenichi has now been added to the nova-core group in gerrit. Welcome aboard!

Thank you for many +1s, and I'm glad to join the nova-core group :-)
I am going to try hard for the smooth development.


Thanks
Ken'ichi Ohmichi

---

> On Tue, Jun 17, 2014 at 6:18 PM, Michael Still  wrote:
> > Hi. I'm going to let this sit for another 24 hours, and then we'll
> > declare it closed.
> >
> > Cheers,
> > Michael
> >
> > On Tue, Jun 17, 2014 at 6:16 AM, Mark McLoughlin  wrote:
> >> On Sat, 2014-06-14 at 08:40 +1000, Michael Still wrote:
> >>> Greetings,
> >>>
> >>> I would like to nominate Ken'ichi Ohmichi for the nova-core team.
> >>>
> >>> Ken'ichi has been involved with nova for a long time now.  His reviews
> >>> on API changes are excellent, and he's been part of the team that has
> >>> driven the new API work we've seen in recent cycles forward. Ken'ichi
> >>> has also been reviewing other parts of the code base, and I think his
> >>> reviews are detailed and helpful.
> >>
> >> +1, great to see Ken'ichi join the team
> >>
> >> Mark.
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > --
> > Rackspace Australia
> 
> 
> 
> --
> Rackspace Australia
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hypervisors allow disk reduction as part of resize ?

2014-06-18 Thread Day, Phil
> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 17 June 2014 15:57
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Do any hypervisors allow disk reduction
> as part of resize ?
> 
> On 06/17/2014 10:43 AM, Richard W.M. Jones wrote:
> > On Fri, Jun 13, 2014 at 06:12:16AM -0400, Aryeh Friedman wrote:
> >> Theoretically impossible to reduce disk unless you have some really
> >> nasty guest additions.
> >
> > True for live resizing.
> >
> > For "dead" resizing, libguestfs + virt-resize can do it.  Although I
> > wouldn't necessarily recommend it.  In almost all cases where someone
> > wants to shrink a disk, IMHO it is better to sparsify it instead (ie.
> > virt-sparsify).
> 
> FWIW, the resize operation in OpenStack is a dead one.
> 
Dead as in "not supported in V3" ?

How does that map into the plans to implement V2.1 on top of V3 ?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-18 Thread Michael Still
Kenichi has now been added to the nova-core group in gerrit. Welcome aboard!

Michael

On Tue, Jun 17, 2014 at 6:18 PM, Michael Still  wrote:
> Hi. I'm going to let this sit for another 24 hours, and then we'll
> declare it closed.
>
> Cheers,
> Michael
>
> On Tue, Jun 17, 2014 at 6:16 AM, Mark McLoughlin  wrote:
>> On Sat, 2014-06-14 at 08:40 +1000, Michael Still wrote:
>>> Greetings,
>>>
>>> I would like to nominate Ken'ichi Ohmichi for the nova-core team.
>>>
>>> Ken'ichi has been involved with nova for a long time now.  His reviews
>>> on API changes are excellent, and he's been part of the team that has
>>> driven the new API work we've seen in recent cycles forward. Ken'ichi
>>> has also been reviewing other parts of the code base, and I think his
>>> reviews are detailed and helpful.
>>
>> +1, great to see Ken'ichi join the team
>>
>> Mark.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Rackspace Australia



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [zmq] [oslo.messaging] Running devstack with zeromq

2014-06-18 Thread Elena Ezhova
Hello!

I have been exploring bugs connected with using devstack with zmq [1], [2],
[3] and experimenting with various configurations in an attempt to make zmq
work with projects which have moved to oslo.messaging. It turned out that
there are a number of things to fix.

Firstly, even though nova currently uses oslo.messaging, devstack still
uses nova-rpc-zmq-receiver instead of oslo-messaging-zmq-receiver when
starting the zeromq receiver.

Secondly, the default matchmaker for zmq is always set as MatchmakerRedis
(which currently does not work either) and there is no opportunity to
specify anything else (e.g. MatchmakerRing) using devstack. If there were an
option to use MatchmakerRing, it would be possible to create a
configuration file matchmaker_ring.json in the etc/oslo/ directory and write
there all the key-value pairs needed by zmq.
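For illustration, such a ring file is (as far as I understand the ring matchmaker) just a JSON map from RPC topic to the hosts consuming it — the hostnames here are made up:

```json
{
    "scheduler": ["controller"],
    "conductor": ["controller"],
    "compute": ["compute1", "compute2"]
}
```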

So I wonder whether it is something the community is interested in and, if
yes, are there any recommendations concerning possible implementation?


Thanks,
Elena

[1] - https://bugs.launchpad.net/devstack/+bug/1279739
[2] - https://bugs.launchpad.net/neutron/+bug/1298803
[3] - https://bugs.launchpad.net/oslo.messaging/+bug/1290772
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-compute vfsguestfs

2014-06-18 Thread abhishek jain
Hi Rich

Thanks for the reply. The libguestfs is working fine here and there are no
issues regarding it on the ubuntu compute node. From the nova-compute logs
on the compute node, it appears that the tap interface is not coming up on
the compute node.

Also, in the file driver.py at the path /opt/stack/nova/nova/virt/libvirt,
in the section below:

    uuid = dom.UUIDString()
    if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
        transition = virtevent.EVENT_LIFECYCLE_STOPPED
    elif event == libvirt.VIR_DOMAIN_EVENT_STARTED:
        transition = virtevent.EVENT_LIFECYCLE_STARTED
    elif event == libvirt.VIR_DOMAIN_EVENT_SUSPENDED:
        transition = virtevent.EVENT_LIFECYCLE_PAUSED
    elif event == libvirt.VIR_DOMAIN_EVENT_RESUMED:
        transition = virtevent.EVENT_LIFECYCLE_RESUMED

the compute node should take the second branch, i.e.
libvirt.VIR_DOMAIN_EVENT_STARTED, which it is missing.
The output of brctl show is as follows on the ubuntu compute node:

    brctl show
    bridge name      bridge id           STP enabled  interfaces
    qbr06865952-4f   8000.6654b7d239b7   no           qvb06865952-4f
    qbr9cbe0875-42   8000.da3180e79619   no           qvb9cbe0875-42
    qbrb61ecf20-cf   8000.e24a1b9d71c0   no           qvbb61ecf20-cf
    qbrda8485b3-73   8000.f602a0f09835   no           qvbda8485b3-73

The main problem is that the tap interface is not able to come up. Below are
the nova-compute logs:

nova.virt.libvirt.config [req-6ce8d1f9-5d85-4d4c-a06e-bd8c4d3071ec admin
admin] Generated XML
  3a0e6076-1136-47d2-b883-326d7a87e5d4
  instance-0009
  524288
  1
  hvm
  /opt/stack/data/nova/instances/3a0e6076-1136-47d2-b883-326d7a87e5d4/kernel
  /opt/stack/data/nova/instances/3a0e6076-1136-47d2-b883-326d7a87e5d4/ramdisk
  root=/dev/vda console=tty0 console=ttyS0
  [remaining XML stripped of element tags by the mail archiver]


Please help regarding this.

Thanks




On Tue, Jun 17, 2014 at 8:05 PM, Richard W.M. Jones 
wrote:

> On Fri, Jun 13, 2014 at 03:06:25PM +0530, abhishek jain wrote:
> > Hi Rich
> >
> > Can you  help me regarding the possible cause for  VM stucking at
> spawning
> > state on ubuntu powerpc compute node in openstack using devstack.
>
> Did you solve this one?  It's impossible to debug unless you collect
> the full debugging information.  See also:
>
>
> http://libguestfs.org/guestfs-faq.1.html#how-do-i-debug-when-using-the-api
>   https://bugs.launchpad.net/nova/+bug/1279857
>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat
> http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> Fedora Windows cross-compiler. Compile Windows programs, test, and
> build Windows installers. Over 100 libraries supported.
> http://fedoraproject.org/wiki/MinGW
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-18 Thread Isaku Yamahata
Hi. Ryu provides an ovs_vsctl.py library which is a Python equivalent of the
ovs-vsctl command. It speaks the OVSDB protocol.
https://github.com/osrg/ryu/blob/master/ryu/lib/ovs/vsctl.py

So with that library, converting ovs_lib.py is a mostly mechanical
change, I think.
I'm not aware of any other similar library written in Python.
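As a rough illustration of what that mechanical conversion could look like: the runner is injected so the plumbing can run without a live ovsdb-server, and the Ryu calls in the comment are an assumption based on the linked module, not verified here.

```python
# Sketch of replacing an `ovs-vsctl list-br` shell-out with an
# OVSDB-library call. `run_vsctl` abstracts the library connection so
# this can be exercised without a live ovsdb-server.

def list_bridges(run_vsctl):
    """Return sorted bridge names via an injected vsctl-style runner."""
    return sorted(run_vsctl("list-br", []))

# With Ryu it would look roughly like (assumed API, per ryu/lib/ovs/vsctl.py):
#   vsctl = ryu.lib.ovs.vsctl.VSCtl("tcp:127.0.0.1:6640")
#   cmd = ryu.lib.ovs.vsctl.VSCtlCommand("list-br")
#   vsctl.run_command([cmd]); bridges = cmd.result

# Stub standing in for the OVSDB connection:
fake = lambda cmd, args: ["br-tun", "br-int"] if cmd == "list-br" else []
print(list_bridges(fake))  # -> ['br-int', 'br-tun']
```

The point of the injected runner is that the agent logic on top stays identical whether the backend is the CLI or the OVSDB library.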

thanks,
Isaku Yamahata


On Tue, Jun 17, 2014 at 11:38:36AM -0500,
Kyle Mestery  wrote:

> Another area of improvement for the agent would be to move away from
> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> and I talked about this, and re-writing ovs_lib to use an OVSDB
> connection instead of the CLI methods would be a huge improvement
> here. I'm not sure if Terry was going to move forward with this, but
> I'd be in favor of this for Juno if he or someone else wants to move
> in this direction.
> 
> Thanks,
> Kyle
> 
> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando  
> wrote:
> > We've started doing this in a slightly more reasonable way for icehouse.
> > What we've done is:
> > - remove unnecessary notification from the server
> > - process all port-related events, either trigger via RPC or via monitor in
> > one place
> >
> > Obviously there is always a lot of room for improvement, and I agree
> > something along the lines of what Zang suggests would be more maintainable
> > and ensure faster event processing as well as making it easier to have some
> > form of reliability on event processing.
> >
> > I was considering doing something for the ovs-agent again in Juno, but since
> > we've moving towards a unified agent, I think any new "big" ticket should
> > address this effort.
> >
> > Salvatore
> >
> >
> > On 17 June 2014 13:31, Zang MingJie  wrote:
> >>
> >> Hi:
> >>
> >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
> >> intent to rebuild a more stable flexible agent.
> >>
> >> Taking the experience of ovs-agent bugs, I think the concurrency
> >> problem is also a very important problem, the agent gets lots of event
> >> from different greenlets, the rpc, the ovs monitor or the main loop.
> >> I'd suggest to serialize all event to a queue, then process events in
> >> a dedicated thread. The thread check the events one by one ordered,
> >> and resolve what has been changed, then apply the corresponding
> >> changes. If there is any error occurred in the thread, discard the
> >> current processing event, do a fresh start event, which reset
> >> everything, then apply the correct settings.
> >>
> >> The threading model is so important and may prevent tons of bugs in
> >> the future development, we should describe it clearly in the
> >> architecture
> >>
> >>
> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> >> wrote:
> >> > Following the discussions in the ML2 subgroup weekly meetings, I have
> >> > added
> >> > more information on the etherpad [1] describing the proposed
> >> > architecture
> >> > for modular L2 agents. I have also posted some code fragments at [2]
> >> > sketching the implementation of the proposed architecture. Please have a
> >> > look when you get a chance and let us know if you have any comments.
> >> >
> >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
> >> > [2] https://review.openstack.org/#/c/99187/
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Isaku Yamahata 
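Zang's suggestion quoted above, serialize all events into one queue processed by a single dedicated thread and fall back to a full resync on error, can be sketched with stdlib primitives (event names and handlers are illustrative, not agent code):

```python
# All sources (RPC, OVSDB monitor, main loop) push onto one queue; a
# single worker applies changes strictly in order. On any error the
# current event is discarded and a full resync is performed instead.
import queue
import threading

events = queue.Queue()
applied = []

def process(event):
    if event == "bad":
        raise RuntimeError("apply failed")
    applied.append(event)

def worker():
    while True:
        event = events.get()
        if event is None:              # shutdown sentinel
            break
        try:
            process(event)
        except Exception:
            applied.append("full-resync")   # discard event, fresh start

t = threading.Thread(target=worker)
t.start()
for e in ["port-added", "bad", "port-removed", None]:
    events.put(e)
t.join()
print(applied)  # -> ['port-added', 'full-resync', 'port-removed']
```

Because only the worker thread mutates state, the greenlet/thread interleaving bugs the thread complains about are confined to the queue itself.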

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-18 Thread Isaku Yamahata
No. ovs_lib invokes both ovs-vsctl and ovs-ofctl:
ovs-vsctl speaks the OVSDB protocol, while ovs-ofctl speaks the OpenFlow wire protocol.

thanks,

On Tue, Jun 17, 2014 at 01:25:59PM -0500,
Kyle Mestery  wrote:

> I don't think so. Once we implement the OVSDB support, we will
> deprecate using the CLI commands in ovs_lib.
> 
> On Tue, Jun 17, 2014 at 12:50 PM, racha  wrote:
> > Hi,
> > Does it make sense also to have the choice between ovs-ofctl CLI and a
> > direct OF1.3 connection too in the ovs-agent?
> >
> > Best Regards,
> > Racha
> >
> >
> >
> > On Tue, Jun 17, 2014 at 10:25 AM, Narasimhan, Vivekanandan
> >  wrote:
> >>
> >>
> >>
> >> Managing the ports and plumbing logic is today driven by L2 Agent, with
> >> little assistance
> >>
> >> from controller.
> >>
> >>
> >>
> >> If we plan to move that functionality to the controller,  the controller
> >> has to be more
> >>
> >> heavy weight (both hardware and software)  since it has to do the job of
> >> L2 Agent for all
> >>
> >> the compute servers in the cloud. We need to re-verify all scale numbers
> >> for the controller
> >>
> >> on POC’ing of such a change.
> >>
> >>
> >>
> >> That said, replacing CLI with direct OVSDB calls in the L2 Agent is
> >> certainly a good direction.
> >>
> >>
> >>
> >> Today, OVS Agent invokes flow calls of OVS-Lib but has no idea (or
> >> processing) to follow up
> >>
> >> on success or failure of such invocations.  Nor there is certain guarantee
> >> that all such
> >>
> >> flow invocations would be executed by the third-process fired by OVS-Lib
> >> to execute CLI.
> >>
> >>
> >>
> >> When we transition to OVSDB calls which are more programmatic in nature,
> >> we can
> >>
> >> enhance the Flow API (OVS-Lib) to provide more fine grained errors/return
> >> codes (or content)
> >>
> >> and ovs-agent (and even other components) can act on such return state
> >> more
> >>
> >> intelligently/appropriately.
> >>
> >>
> >>
> >> --
> >>
> >> Thanks,
> >>
> >>
> >>
> >> Vivek
> >>
> >>
> >>
> >>
> >>
> >> From: Armando M. [mailto:arma...@gmail.com]
> >> Sent: Tuesday, June 17, 2014 10:26 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture
> >>
> >>
> >>
> >> just a provocative thought: If we used the ovsdb connection instead, do we
> >> really need an L2 agent :P?
> >>
> >>
> >>
> >> On 17 June 2014 18:38, Kyle Mestery  wrote:
> >>
> >> Another area of improvement for the agent would be to move away from
> >> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> >> and I talked about this, and re-writing ovs_lib to use an OVSDB
> >> connection instead of the CLI methods would be a huge improvement
> >> here. I'm not sure if Terry was going to move forward with this, but
> >> I'd be in favor of this for Juno if he or someone else wants to move
> >> in this direction.
> >>
> >> Thanks,
> >> Kyle
> >>
> >>
> >> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
> >> wrote:
> >> > We've started doing this in a slightly more reasonable way for icehouse.
> >> > What we've done is:
> >> > - remove unnecessary notification from the server
> >> > - process all port-related events, either trigger via RPC or via monitor
> >> > in
> >> > one place
> >> >
> >> > Obviously there is always a lot of room for improvement, and I agree
> >> > something along the lines of what Zang suggests would be more
> >> > maintainable
> >> > and ensure faster event processing as well as making it easier to have
> >> > some
> >> > form of reliability on event processing.
> >> >
> >> > I was considering doing something for the ovs-agent again in Juno, but
> >> > since
> >> > we've moving towards a unified agent, I think any new "big" ticket
> >> > should
> >> > address this effort.
> >> >
> >> > Salvatore
> >> >
> >> >
> >> > On 17 June 2014 13:31, Zang MingJie  wrote:
> >> >>
> >> >> Hi:
> >> >>
> >> >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
> >> >> intent to rebuild a more stable flexible agent.
> >> >>
> >> >> Taking the experience of ovs-agent bugs, I think the concurrency
> >> >> problem is also a very important problem, the agent gets lots of event
> >> >> from different greenlets, the rpc, the ovs monitor or the main loop.
> >> >> I'd suggest to serialize all event to a queue, then process events in
> >> >> a dedicated thread. The thread check the events one by one ordered,
> >> >> and resolve what has been changed, then apply the corresponding
> >> >> changes. If there is any error occurred in the thread, discard the
> >> >> current processing event, do a fresh start event, which reset
> >> >> everything, then apply the correct settings.
> >> >>
> >> >> The threading model is so important and may prevent tons of bugs in
> >> >> the future development, we should describe it clearly in the
> >> >> architecture
> >> >>
> >> >>
> >> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> >> >> wrote:
> >> >> > Following the discussi
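Vivek's point quoted above, enhancing the Flow API to return fine-grained state instead of fire-and-forget CLI invocations, could look roughly like this (names are illustrative, not the actual OVS-Lib interface):

```python
# Rough sketch: flow calls returning structured state the agent can act
# on, instead of silently losing flows. `_send` stands in for whatever
# channel (OVSDB/OpenFlow) carries the operation.
from collections import namedtuple

FlowResult = namedtuple("FlowResult", "ok code detail")

def add_flow(table, match, actions, _send=lambda *a: (0, "")):
    """Install a flow and report success/failure explicitly."""
    code, detail = _send(table, match, actions)
    return FlowResult(ok=(code == 0), code=code, detail=detail)

res = add_flow(0, "in_port=1", "normal")
print(res.ok, res.code)        # -> True 0

# An agent could then retry or trigger a resync on failure:
failed = add_flow(0, "in_port=2", "drop",
                  _send=lambda *a: (1, "bridge missing"))
print(failed.ok, failed.detail)  # -> False bridge missing
```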

Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints

2014-06-18 Thread Carlos Gonçalves
I’ve added Joao Soares (Portugal Telecom) and myself (Instituto de 
Telecomunicacoes) to https://wiki.openstack.org/wiki/Sprints/ParisJuno2014 for 
a Neutron and NFV meetup.
Please add yourselves as well so that we can have a better idea of who’s 
showing interest in participating.

Thanks,
Carlos Goncalves

On 17 Jun 2014, at 18:20, Sylvain Afchain  wrote:

> Hi,
> 
> +1 for Paris, since a mid-cycle sprint is already being hosted and organised 
> by eNovance :)
> 
> Sylvain
> 
> - Original Message -
>> From: "Dmitry" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Sent: Sunday, June 15, 2014 3:40:43 PM
>> Subject: Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints
>> 
>> +1 for Paris/Lisbon
>> 
>> On Sun, Jun 15, 2014 at 4:27 PM, Gary Kotton  wrote:
>>> 
>>> 
>>> On 6/14/14, 1:05 AM, "Anita Kuno"  wrote:
>>> 
 On 06/13/2014 05:58 PM, Carlos Gonçalves wrote:
> Let me add to what I've said in my previous email, that Instituto de
> Telecomunicacoes and Portugal Telecom are also available to host and
> organize a mid cycle sprint in Lisbon, Portugal.
> 
> Please let me know who may be interested in participating.
> 
> Thanks,
> Carlos Goncalves
> 
> On 13 Jun 2014, at 10:45, Carlos Gonçalves  wrote:
> 
>> Hi,
>> 
>> I like the idea of arranging a mid cycle for Neutron in Europe
>> somewhere in July. I was also considering inviting folks from the
>> OpenStack NFV team to meet up for a F2F kick-off.
>> 
>> I did not know about the sprint being hosted and organised by eNovance
>> in Paris until just now. I think it is a great initiative from eNovance
>> even because it¹s not being focused on a specific OpenStack project.
>> So, I'm interested in participating in this sprint for discussing
>> Neutron and NFV. Two more people from Instituto de Telecomunicacoes and
>> Portugal Telecom have shown interested too.
>> 
>> Neutron and NFV team members, who's interested in meeting in Paris, or
>> if not available on the date set by eNovance in other time and place?
>> 
>> Thanks,
>> Carlos Goncalves
>> 
>> On 13 Jun 2014, at 08:42, Sylvain Bauza  wrote:
>> 
>>> On 12/06/2014 15:32, Gary Kotton wrote:
 Hi,
 There is the mid cycle sprint in July for Nova and Neutron. Anyone
 interested in maybe getting one together in Europe/Middle East around
 the same dates? If people are willing to come to this part of the
 world I am sure that we can organize a venue for a few days. Anyone
 interested. If we can get a quorum then I will be happy to try and
 arrange things.
 Thanks
 Gary
 
>>> 
>>> 
>>> Hi Gary,
>>> 
>>> Wouldn't it be more interesting to have a mid-cycle sprint *before*
>>> the Nova one (which is targeted after juno-2) so that we could discuss
>>> on some topics and make a status to other folks so that it would allow
>>> a second run ?
>>> 
>>> There is already a proposal in Paris for hosting some OpenStack
>>> sprints, see https://wiki.openstack.org/wiki/Sprints/ParisJuno2014
>>> 
>>> -Sylvain
>>> 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
 Neutron already has two sprints scheduled:
 https://wiki.openstack.org/wiki/Sprints
>>> 
>>> Those sprints are both in the US. It is a very long way to travel. If
>>> there are a group of people that can get together in Europe then it would
>>> be great.
>>> 
 
 Thanks,
 Anita.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstac

Re: [openstack-dev] [Fuel] Support for plugins in fuel client

2014-06-18 Thread Igor Kalnitsky
Hi guys,

Actually, I'm not a fan of cliff, but I think it's a good solution to use
it in our fuel client.

Here some pros:

* pluggable design: we can encapsulate entire command logic in separate
plugin file
* builtin output formatters: we don't need to implement various formatters to
represent received data
* interactive mode: cliff makes possible to provide a shell mode, just like
psql do

Well, I vote to use cliff inside the fuel client. Yeah, I know, we need to
rewrite a lot of code, but we can do it step-by-step.

- Igor
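The pluggable design Igor lists can be sketched in the cliff style as follows; the 'fuelclient' entry-point namespace and the NodeList command are illustrative assumptions, not the real fuel client API, and a local stub base class keeps the sketch runnable without cliff installed:

```python
# Sketch of a pluggable command in the cliff style.
try:
    from cliff.command import Command      # real base class when cliff is present
except ImportError:                        # local stub so the sketch runs anywhere
    class Command(object):
        def __init__(self, app=None, app_args=None):
            self.app, self.app_args = app, app_args

class NodeList(Command):
    """List nodes known to the Fuel master (illustrative)."""

    def take_action(self, parsed_args):
        # a real command would call the Fuel API client here
        return ["node-1", "node-2"]

# A separate package would register this via an entry point in setup.py:
#   entry_points={"fuelclient": ["node list = mypkg.commands:NodeList"]}
# and cliff's CommandManager("fuelclient") would discover it at runtime,
# which is the stevedore-like mechanism Dmitriy mentions.

print(NodeList(None, None).take_action(None))  # -> ['node-1', 'node-2']
```

Installing another package with the same entry-point namespace is then all it takes to add new actions to the client.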




On Wed, Jun 18, 2014 at 9:14 AM, Dmitriy Shulyak 
wrote:

> Hi folks,
>
> I am wondering what our story/vision for plugins in fuel client [1]?
>
> We can benefit from using cliff [2] as framework for fuel cli, apart from
> common code
> for building cli applications on top of argparse, it provides nice feature
> that allows to
> dynamicly add actions by means of entry points (stevedore-like).
>
> So we will be able to add new actions for fuel client simply by installing
> separate packages with correct entry points.
>
> Afaik stevedore is not used there, but i think it will be - cause of same
> author and maintainer.
>
> Do we need this? Maybe there is other options?
>
> Thanks
>
> [1] https://github.com/stackforge/fuel-web/tree/master/fuelclient
> [2]  https://github.com/openstack/cliff
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] pacemaker management tools

2014-06-18 Thread Howley, Tom
Jan/Adam,

Is cibadmin available in the different distros? It can be used to update the
CIB based on an XML description of the full pacemaker config. I have used it on
ubuntu in the past and found it more reliable than crm commands for automated
deployment/configuration of pacemaker clusters. It also has a patch facility,
which I haven't used.

I wouldn't have assumed that the pacemaker config needs to be a static file
baked into an image. If cibadmin is an option, the different elements requiring
pacemaker control could supply their relevant XML snippets (based on config
values supplied via heat) and a pacemaker/pacemaker-config element could apply
those XML configs to the running cluster (with checks for resource naming
clashes, etc.). Does that sound like a possible approach?

Tom
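A sketch of the per-element workflow described above, using standard cibadmin flags; the resource snippet, the OCF provider/type, and the file names are illustrative assumptions, and the cluster-touching commands are guarded so the sketch runs without pacemaker:

```shell
# Each element supplies its XML snippet; a shared element applies it.
set -e

cat > /tmp/l3-agent.xml <<'EOF'
<primitive id="neutron-l3-agent" class="ocf" provider="openstack" type="NeutronL3Agent"/>
EOF

if command -v cibadmin >/dev/null 2>&1; then
    cibadmin --query > /tmp/cib.xml                              # dump the running CIB
    cibadmin --create -o resources --xml-file /tmp/l3-agent.xml  # add just this snippet
else
    echo "cibadmin not available; snippet prepared only"
fi
cat /tmp/l3-agent.xml
```

Scoping the update to the resources section (-o resources) is what lets elements contribute snippets independently instead of baking one static CIB into the image.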

-Original Message-
From: Jan Provaznik [mailto:jprov...@redhat.com] 
Sent: 13 June 2014 13:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] pacemaker management tools

On 06/12/2014 09:37 PM, Adam Gandelman wrote:
> It's been a while since I've used these tools and I'm not 100% surprised
> they've fragmented once again. :)  That said, does pcs support creating
> the CIB configuration in bulk from a file? I know that crm shell would
> let you dump the entire cluster config and restore from file.  Unless
> the CIB format has differs now, couldn't we just create the entire thing
> first and use a single pcs or crm command to import it to the cluster,
> rather than building each resource command-by-command?
>

That is an interesting idea. But I'm afraid that this can't be used in the
TripleO use-case. We would have to keep the whole cluster definition as
a static file which would be included when building the overcloud image.
Keeping this static definition up-to-date sounds like a complex task.
Also this would make impossible any customization based on the
elements used. For example, if there are 2 elements which use pacemaker,
neutron-l3-agent and ceilometer-agent-central, then I couldn't use them
separately.

Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [metrics] How to group activity in git/gerrit repositories

2014-06-18 Thread Thierry Carrez
Stefano Maffulli wrote:
> On 06/16/2014 12:25 PM, Ilya Shakhat wrote:
>> Most of the groups are created from the official programs.yaml. Every
>> program turns into an item in the module list (colored in violet), for
>> example 'Nova Compute' is a group
>> containing 'nova', 'python-novaclient' and 'nova-specs'. Every type of
>> repo (integrated, incubated and others) turns into the project type, for
>> example 'integrated' type would contain all modules for a chosen release.
> 
> Thanks for clarifying that, I suspected that was the case. I don't think
> it makes much sense to count the *-specs repositories together with code
> in the program but probably they don't move the needle that much. In any
> case, I'm having specs not counted on Activity Board.

It all depends on the analysis you want to make, but counting -specs in
the same bucket as main code is not completely weird. Before specs
existed we would just have extra review iterations (or additional
patches) due to lack of upfront design. Now we formally design first,
but that's an integral part of feature development and it involves the
same group of people.

> I also am not fully convinced that the clients and their parent project
> should be counted together as I suspect different set of people work on
> them and they have different behavior. Again, the difference may be too
> small to justify adding complexity to the reports but I would like to
> see that difference quantified precisely first.

People working on clients are arguably a subset of people working on
main code, but they live under the same roof with the same big daddy. We
have been considering moving all client projects under a specific client
tools program though, so having a way to analyze their behavior
separately surely will help as a data point.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Steven Hardy
On Wed, Jun 18, 2014 at 11:04:15AM +0200, Thierry Carrez wrote:
> Russell Bryant wrote:
> > On 06/17/2014 08:20 AM, Daniel P. Berrange wrote:
> >> On Tue, Jun 17, 2014 at 01:12:45PM +0100, Matthew Booth wrote:
> >>> On 17/06/14 12:36, Sean Dague wrote:
>  It could go in the commit message:
> 
>  TrivialFix
> 
>  Then could be queried with - 
>  https://review.openstack.org/#/q/message:TrivialFix,n,z
> 
>  If a reviewer felt it wasn't a trivial fix, they could just edit
>  the commit message inline to drop it out.
> >>
> >> Yes, that would be a workable idea.
> >>
> >>> +1. If possible I'd update the query to filter out anything with a -1.
> >>>
> >>> Where do we document these things? I'd be happy to propose a docs update.
> >>
> >> Lets see if any other nova cores dissent, but then can add it to these 2
> >> wiki pages
> >>
> >>   https://wiki.openstack.org/wiki/ReviewChecklist
> >>   
> >> https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references
> > 
> > Seems reasonable to me.
> > 
> > Of course, I just hope it doesn't put reviewers in a mode of only
> > looking for the trivial stuff and helping less with the big stuff.
> 
> As an aside, we don't really need two core reviewers to bless a trivial
> change: one could be considered sufficient. So a patch marked as trivial
> which has a number of +1s could be +2/APRVed directly by a core reviewer.
> 
> That would slightly reduce load on core reviewers, although I suspect
> most of the time is spent on complex patches, and trivial patches do not
> take that much time to process (or could even be seen as a nice break
> from more complex patch reviewing).

Agreed, I think this actually would help improve velocity in many cases,
provided there is sufficient common understanding of what constitutes a
trivial patch.

The other situation where this may make sense is when a non-trivial change has
already been widely reviewed and approved and then needs a last-minute minor
rebase to resolve a simple merge conflict (e.g. due to all these trivial
patches suddenly landing really fast.. ;)

We discussed this in the Heat team a while back and agreed that having one
core re-review and approve the patch was sufficient, and basically that
folks could use their discretion in this situation.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] [Murano] Follow up on cross-project session

2014-06-18 Thread Thierry Carrez
Ruslan Kamaldinov wrote:
> [...]
> On top of those Murano provides the following features:
> - allow users to combine various packages from catalog by using
> capabilities and requirements of applications
> - provide easy-to-use rich UI for end users who don’t necessarily have
> understanding of the underlying cloud infrastructure
> - Murano "knows" how to merge different packages and generates Heat
> template to deploy the environment, which in terms of Murano is a
> logical aggregation of multiple applications
> - as an application catalog allows app publishers and cloud owners to
> certify and license packages, provide additional partner information
> - allow to define billing rules. Murano can generate events predefined
> by app publisher to Ceilometer and integrate with 3rd party billing
> systems to bill users based on Ceilometer statistics

Thanks Ruslan, that's really helpful. Asking a few more questions to
make sure I got it right.

So to take a practical example, Murano lets you pick (using UI or CLI) a
wordpress package (which requires a DB) and compose it with a mysql
package (which provides a DB), and will deploy that composition using
Heat? And additionally, it provides package-publisher-friendly features
like certification, licensing and billing?

> - third-party services plumbing to support integration with APIs, both
> in stack, like Trove, and external

Does that mean, to come back to my example above, that we could
substitute a Trove resource for the mysql package? Or put a Neutron
LBaaS load balancer on top? Or publish a DNS entry via Designate?

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing

2014-06-18 Thread Luke Gorrie
On 17 June 2014 09:55, Luke Gorrie  wrote:

> I have a problem that appeared at the same time and may be related? "testr
> list-tests" in the tempest directory is failing with an obscure error
> message. Seems to be exactly the situation described here:
> https://bugs.launchpad.net/subunit/+bug/1278539
>
> Any tips?
>

How should I go about getting help with this? Mailing list + IRC is not
getting anybody's attention and I want to get my CI back online.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [FUEL] Zabbix in MOS meeting notes

2014-06-18 Thread Alexander Kislitsky
18.06.2014

Participants:
Szymon Banka,
Bartek Kupidura,
Dmitry Nikishov
Alexander Kislitsky

We discussed the limitations of the current implementation, timelines, and the
integration workflow.
Colleagues are going to build a custom ISO with the current Zabbix monitoring
implementation, test it, and add review comments.
Next week we plan to review and probably merge an improvement of Zabbix
monitoring based on the implementation for Ericsson.
For HA clusters the Zabbix server should be installed on the controller nodes.
This requirement will be researched and implemented in the nailgun part.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Thierry Carrez
Russell Bryant wrote:
> On 06/17/2014 08:20 AM, Daniel P. Berrange wrote:
>> On Tue, Jun 17, 2014 at 01:12:45PM +0100, Matthew Booth wrote:
>>> On 17/06/14 12:36, Sean Dague wrote:
 It could go in the commit message:

 TrivialFix

 Then could be queried with - 
 https://review.openstack.org/#/q/message:TrivialFix,n,z

 If a reviewer felt it wasn't a trivial fix, they could just edit
 the commit message inline to drop it out.
>>
>> Yes, that would be a workable idea.
>>
>>> +1. If possible I'd update the query to filter out anything with a -1.
>>>
>>> Where do we document these things? I'd be happy to propose a docs update.
>>
>> Lets see if any other nova cores dissent, but then can add it to these 2
>> wiki pages
>>
>>   https://wiki.openstack.org/wiki/ReviewChecklist
>>   
>> https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references
> 
> Seems reasonable to me.
> 
> Of course, I just hope it doesn't put reviewers in a mode of only
> looking for the trivial stuff and helping less with the big stuff.

As an aside, we don't really need two core reviewers to bless a trivial
change: one could be considered sufficient. So a patch marked as trivial
which has a number of +1s could be +2/APRVed directly by a core reviewer.

That would slightly reduce load on core reviewers, although I suspect
most of the time is spent on complex patches, and trivial patches do not
take that much time to process (or could even be seen as a nice break
from more complex patch reviewing).

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug squashing day on June, 17

2014-06-18 Thread Roman Podoliaka
Hi Fuelers,

Not directly related to bug squashing day, but something to keep in mind.

AFAIU, both MOS and Fuel bugs are currently tracked under
https://bugs.launchpad.net/fuel/ Launchpad project page. Most bugs
filed there are probably deployment-specific, but still I bet there are
a lot of bugs in OS projects you run into. If you could tag those
using OS project names (e.g. you already have the 'neutron' tag, but
not a 'nova' one) when triaging new bugs, that would greatly help us
find and fix them in both MOS and upstream projects.

Thanks,
Roman

On Wed, Jun 18, 2014 at 8:04 AM, Mike Scherbakov
 wrote:
> Fuelers,
> please pay attention to stalled in progress bugs too - those which are In
> progress for more than a week. See [1].
>
>
> [1]
> https://bugs.launchpad.net/fuel/+bugs?field.searchtext=&orderby=date_last_updated&search=Search&field.status%3Alist=INPROGRESS&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
>
>
> On Wed, Jun 18, 2014 at 8:43 AM, Mike Scherbakov 
> wrote:
>>
>> Thanks for participation, folks.
>> Current count:
>> New - 12
>> Incomplete - 30
>> Confirmed / Triaged / in progress for 5.1 - 368
>>
>> I haven't logged how many bugs we had, but I calculated that 26 bugs were
>> filed over the last 24 hours.
>>
>> Overall, it seems we did a good job of triaging, but the results for fixing
>> bugs are not that impressive. I'm inclined to think about another run, say,
>> next Tuesday.
>>
>>
>>
>> On Tue, Jun 17, 2014 at 7:12 AM, Mike Scherbakov
>>  wrote:
>>>
>>> Current count:
>>> New - 56
>>> Incomplete - 48
>>> Confirmed/Triaged/In progress for 5.1 - 331
>>>
>>> Let's squash as many as we can!
>>>
>>>
>>> On Mon, Jun 16, 2014 at 6:16 AM, Mike Scherbakov
>>>  wrote:

 Fuelers,
 as we discussed during last IRC meeting, I'm scheduling bug squashing
 day on Tuesday, June 17th.

 I'd like to propose the following order of bugs processing:

 Confirm / triage bugs in New status, assigning them to yourself to avoid
 the situation where a few people work on the same bug
 Review bugs in Incomplete status, move them to Confirmed / Triaged or
 close as Invalid.
 Follow https://wiki.openstack.org/wiki/BugTriage for the rest (this is
 MUST read for those who have not done it yet)

 When we are more or less done with triaging, we can start proposing
 fixes for bugs. I suggest using the #fuel-dev IRC channel extensively for
 synchronization; while someone is fixing bugs, others can
 review those fixes. Don't hesitate to ask for code reviews.

 Regards,
 --
 Mike Scherbakov
 #mihgen

>>>
>>>
>>>
>>> --
>>> Mike Scherbakov
>>> #mihgen
>>>
>>
>>
>>
>> --
>> Mike Scherbakov
>> #mihgen
>>
>
>
>
> --
> Mike Scherbakov
> #mihgen
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Quick Survey: Horizon Mid-Cycle Meetup

2014-06-18 Thread Jaromir Coufal

Hello folks,

there have been a few discussions about meeting during the cycle to discuss
ongoing issues, progress, and next steps on bigger topics.


Since this is a pretty late announcement, I don't think (and I guess we
agreed) that it is worth organizing a special event just for
Horizon. So it was suggested that we join the mid-cycle Sprint in Paris
(July 2-4).


My quick questions are:
* Who would be interested (and able) to get to the meeting?
* What topics do we want to discuss?

https://etherpad.openstack.org/p/horizon-juno-meetup

Please fill in the information as soon as possible, preferably by the end of
this week (Friday), so that people can start on travel arrangements if
we have a reasonable number of participants.


Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-06-18 Thread Gary Kotton
Hi,
I have encountered a problem with string substitution in the nova 
configuration file. The motivation was to move all of the glance settings to 
their own section (https://review.openstack.org/#/c/100567/). The 
glance_api_servers option has a default that is built from the current 
glance_host and glance_port values, which is a problem when those move to 
the 'glance' section.
First and foremost, I think we need to decide how we should denote 
string substitutions for group variables; then we can dive into 
implementation details. Does anyone have any thoughts on this?

My thinking is that we should use a format of $group.option. An
example is below.

Original code:

cfg.ListOpt('glance_api_servers',
            default=['$glance_host:$glance_port'],
            help='A list of the glance api servers available to nova. '
                 'Prefix with https:// for ssl-based glance api servers. '
                 '([hostname|ip]:port)'),

Proposed change (in the glance section):
cfg.ListOpt('api_servers',
            default=['$glance.host:$glance.port'],
            help='A list of the glance api servers available to nova. '
                 'Prefix with https:// for ssl-based glance api servers. '
                 '([hostname|ip]:port)',
            deprecated_group='DEFAULT',
            deprecated_name='glance_api_servers'),

This would require some preprocessing on the oslo.cfg side to be able to 
understand that 'glance' in $glance.host refers to the group and 'host' is 
the requested value in that group.
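To illustrate the kind of preprocessing that would be needed, here is a
minimal sketch of a resolver for such $group.option references. This is
hypothetical illustration code, not the actual oslo.cfg implementation;
the dict-of-dicts conf mapping and the fallback of bare $option to the
DEFAULT group are my assumptions:

```python
import re

# Hypothetical resolver -- a sketch of the preprocessing step described
# above, not the actual oslo.cfg code.  Matches "$group.option" and
# plain "$option" tokens in a string value.
_REF = re.compile(r'\$(?:(?P<group>\w+)\.)?(?P<option>\w+)')

def resolve(value, conf):
    """Expand $group.option (and bare $option) references in 'value'.

    'conf' is a mapping of group name -> {option: value}; a bare
    $option reference falls back to the DEFAULT group (assumption).
    """
    def _sub(match):
        group = match.group('group') or 'DEFAULT'
        return str(conf[group][match.group('option')])
    return _REF.sub(_sub, value)

conf = {
    'DEFAULT': {'glance_host': '10.0.0.1', 'glance_port': 9292},
    'glance': {'host': '10.0.0.2', 'port': 9292},
}

print(resolve('$glance_host:$glance_port', conf))  # 10.0.0.1:9292
print(resolve('$glance.host:$glance.port', conf))  # 10.0.0.2:9292
```

With a grammar like this, a bare $option stays backwards-compatible
while $group.option becomes an explicit cross-group reference.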

Thanks
Gary

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Daniel P. Berrange
On Tue, Jun 17, 2014 at 12:55:26PM -0400, Russell Bryant wrote:
> On 06/17/2014 12:22 PM, Joe Gordon wrote:
> > 
> > 
> > 
> > On Tue, Jun 17, 2014 at 3:56 AM, Duncan Thomas  > > wrote:
> > 
> > A far more effective way to reduce the load of trivial review issues
> > on core reviewers is for none-core reviewers to get in there first,
> > spot the problems and add a -1 - the trivial issues are then hopefully
> > fixed up before a core reviewer even looks at the patch.
> > 
> > The fundamental problem with review is that there are more people
> > submitting than doing regular reviews. If you want the review queue to
> > shrink, do five reviews for every one you submit. A -1 from a
> > none-core (followed by a +1 when all the issues are fixed) is far,
> > far, far more useful in general than a +1 on a new patch.
> > 
> > 
> > ++
> > 
> > I think this thread is trying to optimize for the wrong types of
> > patches.  We shouldn't be focusing on making trivial patches land
> > faster, but rather more important changes such as bugs and blueprints.
> > As some simple code motion won't directly fix any users issue such as
> > bugs or missing features.
> 
> In fact, landing easier and less important changes causes churn in the
> code base that can make the more important bugs and blueprints even *harder*
> to get done.

None the less I think it is worthwhile having a way to tag trivial
bugs so we can easily identify them. IMHO if there's a way we can
improve turnaround time on such bugs it is worth it, if only to
stop authors getting depressed with the wait for trivial/obvious
fixes.

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-18 Thread Matthew Booth
On 17/06/14 17:55, Russell Bryant wrote:
> On 06/17/2014 12:22 PM, Joe Gordon wrote:
>>
>>
>>
>> On Tue, Jun 17, 2014 at 3:56 AM, Duncan Thomas > > wrote:
>>
>> A far more effective way to reduce the load of trivial review issues
>> on core reviewers is for none-core reviewers to get in there first,
>> spot the problems and add a -1 - the trivial issues are then hopefully
>> fixed up before a core reviewer even looks at the patch.
>>
>> The fundamental problem with review is that there are more people
>> submitting than doing regular reviews. If you want the review queue to
>> shrink, do five reviews for every one you submit. A -1 from a
>> none-core (followed by a +1 when all the issues are fixed) is far,
>> far, far more useful in general than a +1 on a new patch.
>>
>>
>> ++
>>
>> I think this thread is trying to optimize for the wrong types of
>> patches.  We shouldn't be focusing on making trivial patches land
>> faster, but rather more important changes such as bugs and blueprints.
>> As some simple code motion won't directly fix any users issue such as
>> bugs or missing features.
> 
> In fact, landing easier and less important changes causes churn in the
> code base that can make the more important bugs and blueprints even *harder*
> to get done.

I see 3 principal advantages to getting trivial changes out of the queue
quickly:

* It reduces unnecessary rebases. The longer your code motion patch
languishes, the more times you're going to have to rebase it.

* It encourages submitters to break patches down in to a larger number
of smaller pieces. This makes it much simpler to understand and validate
a large change[1].

* It reduces frustration. It is soul destroying to have to wait weeks
for somebody to agree that you fixed a typo[2]. Unhappy developers can
be poisonous.

> In the end, as others have said, the biggest problem by far is just that
> we need more of the right people reviewing code.

Agreed, but a resource squeeze is often a good time to seek
optimisations. A small improvement is still an improvement :)

Matt

[1] This series is very nice: https://review.openstack.org/#/c/98604/

[2] In fact, I'm aware of a significant amount of cleanup which hasn't
happened because of this.
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] An alternative approach to enforcing "expected election behaviour"

2014-06-18 Thread Thierry Carrez
James E. Blair wrote:
> I think our recent experience has shown that the fundamental problem is
> that not all of the members of our community knew what kind of behavior
> we expected around elections.  That's understandable -- we had hardly
> articulated it.  I think the best solution to that is therefore to
> articulate and communicate that.
> 
> I believe Anita's proposal starts off by doing a very good job of
> exactly that, so I would like to see a final resolution based on that
> approach with very similar text to what she has proposed.  That
> statement of expected behavior should then be communicated by election
> officials to all participants in announcements related to all elections.
> Those two simple acts will, I believe, suffice to address the problem we
> have seen.
> 
> I do agree that a heavy bureaucracy is not necessary for this.  Our
> community has a Code of Conduct established and administered by the
> Foundation.  I think we should focus on minimizing additional process
> and instead try to make this effort slot into the existing framework as
> easily as possible by expecting the election officials to forward
> potential violations to the Foundation's Executive Director (or
> delegate) to handle as they would any other potential CoC violation.

+1

The community code of conduct states:

"""Respect the election process. Members should not attempt to
manipulate election results. Open debate is welcome, but vote trading,
ballot stuffing and other forms of abuse are not acceptable."""

Maybe just clarifying what we mean by "open debate" and giving examples
of what we would consider "other forms of abuse" in the context of the
TC elections is actually sufficient. Then voters can judge abuse on
their own in their vote (reputational pressure) *and* we have an
established process (the alleged violation of the community code of
conduct) to escalate to in case we really need to (institutional pressure).

I think the first part of Anita's draft captures that very well, so
maybe that's all we need. I really think that documenting and better
communicating expectations will actually avoid problems in the future.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron ML2 plugin is failing to configure vif on the instances when two mechanism drivers for different hypervisors are configured in the ml2_conf.ini

2014-06-18 Thread Srivastava, Abhishek
Hi,

I am trying out Icehouse on Ubuntu 14.04. My controller has two compute nodes 
attached: one Hyper-V and one KVM. The Hyper-V node runs the hyperv neutron 
agent as its L2 agent, and the KVM node runs the OVS agent.

The ML2 plugin's conf.ini lets you specify both hyperv and openvswitch, 
separated by a comma, as mechanism drivers on the controller node, but it 
seems that only one of them takes effect at a time. With 'mechanism_drivers = 
hyperv, openvswitch', only Hyper-V nova instances get a port attached to the 
vNIC while KVM instances fail; with 'mechanism_drivers = openvswitch, hyperv', 
only KVM nova instances get a port attached to the vNIC while Hyper-V 
instances don't get their network adapter configured.
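For context on why driver order can look significant: when binding a port,
ML2 tries the configured mechanism drivers in order, and the first driver
that succeeds wins. The sketch below is a deliberate simplification for
illustration, not the actual Neutron code -- in the real plugin each driver
checks the agents reported for the port's host, so with both agents alive
and reporting, the ordering alone should not break binding:

```python
# Simplified illustration of ML2's ordered first-match port-binding
# loop -- not the actual Neutron implementation.

class MechanismDriver:
    def __init__(self, name, agent_type):
        self.name = name
        self.agent_type = agent_type

    def try_bind(self, host_agent_type):
        # A real driver inspects the agents running on the port's host
        # (from the agents DB) and the requested network segment.
        return self.agent_type == host_agent_type

def bind_port(drivers, host_agent_type):
    """Return the name of the first driver that binds, or None."""
    for driver in drivers:
        if driver.try_bind(host_agent_type):
            return driver.name
    return None

drivers = [
    MechanismDriver('openvswitch', 'Open vSwitch agent'),
    MechanismDriver('hyperv', 'HyperV agent'),
]

# Each host should bind via the driver matching its own L2 agent,
# regardless of the order in mechanism_drivers:
print(bind_port(drivers, 'Open vSwitch agent'))  # openvswitch
print(bind_port(drivers, 'HyperV agent'))        # hyperv
```

If only the first-listed driver ever binds, that suggests the other
hypervisor's agent is not being seen as alive by the Neutron server.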

Please help me resolve this issue.

Thanks in advance.

Regards,
Abhishek

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] ML2 support in Fuel deployment risks

2014-06-18 Thread Mike Scherbakov
OK. If we don't have a more or less working solution by the beginning of next
week, let's start a second track to consume only the ML2 part, as a mitigation
plan.


On Tue, Jun 17, 2014 at 2:29 AM, Dmitry Borodaenko  wrote:

> Correction/clarification: call tomorrow is about
> multiple-cluster-networks, and it was a bad idea on my part to try to
> hijack that with neutron-ml2 discussion. Lets not do that and continue
> discussing the blueprint spec in gerrit, and hopefully by Thursday
> Andrew will have enough code out there to make the discussion more
> concrete. Link:
> https://review.openstack.org/99807
>
> On Mon, Jun 16, 2014 at 3:07 PM, Dmitry Borodaenko
>  wrote:
> > Mike,
> >
> > We discussed this in our team syncup meeting earlier today. The
> > agreement was that HA is the biggest risk with the current approach.
> > However, keeping our current state of divergence from upstream (and
> > even exaggerating it further) leaves us with a huge technical debt, so
> > the tradeoff between that and potential new neutron deployment issues
> > is not that obvious. Andrew is confident that he can port our HA
> > deployment code over to the current upstream puppet-neutron by the end
> > of this week, he's now updating the spec per review comments from the
> > team and we will have another meeting tomorrow morning (8am PT) to go
> > over all concerns and risks.
> >
> > -DmitryB
> >
> > On Mon, Jun 16, 2014 at 3:57 AM, Mike Scherbakov
> >  wrote:
> >> Fuelers, Andrew,
> >> I've talked to Sergey V. today about ML2 support in Fuel. Our current
> >> approach [1] is to port upstream puppet module for Neutron which has
> support
> >> of ML2, however our Neutron module is significantly diverged from
> upstream
> >> one (at least for Neutron HA deployment capabilities), as far as I
> >> understand. Basically, there is a risk that we will get unstable Neutron
> >> deployment in 5.1. Also, unless we have ML2, we are blocking others who
> rely
> >> on it, for example Mellanox.
> >>
> >> To mitigate the risk, there is a suggestion to start the work in two
> >> parallel tracks: one is to continue porting of upstream puppet module,
> and
> >> another one - port the only ML2 part into Fuel Neutron puppet module.
> This
> >> will not take much time, but will allow us to have 5.1 reliable and
> with ML2
> >> in case of instability after porting external module.
> >>
> >> Your opinion on this?
> >>
> >> [1] https://review.openstack.org/#/c/99807/1/specs/5.1/ml2-neutron.rst
> >> --
> >> Mike Scherbakov
> >> #mihgen
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Dmitry Borodaenko
>
>
>
> --
> Dmitry Borodaenko
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

