Re: [openstack-dev] [TripleO] Review metrics - what do we want to measure?

2014-09-02 Thread Jesus M. Gonzalez-Barahona
On Wed, 2014-09-03 at 12:58 +1200, Robert Collins wrote:
> On 14 August 2014 11:03, James Polley  wrote:
> > In recent history, we've been looking each week at stats from
> > http://russellbryant.net/openstack-stats/tripleo-openreviews.html to get a
> > gauge on how our review pipeline is tracking.
> >
> > The main stats we've been tracking have been the "since the last revision
> > without -1 or -2". I've included some history at [1], but the summary is
> > that our 3rd quartile has slipped from 13 days to 16 days over the last 4
> > weeks or so. Our 1st quartile is fairly steady lately, around 1 day (down
> > from 4 a month ago) and median is unchanged around 7 days.
> >
> > There was lots of discussion in our last meeting about what could be causing
> > this[2]. However, the thing we wanted to bring to the list for the
> > discussion is:
> >
> > Are we tracking the right metric? Should we be looking to something else to
> > tell us how well our pipeline is performing?
> >
> > The meeting logs have quite a few suggestions about ways we could tweak the
> > existing metrics, but if we're measuring the wrong thing that's not going to
> > help.
> >
> > I think that what we are looking for is a metric that lets us know whether
> > the majority of patches are getting feedback quickly. Maybe there's some
> > other metric that would give us a good indication?
> 
> If we review all patches quickly and land none, that's bad too :).
> 
> For the reviewers specifically I think we need a metric(s) that:
>  - doesn't go bad when submitters go awol, don't respond etc
>- including when they come back - our stats shouldn't jump hugely
> because an old review was resurrected
>  - when good means submitters will be getting feedback
>  - flag inventory- things we'd be happy to have landed that haven't
>- including things with a -1 from non-core reviewers (*)
> 
> (*) I often see -1's on things core wouldn't -1 due to the learning
> curve involved in becoming core
> 
> So, as Ben says, I think we need to address the its-not-a-vote issue
> as a priority, that has tripped us up in lots of ways
> 
> I think we need to discount -workflow patches where that was set by
> the submitter, which AFAICT we don't do today.
> 
> Looking at current stats:
> Longest waiting reviews (based on oldest rev without -1 or -2):
> 
> 54 days, 2 hours, 41 minutes https://review.openstack.org/106167
> (Keystone/LDAP integration)
> That patch had a -1 on Aug 16 1:23 AM, but was quickly turned to +2.
> 
> So this patch had a -1, then after discussion it became a +2. And it's
> evolved multiple times.
> 
> What should we be saying here? Clearly it's had little review input
> over its life, so I think it's sadly accurate.
> 
> I wonder if a big chunk of our sliding quartile is just us not
> reviewing the oldest reviews.

I've been researching the review process in OpenStack and other projects for
a while, and my impression is that at least three timing metrics are
relevant:

(1) Total time from submitting a patch to the final closing of the review
process (landing it, or a subsequent patch, or finally abandoning it).
This gives an idea of how the whole process is working.

(2) Time from submitting a patch to that patch being approved (+2 in
OpenStack, I guess) or declined (and a new patch is requested). This
gives an idea of how quickly reviewers provide definitive feedback to
patch submitters, and is a metric for each patch cycle.

(3) Time from a patch being reviewed, with a new patch being requested,
to a new patch being submitted. This gives an idea of the "reaction
time" of patch submitters.

Usually, you want to keep (1) low, while (2) and (3) give you an idea of
what is happening if (1) gets high.

There is another relevant metric in some cases, which is

(4) The number of patch cycles per review cycle (that is, how many
patches are needed per patch landing in master). In some cases, that may
help to explain how (2) and (3) contribute to (1).

And a fifth one gives you a "throughput" metric:

(5) BMI (backlog management index): the number of new review processes
divided by the number of closed review processes for a certain period. It
gives an idea of whether the backlog is going up (>1) or down (<1), and is
usually very interesting when seen over time.

(1) alone is not enough to assess how well the review process is working,
because it could be low while (5) shows an increasing backlog, simply
because new review requests come in too quickly (eg, in periods when
developers are submitting a lot of patch proposals after a freeze). (1)
could also be high while (5) shows a decrease in the backlog, because for
example reviewers or submitters are overworked or slow to schedule, but
the project still copes with the backlog. Depending on the relationship
of (1) and (5), maybe you need more reviewers, or reviewers scheduling
their reviews with more priority wrt other actions, or something else.
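
For illustration, here is a toy sketch (in Python, with made-up record
fields rather than Gerrit's actual schema) of how (1), (2) and (5) could
be computed:

    from datetime import datetime, timedelta

    # Hypothetical review records; in practice these would come from Gerrit.
    reviews = [
        {"submitted": datetime(2014, 8, 1), "closed": datetime(2014, 8, 9),
         "first_verdict": datetime(2014, 8, 4)},
        {"submitted": datetime(2014, 8, 20), "closed": None,
         "first_verdict": None},
    ]

    def total_time(r):       # metric (1), only defined for closed reviews
        return r["closed"] - r["submitted"] if r["closed"] else None

    def time_to_verdict(r):  # metric (2), per patch cycle
        return r["first_verdict"] - r["submitted"] if r["first_verdict"] else None

    def bmi(opened, closed):  # metric (5), for a given period
        return opened / closed if closed else float("inf")

    closed = [r for r in reviews if r["closed"]]
    print("avg total time:",
          sum((total_time(r) for r in closed), timedelta()) / len(closed))
    print("time to verdict (first review):", time_to_verdict(reviews[0]))
    print("BMI:", bmi(opened=len(reviews), closed=len(closed)))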

Note for example that in a project with low BMI (<1) for a long period,
but with a high tot

Re: [openstack-dev] Rally scenario Issue

2014-09-02 Thread masoom alam
Hi Ajay,

We are testing the same scenario that you are working on, but are getting the
following error:

http://paste.openstack.org/show/105029/

Could you be of any help here?

Thanks




On Wed, Sep 3, 2014 at 4:16 AM, Ajay Kalambur (akalambu)  wrote:

>  Hi Guys
> For the throughput tests I need to be able to install iperf on the cloud
> image. For this DNS server needs to be set. But the current network context
> should also support DNS name server setting
> Should we add that into network context?
> Ajay
>
>
>
>   From: Boris Pavlovic 
>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Friday, August 29, 2014 at 2:08 PM
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Cc: "Harshil Shah (harsshah)" 
> Subject: Re: [openstack-dev] Rally scenario Issue
>
>   Timur,
>
>  Thanks for pointing Ajay.
>
>  Ajay,
>
>   Also I cannot see this failure unless I run rally with –v –d object.
>
>
>  Actually rally is storing information about all failures. To get
> information about them you can run the following command:
>
>  *rally task results --pprint*
>
>  It will display all information about all iterations (including
> exceptions)
>
>
>   Second when most of the steps in the scenario failed like attaching to
>> network, ssh and run command why bother reporting the results
>
>
>  Because bad results are better than nothing...
>
>
>  Best regards,
> Boris Pavlovic
>
>
> On Sat, Aug 30, 2014 at 12:54 AM, Timur Nurlygayanov <
> tnurlygaya...@mirantis.com> wrote:
>
>>   Hi Ajay,
>>
>>  looks like you need to use the NeutronContext feature to configure Neutron
>> networks during the benchmark execution.
>>  We are now working on merging two different commits with the NeutronContext
>> implementation:
>> https://review.openstack.org/#/c/96300  and
>> https://review.openstack.org/#/c/103306
>>
>>  could you please apply commit https://review.openstack.org/#/c/96300
>> and run your benchmarks? Neutron Network with subnetworks and routers will
>> be automatically created for each created tenant and you should have the
>> ability to connect to VMs. Please, note, that you should add the following
>> part to your task JSON to enable Neutron context:
>> ...
>> "context": {
>> ...
>> "neutron_network": {
>> "network_cidr": "10.%s.0.0/16",
>> }
>> }
>> ...
>>
>>  Hope this will help.
>>
>>
>>
>>  On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) <
>> akala...@cisco.com> wrote:
>>
>>>   Hi
>>> I am trying to run the Rally scenario boot-runcommand-delete. This
>>> scenario has the following code
>>>   def boot_runcommand_delete(self, image, flavor,
>>>script, interpreter, username,
>>>fixed_network="private",
>>>floating_network="public",
>>>ip_version=4, port=22,
>>>use_floatingip=True, **kwargs):
>>>server = None
>>> floating_ip = None
>>> try:
>>> print "fixed network:%s floating network:%s"
>>> %(fixed_network,floating_network)
>>> server = self._boot_server(
>>> self._generate_random_name("rally_novaserver_"),
>>> image, flavor, key_name='rally_ssh_key', **kwargs)
>>>
>>>  *self.check_network(server, fixed_network)*
>>>
>>>  The question I have is the instance is created with a call to
>>> boot_server but no networks are attached to this server instance. Next step
>>> it goes and checks if the fixed network is attached to the instance and
>>> sure enough it fails
>>> At the step highlighted in bold. Also I cannot see this failure unless I
>>> run rally with –v –d object. So it actually reports benchmark scenario
>>> numbers in a table with no errors when I run with
>>> rally task start boot-and-delete.json
>>>
>>>  And reports results. First what am I missing in this case. Thing is I
>>> am using neutron not nova-network
>>> Second when most of the steps in the scenario failed like attaching to
>>> network, ssh and run command why bother reporting the results
>>>
>>>  Ajay
>>>
>>>
>>>  ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>>
>>  Timur,
>> QA Engineer
>> OpenStack Projects
>> Mirantis Inc
>>
>> [image: http://www.openstacksv.com/] 
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

[openstack-dev] [oslo] instance lock and class lock

2014-09-02 Thread Zang MingJie
Hi all:

currently oslo provides a lock utility, but unlike other languages it is a
class-level lock, which prevents all instances from calling the function
concurrently. IMO, oslo should also provide an instance-level lock that only
locks the current instance, to gain better concurrency.
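
For illustration, a minimal sketch of the difference using plain threading
primitives (this is just the concept, not the oslo lockutils API):

    import threading

    class ClassLocked:
        # One lock shared by every instance: two different instances
        # cannot run do_work() at the same time.
        _lock = threading.Lock()

        def do_work(self):
            with ClassLocked._lock:
                pass  # critical section

    class InstanceLocked:
        # One lock per instance: different instances run do_work()
        # concurrently; only calls on the *same* instance serialize.
        def __init__(self):
            self._lock = threading.Lock()

        def do_work(self):
            with self._lock:
                pass  # critical section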

I have written such a lock in a patch[1]; please consider picking it into oslo.

[1]
https://review.openstack.org/#/c/114154/4/neutron/openstack/common/lockutils.py
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Horizon] Ironic Horizon API

2014-09-02 Thread Akihiro Motoki
Thanks for the clarification. This input is really useful to the Horizon team too.
I understand the goal is to confirm that the beta implementation is headed in
the right direction.
It is a good starting point to discuss what it should be.

Thanks,
Akihiro


On Wed, Sep 3, 2014 at 2:02 PM, Josh Gachnang  wrote:
> Right, the ASAP part is just to have a working Horizon for Ironic (proposed,
> not merged) with most of the features we want as we move towards the vote
> for graduation. We definitely understand the end-of-cycle crunch, as we're
> dealing with the same in Ironic. I'm just looking for a general "This code
> looks reasonable" or "Woah, don't do it like that", not trying to get it
> merged this late in the cycle.
>
> As for graduation, as I understand, we need to have code proposed to Horizon
> that we can work to merge after we graduate.
>
> Thanks!
>
> ---
> Josh Gachnang
> Tech Blog: ServerCobra.com, @ServerCobra
> Github.com/PCsForEducation
>
>
> On Tue, Sep 2, 2014 at 9:37 PM, Jim Rollenhagen 
> wrote:
>>
>>
>>
>> On September 2, 2014 9:28:15 PM PDT, Akihiro Motoki 
>> wrote:
>> >Hi,
>> >
>> >Good to know we will have Ironic support. I can help the integration.
>> >
>> >Let me clarify the situation as Horizon core team. I wonder why it is
>> >ASAP.
>> >Horizon is released with integrated projects and it is true in Juno
>> >release
>> >too.
>> >Ironic is still incubated even if it is graduated for Kilo release.
>> >What is the requirement for graduation? More detail clarification is
>> >needed.
>> >All teams of the integrated projects are focusing on Juno releases and
>> >we all features will be reviewed after rc1 is shipped. The timing is a
>> >bit
>> >bad.
>>
>> Right, the Ironic team does not expect this to land in the Juno cycle. The
>> graduation requirement is that Ironic has made a good faith effort to work
>> toward a Horizon panel.
>>
>> We would like some eyes on the code to make sure we're moving the right
>> direction, but again, we don't expect this to land until Kilo.
>>
>> // jim
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Horizon] Ironic Horizon API

2014-09-02 Thread Josh Gachnang
Right, the ASAP part is just to have a working Horizon for Ironic
(proposed, not merged) with most of the features we want as we move towards
the vote for graduation. We definitely understand the end-of-cycle crunch,
as we're dealing with the same in Ironic. I'm just looking for a general
"This code looks reasonable" or "Woah, don't do it like that", not trying
to get it merged this late in the cycle.

As for graduation, as I understand, we need to have code proposed to
Horizon that we can work to merge after we graduate.

Thanks!

---
Josh Gachnang
Tech Blog: ServerCobra.com, @ServerCobra
Github.com/PCsForEducation


On Tue, Sep 2, 2014 at 9:37 PM, Jim Rollenhagen 
wrote:

>
>
> On September 2, 2014 9:28:15 PM PDT, Akihiro Motoki 
> wrote:
> >Hi,
> >
> >Good to know we will have Ironic support. I can help the integration.
> >
> >Let me clarify the situation as Horizon core team. I wonder why it is
> >ASAP.
> >Horizon is released with integrated projects and it is true in Juno
> >release
> >too.
> >Ironic is still incubated even if it is graduated for Kilo release.
> >What is the requirement for graduation? More detail clarification is
> >needed.
> >All teams of the integrated projects are focusing on Juno releases and
> >we all features will be reviewed after rc1 is shipped. The timing is a
> >bit
> >bad.
>
> Right, the Ironic team does not expect this to land in the Juno cycle. The
> graduation requirement is that Ironic has made a good faith effort to work
> toward a Horizon panel.
>
> We would like some eyes on the code to make sure we're moving the right
> direction, but again, we don't expect this to land until Kilo.
>
> // jim
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Horizon] Ironic Horizon API

2014-09-02 Thread Jim Rollenhagen


On September 2, 2014 9:28:15 PM PDT, Akihiro Motoki  wrote:
>Hi,
>
>Good to know we will have Ironic support. I can help the integration.
>
>Let me clarify the situation as Horizon core team. I wonder why it is
>ASAP.
>Horizon is released with integrated projects and it is true in Juno
>release
>too.
>Ironic is still incubated even if it is graduated for Kilo release.
>What is the requirement for graduation? More detail clarification is
>needed.
>All teams of the integrated projects are focusing on Juno releases and
>we all features will be reviewed after rc1 is shipped. The timing is a
>bit
>bad.

Right, the Ironic team does not expect this to land in the Juno cycle. The 
graduation requirement is that Ironic has made a good faith effort to work 
toward a Horizon panel. 

We would like some eyes on the code to make sure we're moving the right 
direction, but again, we don't expect this to land until Kilo. 

// jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [H][Neutron][IPSecVPN]Cannot tunnel two namespace Routers

2014-09-02 Thread Akihiro Motoki
It seems the -dev list is not an appropriate place to discuss this.
Please use the general list; I have replied on the general list.

On Wednesday, September 3, 2014, Germy Lure wrote:

> Hi Stackers,
>
> Network TOPO like this: VM1(net1)--Router1---IPSec VPN
> tunnel---Router2--VM2(net2)
> If left and right side deploy on different OpenStack environments, it
> works well. But in the same environment, Router1 and Router2 are namespace
> implement in the same network node. I cannot ping from VM1 to VM2.
>
> In R2(Router2), tcpdump tool tells us that R2 receives ICMP echo request
> packets but doesnt send them out.
>
> *7837C113-D21D-B211-9630-**00821800:~ # ip netns exec
> qrouter-4fd2e76e-37d0-4d05-**b5a1-dd987c0231ef tcpdump -i any *
> *tcpdump: verbose output suppressed, use -v or -vv for full protocol
> decode*
> *listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535
> bytes*
> * 11:50:14.853470 IP 10.10.5.2 > 10.10.5.3 :
> ESP(spi=0xc6d65c02,seq=0x1e6), length 132*
> *11:50:14.853470 IP 128.6.25.2 > 128.6.26.2 : ICMP echo
> request, id 44567, seq 486, length 64*
> * 11:50:15.853475 IP 10.10.5.2 > 10.10.5.3 :
> ESP(spi=0xc6d65c02,seq=0x1e7), length 132*
> *11:50:15.853475 IP 128.6.25.2 > 128.6.26.2 : ICMP echo
> request, id 44567, seq 487, length 64*
> * 11:50:16.853461 IP 10.10.5.2 > 10.10.5.3 :
> ESP(spi=0xc6d65c02,seq=0x1e8), length 132*
> *11:50:16.853461 IP 128.6.25.2 > 128.6.26.2 : ICMP echo
> request, id 44567, seq 488, length 64*
> * 11:50:17.853447 IP 10.10.5.2 > 10.10.5.3 :
> ESP(spi=0xc6d65c02,seq=0x1e9), length 132*
> *11:50:17.853447 IP 128.6.25.2 > 128.6.26.2 : ICMP echo
> request, id 44567, seq 489, length 64*
> * ^C*
> *8 packets captured*
> *8 packets received by filter*
> *0 packets dropped by kernel*
>
> ip addr in R2:
>
> 7837C113-D21D-B211-9630-00821800:~ # ip netns exec
> qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef ip addr
> 187: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> group default
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 206: qr-4bacb61c-72:  mtu 1500 qdisc noqueue state
> UNKNOWN group default
> link/ether fa:16:3e:23:10:97 brd ff:ff:ff:ff:ff:ff
> inet 128.6.26.1/24 brd 128.6.26.255 scope global qr-4bacb61c-72
> inet6 fe80::f816:3eff:fe23:1097/64 scope link
>valid_lft forever preferred_lft forever
> 208: qg-4abd4bb0-21:  mtu 1500 qdisc noqueue state
> UNKNOWN group default
> link/ether fa:16:3e:e6:cd:1a brd ff:ff:ff:ff:ff:ff
> inet 10.10.5.3/24 brd 10.10.5.255 scope global qg-4abd4bb0-21
> inet6 fe80::f816:3eff:fee6:cd1a/64 scope link
>valid_lft forever preferred_lft forever
>
>
> In addition, the kernel counter "/proc/net/snmp" in namespace is
> unchanged. These couters do not work well with namespace?
>
>
> BR,
> Germy
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Horizon] Ironic Horizon API

2014-09-02 Thread Akihiro Motoki
Hi,

Good to know we will have Ironic support. I can help the integration.

Let me clarify the situation as a Horizon core team member. I wonder why it is
ASAP. Horizon is released with the integrated projects, and that is true for
the Juno release too.
Ironic is still incubated, even if it graduates for the Kilo release.
What is the requirement for graduation? More detailed clarification is needed.
All teams of the integrated projects are focusing on the Juno release, and
new features will be reviewed after rc1 is shipped. So the timing is a bit
bad.

Thanks,
Akihiro

On Wednesday, September 3, 2014, Josh Gachnang wrote:

> Hey all,
>
> I published a patch to add an Ironic API wrapper in Horizon. Having code
> up for Horizon is a graduation requirement for Ironic, so I'd like some
> eyeballs on it to at least tell us we're going in the right direction. I
> understand this code won't land until after Ironic is integrated.
>
> Another developer is working on the Horizon panels and other parts, and
> will have them ASAP.
>
> Review: https://review.openstack.org/#/c/117376/
> ---
> Josh Gachnang
> Tech Blog: ServerCobra.com, @ServerCobra
> Github.com/PCsForEducation
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [releases] pbr, postversioning, integrated release workflow

2014-09-02 Thread Robert Collins
Hi, so as everyone knows I've been on an arc to teach pbr a lot more
about the semantic versioning we say we use (for all things) - with
the API servers being different, but a common set of code driving it.

We realised there's one quirky interaction today: since we fixed the
bug where pbr would create versions older than the release:
https://bugs.launchpad.net/pbr/+bug/1206730 we've made the big-bang
release somewhat harder.

I don't think betas are affected, only actual releases, and only for
projects using 'preversioning' - that's the API servers, with a version
number in setup.cfg.

Here's what happens.

The two interacting rules are this:
 - pbr will now error if it tries to create a version number lower
than the last release (and releases are found via the git tags in the
branch).
 - pbr treats preversion version numbers as a hard target it has to use

When we make a release (say 2014.1.0), we tag it, and we now have
setup.cfg containing that version, and that version tagged.

The very next patch will be one patch *after* that version, so it has
to be a patch for the next release. That makes the minimum legitimate
release version 2014.1.1, and the local version number will be a dev
build so 2014.1.1.dev1.g$shahere.

But the target it is required to use is 2014.1.0 - so we get a dev
build of that (2014.1.0.dev1.g$shahere) - and that's lower than the
last release (2014.1.0), and thus we trigger the error.
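
For illustration, the ordering that triggers the error can be seen with the
packaging library's PEP 440 rules (used here as a stand-in for pbr's own
comparison logic; the git sha suffix is omitted for simplicity):

    from packaging.version import Version

    # A dev build of the pinned preversion sorts *before* the release tag...
    assert Version("2014.1.0.dev1") < Version("2014.1.0")

    # ...while a dev build of the next point release sorts after it.
    assert Version("2014.1.1.dev1") > Version("2014.1.0")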

So, if we tag an API server branch with the same version it has in the
branch, any patch that attempts to change that branch will fail,
unless that patch is updating the version number in setup.cfg.

This interacts with the release process: as soon as we tag say nova
with the release tag (not a beta tag), all changes to nova will start
erroring as the dev versions will be bad. When we then tag neutron,
the same will happen there - it will effectively wipe the gate queue
clean except for patches fixing the version numbers.

This is needless to say fairly disruptive. I had initially been
concerned it would wedge things entirely - but unaltered branches will
get the release tag version and be ok, so we can correct things just
by submitting the needed patches - we'll want to promote them to the
front of the queue, for obvious reasons.

Going forward:

* We could just do the above - tag and submit a version fix

* We could submit the version fix as soon as the release sha is
chosen, before the tag
  - we could also wait for the version fixes to land before tagging

* We could change pbr to not enforce this check again
  - or we could add an option to say 'I don't care'

* We could remove the version numbers from setup.cfg entirely

* We could change pbr to treat preversion versions as a *minimum*
rather than a *must-reach*.

I'm in favour of the last of those options. It's quite a conceptual
change from the current definition, which is why we didn't do it
initially. The way it would work is that when pbr calculates the
minimum acceptable version based on the tags and sem-ver: headers in
git history, it would compare the result to the preversion version,
and if the preversion version is higher, take that. The impact would
be that if someone lands an ABI break on a stable-release branch, the
major version would have to be bumped - and for API servers we don't
want that. *but* that's something we should improve in pbr anyway -
teach it how we map semver onto the API server projects [e.g. that
major is the first two components of 2014.1.0, and minor and patch are
bundled together into the third component].

The reason I prefer the last option over the second last is that we
don't have any mechanism to skip versions other than tagging today -
and doing an alpha-0 tag at the opening of a new cycle just feels a
little odd to me.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-09-02 Thread Ian Wienand

On 09/03/2014 11:32 AM, Robert Collins wrote:

if-has-bash-hashbang-and-is-versioned-then-bashate-it?


That misses library files that aren't execed and have no #!

This might be an appropriate rule for test infrastructure to generate a 
list for their particular project, but IMO I don't think we need to 
start building that logic into bashate


-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-09-02 Thread Dean Troyer
On Tue, Sep 2, 2014 at 8:32 PM, Robert Collins 
wrote:

> Well, git knows all the files in-tree, right? Or am I missing something
> here?
>
> if-has-bash-hashbang-and-is-versioned-then-bashate-it?


It's not quite that simple: none of the include files have a shebang line;
I've always felt that having one in an include file is an indication that
the file is (also) a stand-alone script.   Shocco (the docs processor)
wants one too.

I think I've given up attempting to mock os.walk, so I'm going to post the
latest version of my bashateignore review, and we can use .bashateignore to
both exclude and include the files to be processed, using the
gitignore syntax.

Starting with the list of files in the repo is also an option, and
excluding from there...but I'm not going to have that tonight.
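
For what it's worth, a rough sketch of that "start from the list of files in
the repo" approach (not bashate's actual discovery code; the shebang
heuristic is an assumption, and as noted elsewhere in the thread it misses
sourced library files that have no #!):

    import subprocess

    def candidate_bash_files():
        """Yield git-tracked files whose first line looks like a bash shebang."""
        tracked = subprocess.check_output(["git", "ls-files", "-z"])
        for path in filter(None, tracked.decode().split("\0")):
            try:
                with open(path, "rb") as f:
                    first = f.readline()
            except OSError:
                continue
            if first.startswith(b"#!") and b"bash" in first:
                yield path

    if __name__ == "__main__":
        for path in candidate_bash_files():
            print(path)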

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Marconi][Heat] Creating accounts in Keystone

2014-09-02 Thread Adam Young

On 08/25/2014 10:49 AM, Zane Bitter wrote:

On 24/08/14 23:17, Adam Young wrote:

On 08/23/2014 02:01 AM, Clint Byrum wrote:
I don't know how Zaqar does its magic, but I'd love to see simple signed
URLs rather than users/passwords. This would work for Heat as well. That
way we only have to pass in a single predictably formatted string.

Excerpts from Zane Bitter's message of 2014-08-22 14:35:38 -0700:

Here's an interesting fact about Zaqar (the project formerly known as
Marconi) that I hadn't thought about before this week: it's probably the
first OpenStack project where a major part of the API primarily faces


Nah, this is the direction we are headed.  Service users (out of LDAP!)
are going to be the norm with a recent feature added to Keystone:


http://adam.younglogic.com/2014/08/getting-service-users-out-of-ldap/


Ah, excellent, thanks Adam. (BTW markup fail: "The naming of this file 
is essential: keystone..conf [sic] is the expected form.")

If that is the worst typo in that article I consider that success.



So this will solve the Authentication half of the problem. What is the 
recommended solution for Authorisation?


In particular, even if a service like Zaqar or Heat implements their 
own authorisation (e.g. the user creating a Zaqar queue supplies lists 
of the accounts that are allowed to read or write to it, 
respectively), how does the user ensure that the service accounts they 
create will not have access to other OpenStack APIs? IIRC the default 
policy.json files supplied by the various projects allow non-admin 
operations from any account with a role in the project.


There are things I want to implement to solve this.  Locking a token 
(and a trust) to a service and/or Endpoint is the primary thing. More 
finely grained roles.  Delegating operations instead of roles. 
Additional constraints on tokens.


Basically, I want the moon on a stick.

Keep asking.  I can't justify the effort to build this stuff until 
people show they need it.  Heat has been the primary driver for so much 
of Keystone already.




thanks,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra][Neutron] How to verify link to logs for disabled third-party CI

2014-09-02 Thread Gary Duan
Hi,

Our CI system is disabled due to a running bug and wrong log link. I have
manually verified the system with sandbox and two Neutron testing patches.
However, with CI disabled, I am not able to see its review comment on any
patch.

Is there a way that I can see what the comment will look like when CI is
disabled?

Thanks,
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [H][Neutron][IPSecVPN]Cannot tunnel two namespace Routers

2014-09-02 Thread Germy Lure
Hi Stackers,

Network TOPO like this: VM1(net1)--Router1---IPSec VPN
tunnel---Router2--VM2(net2)
If the left and right sides are deployed on different OpenStack environments,
it works well. But in the same environment, Router1 and Router2 are namespace
implementations on the same network node, and I cannot ping from VM1 to VM2.

In R2 (Router2), the tcpdump tool tells us that R2 receives ICMP echo request
packets but doesn't send them out.

*7837C113-D21D-B211-9630-**00821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-**b5a1-dd987c0231ef tcpdump -i any *
*tcpdump: verbose output suppressed, use -v or -vv for full protocol decode*
*listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535
bytes*
* 11:50:14.853470 IP 10.10.5.2 > 10.10.5.3 :
ESP(spi=0xc6d65c02,seq=0x1e6), length 132*
*11:50:14.853470 IP 128.6.25.2 > 128.6.26.2 : ICMP echo
request, id 44567, seq 486, length 64*
* 11:50:15.853475 IP 10.10.5.2 > 10.10.5.3 :
ESP(spi=0xc6d65c02,seq=0x1e7), length 132*
*11:50:15.853475 IP 128.6.25.2 > 128.6.26.2 : ICMP echo
request, id 44567, seq 487, length 64*
* 11:50:16.853461 IP 10.10.5.2 > 10.10.5.3 :
ESP(spi=0xc6d65c02,seq=0x1e8), length 132*
*11:50:16.853461 IP 128.6.25.2 > 128.6.26.2 : ICMP echo
request, id 44567, seq 488, length 64*
* 11:50:17.853447 IP 10.10.5.2 > 10.10.5.3 :
ESP(spi=0xc6d65c02,seq=0x1e9), length 132*
*11:50:17.853447 IP 128.6.25.2 > 128.6.26.2 : ICMP echo
request, id 44567, seq 489, length 64*
* ^C*
*8 packets captured*
*8 packets received by filter*
*0 packets dropped by kernel*

ip addr in R2:

7837C113-D21D-B211-9630-00821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef ip addr
187: lo:  mtu 16436 qdisc noqueue state UNKNOWN group
default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
206: qr-4bacb61c-72:  mtu 1500 qdisc noqueue state
UNKNOWN group default
link/ether fa:16:3e:23:10:97 brd ff:ff:ff:ff:ff:ff
inet 128.6.26.1/24 brd 128.6.26.255 scope global qr-4bacb61c-72
inet6 fe80::f816:3eff:fe23:1097/64 scope link
   valid_lft forever preferred_lft forever
208: qg-4abd4bb0-21:  mtu 1500 qdisc noqueue state
UNKNOWN group default
link/ether fa:16:3e:e6:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.10.5.3/24 brd 10.10.5.255 scope global qg-4abd4bb0-21
inet6 fe80::f816:3eff:fee6:cd1a/64 scope link
   valid_lft forever preferred_lft forever


In addition, the kernel counter "/proc/net/snmp" in the namespace is unchanged.
Do these counters not work well with namespaces?


BR,
Germy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-09-02 Thread Robert Collins
Well, git knows all the files in-tree, right? Or am I missing something here?

if-has-bash-hashbang-and-is-versioned-then-bashate-it?

-Rob

On 3 September 2014 13:26, Ian Wienand  wrote:
> On 09/02/2014 10:13 PM, Sean Dague wrote:
>>
>> One of the things that could make it better is to add file extensions to
>> all shell files in devstack. This would also solve the issue of gerrit
>> not syntax highlighting most of the files. If people are up for that,
>> I'll propose a rename patch to get us there. Then dumping the special
>> bashate discover bits is simple.
>
>
> I feel like adding .sh to bash to-be-sourced-only (library) files is
> probably a less common idiom.  It's just feeling, I don't think it's
> any sort of rule.
>
> So my first preference is for bashate to just punt the whole thing and
> only work on a list of files.  We can then discuss how best to match
> things in devstack so we don't add files but miss checking them.
>
> -i
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-09-02 Thread Ian Wienand

On 09/02/2014 10:13 PM, Sean Dague wrote:

One of the things that could make it better is to add file extensions to
all shell files in devstack. This would also solve the issue of gerrit
not syntax highlighting most of the files. If people are up for that,
I'll propose a rename patch to get us there. Then dumping the special
bashate discover bits is simple.


I feel like adding .sh to bash to-be-sourced-only (library) files is
probably a less common idiom.  It's just a feeling; I don't think it's
any sort of rule.

So my first preference is for bashate to just punt the whole thing and
only work on a list of files.  We can then discuss how best to match
things in devstack so we don't add files but miss checking them.

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] [Horizon] Ironic Horizon API

2014-09-02 Thread Josh Gachnang
Hey all,

I published a patch to add an Ironic API wrapper in Horizon. Having code up
for Horizon is a graduation requirement for Ironic, so I'd like some
eyeballs on it to at least tell us we're going in the right direction. I
understand this code won't land until after Ironic is integrated.

Another developer is working on the Horizon panels and other parts, and
will have them ASAP.

Review: https://review.openstack.org/#/c/117376/
---
Josh Gachnang
Tech Blog: ServerCobra.com, @ServerCobra
Github.com/PCsForEducation
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Review metrics - what do we want to measure?

2014-09-02 Thread Robert Collins
On 14 August 2014 11:03, James Polley  wrote:
> In recent history, we've been looking each week at stats from
> http://russellbryant.net/openstack-stats/tripleo-openreviews.html to get a
> gauge on how our review pipeline is tracking.
>
> The main stats we've been tracking have been the "since the last revision
> without -1 or -2". I've included some history at [1], but the summary is
> that our 3rd quartile has slipped from 13 days to 16 days over the last 4
> weeks or so. Our 1st quartile is fairly steady lately, around 1 day (down
> from 4 a month ago) and median is unchanged around 7 days.
>
> There was lots of discussion in our last meeting about what could be causing
> this[2]. However, the thing we wanted to bring to the list for the
> discussion is:
>
> Are we tracking the right metric? Should we be looking to something else to
> tell us how well our pipeline is performing?
>
> The meeting logs have quite a few suggestions about ways we could tweak the
> existing metrics, but if we're measuring the wrong thing that's not going to
> help.
>
> I think that what we are looking for is a metric that lets us know whether
> the majority of patches are getting feedback quickly. Maybe there's some
> other metric that would give us a good indication?

If we review all patches quickly and land none, that's bad too :).

For the reviewers specifically I think we need a metric(s) that:
 - doesn't go bad when submitters go awol, don't respond etc
   - including when they come back - our stats shouldn't jump hugely
because an old review was resurrected
 - when good means submitters will be getting feedback
 - flag inventory- things we'd be happy to have landed that haven't
   - including things with a -1 from non-core reviewers (*)

(*) I often see -1's on things core wouldn't -1 due to the learning
curve involved in becoming core

So, as Ben says, I think we need to address the its-not-a-vote issue
as a priority, that has tripped us up in lots of ways

I think we need to discount -workflow patches where that was set by
the submitter, which AFAICT we don't do today.

Looking at current stats:
Longest waiting reviews (based on oldest rev without -1 or -2):

54 days, 2 hours, 41 minutes https://review.openstack.org/106167
(Keystone/LDAP integration)
That patch had a -1 on Aug 16 1:23 AM, but was quickly turned to +2.

So this patch had a -1, then after discussion it became a +2. And it's
evolved multiple times.

What should we be saying here? Clearly it's had little review input
over its life, so I think it's sadly accurate.

I wonder if a big chunk of our sliding quartile is just us not
reviewing the oldest reviews.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Review metrics - what do we want to measure?

2014-09-02 Thread Jeremy Stanley
On 2014-09-03 11:51:13 +1200 (+1200), Robert Collins wrote:
> I thought there was now a thing where zuul can use a different account
> per pipeline?

That was the most likely solution we discussed at the summit, but I
don't believe we've implemented it yet (or if we have then it isn't
yet being used for any existing pipelines).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

2014-09-02 Thread Emma Lin
Preston,
Thanks for your support. The Cinder Brick I mentioned is described in this
wiki page: https://wiki.openstack.org/wiki/CinderBrick

Regards.
Emma

From: "Preston L. Bannister" mailto:pres...@bannister.us>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, 2 September, 2014 6:08 pm
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

Hi Emma,

I do not claim to be an OpenStack guru, but might know something about backing 
up a vCloud.

What proposal did you have in mind? A link would be helpful.

Backing up a step (ha), the existing cinder-backup API is very close to 
useless. Backup needs to apply to an active instance. The Nova backup API is 
closer, but needs (much) work.

Local storage is a big issue, as at scale we need to extract changed-block 
lists. (VMware has an advantage here, for now.)




On Mon, Sep 1, 2014 at 8:56 PM, Emma Lin 
mailto:l...@vmware.com>> wrote:
Hi Gurus,
I saw the wiki page for Cinder Brick proposal for Havana, but I didn't see any 
follow up on that idea. Is there any real progress on that idea?

As this proposal is to address the local storage issue, I'd like to know the 
status, and to see if there is any task required for hypervisor provider.

Any comments are appreciated
Emma


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

2014-09-02 Thread Emma Lin
Thank you all for the prompt response. And I'm glad to see the progress on this
topic.
Basically, what I'm thinking is that local storage support for big data and
large-scale computing is especially useful.

I'll monitor the meeting progress actively.

Duncan,
I'm interested to know the details of this topic. Btw, is this Brick code
called by Cinder?

Thanks
Emma

From: Ivan Kolodyazhny mailto:e...@e0ne.info>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, 3 September, 2014 12:40 am
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

Hi all,

Emma,
thanks for raising this topic, I've added it for the next Cinder weekly meeting 
[1].

Duncan,
I absolutely agree with you that if any code could be located in one place,
it must be there.

I'm not sure which place is the best for Brick: oslo or stackforge. Let's
discuss it. I volunteer to make Brick a separate library and to make the
OpenStack code better by re-using code as much as possible.


[1] https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting

Regards,
Ivan Kolodyazhny,
Software Engineer,
Mirantis Inc


On Tue, Sep 2, 2014 at 1:54 PM, Duncan Thomas 
mailto:duncan.tho...@gmail.com>> wrote:
On 2 September 2014 04:56, Emma Lin mailto:l...@vmware.com>> 
wrote:
> Hi Gurus,
> I saw the wiki page for Cinder Brick proposal for Havana, but I didn't see
> any follow up on that idea. Is there any real progress on that idea?
>
> As this proposal is to address the local storage issue, I'd like to know the
> status, and to see if there is any task required for hypervisor provider.

Hi Emma

Brick didn't really evolve to cover the local storage case, so we've
not made much progress in that direction.

Local storage comes up fairly regularly, but solving all of the points
(availability, API behaviour completeness, performance, scheduling)
from a pure cinder PoV is a hard problem - i.e. making local storage
look like a normal cinder volume. Specs welcome, email me if you want
more details on the problems - there are certainly many people
interested in seeing the problem solved.

There is code in brick that could be used in nova as is to reduce
duplication and give a single place to fix bugs - nobody has yet taken
this work on as far as I know.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Review velocity and core reviewer commitments

2014-09-02 Thread James Polley
One recurring topic in our weekly meetings over the last few months has
been the fact that it's taking us longer and longer to review and land
patches. Right now, http://www.nemebean.com/reviewstats/tripleo-open.html
has the following stats:

   - Stats since the latest revision:
  1. Average wait time: 10 days, 14 hours, 37 minutes
  2. 1st quartile wait time: 4 days, 8 hours, 45 minutes
  3. Median wait time: 7 days, 10 hours, 50 minutes
  4. 3rd quartile wait time: 15 days, 9 hours, 57 minutes
  5. Number waiting more than 7 days: 63
   - Stats since the last revision without -1 or -2 :
  1. Average wait time: 11 days, 22 hours, 21 minutes
  2. 1st quartile wait time: 4 days, 10 hours, 55 minutes
  3. Median wait time: 8 days, 3 hours, 49 minutes
  4. 3rd quartile wait time: 18 days, 23 hours, 20 minutes
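
For context, these quartile figures are just order statistics over the
per-review wait times. A toy illustration with made-up numbers (not the
actual reviewstats code):

    import statistics

    # Hypothetical per-review wait times in days; the real numbers come from
    # the Gerrit data behind the reviewstats pages.
    wait_days = [0.5, 1, 2, 4, 4.5, 6, 7, 8, 9, 12, 15, 16, 19, 25, 54]

    q1, median, q3 = statistics.quantiles(wait_days, n=4)
    print("average: %.1f days" % statistics.mean(wait_days))
    print("1st quartile: %.1f  median: %.1f  3rd quartile: %.1f"
          % (q1, median, q3))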

 There are many things that can contribute to this; for instance, a patch
that has no negative reviews but also can't be approved (because it isn't
passing CI, or because it depends on other changes that are more
contentious) will increase these average wait times. I kicked off a
discussion about the possibility that we're measuring the wrong thing a few
weeks ago (
http://lists.openstack.org/pipermail/openstack-dev/2014-August/042960.html)
- if you've got suggestions about what we should be measuring, please
follow up there.

There's one trend I'm seeing that seems as though it's exacerbating our
sluggishness: many of our core reviewers are not meeting their commitment
to 3 reviews per day. http://www.nemebean.com/reviewstats/tripleo-30.txt
currently shows just 9 cores (and 2 non-cores!) having exceeded 60
reviews over the last 30 days; 10 cores have not[1]. This isn't just a
short-term glitch either: the 90-day stats show the same numbers, although
a slightly different set of reviewers on either side of the cutoff.

The commitment to 3 reviews per day is one of the most important things we
ask of our core reviewers. We want their reviews to be good quality to help
make sure our code is also good quality - but if they can't commit to 3
reviews per day, they are slowing the project down by making it harder for
even good quality code to land. There's little point in us having good
quality code if it's perpetually out of date.

I'd like to call on all existing cores to make a concerted effort to meet
the commitment they made when they were accepted as core reviewers. We need
to ensure that patches are being reviewed and landed, and we can't do that
unless cores take the time to do reviews. If you aren't able to meet the
commitment you made when you became a core, it would be helpful if you
could let us know - or undertake a meta-review so we can consider possibly
adding some new cores (it seems like we might have 2 strong candidates, if
the quality of their reviews matches the quantity)

I'd like to call on everyone else (myself included - I've only managed 13
reviews over the last 30 days!) to help out as well. It's easier for cores
to review patchsets that already have several +1s. If you're looking for a
place to start,  http://www.nemebean.com/reviewstats/tripleo-open.html has
a list of the oldest patches that are looking for reviews, and
https://wiki.openstack.org/wiki/TripleO#Review_team has a link to a
dashboard that has sections for reviews that have been a long time without
feedback.

If you have ideas for other things we can or should measure, please follow
up on the other thread, or in our weekly meeting.

[1] lxsli and jonpaul-sullivan are cores since the conclusion of the thread
starting at
http://lists.openstack.org/pipermail/openstack-dev/2014-July/039762.html,
but not shown as such. https://review.openstack.org/#/c/118483 will correct
this.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Observations re swift-container usage of SQLite

2014-09-02 Thread Taras Glek
Hi,
I have done some SQLite footgun elimination at Mozilla, and was curious if
swift ran into similar issues.
From blog posts like
http://blog.maginatics.com/2014/05/13/multi-container-sharding-making-openstack-swift-swifter/
and http://engineering.spilgames.com/openstack-swift-lots-small-files/ it
seemed worth looking into.

*Good things*
* torgomatic pointed out on IRC that inserts are now batched via an
intermediate file that isn't fsync()ed
(https://github.com/openstack/swift/commit/85362fdf4e7e70765ba08cee288437a763ea5475).
That should help with the use cases described by the above blog posts. Hope
the rest of my observations are still of some use.
* There are few indexes involved; this is good because indexes in
single-file databases are very risky for perf.

I set up devstack on my laptop to observe swift performance and poke at the
resulting db. I don't have a proper benchmarking environment to check if
any of my observations are valid.

*Container .db handle LRU*
It seems that container DBs are opened once per read/write operation:
having container-server keep an LRU list of db handles might help workloads
with hot containers.

*Speeding up LIST*
* The lack of an index for LIST is good, but it means LIST will effectively
read the whole file.
* A 1024-byte pagesize is used; moving to bigger pagesizes reduces the number
of syscalls
** Firefox moving to 1K->32K cut our DB IO by 1.2-2x
http://taras.glek.net/blog/2013/06/28/new-performance-people/
* Doing fadvise(WILL_NEED) on the db file prior to opening it with SQLite
should help the OS read the db file in at maximum throughput (see the sketch
after this list). This causes Linux to issue disk IO in 2MB chunks vs 128K
with default readahead settings. SQLite should really do this itself :(
* Appends end up fragmenting the db file; should use
http://www.sqlite.org/c3ref/c_fcntl_chunk_size.html#sqlitefcntlchunksize to
grow the DB with less fragmentation OR copy (with fallocate) the sqlite file
over every time it doubles in size (eg during weekly compaction)
** Fragmentation means db scans are non-sequential on disk
** XFS is particularly susceptible to fragmentation. Can use filefrag on
.db files to monitor fragmentation
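
A minimal Python 3 sketch of the readahead and page-size suggestions above
(the path is hypothetical and this is not Swift code; note that page_size
only takes effect once the database is rebuilt, e.g. by VACUUM):

    import os
    import sqlite3

    DB_PATH = "/srv/node/sda1/containers/example/container.db"  # hypothetical

    # Ask the kernel to pull the whole file into the page cache at full
    # throughput before SQLite starts issuing small random reads.
    fd = os.open(DB_PATH, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_WILLNEED)
    finally:
        os.close(fd)

    # autocommit mode, so VACUUM runs outside a transaction
    conn = sqlite3.connect(DB_PATH, isolation_level=None)
    conn.execute("PRAGMA page_size = 32768")  # larger pages, fewer reads
    conn.execute("VACUUM")                    # rebuild with the new page size
    conn.close()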

*Write amplification*
* write amplification is bad because it causes table scans to be slower
than necessary (eg reading less data is always better for cache locality;
torgomatic says container dbs can get into gigabytes)
* swift uses timestamps in decimal seconds form, eg 1409350185.26144 as a
string. I'm guessing these are mainly used for HTTP headers, yet HTTP uses
seconds, which would normally only take up 4 bytes
* CREATE INDEX ix_object_deleted_name ON object (deleted, name) might be a
problem for delete-heavy workloads
** SQLite copies column entries used in indexes. Here the index almost
doubles amount of space used by deleted entries
** Indexes in general are risky in sqlite, as they end up dispersed with
table data until a VACUUM. This causes table scan operations(eg during
LIST) to be suboptimal. This could also mean that operations that rely on
the index are no better IO-wise than a whole table scan.
* deleted is both in content type & deleted field. This might not be a big
deal.
* Ideally you'd be using a database that can be (lz4?) compressed at a
whole-file level. I'm not aware of a good off-the-shelf solution here. Some
column store might be a decent replacement for SQLite

Hope some of these observations are useful. If not, sorry for the noise.
I'm pretty impressed at swift-container's minimalist SQLite usage, did not
see many footguns here.

Taras
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Review metrics - what do we want to measure?

2014-09-02 Thread Robert Collins
On 16 August 2014 02:43, Jeremy Stanley  wrote:
> On 2014-08-13 19:51:52 -0500 (-0500), Ben Nemec wrote:
> [...]
>> make the check-tripleo job leave an actual vote rather than just a
>> comment.
> [...]
>
> That, as previously discussed, will require some design work in
> Zuul. Gerrit uses a single field per account for verify votes, which
> means that if you want to have multiple concurrent votes for
> different pipelines/job collections then we either need to use more
> than one account for those or add additional columns for each.
>
> There have already been discussions around how to implement this,
> but in the TripleO case it might make more sense to revisit why we
> have those additional pipelines and instead focus on resolving the
> underlying issues which led to their use as a stop-gap.

I thought there was now a thing where zuul can use a different account
per pipeline?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rally scenario Issue

2014-09-02 Thread Ajay Kalambur (akalambu)
Hi Guys
For the throughput tests I need to be able to install iperf on the cloud image.
For this, a DNS server needs to be set, so the current network context should
also support a DNS name server setting.
Should we add that to the network context?
Ajay



From: Boris Pavlovic mailto:bo...@pavlovic.me>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, August 29, 2014 at 2:08 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Harshil Shah (harsshah)" mailto:harss...@cisco.com>>
Subject: Re: [openstack-dev] Rally scenario Issue

Timur,

Thanks for pointing Ajay.

Ajay,

 Also I cannot see this failure unless I run rally with –v –d object.

Actually rally is storing information about all failures. To get information
about them you can run the following command:

rally task results --pprint

It will display all information about all iterations (including exceptions)


Second when most of the steps in the scenario failed like attaching to network, 
ssh and run command why bother reporting the results

Because bad results are better than nothing...


Best regards,
Boris Pavlovic


On Sat, Aug 30, 2014 at 12:54 AM, Timur Nurlygayanov 
mailto:tnurlygaya...@mirantis.com>> wrote:
Hi Ajay,

looks like you need to use the NeutronContext feature to configure Neutron
networks during the benchmark execution.
We are now working on merging two different commits with the NeutronContext
implementation:
https://review.openstack.org/#/c/96300  and 
https://review.openstack.org/#/c/103306

could you please apply commit https://review.openstack.org/#/c/96300 and run 
your benchmarks? Neutron Network with subnetworks and routers will be 
automatically created for each created tenant and you should have the ability 
to connect to VMs. Please, note, that you should add the following part to your 
task JSON to enable Neutron context:
...
"context": {
...
"neutron_network": {
"network_cidr": "10.%s.0.0/16",
}
}
...

Hope this will help.



On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) 
mailto:akala...@cisco.com>> wrote:
Hi
I am trying to run the Rally scenario boot-runcommand-delete. This scenario has 
the following code
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print "fixed network:%s floating network:%s" \
                % (fixed_network, floating_network)
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            self.check_network(server, fixed_network)

The question I have is that the instance is created with a call to boot_server,
but no networks are attached to this server instance. The next step checks
if the fixed network is attached to the instance, and sure enough it fails
at the step highlighted in bold. Also I cannot see this failure unless I run
rally with –v –d object. So it actually reports benchmark scenario numbers in a 
table with no errors when I run with
rally task start boot-and-delete.json

And it reports results. First, what am I missing in this case? The thing is, I
am using neutron, not nova-network.
Second, when most of the steps in the scenario failed (like attaching to the
network, ssh and run command), why bother reporting the results?

Ajay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc

[http://www.openstacksv.com/]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-09-02 Thread Susanne Balle
Just wanted to let you know that sahara has moved to server groups for
anti-affinity. This is IMHO the way we should do it as well.

Susanne

Jenkins (Code Review) 
5:46 PM (0 minutes ago)
to Andrew, Sahara, Alexander, Sergey, Michael, Sergey, Vitaly, Dmitry,
Trevor
Jenkins has posted comments on this change.

Change subject: Switched anti-affinity feature to server groups
..


Patch Set 15: Verified+1

Build succeeded.

- gate-sahara-pep8
http://logs.openstack.org/59/112159/15/check/gate-sahara-pep8/15869b3 :
SUCCESS in 3m 46s
- gate-sahara-docs
http://docs-draft.openstack.org/59/112159/15/check/gate-sahara-docs/dd9eecd/doc/build/html/
:
SUCCESS in 4m 20s
- gate-sahara-python26
http://logs.openstack.org/59/112159/15/check/gate-sahara-python26/027c775 :
SUCCESS in 4m 53s
- gate-sahara-python27
http://logs.openstack.org/59/112159/15/check/gate-sahara-python27/08f492a :
SUCCESS in 3m 36s
- check-tempest-dsvm-full
http://logs.openstack.org/59/112159/15/check/check-tempest-dsvm-full/e30530a :
SUCCESS in 59m 21s
- check-tempest-dsvm-postgres-full
http://logs.openstack.org/59/112159/15/check/check-tempest-dsvm-postgres-full/9e90341
:
SUCCESS in 1h 19m 32s
- check-tempest-dsvm-neutron-heat-slow
http://logs.openstack.org/59/112159/15/check/check-tempest-dsvm-neutron-heat-slow/70b1955
:
SUCCESS in 21m 30s
- gate-sahara-pylint
http://logs.openstack.org/59/112159/15/check/gate-sahara-pylint/55250e1 :
SUCCESS in 5m 18s (non-voting)

--
To view, visit https://review.openstack.org/112159
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I501438d84f3a486dad30081b05933f59ebab4858
Gerrit-PatchSet: 15
Gerrit-Project: openstack/sahara
Gerrit-Branch: master
Gerrit-Owner: Andrew Lazarev 
Gerrit-Reviewer: Alexander Ignatov 
Gerrit-Reviewer: Andrew Lazarev 
Gerrit-Reviewer: Dmitry Mescheryakov 
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Michael McCune 
Gerrit-Reviewer: Sahara Hadoop Cluster CI 
Gerrit-Reviewer: Sergey Lukjanov 
Gerrit-Reviewer: Sergey Reshetnyak 
Gerrit-Reviewer: Trevor McKay 
Gerrit-Reviewer: Vitaly Gridnev 
Gerrit-HasComments: No


On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan 
wrote:

> Nova scheduler has ServerGroupAffinityFilter and
> ServerGroupAntiAffinityFilter which does the colocation and apolocation
> for VMs.  I think this is something we've discussed before about taking
> advantage of nova's scheduling.  I need to verify that this will work
> with what we (RAX) plan to do, but I'd like to get everyone else's
> thoughts.  Also, if we do decide this works for everyone involved,
> should we make it mandatory that the nova-compute services are running
> these two filters?  I'm also trying to see if we can use this to also do
> our own colocation and apolocation on load balancers, but it looks like
> it will be a bit complex if it can even work.  Hopefully, I can have
> something definitive on that soon.
>
> Thanks,
> Brandon
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Zaqar] Early proposals for design summit sessions

2014-09-02 Thread Kurt Griffiths
Thanks Flavio, I added a few thoughts.

On 8/28/14, 3:27 AM, "Flavio Percoco"  wrote:

>Greetings,
>
>I'd like to join the early coordination effort for design sessions. I've
>shamelessly copied Doug's template for Oslo into a new etherpad so we
>can start proposing sessions there.
>
>https://etherpad.openstack.org/p/kilo-zaqar-summit-topics
>
>Flavio
>
>-- 
>@flaper87
>Flavio Percoco
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][IPv6] Moving meeting time to 1500 UTC on Tuesdays on #openstack-meeting-alt?

2014-09-02 Thread Kyle Mestery
On Tue, Sep 2, 2014 at 4:05 PM, Collins, Sean
 wrote:
> Any objection? We need to move the time since the main meeting conflicts
> with our current time slot.
>
If you do that, I'll take over #openstack-meeting from you for the
Neutron meeting, so let me know once this meeting moves and I'll
update the wiki for the main neutron meeting.

Thanks,
Kyle

> Discussion from today's meeting:
>
> http://eavesdrop.openstack.org/meetings/neutron_ipv6/2014/neutron_ipv6.2014-09-02-13.59.log.html
>
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-02 Thread Robert Collins
The implementation in ceilometer is very different to the Ironic one -
are you saying the test you linked fails with Ironic, or that it fails
with the ceilometer code today?

The Ironic hash_ring implementation uses a hash:
    def _get_partition(self, data):
        try:
            return (struct.unpack_from('>I', hashlib.md5(data).digest())[0]
                    >> self.partition_shift)
        except TypeError:
            raise exception.Invalid(
                _("Invalid data supplied to HashRing.get_hosts."))


so I don't see the fixed size thing you're referring to. Could you
point a little more specifically? Thanks!
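
For contrast, here is a minimal textbook consistent hash ring (deliberately
neither the Ironic nor the Ceilometer implementation, just a sketch of the
property being discussed): the hosts themselves are hashed onto the ring, so
adding or removing a host only remaps roughly 1/#hosts of the keys.

    import bisect
    import hashlib


    class SimpleConsistentHashRing(object):
        """Toy consistent hash ring: hosts are hashed onto the ring."""

        def __init__(self, hosts, replicas=100):
            self._ring = {}        # ring position -> host
            self._positions = []   # sorted ring positions
            for host in hosts:
                for r in range(replicas):
                    position = self._hash('%s-%d' % (host, r))
                    self._ring[position] = host
                    bisect.insort(self._positions, position)

        @staticmethod
        def _hash(data):
            return int(hashlib.md5(data).hexdigest(), 16)

        def get_host(self, key):
            # Walk clockwise to the first host position at or after the
            # key's hash, wrapping around at the end of the ring.
            index = bisect.bisect(self._positions, self._hash(key))
            if index == len(self._positions):
                index = 0
            return self._ring[self._positions[index]]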

-Rob

On 1 September 2014 19:48, Nejc Saje  wrote:
> Hey guys,
>
> in Ceilometer we're using consistent hash rings to do workload
> partitioning[1]. We've considered generalizing your hash ring implementation
> and moving it up to oslo, but unfortunately your implementation is not
> actually consistent, which is our requirement.
>
> Since you divide your ring into a number of equal sized partitions, instead
> of hashing hosts onto the ring, when you add a new host,
> an unbound amount of keys get re-mapped to different hosts (instead of the
> 1/#nodes remapping guaranteed by hash ring). I've confirmed this with the
> test in aforementioned patch[2].
>
> If this is good enough for your use-case, great, otherwise we can get a
> generalized hash ring implementation into oslo for use in both projects or
> we can both use an external library[3].
>
> Cheers,
> Nejc
>
> [1] https://review.openstack.org/#/c/113549/
> [2]
> https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py
> [3] https://pypi.python.org/pypi/hash_ring
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-02 Thread Gregory Haynes
Excerpts from Nejc Saje's message of 2014-09-01 07:48:46 +:
> Hey guys,
> 
> in Ceilometer we're using consistent hash rings to do workload 
> partitioning[1]. We've considered generalizing your hash ring 
> implementation and moving it up to oslo, but unfortunately your 
> implementation is not actually consistent, which is our requirement.
> 
> Since you divide your ring into a number of equal sized partitions, 
> instead of hashing hosts onto the ring, when you add a new host,
> an unbound amount of keys get re-mapped to different hosts (instead of 
> the 1/#nodes remapping guaranteed by hash ring). I've confirmed this 
> with the test in aforementioned patch[2].

I am just getting started with the ironic hash ring code, but this seems
surprising to me. AIUI we do require some rebalancing when a conductor
is removed or added (which is normal use of a CHT) but not for every
host added. This is supported by the fact that we currently don't have a
rebalancing routine, so I would be surprised if Ironic worked at all if
we required it for each host that is added.

Can anyone in Ironic with a bit more experience confirm/deny this?

> 
> If this is good enough for your use-case, great, otherwise we can get a 
> generalized hash ring implementation into oslo for use in both projects 
> or we can both use an external library[3].
> 
> Cheers,
> Nejc
> 
> [1] https://review.openstack.org/#/c/113549/
> [2] 
> https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py
> [3] https://pypi.python.org/pypi/hash_ring
> 

Thanks,
Greg



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Brandon Logan
I am not for this if Octavia is merged into the incubator when LBaaS V2
is, assuming LBaaS V2 will be merged into it before the summit.  I'd
rather Octavia get merged into whatever repository it is destined for
once it is much more mature.  If Octavia is merged into the
incubator too soon, I think its velocity will be much lower than if it
were independent at first.

On Tue, 2014-09-02 at 13:45 -0700, Stephen Balukoff wrote:
> Hi Kyle,
> 
> 
> IMO, that depends entirely on how the incubator project is run. For
> now, I'm in favor of remaining separate and letting someone else be
> the guinea pig. :/  I think we'll (all) be more productive this way.
> 
> 
> Also keep in mind that the LBaaS v2 code is mostly there (just waiting
> on reviews), so it's probably going to be ready for neutron-incubator
> incubation well before Octavia is ready for anything like that.
> 
> 
> Stephen
> 
> On Tue, Sep 2, 2014 at 12:52 PM, Kyle Mestery 
> wrote:
> 
> 
> To me what makes sense here is that we merge the Octavia code
> into the
> neutron-incubator when the LBaaS V2 code is merged there. If
> the end
> goal is to spin the LBaaS V2 stuff out into a separate git
> repository
> and project (under the networking umbrella), this would allow
> for the
> Octavia driver to be developed alongside the V2 API code, and
> in fact
> help satisfy one of the requirements around Neutron incubation
> graduation: Having a functional driver. And it also allows for
> the
> driver to continue to live on next to the API.
> 
> What do people think about this?
> 
> Thanks,
> Kyle
> 
> 
> 
> 
> 
> -- 
> Stephen Balukoff 
> Blue Box Group, LLC 
> (800)613-4305 x807
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Neutron][IPv6] Moving meeting time to 1500 UTC on Tuesdays on #openstack-meeting-alt?

2014-09-02 Thread Collins, Sean
Any objection? We need to move the time since the main meeting conflicts
with our current time slot.

Discussion from today's meeting:

http://eavesdrop.openstack.org/meetings/neutron_ipv6/2014/neutron_ipv6.2014-09-02-13.59.log.html

-- 
Sean M. Collins


[openstack-dev] [Octavia] Agenda and stand-up for 2014-09-03 meeting

2014-09-02 Thread Stephen Balukoff
Hi folks!

The preliminary agenda items are here:
https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Agenda

Please feel free to add agenda items as necessary.

Also, we're going to start keeping a weekly stand-up etherpad, so that
those working on the Octavia project know what other people working on the
project are engaged in from week to week (modeled after the Neutron LBaaS
weekly standup put together by Jorge).  If you've been working on Octavia,
please update this following the template here:

https://etherpad.openstack.org/p/octavia-weekly-standup

Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Stephen Balukoff
Hi Kyle,

IMO, that depends entirely on how the incubator project is run. For now,
I'm in favor of remaining separate and letting someone else be the guinea
pig. :/  I think we'll (all) be more productive this way.

Also keep in mind that the LBaaS v2 code is mostly there (just waiting on
reviews), so it's probably going to be ready for neutron-incubator
incubation well before Octavia is ready for anything like that.

Stephen

On Tue, Sep 2, 2014 at 12:52 PM, Kyle Mestery  wrote:

>
> To me what makes sense here is that we merge the Octavia code into the
> neutron-incubator when the LBaaS V2 code is merged there. If the end
> goal is to spin the LBaaS V2 stuff out into a separate git repository
> and project (under the networking umbrella), this would allow for the
> Octavia driver to be developed alongside the V2 API code, and in fact
> help satisfy one of the requirements around Neutron incubation
> graduation: Having a functional driver. And it also allows for the
> driver to continue to live on next to the API.
>
> What do people think about this?
>
> Thanks,
> Kyle
>
>

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Eichberger, German
+1

On Sep 2, 2014 12:59 PM, Kyle Mestery  wrote:
On Tue, Sep 2, 2014 at 1:28 PM, Salvatore Orlando  wrote:
> Inline.
> Salvatore
>
> On 2 September 2014 19:46, Stephen Balukoff  wrote:
>>
>> For what it's worth in this discussion, I agree that the possible futures
>> of Octavia already discussed (where it lives, how it relates to Neutron
>> LBaaS, etc.) are all possible. What actually happens here is going to depend
>> both on the Octavia team, the Neutron team (especially when it comes to how
>> the neutron-incubator is practically managed), and anyone else interested in
>> contributing to these projects.
>>
>> Again, for now, I think it's most important to get involved, write code,
>> and start delivering on the immediate, obvious things that need to be done
>> for Octavia.
>
>
> Probably... at least we'll be speculating about something which actually
> exists.
>
To me what makes sense here is that we merge the Octavia code into the
neutron-incubator when the LBaaS V2 code is merged there. If the end
goal is to spin the LBaaS V2 stuff out into a separate git repository
and project (under the networking umbrella), this would allow for the
Octavia driver to be developed alongside the V2 API code, and in fact
help satisfy one of the requirements around Neutron incubation
graduation: Having a functional driver. And it also allows for the
driver to continue to live on next to the API.

What do people think about this?

Thanks,
Kyle

>>
>>
>> In my mind, there are too many unknowns to predict exactly where things
>> will end up in the long run. About the only thing I am certain of is that
>> everyone involving themselves in the Octavia project wants to see it become
>> a part of OpenStack (in whatever way that happens), and that that will
>> certainly not happen if we aren't able to build the operator-scale load
>> balancer we all want.
>>
>>
>> Beyond that, I don't see a whole lot of point to the speculation here. :/
>> (Maybe someone can enlighten me to this point?)
>
>
> I have speculated only to the extent that it was needed for me to understand
> what's the interface between the two things.
> Beyond that, I agree and have already pointed out that there is no urgency
> for prolonging this discussion, unless the lbaas and octavia team feel this
> will have a bearing on short term developments. I don't think so but I do
> not have the full picture.
>
> Talking about pointless things you might want to ensure the name 'octavia'
> is not trademarked before writing lots of code! Renames are painful and some
> openstack projects (like neutron and zaqar) know something about that.
>
>>
>>
>> Stephen
>>
>>
>>
>> On Tue, Sep 2, 2014 at 9:40 AM, Brandon Logan
>>  wrote:
>>>
>>> Hi Susanne,
>>>
>>> I believe the options for Octavia are:
>>> 1) Merge into the LBaaS tree (wherever LBaaS is)
>>> 2) Become its own openstack project
>>> 3) Remains in stackforge for eternity
>>>
>>> #1 Is dependent on these options
>>> 1) LBaaS V2 graduates from the incubator into Neutron. V1 is deprecated.
>>> 2) LBaaS V2 remains in incubator until it can be spun out.  V1 in
>>> Neutron is deprecated.
>>> 3) LBaaS V2 is abandoned in the incubator and LBaaS V1 remains.  (An
>>> unlikely option)
>>>
>>> I don't see any other feasible options.
>>>
>>> On Tue, 2014-09-02 at 12:06 -0400, Susanne Balle wrote:
>>> > Doug
>>> >
>>> >
>>> > I agree with you but I need to understand the options. Susanne
>>> >
>>> >
>>> > >> And I agree with Brandon’s sentiments.  We need to get something
>>> > built before I’m going to worry too
>>> > >> much about where it should live.  Is this a candidate to get sucked
>>> > into LBaaS?  Sure.  Could the reverse
>>> > >> happen?  Sure.  Let’s see how it develops.
>>> >
>>> >
>>> >
>>> > On Tue, Sep 2, 2014 at 11:45 AM, Doug Wiegley 
>>> > wrote:
>>> > Hi all,
>>> >
>>> >
>>> > > On the other hand one could also say that Octavia is the ML2
>>> > equivalent of LBaaS. The equivalence here is very loose.
>>> > Octavia would be a service-VM framework for doing load
>>> > balancing using a variety of drivers. The drivers ultimately
>>> > are in charge of using backends like haproxy or nginx running
>>> > on the service VM to implement lbaas configuration.
>>> >
>>> >
>>> > This, exactly.  I think it’s much fairer to define Octavia as
>>> > an LBaaS purpose-built service vm framework, which will use
>>> > nova and haproxy initially to provide a highly scalable
>>> > backend. But before we get into terminology misunderstandings,
>>> > there are a bunch of different “drivers” at play here, exactly
>>> > because this is a framework:
>>> >   * Neutron lbaas drivers – what we all know and love
>>> >   * Octavia’s “network driver” - this is a piece of glue
>>> > that exists to hide internal calls we have to make
>>> > into Neutron until clean interfaces exist.  It migh

Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-02 Thread Dan Genin
Just out of curiosity, what is the rationale behind upping the number of
core sponsors for a feature freeze exception to 3 if only two +2s are
required to merge? In Icehouse, IIRC, two core sponsors were deemed
sufficient.


Dan

On 09/02/2014 02:16 PM, Michael Still wrote:

Hi.

We're soon to hit feature freeze, as discussed in Thierry's recent
email. I'd like to outline the process for requesting a freeze
exception:

 * your code must already be up for review
 * your blueprint must have an approved spec
 * you need three (3) sponsoring cores for an exception to be granted
 * exceptions must be granted before midnight, Friday this week
(September 5) UTC
 * the exception is valid until midnight Friday next week
(September 12) UTC when all exceptions expire

For reference, our rc1 drops on approximately 25 September, so the
exception period needs to be short to maximise stabilization time.

John Garbutt and I will both be granting exceptions, to maximise our
timezone coverage. We will grant exceptions as they come in and gather
the required number of cores, although I have also carved some time
out in the nova IRC meeting this week for people to discuss specific
exception requests.

Michael








[openstack-dev] [qa] Tempest Bug Day: Tuesday September 9

2014-09-02 Thread David Kranz

It's been a while since we had a bug day. We now have 121 NEW bugs:

https://bugs.launchpad.net/tempest/+bugs?field.searchtext=&field.status%3Alist=NEW&orderby=-importance

The first order of business is to triage these bugs. This is a large enough
number that I hesitate to mention anything else, but there are also many
In Progress bugs that should be looked at to see if they should be closed
or an assignee removed if no work is actually planned:

https://bugs.launchpad.net/tempest/+bugs?search=Search&field.status=In+Progress

I hope we will see a lot of activity on this bug day. During the Thursday
meeting right after, we can see if there are ideas for how to manage the
bugs on a more steady-state basis. We could also discuss how the grenade
and devstack bugs should fit into such activities.

-David



Re: [openstack-dev] [Infra] Meeting Tuesday September 2nd at 19:00 UTC

2014-09-02 Thread Elizabeth K. Joseph
On Mon, Sep 1, 2014 at 11:59 AM, Elizabeth K. Joseph
 wrote:
> Hi everyone,
>
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting on Tuesday September 2nd, at 19:00 UTC in #openstack-meeting

Had quite the busy meeting! Thanks to everyone who participated.
Minutes and log now available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-02-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-02-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-02-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-02 Thread Ryan Brown
On 09/02/2014 02:50 PM, Stefano Maffulli wrote:
> On 08/29/2014 11:17 AM, John Garbutt wrote:
>> After moving to use ZNC, I find IRC works much better for me now, but
>> I am still learning really.
> 
> There! this sentence has two very important points worth highlighting:
> 
> 1- when people say IRC they mean IRC + a hack to overcome its limitation
> 2- IRC+znc is complex, not many people are used to it
> 
> I never used znc, refused to install, secure and maintain yet another
> public facing service. For me IRC is: be there when it happens or read
> the logs on eavesdrop, if needed.
> 
> Recently I found out that there are znc services out there that could
> make things simpler but they're not easy to join (at least the couple I
> looked at).
> 
> Would it make sense to offer znc as a service within the openstack project?
> 

I would worry a lot about privacy/liability if OpenStack were to provide
ZNCaaS. Not being on infra I can't speak definitively, but I know I
wouldn't be especially excited about hosting & securing folks' private
data.

Eavesdrop just records public meetings, and the logs are 100% public so
no privacy headaches. Many folks using OpenStack's ZNCaaS would be in
other channels (or at least would receive private messages) and
OpenStack probably shouldn't take responsibility for keeping all those safe.

Just my 0.02 USD.
-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Kyle Mestery
On Tue, Sep 2, 2014 at 1:28 PM, Salvatore Orlando  wrote:
> Inline.
> Salvatore
>
> On 2 September 2014 19:46, Stephen Balukoff  wrote:
>>
>> For what it's worth in this discussion, I agree that the possible futures
>> of Octavia already discussed (where it lives, how it relates to Neutron
>> LBaaS, etc.) are all possible. What actually happens here is going to depend
>> both on the Octavia team, the Neutron team (especially when it comes to how
>> the neutron-incubator is practically managed), and anyone else interested in
>> contributing to these projects.
>>
>> Again, for now, I think it's most important to get involved, write code,
>> and start delivering on the immediate, obvious things that need to be done
>> for Octavia.
>
>
> Probably... at least we'll be speculating about something which actually
> exists.
>
To me what makes sense here is that we merge the Octavia code into the
neutron-incubator when the LBaaS V2 code is merged there. If the end
goal is to spin the LBaaS V2 stuff out into a separate git repository
and project (under the networking umbrella), this would allow for the
Octavia driver to be developed alongside the V2 API code, and in fact
help satisfy one of the requirements around Neutron incubation
graduation: Having a functional driver. And it also allows for the
driver to continue to live on next to the API.

What do people think about this?

Thanks,
Kyle

>>
>>
>> In my mind, there are too many unknowns to predict exactly where things
>> will end up in the long run. About the only thing I am certain of is that
>> everyone involving themselves in the Octavia project wants to see it become
>> a part of OpenStack (in whatever way that happens), and that that will
>> certainly not happen if we aren't able to build the operator-scale load
>> balancer we all want.
>>
>>
>> Beyond that, I don't see a whole lot of point to the speculation here. :/
>> (Maybe someone can enlighten me to this point?)
>
>
> I have speculated only to the extent that it was needed for me to understand
> what's the interface between the two things.
> Beyond that, I agree and have already pointed out that there is no urgency
> for prolonging this discussion, unless the lbaas and octavia team feel this
> will have a bearing on short term developments. I don't think so but I do
> not have the full picture.
>
> Talking about pointless things you might want to ensure the name 'octavia'
> is not trademarked before writing lots of code! Renames are painful and some
> openstack projects (like neutron and zaqar) know something about that.
>
>>
>>
>> Stephen
>>
>>
>>
>> On Tue, Sep 2, 2014 at 9:40 AM, Brandon Logan
>>  wrote:
>>>
>>> Hi Susanne,
>>>
>>> I believe the options for Octavia are:
>>> 1) Merge into the LBaaS tree (wherever LBaaS is)
>>> 2) Become its own openstack project
>>> 3) Remains in stackforge for eternity
>>>
>>> #1 Is dependent on these options
>>> 1) LBaaS V2 graduates from the incubator into Neutron. V1 is deprecated.
>>> 2) LBaaS V2 remains in incubator until it can be spun out.  V1 in
>>> Neutron is deprecated.
>>> 3) LBaaS V2 is abandoned in the incubator and LBaaS V1 remains.  (An
>>> unlikely option)
>>>
>>> I don't see any other feasible options.
>>>
>>> On Tue, 2014-09-02 at 12:06 -0400, Susanne Balle wrote:
>>> > Doug
>>> >
>>> >
>>> > I agree with you but I need to understand the options. Susanne
>>> >
>>> >
>>> > >> And I agree with Brandon’s sentiments.  We need to get something
>>> > built before I’m going to worry too
>>> > >> much about where it should live.  Is this a candidate to get sucked
>>> > into LBaaS?  Sure.  Could the reverse
>>> > >> happen?  Sure.  Let’s see how it develops.
>>> >
>>> >
>>> >
>>> > On Tue, Sep 2, 2014 at 11:45 AM, Doug Wiegley 
>>> > wrote:
>>> > Hi all,
>>> >
>>> >
>>> > > On the other hand one could also say that Octavia is the ML2
>>> > equivalent of LBaaS. The equivalence here is very loose.
>>> > Octavia would be a service-VM framework for doing load
>>> > balancing using a variety of drivers. The drivers ultimately
>>> > are in charge of using backends like haproxy or nginx running
>>> > on the service VM to implement lbaas configuration.
>>> >
>>> >
>>> > This, exactly.  I think it’s much fairer to define Octavia as
>>> > an LBaaS purpose-built service vm framework, which will use
>>> > nova and haproxy initially to provide a highly scalable
>>> > backend. But before we get into terminology misunderstandings,
>>> > there are a bunch of different “drivers” at play here, exactly
>>> > because this is a framework:
>>> >   * Neutron lbaas drivers – what we all know and love
>>> >   * Octavia’s “network driver” - this is a piece of glue
>>> > that exists to hide internal calls we have to make
>>> > into Neutron until clean interfaces exist.  It might
>>> > be a no-op in the case of 

Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-02 Thread Michael Still
On Tue, Sep 2, 2014 at 1:40 PM, Nikola Đipanov  wrote:
> On 09/02/2014 08:16 PM, Michael Still wrote:
>> Hi.
>>
>> We're soon to hit feature freeze, as discussed in Thierry's recent
>> email. I'd like to outline the process for requesting a freeze
>> exception:
>>
>> * your code must already be up for review
>> * your blueprint must have an approved spec
>> * you need three (3) sponsoring cores for an exception to be granted
>
> Can core reviewers who have features up for review have this number
> lowered to two (2) sponsoring cores, as they in reality then need four
> (4) cores (since they themselves are one (1) core but cannot really
> vote) making it an order of magnitude more difficult for them to hit
> this checkbox?

That's a lot of numbers in that there paragraph.

Let me re-phrase your question... Can a core sponsor an exception they
themselves propose? I don't have a problem with someone doing that,
but you need to remember that does reduce the number of people who
have agreed to review the code for that exception.

Michael

>> * exceptions must be granted before midnight, Friday this week
>> (September 5) UTC
>> * the exception is valid until midnight Friday next week
>> (September 12) UTC when all exceptions expire
>>
>> For reference, our rc1 drops on approximately 25 September, so the
>> exception period needs to be short to maximise stabilization time.
>>
>> John Garbutt and I will both be granting exceptions, to maximise our
>> timezone coverage. We will grant exceptions as they come in and gather
>> the required number of cores, although I have also carved some time
>> out in the nova IRC meeting this week for people to discuss specific
>> exception requests.
>>
>> Michael
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia



Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-02 Thread Day, Phil
Needing 3 out of 19 instead of 3 out of 20 isn't an order of magnitude
according to my calculator.  It's much closer/fairer than making it 2/19 vs
3/20.

If a change is borderline in that it can only get 2 other cores, maybe it
doesn't have a strong enough case for an exception.

Phil


Sent from Samsung Mobile


 Original message 
From: Nikola Đipanov
Date:02/09/2014 19:41 (GMT+00:00)
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

On 09/02/2014 08:16 PM, Michael Still wrote:
> Hi.
>
> We're soon to hit feature freeze, as discussed in Thierry's recent
> email. I'd like to outline the process for requesting a freeze
> exception:
>
> * your code must already be up for review
> * your blueprint must have an approved spec
> * you need three (3) sponsoring cores for an exception to be granted

Can core reviewers who have features up for review have this number
lowered to two (2) sponsoring cores, as they in reality then need four
(4) cores (since they themselves are one (1) core but cannot really
vote) making it an order of magnitude more difficult for them to hit
this checkbox?

Thanks,
N.

> * exceptions must be granted before midnight, Friday this week
> (September 5) UTC
> * the exception is valid until midnight Friday next week
> (September 12) UTC when all exceptions expire
>
> For reference, our rc1 drops on approximately 25 September, so the
> exception period needs to be short to maximise stabilization time.
>
> John Garbutt and I will both be granting exceptions, to maximise our
> timezone coverage. We will grant exceptions as they come in and gather
> the required number of cores, although I have also carved some time
> out in the nova IRC meeting this week for people to discuss specific
> exception requests.
>
> Michael
>




Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-02 Thread Clark Boylan
On Tue, Sep 2, 2014, at 11:30 AM, Yuriy Taraday wrote:
> Hello.
> 
> Currently for alpha releases of oslo libraries we generate either
> universal
> or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
> releases in projects where Python 3.x is supported and verified in the
> gate. I've ran into this in change request [1] generated after
> global-requirements change [2]. There we have oslotest library that can't
> be built as a universal wheel because of different requirements (mox vs
> mox3 as I understand is the main difference). Because of that py33 job in
> [1] failed and we can't bump oslotest version in requirements.
> 
> I propose to change infra scripts that generate and upload wheels to
> create
> py3 wheels as well as py2 wheels for projects that support Python 3.x (we
> can use setup.cfg classifiers to find that out) but don't support
> universal
> wheels. What do you think about that?
> 
> [1] https://review.openstack.org/117940
> [2] https://review.openstack.org/115643
> 
> -- 
> 
> Kind regards, Yuriy.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

We may find that we will need to have py3k wheels in addition to the
existing wheels at some point, but I don't think this use case requires
it. If oslo.test needs to support python2 and python3 it should use mox3
in both cases which claims to support python2.6, 2.7 and 3.2. Then you
can ship a universal wheel. This should solve the immediate problem.

It has been pointed out to me that one case where it won't be so easy is
oslo.messaging and its use of eventlet under python2. Messaging will
almost certainly need python 2 and python 3 wheels to be separate. I
think we should continue to use universal wheels where possible and only
build python2 and python3 wheels in the special cases where necessary.

The setup.cfg classifiers should be able to do that for us, though PBR
may need updating? We will also need to learn to upload potentially >1
wheel in our wheel jobs. That bit is likely straightforward. The last
thing that we need to make sure we do is that we have some testing in
place for the special wheels. We currently have the requirements
integration test which runs under python2 checking that we can actually
install all the things together. This ends up exercising our wheels and
checking that they actually work. We don't have a python3 equivalent for
that job. It may be better to work out some explicit checking of the
wheels we produce that applies to both versions of python. I am not
quite sure how we should approach that yet.

Clark



[openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-02 Thread Stefano Maffulli
On 08/29/2014 11:17 AM, John Garbutt wrote:
> After moving to use ZNC, I find IRC works much better for me now, but
> I am still learning really.

There! this sentence has two very important points worth highlighting:

1- when people say IRC they mean IRC + a hack to overcome its limitation
2- IRC+znc is complex, not many people are used to it

I never used znc, refused to install, secure and maintain yet another
public facing service. For me IRC is: be there when it happens or read
the logs on eavesdrop, if needed.

Recently I found out that there are znc services out there that could
make things simpler but they're not easy to join (at least the couple I
looked at).

Would it make sense to offer znc as a service within the openstack project?

-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-02 Thread Nikola Đipanov
On 09/02/2014 08:16 PM, Michael Still wrote:
> Hi.
> 
> We're soon to hit feature freeze, as discussed in Thierry's recent
> email. I'd like to outline the process for requesting a freeze
> exception:
> 
> * your code must already be up for review
> * your blueprint must have an approved spec
> * you need three (3) sponsoring cores for an exception to be granted

Can core reviewers who have features up for review have this number
lowered to two (2) sponsoring cores, as they in reality then need four
(4) cores (since they themselves are one (1) core but cannot really
vote) making it an order of magnitude more difficult for them to hit
this checkbox?

Thanks,
N.

> * exceptions must be granted before midnight, Friday this week
> (September 5) UTC
> * the exception is valid until midnight Friday next week
> (September 12) UTC when all exceptions expire
> 
> For reference, our rc1 drops on approximately 25 September, so the
> exception period needs to be short to maximise stabilization time.
> 
> John Garbutt and I will both be granting exceptions, to maximise our
> timezone coverage. We will grant exceptions as they come in and gather
> the required number of cores, although I have also carved some time
> out in the nova IRC meeting this week for people to discuss specific
> exception requests.
> 
> Michael
> 




[openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-02 Thread Yuriy Taraday
Hello.

Currently for alpha releases of oslo libraries we generate either universal
or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
releases in projects where Python 3.x is supported and verified in the
gate. I've run into this in change request [1] generated after
global-requirements change [2]. There we have oslotest library that can't
be built as a universal wheel because of different requirements (mox vs
mox3 as I understand is the main difference). Because of that py33 job in
[1] failed and we can't bump oslotest version in requirements.

I propose to change infra scripts that generate and upload wheels to create
py3 wheels as well as py2 wheels for projects that support Python 3.x (we
can use setup.cfg classifiers to find that out) but don't support universal
wheels. What do you think about that?

[1] https://review.openstack.org/117940
[2] https://review.openstack.org/115643
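
To make the proposal a bit more concrete, here is a rough sketch (not an
existing infra script) of how a wheel-building job could inspect the trove
classifiers in setup.cfg to decide which wheels to build; the section and
option names assume a pbr-style setup.cfg:

    import ConfigParser  # configparser on Python 3

    def wheel_targets(setup_cfg_path):
        """Guess which wheels to build from a project's setup.cfg."""
        parser = ConfigParser.ConfigParser()
        parser.read([setup_cfg_path])

        classifiers = parser.get('metadata', 'classifier').splitlines()
        supports_py2 = any('Programming Language :: Python :: 2' in c
                           for c in classifiers)
        supports_py3 = any('Programming Language :: Python :: 3' in c
                           for c in classifiers)

        universal = (parser.has_section('wheel') and
                     parser.has_option('wheel', 'universal') and
                     parser.getboolean('wheel', 'universal'))

        if universal or not (supports_py2 and supports_py3):
            # A single wheel (universal, or for the only supported major
            # version) is enough.
            return ['default']
        # Otherwise build one wheel per supported major version.
        return ['py2', 'py3']

The job would then run "pythonX setup.py bdist_wheel" once per returned target
and upload every resulting wheel.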

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [zaqar] [marconi] Juno Performance Testing (Round 1)

2014-09-02 Thread Kurt Griffiths
Sure thing, I’ll add that to my list of things to try in “Round 2” (coming
later this week).

On 8/28/14, 9:05 AM, "Jay Pipes"  wrote:

>On 08/26/2014 05:41 PM, Kurt Griffiths wrote:
>>  * uWSGI + gevent
>>  * config: http://paste.openstack.org/show/100592/
>>  * app.py: http://paste.openstack.org/show/100593/
>
>Hi Kurt!
>
>Thanks for posting the benchmark configuration and results. Good stuff :)
>
>I'm curious about what effect removing http-keepalive from the uWSGI
>config would make. AIUI, for systems that need to support lots and lots
>of random reads/writes from lots of tenants, using keepalive sessions
>would cause congestion for incoming new connections, and may not be
>appropriate for such systems.
>
>Totally not a big deal; really, just curious if you'd run one or more of
>the benchmarks with keepalive turned off and what results you saw.
>
>Best,
>-jay
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-02 Thread Michael Still
Hi.

We're soon to hit feature freeze, as discussed in Thierry's recent
email. I'd like to outline the process for requesting a freeze
exception:

* your code must already be up for review
* your blueprint must have an approved spec
* you need three (3) sponsoring cores for an exception to be granted
* exceptions must be granted before midnight, Friday this week
(September 5) UTC
* the exception is valid until midnight Friday next week
(September 12) UTC when all exceptions expire

For reference, our rc1 drops on approximately 25 September, so the
exception period needs to be short to maximise stabilization time.

John Garbutt and I will both be granting exceptions, to maximise our
timezone coverage. We will grant exceptions as they come in and gather
the required number of cores, although I have also carved some time
out in the nova IRC meeting this week for people to discuss specific
exception requests.

Michael

-- 
Rackspace Australia



Re: [openstack-dev] [OpenStack-Infra] [third-party] [infra] New mailing lists for third party announcements and account requests

2014-09-02 Thread Stefano Maffulli
On Fri 29 Aug 2014 03:03:34 PM PDT, James E. Blair wrote:
> It's the best way we have right now, until we have time to make it more
> self-service.  We received one third-party CI request in 2 years, then
> we received 88 more in 6 months.  Our current process is built around
> the old conditions.  I don't know if the request list will continue
> indefinitely, but the announce list will.  We definitely need a
> low-volume place to announce changes to third-party CI operators.

Let me make it more clear: I acknowledge your pain and I think that
you're doing a great job at scaling up the efforts. It's great that
there is a team in charge of helping 3rd party CI folks get onboard.
Indeed, there are a lot of administrative requests coming into the infra
list and that is distracting.

NOTE before reading forward: I'm basing my opinion on the URL that
pleia2 shared before; if there have been conversations with the majority
of the 3rd party CI folks where they expressed agreement on the decision
to create 2 new lists, then most of my concerns disappear.

The reason of my concern comes from the way the decision was taken, its
speed and the method. I've seen in other OpenStack programs the bad
effects of important decisions, affecting multiple people, taken too
rapidly and with little socialization around them.

My job is to keep all openstack contributors happy and if all programs
start introducing radical changes with little discussions, my job
becomes almost impossible.

Summing up: I don't question infra's legitimate decision, provoked by a
visible pain in the program. I'm not sure adding more lists is going to
help, but I don't want to dive into the choice of tools: it's the
method that I disagree with because it may confuse new contributors and
makes my job less effective. I wish decisions involving
new/inexperienced contributors be taken by all teams with longer/deeper
conversations.

/stef

-- 
Ask and answer questions on https://ask.openstack.org



[openstack-dev] [Nova] Michael will be out of contact tonight

2014-09-02 Thread Michael Still
Hi,

I'll be on a long haul flight tonight from about 21:00 UTC. So... Once
feature freeze happens I'm not ignoring any freeze exceptions, it will
just take me a little while to get to them.

Michael

-- 
Rackspace Australia



Re: [openstack-dev] [oslo] library feature freeze and final releases

2014-09-02 Thread Ben Nemec
Sounds good to me too.

On 09/02/2014 08:20 AM, Doug Hellmann wrote:
> Oslo team,
> 
> We need to consider how we are going to handle the approaching feature freeze 
> deadline (4 Sept). We should, at this point, be focusing reviews on changes 
> associated with blueprints. We will have time to finish graduation work and 
> handle bugs between the freeze and the release candidate deadline, but 
> obviously it’s OK to review those now, too.
> 
> I propose that we apply the feature freeze rules to the incubator and any 
> library that has had a release this cycle and is being used by any other 
> project, but that libraries still being graduated not be frozen. I think that 
> gives us exceptions for oslo.concurrency, oslo.serialization, and 
> oslo.middleware. All of the other libraries should be planning to freeze new 
> feature work this week.
> 
> The app RC1 period starts 25 Sept, so we should be prepared to tag our final 
> releases of libraries before then to ensure those final releases don’t 
> introduce issues into the apps when they are released. We will apply 1.0 tags 
> to the same commits that have the last alpha in the release series for each 
> library, and then focus on fixing any bugs that come up during the release 
> candidate period. I propose that we tag our releases on 18 Sept, to give us a 
> few days to fix any issues that arise before the RC period starts.
> 
> Please let me know if you spot any issues with this plan.
> 
> Doug
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Brandon Logan
Hi Susanne,

I believe the options for Octavia are:
1) Merge into the LBaaS tree (wherever LBaaS is)
2) Become its own openstack project
3) Remains in stackforge for eternity

#1 Is dependent on these options
1) LBaaS V2 graduates from the incubator into Neutron. V1 is deprecated.
2) LBaaS V2 remains in incubator until it can be spun out.  V1 in
Neutron is deprecated.
3) LBaaS V2 is abandoned in the incubator and LBaaS V1 remains.  (An
unlikely option)

I don't see any other feasible options.

On Tue, 2014-09-02 at 12:06 -0400, Susanne Balle wrote:
> Doug
> 
> 
> I agree with you but I need to understand the options. Susanne
> 
> 
> >> And I agree with Brandon’s sentiments.  We need to get something
> built before I’m going to worry too 
> >> much about where it should live.  Is this a candidate to get sucked
> into LBaaS?  Sure.  Could the reverse 
> >> happen?  Sure.  Let’s see how it develops.
> 
> 
> 
> On Tue, Sep 2, 2014 at 11:45 AM, Doug Wiegley 
> wrote:
> Hi all,
> 
> 
> > On the other hand one could also say that Octavia is the ML2
> equivalent of LBaaS. The equivalence here is very loose.
> Octavia would be a service-VM framework for doing load
> balancing using a variety of drivers. The drivers ultimately
> are in charge of using backends like haproxy or nginx running
> on the service VM to implement lbaas configuration.
> 
> 
> This, exactly.  I think it’s much fairer to define Octavia as
> an LBaaS purpose-built service vm framework, which will use
> nova and haproxy initially to provide a highly scalable
> backend. But before we get into terminology misunderstandings,
> there are a bunch of different “drivers” at play here, exactly
> because this is a framework:
>   * Neutron lbaas drivers – what we all know and love
>   * Octavia’s “network driver” - this is a piece of glue
> that exists to hide internal calls we have to make
> into Neutron until clean interfaces exist.  It might
> be a no-op in the case of an actual neutron lbaas
> driver, which could serve that function instead.
>   * Octavia’s “vm driver” - this is a piece of glue
> between the octavia controller and the nova VMs that
> are doing the load balancing.
>   * Octavia’s “compute driver” - you guessed it, an
> abstraction to Nova and its scheduler.
> Places that can be the “front-end” for Octavia:
>   * Neutron LBaaS v2 driver
>   * Neutron LBaaS v1 driver
>   * It’s own REST API
> Things that could have their own VM drivers:
>   * haproxy, running inside nova
>   * Nginx, running inside nova
>   * Anything else you want, running inside any hypervisor
> you want
>   * Vendor soft appliances
>   * Null-out the VM calls and go straight to some other
> backend?  Sure, though I’m not sure I’d see the point.
> There are quite a few synergies with other efforts, and we’re
> monitoring them, but not waiting for any of them.
> 
> 
> And I agree with Brandon’s sentiments.  We need to get
> something built before I’m going to worry too much about where
> it should live.  Is this a candidate to get sucked into
> LBaaS?  Sure.  Could the reverse happen?  Sure.  Let’s see how
> it develops.
> 
> 
> Incidentally, we are currently having a debate over the use of
> the term “vm” (and “vm driver”) as the name to describe
> octavia’s backends.  Feel free to chime in
> here: https://review.openstack.org/#/c/117701/
> 
> 
> Thanks,
> doug
> 
> 
> 
> 
> From: Salvatore Orlando 
> 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> 
> Date: Tuesday, September 2, 2014 at 9:05 AM
> 
> To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
> 
> 
> 
> Hi Susanne,
> 
> 
> I'm just trying to gain a good understanding of the situation
> here.
> More comments and questions inline.
> 
> 
> Salvatore
> 
> On 2 September 2014 16:34, Susanne Balle
>  wrote:
> Salvatore 
> 
> 
> Thanks for your clarification below around the
> blueprint.
> 
> 
> > For LBaaS v2 therefore the relationship between it
> and Octavia s

Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

2014-09-02 Thread Ivan Kolodyazhny
Hi all,

Emma,
thanks for raising this topic, I've added it for the next Cinder weekly
meeting [1].

Duncan,
I absolutely agree with you that if code can be located in one place,
it should be there.

I'm not sure which place is the best for Brick: oslo or stackforge. Let's
discuss it. I volunteer to make Brick a separate library and to make the
OpenStack code better by reusing code as much as possible.


[1] https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting

Regards,
Ivan Kolodyazhny,
Software Engineer,
Mirantis Inc


On Tue, Sep 2, 2014 at 1:54 PM, Duncan Thomas 
wrote:

> On 2 September 2014 04:56, Emma Lin  wrote:
> > Hi Gurus,
> > I saw the wiki page for Cinder Brick proposal for Havana, but I didn’t
> see
> > any follow up on that idea. Is there any real progress on that idea?
> >
> > As this proposal is to address the local storage issue, I’d like to know
> the
> > status, and to see if there is any task required for hypervisor provider.
>
> Hi Emma
>
> Brick didn't really evolve to cover the local storage case, so we've
> not made much progress in that direction.
>
> Local storage comes up fairly regularly, but solving all of the points
> (availability, API behaviour completeness, performance, scheduling)
> from a pure cinder PoV is a hard problem - i.e. making local storage
> look like a normal cinder volume. Specs welcome, email me if you want
> more details on the problems - there are certainly many people
> interested in seeing the problem solved.
>
> There is code in brick that could be used in nova as is to reduce
> duplication and give a single place to fix bugs - nobody has yet taken
> this work on as far as I know.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [nova] feature branch for Nova v2.1 API?

2014-09-02 Thread Michael Still
On Tue, Sep 2, 2014 at 9:18 AM, Daniel P. Berrange  wrote:

> I think it is reasonable to assume that our Juno work is easily capable
> of keeping the entire core team 100% busy until Kilo opens. So having
> people review v2.1 stuff on a feature branch is definitely going to
> impact the work we get done for Juno to some extent, though it is
> admittedly hard to quantify this impact in any meaningful way.

I agree. I'm happy for people to be working on v2.1, but we really do
need cores to be focussing on reviews for Juno and everyone else to be
trying to close bugs. It's not that long until we reopen master for
Kilo, so asking people to divert their efforts for a few weeks doesn't
seem unreasonable.

Michael

-- 
Rackspace Australia



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Brandon Logan
Yeah I've been worried about the term "driver" being overused here.
However, it might not be too bad if we get the other terminology correct
(network driver, vm/container/appliance driver, etc).

I was thinking of ML2 when I said Octavia living in the LBaaS tree might
be best.  I was also thinking that it makes sense if the end goal is for
Octavia to be in openstack.  Also, even if it goes into the LBaaS tree,
it doesn't mean it can't be spun out as its own openstack project,
though I do recognize the backwards-ness of that.

That said, I'm not strongly opposed to either option.  I just want
everyone involved to be happy, though that is not always going to
happen.

Thanks,
Brandon

On Tue, 2014-09-02 at 15:45 +, Doug Wiegley wrote:
> Hi all,
> 
> 
> > On the other hand one could also say that Octavia is the ML2
> equivalent of LBaaS. The equivalence here is very loose. Octavia would
> be a service-VM framework for doing load balancing using a variety of
> drivers. The drivers ultimately are in charge of using backends like
> haproxy or nginx running on the service VM to implement lbaas
> configuration.
> 
> 
> This, exactly.  I think it’s much fairer to define Octavia as an LBaaS
> purpose-built service vm framework, which will use nova and haproxy
> initially to provide a highly scalable backend. But before we get into
> terminology misunderstandings, there are a bunch of different
> “drivers” at play here, exactly because this is a framework:
>   * Neutron lbaas drivers – what we all know and love
>   * Octavia’s “network driver” - this is a piece of glue that
> exists to hide internal calls we have to make into Neutron
> until clean interfaces exist.  It might be a no-op in the case
> of an actual neutron lbaas driver, which could serve that
> function instead.
>   * Octavia’s “vm driver” - this is a piece of glue between the
> octavia controller and the nova VMs that are doing the load
> balancing.
>   * Octavia’s “compute driver” - you guessed it, an abstraction to
> Nova and its scheduler.
> Places that can be the “front-end” for Octavia:
>   * Neutron LBaaS v2 driver
>   * Neutron LBaaS v1 driver
>   * It’s own REST API
> Things that could have their own VM drivers:
>   * haproxy, running inside nova
>   * Nginx, running inside nova
>   * Anything else you want, running inside any hypervisor you want
>   * Vendor soft appliances
>   * Null-out the VM calls and go straight to some other backend?
>  Sure, though I’m not sure I’d see the point.
> There are quite a few synergies with other efforts, and we’re
> monitoring them, but not waiting for any of them.
> 
> 
> And I agree with Brandon’s sentiments.  We need to get something built
> before I’m going to worry too much about where it should live.  Is
> this a candidate to get sucked into LBaaS?  Sure.  Could the reverse
> happen?  Sure.  Let’s see how it develops.
> 
> 
> Incidentally, we are currently having a debate over the use of the
> term “vm” (and “vm driver”) as the name to describe octavia’s
> backends.  Feel free to chime in
> here: https://review.openstack.org/#/c/117701/
> 
> 
> Thanks,
> doug
> 
> 
> 
> 
> From: Salvatore Orlando 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Tuesday, September 2, 2014 at 9:05 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
> 
> 
> 
> Hi Susanne,
> 
> 
> I'm just trying to gain a good understanding of the situation here.
> More comments and questions inline.
> 
> 
> Salvatore
> 
> On 2 September 2014 16:34, Susanne Balle 
> wrote:
> Salvatore 
> 
> 
> Thanks for your clarification below around the blueprint.
> 
> 
> > For LBaaS v2 therefore the relationship between it and
> Octavia should be the same as with any other
> > backend. I see Octavia has a blueprint for a "network
> driver" - and the derivable of that should definitely be
> > part of the LBaaS project.
> 
> 
> > For the rest, it would seem a bit strange to me if the LBaaS
> project incorporated a backend as well. After 
> 
> > all, LBaaS v1 did not incorporate haproxy!
> > Also, as Adam points out, Nova does not incorporate an
> Hypervisor.
> 
> 
> In my vision Octavia is a LBaaS framework that should not be
> tied to ha-proxy. The interfaces should be clean and at a high
> enough level that we can switch load-balancer. We should be
> able to switch the load-balancer to nginx so to me the analogy
> is more Octavia+LBaaSV2 == nova and hypervisor ==
> load-balancer.
> 
> 
> Indeed I said that it would have been initially tied to haproxy
> considering the blueprints currently defined for octavia, but I'm sure the
> solution could leverage nginx or something else in the future.

Re: [openstack-dev] [vmware][nova] Refactor reseries rebased

2014-09-02 Thread Michael Still
By which John means "generally trying to avoid filling".

Michael

On Tue, Sep 2, 2014 at 10:52 AM, John Garbutt  wrote:
> On 2 September 2014 15:27, Matthew Booth  wrote:
>> We've been playing a game recently between oslo.vmware and the refactor
>> series where a patch from the refactor series goes in, requiring a
>> rebase of oslo.vmware daily. After a brief discussion with garyk earlier
>> I decided to head that off by rebasing the refactor series on top of
>> oslo.vmware, which has been sat in the integrated queue in the gate for
>> over 5 hours now. i.e. Whether it succeeds or fails, it will now go in
>> before anything else.
>>
>> Unfortunately, in doing that I have had to lose +2 +A on 4 refactor
>> series patches. I made a note of who had approved them:
>>
>> https://review.openstack.org/#/c/109754/
>> Brian Elliott +2
>> John Garbutt +2 +A
>>
>> https://review.openstack.org/#/c/109755/
>> Daniel Berrange +2
>> Andrew Laski +2
>> John Garbutt +2 +A
>>
>> https://review.openstack.org/#/c/114817/
>> Brian Elliott +2
>> Andrew Laski +2
>> John Garbutt +2 +A
>>
>> https://review.openstack.org/#/c/117467/
>> Brian Elliott +2
>> Andrew Laski +2
>> John Garbutt +2 +A
>>
>> These patches have been lightly touched to resolve merge conflicts with
>> the oslo.vmware integration, but no more. If people could take another
>> quick look I'd be very grateful.
>
> If stuff was approved, and you have rebased, just reach out for cores
> on (or us) on IRC. Thats a general invite, particularly at this FF
> sort of time.
>
> Generally trying to to fill the ML with review requests.
>
> Cheers,
> John
>



-- 
Rackspace Australia



Re: [openstack-dev] [zaqar] [marconi] Removing GET message by ID in v1.1 (Redux)

2014-09-02 Thread Kurt Griffiths
Thanks everyone for your feedback. I think we have a consensus that this
sort of change would be best left to v2 of the API. We can start planning
v2 of the API at the Paris summit, and target some kind of “community
preview” of it to be released as part of Kilo.

On 8/29/14, 11:02 AM, "Everett Toews"  wrote:

>On Aug 28, 2014, at 3:08 AM, Flavio Percoco  wrote:
>
>> Unfortunately, as Nataliia mentioned, we can't just get rid of it in
>> v1.1 because that implies a major change in the API, which would require
>> a major release. What we can do, though, is start working on a spec for
>> the V2 of the API.
>
>+1
>
>Please don’t make breaking changes in minor version releases. v2 would be
>the place for this change.
>
>Thanks,
>Everett
>
>



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Susanne Balle
Doug

I agree with you but I need to understand the options. Susanne

>> And I agree with Brandon’s sentiments.  We need to get something built
before I’m going to worry too
>> much about where it should live.  Is this a candidate to get sucked into
LBaaS?  Sure.  Could the reverse
>> happen?  Sure.  Let’s see how it develops.


On Tue, Sep 2, 2014 at 11:45 AM, Doug Wiegley  wrote:

>  Hi all,
>
>  > On the other hand one could also say that Octavia is the ML2
> equivalent of LBaaS. The equivalence here is very loose. Octavia would be a
> service-VM framework for doing load balancing using a variety of drivers.
> The drivers ultimately are in charge of using backends like haproxy or
> nginx running on the service VM to implement lbaas configuration.
>
>  This, exactly.  I think it’s much fairer to define Octavia as an LBaaS
> purpose-built service vm framework, which will use nova and haproxy
> initially to provide a highly scalable backend. But before we get into
> terminology misunderstandings, there are a bunch of different “drivers” at
> play here, exactly because this is a framework:
>
>- Neutron lbaas drivers – what we all know and love
>- Octavia’s “network driver” - this is a piece of glue that exists to
>hide internal calls we have to make into Neutron until clean interfaces
>exist.  It might be a no-op in the case of an actual neutron lbaas driver,
>which could serve that function instead.
>- Octavia’s “vm driver” - this is a piece of glue between the octavia
>controller and the nova VMs that are doing the load balancing.
>- Octavia’s “compute driver” - you guessed it, an abstraction to Nova
>and its scheduler.
>
> Places that can be the “front-end” for Octavia:
>
>- Neutron LBaaS v2 driver
>- Neutron LBaaS v1 driver
>- It’s own REST API
>
> Things that could have their own VM drivers:
>
>- haproxy, running inside nova
>- Nginx, running inside nova
>- Anything else you want, running inside any hypervisor you want
>- Vendor soft appliances
>- Null-out the VM calls and go straight to some other backend?  Sure,
>though I’m not sure I’d see the point.
>
> There are quite a few synergies with other efforts, and we’re monitoring
> them, but not waiting for any of them.
>
>  And I agree with Brandon’s sentiments.  We need to get something built
> before I’m going to worry too much about where it should live.  Is this a
> candidate to get sucked into LBaaS?  Sure.  Could the reverse happen?
>  Sure.  Let’s see how it develops.
>
>  Incidentally, we are currently having a debate over the use of the term
> “vm” (and “vm driver”) as the name to describe octavia’s backends.  Feel
> free to chime in here: https://review.openstack.org/#/c/117701/
>
>  Thanks,
> doug
>
>
>   From: Salvatore Orlando 
>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, September 2, 2014 at 9:05 AM
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
>
>   Hi Susanne,
>
>  I'm just trying to gain a good understanding of the situation here.
> More comments and questions inline.
>
>  Salvatore
>
> On 2 September 2014 16:34, Susanne Balle  wrote:
>
>> Salvatore
>>
>>  Thanks for your clarification below around the blueprint.
>>
>>  > For LBaaS v2 therefore the relationship between it and Octavia should
>> be the same as with any other
>> > backend. I see Octavia has a blueprint for a "network driver" - and the
>> derivable of that should definitely be
>> > part of the LBaaS project.
>>
>>  > For the rest, it would seem a bit strange to me if the LBaaS project
>> incorporated a backend as well. After
>>  > all, LBaaS v1 did not incorporate haproxy!
>> > Also, as Adam points out, Nova does not incorporate an Hypervisor.
>>
>>  In my vision Octavia is a LBaaS framework that should not be tied to
>> ha-proxy. The interfaces should be clean and at a high enough level that we
>> can switch load-balancer. We should be able to switch the load-balancer to
>> nginx so to me the analogy is more Octavia+LBaaSV2 == nova and hypervisor
>> == load-balancer.
>>
>
>  Indeed I said that it would have been initially tied to haproxy
> considering the blueprints currently defined for octavia, but I'm sure the
> solution could leverage nginx or something else in the future.
>
>  I think however it is correct to say that LBaaS v2 will have an Octavia
> driver on par with A10, radware, netscaler and others.
> (Correct me if I'm wrong) On the other hand Octavia, within its
> implementation, might use different drivers - for instance nginx or
> haproxy. And in theory it cannot be excluded that the same appliance might
> implement some vips using haproxy and others using nginx.
>
>
>>  I am not sure the group is in agreement on the definition I just wrote.
>> Also going back the definition of Octavia being a backend then I agree that
>> we should write a blueprint to incorporate Octavia as a network driver.

Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend

2014-09-02 Thread Romain Hardouin
On Tue, 2014-09-02 at 17:34 +0300, Dmitriy Ukhlov wrote:
> Hi Romain!
> 
> 
> Thank you for useful info about your Cassandra backuping.

It's always a pleasure to talk about Cassandra :)

> 
> We have not tried to tune Cassandra compaction properties yet.
> 
> MagnetoDB is DynamoDB-like REST API and it means that it is key-value
> storage itself and it should be able to work for different kind of
> load, because it depends on user application which use MagnetoDB.

The compaction strategy choice really matters when setting up a cluster.
In such a use case, I mean MagnetoDB, we can assume that the database
will be updated frequently. Thus LCS is more suitable than STCS.


> Do you have some recommendation or comments based on information about
> read/write ratio?

Yes, if read/write ratio >= 2 then LCS is a must have.
Just be aware that LCS is more IO intensive during compaction than STCS,
but it's for a good cause.

You'll find information here:
http://www.datastax.com/dev/blog/when-to-use-leveled-compaction
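As a minimal illustration (assuming the DataStax Python driver and a
hypothetical keyspace/table named magnetodb.user_data), switching an existing
table over to LCS is a one-line schema change:

    from cassandra.cluster import Cluster

    # Connect to one of the cluster's contact points (address is hypothetical).
    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('magnetodb')

    # Switch the table from size-tiered to leveled compaction.
    session.execute(
        "ALTER TABLE user_data WITH compaction = "
        "{'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160}")

    cluster.shutdown()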

Best,

Romain






[openstack-dev] [Heat] Request for python-heatclient project to adopt heat-translator

2014-09-02 Thread Sahdev P Zala
Hello guys,
 
As you know, the heat-translator project was started early this year with 
an aim to create a tool to translate non-Heat templates to HOT. It is a 
StackForge project licensed under Apache 2. We have made good progress 
with its development and a demo was given at the OpenStack 2014 Atlanta 
summit during a half-a-day session that was dedicated to heat-translator 
project and related TOSCA discussion. Currently the development and 
testing is done with the TOSCA template format but the tool is designed to 
be generic enough to work with templates other than TOSCA. There are five 
developers actively contributing to the development. In addition, all 
current Heat core members are already core members of the heat-translator 
project.
Recently, I attended Heat Mid Cycle Meet Up for Juno in Raleigh and 
updated the attendees on heat-translator project and ongoing progress. I 
also requested everyone for a formal adoption of the project in the 
python-heatclient and the consensus was that it is the right thing to do. 
Also when the project was started, the initial plan was to make it 
available in python-heatclient. Hereby, the heat-translator team would 
like to make a request to have the heat-translator project adopted 
by the python-heatclient/Heat program. 
Below are some of links related to the project, 
https://github.com/stackforge/heat-translator
https://launchpad.net/heat-translator
https://blueprints.launchpad.net/heat-translator
https://bugs.launchpad.net/heat-translator
http://heat-translator.readthedocs.org/ (in progress)

Thanks! 

Regards, 
Sahdev Zala 
IBM SWG Standards Strategy 
Durham, NC 
(919)486-2915 T/L: 526-2915


Re: [openstack-dev] [vmware][nova] Refactor reseries rebased

2014-09-02 Thread John Garbutt
On 2 September 2014 15:27, Matthew Booth  wrote:
> We've been playing a game recently between oslo.vmware and the refactor
> series where a patch from the refactor series goes in, requiring a
> rebase of oslo.vmware daily. After a brief discussion with garyk earlier
> I decided to head that off by rebasing the refactor series on top of
> oslo.vmware, which has been sat in the integrated queue in the gate for
> over 5 hours now. i.e. Whether it succeeds or fails, it will now go in
> before anything else.
>
> Unfortunately, in doing that I have had to lose +2 +A on 4 refactor
> series patches. I made a note of who had approved them:
>
> https://review.openstack.org/#/c/109754/
> Brian Elliott +2
> John Garbutt +2 +A
>
> https://review.openstack.org/#/c/109755/
> Daniel Berrange +2
> Andrew Laski +2
> John Garbutt +2 +A
>
> https://review.openstack.org/#/c/114817/
> Brian Elliott +2
> Andrew Laski +2
> John Garbutt +2 +A
>
> https://review.openstack.org/#/c/117467/
> Brian Elliott +2
> Andrew Laski +2
> John Garbutt +2 +A
>
> These patches have been lightly touched to resolve merge conflicts with
> the oslo.vmware integration, but no more. If people could take another
> quick look I'd be very grateful.

If stuff was approved, and you have rebased, just reach out for cores
on (or us) on IRC. Thats a general invite, particularly at this FF
sort of time.

Generally trying to to fill the ML with review requests.

Cheers,
John



Re: [openstack-dev] [nova] feature branch for Nova v2.1 API?

2014-09-02 Thread Jeremy Stanley
On 2014-09-02 15:18:14 +0100 (+0100), Daniel P. Berrange wrote:
> Do we have any historic precedent of using feature branches in this kind
> of way in the past, either in Nova or other projects ? If so, I'd be
> interested in how successful it was.
[...]

The new Keystone API used a feature branch, as did Swift's erasure
coding support, and Zuul's Gearman integration... I get the
impression success varied a bit depending on expectations (but I'll
let those who were maintaining the branches speak to their
experiences). The workflow we have documented for this is
https://wiki.openstack.org/wiki/GerritJenkinsGit#Merge_Commits if
you're interested in how it's implemented.
-- 
Jeremy Stanley



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Salvatore Orlando
Hi Susanne,

I'm just trying to gain a good understanding of the situation here.
More comments and questions inline.

Salvatore

On 2 September 2014 16:34, Susanne Balle  wrote:

> Salvatore
>
> Thanks for your clarification below around the blueprint.
>
> > For LBaaS v2 therefore the relationship between it and Octavia should be
> the same as with any other
> > backend. I see Octavia has a blueprint for a "network driver" - and the
> derivable of that should definitely be
> > part of the LBaaS project.
>
> > For the rest, it would seem a bit strange to me if the LBaaS project
> incorporated a backend as well. After
> > all, LBaaS v1 did not incorporate haproxy!
> > Also, as Adam points out, Nova does not incorporate an Hypervisor.
>
> In my vision Octavia is a LBaaS framework that should not be tied to
> ha-proxy. The interfaces should be clean and at a high enough level that we
> can switch load-balancer. We should be able to switch the load-balancer to
> nginx so to me the analogy is more Octavia+LBaaSV2 == nova and hypervisor
> == load-balancer.
>

Indeed I said that it would have been initially tied to haproxy considering
the blueprints currently defined for octavia, but I'm sure the solution
could leverage nginx or something else in the future.

I think however it is correct to say that LBaaS v2 will have an Octavia
driver on par with A10, radware, netscaler and others.
(Correct me if I'm wrong) On the other hand Octavia, within its
implementation, might use different drivers - for instance nginx or
haproxy. And in theory it cannot be excluded that the same appliance might
implement some vips using haproxy and others using nginx.


> I am not sure the group is in agreement on the definition I just wrote.
> Also going back the definition of Octavia being a backend then I agree that
> we should write a blueprint to incorporate Octavia as a network driver.
>

What about this blueprint?
https://blueprints.launchpad.net/octavia/+spec/neutron-network-driver


>
> I guess I had always envisioned what we now call Octavia to be part of the
> LBaaS service itself and have ha-proxy, nginx be the drivers and not have
> the driver level be at the Octavia cut-over point. Given this new "design"
> I am now wondering why we didn't just write a driver for Libra and improve
> on Libra, since to me that is now the driver level we are discussing.
>

Octavia could be part of the lbaas service just like neutron has a set of
agents which at the end of the day provide a L2/L3 network virtualization
service. Personally I'm of the opinion that I would move that code in a
separate repo which could be maintained by networking experts (I can barely
plug an ethernet cable into a switch). But the current situation creates a
case for Octavia inclusion in lbaas.

On the other hand one could also say that Octavia is the ML2 equivalent of
LBaaS. The equivalence here is very loose. Octavia would be a service-VM
framework for doing load balancing using a variety of drivers. The drivers
ultimately are in charge of using backends like haproxy or nginx running on
the service VM to implement lbaas configuration.
To avoid further discussion it might be better to steer away from
discussing overlaps and synergies with the service VM project, at least for
now.

I think the ability of having the Libra driver was discussed in the past. I
do not know the details, but it seemed there was not a lot to gain from
having a Neutron LBaaS driver pointing to libra (ie: it was much easier to
just deploy libra instead of neutron lbaas).

Summarising, so far I haven't yet an opinion regarding where Octavia will
sit.
Nevertheless I think this is a discussion that it's useful for the
medium/long term - it does not seem to me that there is an urgency here.



>
> Regards Susanne
>
>
> On Tue, Sep 2, 2014 at 9:18 AM, Salvatore Orlando 
> wrote:
>
>> Some more comments from me inline.
>> Salvatore
>>
>>
>> On 2 September 2014 11:06, Adam Harwell 
>> wrote:
>>
>>> I also agree with most of what Brandon said, though I am slightly
>>> concerned by the talk of merging Octavia and [Neutron-]LBaaS-v2
>>> codebases.
>>>
>>
>> Beyond all the reasons listed in this thread - merging codebases is
>> always more difficult than what it seems!
>> Also it seems to me there's not yet a clear path for LBaaS v2. Mostly
>> because of the ongoing neutron incubator discussion.
>> However in my opinion there are 3 paths (and I have no idea whether they
>> might be applicable to Octavia as a standalone project).
>> 1) Aim at becoming part of neutron via the incubator or any equivalent
>> mechanisms
>> 2) Evolve in loosely coupled fashion with neutron, but still be part of
>> the networking program. (This means that LBaaS APIs will be part of
>> Openstack Network APIs)
>> 3) Evolve independently from neutron, and become part of a new program. I
>> have no idea however whether there's enough material to have a "load
>> balancing" program, and what would be the timeline for that.
>>

Re: [openstack-dev] [oslo] library feature freeze and final releases

2014-09-02 Thread Doug Hellmann
Yes, you’re right, I missed oslo.log.

Doug

On Sep 2, 2014, at 9:28 AM, Davanum Srinivas  wrote:

> Doug,
> 
> plan is good. Same criteria will mean exception for oslo.log as well
> 
> -- dims
> 
> On Tue, Sep 2, 2014 at 9:20 AM, Doug Hellmann  wrote:
>> Oslo team,
>> 
>> We need to consider how we are going to handle the approaching feature 
>> freeze deadline (4 Sept). We should, at this point, be focusing reviews on 
>> changes associated with blueprints. We will have time to finish graduation 
>> work and handle bugs between the freeze and the release candidate deadline, 
>> but obviously it’s OK to review those now, too.
>> 
>> I propose that we apply the feature freeze rules to the incubator and any 
>> library that has had a release this cycle and is being used by any other 
>> project, but that libraries still being graduated not be frozen. I think 
>> that gives us exceptions for oslo.concurrency, oslo.serialization, and 
>> oslo.middleware. All of the other libraries should be planning to freeze new 
>> feature work this week.
>> 
>> The app RC1 period starts 25 Sept, so we should be prepared to tag our final 
>> releases of libraries before then to ensure those final releases don’t 
>> introduce issues into the apps when they are released. We will apply 1.0 
>> tags to the same commits that have the last alpha in the release series for 
>> each library, and then focus on fixing any bugs that come up during the 
>> release candidate period. I propose that we tag our releases on 18 Sept, to 
>> give us a few days to fix any issues that arise before the RC period starts.
>> 
>> Please let me know if you spot any issues with this plan.
>> 
>> Doug
>> 
>> 
> 
> 
> 
> -- 
> Davanum Srinivas :: http://davanum.wordpress.com
> 




[openstack-dev] [oslo] library projects and milestone management

2014-09-02 Thread Doug Hellmann
Oslo team,

Thierry and I discussed some changes in how we manage milestones in launchpad 
during our 1:1 today. We settled on something close to what I think Mark has 
been doing for oslo.messaging. The idea is to use milestones named “next” in 
each library, and then rename those milestones when a release is made to use 
the alpha version number. At the end of the cycle, we will follow the process 
of the other projects and consolidate those “release candidates” into the final 
release so all of the features and fixed bugs are visible in one place.

For example, oslo.foo 1.0.0 will accumulate changes in “next” until we are 
ready to release 1.1.0.0a1. At that point, “next” is renamed to “1.1.0.0a1” (to 
preserve existing links to blueprints and bugs), all “fix committed” bugs are 
attached to 1.1.0.0a1 and changed to “fix released”, and a new “next” milestone 
is created. The same follows for 1.1.0.0a2. When we are ready to prepare the 
final release, a new “1.1.0” milestone is created and all of the bugs and 
blueprints associated with the alpha milestones are moved to the new final 
milestone.

Thierry has proposed creating some scripts to automate this, so I want to make 
sure everyone is clear on the details and that there are no holes in the plan 
before we go ahead with that work. Let me know if you have questions or if 
there’s an issue we need to resolve before setting up all of the existing Oslo 
projects on launchpad to support this.
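To make the renaming step concrete, here is a minimal sketch of what such a
script could do with launchpadlib (project name and version are hypothetical,
and it assumes the milestone's name attribute is writable through the API):

    from launchpadlib.launchpad import Launchpad

    PROJECT = 'oslo.foo'     # hypothetical
    ALPHA = '1.1.0.0a1'      # hypothetical next alpha version

    lp = Launchpad.login_with('oslo-release-tool', 'production')
    project = lp.projects[PROJECT]

    # Rename the accumulating "next" milestone to the alpha version so that
    # existing links to blueprints and bugs are preserved.
    milestone = project.getMilestone(name='next')
    milestone.name = ALPHA
    milestone.lp_save()

    # Re-create an empty "next" milestone on the development series so new
    # work has somewhere to accumulate.
    series = project.development_focus
    series.newMilestone(name='next')

Moving the "fix committed" bugs over and flipping them to "fix released" would
be a similar loop over the milestone's bug tasks, if we go that route.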

Doug


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Susanne Balle
I 100% agree with what Brandon wrote below and that is why IMHO they go
together and should be part of the same codebase.

Susanne

On Tue, Sep 2, 2014 at 1:12 AM, Brandon Logan 
 wrote:

I think the best course of action is to get Octavia itself into the same
codebase as LBaaS (Neutron or spun out).  They do go together, and the
maintainers will almost always be the same for both.  This makes even
more sense when LBaaS is spun out into its own project.


On Tue, Sep 2, 2014 at 1:12 AM, Brandon Logan 
wrote:

> Hi Susanne and everyone,
>
> My opinions are that keeping it in stackforge until it gets mature is
> the best solution.  I'm pretty sure we can all agree on that.  Whenever
> it is mature then, and only then, we should try to get it into openstack
> one way or another.  If Neutron LBaaS v2 is still incubated then it
> should be relatively easy to get it in that codebase.  If Neutron LBaaS
> has already spun out, even easier for us.  If we want Octavia to just
> become an openstack project all its own then that will be the difficult
> part.
>
> I think the best course of action is to get Octavia itself into the same
> codebase as LBaaS (Neutron or spun out).  They do go together, and the
> maintainers will almost always be the same for both.  This makes even
> more sense when LBaaS is spun out into its own project.
>
> I really think all of the answers to these questions will fall into
> place when we actually deliver a product that we are all wanting and
> talking about delivering with Octavia.  Once we prove that we can all
> come together as a community and manage a product from inception to
> maturity, we will then have the respect and trust to do what is best for
> an Openstack LBaaS product.
>
> Thanks,
> Brandon
>
> On Mon, 2014-09-01 at 10:18 -0400, Susanne Balle wrote:
> > Kyle, Adam,
> >
> >
> >
> > Based on this thread Kyle is suggesting the following moving-forward
> > plan:
> >
> >
> >
> > 1) We incubate Neutron LBaaS V2 in the “Neutron” incubator “and freeze
> > LBaas V1.0”
> > 2) “Eventually” It graduates into a project under the networking
> > program.
> > 3) “At that point” We deprecate Neutron LBaaS v1.
> >
> >
> >
> > The words in “xx“ are works I added to make sure I/We understand the
> > whole picture.
> >
> >
> >
> > And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 /
> > Radware / A10 / etc appliances which is a definition I agree with BTW.
> >
> >
> >
> > What I am trying to now understand is how we will move Octavia into
> > the new LBaaS project?
> >
> >
> >
> > If we do it later rather than develop Octavia in tree under the new
> > incubated LBaaS project when do we plan to bring it in-tree from
> > Stackforge? Kilo? Later? When LBaaS is a separate project under the
> > Networking program?
>
> >
> >
> > What are the criteria to bring a driver into the LBaaS project and
> > what do we need to do to replace the existing reference driver? Maybe
> > adding a software driver to LBaaS source tree is less of a problem
> > than converting a whole project to an OpenStack project.
>
> >
> >
> > Again I am open to both directions I just want to make sure we
> > understand why we are choosing to do one or the other and that our
> >  decision is based on data and not emotions.
> >
> >
> >
> > I am assuming that keeping Octavia in Stackforge will increase the
> > velocity of the project and allow us more freedom which is goodness.
> > We just need to have a plan to make it part of the Openstack LBaaS
> > project.
> >
> >
> >
> > Regards Susanne
> >
> >
> >
> >
> > On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell
> >  wrote:
> > Only really have comments on two of your related points:
> >
> >
> > [Susanne] To me Octavia is a driver so it is very hard to me
> > to think of it as a standalone project. It needs the new
> > Neutron LBaaS v2 to function which is why I think of them
> > together. This of course can change since we can add whatever
> > layers we want to Octavia.
> >
> >
> > [Adam] I guess I've always shared Stephen's
> > viewpoint — Octavia != LBaaS-v2. Octavia is a peer to F5 /
> > Radware / A10 / etc. appliances, not to an Openstack API layer
> > like Neutron-LBaaS. It's a little tricky to clearly define
> > this difference in conversation, and I have noticed that quite
> > a few people are having the same issue differentiating. In a
> > small group, having quite a few people not on the same page is
> > a bit scary, so maybe we need to really sit down and map this
> > out so everyone is together one way or the other.
> >
> >
> > [Susanne] Ok now I am confused… But I agree with you that it
> > need to focus on our use cases. I remember us discussing
> > Octavia being the reference implementation for OpenStack LBaaS
> > (whatever that is). Has that changed while I was on vacation?
> >
> >
> > [Adam] I believe that

Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend

2014-09-02 Thread Dmitriy Ukhlov
Hi Romain!

Thank you for useful info about your Cassandra backuping.

We have not tried to tune Cassandra compaction properties yet.
MagnetoDB is DynamoDB-like REST API and it means that it is key-value
storage itself and it should be able to work for different kind of load,
because it depends on user application which use MagnetoDB.

Do you have some recommendation or comments based on information about
read/write ratio?

On Tue, Sep 2, 2014 at 4:29 PM, Romain Hardouin <
romain.hardo...@cloudwatt.com> wrote:

> Hi Mirantis guys,
>
> I have set up two Cassandra backups:
> The first backup procedure was similar to the one you want to achieve.
> The second backup used SAN features (EMC VNX snapshots) so it was very
> specific to the environment.
>
> Backup an entire cluster (therefore all replicas) is challenging when
> dealing with big data and not really needed. If your replicas are spread
> accross several data centers then you could backup just one data center. In
> that case you backup only one replica.
> Depending on your needs you may want to backup twice (I mean "backup the
> backup" using a tape library for example) and then store it in an external
> location for disaster recovery, requirements specification, norms, etc.
>
> The snapshot command issues a flush before effectively taking the
> snapshot. So the flush command is not necessary.
>
> https://github.com/apache/cassandra/blob/c7ebc01bbc6aa602b91e105b935d6779245c87d1/src/java/org/apache/cassandra/db/ColumnFamilyStore.java#L2213
> (snapshotWithoutFlush() is used by the scrub command)
>
> Just out of curiosity, have you tried the leveled compaction strategy? It
> seems that you use STCS.
> Does your use case imply many updates? What is your read/write ratio?
>
> Best,
>
> Romain
>
> --
> *From: *"Denis Makogon" 
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Sent: *Friday, August 29, 2014 4:33:59 PM
> *Subject: *Re: [openstack-dev] [magnetodb] Backup procedure for
> Cassandra backend
>
>
>
>
>
> On Fri, Aug 29, 2014 at 4:29 PM, Dmitriy Ukhlov 
> wrote:
>
>> Hello Denis,
>> Thank you for very useful knowledge sharing.
>>
>> But I have one more question. As far as I understood if we have
>> replication factor 3 it means that our backup may contain three copies of
>> the same data. Also it may contain some not compacted sstables set. Do we
>> have any ability to compact collected backup data before moving it to
>> backup storage?
>>
>
> Thanks for fast response, Dmitriy.
>
> With replication factor 3 - yes, this looks like a feature that allows to
> backup only one node instead of 3 of them. In other cases, we would need to
> iterate over each node, as you know.
> Correct, it is possible to have not compacted SSTables. To accomplish
> compaction we might need to use compaction mechanism provided by the
> nodetool, see
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsCompact.html,
> we just need to take into account that it's possible that sstable was already
> compacted and force compaction wouldn't give valuable benefits.
>
>
> Best regards,
> Denis Makogon
>
>
>>
>> On Fri, Aug 29, 2014 at 2:01 PM, Denis Makogon 
>> wrote:
>>
>>> Hello, stackers. I'd like to start thread related to backuping procedure
>>> for MagnetoDB, to be precise, for Cassandra backend.
>>>
>>> In order to accomplish backuping procedure for Cassandra we need to
>>> understand how does backuping work.
>>>
>>> To perform backuping:
>>>
>>> 1. We need to SSH into each node
>>> 2. Call ‘nodetool snapshot’ with appropriate parameters
>>> 3. Collect backup.
>>> 4. Send backup to remote storage.
>>> 5. Remove initial snapshot
>>>
>>>
>>>  Let's take a look at how ‘nodetool snapshot’ works. Cassandra backs
>>> up data by taking a snapshot of all on-disk data files (SSTable files)
>>> stored in the data directory. Each time an SSTable gets flushed and
>>> snapshotted it becomes a hard link against initial SSTable pinned to
>>> specific timestamp.
>>>
>>> Snapshots are taken per keyspace or per-CF and while the system is
>>> online. However, nodes must be taken offline in order to restore a snapshot.
>>>
>>> Using a parallel ssh tool (such as pssh), you can flush and then
>>> snapshot an entire cluster. This provides an eventually consistent
>>> backup. Although no one node is guaranteed to be consistent with its
>>> replica nodes at the time a snapshot is taken, a restored snapshot can
>>> resume consistency using Cassandra's built-in consistency mechanisms.
>>>
>>> After a system-wide snapshot has been taken, you can enable incremental
>>> backups on each node (disabled by default) to backup data that has changed
>>> since the last snapshot was taken. Each time an SSTable is flushed, a hard
>>> link is copied into a /backups subdirectory of the data directory.
>>>
>>> Now lets see how can 

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Susanne Balle
Salvatore

Thanks for your clarification below around the blueprint.

> For LBaaS v2 therefore the relationship between it and Octavia should be
the same as with any other
> backend. I see Octavia has a blueprint for a "network driver" - and the
derivable of that should definitely be
> part of the LBaaS project.

> For the rest, it would seem a bit strange to me if the LBaaS project
incorporated a backend as well. After
> all, LBaaS v1 did not incorporate haproxy!
> Also, as Adam points out, Nova does not incorporate an Hypervisor.

In my vision Octavia is a LBaaS framework that should not be tied to
ha-proxy. The interfaces should be clean and at a high enough level that we
can switch load-balancer. We should be able to switch the load-balancer to
nginx so to me the analogy is more Octavia+LBaaSV2 == nova and hypervisor
== load-balancer.

I am not sure the group is in agreement on the definition I just wrote.
Also going back the definition of Octavia being a backend then I agree that
we should write a blueprint to incorporate Octavia as a network driver.

I guess I had always envisioned what we now call Octavia to be part of the
LBaaS service itself and have ha-proxy, nginx be the drivers and not have
the driver level be at the Octavia cut-over point. Given this new "design"
I am now wondering why we didn't just write a driver for Libra and improve
on Libra, since to me that is now the driver level we are discussing.

Regards Susanne


On Tue, Sep 2, 2014 at 9:18 AM, Salvatore Orlando 
wrote:

> Some more comments from me inline.
> Salvatore
>
>
> On 2 September 2014 11:06, Adam Harwell 
> wrote:
>
>> I also agree with most of what Brandon said, though I am slightly
>> concerned by the talk of merging Octavia and [Neutron-]LBaaS-v2 codebases.
>>
>
> Beyond all the reasons listed in this thread - merging codebases is always
> more difficult than what it seems!
> Also it seems to me there's not yet a clear path for LBaaS v2. Mostly
> because of the ongoing neutron incubator discussion.
> However in my opinion there are 3 paths (and I have no idea whether they
> might be applicable to Octavia as a standalone project).
> 1) Aim at becoming part of neutron via the incubator or any equivalent
> mechanisms
> 2) Evolve in loosely coupled fashion with neutron, but still be part of
> the networking program. (This means that LBaaS APIs will be part of
> Openstack Network APIs)
> 3) Evolve independently from neutron, and become part of a new program. I
> have no idea however whether there's enough material to have a "load
> balancing" program, and what would be the timeline for that.
>
>
>> [blogan] "I think the best course of action is to get Octavia itself into
>> the same codebase as LBaaS (Neutron or spun out)."
>> [sballe] "What I am trying to now understand is how we will move Octavia
>> into the new LBaaS project?"
>>
>>
>> I didn't think that was ever going to be the plan -- sure, we'd have an
>> Octavia driver that is part of the [Neutron-]LBaaS-v2 codebase (which
>> Susanne did mention as well), but nothing more than that. The actual
>> Octavia code would still be in its own project at the end of all of this,
>> right? The driver code could be added to [Neutron-]LbaaS-v2 at any point
>> once Octavia is mature enough to be used, just by submitting it as a CR, I
>> believe. Doug might be able to comment on that, since he maintains the A10
>> driver?
>>
>
> From what I gathered so far Octavia is a fully fledged load balancing
> virtual appliance which (at least in its first iterations) will leverage
> haproxy.
> As also stated earlier in this thread it's a peer of commercial appliances
> from various vendors.
>
> For LBaaS v2 therefore the relationship between it and Octavia should be
> the same as with any other backend. I see Octavia has a blueprint for a
> "network driver" - and the derivable of that should definitely be part of
> the LBaaS project.
> For the rest, it would seem a bit strange to me if the LBaaS project
> incorporated a backend as well. After all, LBaaS v1 did not incorporate
> haproxy!
> Also, as Adam points out, Nova does not incorporate an Hypervisor.
>
>
>>
>> Also, I know I'm opening this same can of worms again, but I am curious
>> about the HP mandate that "everything must be OpenStack" when it comes to
>> Octavia. Since HP's offering would be "[Neutron-]LBaaS-v2", which happens
>> to use Octavia as a backend, does it matter whether Octavia is an official
>> OpenStack project**? If HP can offer Cloud Compute through Nova, and Nova
>> uses some hypervisor like Xen or KVM (neither of which are a part of
>> OpenStack), I am not sure how it is different to offer Cloud Load
>> Balancing via [Neutron-]LBaaS-v2 which could be using a non-Openstack
>> implementation for the backend. I don't see "Octavia needs to be in
>> Openstack" as a blocker so long as the "LBaaS API" is part of OpenStack.
>>
>> **NOTE: I AM DEFINITELY STILL IN FAVOR OF OCTAVIA BEING AN OPENSTACK
>> PROJECT. THIS IS JUST AN EXAMPLE FOR THE SAKE OF THIS PARTICULAR ARGUMENT.
>> PLEASE DON'T THINK THAT I'M AGAINST OCTAVIA BEING OFFICIALLY INCUBATED!**

[openstack-dev] [vmware][nova] Refactor reseries rebased

2014-09-02 Thread Matthew Booth
We've been playing a game recently between oslo.vmware and the refactor
series where a patch from the refactor series goes in, requiring a
rebase of oslo.vmware daily. After a brief discussion with garyk earlier
I decided to head that off by rebasing the refactor series on top of
oslo.vmware, which has been sat in the integrated queue in the gate for
over 5 hours now. i.e. Whether it succeeds or fails, it will now go in
before anything else.

Unfortunately, in doing that I have had to lose +2 +A on 4 refactor
series patches. I made a note of who had approved them:

https://review.openstack.org/#/c/109754/
Brian Elliott +2
John Garbutt +2 +A

https://review.openstack.org/#/c/109755/
Daniel Berrange +2
Andrew Laski +2
John Garbutt +2 +A

https://review.openstack.org/#/c/114817/
Brian Elliott +2
Andrew Laski +2
John Garbutt +2 +A

https://review.openstack.org/#/c/117467/
Brian Elliott +2
Andrew Laski +2
John Garbutt +2 +A

These patches have been lightly touched to resolve merge conflicts with
the oslo.vmware integration, but no more. If people could take another
quick look I'd be very grateful.

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



Re: [openstack-dev] [nova] feature branch for Nova v2.1 API?

2014-09-02 Thread Daniel P. Berrange
On Tue, Sep 02, 2014 at 08:54:31AM -0400, Sean Dague wrote:
> I think we finally got to a largely consensus agreement at the mid-cycle
> on the path forward for Nova v2.1 and microversioning. Summary for
> others: Nova v2.1 will be Nova v2 built on the v3 infrastructure (which
> is much cleaner), with json schema based request validation (to further
> clean up and make consistent all the input validation).
> 
> As we hit FF I'm wondering if we consider getting the API conversion a
> high enough priority to run it in parallel during the RC phase of Nova.
> It feels like a ton of other things become easier once we have all this
> working (like actually being able to sanely evolve our API with all the
> new features people show up with), so having that early Kilo would be great.
> 
> One thing we could do is take the v2.1 work into a feature branch during
> the freeze/rc, which would allow getting Tempest functioning on it, and
> get the patch stream ready for early merge in Kilo.
> 
> This has the disadvantage of some portion of the Nova team being focused
> on this during the RC phase, which is generally considered uncool. (I
> agree with that sentiment).
> 
> So it's a trade off. And honestly, I could go either way.
> 
> I'd like to get the feelings of the Nova drivers team on whether getting
> the API on track is deemed important enough to go this approach during
> the Juno RC phase. All opinions welcome.
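(As a quick illustration of the json schema based request validation mentioned
above: each API method declares a schema and request bodies are checked against
it before the controller runs. The schema below is made up for the example and
is not one of Nova's real ones.)

    import jsonschema

    server_create = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
            'flavorRef': {'type': 'string'},
        },
        'required': ['name', 'flavorRef'],
        'additionalProperties': False,
    }

    def validate_request(body):
        """Reject malformed request bodies before they reach the controller."""
        try:
            jsonschema.validate(body, server_create)
        except jsonschema.ValidationError as exc:
            raise ValueError('Invalid request: %s' % exc.message)

    validate_request({'name': 'test-vm', 'flavorRef': '1'})  # passes
    # validate_request({'name': ''})  # would raise ValueError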

Do we have any historic precedent of using feature branches in this kind
of way in the past, either in Nova or other projects ? If so, I'd be
interested in how successful it was.

I think it is reasonable to assume that our Juno work is easily capable
of keeping the entire core team 100% busy until Kilo opens. So having
people review v2.1 stuff on a feature branch is definitely going to
impact the work we get done for Juno to some extent, though it is
admittedly hard to quantify this impact in any meaningful way.

Is there a specific compelling reason we need to get it up & reviewed
through gerrit during the gap between juno-3 FF and kilo opening for
business ?  When you refer to getting tempest functioning, are you
talking about actually doing the coding work on tempest to exercise
the new v2.1 API, or are you talking about actually setting up tempest
in the gate systems. If the latter, I can understand why having it up
for review would be a win. If the former, it seems that could be done
regardless of the existence of a feature branch.

I don't have a strong opinion since, even if there was a feature branch,
I'd likely ignore it until Kilo opens. If pushed though I'd be just on
the side of making it wait until Kilo opens, just to maximise core team
availability for Juno.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [glance][feature freeze exception] Proposal for using Launcher/ProcessLauncher for launching services

2014-09-02 Thread Kekane, Abhishek
Hi All,

I'd like to ask for a feature freeze exception for using oslo-incubator service 
framework in glance, based on the following blueprint:

https://blueprints.launchpad.net/glance/+spec/use-common-service-framework


The code to implement this feature is under review at present.

1. Sync oslo-incubator service module in glance: 
https://review.openstack.org/#/c/117135/2
2. Use Launcher/ProcessLauncher in glance: 
https://review.openstack.org/#/c/117988/


If we have this feature in glance then we will be able to use features like 
reloading the glance configuration file without a restart, graceful shutdown, etc.
It will also use common code in the same way other OpenStack projects (nova, 
keystone, cinder) do.
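A rough sketch of the intended usage pattern (module path assumed to be
glance.openstack.common.service per the usual incubator sync layout, and the
class names assumed to match the incubator service module of this cycle):

    from glance.openstack.common import service


    class APIService(service.Service):
        """Hypothetical wrapper that runs the glance-api WSGI server."""

        def start(self):
            super(APIService, self).start()
            # start the WSGI server here

        def stop(self):
            # stop accepting connections and drain in-flight requests here
            super(APIService, self).stop()


    def main():
        # ProcessLauncher forks worker processes and installs the signal
        # handlers that make graceful shutdown and config reload possible.
        launcher = service.ProcessLauncher()
        launcher.launch_service(APIService(), workers=4)
        launcher.wait()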


We are ready to address all the concerns of the community if they have any.


Thanks & Regards,

Abhishek Kekane



Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend

2014-09-02 Thread Romain Hardouin
Hi Mirantis guys, 

I have set up two Cassandra backups: 
The first backup procedure was similar to the one you want to achieve. 
The second backup used SAN features (EMC VNX snapshots) so it was very specific 
to the environment. 

Backup an entire cluster (therefore all replicas) is challenging when dealing 
with big data and not really needed. If your replicas are spread across 
several data centers then you could backup just one data center. In that case 
you backup only one replica. 
Depending on your needs you may want to backup twice (I mean "backup the 
backup" using a tape library for example) and then store it in an external 
location for disaster recovery, requirements specification, norms, etc. 

The snapshot command issues a flush before effectively taking the snapshot. So 
the flush command is not necessary. 
https://github.com/apache/cassandra/blob/c7ebc01bbc6aa602b91e105b935d6779245c87d1/src/java/org/apache/cassandra/db/ColumnFamilyStore.java#L2213
 
(snapshotWithoutFlush() is used by the scrub command) 

Just out of curiosity, have you tried the leveled compaction strategy? It seems 
that you use STCS. 
Does your use case imply many updates? What is your read/write ratio? 

Best, 

Romain 

- Original Message -

From: "Denis Makogon"  
To: "OpenStack Development Mailing List (not for usage questions)" 
 
Sent: Friday, August 29, 2014 4:33:59 PM 
Subject: Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend 




On Fri, Aug 29, 2014 at 4:29 PM, Dmitriy Ukhlov < dukh...@mirantis.com > wrote: 



Hello Denis, 
Thank you for very useful knowledge sharing. 

But I have one more question. As far as I understood if we have replication 
factor 3 it means that our backup may contain three copies of the same data. 
Also it may contain some not compacted sstables set. Do we have any ability to 
compact collected backup data before moving it to backup storage? 




Thanks for fast response, Dmitriy. 

With replication factor 3 - yes, this looks like a feature that allows to 
backup only one node instead of 3 of them. In other cases, we would need to 
iterate over each node, as you know. 
Correct, it is possible to have not compacted SSTables. To accomplish 
compaction we might need to use compaction mechanism provided by the nodetool, 
see 
http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsCompact.html
, we just need to take into account that it's possible that sstable was already 
compacted and force compaction wouldn't give valuable benefits. 


Best regards, 
Denis Makogon 






On Fri, Aug 29, 2014 at 2:01 PM, Denis Makogon < dmako...@mirantis.com > wrote: 





Hello, stackers. I'd like to start thread related to backuping procedure for 
MagnetoDB, to be precise, for Cassandra backend. 

In order to accomplish backuping procedure for Cassandra we need to understand 
how does backuping work. 


To perform backuping: 

1. We need to SSH into each node
2. Call ‘nodetool snapshot’ with appropriate parameters
3. Collect backup.
4. Send backup to remote storage.
5. Remove initial snapshot



Let's take a look at how ‘nodetool snapshot’ works. Cassandra backs up data 
by taking a snapshot of all on-disk data files (SSTable files) stored in the 
data directory. Each time an SSTable gets flushed and snapshotted it becomes a 
hard link against initial SSTable pinned to specific timestamp. 


Snapshots are taken per keyspace or per-CF and while the system is online. 
However, nodes must be taken offline in order to restore a snapshot. 


Using a parallel ssh tool (such as pssh), you can flush and then snapshot an 
entire cluster. This provides an eventually consistent backup. Although no one 
node is guaranteed to be consistent with its replica nodes at the time a 
snapshot is taken, a restored snapshot can resume consistency using Cassandra's 
built-in consistency mechanisms. 


After a system-wide snapshot has been taken, you can enable incremental backups 
on each node (disabled by default) to backup data that has changed since the 
last snapshot was taken. Each time an SSTable is flushed, a hard link is copied 
into a /backups subdirectory of the data directory. 


Now lets see how can we deal with snapshot once its taken. Below you can see a 
list of command that needs to be executed to prepare a snapshot: 


Flushing SSTables for consistency:

    nodetool flush

Creating snapshots (for example of all keyspaces):

    nodetool snapshot -t %(backup_name)s 1>/dev/null

where
  * backup_name - is a name of snapshot

Once it’s done we would need to collect all hard links into a common directory
(with keeping initial file hierarchy):

    sudo tar cpzfP /tmp/all_ks.tar.gz \
        $(sudo find %(datadir)s -type d -name %(backup_name)s)

where
  * backup_name - is a name of snapshot,
  * datadir - storage location (/var/lib/cassandra/data, by the default)
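Put together, a per-node sketch of the procedure in Python (snapshot name and
archive path are hypothetical, nodetool is assumed to be on PATH, and the data
directory is the default one):

    import subprocess

    BACKUP_NAME = 'magnetodb-backup-20140902'   # hypothetical snapshot tag
    DATA_DIR = '/var/lib/cassandra/data'        # default storage location
    ARCHIVE = '/tmp/all_ks.tar.gz'              # hypothetical destination

    # 1. Flush memtables so the snapshot covers the latest writes.
    subprocess.check_call(['nodetool', 'flush'])

    # 2. Snapshot all keyspaces under a known tag.
    subprocess.check_call(['nodetool', 'snapshot', '-t', BACKUP_NAME])

    # 3. Collect the hard-linked snapshot directories into one archive,
    #    keeping the initial file hierarchy.
    snapshot_dirs = subprocess.check_output(
        ['find', DATA_DIR, '-type', 'd', '-name', BACKUP_NAME]).split()
    subprocess.check_call(['tar', 'cpzfP', ARCHIVE] + snapshot_dirs)

    # 4. Ship ARCHIVE to remote storage here.

    # 5. Drop the snapshots; clearsnapshot without arguments removes all
    #    snapshots on this node.
    subprocess.check_call(['nodetool', 'clearsnapshot'])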





Note that this operation can be extended: 


Re: [openstack-dev] [oslo] library feature freeze and final releases

2014-09-02 Thread Davanum Srinivas
Doug,

plan is good. Same criteria will mean exception for oslo.log as well

-- dims

On Tue, Sep 2, 2014 at 9:20 AM, Doug Hellmann  wrote:
> Oslo team,
>
> We need to consider how we are going to handle the approaching feature freeze 
> deadline (4 Sept). We should, at this point, be focusing reviews on changes 
> associated with blueprints. We will have time to finish graduation work and 
> handle bugs between the freeze and the release candidate deadline, but 
> obviously it’s OK to review those now, too.
>
> I propose that we apply the feature freeze rules to the incubator and any 
> library that has had a release this cycle and is being used by any other 
> project, but that libraries still being graduated not be frozen. I think that 
> gives us exceptions for oslo.concurrency, oslo.serialization, and 
> oslo.middleware. All of the other libraries should be planning to freeze new 
> feature work this week.
>
> The app RC1 period starts 25 Sept, so we should be prepared to tag our final 
> releases of libraries before then to ensure those final releases don’t 
> introduce issues into the apps when they are released. We will apply 1.0 tags 
> to the same commits that have the last alpha in the release series for each 
> library, and then focus on fixing any bugs that come up during the release 
> candidate period. I propose that we tag our releases on 18 Sept, to give us a 
> few days to fix any issues that arise before the RC period starts.
>
> Please let me know if you spot any issues with this plan.
>
> Doug
>
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com



[openstack-dev] [oslo] library feature freeze and final releases

2014-09-02 Thread Doug Hellmann
Oslo team,

We need to consider how we are going to handle the approaching feature freeze 
deadline (4 Sept). We should, at this point, be focusing reviews on changes 
associated with blueprints. We will have time to finish graduation work and 
handle bugs between the freeze and the release candidate deadline, but 
obviously it’s OK to review those now, too.

I propose that we apply the feature freeze rules to the incubator and any 
library that has had a release this cycle and is being used by any other 
project, but that libraries still being graduated not be frozen. I think that 
gives us exceptions for oslo.concurrency, oslo.serialization, and 
oslo.middleware. All of the other libraries should be planning to freeze new 
feature work this week.

The app RC1 period starts 25 Sept, so we should be prepared to tag our final 
releases of libraries before then to ensure those final releases don’t 
introduce issues into the apps when they are released. We will apply 1.0 tags 
to the same commits that have the last alpha in the release series for each 
library, and then focus on fixing any bugs that come up during the release 
candidate period. I propose that we tag our releases on 18 Sept, to give us a 
few days to fix any issues that arise before the RC period starts.

Please let me know if you spot any issues with this plan.

Doug




Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Salvatore Orlando
Some more comments from me inline.
Salvatore


On 2 September 2014 11:06, Adam Harwell  wrote:

> I also agree with most of what Brandon said, though I am slightly
> concerned by the talk of merging Octavia and [Neutron-]LBaaS-v2 codebases.
>

Beyond all the reasons listed in this thread - merging codebases is always
more difficult than what it seems!
Also it seems to me there's not yet a clear path for LBaaS v2. Mostly
because of the ongoing neutron incubator discussion.
However in my opinion there are 3 paths (and I have no idea whether they
might be applicable to Octavia as a standalone project).
1) Aim at becoming part of neutron via the incubator or any equivalent
mechanisms
2) Evolve in loosely coupled fashion with neutron, but still be part of the
networking program. (This means that LBaaS APIs will be part of Openstack
Network APIs)
3) Evolve independently from neutron, and become part of a new program. I
have no idea however whether there's enough material to have a "load
balancing" program, and what would be the timeline for that.


> [blogan] "I think the best course of action is to get Octavia itself into
> the same codebase as LBaaS (Neutron or spun out)."
> [sballe] "What I am trying to now understand is how we will move Octavia
> into the new LBaaS project?"
>
>
> I didn't think that was ever going to be the plan -- sure, we'd have an
> Octavia driver that is part of the [Neutron-]LBaaS-v2 codebase (which
> Susanne did mention as well), but nothing more than that. The actual
> Octavia code would still be in its own project at the end of all of this,
> right? The driver code could be added to [Neutron-]LbaaS-v2 at any point
> once Octavia is mature enough to be used, just by submitting it as a CR, I
> believe. Doug might be able to comment on that, since he maintains the A10
> driver?
>

From what I gathered so far Octavia is a fully fledged load balancing
virtual appliance which (at least in its first iterations) will leverage
haproxy.
As also stated earlier in this thread it's a peer of commercial appliances
from various vendors.

For LBaaS v2 therefore the relationship between it and Octavia should be
the same as with any other backend. I see Octavia has a blueprint for a
"network driver" - and the derivable of that should definitely be part of
the LBaaS project.
For the rest, it would seem a bit strange to me if the LBaaS project
incorporated a backend as well. After all, LBaaS v1 did not incorporate
haproxy!
Also, as Adam points out, Nova does not incorporate an Hypervisor.


>
> Also, I know I'm opening this same can of worms again, but I am curious
> about the HP mandate that "everything must be OpenStack" when it comes to
> Octavia. Since HP's offering would be "[Neutron-]LBaaS-v2", which happens
> to use Octavia as a backend, does it matter whether Octavia is an official
> OpenStack project**? If HP can offer Cloud Compute through Nova, and Nova
> uses some hypervisor like Xen or KVM (neither of which are a part of
> OpenStack), I am not sure how it is different to offer Cloud Load
> Balancing via [Neutron-]LBaaS-v2 which could be using a non-Openstack
> implementation for the backend. I don't see "Octavia needs to be in
> Openstack" as a blocker so long as the "LBaaS API" is part of OpenStack.
>
> **NOTE: I AM DEFINITELY STILL IN FAVOR OF OCTAVIA BEING AN OPENSTACK
> PROJECT. THIS IS JUST AN EXAMPLE FOR THE SAKE OF THIS PARTICULAR ARGUMENT.
> PLEASE DON'T THINK THAT I'M AGAINST OCTAVIA BEING OFFICIALLY INCUBATED!**
>

>
>  --Adam
>
>
> https://keybase.io/rm_you
>
>
>
> On 9/1/14 10:12 PM, "Brandon Logan"  wrote:
>
> >Hi Susanne and everyone,
> >
> >My opinions are that keeping it in stackforge until it gets mature is
> >the best solution.  I'm pretty sure we can all agree on that.  Whenever
> >it is mature then, and only then, we should try to get it into openstack
> >one way or another.  If Neutron LBaaS v2 is still incubated then it
> >should be relatively easy to get it in that codebase.  If Neutron LBaaS
> >has already spun out, even easier for us.  If we want Octavia to just
> >become an openstack project all its own then that will be the difficult
> >part.
> >
> >I think the best course of action is to get Octavia itself into the same
> >codebase as LBaaS (Neutron or spun out).  They do go together, and the
> >maintainers will almost always be the same for both.  This makes even
> >more sense when LBaaS is spun out into its own project.
> >
> >I really think all of the answers to these questions will fall into
> >place when we actually deliver a product that we are all wanting and
> >talking about delivering with Octavia.  Once we prove that we can all
> >come together as a community and manage a product from inception to
> >maturity, we will then have the respect and trust to do what is best for
> >an Openstack LBaaS product.
> >
> >Thanks,
> >Brandon
> >
> >On Mon, 2014-09-01 at 10:18 -0400, Susanne Balle wrote:
> >> Kyle, Adam,
> >>
> >>
> >>
> >> Based on this thread Kyle

Re: [openstack-dev] [third party][neutron] - OpenDaylight CI and -1 voting

2014-09-02 Thread Kyle Mestery
On Mon, Sep 1, 2014 at 10:47 PM, Kevin Benton  wrote:
> Thank you YAMAMOTO. I didn't think to look at stackalytics.
>
> Kyle, can you list yourself on the wiki? I don't want to do it in case there
> is someone else doing that job full time.
> Also, is there a re-trigger phrase that you can document on the Wiki or in
> the message body the CI posts to the reviews?
>
I'll add myself (and Dave Tucker, copied here) to the wiki for now.

Please note the OpenDaylight CI is undergoing some major changes at
the moment (it's being moved to the RAX cloud).

Thanks,
Kyle

> Thanks,
> Kevin Benton
>
>
> On Mon, Sep 1, 2014 at 8:08 PM, YAMAMOTO Takashi 
> wrote:
>>
>> > I have had multiple occasions where the OpenDaylight CI will vote a -1
>> > on a
>> > patch for something completely unrelated (e.g. [1]). This would be fine
>> > except for two issues. First, there doesn't appear to be any way to
>> > trigger
>> > a recheck. Second, there is no maintainer listed on the Neutron third
>> > party
>> > drivers page.[2] Because of this, there is effectively no way to get the
>> > -1
>> > removed without uploading a new patch and losing current code review
>> > votes.
>>
>> http://stackalytics.com/report/driverlog says its maintainer is
>> irc:mestery.  last time it happened to me, i asked him to trigger
>> recheck and it worked.
>>
>> YAMAMOTO Takashi
>>
>> >
>> > Can we remove the voting rights for the ODL CI until there is a
>> > documented
>> > way to trigger rechecks and a public contact on the drivers page for
>> > when
>> > things go wrong? Getting reviews is already hard enough, let alone when
>> > there is a -1 in the 'verified' column.
>> >
>> > 1. https://review.openstack.org/#/c/116187/
>> > 2.
>> >
>> > https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin
>> >
>> > --
>> > Kevin Benton
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Kevin Benton
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] feature branch for Nova v2.1 API?

2014-09-02 Thread Sean Dague
I think we finally reached a broad consensus agreement at the mid-cycle
on the path forward for Nova v2.1 and microversioning. Summary for
others: Nova v2.1 will be Nova v2 built on the v3 infrastructure (which
is much cleaner), with JSON-schema-based request validation (to further
clean up the input validation and make it consistent).
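
For readers less familiar with the approach, here is a minimal sketch of
JSON-schema-based request validation using the generic jsonschema library;
the schema and helper below are hypothetical illustrations, not Nova's
actual validation code.

# Minimal illustration of JSON-schema request validation; assumes the
# generic "jsonschema" library. The schema and helper are hypothetical,
# not Nova's real validation decorators.
import jsonschema

create_server_schema = {
    "type": "object",
    "properties": {
        "server": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "minLength": 1, "maxLength": 255},
                "flavorRef": {"type": "string"},
            },
            "required": ["name", "flavorRef"],
            "additionalProperties": False,
        },
    },
    "required": ["server"],
}

def validate_request(body):
    # Raises jsonschema.ValidationError on malformed input, so every API
    # method rejects bad requests consistently instead of via ad hoc checks.
    jsonschema.validate(body, create_server_schema)

validate_request({"server": {"name": "test", "flavorRef": "1"}})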

As we hit feature freeze, I'm wondering whether we consider getting the API
conversion a high enough priority to run it in parallel with the RC phase of
Nova. It feels like a ton of other things become easier once we have all this
working (like actually being able to sanely evolve our API with all the
new features people show up with), so having that early in Kilo would be great.

One thing we could do is take the v2.1 work into a feature branch during
the freeze/rc, which would allow getting Tempest functioning on it, and
get the patch stream ready for early merge in Kilo.

This has the disadvantage of some portion of the Nova team being focused
on this during the RC phase, which is generally considered uncool. (I
agree with that sentiment).

So it's a trade off. And honestly, I could go either way.

I'd like to get the feelings of the Nova drivers team on whether getting
the API on track is deemed important enough to take this approach during
the Juno RC phase. All opinions welcome.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Proposal for improving Metadata Search API

2014-09-02 Thread Paula Ta-Shma
Hi,

Great to see some renewed interest in metadata search for Swift. We are
working on this, and would be glad to see more engagement from the
community.

I saw your comments on the wiki discussion page and they look good to me.
I just added some additional comments about Data Types and System Metadata 
to the same page (I posted these to the mailing list a while back). 

Best regards
Paula

Paula Ta-Shma, Ph.D.
Storage Research
IBM Research - Haifa
Phone: +972.3.7689402
Email: pa...@il.ibm.com




From:   
To: , 
Cc: , Paula Ta-Shma/Haifa/IBM@IBMIL, 
, , 
, 
Date:   02/09/2014 12:54 PM
Subject:[Swift] Proposal for improving Metadata Search API



Hi,

At NTT Data, we work on system development projects with Swift.
Recently we have taken a strong interest in the metadata search feature of
Swift, and we are now planning to build a search function into Swift.

I checked some materials on the wiki page
http://wiki.openstack.org/wiki/MetadataSearch while putting together a detailed plan.
I found some discussions about the metadata search, but I could not find any
information about recent progress.
Has there been any progress on the metadata search?

I also checked the API document
http://wiki.openstack.org/wiki/MetadataSearchAPI so that we can base the
specification of our metadata search on the community's standard, and I think
this specification describes a full-featured API.
However, the detailed formats of the HTTP requests and responses, such as
headers and response codes, are not clearly defined in this specification.

These HTTP message formats are necessary for client implementations, and
without a strict definition, clients may get confused.

I added my suggestions about HTTP headers and response codes to the Wiki 
discussion page https://wiki.openstack.org/wiki/Talk:MetadataSearchAPI
Does anyone have comments?

P.S. I also found some typos in chapter 10 and fixed them.

Best regards,

Yuji Hagiwara
NTT DATA Corporation


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-09-02 Thread Sean Dague
On 09/01/2014 01:38 AM, Ian Wienand wrote:
> On 08/29/2014 10:42 PM, Sean Dague wrote:
>> I'm actually kind of convinced now that none of these approaches are
>> what we need, and that we should instead have a .bashateignore file in
>> the root dir for the project instead, which would be regex that would
>> match files or directories to throw out of the walk.
> 
> Dean's idea of reading .gitignore might be good.
> 
> I had a quick poke at git dir.c:match_pathspec_item() and sort of came
> up with something similar [2] which roughly follows that and then only
> matches on files that have a shell-script mimetype; which I feel is
> probably sane for a default implementation.
> 
> IMO devstack should just generate its own file-list to pass in for
> checking, and bashate shouldn't have special guessing code for it
> 
> It all feels a bit like a solution looking for a problem.  Making
> bashate only work on a passed-in list of files and leaving generating
> that list up to the test infrastructure would probably be the best KISS
> choice...
> 
> -i
> 
> [1] https://github.com/git/git/blob/master/dir.c#L216
> [2] https://review.openstack.org/#/c/117425/

Sure, I think part of this is that we built the tool inside of devstack, and
now in the base case it's really awkward to use with devstack. I'd say
that's part of what I feel is the 0.x lifecycle for bashate: figuring out
how to live as a separate tool without adding a ton of work to our existing
projects.

One of the things that could make it better is to add file extensions to
all shell files in devstack. This would also solve the issue of gerrit
not syntax-highlighting most of the files. If people are up for that,
I'll propose a rename patch to get us there. Then dumping the special
bashate discovery bits is simple.
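
As a strawman for the passed-in-file-list idea, the list could be generated
outside bashate with something like the sketch below (purely illustrative;
the helper is hypothetical and not an existing devstack or bashate feature).

# Rough sketch of building a shell-file list to feed bashate: take
# git-tracked files and keep those that look like shell scripts, either
# by extension or by shebang. Hypothetical helper, not real tooling.
import subprocess

def looks_like_shell(path):
    if path.endswith(".sh"):
        return True
    try:
        with open(path, "rb") as f:
            first_line = f.readline()
    except IOError:
        return False
    # Accept shebangs that mention a shell interpreter (sh, bash, ...).
    return first_line.startswith(b"#!") and b"sh" in first_line

def shell_files():
    tracked = subprocess.check_output(["git", "ls-files"]).decode().splitlines()
    return [p for p in tracked if looks_like_shell(p)]

if __name__ == "__main__":
    # The resulting list would then be passed to bashate on the command line.
    print("\n".join(shell_files()))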

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-09-02 Thread Day, Phil
>Adding in such case more bureaucracy (specs) is not the best way to resolve 
>team throughput issues...

I’d argue that if fundamental design disagreements can be surfaced and debated
at the design stage, rather than first emerging on patch set XXX of an
implementation, and can then be used to prioritize what needs to be implemented,
then specs do have a useful role to play.

Phil


From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: 28 August 2014 23:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

Joe,


This is a resource problem, the nova team simply does not have enough people 
doing enough reviews to make this possible.

Adding in such case more bureaucracy (specs) is not the best way to resolve 
team throughput issues...

my 2cents


Best regards,
Boris Pavlovic

On Fri, Aug 29, 2014 at 2:01 AM, Joe Gordon 
mailto:joe.gord...@gmail.com>> wrote:


On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh 
mailto:alan.kavan...@ericsson.com>> wrote:
I share Donald's points here. I believe what would help is to clearly describe
in the wiki the process and workflow for BP approval, and to build into this
process how to deal with discrepancies/disagreements, along with timeframes
for each stage and a process of appeal, etc.
The current process would benefit from some fine-tuning, and from safeguards
and time limits/deadlines, so folks can expect responses within a reasonable
time and are not left waiting in the cold.


This is a resource problem, the nova team simply does not have enough people 
doing enough reviews to make this possible.

My 2cents!
/Alan

-Original Message-
From: Dugger, Donald D 
[mailto:donald.d.dug...@intel.com]
Sent: August-28-14 10:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

I would contend that that right there is an indication that there's a problem
with the process.  You submit a BP and then you have no idea of what is
happening and no way of addressing any issues.  If the priority is wrong I can
explain why I think the priority should be higher; getting stonewalled leaves
me with no idea what's wrong and no way to address any problems.

I think, in general, almost everyone is more than willing to adjust proposals 
based upon feedback.  Tell me what you think is wrong and I'll either explain 
why the proposal is correct or I'll change it to address the concerns.

Trying to deal with silence is really hard and really frustrating.  Especially
given that we're not supposed to spam the mailing list, it's really hard to know
what to do.  I don't know the solution but we need to do something.  More core
team members would help; maybe something like an automatic timeout where
BPs/patches with no negative scores and no activity for a week get flagged for
special handling.

I feel we need to change the process somehow.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Thursday, August 28, 2014 1:44 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
> I'll try and not whine about my pet project but I do think there is a
> problem here.  For the Gantt project to split out the scheduler there
> is a crucial BP that needs to be implemented (
> https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP
> has been rejected and we'll have to try again for Kilo.  My question
> is did we do something wrong or is the process broken?
>
> Note that we originally proposed the BP on 4/23/14, went through 10
> iterations to the final version on 7/25/14 and the final version got
> three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to
> specific people, we didn't get the second +2, hence the rejection.
>
> I understand that reviews are a burden and very hard but it seems
> wrong that a BP with multiple positive reviews and no negative reviews
> is dropped because of what looks like indifference.

I would posit that this is not actually indifference. The reason that there may
not have been >1 +2 from a core team member may very well have been that the
core team members did not feel that the blueprint's priority was high enough to
put before other work, or that the core team members did not have the time to
comment on the spec (due to them not feeling the blueprint had the priority to
justify the time to do a full review).

Note that I'm not a core drivers team member.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.o

Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-09-02 Thread Duncan Thomas
On 11 August 2014 19:26, Jay Pipes  wrote:

> The above does not really make sense for MySQL Galera/PXC clusters *if only
> Galera nodes are used in the cluster*. Since Galera is synchronously
> replicated, there's no real point in segregating writers from readers, IMO.
> Better to just spread the write AND read load equally among all Galera
> cluster nodes.

Unfortunately it is possible to get bitten by the difference between
'synchronous' and 'virtually synchronous' in practice. I'll try to get
somebody more knowledgeable about the details to comment further.
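
For context, the read/write separation being discussed is roughly the shape
sketched below (purely illustrative, with hypothetical class names and URLs;
this is not oslo.db's actual API). With a synchronously replicated Galera
cluster, the alternative Jay describes is simply to spread all traffic across
the nodes and skip the split.

# Purely illustrative sketch of DB read/write separation: writes go to a
# primary engine, reads round-robin across replicas. Hypothetical class
# and URLs; not oslo.db's actual interface.
import itertools
from sqlalchemy import create_engine, text

class SplitRouter(object):
    def __init__(self, writer_url, reader_urls):
        self.writer = create_engine(writer_url)
        self.readers = itertools.cycle(
            [create_engine(url) for url in reader_urls])

    def engine_for(self, is_write):
        # Route writes to the primary, reads to the next replica in turn.
        return self.writer if is_write else next(self.readers)

router = SplitRouter(
    "mysql+pymysql://nova:secret@db1/nova",            # hypothetical URLs
    ["mysql+pymysql://nova:secret@db2/nova",
     "mysql+pymysql://nova:secret@db3/nova"])
with router.engine_for(is_write=False).connect() as conn:
    conn.execute(text("SELECT 1"))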

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

2014-09-02 Thread Duncan Thomas
On 2 September 2014 04:56, Emma Lin  wrote:
> Hi Gurus,
> I saw the wiki page for Cinder Brick proposal for Havana, but I didn’t see
> any follow up on that idea. Is there any real progress on that idea?
>
> As this proposal is to address the local storage issue, I’d like to know the
> status, and to see if there is any task required for hypervisor provider.

Hi Emma

Brick didn't really evolve to cover the local storage case, so we've
not made much progress in that direction.

Local storage comes up fairly regularly, but solving all of the points
(availability, API behaviour completeness, performance, scheduling)
from a pure cinder PoV is a hard problem - i.e. making local storage
look like a normal cinder volume. Specs welcome, email me if you want
more details on the problems - there are certainly many people
interested in seeing the problem solved.

There is code in brick that could be used in nova as is to reduce
duplication and give a single place to fix bugs - nobody has yet taken
this work on as far as I know.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][nova] PCI pass-through feature/topic proposals for Kilo Release

2014-09-02 Thread Irena Berezovsky
Following the last PCI pass-through meeting, we want to start thinking about
features/add-ons that need to be addressed in the Kilo release.

I created an etherpad (reused Doug's template) for topics related to PCI 
pass-through, mostly focused on SR-IOV networking:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

Please see some instructions at the top of the page.
Based on interest in the topics, we may need to work out the overall details and
propose a summit session to present them and get community feedback.

BR,
Irena
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

2014-09-02 Thread Preston L. Bannister
Hi Emma,

I do not claim to be an OpenStack guru, but might know something about
backing up a vCloud.

What proposal did you have in mind? A link would be helpful.

Backing up a step (ha), the existing cinder-backup API is very close to
useless. Backup needs to apply to an active instance. The Nova backup API
is closer, but needs (much) work.

Local storage is a big issue, as at scale we need to extract changed-block
lists. (VMware has an advantage here, for now.)




On Mon, Sep 1, 2014 at 8:56 PM, Emma Lin  wrote:

>  Hi Gurus,
> I saw the wiki page for Cinder Brick proposal for Havana, but I didn’t see
> any follow up on that idea. Is there any real progress on that idea?
>
>  As this proposal is to address the local storage issue, I’d like to know
> the status, and to see if there is any task required for hypervisor
> provider.
>
>  Any comments are appreciated
> Emma
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Proposal for improving Metadata Search API

2014-09-02 Thread hagiwarayuj
Hi,

At NTT Data, we work on system development projects with Swift.
Recently we have taken a strong interest in the metadata search feature of
Swift, and we are now planning to build a search function into Swift.

I checked some materials on the wiki page
http://wiki.openstack.org/wiki/MetadataSearch while putting together a detailed plan.
I found some discussions about the metadata search, but I could not find any
information about recent progress.
Has there been any progress on the metadata search?

I also checked the API document
http://wiki.openstack.org/wiki/MetadataSearchAPI so that we can base the
specification of our metadata search on the community's standard, and I think
this specification describes a full-featured API.
However, the detailed formats of the HTTP requests and responses, such as
headers and response codes, are not clearly defined in this specification.

These HTTP message formats are necessary for client implementations, and
without a strict definition, clients may get confused.

I added my suggestions about HTTP headers and response codes to the Wiki 
discussion page https://wiki.openstack.org/wiki/Talk:MetadataSearchAPI
Does anyone have comments?

P.S. I also found some typos in chapter 10 and fixed them.

Best regards,

Yuji Hagiwara
NTT DATA Corporation

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Adam Harwell
I also agree with most of what Brandon said, though I am slightly
concerned by the talk of merging Octavia and [Neutron-]LBaaS-v2 codebases.

[blogan] "I think the best course of action is to get Octavia itself into
the same codebase as LBaaS (Neutron or spun out)."
[sballe] "What I am trying to now understand is how we will move Octavia
into the new LBaaS project?"


I didn't think that was ever going to be the plan -- sure, we'd have an
Octavia driver that is part of the [Neutron-]LBaaS-v2 codebase (which
Susanne did mention as well), but nothing more than that. The actual
Octavia code would still be in its own project at the end of all of this,
right? The driver code could be added to [Neutron-]LbaaS-v2 at any point
once Octavia is mature enough to be used, just by submitting it as a CR, I
believe. Doug might be able to comment on that, since he maintains the A10
driver?

Also, I know I'm opening this same can of worms again, but I am curious
about the HP mandate that "everything must be OpenStack" when it comes to
Octavia. Since HP's offering would be "[Neutron-]LBaaS-v2", which happens
to use Octavia as a backend, does it matter whether Octavia is an official
OpenStack project**? If HP can offer Cloud Compute through Nova, and Nova
uses some hypervisor like Xen or KVM (neither of which are a part of
OpenStack), I am not sure how it is different to offer Cloud Load
Balancing via [Neutron-]LBaaS-v2 which could be using a non-Openstack
implementation for the backend. I don't see "Octavia needs to be in
Openstack" as a blocker so long as the "LBaaS API" is part of OpenStack.

**NOTE: I AM DEFINITELY STILL IN FAVOR OF OCTAVIA BEING AN OPENSTACK
PROJECT. THIS IS JUST AN EXAMPLE FOR THE SAKE OF THIS PARTICULAR ARGUMENT.
PLEASE DON'T THINK THAT I'M AGAINST OCTAVIA BEING OFFICIALLY INCUBATED!**


 --Adam


https://keybase.io/rm_you



On 9/1/14 10:12 PM, "Brandon Logan"  wrote:

>Hi Susanne and everyone,
>
>My opinions are that keeping it in stackforge until it gets mature is
>the best solution.  I'm pretty sure we can all agree on that.  Whenever
>it is mature then, and only then, we should try to get it into openstack
>one way or another.  If Neutron LBaaS v2 is still incubated then it
>should be relatively easy to get it in that codebase.  If Neutron LBaaS
>has already spun out, even easier for us.  If we want Octavia to just
>become an openstack project all its own then that will be the difficult
>part.
>
>I think the best course of action is to get Octavia itself into the same
>codebase as LBaaS (Neutron or spun out).  They do go together, and the
>maintainers will almost always be the same for both.  This makes even
>more sense when LBaaS is spun out into its own project.
>
>I really think all of the answers to these questions will fall into
>place when we actually deliver a product that we are all wanting and
>talking about delivering with Octavia.  Once we prove that we can all
>come together as a community and manage a product from inception to
>maturity, we will then have the respect and trust to do what is best for
>an Openstack LBaaS product.
>
>Thanks,
>Brandon
>
>On Mon, 2014-09-01 at 10:18 -0400, Susanne Balle wrote:
>> Kyle, Adam,
>> 
>>  
>> 
>> Based on this thread Kyle is suggesting the follow moving forward
>> plan: 
>> 
>>  
>> 
>> 1) We incubate Neutron LBaaS V2 in the “Neutron” incubator “and freeze
>> LBaaS V1.0”
>> 2) “Eventually” It graduates into a project under the networking
>> program.
>> 3) “At that point” We deprecate Neutron LBaaS v1.
>> 
>>  
>> 
>> The words in “xx” are words I added to make sure I/We understand the
>> whole picture.
>> 
>>  
>> 
>> And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 /
>> Radware / A10 / etc appliances which is a definition I agree with BTW.
>> 
>>  
>> 
>> What I am trying to now understand is how we will move Octavia into
>> the new LBaaS project?
>> 
>>  
>> 
>> If we do it later rather than develop Octavia in tree under the new
>> incubated LBaaS project when do we plan to bring it in-tree from
>> Stackforge? Kilo? Later? When LBaaS is a separate project under the
>> Networking program?
>
>>  
>> 
>> What are the criteria to bring a driver into the LBaaS project and
>> what do we need to do to replace the existing reference driver? Maybe
>> adding a software driver to LBaaS source tree is less of a problem
>> than converting a whole project to an OpenStack project.
>
>>  
>> 
>> Again I am open to both directions I just want to make sure we
>> understand why we are choosing to do one or the other and that our
>>  decision is based on data and not emotions.
>> 
>>  
>> 
>> I am assuming that keeping Octavia in Stackforge will increase the
>> velocity of the project and allow us more freedom which is goodness.
>> We just need to have a plan to make it part of the Openstack LBaaS
>> project.
>> 
>>  
>> 
>> Regards Susanne
>> 
>> 
>> 
>> 
>> On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell
>>  wrote:
>> Only really have c

Re: [openstack-dev] [all] Design Summit reloaded

2014-09-02 Thread Thierry Carrez
Hayes, Graham wrote:
> On Fri, 2014-08-29 at 17:56 +0200, Thierry Carrez wrote:
>> Hayes, Graham wrote:
>>> Would the programs for those projects not get design summit time? I
>>> thought the Programs got Design summit time, not projects... If not, can
>>> the Programs get design summit time? 
>>
>> Sure, that's what Anne probably meant. Time for the program behind every
>> incubated project.
> 
> Sure,
> I was referring to the 2 main days (days 2 and 3)
> 
> I thought that was a benefit of having a Program? The PTL chooses the
> sessions, and the PTL is over a program, so I assumed that programs
> would get both Pods and some design summit time (not 1/2 a day on the
> Tuesday)
> 
> I know we (designate) got some great work done last year, but most of it
> was in isolation, as we had one 40 min session, and one 1/2 day session,
> but the rest of the sessions were unofficial ones, which meant that
> people in the community who were not as engaged missed out on the
> discussions.
> 
> Would there be space for programs with incubated projects at the
> 'Contributors meetups' ?

We have limited space in Paris, so there won't be pods for everyone like
in Atlanta. I'm still waiting for venue maps to see how we can make the
best use of the space we have.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Juno Feature freeze is coming - Last day for feature reviews !

2014-09-02 Thread Thierry Carrez
Hi everyone,

As you probably know, the Juno-3 milestone should be published Thursday
this week, and with it comes the Juno feature freeze. The general
schedule is as follows:

Tuesday:
Defer/-2 blueprints that will obviously not get the required approvals
in the next 20 hours. Review and approve as much as you can.

Wednesday:
Everything that is not approved and in-flight in the gate should be
deferred/-2.

Thursday:
Wait for the last approved stuff to make it through the gate and publish
the milestone.

Friday:
Start considering feature freeze exceptions for critical Juno features.

That really means *today is the last day for feature reviews*.
So please plan that extra review effort today, rather than tomorrow!

Thanks everyone for helping us do a successful Juno release.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [Fuel] Beta milestone of Fuel 5.1 now available!

2014-09-02 Thread Sergey Vasilenko
On Wed, Aug 27, 2014 at 2:11 AM, David Easter  wrote:

> What’s New in Fuel 5.1?
>
>  The primary new features of Fuel 5.1 are:
>

* Compute, cinder, and ceph nodes have no external (public) network interface
and don't get public IP addresses if it's not necessary.


/sv
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Not able to create volume backup using cinder

2014-09-02 Thread Denis Makogon
Hello, Vinod.


Sorry, but the dev mailing list is not for usage questions. That said, it seems
that you haven't launched the cinder-backup service (see "Service cinder-backup
could not be found."). Take a look at
https://github.com/openstack/cinder/blob/master/bin/cinder-backup.

Best regards,
Denis Makogon


On Tue, Sep 2, 2014 at 10:49 AM, Vinod Borol  wrote:

> I am unable to create a backup of the volume using the cinder command, even
> though all the required conditions are satisfied. I am getting an HTTP 500
> error. I am not sure what the problem could be here. I'd really appreciate it
> if someone could give me some pointers on where to look.
>
> I am using Openstack Havana
>
> C:>cinder list
> +--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
> | ID                                   | Status    | Display Name             | Size | Volume Type | Bootable | Attached to |
> +--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
> | db9cd065-1906-4bb7-b00c-f3f04245f514 | available | ShitalVmNew1407152394855 | 50   | None        | true     |             |
> +--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
>
> C:>cinder backup-create db9cd065-1906-4bb7-b00c-f3f04245f514 ERROR:
> Service cinder-backup could not be found. (HTTP 500) (Request-ID:
> req-734e6a87-33f6-4654-8a14-5e3242318e87)
>
> Below is the exception in cinder logs
>
> Creating new backup {u'backup': {u'volume_id':
> u'f57c9a6f-cba2-4c4f-aca1-c4633e6bbbe4', u'container': None,
> u'description': None, u'name': u'vin-vol-cmd-bck'}} from (pid=4193) create
> /opt/stack/cinder/cinder/api/contrib/backups.py:218 2014-08-29 11:11:52.805
> AUDIT cinder.api.contrib.backups [req-12ec811b-a5b8-4043-8656-09a832e407d7
> 433bc3b202ae4b438b2530cb487b97a5 7da65e0e61124e54a8bd0d91f22a1ac0] Creating
> backup of volume f57c9a6f-cba2-4c4f-aca1-c4633e6bbbe4 in container None
> 2014-08-29 11:11:52.833 INFO cinder.api.openstack.wsgi
> [req-12ec811b-a5b8-4043-8656-09a832e407d7 433bc3b202ae4b438b2530cb487b97a5
> 7da65e0e61124e54a8bd0d91f22a1ac0] HTTP exception thrown: Service
> cinder-backup could not be found. 2014-08-29 11:11:52.834 INFO
> cinder.api.openstack.wsgi [req-12ec811b-a5b8-4043-8656-09a832e407d7
> 433bc3b202ae4b438b2530cb487b97a5 7da65e0e61124e54a8bd0d91f22a1ac0]
> http://10.193.72.195:8776/v1/7da65e0e61124e54a8bd0d91f22a1ac0/backups (
> http://10.193.72.195:8776/v1/7da65e0e...) returned with HTTP 500
> 2014-08-29 11:11:52.835 INFO eventlet.wsgi.server
> [req-12ec811b-a5b8-4043-8656-09a832e407d7 433bc3b202ae4b438b2530cb487b97a5
> 7da65e0e61124e54a8bd0d91f22a1ac0] 10.65.53.105 - - [29/Aug/2014 11:11:52]
> "POST /v1/7da65e0e61124e54a8bd0d91f22a1ac0/backups HTTP/1.1" 500 359
> 0.043090
>
>
>
> Regards
> VB
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Not able to create volume backup using cinder

2014-09-02 Thread Vinod Borol
I am unable to create a backup of the volume using the cinder command, even
though all the required conditions are satisfied. I am getting an HTTP 500
error. I am not sure what the problem could be here. I'd really appreciate it
if someone could give me some pointers on where to look.

I am using Openstack Havana

C:>cinder list
+--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name             | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
| db9cd065-1906-4bb7-b00c-f3f04245f514 | available | ShitalVmNew1407152394855 | 50   | None        | true     |             |
+--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+

C:>cinder backup-create db9cd065-1906-4bb7-b00c-f3f04245f514 ERROR: Service
cinder-backup could not be found. (HTTP 500) (Request-ID:
req-734e6a87-33f6-4654-8a14-5e3242318e87)

Below is the exception in cinder logs

Creating new backup {u'backup': {u'volume_id':
u'f57c9a6f-cba2-4c4f-aca1-c4633e6bbbe4', u'container': None,
u'description': None, u'name': u'vin-vol-cmd-bck'}} from (pid=4193) create
/opt/stack/cinder/cinder/api/contrib/backups.py:218 2014-08-29 11:11:52.805
AUDIT cinder.api.contrib.backups [req-12ec811b-a5b8-4043-8656-09a832e407d7
433bc3b202ae4b438b2530cb487b97a5 7da65e0e61124e54a8bd0d91f22a1ac0] Creating
backup of volume f57c9a6f-cba2-4c4f-aca1-c4633e6bbbe4 in container None
2014-08-29 11:11:52.833 INFO cinder.api.openstack.wsgi
[req-12ec811b-a5b8-4043-8656-09a832e407d7 433bc3b202ae4b438b2530cb487b97a5
7da65e0e61124e54a8bd0d91f22a1ac0] HTTP exception thrown: Service
cinder-backup could not be found. 2014-08-29 11:11:52.834 INFO
cinder.api.openstack.wsgi [req-12ec811b-a5b8-4043-8656-09a832e407d7
433bc3b202ae4b438b2530cb487b97a5 7da65e0e61124e54a8bd0d91f22a1ac0]
http://10.193.72.195:8776/v1/7da65e0e61124e54a8bd0d91f22a1ac0/backups (
http://10.193.72.195:8776/v1/7da65e0e...) returned with HTTP 500 2014-08-29
11:11:52.835 INFO eventlet.wsgi.server
[req-12ec811b-a5b8-4043-8656-09a832e407d7 433bc3b202ae4b438b2530cb487b97a5
7da65e0e61124e54a8bd0d91f22a1ac0] 10.65.53.105 - - [29/Aug/2014 11:11:52]
"POST /v1/7da65e0e61124e54a8bd0d91f22a1ac0/backups HTTP/1.1" 500 359
0.043090



Regards
VB
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] NFV sub group - How to improve our message and obtain our goals better

2014-09-02 Thread MENDELSOHN, ITAI (ITAI)
Following last week's IRC meeting, please find below a suggestion for improving
the description and the messaging about NFV to the wider community.

I think this group is a great initiative and very important for explaining NFV to
the wider OpenStack developer community.
While NFV is becoming a better-known term in the community (a new and good
thing!), I feel there is still a gap in explaining why it needs special care. Or,
more precisely, do we want to claim it needs special care?
I would claim we shouldn’t think of it as “special” but as a great use case
for OpenStack, with some fine-tuning needs that are mostly relevant to other cases too.

So in our wiki we have a few great starts.

  *   We have the high-level description of the ETSI NFV use cases
  *   We have the workload types definition
  *   We have the great section by Calum about two specific apps

What might we consider missing?
Answers to questions like:

  *   Why is it special?
  *   Why do we need it now?
  *   Who will use it, and why can’t it be achieved otherwise?

My feeling is that in order to explain the reasoning we need to emphasise the
workload types piece.
We want to avoid using special, NFV-specific terms, because they are
frightening and not relevant to the majority of the community.

So I would suggest enriching the workload type section. Maybe add more examples
to explain availability-scheme needs and other common cases, like storing large
files in bursts.
In addition, I would add a section for “transition-time” needs that exist only
because of the current state of the apps. A good example is the “VLAN” trunking
BP: it feels more like a transition need (apps today send traffic with VLANs)
than a real long-term need, compared to state or performance needs, which have
a better justification for the long term. My app-owner friends, please don’t
“jump” on the VLAN example; I assume we can argue about it… I hope the point I
am making is clear, though.

Then, for each section (type), answer the questions above and link the BPs to
each of those “types”.

What will we achieve?

  *   No special NFV terms as part of the discussion with the community
  *   Clear reasoning for the needs
  *   Not positioning NFV as not “cloudy”

If this makes sense, we can start working on it.

Itai
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Avishay Balderman
+1

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Tuesday, September 02, 2014 8:13 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas][octavia]

Hi Susanne and everyone,

My opinions are that keeping it in stackforge until it gets mature is the best 
solution.  I'm pretty sure we can all agree on that.  Whenever it is mature 
then, and only then, we should try to get it into openstack one way or another. 
 If Neutron LBaaS v2 is still incubated then it should be relatively easy to 
get it in that codebase.  If Neutron LBaaS has already spun out, even easier 
for us.  If we want Octavia to just become an openstack project all its own 
then that will be the difficult part.

I think the best course of action is to get Octavia itself into the same 
codebase as LBaaS (Neutron or spun out).  They do go together, and the 
maintainers will almost always be the same for both.  This makes even more 
sense when LBaaS is spun out into its own project.

I really think all of the answers to these questions will fall into place when 
we actually deliver a product that we are all wanting and talking about 
delivering with Octavia.  Once we prove that we can all come together as a 
community and manage a product from inception to maturity, we will then have 
the respect and trust to do what is best for an Openstack LBaaS product.

Thanks,
Brandon

On Mon, 2014-09-01 at 10:18 -0400, Susanne Balle wrote:
> Kyle, Adam,
> 
>  
> 
> Based on this thread Kyle is suggesting the follow moving forward
> plan: 
> 
>  
> 
> 1) We incubate Neutron LBaaS V2 in the “Neutron” incubator “and freeze 
> LBaaS V1.0”
> 2) “Eventually” It graduates into a project under the networking 
> program.
> 3) “At that point” We deprecate Neutron LBaaS v1.
> 
>  
> 
> The words in “xx” are words I added to make sure I/We understand the 
> whole picture.
> 
>  
> 
> And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 / 
> Radware / A10 / etc appliances which is a definition I agree with BTW.
> 
>  
> 
> What I am trying to now understand is how we will move Octavia into 
> the new LBaaS project?
> 
>  
> 
> If we do it later rather than develop Octavia in tree under the new 
> incubated LBaaS project when do we plan to bring it in-tree from 
> Stackforge? Kilo? Later? When LBaaS is a separate project under the 
> Networking program?

>  
> 
> What are the criteria to bring a driver into the LBaaS project and 
> what do we need to do to replace the existing reference driver? Maybe 
> adding a software driver to LBaaS source tree is less of a problem 
> than converting a whole project to an OpenStack project.

>  
> 
> Again I am open to both directions I just want to make sure we 
> understand why we are choosing to do one or the other and that our  
> decision is based on data and not emotions.
> 
>  
> 
> I am assuming that keeping Octavia in Stackforge will increase the 
> velocity of the project and allow us more freedom which is goodness.
> We just need to have a plan to make it part of the Openstack LBaaS 
> project.
> 
>  
> 
> Regards Susanne
> 
> 
> 
> 
> On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell 
>  wrote:
> Only really have comments on two of your related points:
> 
> 
> [Susanne] To me Octavia is a driver so it is very hard for me
> to think of it as a standalone project. It needs the new
> Neutron LBaaS v2 to function which is why I think of them
> together. This of course can change since we can add whatever
> layers we want to Octavia.
> 
> 
> [Adam] I guess I've always shared Stephen's
> viewpoint — Octavia != LBaaS-v2. Octavia is a peer to F5 /
> Radware / A10 / etcappliances, not to an Openstack API layer
> like Neutron-LBaaS. It's a little tricky to clearly define
> this difference in conversation, and I have noticed that quite
> a few people are having the same issue differentiating. In a
> small group, having quite a few people not on the same page is
> a bit scary, so maybe we need to really sit down and map this
> out so everyone is together one way or the other.
> 
> 
> [Susanne] Ok now I am confused… But I agree with you that we
> need to focus on our use cases. I remember us discussing
> Octavia being the reference implementation for OpenStack LBaaS
> (whatever that is). Has that changed while I was on vacation?
> 
> 
> [Adam] I believe that having the Octavia "driver" (not the
> Octavia codebase itself, technically) become the reference
> implementation for Neutron-LBaaS is still the plan in my eyes.
> The Octavia Driver in Neutron-LBaaS is a separate bit of code
> from the actual Octavia project, similar to the way the A10
> driver is a separate bit of code from the A10 app