Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/09/2015 07:29 PM, Clint Byrum wrote:


Even if you figured out how to make the in-memory scheduler crazy fast,
there's still value in concurrency for other reasons. No matter how
fast you make the scheduler, you'll be a slave to the response time of
a single scheduling request. If you take 1ms to schedule each node
(including just reading the request and pushing out your scheduling
result!) you will never achieve greater than 1000/s. 1ms is way lower
than it's going to take just to shove a tiny message into RabbitMQ or
even 0mq. So I'm pretty sure this is o-k for small clouds, but would be
a disaster for a large, busy cloud.

If, however, you can have 20 schedulers that all take 10ms on average,
and have the occasional lock contention for a resource counter resulting
in 100ms, now you're at 2000/s minus the lock contention rate. This
strategy would scale better with the number of compute nodes, since
more nodes means more distinct locks, so you can scale out the number
of running servers separate from the number of scheduling requests.


As far as I can see, moving to an in-memory scheduler is essentially orthogonal 
to allowing multiple schedulers to run concurrently.  We can do both.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ceilometer, New measurements, Types

2015-10-09 Thread Igor Degtiarov
Meters are measurable values, so they cannot be strings.

If you really need notifications about an event that exists but
cannot be measured, you can use Events.

http://docs.openstack.org/admin-guide-cloud/telemetry-events.html

Cheers,

Igor Degtiarov
Software Engineer
Mirantis Inc.
www.mirantis.com

On Sat, Oct 10, 2015 at 6:19 AM, phot...@126.com  wrote:

> There are three types of meters defined in Ceilometer: Cumulative,
> Gauge, and Delta.
> In fact, these three types are numeric, but I need a string.
> How could I make Ceilometer support strings?
>
> Doc: http://docs.openstack.org/developer/ceilometer/new_meters.html
> image:
> --
> phot...@126.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Alec Hothan (ahothan)





On 10/9/15, 6:29 PM, "Clint Byrum"  wrote:

>Excerpts from Chris Friesen's message of 2015-10-09 17:33:38 -0700:
>> On 10/09/2015 03:36 PM, Ian Wells wrote:
>> > On 9 October 2015 at 12:50, Chris Friesen wrote:
>> >
>> > Has anybody looked at why 1 instance is too slow and what it would 
>> > take to
>> >
>> > make 1 scheduler instance work fast enough? This does not preclude 
>> > the
>> > use of
>> > concurrency for finer grain tasks in the background.
>> >
>> >
>> > Currently we pull data on all (!) of the compute nodes out of the 
>> > database
>> > via a series of RPC calls, then evaluate the various filters in python 
>> > code.
>> >
>> >
>> > I'll say again: the database seems to me to be the problem here.  Not to
>> > mention, you've just explained that they are in practice holding all the 
>> > data in
>> > memory in order to do the work so the benefit we're getting here is really 
>> > a
>> > N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
>> > secondary, in fact), and that without incremental updates to the receivers.
>> 
>> I don't see any reason why you couldn't have an in-memory scheduler.
>> 
>> Currently the database serves as the persistent storage for the resource 
>> usage, 
>> so if we take it out of the picture I imagine you'd want to have some way of 
>> querying the compute nodes for their current state when the scheduler first 
>> starts up.
>> 
>> I think the current code uses the fact that objects are remotable via the 
>> conductor, so changing that to do explicit posts to a known scheduler topic 
>> would take some work.
>> 
>
>Funny enough, I think that's exactly what Josh's "just use Zookeeper"
>message is about. Except in memory, it is "in an observable storage
>location".
>
>Instead of having the scheduler do all of the compute node inspection
>and querying though, you have the nodes push their stats into something
>like Zookeeper or consul, and then have schedulers watch those stats
>for changes to keep their in-memory version of the data up to date. So
>when you bring a new one online, you don't have to query all the nodes,
>you just scrape the data store, which all of these stores (etcd, consul,
>ZK) are built to support atomically querying and watching at the same
>time, so you can have a reasonable expectation of correctness.
>
>Even if you figured out how to make the in-memory scheduler crazy fast,
>there's still value in concurrency for other reasons. No matter how
>fast you make the scheduler, you'll be a slave to the response time of
>a single scheduling request. If you take 1ms to schedule each node
>(including just reading the request and pushing out your scheduling
>result!) you will never achieve greater than 1000/s. 1ms is way lower
>than it's going to take just to shove a tiny message into RabbitMQ or
>even 0mq.

That is not what I have seen; measurements that I did, or that were done by others, show 
between 5000 and 10000 sends *per sec* (depending on mirroring, up to 1KB msg 
size) using oslo messaging/kombu over RabbitMQ.
And this is unmodified/highly unoptimized oslo messaging code.
If you remove the oslo messaging layer, you get 25000 to 45000 msg/sec with 
kombu/RabbitMQ (which shows how inefficient the oslo messaging layer itself is).
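
For reference, here is a minimal sketch of the kind of raw kombu-over-RabbitMQ 
send test behind numbers like these; the broker URL, queue name, message count 
and payload size are assumptions for illustration, not the setup used for the 
figures above:

import time
from kombu import Connection

BROKER_URL = "amqp://guest:guest@localhost//"  # assumed local RabbitMQ
N_MSGS = 10000
PAYLOAD = "x" * 1024  # ~1KB message, matching the size mentioned above

with Connection(BROKER_URL) as conn:
    queue = conn.SimpleQueue("throughput_test")
    start = time.time()
    for _ in range(N_MSGS):
        queue.put(PAYLOAD)
    elapsed = time.time() - start
    print("sent %d msgs in %.2fs -> %.0f msg/sec"
          % (N_MSGS, elapsed, N_MSGS / elapsed))
    queue.clear()  # purge the test messages so they don't pile up
    queue.close()

Running the same loop through oslo messaging adds its own serialization and 
dispatch overhead on top of this, which is where the gap described above comes from.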


> So I'm pretty sure this is o-k for small clouds, but would be
>a disaster for a large, busy cloud.

It all depends on how many sched/sec the "large busy cloud" actually needs...

>
>If, however, you can have 20 schedulers that all take 10ms on average,
>and have the occasional lock contention for a resource counter resulting
>in 100ms, now you're at 2000/s minus the lock contention rate. This
>strategy would scale better with the number of compute nodes, since
>more nodes means more distinct locks, so you can scale out the number
>of running servers separate from the number of scheduling requests.

How many compute nodes are we talking about, max? How many schedulings per second 
is the requirement? And where are we today with the latest nova scheduler?
My point is that without these numbers we could end up under-shooting, 
over-shooting, or over-engineering, along with the cost of maintaining that extra 
complexity over the lifetime of OpenStack.

I'll just make up some numbers for the sake of this discussion:

- the latest nova scheduler can do only 100 sched/sec per instance (I guess the 
10ms average you bring up may not be that unrealistic)
- the requirement is a sustained 500 sched/sec worst case with 10K nodes (that is 
5% of 10K, and today we can barely launch 100 VM/sec sustained)

Are we going to achieve 5x with just 3 instances, which is what most people 
deploy? Not likely.
Will using a more elaborate distributed infra/DLM like consul/zk/etcd get us 
to that 500 mark with 3 instances? Maybe, but it will be at the expense 
of added complexity in the overall solution.
Can we instead optimize the nova scheduler so a single instance can do 500/sec?
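
To make those made-up numbers concrete, the back-of-envelope arithmetic is just 
a few lines (the rates below are the hypothetical figures above, not 
measurements):

target_rate = 500.0   # hypothetical requirement, sched/sec
current_rate = 100.0  # hypothetical single-instance rate today, sched/sec
budget_ms = 1000.0 / target_rate    # per-request budget to hit the target
current_ms = 1000.0 / current_rate  # roughly where we are today
print("budget: %.0f ms/request (today ~%.0f ms) -> need a %.0fx speedup"
      % (budget_ms, current_ms, target_rate / current_rate))
# budget: 2 ms/request (today ~10 ms) -> need a 5x speedup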

[openstack-dev] Ceilometer, New measurements, Types

2015-10-09 Thread phot...@126.com
There are three types of meters defined in Ceilometer: Cumulative, 
Gauge, and Delta. 
In fact, these three types are numeric, but I need a string.
How could I make Ceilometer support strings?

Doc: http://docs.openstack.org/developer/ceilometer/new_meters.html
image: 


phot...@126.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-09 Thread Adam Young

On 10/09/2015 11:04 PM, Chen, Wei D wrote:


Great idea! A core reviewer’s advice is definitely important and 
valuable before proposing a fix. I was always thinking it would help 
save us effort if we could get some agreement at some point.


Best Regards,

Dave Chen

*From:*David Stanek [mailto:dsta...@dstanek.com]
*Sent:* Saturday, October 10, 2015 3:54 AM
*To:* OpenStack Development Mailing List
*Subject:* [openstack-dev] [keystone] Let's get together and fix all 
the bugs


I would like to start running a recurring bug squashing day. The 
general idea is to get more focus on bugs and stability. You can find 
the details here: https://etherpad.openstack.org/p/keystone-office-hours



Can we start with Bug 968696?


--

David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek

www: http://dstanek.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-09 Thread Chen, Wei D
Great idea! A core reviewer’s advice is definitely important and valuable 
before proposing a fix. I was always thinking it would help save us effort if we could 
get some agreement at some point.

 

 

Best Regards,

Dave Chen

 

From: David Stanek [mailto:dsta...@dstanek.com] 
Sent: Saturday, October 10, 2015 3:54 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [keystone] Let's get together and fix all the bugs

 

I would like to start running a recurring bug squashing day. The general idea 
is to get more focus on bugs and stability. You can find the details here: 
https://etherpad.openstack.org/p/keystone-office-hours

 

 

-- 

David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek

www: http://dstanek.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Ian Wells
On 9 October 2015 at 18:29, Clint Byrum  wrote:

> Instead of having the scheduler do all of the compute node inspection
> and querying though, you have the nodes push their stats into something
> like Zookeeper or consul, and then have schedulers watch those stats
> for changes to keep their in-memory version of the data up to date. So
> when you bring a new one online, you don't have to query all the nodes,
> you just scrape the data store, which all of these stores (etcd, consul,
> ZK) are built to support atomically querying and watching at the same
> time, so you can have a reasonable expectation of correctness.
>

We have to be careful about our definition of 'correctness' here.  In
practice, the data is never going to be perfect because compute hosts
update periodically and the information is therefore always dated.  With
ZK, it's going to be strictly consistent with regard to the updates from
the compute hosts, but again that doesn't really matter too much because
the scheduler is going to have to make a best effort job with a mixed bag
of information anyway.

In fact, putting ZK in the middle basically means that your compute hosts
now synchronously update a majority of nodes in a minimum 3 node quorum -
not the fastest form of update - and then the quorum will see to notifying
the schedulers.  In practice this is just a store-and-fanout again. Once
more it's not clear to me whether the store serves much use, and as for the
fanout, I wonder if we'll need >>3 schedulers running before this actually
reduces communication overhead.

Even if you figured out how to make the in-memory scheduler crazy fast,
> there's still value in concurrency for other reasons. No matter how
> fast you make the scheduler, you'll be a slave to the response time of
> a single scheduling request. If you take 1ms to schedule each node
> (including just reading the request and pushing out your scheduling
> result!) you will never achieve greater than 1000/s. 1ms is way lower
> than it's going to take just to shove a tiny message into RabbitMQ or
> even 0mq. So I'm pretty sure this is o-k for small clouds, but would be
> a disaster for a large, busy cloud.
>

Per before, my suggestion was that every scheduler tries to maintain a copy
of the cloud's state in memory (in much the same way, per the previous
example, as every router on the internet tries to make a route table out of
what it learns from BGP).  They don't have to be perfect.  They don't have
to be in sync.  As long as there's some variability in the decision making,
they don't have to update when another scheduler schedules something (and
you can make the compute node send an immediate update when a new VM is
run, anyway).  They all stand a good chance of scheduling VMs well
simultaneously.
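
As a minimal sketch of the kind of variability I mean, assuming each scheduler 
has already produced a filtered, weighted list of candidate hosts (the hosts 
and weights below are made up):

import random

# (host, weight) pairs as a scheduler's filter/weigher pass might produce them.
candidates = [("compute-1", 9.0), ("compute-2", 7.5), ("compute-3", 3.0)]

def pick_host(weighted_hosts):
    # Pick a host with probability proportional to its weight instead of always
    # taking the top-weighted host, so concurrent schedulers are less likely to
    # all pile onto the same node.
    total = sum(weight for _, weight in weighted_hosts)
    r = random.uniform(0, total)
    upto = 0.0
    for host, weight in weighted_hosts:
        upto += weight
        if r <= upto:
            return host
    return weighted_hosts[-1][0]  # guard against floating point edge cases

print(pick_host(candidates))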

If, however, you can have 20 schedulers that all take 10ms on average,
> and have the occasional lock contention for a resource counter resulting
> in 100ms, now you're at 2000/s minus the lock contention rate. This
> strategy would scale better with the number of compute nodes, since
> more nodes means more distinct locks, so you can scale out the number
> of running servers separate from the number of scheduling requests.
>

If you have 20 schedulers that take 1ms on average, and there's absolutely
no lock contention, then you're at 20,000/s.  (Unfair, granted, since what
I'm suggesting is more likely to make rejected scheduling decisions, but
they could be rare.)

But to be fair, we're throwing made up numbers around at this point.  Maybe
it's time to work out how to test this for scale in a harness - which is
the bit of work we really need in order to do this properly, or there's no proof
we've actually helped - and leave people to code their ideas up?
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Clint Byrum
Excerpts from Chris Friesen's message of 2015-10-09 17:33:38 -0700:
> On 10/09/2015 03:36 PM, Ian Wells wrote:
> > On 9 October 2015 at 12:50, Chris Friesen wrote:
> >
> > Has anybody looked at why 1 instance is too slow and what it would take 
> > to
> >
> > make 1 scheduler instance work fast enough? This does not preclude 
> > the
> > use of
> > concurrency for finer grain tasks in the background.
> >
> >
> > Currently we pull data on all (!) of the compute nodes out of the 
> > database
> > via a series of RPC calls, then evaluate the various filters in python 
> > code.
> >
> >
> > I'll say again: the database seems to me to be the problem here.  Not to
> > mention, you've just explained that they are in practice holding all the 
> > data in
> > memory in order to do the work so the benefit we're getting here is really a
> > N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
> > secondary, in fact), and that without incremental updates to the receivers.
> 
> I don't see any reason why you couldn't have an in-memory scheduler.
> 
> Currently the database serves as the persistent storage for the resource 
> usage, 
> so if we take it out of the picture I imagine you'd want to have some way of 
> querying the compute nodes for their current state when the scheduler first 
> starts up.
> 
> I think the current code uses the fact that objects are remotable via the 
> conductor, so changing that to do explicit posts to a known scheduler topic 
> would take some work.
> 

Funny enough, I think that's exactly what Josh's "just use Zookeeper"
message is about. Except in memory, it is "in an observable storage
location".

Instead of having the scheduler do all of the compute node inspection
and querying though, you have the nodes push their stats into something
like Zookeeper or consul, and then have schedulers watch those stats
for changes to keep their in-memory version of the data up to date. So
when you bring a new one online, you don't have to query all the nodes,
you just scrape the data store, which all of these stores (etcd, consul,
ZK) are built to support atomically querying and watching at the same
time, so you can have a reasonable expectation of correctness.
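
A minimal sketch of that watch-and-cache pattern using kazoo against ZooKeeper 
(the ZK address and the /nova/compute_nodes layout are assumptions for 
illustration):

import json
import time
from kazoo.client import KazooClient

PATH = "/nova/compute_nodes"
cache = {}  # compute node name -> latest reported stats

client = KazooClient(hosts="127.0.0.1:2181")  # assumed local ZooKeeper
client.start()
client.ensure_path(PATH)

@client.ChildrenWatch(PATH)
def refresh(children):
    # Fires at startup and whenever the set of registered nodes changes; a
    # fuller version would also put a DataWatch on each child to catch stat
    # updates between membership changes.
    for name in children:
        data, _stat = client.get("%s/%s" % (PATH, name))
        cache[name] = json.loads(data) if data else {}
    for gone in set(cache) - set(children):
        del cache[gone]

while True:
    print("known compute nodes:", sorted(cache))
    time.sleep(1.0)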

Even if you figured out how to make the in-memory scheduler crazy fast,
there's still value in concurrency for other reasons. No matter how
fast you make the scheduler, you'll be a slave to the response time of
a single scheduling request. If you take 1ms to schedule each node
(including just reading the request and pushing out your scheduling
result!) you will never achieve greater than 1000/s. 1ms is way lower
than it's going to take just to shove a tiny message into RabbitMQ or
even 0mq. So I'm pretty sure this is o-k for small clouds, but would be
a disaster for a large, busy cloud.

If, however, you can have 20 schedulers that all take 10ms on average,
and have the occasional lock contention for a resource counter resulting
in 100ms, now you're at 2000/s minus the lock contention rate. This
strategy would scale better with the number of compute nodes, since
more nodes means more distinct locks, so you can scale out the number
of running servers separate from the number of scheduling requests.
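
A quick rendering of that arithmetic (the scheduler count and latencies are the 
illustrative numbers above, and the contention rate is an assumption, not a 
measurement):

schedulers = 20
avg_latency = 0.010        # 10ms per scheduling decision
contended_latency = 0.100  # 100ms when a resource-counter lock is contended
contention_rate = 0.05     # assumed: 5% of requests hit lock contention

effective = ((1 - contention_rate) * avg_latency
             + contention_rate * contended_latency)
print("~%.0f schedulings/sec" % (schedulers / effective))  # ~1379/s here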

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/09/2015 03:36 PM, Ian Wells wrote:

On 9 October 2015 at 12:50, Chris Friesen <chris.frie...@windriver.com> wrote:

Has anybody looked at why 1 instance is too slow and what it would take to

make 1 scheduler instance work fast enough? This does not preclude the
use of
concurrency for finer grain tasks in the background.


Currently we pull data on all (!) of the compute nodes out of the database
via a series of RPC calls, then evaluate the various filters in python code.


I'll say again: the database seems to me to be the problem here.  Not to
mention, you've just explained that they are in practice holding all the data in
memory in order to do the work so the benefit we're getting here is really a
N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
secondary, in fact), and that without incremental updates to the receivers.


I don't see any reason why you couldn't have an in-memory scheduler.

Currently the database serves as the persistent storage for the resource usage, 
so if we take it out of the picture I imagine you'd want to have some way of 
querying the compute nodes for their current state when the scheduler first 
starts up.


I think the current code uses the fact that objects are remotable via the 
conductor, so changing that to do explicit posts to a known scheduler topic 
would take some work.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All][Elections] Results of the TC Election

2015-10-09 Thread Tristan Cacqueray
Please join me in congratulating the 6 newly elected members of the TC.

* Doug Hellmann (dhellmann)
* Monty Taylor (mordred)
* Anne Gentle (annegentle)
* Sean Dague (sdague)
* Russell Bryant (russellb)
* Kyle Mestery (mestery)

Full results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ef58718618691a0

Thank you to all candidates who stood for election; having a good group
of candidates helps engage the community in our democratic process.

Thank you to all who voted and who encouraged others to vote. We need to
ensure your voice is heard.

Thanks to my fellow election official, Tony Breeds; I appreciate your
help and perspective.

Thank you for another great round.

Here's to Mitaka,
Tristan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-09 Thread Jay Faulkner
+1


From: Jim Rollenhagen 
Sent: Thursday, October 8, 2015 2:47 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ironic] Nominating two new core reviewers

Hi all,

I've been thinking a lot about Ironic's core reviewer team and how we might
make it better.

I'd like to grow the team more through trust and mentoring. We should be
able to promote someone to core based on a good knowledge of *some* of
the code base, and trust them not to +2 things they don't know about. I'd
also like to build a culture of mentoring non-cores on how to review, in
preparation for adding them to the team. Through these pieces, I'm hoping
we can have a few rounds of core additions this cycle.

With that said...

I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
have been super high quality, and the quantity is ever-increasing. He's
also started helping out with some smaller efforts (full tempest, for
example), and I'd love to see that continue with larger efforts.

I'd also like to nominate John Villalovos (jlvillal). John has been
reviewing a ton of code and making a real effort to learn everything,
and keep track of everything going on in the project.

Ironic cores, please reply with your vote; provided feedback is positive,
I'd like to make this official next week sometime. Thanks!

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Different OpenStack components

2015-10-09 Thread Fox, Kevin M
The official list is 
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

Thanks,
Kevin

From: Amrith Kumar [amr...@tesora.com]
Sent: Friday, October 09, 2015 3:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Different OpenStack components

A google search produced this as result #2.

http://governance.openstack.org/reference/projects/index.html

Looks pretty complete to me.

-amrith

--
Amrith Kumar, CTO   | amr...@tesora.com
Tesora, Inc | @amrithkumar
125 CambridgePark Drive, Suite 400  | http://www.tesora.com
Cambridge, MA. 02140|



From: Abhishek Talwar [mailto:abhishek.tal...@tcs.com]
Sent: Friday, October 09, 2015 3:46 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] Different OpenStack components

Hi Folks,

I have been working with OpenStack for a while now, and I know that other than the 
main components (nova, neutron, glance, cinder, horizon, tempest, keystone, etc.) 
there are many more components in OpenStack (like Sahara and Trove).

So, where can I see the list of all existing OpenStack components, and is there 
any documentation for these components so that I can read about the roles these 
components play?

Thanks and Regards
Abhishek Talwar

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-10-09 14:00:40 -0700:
> On 10/09/2015 02:52 PM, Jonathan D. Proulx wrote:
> > On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
> > :On 10/09/2015 01:39 PM, David Stanek wrote:
> > :>
> > :>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:
> > :>As an operator I'd be happy to use SRV records to define endpoints,
> > :>though multiple regions could make that messy.
> > :>
> > :>would we make subdomins per region or include region name in the
> > :>service name?
> > :>
> > :>_compute-regionone._tcp.example.com 
> > :>-vs-
> > :>_compute._tcp.regionone.example.com 
> > :>
> > :>Also not all operators can controll their DNS to this level so it
> > :>couldn't be the only option.
> > :
> > :SO - XMPP does this. The way it works is that if your XMPP provider
> > :has put the approriate records in DNS, then everything Just Works. If
> > :not, then you, as a consumer, have several pieces of information you
> > :need to provide by hand.
> > :
> > :Of course, there are already several pieces of information you have
> > :to provide by hand to connect to OpenStack, so needing to download a
> > :manifest file or something like that to talk to a cloud in an
> > :environment where the people running a cloud do not have the ability
> > :to add information to DNS (boggles) shouldn't be that terrible.
> > 
> > yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
> > of local config options is manageable. A cloud with X endpoints and Y
> > regions is significantly more.
> > 
> > Not to say this couldn't be done by packing more stuff into the openrc
> > or equivalent so users don't need to directly enter all that, but that
> > would be a significant change and one I think would be more difficult
> > for smaller operations.
> > 
> > :One could also imagine an in-between option where OpenStack could run
> > :an _optional_ DNS for this purpose - and then the only 'by-hand'
> > :you'd need for clouds with no real DNS is the location of the
> > :discover DNS.
> > 
> > Yes a special purpose DNS (a la dnsbl) might be preferable to
> > pushing around static configs.
> 
> I do realize lots of people want to go in much more radical directions
> here. I think we have to be really careful about that. The current
> cinder v1 -> v2 transition challenges demonstrate how much inertia there
> is. 3 years of talking about a Tasks API is another instance of it.
> 
> We aren't starting with a blank slate. This is brownfield development.
> There are enough users of this that making shifts need to be done in
> careful shifts that enable a new thing similar enough to the old thing,
> that people will easily be able to take advantage of it. Which means I
> think deciding to jump off the REST bandwagon for this is currently a
> bridge too far. At least to get anything tangible done in the next 6 to
> 12 months.
> 

I'm 100% in agreement that we can't abandon things that we've created. If
we create a DNS based catalog that is ready for prime time tomorrow,
we will have the REST based catalog for _years_.

> I think getting us a service catalog served over REST that doesn't
> require auth, and doesn't require tenant_ids in urls, gets us someplace
> we could figure out a DNS representation (for those that wanted that).
> But we have to tick / tock this and not change transports and
> representations at the same time.
> 

I don't think we're suggesting that we abandon the current one. We don't
break userspace!

However, replacing the underpinnings of the current one with the new one,
and leaving the current one as a compatibility layer _is_ a way to get
progress on the new one without shafting users. So I think considerable
consideration should be given to an approach where we limit working on
the core of the current solution, and replace that core with the new
solution + compatibility layer.

> And, as I've definitely discovered through this process the Service
> Catalog today has been fluid enough that where it is used, and what
> people rely on in it, isn't always clear all at once. For instance,
> tenant_ids in urls are very surface features in Nova (we don't rely on
> it, we're using the context), don't exist at all in most new services,
> and are very corely embedded in Swift. This is part of what has also
> required the service catalog to be embedded in the Token, which causes token
> bloat, and has led to other features to try to shrink the catalog by
> filtering it by what a user is allowed. Which in turn ended up being
> used by Horizon to populate the feature matrix users see.
> 
> So we're pulling on a thread, and we have to do that really carefully.
> 
> I think the important thing is to focus on what we have in 6 months
> doesn't break current users / applications, and is incrementally closer
> to our end game. That's the lens 

[openstack-dev] [app-catalog] Tokyo Summit Sessions

2015-10-09 Thread Christopher Aedo
Hello!  I wanted to send a note letting people know about the two
sessions we have planned for the Tokyo Summit coming up.  Both of them
are on Thursday, with a Fishbowl session followed by a working
session.

I'm eager to get feedback and input while we're in Tokyo.  We are
interested not only in adding content, but in making that content
easier to add and easier to consume.  To that end we've got an
excellent Horizon plugin that makes the App Catalog essentially a
native element of your OpenStack cloud.  Once we complete the first
pass of a real API for the site we will also write a plugin for the
unified client.  We have other plans and ideas along these lines, but
could really use your help in making sure we are headed in the right
direction.

During the Fishbowl we will go over the progress we've made in the
last six months, where things with the App Catalog stand today, and
what our plans are for the next cycle.  This will be highly
interactive, so if you care even a little bit about making OpenStack
better for the end users you should join us!

That session will be followed by a working session where we'll have a
chance to talk over some of the major design decisions we're making
and discuss improvements, concerns, or anything else related to the
catalog that comes up.

Status, progress and plans
http://mitakadesignsummit.sched.org/event/27bf7f9a29094cf9e96026d682db1609
Thursday in the Kotobuki room, from 1:50 to 2:30

Work session
http://mitakadesignsummit.sched.org/event/7754b46437c14cd4fdb51debebe89fb0
Thursday in the Tachibana room, from 2:40 to 3:20

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Joshua Harlow

And one last reply with more code:

http://paste.openstack.org/show/475941/ (a creator of services that 
dynamically creates services, and destroys them after a set amount of 
time is included in here, along with the prior resource watcher).


Works locally, should work for u as well.

Output from example run of 'creator process'

http://paste.openstack.org/show/475942/

Output from example run of 'watcher process'

http://paste.openstack.org/show/475943/

Enjoy!

-josh

Joshua Harlow wrote:

Further example stuff,

Get kazoo installed (http://kazoo.readthedocs.org/)

Output from my local run (with no data)

$ python test.py
Kazoo client has changed to state: CONNECTED
Got data: '' for new resource /node/compute_nodes/h1.hypervisor.yahoo.com
Idling (ran for 0.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 1.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 2.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 3.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 4.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 5.00s).
Kazoo client has changed to state: LOST
Traceback (most recent call last):
File "test.py", line 72, in 
time.sleep(1.0)
KeyboardInterrupt

Joshua Harlow wrote:

Gregory Haynes wrote:

Excerpts from Joshua Harlow's message of 2015-10-08 15:24:18 +:

On this point, and just thinking out loud. If we consider saving
compute_node information into say a node in said DLM backend (for
example a znode in zookeeper[1]); this information would be updated
periodically by that compute_node *itself* (it would, say, contain
information about what VMs are running on it, what their utilization is,
and so on).

For example the following layout could be used:

/nova/compute_nodes/

 data could be:

{
vms: [],
memory_free: XYZ,
cpu_usage: ABC,
memory_used: MNO,
...
}

Now if we imagine each/all schedulers having watches
on /nova/compute_nodes/ ([2] consul and etc.d have equivalent concepts
afaik) then when a compute_node updates that information a push
notification (the watch being triggered) will be sent to the
scheduler(s) and the scheduler(s) could then update a local in-memory
cache of the data about all the hypervisors that can be selected from
for scheduling. This avoids any reading of a large set of data in the
first place (besides an initial read-once on startup to read the
initial list + setup the watches); in a way it's similar to push
notifications. Then when scheduling a VM -> hypervisor there isn't any
need to query anything but the local in-memory representation that the
scheduler is maintaining (and updating as watches are triggered)...

So this is why I was wondering about what capabilities of cassandra are
being used here; because the above I think are unique capabilities of
DLM like systems (zookeeper, consul, etcd) that could be advantageous
here...

[1]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes



[2]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches




I wonder if we would even need to make something so specialized to get
this kind of local caching. I don't know what the current ZK tools are
but the original Chubby paper described that clients always have a
write-through cache for nodes which they set up subscriptions for in
order to break the cache.


Perhaps not, make it as simple as we want as long as people agree that
the concept is useful. My idea is it would look like something like:

(simplified obviously):

http://paste.openstack.org/show/475938/

Then resources (in this example compute_nodes) would register themselves
via a call like:

>>> from kazoo import client
>>> import json
>>> c = client.KazooClient()
>>> c.start()
>>> n = "/node/compute_nodes"
>>> c.ensure_path(n)
>>> c.create("%s/h1.hypervisor.yahoo.com" % n, json.dumps({}))

^^^ the dictionary above would be whatever data to then put into the
receivers caches...

Then in the pasted program (running in a different shell/computer/...)
the cache would then get updated, and then a user of that cache can use
it to find resources to schedule things to

The example should work, just get zookeeper setup:

http://packages.ubuntu.com/precise/zookeeperd should do all of that, and
then try it out...



Also, re: etcd - The last time I checked their subscription API was
woefully inadequate for performing this type of thing without herding
issues.


Any idea on the consul watch capabilities?

Similar API(s) appear to exist (but I don't know how they work, if they
do at all); https://www.consul.io/docs/agent/watches.html



__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Neil Jerram
FWIW - and somewhat ironically given what you said just before - I couldn't 
parse your last sentence below... You might like to follow up with a corrected 
version.

(On the broad point, BTW, I really agree with you. So much OpenStack discussion 
is rendered difficult to get into by use of wrong or imprecise language.)

Regards,
 Neil


  Original Message
From: Clint Byrum
Sent: Friday, 9 October 2015 19:08
To: openstack-dev
Reply To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Scheduler proposal


Excerpts from Chris Friesen's message of 2015-10-09 10:54:36 -0700:
> On 10/09/2015 11:09 AM, Zane Bitter wrote:
>
> > The optimal way to do this would be a weighted random selection, where the
> > probability of any given host being selected is proportional to its 
> > weighting.
> > (Obviously this is limited by the accuracy of the weighting function in
> > expressing your actual preferences - and it's at least conceivable that this
> > could vary with the number of schedulers running.)
> >
> > In fact, the choice of the name 'weighting' would normally imply that it's 
> > done
> > this way; hearing that the 'weighting' is actually used as a 'score' with 
> > the
> > highest one always winning is quite surprising.
>
> If you've only got one scheduler, there's no need to get fancy, you just pick
> the "best" host based on your weighing function.
>
> It's only when you've got parallel schedulers that things get tricky.
>

Note that I think you mean _concurrent_ not _parallel_ schedulers.

Parallel schedulers would be trying to solve the same unit of work by
breaking it up into smaller components and doing them at the same time.

Concurrent means they're just doing different things at the same time.

I know this is nit-picky, but we use the wrong word _A LOT_ and the
problem space is actually vastly different, as parallelizable problems
have a whole set of optimizations and advantages that generic concurrent
problems (especially those involving mutating state!) have a whole set
of race conditions that must be managed.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Different OpenStack components

2015-10-09 Thread Amrith Kumar
A google search produced this as result #2.

http://governance.openstack.org/reference/projects/index.html

Looks pretty complete to me.

-amrith

--
Amrith Kumar, CTO   | amr...@tesora.com
Tesora, Inc | @amrithkumar
125 CambridgePark Drive, Suite 400  | http://www.tesora.com
Cambridge, MA. 02140|



From: Abhishek Talwar [mailto:abhishek.tal...@tcs.com]
Sent: Friday, October 09, 2015 3:46 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] Different OpenStack components

Hi Folks,

I have been working with OpenStack for a while now, and I know that other than the 
main components (nova, neutron, glance, cinder, horizon, tempest, keystone, etc.) 
there are many more components in OpenStack (like Sahara and Trove).

So, where can I see the list of all existing OpenStack components, and is there 
any documentation for these components so that I can read about the roles these 
components play?

Thanks and Regards
Abhishek Talwar

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-09 Thread Steve Martinelli

Dave, thanks so much for organizing this recurring event. I'll definitely
be there to help squash some bugs this Friday!

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead



From:   David Stanek 
To: OpenStack Development Mailing List

Date:   2015/10/09 03:55 PM
Subject:[openstack-dev] [keystone] Let's get together and fix all the
bugs



I would like to start running a recurring bug squashing day. The general
idea is to get more focus on bugs and stability. You can find the details
here: https://etherpad.openstack.org/p/keystone-office-hours


--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Rally][Meeting][Agenda]

2015-10-09 Thread Roman Vasilets
Hi, it's a friendly reminder that if you want to discuss some topics at
Rally meetings, please add your topic to our Meeting agenda:
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic discussion, and add some information about
the topic (links, etc.). Thank you for your attention.

- Best regards, Vasilets Roman.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Auto-abandon bot

2015-10-09 Thread Ben Nemec
Hi OoOers,

As discussed in the meeting a week or two ago, we would like to bring
back the auto-abandon functionality for old, unloved gerrit reviews.
I've got a first implementation of a tool to do that:
https://github.com/cybertron/tripleo-auto-abandon

It currently follows these rules for determining what would be abandoned:

Never abandoned:
-WIP patches are never abandoned
-Approved patches are never abandoned
-Patches with no feedback are never abandoned
-Patches with negative feedback, followed by any sort of non-negative
comment are never abandoned (this is to allow committers to respond to
reviewer comments)
-Patches that get restored after a first abandonment are not abandoned
again, unless a new patch set is pushed and also receives negative feedback.

Candidates for abandonment:
-Patches with negative feedback that has not been responded to in over a
month.
-Patches that are failing CI for over a month on the same patch set
(regardless of any followup comments - the intent is that patches
expected to fail CI should be marked WIP).

My intent with this can be summed up as "when in doubt, leave it open".
 I'm open to discussion on any of the points above though.  I expect
that at least the current message for abandonment needs tweaking before
this gets run for real.
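
For anyone curious, here is a rough, hypothetical sketch of the sort of Gerrit 
query involved; the query string, project and limit are assumptions for 
illustration, not what the tool actually runs (see the repository above for the 
real logic):

import json
import requests

GERRIT = "https://review.openstack.org"
# Open changes with negative code-review feedback, untouched for 30+ days.
QUERY = "status:open project:openstack/tripleo-incubator label:Code-Review<0 age:30d"

resp = requests.get("%s/changes/" % GERRIT, params={"q": QUERY, "n": 25})
changes = json.loads(resp.text[4:])  # Gerrit prefixes JSON replies with a )]}' guard
for change in changes:
    print(change["_number"], change["subject"])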

I'm a little torn on whether this should be run under my account or a
dedicated bot account.  On the one hand, I don't really want to end up
subscribed to every dead change, but on the other this is only supposed
to run on changes that are unlikely to be resurrected, so that should
limit the review spam.

Anyway, please take a look and let me know what you think.  Thanks.

-Ben

For the curious, this is the list of patches that would currently be
abandoned by the tool:

Abandoning https://review.openstack.org/192521 - Add centos7 test
Abandoning https://review.openstack.org/168002 - Allow dib to be lauched
from venv
Abandoning https://review.openstack.org/180807 - Warn when silently
ignoring executable files
Abandoning https://review.openstack.org/91376 - RabbitMQ: VHost support
Abandoning https://review.openstack.org/112870 - Adding configuration
options for stunnel.
Abandoning https://review.openstack.org/217511 - Fix "pkg-map failed"
issue building IPA ramdisk
Abandoning https://review.openstack.org/176060 - Introduce Overcloud Log
Aggregation
Abandoning https://review.openstack.org/141380 - Add --force-yes option
for install-packages
Abandoning https://review.openstack.org/149433 - Double quote to prevent
globbing and word splitting in os-db-create
Abandoning https://review.openstack.org/204639 - Perform a booting test
for our images
Abandoning https://review.openstack.org/102304 - Configures keystone
with apache
Abandoning https://review.openstack.org/214771 - Ramdisk should consider
the size unit when inspecting the amount of RAM
Abandoning https://review.openstack.org/87223 - Install the "classic"
icinga interface
Abandoning https://review.openstack.org/89744 - configure keystone with
apache
Abandoning https://review.openstack.org/176057 - Introduce Elements for
Log Aggregation
Abandoning https://review.openstack.org/153747 - Fail job if SELinux
denials are found
Abandoning https://review.openstack.org/179229 - Document how to use
network isolation/static IPs
Abandoning https://review.openstack.org/109651 - Add explicit
configuraton parameters for DB pool size
Abandoning https://review.openstack.org/189026 - shorter sleeps if
metadata changes are detected
Abandoning https://review.openstack.org/139627 - Nothing to see here
Abandoning https://review.openstack.org/117887 - Support Debian distro
for haproxy iptables
Abandoning https://review.openstack.org/113823 - Allow single node
mariadb clusters to restart
Abandoning https://review.openstack.org/110906 - Install pkg-config to
use ceilometer-agent-compute
Abandoning https://review.openstack.org/86580 - Add support for
specifying swift ring directory range
Abandoning https://review.openstack.org/142529 - Allow enabling debug
logs at build time
Abandoning https://review.openstack.org/89742 - configure keystone with
apache
Abandoning https://review.openstack.org/138007 - add elements Memory and
Disk limit to rabbitmq
Abandoning https://review.openstack.org/130826 - Nova rule needs to be
added with add-rule for persistence
Abandoning https://review.openstack.org/177043 - Make backwards
compatible qcow2s by default
Abandoning https://review.openstack.org/165118 - Make os_net_config
package private
Abandoning https://review.openstack.org/118220 - Added a MySQL logrotate
configuration
Abandoning https://review.openstack.org/94500 - Ceilometer Service
Update/Upgrade in TripleO
Abandoning https://review.openstack.org/177559 - Dont pass xattrs to tar
if its unsupported
Abandoning https://review.openstack.org/87226 - Install check_mk server
Abandoning https://review.openstack.org/113827 - Configure haproxy logging

__

Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Joshua Harlow

Further example stuff,

Get kazoo installed (http://kazoo.readthedocs.org/)

Output from my local run (with no data)

$ python test.py
Kazoo client has changed to state: CONNECTED
Got data: '' for new resource /node/compute_nodes/h1.hypervisor.yahoo.com
Idling (ran for 0.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 1.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 2.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 3.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 4.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 5.00s).
Kazoo client has changed to state: LOST
Traceback (most recent call last):
  File "test.py", line 72, in 
time.sleep(1.0)
KeyboardInterrupt

Joshua Harlow wrote:

Gregory Haynes wrote:

Excerpts from Joshua Harlow's message of 2015-10-08 15:24:18 +:

On this point, and just thinking out loud. If we consider saving
compute_node information into say a node in said DLM backend (for
example a znode in zookeeper[1]); this information would be updated
periodically by that compute_node *itself* (it would, say, contain
information about what VMs are running on it, what their utilization is,
and so on).

For example the following layout could be used:

/nova/compute_nodes/

 data could be:

{
vms: [],
memory_free: XYZ,
cpu_usage: ABC,
memory_used: MNO,
...
}

Now if we imagine each/all schedulers having watches
on /nova/compute_nodes/ ([2] consul and etc.d have equivalent concepts
afaik) then when a compute_node updates that information a push
notification (the watch being triggered) will be sent to the
scheduler(s) and the scheduler(s) could then update a local in-memory
cache of the data about all the hypervisors that can be selected from
for scheduling. This avoids any reading of a large set of data in the
first place (besides an initial read-once on startup to read the
initial list + setup the watches); in a way it's similar to push
notifications. Then when scheduling a VM -> hypervisor there isn't any
need to query anything but the local in-memory representation that the
scheduler is maintaining (and updating as watches are triggered)...

So this is why I was wondering about what capabilities of cassandra are
being used here; because the above I think are unique capabilities of
DLM like systems (zookeeper, consul, etcd) that could be advantageous
here...

[1]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes


[2]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches



I wonder if we would even need to make something so specialized to get
this kind of local caching. I don't know what the current ZK tools are
but the original Chubby paper described that clients always have a
write-through cache for nodes which they set up subscriptions for in
order to break the cache.


Perhaps not, make it as simple as we want as long as people agree that
the concept is useful. My idea is it would look like something like:

(simplified obviously):

http://paste.openstack.org/show/475938/

Then resources (in this example compute_nodes) would register themselves
via a call like:

 >>> from kazoo import client
 >>> import json
 >>> c = client.KazooClient()
 >>> c.start()
 >>> n = "/node/compute_nodes"
 >>> c.ensure_path(n)
 >>> c.create("%s/h1.hypervisor.yahoo.com" % n, json.dumps({}))

^^^ the dictionary above would be whatever data to then put into the
receivers caches...

Then in the pasted program (running in a different shell/computer/...)
the cache would then get updated, and then a user of that cache can use
it to find resources to schedule things to

The example should work, just get zookeeper setup:

http://packages.ubuntu.com/precise/zookeeperd should do all of that, and
then try it out...



Also, re: etcd - The last time I checked their subscription API was
woefully inadequate for performing this type of thing without herding
issues.


Any idea on the consul watch capabilities?

Similar API(s) appear to exist (but I don't know how they work, if they
do at all); https://www.consul.io/docs/agent/watches.html



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Joshua Harlow

Gregory Haynes wrote:

Excerpts from Joshua Harlow's message of 2015-10-08 15:24:18 +:

On this point, and just thinking out loud. If we consider saving
compute_node information into say a node in said DLM backend (for
example a znode in zookeeper[1]); this information would be updated
periodically by that compute_node *itself* (it would say contain
information about what VMs are running on it, what there utilization is
and so-on).

For example the following layout could be used:

/nova/compute_nodes/

  data could be:

{
 vms: [],
 memory_free: XYZ,
 cpu_usage: ABC,
 memory_used: MNO,
 ...
}

Now if we imagine each/all schedulers having watches
on /nova/compute_nodes/ ([2] consul and etc.d have equivalent concepts
afaik) then when a compute_node updates that information a push
notification (the watch being triggered) will be sent to the
scheduler(s) and the scheduler(s) could then update a local in-memory
cache of the data about all the hypervisors that can be selected from
for scheduling. This avoids any reading of a large set of data in the
first place (besides an initial read-once on startup to read the
initial list + setup the watches); in a way it's similar to push
notifications. Then when scheduling a VM ->  hypervisor there isn't any
need to query anything but the local in-memory representation that the
scheduler is maintaining (and updating as watches are triggered)...

So this is why I was wondering about what capabilities of cassandra are
being used here; because the above I think are unique capabilities of
DLM like systems (zookeeper, consul, etcd) that could be advantageous
here...

[1]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes

[2]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches


I wonder if we would even need to make something so specialized to get
this kind of local caching. I don't know what the current ZK tools are
but the original Chubby paper described that clients always have a
write-through cache for nodes which they set up subscriptions for in
order to break the cache.


Perhaps not, make it as simple as we want as long as people agree that 
the concept is useful. My idea is it would look like something like:


(simplified obviously):

http://paste.openstack.org/show/475938/

Then resources (in this example compute_nodes) would register themselves 
via a call like:


>>> from kazoo import client
>>> import json
>>> c = client.KazooClient()
>>> c.start()
>>> n = "/node/compute_nodes"
>>> c.ensure_path(n)
>>> c.create("%s/h1.hypervisor.yahoo.com" % n, json.dumps({}))

^^^ the dictionary above would be whatever data to then put into the 
receivers caches...


Then in the pasted program (running in a different shell/computer/...) 
the cache would then get updated, and then a user of that cache can use 
it to find resources to schedule things to


The example should work, just get zookeeper setup:

http://packages.ubuntu.com/precise/zookeeperd should do all of that, and 
then try it out...




Also, re: etcd - The last time I checked their subscription API was
woefully inadequate for performing this type of thing without herding
issues.


Any idea on the consul watch capabilities?

Similar API(s) appear to exist (but I don't know how they work, if they 
do at all); https://www.consul.io/docs/agent/watches.html






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Ian Wells
On 9 October 2015 at 12:50, Chris Friesen 
wrote:

>> Has anybody looked at why 1 instance is too slow and what it would take to
>> make 1 scheduler instance work fast enough? This does not preclude the
>> use of
>> concurrency for finer grain tasks in the background.
>
> Currently we pull data on all (!) of the compute nodes out of the database
> via a series of RPC calls, then evaluate the various filters in python code.
>

I'll say again: the database seems to me to be the problem here.  Not to
mention, you've just explained that they are in practice holding all the
data in memory in order to do the work, so the benefit we're getting here is
really an N-to-1-to-M pattern with a DB in the middle (the store-to-DB is
rather secondary, in fact), and that without incremental updates to the
receivers.

> I suspect it'd be a lot quicker if each filter was a DB query.

That's certainly one solution, but again, unless you can tell me *why* this
information will not all fit in memory per process (when it does right
now), I'm still not clear why a database is required at all, let alone a
central one.  Even if it doesn't fit, then a local DB might be reasonable
compared to a centralised one.  The schedulers don't need to work off of
precisely the same state, they just need to make different choices to each
other, which doesn't require a that's-mine-hands-off approach; and they
aren't going to have a perfect view of the state of a distributed system
anyway, so retries are inevitable.

On a different topic, on the weighted choice: it's not 'optimal', given
this is a packing problem, so there isn't a perfect solution.  In fact,
given we're trying to balance the choice of a preferable host with the
chance that multiple schedulers make different choices, it's likely worse
than even weighting.  (Technically I suspect we'd want to rethink whether
the weighting mechanism is actually getting us a benefit.)
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Shamail Tahir
Well said!

On Fri, Oct 9, 2015 at 5:00 PM, Sean Dague  wrote:

> On 10/09/2015 02:52 PM, Jonathan D. Proulx wrote:
> > On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
> > :On 10/09/2015 01:39 PM, David Stanek wrote:
> > :>
> > :>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx  > :>> wrote:
> > :>As an operator I'd be happy to use SRV records to define endpoints,
> > :>though multiple regions could make that messy.
> > :>
> > :>would we make subdomains per region or include region name in the
> > :>service name?
> > :>
> > :>_compute-regionone._tcp.example.com 
> > :>-vs-
> > :>_compute._tcp.regionone.example.com
> > :>
> > :>Also not all operators can control their DNS to this level so it
> > :>couldn't be the only option.
> > :
> > :SO - XMPP does this. The way it works is that if your XMPP provider
> > :has put the appropriate records in DNS, then everything Just Works. If
> > :not, then you, as a consumer, have several pieces of information you
> > :need to provide by hand.
> > :
> > :Of course, there are already several pieces of information you have
> > :to provide by hand to connect to OpenStack, so needing to download a
> > :manifest file or something like that to talk to a cloud in an
> > :environment where the people running a cloud do not have the ability
> > :to add information to DNS (boggles) shouldn't be that terrible.
> >
> > yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
> > of local config options is manageable. A cloud with X endpoints and Y
> > regions is significantly more.
> >
> > Not to say this couldn't be done by packing more stuff into the openrc
> > or equivalent so users don't need to directly enter all that, but that
> > would be a significant change and one I think would be more difficult
> > for smaller operations.
> >
> > :One could also imagine an in-between option where OpenStack could run
> > :an _optional_ DNS for this purpose - and then the only 'by-hand'
> > :you'd need for clouds with no real DNS is the location of the
> > :discover DNS.
> >
> > Yes a special purpose DNS (a la dnsbl) might be preferable to
> > pushing around static configs.
>
> I do realize lots of people want to go in much more radical directions
> here. I think we have to be really careful about that. The current
> cinder v1 -> v2 transition challenges demonstrate how much inertia there
> is. 3 years of talking about a Tasks API is another instance of it.

Yep... very valid point.

>
> We aren't starting with a blank slate. This is brownfield development.
> There are enough users of this that making shifts need to be done in
> careful shifts that enable a new thing similar enough to the old thing,
> that people will easily be able to take advantage of it. Which means I
> think deciding to jump off the REST bandwagon for this is currently a
> bridge too far. At least to get anything tangible done in the next 6 to
> 12 months.
>
++ but I think it does make sense to take possible future design
considerations into account.  For example, we shouldn't abandon REST (for
the points you have raised) but if there is interest in possibly using DNS
in the future then we should try to make design choices today that would
allow for that direction in the future.  To further the compatibility
conversation, if/when we do decide to add DNS... we will still need to
support REST for an indefinite amount of time to let people choose their
desired mode of operation over a time window that should be (for the most
part) in their control due to their own pace of adopting changes.
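
To make the DNS option a bit more concrete, a client-side SRV lookup of one
of the names from the example above could be as small as the following
sketch (dnspython assumed; the record name is the hypothetical one from the
example, nothing publishes it today):

import dns.resolver

# Hypothetical record name from the example above.
answers = dns.resolver.query('_compute._tcp.regionone.example.com', 'SRV')
for srv in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print('https://%s:%d/' % (srv.target.to_text(omit_final_dot=True),
                              srv.port))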

>
> I think getting us a service catalog served over REST that doesn't
> require auth, and doesn't require tenant_ids in urls, gets us someplace
> we could figure out a DNS representation (for those that wanted that).
> But we have to tick / tock this and not change transports and
> representations at the same time.
>
> And, as I've definitely discovered through this process the Service
> Catalog today has been fluid enough that where it is used, and what
> people rely on in it, isn't always clear all at once. For instance,
> tenant_ids in urls are very surface features in Nova (we don't rely on
> it, we're using the context), don't exist at all in most new services,
> and are very corely embedded in Swift. This is part of what has also
> required the service catalog is embedded in the Token, which causes token
> bloat, and has led to other features to try to shrink the catalog by
> filtering it by what a user is allowed. Which in turn ended up being
> used by Horizon to populate the feature matrix users see.
>
> ++

> So we're pulling on a thread, and we have to do that really carefully.
>
> I think the important thing is to focus on making sure that what we have
> in 6 months doesn't break current users / applications, and is incrementally
> closer to our end game. That's the lens I'm

Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Alec Hothan (ahothan)

There are several ways to make python code that deals with a lot of data 
faster, especially when it comes to operating on DB fields from SQL tables (and 
that is not limited to the nova scheduler).
Pulling data from large SQL tables and operating on them through regular python 
code (using python loops) is extremely inefficient due to the nature of the 
python interpreter. If this is what the nova scheduler code is doing today, the 
good thing is that there is potentially huge room for improvement.


The approach to scale out, in practice, means a few instances (3 instances is 
common), meaning the gain would be on the order of 3x (or less than one order of 
magnitude) but with sharply increased complexity to deal with concurrent schedulers 
and potentially conflicting results (with the use of tools like ZK or Consul...). 
But in essence we're basically just running the same unoptimized code 
concurrently to achieve a better throughput.
On the other hand optimizing something that is not very optimized to start with 
can yield a much better return than 3x, with the advantage of simplicity (one 
active scheduler, which could be backed by a standby for HA).

Python is actually one of the better languages to do *fast* in-memory big data 
processing using open source python scientific and data analysis libraries as 
they can provide native speed through cythonized libraries and powerful high 
level abstractions to do complex filters and vectorized operations. Not only is it 
fast, but it also yields much smaller code.

I have used libraries such as numpy and pandas to operate on very large data 
sets (the equivalent of SQL tables with hundreds of thousands of rows) and 
there are easily 2 orders of magnitude of difference for operating on these data 
in memory between plain python code with loops and python code using these 
libraries (that is without any DB access).
The ordering of filters to get the kind of reduction that you describe below 
certainly helps, but it becomes second order when you use pandas filters because 
they are extremely fast even for very large datasets.

I'm curious to know why this path was not explored more before embarking full 
speed on concurrency/scale out options which is a very complex and treacherous 
path as we see in this discussion. Clearly very attractive intellectually to 
work with all these complex distributed frameworks, but the cost of complexity 
is often overlooked.

Is there any data showing the performance of the current nova scheduler? How 
many scheduling decisions can nova make per second at scale with worst-case filters?
When you think about it, 10,000 nodes and their associated properties is not 
such a big number if you use the right libraries.
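
Just to make that concrete, here is a toy sketch of a vectorized filter 
pass over 10,000 hosts (pandas/numpy assumed; the column names are made up 
for illustration, not the real nova fields):

import numpy as np
import pandas as pd

n = 10000
hosts = pd.DataFrame({
    'host': ['host-%d' % i for i in range(n)],
    'free_ram_mb': np.random.randint(0, 256 * 1024, n),
    'free_disk_gb': np.random.randint(0, 2000, n),
    'vcpus_free': np.random.randint(0, 64, n),
})

# One vectorized "filter pass" over all hosts -- no python-level loop.
candidates = hosts[(hosts.free_ram_mb >= 4096) &
                   (hosts.free_disk_gb >= 80) &
                   (hosts.vcpus_free >= 2)]

# A trivial "weigher": keep the ten surviving hosts with the least free RAM.
best = candidates.sort_values('free_ram_mb').head(10)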




On 10/9/15, 1:10 PM, "Joshua Harlow"  wrote:

>And also we should probably deprecate/not recommend:
>
>http://docs.openstack.org/developer/nova/api/nova.scheduler.filters.json_filter.html#nova.scheduler.filters.json_filter.JsonFilter
>
>That filter IMHO basically disallows optimizations like forming SQL 
>statements for each filter (and then letting the DB do the heavy 
>lifting) or say having each filter say 'oh my logic can be performed by 
>a prepared statement ABC and u should just use that instead' (and then 
>letting the DB do the heavy lifting).
>
>Chris Friesen wrote:
>> On 10/09/2015 12:25 PM, Alec Hothan (ahothan) wrote:
>>>
>>> Still the point from Chris is valid. I guess the main reason openstack is
>>> going with multiple concurrent schedulers is to scale out by
>>> distributing the
>>> load between multiple instances of schedulers because 1 instance is too
>>> slow. This discussion is about coordinating the many instances of
>>> schedulers
>>> in a way that works and this is actually a difficult problem and will get
>>> worst as the number of variables for instance placement increases (for
>>> example NFV is going to require a lot more than just cpu pinning, huge
>>> pages
>>> and numa).
>>>
>>> Has anybody looked at why 1 instance is too slow and what it would
>>> take to
>>> make 1 scheduler instance work fast enough? This does not preclude the
>>> use of
>>> concurrency for finer grain tasks in the background.
>>
>> Currently we pull data on all (!) of the compute nodes out of the
>> database via a series of RPC calls, then evaluate the various filters in
>> python code.
>>
>> I suspect it'd be a lot quicker if each filter was a DB query.
>>
>> Also, ideally we'd want to query for the most "strict" criteria first,
>> to reduce the total number of comparisons. For example, if you want to
>> implement the "affinity" server group policy, you only need to test a
>> single host. If you're matching against host aggregate metadata, you
>> only need to test against hosts in matching aggregates.
>>
>> Chris
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstac

Re: [openstack-dev] Feedback about Swift API - Especially about Large Objects

2015-10-09 Thread Clay Gerrard
A lot of these deficiencies are drastically improved with static large
objects - and non-trivial to address (impossible?) with DLO's because of
their dynamic nature.  It's unfortunate, but DLO's don't really serve your
use-case very well - and you should find a way to transition to SLO's [1].

We talked about improving the checksumming behavior in SLO's for the
general naive sync case back at the hack-a-thon before the Vancouver summit
- but it's tricky (MD5 => CRC) - and would probably require a API version
bump.

All we've been able to get done so far is improve the native client
handling [2] - but if using SLO's you may find a similar solution quite
manageable.
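
For what it's worth, the SLO manifest ETag is at least something a sync
tool can recompute locally -- per the docs it's the MD5 of the concatenated
segment MD5s -- provided the tool knows the segment size that was used on
upload. A rough sketch (the segment size here is made up):

import hashlib

def slo_style_etag(path, segment_size=1024 * 1024 * 1024):
    # MD5 of the concatenated hex MD5s of each segment-sized chunk.
    etag_of_etags = hashlib.md5()
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(segment_size)
            if not chunk:
                break
            etag_of_etags.update(hashlib.md5(chunk).hexdigest().encode('ascii'))
    return etag_of_etags.hexdigest()

# Compare against the Etag returned on HEAD of the manifest (quotes stripped).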

Thanks for the feedback.

-Clay

1.
http://docs-draft.openstack.org/91/219991/7/check/gate-swift-docs/75fb84c//doc/build/html/overview_large_objects.html#module-swift.common.middleware.slo
2.
https://github.com/openstack/python-swiftclient/commit/ff0b3b02f07de341fa9eb81156ac2a0565d85cd4

On Friday, October 9, 2015, Pierre SOUCHAY 
wrote:

> Hi Swift Developpers,
>
> We have been using Swift as an IaaS provider for more than two years now,
> but this mail is about feedback on the API side. I think it would be great
> to include some of the ideas in future revisions of API.
>
> I’ve been developping a few Swift clients in HTML (in Cloudwatt Dashboard)
> with CORS, Java with Swing GUI (
> https://github.com/pierresouchay/swiftbrowser) and Go for Swift to
> filesystem (https://github.com/pierresouchay/swiftsync/), so I have now a
> few ideas about how improving a bit the API.
>
> The API is quite straightforward and intuitive to use, and writing a
> client is not that difficult, but unfortunately, the Large Object support
> is not easy at all to deal with.
>
> The biggest issue is that there is no way to know whether a file is a
> large object when performing listings using JSON format, since, AFAIK a
> large object is an object with 0 bytes (so its size in bytes is 0), but it
> also has the hash of a zero-byte file.
>
> For instance, a signature of such object is :
>  {"hash": "d41d8cd98f00b204e9800998ecf8427e", "last_modified":
> "2015-06-04T10:23:57.618760", "bytes": 0, "name": "5G", "content_type": "
> octet/stream"}
>
> which is, exactly the hash of a 0 bytes file :
> $ echo -n | md5
> d41d8cd98f00b204e9800998ecf8427e
>
> Ok, now lets try HEAD :
> $ curl -vv -XHEAD -H X-Auth-Token:$TOKEN '
> https://storage.fr1.cloudwatt.com/v1/AUTH_61b8fe6dfd0a4ce69f6622ea7e0f/large_files/5G
> …
> < HTTP/1.1 200 OK
> < Date: Fri, 09 Oct 2015 19:43:09 GMT
> < Content-Length: 50
> < Accept-Ranges: bytes
> < X-Object-Manifest: large_files/5G/.part-50-
> < Last-Modified: Thu, 04 Jun 2015 10:16:33 GMT
> < Etag: "479517ec4767ca08ed0547dca003d116"
> < X-Timestamp: 1433413437.61876
> < Content-Type: octet/stream
> < X-Trans-Id: txba36522b0b7743d683a5d-00561818cd
>
> WTF ? While all files have the same value for ETag and hash, this is not
> the case for Large files…
>
> Furthermore, the ETag is not the md5 of the whole file, but the hash of
> the hash of all manifest files (as described somewhere hidden deeply in the
> documentation)
>
> Why this is a problem ?
> ---
>
> Imagine a « naive »  client using the API which performs some kind of Sync.
>
> The client downloads each file and, when it syncs, compares the local md5 to
> the md5 of the listing… of course, the hash is the hash of a zero-byte
> file… so it downloads the file again… and again… and again. Unfortunately
> for our naive client, these are exactly the kind of files we don’t want to
> download twice… since the file is probably huge (after all, it has been
> split for a reason, no?)
>
> I think this is really a design flaw since you need to know everything
> about Swift API and extensions to have a proper behavior. The minimum would
> be to at least return the same value as the ETag header.
>
> OK, let’s continue…
>
> We are not so naive… our Swift Sync client knows that 0-byte files need more
> work.
>
> * First issue: we have to know whether the file is a « real » 0-byte
> file or not. You may think most people do not create 0-byte files after
> all… but that assumption is wrong. Actually, I have seen two Object Storage
> middlewares using many 0-byte files (for instance to store metadata or to
> set up some kind of directory-like structure). So, in this case, we need to
> perform a HEAD request for each 0-byte file. If you have 1000 files like
> this, you have to perform 1000 HEAD requests to finally know that there is
> not any Large file. Not very efficient. Your Swift Sync client took 1
> second to sync 20G of data with the naive approach; now you need 5 minutes…
> the hash of 0 bytes is not a good idea at all.
>
> * Second issue: since the hash is the hash of all parts (I have an idea
> about why this decision was made, probably for performance reasons), your
> client cannot work on files since the hash of local file is not the hash of
> the Swift aggregated file (which i

Re: [openstack-dev] [TripleO] review priorities etherpad

2015-10-09 Thread Dan Prince
On Thu, 2015-10-08 at 09:17 -0400, James Slagle wrote:
> At the TripleO meething this week, we talked about using an etherpad
> to help get some organization around reviews for the high priority
> themes in progress.
> 
> I started on one: 
> https://etherpad.openstack.org/p/tripleo-review-priorities

Nice. Thanks.

> 
> And I subjectively added a few things :). Feel free to add more
> stuff.
> Personally, I like seeing it organized by "feature" or theme instead
> of git repo, but we can work out whatever seems best.

Agree. For some things it really helps to see things grouped by feature
in an etherpad.

Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] REST API to return ip-margin

2015-10-09 Thread Aihua Li

> Network names are not guaranteed to be unique. This could cause problems. If 
> I recall correctly we had a similar discussion about one of the plugins (One 
> of the IBM ones?) where they ran into the issue of network names and 
> uniqueness.
My proposal is to use network-uuid as the key, and send the network name in the 
body, as shown below.
{ "network-1-uuid": { "total-ips" : 256
   "available-ips" : count1,
   "name" : test-network,
}}
 == Aihua Edward Li == 


 On Friday, October 9, 2015 1:27 PM, Sean M. Collins  
wrote:
   

 On Fri, Oct 09, 2015 at 02:38:03PM EDT, Aihua Li wrote:
>  For this use-case, we need to return the network name in the response. We also 
> have the implementation and accompanying tempest test scripts. The issue 
>1457986 is currently assigned to Mike Dorman. I am curious to see where we are 
>on this issue. Is the draft REST API ready? Can we incorporate my use-case 
>input into the considerations.

Network names are not guaranteed to be unique. This could cause problems. If I 
recall correctly we had a similar discussion about one of the plugins (One of 
the IBM ones?) where they ran into the issue of network names and uniqueness.

-- 
Sean M. Collins


  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Gregory Haynes
Excerpts from Joshua Harlow's message of 2015-10-08 15:24:18 +:
> On this point, and just thinking out loud. If we consider saving
> compute_node information into say a node in said DLM backend (for
> example a znode in zookeeper[1]); this information would be updated
> periodically by that compute_node *itself* (it would say contain
> information about what VMs are running on it, what their utilization is
> and so-on).
> 
> For example the following layout could be used:
> 
> /nova/compute_nodes/
> 
>  data could be:
> 
> {
> vms: [],
> memory_free: XYZ,
> cpu_usage: ABC,
> memory_used: MNO,
> ...
> }
> 
> Now if we imagine each/all schedulers having watches
> on /nova/compute_nodes/ ([2] consul and etc.d have equivalent concepts
> afaik) then when a compute_node updates that information a push
> notification (the watch being triggered) will be sent to the
> scheduler(s) and the scheduler(s) could then update a local in-memory
> cache of the data about all the hypervisors that can be selected from
> for scheduling. This avoids any reading of a large set of data in the
> first place (besides an initial read-once on startup to read the
> initial list + set up the watches); in a way it's similar to push
> notifications. Then when scheduling a VM -> hypervisor there isn't any
> need to query anything but the local in-memory representation that the
> scheduler is maintaining (and updating as watches are triggered)...
> 
> So this is why I was wondering about what capabilities of cassandra are
> being used here; because the above I think are unique capababilties of
> DLM like systems (zookeeper, consul, etcd) that could be advantageous
> here...
> 
> [1]
> https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes
> 
> [2]
> https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches

I wonder if we would even need to make something so specialized to get
this kind of local caching. I dont know what the current ZK tools are
but the original Chubby paper described that clients always have a
write-through cache for nodes which they set up subscriptions for in
order to break the cache.

Also, re: etcd - The last time I checked their subscription API was
woefully inadequate for performing this type of thing without herding
issues.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Gregory Haynes
Excerpts from Chris Friesen's message of 2015-10-09 19:36:03 +:
> On 10/09/2015 12:55 PM, Gregory Haynes wrote:
> 
> > There is a more generalized version of this algorithm for concurrent
> > scheduling I've seen a few times - Pick N options at random, apply
> > heuristic over that N to pick the best, attempt to schedule at your
> > choice, retry on failure. As long as you have a fast heuristic and your
> > N is sufficiently smaller than the total number of options then the
> > retries are rare-ish and cheap. It also can scale out extremely well.
> 
> If you're looking for a resource that is relatively rare (say you want a 
> particular hardware accelerator, or a very large number of CPUs, or even to 
> be 
> scheduled "near" to a specific other instance) then you may have to retry 
> quite 
> a lot.
> 
> Chris
> 

Yep. You can either be fast or correct. There is no solution which will
both scale easily and allow you to schedule to a very precise node
efficiently or this would be a solved problem.

There is a not too bad middle ground here though - you can definitely do
some filtering beforehand efficiently (especially if you have some kind
of local cache similar to what Josh mentioned with ZK) and then this is
less of an issue. This is definitely a big step in complexity though...

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Sean Dague
On 10/09/2015 02:52 PM, Jonathan D. Proulx wrote:
> On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
> :On 10/09/2015 01:39 PM, David Stanek wrote:
> :>
> :>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx  :>> wrote:
> :>As an operator I'd be happy to use SRV records to define endpoints,
> :>though multiple regions could make that messy.
> :>
> :>would we make subdomains per region or include region name in the
> :>service name?
> :>
> :>_compute-regionone._tcp.example.com 
> :>-vs-
> :>_compute._tcp.regionone.example.com 
> :>
> :>Also not all operators can control their DNS to this level so it
> :>couldn't be the only option.
> :
> :SO - XMPP does this. The way it works is that if your XMPP provider
> :has put the appropriate records in DNS, then everything Just Works. If
> :not, then you, as a consumer, have several pieces of information you
> :need to provide by hand.
> :
> :Of course, there are already several pieces of information you have
> :to provide by hand to connect to OpenStack, so needing to download a
> :manifest file or something like that to talk to a cloud in an
> :environment where the people running a cloud do not have the ability
> :to add information to DNS (boggles) shouldn't be that terrible.
> 
> yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
> of local config options is manageable. A cloud with X endpoints and Y
> regions is significantly more.
> 
> Not to say this couldn't be done by packing more stuff into the openrc
> or equivalent so users don't need to directly enter all that, but that
> would be a significant change and one I think would be more difficult
> for smaller operations.
> 
> :One could also imagine an in-between option where OpenStack could run
> :an _optional_ DNS for this purpose - and then the only 'by-hand'
> :you'd need for clouds with no real DNS is the location of the
> :discover DNS.
> 
> Yes a special purpose DNS (a la dnsbl) might be preferable to
> pushing around static configs.

I do realize lots of people want to go in much more radical directions
here. I think we have to be really careful about that. The current
cinder v1 -> v2 transition challenges demonstrate how much inertia there
is. 3 years of talking about a Tasks API is another instance of it.

We aren't starting with a blank slate. This is brownfield development.
There are enough users of this that making shifts need to be done in
careful shifts that enable a new thing similar enough to the old thing,
that people will easily be able to take advantage of it. Which means I
think deciding to jump off the REST bandwagon for this is currently a
bridge too far. At least to get anything tangible done in the next 6 to
12 months.

I think getting us a service catalog served over REST that doesn't
require auth, and doesn't require tenant_ids in urls, gets us someplace
we could figure out a DNS representation (for those that wanted that).
But we have to tick / tock this and not change transports and
representations at the same time.

And, as I've definitely discovered through this process the Service
Catalog today has been fluid enough that where it is used, and what
people rely on in it, isn't always clear all at once. For instance,
tenant_ids in urls are very surface features in Nova (we don't rely on
it, we're using the context), don't exist at all in most new services,
and are very corely embedded in Swift. This is part of what has also
required the service catalog is embedded in the Token, which causes token
bloat, and has led to other features to try to shrink the catalog by
filtering it by what a user is allowed. Which in turn ended up being
used by Horizon to populate the feature matrix users see.

So we're pulling on a thread, and we have to do that really carefully.

I think the important thing is to focus on making sure that what we have in
6 months doesn't break current users / applications, and is incrementally
closer to our end game. That's the lens I'm going to keep putting on this one.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Everett Toews
On Oct 9, 2015, at 9:39 AM, Sean Dague  wrote:
> 
> It looks like some great conversation got going on the service catalog
> standardization spec / discussion at the last cross project meeting.
> Sorry I wasn't there to participate.
> 
> A lot of that ended up in here (which was an ether pad stevemar and I
> started working on the other day) -
> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
> 
> A couple of things that would make this more useful:
> 
> 1) if you are commenting, please (ircnick) your comments. It's not easy
> to always track down folks later if the comment was not understood.
> 
> 2) please provide link to code when explaining a point. Github supports
> the ability to very nicely link to (and highlight) a range of code by a
> stable object ref. For instance -
> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
> 
> That will make comments about X does Y, or Z can't do W, more clear
> because we'll all be looking at the same chunk of code and start to
> build more shared context here. One of the reasons this has been long
> and difficult is that we're missing a lot of that shared context between
> projects. Reassembling that by reading each other's relevant code will
> go a long way to understanding the whole picture.
> 
> 
> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.

It's likely you're already aware of it but see

https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

for many examples of service catalogs from both public and private OpenStack 
clouds.

Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Feedback about Swift API - Especially about Large Objects

2015-10-09 Thread Pierre SOUCHAY
Hi Swift Developpers,

We have been using Swift as an IaaS provider for more than two years now, but 
this mail is about feedback on the API side. I think it would be great to 
include some of the ideas in future revisions of API.

I’ve been developing a few Swift clients in HTML (in Cloudwatt Dashboard) with 
CORS, Java with a Swing GUI (https://github.com/pierresouchay/swiftbrowser) and 
Go for Swift to filesystem (https://github.com/pierresouchay/swiftsync/), so I 
now have a few ideas about how to improve the API a bit.

The API is quite straightforward and intuitive to use, and writing a client is 
not that difficult, but unfortunately, the Large Object support is not easy at 
all to deal with.

The biggest issue is that there is no way to know whether a file is a large 
object when performing listings using JSON format, since, AFAIK, a large object 
is an object with 0 bytes (so its size in bytes is 0), but it also has the hash 
of a zero-byte file.

For instance, a signature of such object is :
 {"hash": "d41d8cd98f00b204e9800998ecf8427e", "last_modified": 
"2015-06-04T10:23:57.618760", "bytes": 0, "name": "5G", "content_type": 
"octet/stream"}

which is, exactly the hash of a 0 bytes file :
$ echo -n | md5
d41d8cd98f00b204e9800998ecf8427e

Ok, now lets try HEAD :
$ curl -vv -XHEAD -H X-Auth-Token:$TOKEN 
'https://storage.fr1.cloudwatt.com/v1/AUTH_61b8fe6dfd0a4ce69f6622ea7e0f/large_files/5G
…
< HTTP/1.1 200 OK
< Date: Fri, 09 Oct 2015 19:43:09 GMT
< Content-Length: 50
< Accept-Ranges: bytes
< X-Object-Manifest: large_files/5G/.part-50-
< Last-Modified: Thu, 04 Jun 2015 10:16:33 GMT
< Etag: "479517ec4767ca08ed0547dca003d116"
< X-Timestamp: 1433413437.61876
< Content-Type: octet/stream
< X-Trans-Id: txba36522b0b7743d683a5d-00561818cd

WTF ? While all files have the same value for ETag and hash, this is not the 
case for Large files…

Furthermore, the ETag is not the md5 of the whole file, but the hash of the 
hash of all manifest files (as described somewhere hidden deeply in the 
documentation)

Why this is a problem ?
---

Imagine a « naive »  client using the API which performs some kind of Sync.

The client downloads each file and, when it syncs, compares the local md5 to the 
md5 of the listing… of course, the hash is the hash of a zero-byte file… so 
it downloads the file again… and again… and again. Unfortunately for our naive 
client, these are exactly the kind of files we don’t want to download twice… 
since the file is probably huge (after all, it has been split for a reason, no?)

I think this is really a design flaw since you need to know everything about 
Swift API and extensions to have a proper behavior. The minimum would be to at 
least return the same value as the ETag header.

OK, let’s continue…

We are not so naive… our Swift Sync client knows that 0-byte files need more work.

* First issue: we have to know whether the file is a « real » 0-byte file or 
not. You may think most people do not create 0-byte files after all… but that 
assumption is wrong. Actually, I have seen two Object Storage middlewares using 
many 0-byte files (for instance to store metadata or to set up some kind of 
directory-like structure). So, in this case, we need to perform a HEAD request 
for each 0-byte file (see the sketch at the end of this mail). If you have 1000 
files like this, you have to perform 1000 HEAD requests to finally know that 
there is not any Large file. Not very efficient. Your Swift Sync client took 1 
second to sync 20G of data with the naive approach; now you need 5 minutes… the 
hash of 0 bytes is not a good idea at all.

* Second issue: since the hash is the hash of all parts (I have an idea about 
why this decision was made, probably for performance reasons), your client 
cannot work on files since the hash of the local file is not the hash of the Swift 
aggregated file (which is the hash of all the hashes in the manifest). So, it means 
you cannot work on existing data; you have to either:
 - split all the files in the same way as the manifest, compute the MD5 of each 
part, then compute the MD5 of the hashes and compare to the MD5 on the server… (ok… 
doable, but I gave up on such a system)
 - have a local database in your client (when you download, store the REAL hash 
of the file and record that in fact you have to compare it to the HASH returned by 
the server)
 - perform some kind of crappy heuristics (size + grab the starting bytes of 
each part or something like that…)

* Third issue:
 - If you don’t want to store the parts of your object file, you have to wait 
for all your HEAD requests to finish since it is the only way to guess all the 
files that are referenced in your manifest headers.

So to summarize, I think the current API really needs some refinement around 
listings, since a competent developer may trust the bytes value and the hash 
value and create an algorithm that does not behave nicely
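
To spell out the HEAD-based workaround described under the first issue
above, a sketch assuming python-swiftclient (the X-Object-Manifest and
X-Static-Large-Object headers are the standard DLO/SLO markers):

from swiftclient import client as swift

def classify_zero_byte_entries(url, token, container, listing):
    # For every 0-byte listing entry, HEAD it to tell real empty objects
    # apart from DLO/SLO manifests -- one extra round-trip per entry.
    manifests = {}
    for entry in listing:
        if entry['bytes'] != 0:
            continue
        headers = swift.head_object(url, token, container, entry['name'])
        if 'x-object-manifest' in headers:
            manifests[entry['name']] = 'DLO'
        elif headers.get('x-static-large-object', '').lower() == 'true':
            manifests[entry['name']] = 'SLO'
    return manifests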

Re: [openstack-dev] [Horizon] Suggestions for handling new panels and refactors in the future

2015-10-09 Thread Tripp, Travis S
Hi Doug!

I think this is a great discussion topic and you summarize your points very 
nicely!

 I wish you’d responded to this thread, though:  
https://openstack.nimeyo.com/58582/openstack-dev-horizon-patterns-for-angular-panels,
 because it is talking about the same problem. This is option 3 I mentioned 
there and I do think this is still a viable option to consider, but we should 
discuss all the options.

Please consider that thread as my initial response to your email… and let’s 
keep discussing!

Thanks,
Travis

From: Douglas Fish
Reply-To: OpenStack List
Date: Friday, October 9, 2015 at 8:42 AM
To: OpenStack List
Subject: [openstack-dev] [Horizon] Suggestions for handling new panels and 
refactors in the future

I have two suggestions for handling both new panels and refactoring existing 
panels that I think could benefit us in the future:
1) When we are creating a panel that's a major refactor of an existing one, it 
should be a new, separate panel, not a direct code replacement of the existing 
panel
2) New panels (include the refactors of existing panels) should be developed in 
an out of tree gerrit repository.

Why make refactors a separate panel?

I was taken a bit off guard after we merged the Network Topology->Curvature 
improvement: this was a surprise to some people outside of the Horizon 
community (though it had been discussed within Horizon for as long as I've been 
on the project). In retrospect, I think it would have been better to keep both 
the old Network Topology and new curvature based topology in our Horizon 
codebase. Doing so would have allowed operators to perform A-B/ Red-Black 
testing if they weren't immediately convinced of the awesomeness of the panel. 
It also would have allowed anyone with a customization of the Network Topology 
panel to have some time to configure their Horizon instance to continue to use 
the Legacy panel while they updated their customization to work with the new 
panel.

Perhaps we should treat panels more like an API element and take them through a 
deprecation cycle before removing them completely. Giving time for customizers 
to update their code is going to be especially important as we build angular 
replacements for python panels. While we have much better plugin support for 
angular there is still a learning curve for those developers.

Why build refactors and new panels out of tree?

First off, it appears to me trying to build new panels in tree has been fairly 
painful. I've seen big long lived patches pushed along without being merged. 
It's quite acceptable and expected to quickly merge half-complete patches into 
a brand new repository - but you can't behave that way working in tree in 
Horizon. Horizon needs to be kept production/operator ready. External 
repositories do not. Merging code quickly can ease collaboration and avoid this 
kind of long lived patch set.

Secondly, keeping new panels/plugins in a separate repository decentralizes 
decisions about which panels are "ready" and which aren't. If one group feels a 
plugin is "ready" they can make it their default version of the panel, and 
perhaps put resources toward translating it. If we develop these panels in-tree 
we need to make a common decision about what "ready" means - and once it's in 
everyone who wants a translated Horizon will need to translate it.

Finally, I believe developing new panels out of tree will help improve our 
plugin support in Horizon. It's this whole "eating your own dog food" idea. As 
soon as we start using our own Horizon plugin mechanism for our own development 
we are going to become aware of its shortcomings (like quotas) and will be 
sufficiently motivated to fix them.

Looking forward to further discussion and other ideas on this!

Doug Fish

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] REST API to return ip-margin

2015-10-09 Thread Sean M. Collins
On Fri, Oct 09, 2015 at 02:38:03PM EDT, Aihua Li wrote:
>  For this use-case, we need to return the network name in the response. We also 
> have the implementation and accompanying tempest test scripts. The issue 
> 1457986 is currently assigned to Mike Dorman. I am curious to see where we 
> are on this issue. Is the draft REST API ready? Can we incorporate my 
> use-case input into the considerations.

Network names are not guaranteed to be unique. This could cause problems. If I 
recall correctly we had a similar discussion about one of the plugins (One of 
the IBM ones?) where they ran into the issue of network names and uniqueness.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Joshua Harlow

And also we should probably deprecate/not recommend:

http://docs.openstack.org/developer/nova/api/nova.scheduler.filters.json_filter.html#nova.scheduler.filters.json_filter.JsonFilter

That filter IMHO basically disallows optimizations like forming SQL 
statements for each filter (and then letting the DB do the heavy 
lifting) or say having each filter say 'oh my logic can be performed by 
a prepared statement ABC and u should just use that instead' (and then 
letting the DB do the heavy lifting).
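
For context, the query that filter accepts is an arbitrary client-supplied
expression tree passed as a scheduler hint, roughly like the example from
the docs:

# Evaluated host-by-host in python at request time, so there is nothing the
# scheduler can turn into a prepared statement ahead of time:
query = ['and',
         ['>=', '$free_ram_mb', 1024],
         ['>=', '$free_disk_mb', 200 * 1024]]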


Chris Friesen wrote:

On 10/09/2015 12:25 PM, Alec Hothan (ahothan) wrote:


Still the point from Chris is valid. I guess the main reason openstack is
going with multiple concurrent schedulers is to scale out by
distributing the
load between multiple instances of schedulers because 1 instance is too
slow. This discussion is about coordinating the many instances of
schedulers
in a way that works and this is actually a difficult problem and will get
worst as the number of variables for instance placement increases (for
example NFV is going to require a lot more than just cpu pinning, huge
pages
and numa).

Has anybody looked at why 1 instance is too slow and what it would
take to
make 1 scheduler instance work fast enough? This does not preclude the
use of
concurrency for finer grain tasks in the background.


Currently we pull data on all (!) of the compute nodes out of the
database via a series of RPC calls, then evaluate the various filters in
python code.

I suspect it'd be a lot quicker if each filter was a DB query.

Also, ideally we'd want to query for the most "strict" criteria first,
to reduce the total number of comparisons. For example, if you want to
implement the "affinity" server group policy, you only need to test a
single host. If you're matching against host aggregate metadata, you
only need to test against hosts in matching aggregates.

Chris



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-09 Thread David Stanek
I would like to start running a recurring bug squashing day. The general
idea is to get more focus on bugs and stability. You can find the details
here: https://etherpad.openstack.org/p/keystone-office-hours


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Different OpenStack components

2015-10-09 Thread Abhishek Talwar
Hi Folks,

I have been working with OpenStack for a while now. I know that other than the main components (nova, neutron, glance, cinder, horizon, tempest, keystone etc.) there are many more components in OpenStack (like Sahara and Trove).

So, where can I see the list of all existing OpenStack components, and is there any documentation for these components so that I can read what roles they play?

Thanks and Regards
Abhishek Talwar



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/09/2015 12:25 PM, Alec Hothan (ahothan) wrote:


Still the point from Chris is valid. I guess the main reason openstack is
going with multiple concurrent schedulers is to scale out by distributing the
load between multiple instances of schedulers because 1 instance is too
slow. This discussion is about coordinating the many instances of schedulers
in a way that works and this is actually a difficult problem and will get
worst as the number of variables for instance placement increases (for
example NFV is going to require a lot more than just cpu pinning, huge pages
and numa).

Has anybody looked at why 1 instance is too slow and what it would take to
make 1 scheduler instance work fast enough? This does not preclude the use of
concurrency for finer grain tasks in the background.


Currently we pull data on all (!) of the compute nodes out of the database via a 
series of RPC calls, then evaluate the various filters in python code.


I suspect it'd be a lot quicker if each filter was a DB query.

Also, ideally we'd want to query for the most "strict" criteria first, to reduce 
the total number of comparisons.  For example, if you want to implement the 
"affinity" server group policy, you only need to test a single host.  If you're 
matching against host aggregate metadata, you only need to test against hosts in 
matching aggregates.
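
As a rough sketch of the "each filter is a DB query" idea (SQLAlchemy
assumed, with a made-up compute_nodes table rather than the real nova
models):

from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)

engine = create_engine('sqlite://')
meta = MetaData()
compute_nodes = Table('compute_nodes', meta,
                      Column('host', String, primary_key=True),
                      Column('host_aggregate', String),
                      Column('free_ram_mb', Integer),
                      Column('free_disk_gb', Integer))
meta.create_all(engine)

# Each "filter" contributes a WHERE clause, most selective first, and the
# database does the elimination instead of a python loop over every host.
query = (select([compute_nodes.c.host])
         .where(compute_nodes.c.host_aggregate == 'ssd')
         .where(compute_nodes.c.free_ram_mb >= 4096)
         .where(compute_nodes.c.free_disk_gb >= 80))

hosts = [row.host for row in engine.execute(query)]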


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/09/2015 12:55 PM, Gregory Haynes wrote:


There is a more generalized version of this algorithm for concurrent
scheduling I've seen a few times - Pick N options at random, apply
heuristic over that N to pick the best, attempt to schedule at your
choice, retry on failure. As long as you have a fast heuristic and your
N is sufficiently smaller than the total number of options then the
retries are rare-ish and cheap. It also can scale out extremely well.


If you're looking for a resource that is relatively rare (say you want a 
particular hardware accelerator, or a very large number of CPUs, or even to be 
scheduled "near" to a specific other instance) then you may have to retry quite 
a lot.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint to change (expand) traditional Ethernet interface naming schema in Fuel

2015-10-09 Thread Sergey Vasilenko
>
> >I would like to pay your attention to the changing interface naming
> >schema, which is proposed to be implemented in Fuel [1]. In brief,
> >Ethernet network interfaces may not be named as ethX, and there is a
> >reported bug about it [2].
> >There are a lot of reasons to switch to the new naming schema, not only
> >because it has been used in CentOS 7 (and probably will be used in next
> >Ubuntu LTS), but because the new naming schema gave more predictable
> >interface names [3]. There is a reported bug related to the topic [4].
>

The L23network module is interface naming scheme agnostic.
The only protection is on bridge and bond interface names -- you can't name a bond
or a bridge something like 'enp2s0', because such names are reserved for NICs.



> You might be interested to look at the os-net-config tool - we faced this
> exact same issue with TripleO, and solved it via os-net-config, which
> provides abstractions for network configuration, including mapping device
> aliases (e.g "nic1") to real NIC names (e.g "em1" or whatever).
>
> https://github.com/openstack/os-net-config
>
>
It's an interesting project, and the proposed format for network configuration
is interesting, but...
The project is too young, and it doesn't allow configuring some things that
L23network already supports.
The main problem of this project is its approach to changing interface
options. It doesn't use prefetch/flush mechanics as in puppet; in most cases it
just executes commands to make the change. Such an approach doesn't allow
properly re-configuring an existing cloud if it is under production load.

I can support the config format from os-net-config as an additional network scheme
format too, but, IMHO, this hierarchical format is not as convenient as a flat one.

NIC mapping in Nailgun is already implemented in the template networking. If
we need to use it for other cases -- please ask Alexey Kasatkin.

/sv
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-10-09 Thread Jay Pipes

On 10/09/2015 02:16 PM, Matt Riedemann wrote:

On 10/9/2015 12:03 PM, Jay Pipes wrote:

I had a proposal [1] to completely rework the whole shadow table mess
and db archiving functionality. I continue to believe that is the
appropriate solution for this, and that we should rip out the existing
functionality because it simply does not work properly.

Best,
-jay

[1] https://review.openstack.org/#/c/137669/


Are you going to pick that back up? Or sick some minions on it.


I don't personally have the bandwidth to do this. If anyone out there in 
Nova contributor land has interest, just find me on IRC. :)


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Gregory Haynes
Excerpts from Zane Bitter's message of 2015-10-09 17:09:46 +:
> On 08/10/15 21:32, Ian Wells wrote:
> >
> > > 2. if many hosts suit the 5 VMs then this is *very* unlucky, because
> > we should be choosing a host at random from the set of
> > suitable hosts and that's a huge coincidence - so this is a tiny
> > corner case that we shouldn't be designing around
> >
> > Here is where we differ in our understanding. With the current
> > system of filters and weighers, 5 schedulers getting requests for
> > identical VMs and having identical information are *expected* to
> > select the same host. It is not a tiny corner case; it is the most
> > likely result for the current system design. By catching this
> > situation early (in the scheduling process) we can avoid multiple
> > RPC round-trips to handle the fail/retry mechanism.
> >
> >
> > And so maybe this would be a different fix - choose, at random, one of
> > the hosts above a weighting threshold, not choose the top host every
> > time? Technically, any host passing the filter is adequate to the task
> > from the perspective of an API user (and they can't prove if they got
> > the highest weighting or not), so if we assume weighting an operator
> > preference, and just weaken it slightly, we'd have a few more options.
> 
> The optimal way to do this would be a weighted random selection, where 
> the probability of any given host being selected is proportional to its 
> weighting. (Obviously this is limited by the accuracy of the weighting 
> function in expressing your actual preferences - and it's at least 
> conceivable that this could vary with the number of schedulers running.)
> 
> In fact, the choice of the name 'weighting' would normally imply that 
> it's done this way; hearing that the 'weighting' is actually used as a 
> 'score' with the highest one always winning is quite surprising.
> 
> cheers,
> Zane.
> 

There is a more generalized version of this algorithm for concurrent
scheduling I've seen a few times - Pick N options at random, apply
heuristic over that N to pick the best, attempt to schedule at your
choice, retry on failure. As long as you have a fast heuristic and your
N is sufficiently smaller than the total number of options then the
retries are rare-ish and cheap. It also can scale out extremely well.

Obviously you lose some of the ability to micro-manage where things are
placed with a scheduling setup like that, but if scaling up is the
concern I really hope that isn't a problem...
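
A sketch of that algorithm, just to make it concrete (the weigher and the
claim step are stand-ins for whatever a real scheduler would use):

import random

def schedule(hosts, weigher, try_claim, sample_size=20, max_retries=5):
    # Sample a small random subset, take the best-weighted member, and
    # retry with a fresh sample if the claim races with another scheduler.
    for _ in range(max_retries):
        sample = random.sample(hosts, min(sample_size, len(hosts)))
        best = max(sample, key=weigher)
        if try_claim(best):
            return best
    raise Exception('too much contention, giving up')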

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Jonathan D. Proulx
On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
:On 10/09/2015 01:39 PM, David Stanek wrote:
:>
:>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx > wrote:
:>As an operator I'd be happy to use SRV records to define endpoints,
:>though multiple regions could make that messy.
:>
:>would we make subdomains per region or include region name in the
:>service name?
:>
:>_compute-regionone._tcp.example.com 
:>-vs-
:>_compute._tcp.regionone.example.com 
:>
:>Also not all operators can control their DNS to this level so it
:>couldn't be the only option.
:
:SO - XMPP does this. The way it works is that if your XMPP provider
:has put the appropriate records in DNS, then everything Just Works. If
:not, then you, as a consumer, have several pieces of information you
:need to provide by hand.
:
:Of course, there are already several pieces of information you have
:to provide by hand to connect to OpenStack, so needing to download a
:manifest file or something like that to talk to a cloud in an
:environment where the people running a cloud do not have the ability
:to add information to DNS (boggles) shouldn't be that terrible.

yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
of local config options is manageable. A cloud with X endpoints and Y
regions is significantly more.

Not to say this couldn't be done by packing more stuff into the openrc
or equivalent so users don't need to directly enter all that, but that
would be a significant change and one I think would be more difficult
for smaller operations.

:One could also imagine an in-between option where OpenStack could run
:an _optional_ DNS for this purpose - and then the only 'by-hand'
:you'd need for clouds with no real DNS is the location of the
:discover DNS.

Yes a special purpose DNS (a la dnsbl) might be preferable to
pushing around static configs.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Tim Bell
There is a need to distinguish between server-side py26 support, which is
generally under the control of the service provider, and py26 support on the
client side. For a service provider to push all of their hypervisors and
service machines to RHEL 7 is under their control, but requiring all of their
users to do the same is much more difficult.

 

Thus, I feel there should be different decisions and communication w.r.t.
the time scales for deprecation of py26 on clients compared to the server
side. A project may choose to make them together but equally some may choose
to delay the mandatory client migration to py27 while requiring the server
to move.

 

Tim

 

From: Sahdev P Zala [mailto:spz...@us.ibm.com] 
Sent: 09 October 2015 20:42
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Murano] py26 support in python-muranoclient

 

> From: Clark Boylan
> To: openstack-dev@lists.openstack.org
> Date: 10/09/2015 02:00 PM
> Subject: Re: [openstack-dev] [Murano] py26 support in python-muranoclient
> 
> 
> 
> On Fri, Oct 9, 2015, at 10:32 AM, Vahid S Hashemian wrote:
> > Serg, Jeremy,
> > 
> > Thank you for your response, so the issue I ran into with my patch is
the 
> > gate job failing on python26.
> > You can see it here: https://review.openstack.org/#/c/232271/
> > 
> > Serg suggested that we add 2.6 support to tosca-parser, which is fine
> > with 
> > us.
> > But I got a bit confused after reading Jeremy's response.
> > It seems to me that the support will be going away, but there is no 
> > timeline (and therefore no near-term plan?)
> > So, I'm hoping Jeremy can advise whether he also recommends the same 
> > thing, or not.
> There is a timeline (though admittedly hard to find) at
> https://etherpad.openstack.org/p/YVR-relmgt-stable-branch which says
> Juno support would run through the end of November. Since Juno is the
> last release to support python2.6 we will remove python2.6 support from
> the test infrastructure at that time as well.
> 
> I personally probably wouldn't bother with extra work to support
> python2.6, but that all depends on how much work it is and whether or
> not you find value in it. Ultimately it is up to you, just know that the
> Infrastructure team will stop hosting testing for python2.6 when Juno is
> EOLed.
> 
> Hope this helps,
> Clark

Thanks Clark and Jeremy! This is very helpful. 

Serg, now knowing that CI testing is not going to continue in a few weeks and
that many other projects have dropped python 2.6 support or are getting there,
it would be great if Murano decides the same. If the Murano team decides to
continue 2.6 support, we will need to enable support in tosca-parser as well.
As you mentioned, it may not be a lot of work for us and we are totally fine
with making changes, but without automated tests it can be challenging in the
future. 

Thanks! 
Sahdev Zala



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L2 gateway project

2015-10-09 Thread Sukhdev Kapur
Hey Kyle,

We are down to couple of patches that are awaiting approval/merge. As soon
as it is done, I will let you know.

Thanks
-Sukhdev


On Fri, Oct 9, 2015 at 9:39 AM, Kyle Mestery  wrote:

> On Fri, Oct 9, 2015 at 10:13 AM, Gary Kotton  wrote:
>
>> Hi,
>> Who will be creating the stable/liberty branch?
>> Thanks
>> Gary
>>
>>
> I'll be doing this once someone from the L2GW team lets me know a commit
> SHA to create it from.
>
> Thanks,
> Kyle
>
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-09 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2015-10-09 09:51:28 -0500:
> 
> On 10/9/2015 1:49 AM, Paul Carlton wrote:
> >
> > On 08/10/15 16:49, Doug Hellmann wrote:
> >> Excerpts from Matt Riedemann's message of 2015-10-07 14:38:07 -0500:
> >>> Here's why:
> >>>
> >>> https://review.openstack.org/#/c/220622/
> >>>
> >>> That's marked as fixing an OSSA which means we'll have to backport the
> >>> fix in nova but it depends on a change to strutils.mask_password in
> >>> oslo.utils, which required a release and a minimum version bump in
> >>> global-requirements.
> >>>
> >>> To backport the change in nova, we either have to:
> >>>
> >>> 1. Copy mask_password out of oslo.utils and add it to nova in the
> >>> backport or,
> >>>
> >>> 2. Backport the oslo.utils change to a stable branch, release it as a
> >>> patch release, bump minimum required version in stable g-r and then
> >>> backport the nova change and depend on the backported oslo.utils stable
> >>> release - which also makes it a dependent library version bump for any
> >>> packagers/distros that have already frozen libraries for their stable
> >>> releases, which is kind of not fun.
> >> Bug fix releases do not generally require a minimum version bump. The
> >> API hasn't changed, and there's nothing new in the library in this case,
> >> so it's a documentation issue to ensure that users update to the new
> >> release. All we should need to do is backport the fix to the appropriate
> >> branch of oslo.utils and release a new version from that branch that is
> >> compatible with the same branch of nova.
> >>
> >> Doug
> >>
> >>> So I'm thinking this is one of those things that should ultimately live
> >>> in oslo-incubator so it can live in the respective projects. If
> >>> mask_password were in oslo-incubator, we'd have just fixed and
> >>> backported it there and then synced to nova on master and stable
> >>> branches, no dependent library version bumps required.
> >>>
> >>> Plus I miss the good old days of reviewing oslo-incubator
> >>> syncs...(joking of course).
> >>>
> >> __
> >>
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > I've been following this discussion, is there now a consensus on the way
> > forward?
> >
> > My understanding is that Doug is suggesting back porting my oslo.utils
> > change to the stable juno and kilo branches?
> >
> 
> It means you'll have to backport the oslo.utils change to each stable 
> branch that you also backport the nova change to, which probably goes 
> back to stable/juno (so liberty->kilo->juno backports in both projects).
> 

That sounds right. Ping the Oslo team in #openstack-oslo for reviews on
those stable branches as you prepare them and I'm sure we can help
expedite the updates.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Sahdev P Zala
> From: Clark Boylan 
> To: openstack-dev@lists.openstack.org
> Date: 10/09/2015 02:00 PM
> Subject: Re: [openstack-dev] [Murano] py26 support in 
python-muranoclient
> 
> 
> 
> On Fri, Oct 9, 2015, at 10:32 AM, Vahid S Hashemian wrote:
> > Serg, Jeremy,
> > 
> > Thank you for your response, so the issue I ran into with my patch is 
the 
> > gate job failing on python26.
> > You can see it here: https://review.openstack.org/#/c/232271/
> > 
> > Serg suggested that we add 2.6 support to tosca-parser, which is fine
> > with 
> > us.
> > But I got a bit confused after reading Jeremy's response.
> > It seems to me that the support will be going away, but there is no 
> > timeline (and therefore no near-term plan?)
> > So, I'm hoping Jeremy can advise whether he also recommends the same 
> > thing, or not.
> There is a timeline (though admittedly hard to find) at
> https://etherpad.openstack.org/p/YVR-relmgt-stable-branch which says
> Juno support would run through the end of November. Since Juno is the
> last release to support python2.6 we will remove python2.6 support from
> the test infrastructure at that time as well.
> 
> I personally probably wouldn't bother with extra work to support
> python2.6, but that all depends on how much work it is and whether or
> not you find value in it. Ultimately it is up to you, just know that the
> Infrastructure team will stop hosting testing for python2.6 when Juno is
> EOLed.
> 
> Hope this helps,
> Clark

Thanks Clark and Jeremy! This is very helpful. 

Serg, now knowing that CI testing is not going to continue in a few weeks and
that many other projects have dropped python 2.6 support or are getting there,
it would be great if Murano decides the same. If the Murano team decides to
continue 2.6 support, we will need to enable support in tosca-parser as well.
As you mentioned, it may not be a lot of work for us and we are totally fine
with making changes, but without automated tests it can be challenging in the
future. 

Thanks! 
Sahdev Zala



> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] REST API to return ip-margin

2015-10-09 Thread Aihua Li
 Hi, Neutrons,

We would like to have an API for neutron to return the ip-margin, i.e. the
number of IPs still available for use, for each network belonging to a tenant.
I have filed launchpad issue 1503462, which is now marked as a duplicate of
1457986.

I am glad to see there are more parties interested in this feature, and I
would like to see how we can coordinate the effort on it. We have two
use-cases for this feature:
1. To assist nova in selecting a network when creating a new VM instance. For
   this use-case, we would like to pass a tenant-id in the GET request and get
   ip-margins for all the networks belonging to the tenant.
2. To provide information to monitoring tools to show the capacities on a
   dashboard. For this use-case, we need to return the network name in the
   response.

We also have the implementation and accompanying tempest test scripts. Issue
1457986 is currently assigned to Mike Dorman. I am curious to see where we are
on this issue. Is the draft REST API ready? Can we incorporate my use-case
input into the considerations?

-- Aihua

Reference:
1. https://bugs.launchpad.net/neutron/+bug/1503462
2. https://bugs.launchpad.net/neutron/+bug/1457986


Reply, Reply All or Forward | moreopenstack-dev-ow...@lists.openstack.org 
toaihuaedwar...@yahoo.com Oct 8 at 11:41 AM- Forwarded Message -

You have to be a subscriber to post to this mailing list, so your
message has been automatically rejected. You can subscribe at:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

If you think that your messages are being rejected in error, contact
the mailing list owner at %(listowner)



== Aihua Edward Li ==
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Robert Collins
On 10 October 2015 at 03:57, Cory Benfield  wrote:
>
>> On 9 Oct 2015, at 15:18, Jeremy Stanley  wrote:
>>
>> On 2015-10-09 14:58:36 +0100 (+0100), Cory Benfield wrote:
>> [...]
>>> IMO, what OpenStack needs is a decision about where it’s getting
>>> its packages from, and then to refuse to mix the two.
>>
>> I have yet to find a Python-based operating system installable in
>> whole via pip. There will always be _at_least_some_ packages you
>> install from your operating system's package management. What you
>> seem to be missing is that Linux distros are now shipping base
>> images which include their python-requests and python-urllib3
>> packages already pre-installed as dependencies of Python-based tools
>> they deem important to their users.
>>
>
> Yeah, this has been an ongoing problem.
>
> For my part, Donald Stufft has informed me that if the distribution-provided 
> requests package has the appropriate install_requires field in its setup.py, 
> pip will respect that dependency.

It should but it won't :).

https://github.com/pypa/pip/issues/2687
and
https://github.com/pypa/pip/issues/988

The first one means that if someone does 'pip install -U urllib3' and
an unbundled requests with appropriate pin on urllib3 is already
installed, that pip will happily upgrade urllib3, breaking requests,
without complaining. It is fixable (with correct metadata of course).

The second one means that if anything - another package, or the user
via direct mention or requirements/constraints files - specifies a
urllib3 dependency (of any sort) then the requests dependency will be
silently ignored.
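
For illustration, the "correct metadata" here is simply an install_requires
pin in the unbundled requests package's setup.py - a sketch only, with a
made-up version range:

from setuptools import setup, find_packages

setup(
    name="requests",
    version="2.7.0",
    packages=find_packages(),
    # Declaring the urllib3 range this requests release was tested against
    # lets pip see the dependency once the resolver issues above are fixed.
    install_requires=["urllib3>=1.10.4,<1.11"],
)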

Both of these will be solved in the medium future - we're now at the
point of having POC branches, and once we've finished with the
constraints rollout and PEP-426 marker polish will be moving onto the
resolver work.

> Given that requests has recently switched to not providing mid-cycle urllib3 
> versions, it should be entirely possible for downstream redistributors in 
> Debian/Fedora to put that metadata into their packages when they unbundle 
> requests. I’m chasing up with our downstream redistributors right now to ask 
> them to start doing that.
>
> This should resolve the problem for systems where requests 2.7.0 or higher 
> are being used. In other systems, this problem still exists and cannot be 
> fixed by requests directly.

Well, if we get to a future where it is in-principle fixed, I'll be happy.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Alec Hothan (ahothan)

Still the point from Chris is valid.
I guess the main reason openstack is going with multiple concurrent schedulers 
is to scale out by distributing the load between multiple instances of 
schedulers because 1 instance is too slow.
This discussion is about coordinating the many instances of schedulers in a way 
that works, and this is actually a difficult problem that will get worse as the 
number of variables for instance placement increases (for example, NFV is going 
to require a lot more than just CPU pinning, huge pages and NUMA).

Has anybody looked at why 1 instance is too slow and what it would take to make 
1 scheduler instance work fast enough? This does not preclude the use of 
concurrency for finer grain tasks in the background.




On 10/9/15, 11:05 AM, "Clint Byrum"  wrote:

>Excerpts from Chris Friesen's message of 2015-10-09 10:54:36 -0700:
>> On 10/09/2015 11:09 AM, Zane Bitter wrote:
>> 
>> > The optimal way to do this would be a weighted random selection, where the
>> > probability of any given host being selected is proportional to its 
>> > weighting.
>> > (Obviously this is limited by the accuracy of the weighting function in
>> > expressing your actual preferences - and it's at least conceivable that 
>> > this
>> > could vary with the number of schedulers running.)
>> >
>> > In fact, the choice of the name 'weighting' would normally imply that it's 
>> > done
>> > this way; hearing that the 'weighting' is actually used as a 'score' with 
>> > the
>> > highest one always winning is quite surprising.
>> 
>> If you've only got one scheduler, there's no need to get fancy, you just 
>> pick 
>> the "best" host based on your weighing function.
>> 
>> It's only when you've got parallel schedulers that things get tricky.
>> 
>
>Note that I think you mean _concurrent_ not _parallel_ schedulers.
>
>Parallel schedulers would be trying to solve the same unit of work by
>breaking it up into smaller components and doing them at the same time.
>
>Concurrent means they're just doing different things at the same time.
>
>I know this is nit-picky, but we use the wrong word _A LOT_ and the
>problem space is actually vastly different, as parallelizable problems
>have a whole set of optimizations and advantages that generic concurrent
>problems (especially those involving mutating state!) have a whole set
>of race conditions that must be managed.
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 01:39 PM, David Stanek wrote:


On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:

On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
:> On Oct 9, 2015, at 12:28 PM, Monty Taylor wrote:
:>
:>> On 10/09/2015 11:21 AM, Shamail wrote:
:>>
:>>
:>>> On Oct 9, 2015, at 10:39 AM, Sean Dague wrote:
:>>>
:>>> It looks like some great conversation got going on the service
catalog
:>>> standardization spec / discussion at the last cross project
meeting.
:>>> Sorry I wasn't there to participate.
:>> Apologize if this is a question that has already been address
but why can't we just leverage something like consul.io?
:>
:> It's a good question and there have actually been some
discussions about leveraging it on the backend. However, even if we
did, we'd still need keystone to provide the multi-tenancy view on
the subject. consul wasn't designed (quite correctly I think) to be
a user-facing service for 50k users.
:>
:> I think it would be an excellent backend.
:Thanks, that makes sense.  I agree that it might be a good backend
but not the overall solution... I was bringing it up to ensure we
consider existing options (where possible) and spend cycles on the
unsolved bits.

As an operator I'd be happy to use SRV records to define endpoints,
though multiple regions could make that messy.

would we make subdomains per region or include region name in the
service name?

_compute-regionone._tcp.example.com 
-vs-
_compute._tcp.regionone.example.com 

Also not all operators can control their DNS to this level so it
couldn't be the only option.


SO - XMPP does this. The way it works is that if your XMPP provider has 
put the appropriate records in DNS, then everything Just Works. If not, 
then you, as a consumer, have several pieces of information you need to 
provide by hand.


Of course, there are already several pieces of information you have to 
provide by hand to connect to OpenStack, so needing to download a 
manifest file or something like that to talk to a cloud in an 
environment where the people running a cloud do not have the ability to 
add information to DNS (boggles) shouldn't be that terrible.


One could also imagine an in-between option where OpenStack could run an 
_optional_ DNS for this purpose - and then the only 'by-hand' you'd need 
for clouds with no real DNS is the location of the discover DNS.



Or are you talking about using an internal DNS implementation private
to the OpenStack Deployment?  I'm actually a bit less happy with that
idea.


I was able to put together an implementation[1] of DNS-SD loosely based
on RFC-6763[2]. It's really a proof of concept, but we've talked so much
about it that I decided to get something working. If this seems like a
viable option, then there's still much work to be done.

I'd love feedback.

1. https://gist.github.com/dstanek/093f851fdea8ebfd893d
2. https://tools.ietf.org/html/rfc6763

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-10-09 Thread Matt Riedemann



On 10/9/2015 12:03 PM, Jay Pipes wrote:

On 10/07/2015 11:04 AM, Matt Riedemann wrote:

I'm wondering why we don't reverse sort the tables using the sqlalchemy
metadata object before processing the tables for delete?  That's the
same thing I did in the 267 migration since we needed to process the
tree starting with the leafs and then eventually get back to the
instances table (since most roads lead to the instances table).


Yes, that would make a lot of sense to me if we used the SA metadata
object for reverse sorting.


When I get some free time next week I'm going to play with this.
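
For reference, a minimal sketch of that reverse sort (BASE is nova's
declarative base in nova.db.sqlalchemy.models; archive_rows_for() is a
hypothetical per-table helper, not existing nova code):

from nova.db.sqlalchemy.models import BASE

# sorted_tables is ordered parents-first (foreign key dependency order),
# so reversing it visits leaf tables before the tables they reference.
for table in reversed(BASE.metadata.sorted_tables):
    archive_rows_for(table)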




Another thing that's really weird is how max_rows is used in this code.
There is cumulative tracking of the max_rows value so if the value you
pass in is too small, you might not actually be removing anything.

I figured max_rows meant up to max_rows from each table, not max_rows
*total* across all tables. By my count, there are 52 tables in the nova
db model. The way I read the code, if I pass in max_rows=10 and say it
processes table A and archives 7 rows, then when it processes table B it
will pass max_rows=(max_rows - rows_archived), which would be 3 for
table B. If we archive 3 rows from table B, rows_archived >= max_rows
and we quit. So to really make this work, you have to pass in something
big for max_rows, like 1000, which seems completely random.
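
A rough illustration of that cumulative behaviour (not the actual nova code;
archive_table() stands in for the per-table archive routine):

def archive_deleted_rows(tables, max_rows):
    rows_archived = 0
    for table in tables:
        limit = max_rows - rows_archived              # budget shrinks per table
        rows_archived += archive_table(table, limit)
        if rows_archived >= max_rows:
            break                                     # ~max_rows rows total, not per table
    return rows_archived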

Does this seem odd to anyone else?


Uhm, yes it does.

 > Given the relationships between

tables, I'd think you'd want to try and delete max_rows for all tables,
so archive 10 instances, 10 block_device_mapping, 10 pci_devices, etc.

I'm also bringing this up now because there is a thread in the operators
list which pointed me to a set of scripts that operators at GoDaddy are
using for archiving deleted rows:

http://lists.openstack.org/pipermail/openstack-operators/2015-October/008392.html


Presumably because the command in nova doesn't work. We should either
make this thing work or just punt and delete it because no one cares.


The db archive code in Nova just doesn't make much sense to me at all.
The algorithm for purging stuff, like you mention above, does not take
into account the relationships between tables; instead of diving into
the children relations and archiving those first, the code just uses a
simplistic "well, if we hit a foreign key error, just ignore and
continue archiving other things, we will eventually repeat the call to
delete this row" strategy:

https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L6021-L6023


Yeah, I noticed that too and I don't think it actually does anything. We 
never actually come back since that would require some 
tracking/stack/recursion stuff to retry failed tables, which we don't do.





I had a proposal [1] to completely rework the whole shadow table mess
and db archiving functionality. I continue to believe that is the
appropriate solution for this, and that we should rip out the existing
functionality because it simply does not work properly.

Best,
-jay

[1] https://review.openstack.org/#/c/137669/


Are you going to pick that back up? Or sic some minions on it.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Vahid S Hashemian
Thanks, Clark. That really helps.
We'll consider this timeline and make a decision on which way to go, bearing
in mind that waiting for the 2.6 phase-out would mean a two-month delay in
getting this blueprint implemented.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] mox to mock migration

2015-10-09 Thread Jay Dobies
This sounds good; I was hoping it'd be acceptable to use etherpad. I 
filed a blueprint [1] but I'm anticipating using the etherpad much more 
regularly to track which files are being worked on or completed.


[1] https://blueprints.launchpad.net/heat/+spec/mox-to-mock-conversion
[2] https://etherpad.openstack.org/p/heat-mox-to-mock

Thanks for the guidance :)

On 10/09/2015 12:42 PM, Steven Hardy wrote:

On Fri, Oct 09, 2015 at 09:06:57AM -0400, Jay Dobies wrote:

I forget where we left things at the last meeting with regard to whether or
not there should be a blueprint on this. I was going to work on some during
some downtime but I wanted to make sure I wasn't overlapping with what
others may be converting (it's more time consuming than I anticipated).

Any thoughts on how to track it?


I'd probably suggest raising either a bug or a blueprint (not spec), then
link from that to an etherpad where you can track all the tests requiring
rework, and who's working on them.

"it's more time consuming than I anticipated" is pretty much my default
response for anything to do with heat unit tests btw, good luck! :)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Backport policy for Liberty

2015-10-09 Thread Fox, Kevin M
As an Op, that sounds reasonable so long as they aren't defaulted on. In theory 
it shouldn't be much different than a distro adding additional packages. The 
new packages don't affect existing systems unless the op requests them to be 
installed.

With my App Catalog hat on, I'm curious how horizon plugins might fit into that 
scheme. The App Catalog plugin would need to be added directly to the Horizon 
container. I'm sure there are other plugins that may want to get loaded into 
the container too. They should all be able to be enabled/disabled via 
docker env variables, though. Any thoughts there?

Thanks,
Kevin

From: Steven Dake (stdake) [std...@cisco.com]
Sent: Thursday, October 08, 2015 12:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [kolla] Backport policy for Liberty

Kolla operators and developers,

The general consensus of the Core Reviewer team for Kolla is that we should 
embrace a liberal backport policy for the Liberty release.  An example of 
liberal: if we add a new server service to Ansible, we would backport the 
feature to Liberty.  This breaks with the typical OpenStack backport 
policy.  It also creates a whole bunch more work and has the potential to 
introduce regressions in the Liberty release.

Given these realities, I want to put any liberal backporting on hold until after 
Summit.  I will schedule a fishbowl session for a backport policy discussion 
where we will decide as a community what type of backport policy we want.  The 
deliverable required before we introduce any liberal backport policy should then 
be a description of that Summit discussion distilled into an RST file in our 
git repository.

If you have any questions, comments, or concerns, please chime in on the thread.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Clint Byrum
Excerpts from Chris Friesen's message of 2015-10-09 10:54:36 -0700:
> On 10/09/2015 11:09 AM, Zane Bitter wrote:
> 
> > The optimal way to do this would be a weighted random selection, where the
> > probability of any given host being selected is proportional to its 
> > weighting.
> > (Obviously this is limited by the accuracy of the weighting function in
> > expressing your actual preferences - and it's at least conceivable that this
> > could vary with the number of schedulers running.)
> >
> > In fact, the choice of the name 'weighting' would normally imply that it's 
> > done
> > this way; hearing that the 'weighting' is actually used as a 'score' with 
> > the
> > highest one always winning is quite surprising.
> 
> If you've only got one scheduler, there's no need to get fancy, you just pick 
> the "best" host based on your weighing function.
> 
> It's only when you've got parallel schedulers that things get tricky.
> 

Note that I think you mean _concurrent_ not _parallel_ schedulers.

Parallel schedulers would be trying to solve the same unit of work by
breaking it up into smaller components and doing them at the same time.

Concurrent means they're just doing different things at the same time.

I know this is nit-picky, but we use the wrong word _A LOT_ and the
problem space is actually vastly different, as parallelizable problems
have a whole set of optimizations and advantages that generic concurrent
problems (especially those involving mutating state!) have a whole set
of race conditions that must be managed.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/09/2015 11:09 AM, Zane Bitter wrote:


The optimal way to do this would be a weighted random selection, where the
probability of any given host being selected is proportional to its weighting.
(Obviously this is limited by the accuracy of the weighting function in
expressing your actual preferences - and it's at least conceivable that this
could vary with the number of schedulers running.)

In fact, the choice of the name 'weighting' would normally imply that it's done
this way; hearing that the 'weighting' is actually used as a 'score' with the
highest one always winning is quite surprising.


If you've only got one scheduler, there's no need to get fancy, you just pick 
the "best" host based on your weighing function.


It's only when you've got parallel schedulers that things get tricky.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Clark Boylan


On Fri, Oct 9, 2015, at 10:32 AM, Vahid S Hashemian wrote:
> Serg, Jeremy,
> 
> Thank you for your response, so the issue I ran into with my patch is the 
> gate job failing on python26.
> You can see it here: https://review.openstack.org/#/c/232271/
> 
> Serg suggested that we add 2.6 support to tosca-parser, which is fine
> with 
> us.
> But I got a bit confused after reading Jeremy's response.
> It seems to me that the support will be going away, but there is no 
> timeline (and therefore no near-term plan?)
> So, I'm hoping Jeremy can advise whether he also recommends the same 
> thing, or not.
There is a timeline (though admittedly hard to find) at
https://etherpad.openstack.org/p/YVR-relmgt-stable-branch which says
Juno support would run through the end of November. Since Juno is the
last release to support python2.6 we will remove python2.6 support from
the test infrastructure at that time as well.

I personally probably wouldn't bother with extra work to support
python2.6, but that all depends on how much work it is and whether or
not you find value in it. Ultimately it is up to you, just know that the
Infrastructure team will stop hosting testing for python2.6 when Juno is
EOLed.

Hope this helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread David Stanek
On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx 
wrote:

> On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
> :> On Oct 9, 2015, at 12:28 PM, Monty Taylor  wrote:
> :>
> :>> On 10/09/2015 11:21 AM, Shamail wrote:
> :>>
> :>>
> :>>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
> :>>>
> :>>> It looks like some great conversation got going on the service catalog
> :>>> standardization spec / discussion at the last cross project meeting.
> :>>> Sorry I wasn't there to participate.
> :>> Apologize if this is a question that has already been address but why
> can't we just leverage something like consul.io?
> :>
> :> It's a good question and there have actually been some discussions
> about leveraging it on the backend. However, even if we did, we'd still
> need keystone to provide the multi-tenancy view on the subject. consul
> wasn't designed (quite correctly I think) to be a user-facing service for
> 50k users.
> :>
> :> I think it would be an excellent backend.
> :Thanks, that makes sense.  I agree that it might be a good backend but
> not the overall solution... I was bringing it up to ensure we consider
> existing options (where possible) and spend cycles on the unsolved bits.
>
> As an operator I'd be happy to use SRV records to define endpoints,
> though multiple regions could make that messy.
>
> would we make subdomins per region or include region name in the
> service name?
>
> _compute-regionone._tcp.example.com
>-vs-
> _compute._tcp.regionone.example.com
>
> Also not all operators can controll their DNS to this level so it
> couldn't be the only option.
>
> Or are you talking about using an internal DNS implementation private
> to the OpenStack Deployment?  I'm actually a bit less happy with that
> idea.
>

I was able to put together an implementation[1] of DNS-SD loosely based on
RFC-6763[2]. It's really a proof of concept, but we've talked so much about
it that I decided to get something working. If this seems like a viable
option, then there's still much work to be done.

I'd love feedback.

1. https://gist.github.com/dstanek/093f851fdea8ebfd893d
2. https://tools.ietf.org/html/rfc6763

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Mike Spreitzer
Thierry Carrez  wrote on 10/09/2015 05:42:49 AM:

...
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
> 
> #success [Your message here]
> 
> The openstackstatus bot will take that and record it on this wiki page:
> 
> https://wiki.openstack.org/wiki/Successes
> 
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we reecently added there).
> 
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
> 
> So... please use #success liberally and record little everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.

Great.  I am about to contribute one myself.  Luckily I noticed this email. 
How will the word get out to those who did not?  How about a pointer to 
instructions on the Successes page?

Thanks,
Mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Vahid S Hashemian
Serg, Jeremy,

Thank you for your response, so the issue I ran into with my patch is the 
gate job failing on python26.
You can see it here: https://review.openstack.org/#/c/232271/

Serg suggested that we add 2.6 support to tosca-parser, which is fine with 
us.
But I got a bit confused after reading Jeremy's response.
It seems to me that the support will be going away, but there is no 
timeline (and therefore no near-term plan?)
So, I'm hoping Jeremy can advise whether he also recommends the same 
thing, or not.

Thank you both again.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Scheduler proposal

2015-10-09 Thread Clint Byrum
Excerpts from Chris Friesen's message of 2015-10-08 23:52:41 -0700:
> On 10/08/2015 01:37 AM, Clint Byrum wrote:
> > Excerpts from Maish Saidel-Keesing's message of 2015-10-08 00:14:55 -0700:
> >> Forgive the top-post.
> >>
> >> Cross-posting to openstack-operators for their feedback as well.
> >>
> >> Ed the work seems very promising, and I am interested to see how this
> >> evolves.
> >>
> >> With my operator hat on I have one piece of feedback.
> >>
> >> By adding in a new Database solution (Cassandra) we are now up to three
> >> different database solutions in use in OpenStack
> >>
> >> MySQL (practically everything)
> >> MongoDB (Ceilometer)
> >> Cassandra.
> >>
> >> Not to mention two different message queues
> >> Kafka (Monasca)
> >> RabbitMQ (everything else)
> >>
> >> Operational overhead has a cost - maintaining 3 different database
> >> tools, backing them up, providing HA, etc. has operational cost.
> >>
> >> This is not to say that this cannot be overseen, but it should be taken
> >> into consideration.
> >>
> >> And *if* they can be consolidated into an agreed solution across the
> >> whole of OpenStack - that would be highly beneficial (IMHO).
> >>
> >
> > Just because they both say they're databases, doesn't mean they're even
> > remotely similar.
> 
> True, but the fact remains that it means operators (and developers) would 
> have 
> to become familiar with the quirks and problems of yet another piece of 
> technology.
> 

Indeed! And we can get really opinionated here now that we have some
experience I think. Personally, I'd rather become familiar with the
quirks and problems of Cassandra, than try to become familiar with the
quirks and problems of OpenStack's invented complex workarounds for high
scale state management cells.

So I agree with the statement that the cost of adding a technology should
be weighed. However, the cost of inventing a workaround should be weighed
with the same scale. Complex workarounds will, in most cases, weigh much
more than adopting a well known and proven technology that is aimed at
what turns out to be a common problem set.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Jonathan D. Proulx
On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
:> On Oct 9, 2015, at 12:28 PM, Monty Taylor  wrote:
:> 
:>> On 10/09/2015 11:21 AM, Shamail wrote:
:>> 
:>> 
:>>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
:>>> 
:>>> It looks like some great conversation got going on the service catalog
:>>> standardization spec / discussion at the last cross project meeting.
:>>> Sorry I wasn't there to participate.
:>> Apologize if this is a question that has already been address but why can't 
we just leverage something like consul.io?
:> 
:> It's a good question and there have actually been some discussions about 
leveraging it on the backend. However, even if we did, we'd still need keystone 
to provide the multi-tenancy view on the subject. consul wasn't designed (quite 
correctly I think) to be a user-facing service for 50k users.
:> 
:> I think it would be an excellent backend.
:Thanks, that makes sense.  I agree that it might be a good backend but not the 
overall solution... I was bringing it up to ensure we consider existing options 
(where possible) and spend cycles on the unsolved bits.

As an operator I'd be happy to use SRV records to define endpoints,
though multiple regions could make that messy.

would we make subdomains per region or include region name in the
service name? 

_compute-regionone._tcp.example.com 
   -vs-
_compute._tcp.regionone.example.com

Also not all operators can control their DNS to this level so it
couldn't be the only option.

Or are you talking about using an internal DNS implementation private
to the OpenStack Deployment?  I'm actually a bit less happy with that
idea.

-Jon
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Clint Byrum
Excerpts from Adam Young's message of 2015-10-09 09:51:55 -0700:
> On 10/09/2015 12:28 PM, Monty Taylor wrote:
> > On 10/09/2015 11:21 AM, Shamail wrote:
> >>
> >>
> >>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
> >>>
> >>> It looks like some great conversation got going on the service catalog
> >>> standardization spec / discussion at the last cross project meeting.
> >>> Sorry I wasn't there to participate.
> >>>
> >> Apologize if this is a question that has already been address but why 
> >> can't we just leverage something like consul.io?
> >
> > It's a good question and there have actually been some discussions 
> > about leveraging it on the backend. However, even if we did, we'd 
> > still need keystone to provide the multi-tenancy view on the subject. 
> > consul wasn't designed (quite correctly I think) to be a user-facing 
> > service for 50k users.
> >
> > I think it would be an excellent backend.
> 
> The better question is, "Why are we not using DNS for the service catalog?"
> 

Agreed, we're using HTTP and JSON for what DNS is supposed to do.

As an aside, consul has a lovely DNS interface.

> Right now, we have the aspect of "project filtering of endpoints" which 
> means that a token does not need to have every endpoint for a specified 
> service.  If we were to use DNS, how would that map to the existing 
> functionality.
> 

There are a number of "how?" answers, but the "what?" question is the
more interesting one. As in, what is the actual point of this
functionality, and what do people want to do per-project?

I think what really ends up happening is you have 99.9% the same
catalogs to the majority of projects, with a few getting back a
different endpoint or two. For that, it seems like you would have two
queries needed in the "discovery" phase:

SRV compute.myprojectid.region1.mycloud.com
SRV compute.region1.mycloud.com

Use the first one you get an answer for. Keystone would simply add
or remove entries for special project<->endpoint mappings. You don't
need Keystone to tell you what your project ID is, so you just make
these queries. When you get a negative answer, respect the TTL and stop
querying for it.

Did I miss a use case with that?
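
As a sketch of what the client-side lookup could look like (assuming the
dnspython package; the record naming is just the scheme suggested above, not
anything OpenStack publishes today):

import dns.resolver

def discover_endpoint(service, project_id, region, domain):
    names = [
        "_%s._tcp.%s.%s.%s" % (service, project_id, region, domain),  # project-specific
        "_%s._tcp.%s.%s" % (service, region, domain),                 # region default
    ]
    for name in names:
        try:
            answers = dns.resolver.query(name, "SRV")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # negative answer: fall back to the less specific name
        best = sorted(answers, key=lambda r: (r.priority, -r.weight))[0]
        return str(best.target).rstrip("."), best.port
    return None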

> 
> Can we make better use of regions to help in endpoint filtering/selection?
> 
> Do we still need a query to Keystone to play arbiter if there are two 
> endpoints assigned for a specific use case to help determine which is 
> appropriate?
> 

I'd hope not. If the user is authorized then they should be able
to access the endpoint that they're assigned to. It's confusing to
me sometimes how keystone is thought of as an authorization service,
when it is named "Identity", and primarily performs authentication and
service discovery.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] PTL & Component Leads elections

2015-10-09 Thread Mike Scherbakov
Congratulations to Dmitry!
Now you officially hold the title of PTL.
It won't be easy, but we will support you!

118 contributors voted. Thanks everyone! Thank you Sergey for organizing
elections for us.

On Thu, Oct 8, 2015 at 3:52 PM Sergey Lukjanov 
wrote:

> Voting period ended and so we have an officially selected Fuel PTL - DB.
> Congrats!
>
> Poll results & details -
> http://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=1&id=E_b79041aa56684ec0
>
> Let's start proposing candidates for the component lead positions!
>
> On Wed, Sep 30, 2015 at 8:47 PM, Sergey Lukjanov 
> wrote:
>
>> Hi folks,
>>
>> I've just setup the voting system and you should start receiving email
>> with topic "Poll: Fuel PTL Elections Fall 2015".
>>
>> NOTE: Please, don't forward this email, it contains *personal* unique
>> token for the voting.
>>
>> Thanks.
>>
>> On Wed, Sep 30, 2015 at 3:28 AM, Vladimir Kuklin 
>> wrote:
>>
>>> +1 to Igor. Do we have voting system set up?
>>>
>>> On Wed, Sep 30, 2015 at 4:35 AM, Igor Kalnitsky wrote:
>>>
 > * September 29 - October 8: PTL elections

 So, it's in progress. Where I can vote? I didn't receive any emails.

 On Mon, Sep 28, 2015 at 7:31 PM, Tomasz Napierala
  wrote:
 >> On 18 Sep 2015, at 04:39, Sergey Lukjanov 
 wrote:
 >>
 >>
 >> Time line:
 >>
 >> PTL elections
 >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
 position
 >> * September 29 - October 8: PTL elections
 >
 > Just a reminder that we have a deadline for candidates today.
 >
 > Regards,
 > --
 > Tomasz 'Zen' Napierala
 > Product Engineering - Poland
 >
 >
 >
 >
 >
 >
 >
 >
 >
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> --
>>> Yours Faithfully,
>>> Vladimir Kuklin,
>>> Fuel Library Tech Lead,
>>> Mirantis, Inc.
>>> +7 (495) 640-49-04
>>> +7 (926) 702-39-68
>>> Skype kuklinvv
>>> 35bk3, Vorontsovskaya Str.
>>> Moscow, Russia,
>>> www.mirantis.com 
>>> www.mirantis.ru
>>> vkuk...@mirantis.com
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-09 Thread Devananda van der Veen
++ on both counts!

On Thu, Oct 8, 2015 at 2:47 PM, Jim Rollenhagen 
wrote:

> Hi all,
>
> I've been thinking a lot about Ironic's core reviewer team and how we might
> make it better.
>
> I'd like to grow the team more through trust and mentoring. We should be
> able to promote someone to core based on a good knowledge of *some* of
> the code base, and trust them not to +2 things they don't know about. I'd
> also like to build a culture of mentoring non-cores on how to review, in
> preparation for adding them to the team. Through these pieces, I'm hoping
> we can have a few rounds of core additions this cycle.
>
> With that said...
>
> I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
> have been super high quality, and the quantity is ever-increasing. He's
> also started helping out with some smaller efforts (full tempest, for
> example), and I'd love to see that continue with larger efforts.
>
> I'd also like to nominate John Villalovos (jlvillal). John has been
> reviewing a ton of code and making a real effort to learn everything,
> and keep track of everything going on in the project.
>
> Ironic cores, please reply with your vote; provided feedback is positive,
> I'd like to make this official next week sometime. Thanks!
>
> // jim
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Zane Bitter

On 08/10/15 21:32, Ian Wells wrote:


> 2. if many hosts suit the 5 VMs then this is *very* unlucky, because we 
should be choosing a host at random from the set of
suitable hosts and that's a huge coincidence - so this is a tiny
corner case that we shouldn't be designing around

Here is where we differ in our understanding. With the current
system of filters and weighers, 5 schedulers getting requests for
identical VMs and having identical information are *expected* to
select the same host. It is not a tiny corner case; it is the most
likely result for the current system design. By catching this
situation early (in the scheduling process) we can avoid multiple
RPC round-trips to handle the fail/retry mechanism.


And so maybe this would be a different fix - choose, at random, one of
the hosts above a weighting threshold, not choose the top host every
time? Technically, any host passing the filter is adequate to the task
from the perspective of an API user (and they can't prove if they got
the highest weighting or not), so if we assume weighting is an operator
preference, and just weaken it slightly, we'd have a few more options.


The optimal way to do this would be a weighted random selection, where 
the probability of any given host being selected is proportional to its 
weighting. (Obviously this is limited by the accuracy of the weighting 
function in expressing your actual preferences - and it's at least 
conceivable that this could vary with the number of schedulers running.)
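
A minimal sketch of such a weighted random selection (illustrative only, not 
nova's actual weigher):

import random

def weighted_random_pick(hosts, weights):
    # Pick one host with probability proportional to its non-negative weight.
    total = float(sum(weights))
    if total <= 0:
        return random.choice(hosts)   # degenerate case: no preference expressed
    threshold = random.uniform(0, total)
    cumulative = 0.0
    for host, weight in zip(hosts, weights):
        cumulative += weight
        if cumulative >= threshold:
            return host
    return hosts[-1]                  # guard against floating-point rounding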


In fact, the choice of the name 'weighting' would normally imply that 
it's done this way; hearing that the 'weighting' is actually used as a 
'score' with the highest one always winning is quite surprising.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Shamail


> On Oct 9, 2015, at 12:28 PM, Monty Taylor  wrote:
> 
>> On 10/09/2015 11:21 AM, Shamail wrote:
>> 
>> 
>>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
>>> 
>>> It looks like some great conversation got going on the service catalog
>>> standardization spec / discussion at the last cross project meeting.
>>> Sorry I wasn't there to participate.
>> Apologize if this is a question that has already been address but why can't 
>> we just leverage something like consul.io?
> 
> It's a good question and there have actually been some discussions about 
> leveraging it on the backend. However, even if we did, we'd still need 
> keystone to provide the multi-tenancy view on the subject. consul wasn't 
> designed (quite correctly I think) to be a user-facing service for 50k users.
> 
> I think it would be an excellent backend.
Thanks, that makes sense.  I agree that it might be a good backend but not the 
overall solution... I was bringing it up to ensure we consider existing options 
(where possible) and spend cycles on the unsolved bits.

I am going to look into the scaling limitations for consul to educate myself.
> 
>> 
>>> A lot of that ended up in here (which was an ether pad stevemar and I
>>> started working on the other day) -
>>> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
>> I didn't see anything immediately in the etherpad that couldn't be covered 
>> with the tool mentioned above.  It is open-source so we could always try to 
>> contribute there if we need something extra (written in golang though).
>>> 
>>> A couple of things that would make this more useful:
>>> 
>>> 1) if you are commenting, please (ircnick) your comments. It's not easy
>>> to always track down folks later if the comment was not understood.
>>> 
>>> 2) please provide link to code when explaining a point. Github supports
>>> the ability to very nicely link to (and highlight) a range of code by a
>>> stable object ref. For instance -
>>> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
>>> 
>>> That will make comments about X does Y, or Z can't do W, more clear
>>> because we'll all be looking at the same chunk of code and start to
>>> build more shared context here. One of the reasons this has been long
>>> and difficult is that we're missing a lot of that shared context between
>>> projects. Reassembling that by reading each other's relevant code will
>>> go a long way to understanding the whole picture.
>>> 
>>> 
>>> Lastly, I think it's pretty clear we probably need a dedicated workgroup
>>> meeting to keep this ball rolling, come to a reasonable plan that
>>> doesn't break any existing deployed code, but lets us get to a better
>>> world in a few cycles. annegentle, stevemar, and I have been pushing on
>>> that ball so far, however I'd like to know who else is willing to commit
>>> a chunk of time over this cycle to this. Once we know that we can try to
>>> figure out when a reasonable weekly meeting point would be.
>>> 
>>> Thanks,
>>> 
>>>-Sean
>>> 
>>> --
>>> Sean Dague
>>> http://dague.net
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-10-09 Thread Jay Pipes

On 10/07/2015 11:04 AM, Matt Riedemann wrote:

I'm wondering why we don't reverse sort the tables using the sqlalchemy
metadata object before processing the tables for delete?  That's the
same thing I did in the 267 migration since we needed to process the
tree starting with the leafs and then eventually get back to the
instances table (since most roads lead to the instances table).


Yes, that would make a lot of sense to me if we used the SA metadata 
object for reverse sorting.
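
As a rough sketch of that idea (it assumes the model metadata is available as a
SQLAlchemy MetaData object and is not the actual nova code):

    from sqlalchemy import MetaData, create_engine

    engine = create_engine('sqlite:///nova.sqlite')  # placeholder URL
    metadata = MetaData()
    metadata.reflect(bind=engine)

    # sorted_tables is ordered by foreign-key dependency (parents first),
    # so reversing it visits leaf tables before the tables they reference,
    # e.g. block_device_mapping before instances.
    for table in reversed(metadata.sorted_tables):
        print('archive deleted rows from', table.name)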



Another thing that's really weird is how max_rows is used in this code.
There is cumulative tracking of the max_rows value so if the value you
pass in is too small, you might not actually be removing anything.

I figured max_rows meant up to max_rows from each table, not max_rows
*total* across all tables. By my count, there are 52 tables in the nova
db model. The way I read the code, if I pass in max_rows=10 and say it
processes table A and archives 7 rows, then when it processes table B it
will pass max_rows=(max_rows - rows_archived), which would be 3 for
table B. If we archive 3 rows from table B, rows_archived >= max_rows
and we quit. So to really make this work, you have to pass in something
big for max_rows, like 1000, which seems completely random.
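
To spell out the two readings (purely illustrative, not the actual nova-manage
code; archive() stands in for the per-table archiving step and returns the
number of rows it moved):

    def archive_cumulative(tables, max_rows, archive):
        # How I read the current code: max_rows is a *total* budget
        # shared across all tables.
        archived = 0
        for table in tables:
            if archived >= max_rows:
                break
            archived += archive(table, max_rows - archived)
        return archived

    def archive_per_table(tables, max_rows, archive):
        # What I expected: up to max_rows from *each* table.
        return sum(archive(table, max_rows) for table in tables)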

Does this seem odd to anyone else?


Uhm, yes it does.

Given the relationships between
tables, I'd think you'd want to try and delete max_rows for all tables,
so archive 10 instances, 10 block_device_mapping, 10 pci_devices, etc.

I'm also bringing this up now because there is a thread in the operators
list which pointed me to a set of scripts that operators at GoDaddy are
using for archiving deleted rows:

http://lists.openstack.org/pipermail/openstack-operators/2015-October/008392.html

Presumably because the command in nova doesn't work. We should either
make this thing work or just punt and delete it because no one cares.


The db archive code in Nova just doesn't make much sense to me at all. 
The algorithm for purging stuff, like you mention above, does not take 
into account the relationships between tables; instead of diving into 
the children relations and archiving those first, the code just uses a 
simplistic "well, if we hit a foreign key error, just ignore and 
continue archiving other things, we will eventually repeat the call to 
delete this row" strategy:


https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L6021-L6023

I had a proposal [1] to completely rework the whole shadow table mess 
and db archiving functionality. I continue to believe that is the 
appropriate solution for this, and that we should rip out the existing 
functionality because it simply does not work properly.


Best,
-jay

[1] https://review.openstack.org/#/c/137669/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Backport policy for Liberty

2015-10-09 Thread Jastrzebski, Michal
Hello,

Since we have little actual logic, and Ansible itself is pretty pluggable by 
its very nature, backporting should be quite easy and would not affect existing 
deployments much. We will make sure that it is safe to carry stable/liberty 
code and that it keeps working at all times. I agree with Sam that we need 
careful CI for that, and it will be our first priority.

I would very much like to invite operators to our session regarding this 
policy, as they will be the most affected party and we want to make sure that 
they take part in the decision.

Regards,
Michał

From: Sam Yaple [mailto:sam...@yaple.net]
Sent: Friday, October 9, 2015 4:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Backport policy for Liberty

On Thu, Oct 8, 2015 at 2:47 PM, Steven Dake (stdake) 
mailto:std...@cisco.com>> wrote:
Kolla operators and developers,

The general consensus of the Core Reviewer team for Kolla is that we should 
embrace a liberal backport policy for the Liberty release.  An example of 
liberal -> We add a new server service to Ansible, we would backport the 
feature to liberty.  This is in breaking with the typical OpenStack backports 
policy.  It also creates a whole bunch more work and has potential to introduce 
regressions in the Liberty release.

Given these realities I want to put on hold any liberal backporting until after 
Summit.  I will schedule a fishbowl session for a backport policy discussion 
where we will decide as a community what type of backport policy we want.  The 
delivery required before we introduce any liberal backporting policy then 
should be a description of that backport policy discussion at Summit distilled 
into a RST file in our git repository.

If you have any questions, comments, or concerns, please chime in on the thread.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I am in favor of a very liberal backport policy. We have the potential to have 
very little code difference between N, N-1, and N-2 releases while still 
deploying the different versions of OpenStack. However, I recognize is a big 
undertaking to backport all things, not to mention the testing involved.

I would like to see two things before we truly embrace a liberal policy. The 
first is better testing. A true gate that does upgrades and potentially 
multinode (at least from a network perspective). The second thing is a bot or 
automation of some kind to automatically propose non-conflicting patches to the 
stable branches if they include the 'backport: xyz' tag in the commit message. 
Cores would still need to confirm these changes with the normal review process 
and could easily abandon them, but that would remove a lot of the overhead of 
performing the actual backport.
Since Kolla simply deploys OpenStack, it is a lot closer to a client or a 
library than it is to Nova or Neutron. And given its mission maybe it should 
break from the "typical OpenStack backports policy" so we can give a consistent 
deployment experience across all stable and supported version of OpenStack at 
any given time.
Those are my thoughts on the matter at least. I look forward to some 
conversations about this in Tokyo.
Sam Yaple

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Adam Young

On 10/09/2015 12:28 PM, Monty Taylor wrote:

On 10/09/2015 11:21 AM, Shamail wrote:




On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:

It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.

Apologize if this is a question that has already been addressed but why 
can't we just leverage something like consul.io?


It's a good question and there have actually been some discussions 
about leveraging it on the backend. However, even if we did, we'd 
still need keystone to provide the multi-tenancy view on the subject. 
consul wasn't designed (quite correctly I think) to be a user-facing 
service for 50k users.


I think it would be an excellent backend.


The better question is, "Why are we not using DNS for the service catalog?"

Right now, we have the aspect of "project filtering of endpoints" which 
means that a token does not need to have every endpoint for a specified 
service.  If we were to use DNS, how would that map to the existing 
functionality?
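
Purely as a hypothetical illustration (the record names and zone layout are
invented, and it assumes the dnspython library), an SRV-based lookup might look
like this - and note it says nothing about per-project endpoint filtering,
which keystone would still have to provide:

    import dns.resolver  # assumes the dnspython library

    # Hypothetical zone layout: one SRV record per service per region.
    answers = dns.resolver.query(
        '_compute._tcp.region-one.cloud.example.com', 'SRV')
    for srv in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        host = srv.target.to_text().rstrip('.')
        print('compute endpoint: https://%s:%d/' % (host, srv.port))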



Can we make better use of regions to help in endpoint filtering/selection?

Do we still need a query to Keystone to play arbiter if there are two 
endpoints assigned for a specific use case to help determine which is 
appropriate?










A lot of that ended up in here (which was an ether pad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
I didn't see anything immediately in the etherpad that couldn't be 
covered with the tool mentioned above.  It is open-source so we could 
always try to contribute there if we need something extra (written in 
golang though).


A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide link to code when explaining a point. Github supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132 



That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean

--
Sean Dague
http://dague.net


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] Symantec's security group management policies

2015-10-09 Thread Shiv Haris
Hi Su,

This looks very good.

Will it be possible to include your use case as part of the Usecase VM?
Have you tried it out on the Usecase VM that I published earlier? I can help if 
you get stuck.

LMK,

Thanks,

-Shiv


From: Su Zhang [mailto:westlif...@gmail.com]
Sent: Thursday, October 08, 2015 1:23 PM
To: openstack-dev
Subject: [openstack-dev] [congress] Symantec's security group management 
policies

Hello,

I've implemented a set of security group management policies and already put 
them into our use case doc.
Let me know if you guys have any comments. My policy set is called "Security 
Group Management".
You can find the use case doc at: 
https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit#heading=h.6z1ggtfrzg3n

Thanks,

--
Su Zhang
Senior Software Engineer
Symantec Corporation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] mox to mock migration

2015-10-09 Thread Steven Hardy
On Fri, Oct 09, 2015 at 09:06:57AM -0400, Jay Dobies wrote:
> I forget where we left things at the last meeting with regard to whether or
> not there should be a blueprint on this. I was going to work on some during
> some downtime but I wanted to make sure I wasn't overlapping with what
> others may be converting (it's more time consuming than I anticipated).
> 
> Any thoughts on how to track it?

I'd probably suggest raising either a bug or a blueprint (not spec), then
link from that to an etherpad where you can track all the tests requiring
rework, and who's working on them.

"it's more time consuming than I anticipated" is pretty much my default
response for anything to do with heat unit tests btw, good luck! :)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L2 gateway project

2015-10-09 Thread Kyle Mestery
On Fri, Oct 9, 2015 at 10:13 AM, Gary Kotton  wrote:

> Hi,
> Who will be creating the stable/liberty branch?
> Thanks
> Gary
>
>
I'll be doing this once someone from the L2GW team lets me know a commit
SHA to create it from.

Thanks,
Kyle


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 11:21 AM, Shamail wrote:




On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:

It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.


Apologize if this is a question that has already been addressed but why can't we 
just leverage something like consul.io?


It's a good question and there have actually been some discussions about 
leveraging it on the backend. However, even if we did, we'd still need 
keystone to provide the multi-tenancy view on the subject. consul wasn't 
designed (quite correctly I think) to be a user-facing service for 50k 
users.


I think it would be an excellent backend.




A lot of that ended up in here (which was an ether pad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.

I didn't see anything immediately in the etherpad that couldn't be covered with 
the tool mentioned above.  It is open-source so we could always try to 
contribute there if we need something extra (written in golang though).


A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide link to code when explaining a point. Github supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132

That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Nick Chase
This is AWESOME!  And I've already found useful resources on the list of 
successes.  Beautiful job, and fantastic idea!


  Nick


On 10/09/2015 05:42 AM, Thierry Carrez wrote:
> Hello everyone,
>
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
>
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
>
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
>
> #success [Your message here]
>
> The openstackstatus bot will take that and record it on this wiki page:
>
> https://wiki.openstack.org/wiki/Successes
>
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we recently added there).
>
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
>
> So... please use #success liberally and record little everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Carl Baldwin
+1 Great idea!

On Fri, Oct 9, 2015 at 3:42 AM, Thierry Carrez  wrote:
> Hello everyone,
>
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
>
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
>
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
>
> #success [Your message here]
>
> The openstackstatus bot will take that and record it on this wiki page:
>
> https://wiki.openstack.org/wiki/Successes
>
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we recently added there).
>
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
>
> So... please use #success liberally and record little everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Kyle Mestery
On Fri, Oct 9, 2015 at 8:52 AM, Russell Bryant  wrote:

> On 10/09/2015 05:42 AM, Thierry Carrez wrote:
> > Hello everyone,
> >
> > OpenStack has become quite big, and it's easier than ever to feel lost,
> > to feel like nothing is really happening. It's more difficult than ever
> > to feel part of a single community, and to celebrate little successes
> > and progress.
> >
> > In a (small) effort to help with that, I suggested making it easier to
> > record little moments of joy and small success bits. Those are usually
> > not worth the effort of a blog post or a new mailing-list thread, but
> > they show that our community makes progress *every day*.
> >
> > So whenever you feel like you made progress, or had a little success in
> > your OpenStack adventures, or have some joyful moment to share, just
> > throw the following message on your local IRC channel:
> >
> > #success [Your message here]
> >
> > The openstackstatus bot will take that and record it on this wiki page:
> >
> > https://wiki.openstack.org/wiki/Successes
> >
> > We'll add a few of those every week to the weekly newsletter (as part of
> > the developer digest that we recently added there).
> >
> > Caveats: Obviously that only works on channels where openstackstatus is
> > present (the official OpenStack IRC channels), and we may remove entries
> > that are off-topic or spam.
> >
> > So... please use #success liberally and record little everyday OpenStack
> > successes. Share the joy and make the OpenStack community a happy place.
> >
>
> This is *really* cool.  I'm excited to use this and see all the things
> others record.  Thanks!!
>
>
Indeed, sometimes it's easy to get lost in the bike shedding, this is a
good way for everyone to remember the little successes that people are
having. After all, this project is composed of actual people, it's good to
highlight the little things we each consider a success. Well done!

Thanks,
Kyle


> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:58 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.


As I'm leaving for the weekend, I'll post my findings here.

I was not able to spot what writes these files (in my case it was named 
33). I also was not able to reproduce it on my regular devstack environment.


I've posted a temporary patch https://review.openstack.org/#/c/233017/ 
so that we're able to track where and when these files appear. Right now 
I only understood that they really appear during the devstack run, not 
earlier.






This is not guaranteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 10:39 AM, Sean Dague wrote:

It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.


Just so folks know, the collection of existing service catalogs has been 
updated:


https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

It now includes a new and correct catalog for Rackspace Private (the 
previous entry was just a copy of Rackspace Public) as well as entries 
for every public cloud I have an account on.


Hopefully that is useful information for folks looking at this.


A lot of that ended up in here (which was an ether pad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.

A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide link to code when explaining a point. Github supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132

That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Shamail


> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
> 
> It looks like some great conversation got going on the service catalog
> standardization spec / discussion at the last cross project meeting.
> Sorry I wasn't there to participate.
> 
Apologize if this is a question that has already been addressed but why can't we 
just leverage something like consul.io?

> A lot of that ended up in here (which was an ether pad stevemar and I
> started working on the other day) -
> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
I didn't see anything immediately in the etherpad that couldn't be covered with 
the tool mentioned above.  It is open-source so we could always try to 
contribute there if we need something extra (written in golang though).
> 
> A couple of things that would make this more useful:
> 
> 1) if you are commenting, please (ircnick) your comments. It's not easy
> to always track down folks later if the comment was not understood.
> 
> 2) please provide link to code when explaining a point. Github supports
> the ability to very nicely link to (and highlight) a range of code by a
> stable object ref. For instance -
> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
> 
> That will make comments about X does Y, or Z can't do W, more clear
> because we'll all be looking at the same chunk of code and start to
> build more shared context here. One of the reasons this has been long
> and difficult is that we're missing a lot of that shared context between
> projects. Reassembling that by reading each other's relevant code will
> go a long way to understanding the whole picture.
> 
> 
> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.
> 
> Thanks,
> 
>-Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Znoinski, Waldemar
 >-Original Message-
 >From: Jeremy Stanley [mailto:fu...@yuggoth.org]
 >Sent: Friday, October 9, 2015 1:17 PM
 >To: OpenStack Development Mailing List (not for usage questions)
 >
 >Subject: Re: [openstack-dev] [infra] Try to introduce RFC mechanism to CI.
 >
 >On 2015-10-09 18:06:55 +0800 (+0800), Tang Chen wrote:
 >[...]
 >> It is just a waste of resource if reviewers are discussing about where
 >> this function should be, or what the function should be named. After
 >> all these details are agreed on, run the CI.
 >[...]
 
[WZ] I'm maintaining 2 3rdparty CIs here, for Nova and Neutron each, and to me 
there's no big difference in maintaining/supporting a CI that runs 5 or 150 
times a day. The only difference may be in resources required to keep up with 
the Gerrit stream. In my opinion, (3rd-party) CIs should help discover problems 
early, so they should run on all patchsets as they appear - that's their main 
purpose to me.

 >As one of the people maintaining the upstream CI and helping coordinate our
 >resources/quotas, I don't see that providing early test feedback is a waste.
 >We're steadily increasing the instance quotas available to us, so check
 >pipeline utilization should continue to become less and less of a concern
 >anyway.
 >
 >For a change which is still under debate, feel free to simply ignore test 
 >results
 >until you get it to a point where you see them start to become relevant.
 >--
 >Jeremy Stanley
 >
 >__
 >
 >OpenStack Development Mailing List (not for usage questions)
 >Unsubscribe: OpenStack-dev-
 >requ...@lists.openstack.org?subject:unsubscribe
 >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 11:07 AM, David Lyle wrote:

I'm in too.


Yes please.


On Fri, Oct 9, 2015 at 8:51 AM, Dean Troyer  wrote:

On Fri, Oct 9, 2015 at 9:39 AM, Sean Dague  wrote:


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.



Count me in...

dt

--

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] L2 gateway project

2015-10-09 Thread Gary Kotton
Hi,
Who will be creating the stable/liberty branch?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Shamail


> On Oct 9, 2015, at 10:49 AM, John Griffith  wrote:
> 
> ​Great idea Thierry, great to promote some positive things!  Thanks for 
> putting this together.​

+1
Great indeed... thanks Thierry.

Regards,
Shamail 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread David Lyle
I'm in too.

David

On Fri, Oct 9, 2015 at 8:51 AM, Dean Troyer  wrote:
> On Fri, Oct 9, 2015 at 9:39 AM, Sean Dague  wrote:
>>
>> Lastly, I think it's pretty clear we probably need a dedicated workgroup
>> meeting to keep this ball rolling, come to a reasonable plan that
>> doesn't break any existing deployed code, but lets us get to a better
>> world in a few cycles. annegentle, stevemar, and I have been pushing on
>> that ball so far, however I'd like to know who else is willing to commit
>> a chunk of time over this cycle to this. Once we know that we can try to
>> figure out when a reasonable weekly meeting point would be.
>
>
> Count me in...
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Cory Benfield

> On 9 Oct 2015, at 15:18, Jeremy Stanley  wrote:
> 
> On 2015-10-09 14:58:36 +0100 (+0100), Cory Benfield wrote:
> [...]
>> IMO, what OpenStack needs is a decision about where it’s getting
>> its packages from, and then to refuse to mix the two.
> 
> I have yet to find a Python-based operating system installable in
> whole via pip. There will always be _at_least_some_ packages you
> install from your operating system's package management. What you
> seem to be missing is that Linux distros are now shipping base
> images which include their python-requests and python-urllib3
> packages already pre-installed as dependencies of Python-based tools
> they deem important to their users.
> 

Yeah, this has been an ongoing problem.

For my part, Donald Stufft has informed me that if the distribution-provided 
requests package has the appropriate install_requires field in its setup.py, 
pip will respect that dependency. Given that requests has recently switched to 
not providing mid-cycle urllib3 versions, it should be entirely possible for 
downstream redistributors in Debian/Fedora to put that metadata into their 
packages when they unbundle requests. I’m chasing up with our downstream 
redistributors right now to ask them to start doing that.
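
As a sketch of what that downstream metadata might look like (the version
bounds here are placeholders, not the real pins, and this is only an invented
fragment of an unbundled requests setup.py):

    from setuptools import setup

    setup(
        name='requests',
        version='2.7.0',
        install_requires=[
            'urllib3>=1.10,<1.11',  # placeholder bounds, not the real pin
        ],
        # ... the rest of the packaging metadata elided ...
    )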

This should resolve the problem for systems where requests 2.7.0 or higher are 
being used. In other systems, this problem still exists and cannot be fixed by 
requests directly.

Cory


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-09 Thread Matt Riedemann



On 10/9/2015 1:49 AM, Paul Carlton wrote:


On 08/10/15 16:49, Doug Hellmann wrote:

Excerpts from Matt Riedemann's message of 2015-10-07 14:38:07 -0500:

Here's why:

https://review.openstack.org/#/c/220622/

That's marked as fixing an OSSA which means we'll have to backport the
fix in nova but it depends on a change to strutils.mask_password in
oslo.utils, which required a release and a minimum version bump in
global-requirements.

To backport the change in nova, we either have to:

1. Copy mask_password out of oslo.utils and add it to nova in the
backport or,

2. Backport the oslo.utils change to a stable branch, release it as a
patch release, bump minimum required version in stable g-r and then
backport the nova change and depend on the backported oslo.utils stable
release - which also makes it a dependent library version bump for any
packagers/distros that have already frozen libraries for their stable
releases, which is kind of not fun.

Bug fix releases do not generally require a minimum version bump. The
API hasn't changed, and there's nothing new in the library in this case,
so it's a documentation issue to ensure that users update to the new
release. All we should need to do is backport the fix to the appropriate
branch of oslo.utils and release a new version from that branch that is
compatible with the same branch of nova.

Doug


So I'm thinking this is one of those things that should ultimately live
in oslo-incubator so it can live in the respective projects. If
mask_password were in oslo-incubator, we'd have just fixed and
backported it there and then synced to nova on master and stable
branches, no dependent library version bumps required.

Plus I miss the good old days of reviewing oslo-incubator
syncs...(joking of course).


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I've been following this discussion, is there now a consensus on the way
forward?

My understanding is that Doug is suggesting backporting my oslo.utils
change to the stable juno and kilo branches?



It means you'll have to backport the oslo.utils change to each stable 
branch that you also backport the nova change to, which probably goes 
back to stable/juno (so liberty->kilo->juno backports in both projects).


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Dean Troyer
On Fri, Oct 9, 2015 at 9:39 AM, Sean Dague  wrote:

> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.
>

Count me in...

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread John Griffith
On Fri, Oct 9, 2015 at 3:42 AM, Thierry Carrez 
wrote:

> Hello everyone,
>
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
>
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
>
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
>
> #success [Your message here]
>
> The openstackstatus bot will take that and record it on this wiki page:
>
> https://wiki.openstack.org/wiki/Successes
>
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we recently added there).
>
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
>
> So... please use #success liberally and record little everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

​Great idea Thierry, great to promote some positive things!  Thanks for
putting this together.​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Suggestions for handling new panels and refactors in the future

2015-10-09 Thread Douglas Fish
I have two suggestions for handling both new panels and refactoring existing panels that I think could benefit us in the future:
1) When we are creating a panel that's a major refactor of an existing one, it should be a new, separate panel, not a direct code replacement of the existing panel.
2) New panels (including the refactors of existing panels) should be developed in an out-of-tree gerrit repository.
 
Why make refactors a separate panel?
 
I was taken a bit off guard after we merged the Network Topology->Curvature improvement: this was a surprise to some people outside of the Horizon community (though it had been discussed within Horizon for as long as I've been on the project). In retrospect, I think it would have been better to keep both the old Network Topology and new curvature based topology in our Horizon codebase. Doing so would have allowed operators to perform A-B/ Red-Black testing if they weren't immediately convinced of the awesomeness of the panel. It also would have allowed anyone with a customization of the Network Topology panel to have some time to configure their Horizon instance to continue to use the Legacy panel while they updated their customization to work with the new panel.
 
Perhaps we should treat panels more like an API element and take them through a deprecation cycle before removing them completely. Giving time for customizers to update their code is going to be especially important as we build angular replacements for python panels. While we have much better plugin support for angular, there is still a learning curve for those developers.
 
Why build refactors and new panels out of tree?
 
First off, it appears to me that trying to build new panels in tree has been fairly painful. I've seen big, long-lived patches pushed along without being merged. It's quite acceptable and expected to quickly merge half-complete patches into a brand new repository - but you can't behave that way working in tree in Horizon. Horizon needs to be kept production/operator ready. External repositories do not. Merging code quickly can ease collaboration and avoid this kind of long-lived patch set.
 
Secondly, keeping new panels/plugins in a separate repository decentralizes decisions about which panels are "ready" and which aren't. If one group feels a plugin is "ready", they can make it their default version of the panel, and perhaps put resources toward translating it. If we develop these panels in-tree, we need to make a common decision about what "ready" means - and once it's in, everyone who wants a translated Horizon will need to translate it.
 
Finally, I believe developing new panels out of tree will help improve our plugin support in Horizon. It's this whole "eating your own dog food" idea. As soon as we start using our own Horizon plugin mechanism for our own development, we are going to become aware of its shortcomings (like quotas) and will be sufficiently motivated to fix them.
 
Looking forward to further discussion and other ideas on this!
Doug Fish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Joshua Harlow
For those who are interested in more of the historical aspect around this,

https://github.com/kennethreitz/requests/issues/1811

https://github.com/kennethreitz/requests/pull/1812

My own thoughts are varied here: I get the angle of vendoring, but I don't get 
the resistance to unvendoring it (which quite a few people seem to have asked 
for); if many people want it unvendored, then refusing just ends up leaving a 
bad taste in their mouths (this is a bad thing to have happen in open source 
and is how forks and such get created...).

But as was stated,

The decision to stop vendoring it likely won't be made here anyway ;)

From: c...@lukasa.co.uk
Date: Fri, 9 Oct 2015 14:58:36 +0100
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Requests + urllib3 + distro packages

 
> On 9 Oct 2015, at 14:40, William M Edmonds  wrote:
> 
> Cory Benfield  writes:
>>> The problem that occurs is the result of a few interacting things:
>>>  - requests has very very specific versions of urllib3 it works with.
>>> So specific they aren't always released yet.
>>
>> This should no longer be true. Our downstream redistributors pointed out to us
>> that this  was making their lives harder than they needed to be, so it's now
>> our policy to only  update to actual release versions of urllib3.
> 
> That's great... except that I'm confused as to why requests would continue to 
> repackage urllib3 if that's the case. Why not just prereq the version of 
> urllib3 that it needs? I thought the one and only answer to that question had 
> been so that requests could package non-standard versions.
> 
 
That is not and was never the only reason for vendoring urllib3. However, and I 
cannot stress this enough, the decision to vendor urllib3 is *not going to be 
changed on this thread*. If and when it changes, it will be by consensus 
decision from the requests maintenance team, which we do not have at this time.
 
Further, as I pointed out to Donald Stufft on IRC, if requests unbundled 
urllib3 *today* that would not fix the problem. The reason is that we’d specify 
our urllib3 dependency as: urllib3>=1.12,   
 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Sean Dague
It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.

A lot of that ended up in here (which was an ether pad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.

A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide link to code when explaining a point. Github supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132

That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Jeremy Stanley
On 2015-10-09 14:58:36 +0100 (+0100), Cory Benfield wrote:
[...]
> IMO, what OpenStack needs is a decision about where it’s getting
> its packages from, and then to refuse to mix the two.

I have yet to find a Python-based operating system installable in
whole via pip. There will always be _at_least_some_ packages you
install from your operating system's package management. What you
seem to be missing is that Linux distros are now shipping base
images which include their python-requests and python-urllib3
packages already pre-installed as dependencies of Python-based tools
they deem important to their users.

To work around this in our test infrastructure we're effectively
abandoning all hope of using distro-provided server images, and
building our own from scratch to avoid the possibility that they may
bring with them their own versions of any Python libraries
whatsoever. We're at the point where we're basically maintaining our
own derivative Linux distributions. The web of dependencies in
OpenStack has reached a level of complexity where it's guaranteed to
overlap with just about any pre-installed python-.* packages in a
distro-supplied image.

We're only now reaching the point where our Python dependencies
actually all function within the context of a virtualenv without
needing system-site-packages contamination, so the next logical step
is probably to see if virtualenv isolation is possible for
frameworks like DevStack (the QA team may already be trying to
figure that out, I'm not sure).
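
The kind of isolation I mean is roughly this (the path and the package name are
placeholders, and this is only a sketch, not what DevStack would actually do):

    import subprocess
    import venv

    # Build a venv that cannot see system site-packages, then install
    # into it with its own pip so distro-provided copies never leak in.
    venv.create('/opt/stack/venv', system_site_packages=False, with_pip=True)
    subprocess.check_call(['/opt/stack/venv/bin/pip', 'install', 'requests'])
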
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Backport policy for Liberty

2015-10-09 Thread Sam Yaple
On Thu, Oct 8, 2015 at 2:47 PM, Steven Dake (stdake) 
wrote:

> Kolla operators and developers,
>
> The general consensus of the Core Reviewer team for Kolla is that we
> should embrace a liberal backport policy for the Liberty release.  An
> example of liberal -> We add a new server service to Ansible, we would
> backport the feature to liberty.  This is in breaking with the typical
> OpenStack backports policy.  It also creates a whole bunch more work and
> has potential to introduce regressions in the Liberty release.
>
> Given these realities I want to put on hold any liberal backporting until
> after Summit.  I will schedule a fishbowl session for a backport policy
> discussion where we will decide as a community what type of backport policy
> we want.  The delivery required before we introduce any liberal backporting
> policy then should be a description of that backport policy discussion at
> Summit distilled into a RST file in our git repository.
>
> If you have any questions, comments, or concerns, please chime in on the
> thread.
>
> Regards
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I am in favor of a very liberal backport policy. We have the potential to
have very little code difference between N, N-1, and N-2 releases while
still deploying the different versions of OpenStack. However, I recognize
it is a big undertaking to backport all the things, not to mention the
testing involved.

I would like to see two things before we truly embrace a liberal policy.
The first is better testing: a true gate that does upgrades and potentially
multinode testing (at least from a network perspective). The second is a bot
or automation of some kind to automatically propose non-conflicting patches
to the stable branches if they include the 'backport: xyz' tag in the
commit message. Cores would still need to confirm these changes through the
normal review process and could easily abandon them, but it would remove a
lot of the overhead of performing the actual backport.
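
To make that second point concrete, the automation I have in mind is roughly
shaped like this (purely a sketch, not an existing Kolla tool; the
stable/<name> branch naming and the exact 'backport:' footer handling are
assumptions, and the final push to Gerrit is left out):

import re
import subprocess

BACKPORT_RE = re.compile(r"^backport:\s*(\S+)", re.IGNORECASE | re.MULTILINE)

def run(*cmd):
    return subprocess.check_output(cmd, universal_newlines=True)

def tagged_commits(ref="origin/master", limit=50):
    # Walk recent commits on master and yield (sha, stable branch) pairs
    # for anything carrying a 'backport: xyz' footer.
    for sha in run("git", "rev-list", "--max-count=%d" % limit, ref).split():
        message = run("git", "log", "-1", "--format=%B", sha)
        match = BACKPORT_RE.search(message)
        if match:
            yield sha, "stable/%s" % match.group(1)

def propose(sha, branch):
    run("git", "checkout", branch)
    try:
        # -x records the original SHA in the backported commit message.
        run("git", "cherry-pick", "-x", sha)
    except subprocess.CalledProcessError:
        run("git", "cherry-pick", "--abort")
        print("conflict: %s needs a manual backport to %s" % (sha, branch))
        return
    # A real bot would push the result to Gerrit for normal review here.
    print("proposed %s to %s" % (sha, branch))

if __name__ == "__main__":
    for sha, branch in tagged_commits():
        propose(sha, branch)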

Since Kolla simply deploys OpenStack, it is a lot closer to a client or a
library than it is to Nova or Neutron. Given its mission, maybe it should
break from the "typical OpenStack backports policy" so we can give a
consistent deployment experience across all stable and supported versions
of OpenStack at any given time.

Those are my thoughts on the matter at least. I look forward to some
conversations about this in Tokyo.

Sam Yaple
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Jeremy Stanley
On 2015-10-09 15:53:17 +0200 (+0200), Ihar Hrachyshka wrote:
[...]
> There are already multiple karmabot implementations that could be
> reused, like https://github.com/chromakode/karmabot
> 
> Can we just adopt one of those?

Perhaps, though we're trying to reduce rather than increase the
number of individual IRC bots we're managing. Ultimately I'm
interested in seeing us collapse our current family (gerritbot,
statusbot, meetbot) into one codebase/framework to further reduce
the maintenance burden, but haven't yet found interested parties to
contribute the coding effort.
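
To sketch what I mean by one codebase/framework (purely illustrative, nothing
that exists in infra today; the command names and the wiki stub are
assumptions):

import re

class Bot(object):
    """A single bot with pluggable command handlers."""

    def __init__(self):
        self.handlers = []  # (compiled pattern, callback) pairs

    def command(self, pattern):
        def register(func):
            self.handlers.append((re.compile(pattern), func))
            return func
        return register

    def on_message(self, channel, nick, text):
        for pattern, func in self.handlers:
            match = pattern.match(text)
            if match:
                func(channel, nick, match)

bot = Bot()

@bot.command(r"#success (?P<note>.+)")
def record_success(channel, nick, match):
    # statusbot-style behaviour: append to the Successes wiki page (stubbed).
    print("wiki <- %s: %s" % (nick, match.group("note")))

@bot.command(r"#startmeeting (?P<name>.+)")
def start_meeting(channel, nick, match):
    # meetbot-style behaviour would live here.
    print("meeting '%s' started in %s" % (match.group("name"), channel))

# The IRC transport (connect, join, read lines) is omitted; any client
# library could feed incoming messages into bot.on_message().
bot.on_message("#openstack-infra", "someone", "#success gate is green again")
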
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Cory Benfield

> On 9 Oct 2015, at 14:40, William M Edmonds  wrote:
> 
> Cory Benfield  writes:
> > > The problem that occurs is the result of a few interacting things:
> > >  - requests has very very specific versions of urllib3 it works with.
> > > So specific they aren't always released yet.
> >
> > This should no longer be true. Our downstream redistributors pointed out
> > to us that this was making their lives harder than they needed to be, so
> > it's now our policy to only update to actual release versions of urllib3.
> 
> That's great... except that I'm confused as to why requests would continue to 
> repackage urllib3 if that's the case. Why not just prereq the version of 
> urllib3 that it needs? I thought the one and only answer to that question had 
> been so that requests could package non-standard versions.
> 

That is not and was never the only reason for vendoring urllib3. However, and I 
cannot stress this enough, the decision to vendor urllib3 is *not going to be 
changed on this thread*. If and when it changes, it will be by consensus 
decision from the requests maintenance team, which we do not have at this time.

Further, as I pointed out to Donald Stufft on IRC, if requests unbundled 
urllib3 *today*, that would not fix the problem. The reason is that we'd specify 
our urllib3 dependency as: urllib3>=1.12,<1.13. That dependency specification would 
still cause exactly the problem observed in this thread.
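
To put that in concrete terms (a tiny illustration only; the 1.11.1 number is
just an assumed example of what a distro image might ship):

from packaging.specifiers import SpecifierSet
from packaging.version import Version

requests_needs = SpecifierSet(">=1.12,<1.13")  # the hypothetical unbundled pin
distro_ships = Version("1.11.1")               # assumed distro-packaged urllib3

# False: pip either replaces the distro copy or the environment is simply
# inconsistent, which is exactly the breakage seen in this thread.
print(distro_ships in requests_needs)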

As you correctly identify in your subsequent email, William, the core problem 
is mixing of packages from distributions and PyPI. This happens with any tool 
with external dependencies: if you subsequently install a different version of 
a dependency using a packaging tool that is not aware of some of the dependency 
tree, it is entirely plausible that an incompatible version will be installed. 
It’s not hard to trigger this kind of thing on Ubuntu. IMO, what OpenStack 
needs is a decision about where it’s getting its packages from, and then to 
refuse to mix the two.

Cory


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Russell Bryant
On 10/09/2015 05:42 AM, Thierry Carrez wrote:
> Hello everyone,
> 
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
> 
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
> 
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
> 
> #success [Your message here]
> 
> The openstackstatus bot will take that and record it on this wiki page:
> 
> https://wiki.openstack.org/wiki/Successes
> 
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we recently added there).
> 
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
> 
> So... please use #success liberally and record little everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.
> 

This is *really* cool.  I'm excited to use this and see all the things
others record.  Thanks!!

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

