Re: [openstack-dev] Scheduler proposal

2015-10-10 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2015-10-10 17:43:40 -0700:
> I'm curious is there any more detail about #1 below anywhere online?
> 
> Does cassandra use some features of the JVM that the openJDK version 
> doesn't support? Something else?
> 

This about sums it up:

https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StartupChecks.java#L153-L155

// There is essentially no QA done on OpenJDK builds, and
// clusters running OpenJDK have seen many heap and load issues.
logger.warn("OpenJDK is not recommended. Please upgrade to the newest 
Oracle Java release");

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-10 Thread Clint Byrum
Excerpts from Ian Wells's message of 2015-10-09 19:14:17 -0700:
> On 9 October 2015 at 18:29, Clint Byrum  wrote:
> 
> > Instead of having the scheduler do all of the compute node inspection
> > and querying though, you have the nodes push their stats into something
> > like Zookeeper or consul, and then have schedulers watch those stats
> > for changes to keep their in-memory version of the data up to date. So
> > when you bring a new one online, you don't have to query all the nodes,
> > you just scrape the data store, which all of these stores (etcd, consul,
> > ZK) are built to support atomically querying and watching at the same
> > time, so you can have a reasonable expectation of correctness.
> >
> 
> We have to be careful about our definition of 'correctness' here.  In
> practice, the data is never going to be perfect because compute hosts
> update periodically and the information is therefore always dated.  With
> ZK, it's going to be strictly consistent with regard to the updates from
> the compute hosts, but again that doesn't really matter too much because
> the scheduler is going to have to make a best effort job with a mixed bag
> of information anyway.
> 

I was actually thinking nodes would update ZK _when they are changed
themselves_. As in, the scheduler would reduce the available resources
upon allocating them, and the nodes would only update them after
reclaiming those resources or when they start fresh.

> In fact, putting ZK in the middle basically means that your compute hosts
> now synchronously update a majority of nodes in a minimum 3 node quorum -
> not the fastest form of update - and then the quorum will see to notifying
> the schedulers.  In practice this is just a store-and-fanout again. Once
> more it's not clear to me whether the store serves much use, and as for the
> fanout, I wonder if we'll need >>3 schedulers running so that this is
> reducing communication overhead.
> 

This is indeed store and fanout. Except unlike mysql+rabbitMQ, we're
using a service optimized for store and fanout. :)

All of the DLM-ish primitive things we've talked about can handle a
ton of churn in what turns out to be very small amounts of data. The
difference here is that instead of a scheduler querying for the data,
it has already received it because it was watching for changes. And
if some of it hasn't changed, there's no query, and there's no fanout,
and the local cache is just used.

So yes, if we did things the same as now, this would be terrible. But we
wouldn't. We'd let ZK or Consul do this for us, because they are better
than anything we can build to do this.
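The watch-plus-local-cache pattern described above can be sketched in plain Python. This is an illustrative stand-in for a real coordination service client (e.g. kazoo for ZK); all class and key names here are hypothetical:

```python
class WatchableStore:
    """Toy stand-in for ZK/etcd/consul: stores keys and fans out changes
    to registered watchers, so readers keep a local cache up to date
    instead of re-querying every compute node."""

    def __init__(self):
        self._data = {}
        self._watchers = []

    def watch(self, callback):
        self._watchers.append(callback)
        # A new watcher scrapes current state once, then receives only deltas.
        for key, value in self._data.items():
            callback(key, value)

    def put(self, key, value):
        self._data[key] = value
        for cb in self._watchers:
            cb(key, value)


class SchedulerCache:
    """Each scheduler keeps an in-memory view, updated by watch events."""

    def __init__(self, store):
        self.nodes = {}
        store.watch(self._on_change)

    def _on_change(self, key, value):
        self.nodes[key] = value


store = WatchableStore()
store.put("compute-1", {"free_ram_mb": 4096})

sched = SchedulerCache(store)                   # scrapes existing state on startup
store.put("compute-2", {"free_ram_mb": 8192})   # fanout keeps the cache current

print(sched.nodes["compute-1"]["free_ram_mb"])  # 4096
print(sched.nodes["compute-2"]["free_ram_mb"])  # 8192
```

The point is the shape of the interaction, not the storage: a scheduler coming online never queries compute nodes directly; it scrapes the store once and then reacts to change notifications.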

> > Even if you figured out how to make the in-memory scheduler crazy fast,
> > there's still value in concurrency for other reasons. No matter how
> > fast you make the scheduler, you'll be slave to the response time of
> > a single scheduling request. If you take 1ms to schedule each node
> > (including just reading the request and pushing out your scheduling
> > result!) you will never achieve greater than 1000/s. 1ms is way lower
> > than it's going to take just to shove a tiny message into RabbitMQ or
> > even 0mq. So I'm pretty sure this is o-k for small clouds, but would be
> > a disaster for a large, busy cloud.
> >
> 
> Per before, my suggestion was that every scheduler tries to maintain a copy
> of the cloud's state in memory (in much the same way, per the previous
> example, as every router on the internet tries to make a route table out of
> what it learns from BGP).  They don't have to be perfect.  They don't have
> to be in sync.  As long as there's some variability in the decision making,
> they don't have to update when another scheduler schedules something (and
> you can make the compute node send an immediate update when a new VM is
> run, anyway).  They all stand a good chance of scheduling VMs well
> simultaneously.
> 

I'm quite in favor of eventual consistency and retries. Even if we had
a system of perfect updating of all state records everywhere, it would
break sometimes and I'd still want to not trust any record of state as
being correct for the entire distributed system. However, there is an
efficiency win gained by staying _close_ to correct. It is actually a
function of the expected entropy. The more concurrent schedulers, the
more entropy there will be to deal with.
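A minimal sketch of eventual consistency with retries, under assumed names (none of this is Nova code): each scheduler works from a possibly stale snapshot, the claim is re-validated against the authoritative record, and a conflict simply triggers another attempt. Randomizing among candidates supplies the decision variability Ian mentions.

```python
import random

hosts = {"compute-1": 2048, "compute-2": 4096}  # authoritative free RAM (MB)


def stale_view():
    # Each scheduler works from a snapshot that may lag reality.
    return dict(hosts)


def try_claim(host, ram_mb):
    # Re-check against the authoritative record at claim time.
    if hosts[host] >= ram_mb:
        hosts[host] -= ram_mb
        return True
    return False


def schedule(ram_mb, max_retries=3):
    for _ in range(max_retries):
        view = stale_view()
        candidates = [h for h, free in view.items() if free >= ram_mb]
        if not candidates:
            break
        # Variability in the decision making reduces collisions between
        # concurrent schedulers.
        host = random.choice(candidates)
        if try_claim(host, ram_mb):
            return host
    raise RuntimeError("no capacity after retries")


placed = schedule(1024)
print(placed, hosts)
```

The closer the snapshots stay to the authoritative state, the fewer retries are needed; that is the efficiency win of staying _close_ to correct.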

> > If, however, you can have 20 schedulers that all take 10ms on average,
> > and have the occasional lock contention for a resource counter resulting
> > in 100ms, now you're at 2000/s minus the lock contention rate. This
> > strategy would scale better with the number of compute nodes, since
> > more nodes means more distinct locks, so you can scale out the number
> > of running servers separate from the number of scheduling requests.
> >
> 
> If you have 20 schedulers that take 1ms on average, and there's absolutely
> no lock contention, then you're at 20,000/s.  (Unfair, granted, since what
> I'm suggesting is more likely to make reject

Re: [openstack-dev] Scheduler proposal

2015-10-10 Thread Clint Byrum
Excerpts from Alec Hothan (ahothan)'s message of 2015-10-09 21:19:14 -0700:
> 
> On 10/9/15, 6:29 PM, "Clint Byrum"  wrote:
> 
> >Excerpts from Chris Friesen's message of 2015-10-09 17:33:38 -0700:
> >> On 10/09/2015 03:36 PM, Ian Wells wrote:
> >> > On 9 October 2015 at 12:50, Chris Friesen wrote:
> >> >
> >> > Has anybody looked at why 1 instance is too slow and what it would
> >> > take to make 1 scheduler instance work fast enough? This does not
> >> > preclude the use of concurrency for finer grain tasks in the background.
> >> >
> >> >
> >> > Currently we pull data on all (!) of the compute nodes out of the
> >> > database via a series of RPC calls, then evaluate the various filters
> >> > in python code.
> >> >
> >> > I'll say again: the database seems to me to be the problem here.  Not to
> >> > mention, you've just explained that they are in practice holding all the
> >> > data in memory in order to do the work, so the benefit we're getting here
> >> > is really an N-to-1-to-M pattern with a DB in the middle (the store-to-DB
> >> > is rather secondary, in fact), and that without incremental updates to
> >> > the receivers.
> >> 
> >> I don't see any reason why you couldn't have an in-memory scheduler.
> >> 
> >> Currently the database serves as the persistent storage for the resource
> >> usage, so if we take it out of the picture I imagine you'd want to have
> >> some way of querying the compute nodes for their current state when the
> >> scheduler first starts up.
> >> 
> >> I think the current code uses the fact that objects are remotable via the
> >> conductor, so changing that to do explicit posts to a known scheduler
> >> topic would take some work.
> >> 
> >
> >Funny enough, I think that's exactly what Josh's "just use Zookeeper"
> >message is about. Except in memory, it is "in an observable storage
> >location".
> >
> >Instead of having the scheduler do all of the compute node inspection
> >and querying though, you have the nodes push their stats into something
> >like Zookeeper or consul, and then have schedulers watch those stats
> >for changes to keep their in-memory version of the data up to date. So
> >when you bring a new one online, you don't have to query all the nodes,
> >you just scrape the data store, which all of these stores (etcd, consul,
> >ZK) are built to support atomically querying and watching at the same
> >time, so you can have a reasonable expectation of correctness.
> >
> >Even if you figured out how to make the in-memory scheduler crazy fast,
> >there's still value in concurrency for other reasons. No matter how
> >fast you make the scheduler, you'll be slave to the response time of
> >a single scheduling request. If you take 1ms to schedule each node
> >(including just reading the request and pushing out your scheduling
> >result!) you will never achieve greater than 1000/s. 1ms is way lower
> >than it's going to take just to shove a tiny message into RabbitMQ or
> >even 0mq.
> 
> That is not what I have seen. Measurements that I did, or that others did,
> show between 5000 and 1 send *per sec* (depending on mirroring, up to 1KB
> msg size) using oslo messaging/kombu over rabbitMQ.

You're quoting throughput of RabbitMQ, but how many threads were
involved? An in-memory scheduler that was multi-threaded would need to
implement synchronization at a fairly granular level to use the same
in-memory store, and we're right back to the extreme need for efficient
concurrency in the design, though with much better latency on the
synchronization.
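The granularity trade-off can be illustrated with a toy sketch (hypothetical names, not the actual scheduler design): a lock per compute node means threads claiming different nodes never contend, unlike a single coarse lock over the whole in-memory store.

```python
import threading

# Fine-grained synchronization: one lock per compute node, so threads
# contend only when they target the same node.
node_locks = {f"compute-{i}": threading.Lock() for i in range(4)}
free_ram = {name: 8192 for name in node_locks}


def claim(node, ram_mb):
    with node_locks[node]:
        if free_ram[node] >= ram_mb:
            free_ram[node] -= ram_mb
            return True
        return False


# Eight concurrent claims of 1024 MB, spread two-per-node across four nodes.
threads = [threading.Thread(target=claim, args=(f"compute-{i % 4}", 1024))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(free_ram)  # each node ends at 8192 - 2*1024 = 6144
```

With a single global lock in place of `node_locks`, every claim would serialize, and adding scheduler threads would stop adding throughput.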

> And this is unmodified/highly unoptimized oslo messaging code.
> If you remove the oslo messaging layer, you get 25000 to 45000 msg/sec with 
> kombu/rabbitMQ (which shows how inefficient the oslo messaging layer itself is)
> 
> > So I'm pretty sure this is o-k for small clouds, but would be
> >a disaster for a large, busy cloud.
> 
> It all depends on how many sched/sec for the "large busy cloud"...
> 

I think there are two interesting things to discern. Of course, the
exact rate would be great to have as a target, but operational security
and just plain secrecy of business models will probably prevent us from
getting at many of these requirements.

The second is the complexity model of scaling. We can just think about
the actual cost benefit of running 1, 3, and more schedulers and come up
with some rough numbers for a lower bound on scheduler performance
that would make sense.

> >
> >If, however, you can have 20 schedulers that all take 10ms on average,
> >and have the occasional lock contention for a resource counter resulting
> >in 100ms, now you're at 2000/s minus the lock contention rate. This
> >strategy would scale better with the number of compute nodes, since
> >more nodes means more distinct locks, so you can scale out the number
> >of running servers

Re: [openstack-dev] Scheduler proposal

2015-10-10 Thread Clint Byrum
Excerpts from Chris Friesen's message of 2015-10-09 23:16:43 -0700:
> On 10/09/2015 07:29 PM, Clint Byrum wrote:
> 
> > Even if you figured out how to make the in-memory scheduler crazy fast,
> > there's still value in concurrency for other reasons. No matter how
> > fast you make the scheduler, you'll be slave to the response time of
> > a single scheduling request. If you take 1ms to schedule each node
> > (including just reading the request and pushing out your scheduling
> > result!) you will never achieve greater than 1000/s. 1ms is way lower
> > than it's going to take just to shove a tiny message into RabbitMQ or
> > even 0mq. So I'm pretty sure this is o-k for small clouds, but would be
> > a disaster for a large, busy cloud.
> >
> > If, however, you can have 20 schedulers that all take 10ms on average,
> > and have the occasional lock contention for a resource counter resulting
> > in 100ms, now you're at 2000/s minus the lock contention rate. This
> > strategy would scale better with the number of compute nodes, since
> > more nodes means more distinct locks, so you can scale out the number
> > of running servers separate from the number of scheduling requests.
> 
> As far as I can see, moving to an in-memory scheduler is essentially 
> orthogonal 
> to allowing multiple schedulers to run concurrently.  We can do both.
> 

Agreed, and I want to make sure we continue to be able to run concurrent
schedulers.

Going in memory won't reduce contention for the same resources. So it
will definitely schedule faster, but it may also serialize with concurrent
schedulers sooner, and thus turn into a situation where scaling out more
nodes means the same, or even less throughput.

Keep in mind, I actually think we give our users _WAY_ too much power
over our clouds, and I actually think we should simply have flavor based
scheduling and let compute nodes grab node reservation requests directly
out of flavor based queues based on their own current observation of
their ability to service them.

But I understand that there are quite a few clouds now that have been
given shiny dynamic scheduling tools and now we have to engineer for
those.
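That pull model can be sketched as follows (illustrative only; all names are hypothetical): compute nodes poll per-flavor queues and claim reservation requests only when their own observed capacity allows it, so no central scheduler is involved.

```python
import queue

# One queue per flavor; the API side just enqueues reservation requests.
flavor_queues = {"m1.small": queue.Queue(), "m1.large": queue.Queue()}
flavor_ram = {"m1.small": 2048, "m1.large": 8192}


class ComputeNode:
    def __init__(self, name, free_ram_mb):
        self.name = name
        self.free_ram_mb = free_ram_mb
        self.claimed = []

    def poll(self):
        # Grab work only for flavors this node can service right now.
        for flavor, q in flavor_queues.items():
            while self.free_ram_mb >= flavor_ram[flavor]:
                try:
                    req = q.get_nowait()
                except queue.Empty:
                    break
                self.free_ram_mb -= flavor_ram[flavor]
                self.claimed.append((flavor, req))


flavor_queues["m1.small"].put("vm-1")
flavor_queues["m1.large"].put("vm-2")

node = ComputeNode("compute-1", free_ram_mb=4096)
node.poll()  # claims vm-1 only; not enough RAM for the m1.large request
print(node.claimed)  # [('m1.small', 'vm-1')]
```

Each node is the sole authority on its own capacity, so there is no shared state to contend over; the cost is that placement policy collapses down to "first capable node to poll wins".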

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-10 Thread Steven Dake (stdake)
Adrian,

What I suggest is to commit a table of contents for a new comprehensive user 
guide.  Then new contributors can put stuff there instead of the quick start 
guide.  Any complexity in the quick start guide can also be transitioned into 
the comprehensive user guide.

So who is going to do the work? :)

Regards
-steve


From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, October 8, 2015 at 1:04 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Steve,

I agree with the concept of a simple quickstart doc, but there also needs to be 
a comprehensive user guide, which does not yet exist. In the absence of the 
user guide, the quick start is the void where this stuff is starting to land. 
We simply need to put together a magnum reference document, and start moving 
content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) <std...@cisco.com> wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn’t to teach people best practices around Magnum.  It is to 
get a developer operational and give them that sense that Magnum can 
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that involving yourself with the project the quickstart guide 
represents is a good use of your limited time on the planet.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback.

https://review.openstack.org/#/c/232175/

In summary, magnum recently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won’t 
reserve any memory for containers with unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below are the positions of both sides:

Pros:
· It is a good practice to always specify the memory size, because 
containers with unspecified memory size won’t have a QoS guarantee.
· The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and trigger scaling 
accordingly. Containers with unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
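The autoscaling concern can be made concrete with a small sketch (illustrative, not the blueprint's actual code): containers with an unspecified memory size contribute 0 to the usage estimate, so the residual capacity is overstated.

```python
def residual_capacity(total_mb, containers):
    # Containers with memory_mb=None are counted as 0, per the concern above,
    # which inflates the apparent free capacity.
    used = sum(c.get("memory_mb") or 0 for c in containers)
    return total_mb - used


containers = [
    {"name": "web", "memory_mb": 512},
    {"name": "worker", "memory_mb": None},  # unspecified: treated as 0
]
print(residual_capacity(4096, containers))  # 3584, though real usage is higher
```

This is why always specifying `--memory` helps the scaling decision: the estimate only tracks reality when every container declares its size.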
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][kolla] Work for new contributors to Kolla

2015-10-10 Thread Steven Dake (stdake)
Hi folks,

We have had about five new ATCs in Kolla over the last month.  The four biggest 
impact areas where new contributors can make an improvement in the code are:

  *   Documentation
  *   Functional gating
  *   Docker containers
  *   Ansible roles for the Docker containers

I have noticed that successful new contributors often start out writing 
documentation for the Kolla project.  Interestingly, most of these new 
contributors are in APAC, so English is a second language.  What is 
fascinating is that these contributors make the best of it and, through our 
native-English-speaking reviewers, produce high quality documentation 
here:

http://docs.openstack.org/developer/kolla/

One immediate area where folks could help out, if they want to improve the 
Kolla project, is writing per-service documentation that follows the style here:

https://github.com/openstack/kolla/blob/master/doc/cinder-guide.rst

We need documentation for the following services:
HA Guide
Design Guide
Keystone
Glance
Nova
Neutron
Murano
Horizon
Heat

If you're not a native English speaker, it is no problem!  One of the core 
reviewers who is a native English speaker will be happy to offer editing 
suggestions on grammar/spelling/syntax. The hard part is the technical aspect 
of things, which is where you can really help out!  Once some documentation 
work is done, new contributors generally have a much better understanding 
of Kolla, which puts you in a position to make bigger impacts in Kolla's 
other 3 focus areas.

If you're a new OpenStack developer looking for a hot OpenStack project to work 
on with a fantastic, friendly, enthusiastic community that will allow you to 
obtain mastery in container-based software deployment, OpenStack deployment, 
Docker, and Ansible, Kolla could use your attention.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-10 Thread Davanum Srinivas
Not implying cassandra is the right option. Just curious about the
assertion.

-- Dims

On Sat, Oct 10, 2015 at 5:53 PM, Davanum Srinivas  wrote:

> Thomas,
>
> i am curious as well. AFAIK, cassandra works well with OpenJDK. Can you
> please elaborate on what your concerns are for #1?
>
> Thanks,
> Dims
>
> On Sat, Oct 10, 2015 at 5:43 PM, Joshua Harlow 
> wrote:
>
>> I'm curious is there any more detail about #1 below anywhere online?
>>
>> Does cassandra use some features of the JVM that the openJDK version
>> doesn't support? Something else?
>>
>> -Josh
>>
>> Thomas Goirand wrote:
>>
>>> On 10/07/2015 07:36 PM, Ed Leafe wrote:
>>>
 Several months ago I proposed an experiment [0] to see if switching
 the data model for the Nova scheduler to use Cassandra as the backend
 would be a significant improvement as opposed to the current design

>>>
>>> This is probably right. I don't know, I'm not an expert in Nova, or its
>>> scheduler. However, to make it possible for us (ie: downstream
>>> distributions and/or OpenStack users) to use Cassandra, you have to
>>> solve one of the below issues:
>>>
>>> 1/ Cassandra developers upstream should start caring about OpenJDK, and
>>> make sure that it is also a good platform for it. They should stop
>>> caring only about the Oracle JVM.
>>>
>>> ... or ...
>>>
>>> 2/ Oracle should make its JVM free software.
>>>
>>> As there is no hope for any of the above, Cassandra is a no-go for
>>> downstream distributions.
>>>
>>> So, by all means, propose a new back-end, implement it, profit. But that
>>> back-end cannot be Cassandra the way it is now.
>>>
>>> Cheers,
>>>
>>> Thomas Goirand (zigo)
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-10 Thread Chris K
+1 for both so +2 :)

-Chris

On Fri, Oct 9, 2015 at 4:26 PM, Jay Faulkner  wrote:

> +1
>
> 
> From: Jim Rollenhagen 
> Sent: Thursday, October 8, 2015 2:47 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [ironic] Nominating two new core reviewers
>
> Hi all,
>
> I've been thinking a lot about Ironic's core reviewer team and how we might
> make it better.
>
> I'd like to grow the team more through trust and mentoring. We should be
> able to promote someone to core based on a good knowledge of *some* of
> the code base, and trust them not to +2 things they don't know about. I'd
> also like to build a culture of mentoring non-cores on how to review, in
> preparation for adding them to the team. Through these pieces, I'm hoping
> we can have a few rounds of core additions this cycle.
>
> With that said...
>
> I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
> have been super high quality, and the quantity is ever-increasing. He's
> also started helping out with some smaller efforts (full tempest, for
> example), and I'd love to see that continue with larger efforts.
>
> I'd also like to nominate John Villalovos (jlvillal). John has been
> reviewing a ton of code and making a real effort to learn everything,
> and keep track of everything going on in the project.
>
> Ironic cores, please reply with your vote; provided feedback is positive,
> I'd like to make this official next week sometime. Thanks!
>
> // jim
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-10 Thread Davanum Srinivas
Thomas,

i am curious as well. AFAIK, cassandra works well with OpenJDK. Can you
please elaborate on what your concerns are for #1?

Thanks,
Dims

On Sat, Oct 10, 2015 at 5:43 PM, Joshua Harlow 
wrote:

> I'm curious is there any more detail about #1 below anywhere online?
>
> Does cassandra use some features of the JVM that the openJDK version
> doesn't support? Something else?
>
> -Josh
>
> Thomas Goirand wrote:
>
>> On 10/07/2015 07:36 PM, Ed Leafe wrote:
>>
>>> Several months ago I proposed an experiment [0] to see if switching
>>> the data model for the Nova scheduler to use Cassandra as the backend
>>> would be a significant improvement as opposed to the current design
>>>
>>
>> This is probably right. I don't know, I'm not an expert in Nova, or its
>> scheduler. However, to make it possible for us (ie: downstream
>> distributions and/or OpenStack users) to use Cassandra, you have to
>> solve one of the below issues:
>>
>> 1/ Cassandra developers upstream should start caring about OpenJDK, and
>> make sure that it is also a good platform for it. They should stop
>> caring only about the Oracle JVM.
>>
>> ... or ...
>>
>> 2/ Oracle should make its JVM free software.
>>
>> As there is no hope for any of the above, Cassandra is a no-go for
>> downstream distributions.
>>
>> So, by all means, propose a new back-end, implement it, profit. But that
>> back-end cannot be Cassandra the way it is now.
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-10 Thread Joshua Harlow

I'm curious is there any more detail about #1 below anywhere online?

Does cassandra use some features of the JVM that the openJDK version 
doesn't support? Something else?


-Josh

Thomas Goirand wrote:

On 10/07/2015 07:36 PM, Ed Leafe wrote:

Several months ago I proposed an experiment [0] to see if switching
the data model for the Nova scheduler to use Cassandra as the backend
would be a significant improvement as opposed to the current design


This is probably right. I don't know, I'm not an expert in Nova, or its
scheduler. However, to make it possible for us (ie: downstream
distributions and/or OpenStack users) to use Cassandra, you have to
solve one of the below issues:

1/ Cassandra developers upstream should start caring about OpenJDK, and
make sure that it is also a good platform for it. They should stop
caring only about the Oracle JVM.

... or ...

2/ Oracle should make its JVM free software.

As there is no hope for any of the above, Cassandra is a no-go for
downstream distributions.

So, by all means, propose a new back-end, implement it, profit. But that
back-end cannot be Cassandra the way it is now.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-10 Thread Thomas Goirand
On 10/07/2015 07:36 PM, Ed Leafe wrote:
> Several months ago I proposed an experiment [0] to see if switching
> the data model for the Nova scheduler to use Cassandra as the backend
> would be a significant improvement as opposed to the current design

This is probably right. I don't know, I'm not an expert in Nova, or its
scheduler. However, to make it possible for us (ie: downstream
distributions and/or OpenStack users) to use Cassandra, you have to
solve one of the below issues:

1/ Cassandra developers upstream should start caring about OpenJDK, and
make sure that it is also a good platform for it. They should stop
caring only about the Oracle JVM.

... or ...

2/ Oracle should make its JVM free software.

As there is no hope for any of the above, Cassandra is a no-go for
downstream distributions.

So, by all means, propose a new back-end, implement it, profit. But that
back-end cannot be Cassandra the way it is now.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] 7.0.0 (liberty) release: which modules?

2015-10-10 Thread Emilien Macchi
There are some modules that are not tested (no beaker and no integration CI).
I'm not sure it's a good idea to include them in the 7.0.0 release.

Modules affected:
* gnocchi
* mistral
* murano
* aodh
* barbican
* zaqar
* tuskar

If someone wants a 7.0.0 release for these modules, please push some
functional tests (for at least one distro); otherwise, we should make an
effort during the next cycle to add testing so that we cover all modules.

Of course, discussion is open.
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zaqar] UI for pools and flavors

2015-10-10 Thread Shifali Agrawal
Greetings!

I have prepared mock-ups [1][2] for a Zaqar UI, at present focusing only on
bringing pools and flavors onto the dashboard. I am sharing two mock-ups for
the same purpose: allowing all operations related to them (CRUD).

It would be great to know whether the design is satisfactory to Zaqar
developers/users or whether they want some amendments. Also let me know if any
information about pools/flavors is missing and needs to be added.

The first mock-up [1] shows pools information by default, and will show
flavors if the user clicks on the flavors button in the second menu bar at the
top.

[1]: http://tinyurl.com/o2b9q6r
[2]: http://tinyurl.com/pqwgwhl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-10 Thread Lance Bragstad
On Sat, Oct 10, 2015 at 8:07 AM, Boris Bobrov  wrote:

> On Saturday 10 October 2015 08:42:10 Shinobu Kinjo wrote:
> > So what's the procedure?
>
> You go to #openstack-keystone on Friday, choose a bug, and talk to one of
> the core reviewers. After talking to them, fix the bug.
>

Wash, rinse, repeat? ;)

Looking forward to it, I think this is a much needed pattern!

>
> > Shinobu
> >
> > - Original Message -
> > From: "Adam Young" 
> > To: openstack-dev@lists.openstack.org
> > Sent: Saturday, October 10, 2015 12:11:35 PM
> > Subject: Re: [openstack-dev] [keystone] Let's get together and fix all
> the
> > bugs
> >
> > On 10/09/2015 11:04 PM, Chen, Wei D wrote:
> >
> > Great idea! A core reviewer’s advice is definitely important and valuable
> > before proposing a fix. I was always thinking it would help save us if we
> > can get some agreement at some point.
> >
> > Best Regards,
> >
> > Dave Chen
> >
> > From: David Stanek <dsta...@dstanek.com>
> > Sent: Saturday, October 10, 2015 3:54 AM
> > To: OpenStack Development Mailing List
> > Subject: [openstack-dev] [keystone] Let's get together and fix all the bugs
> >
> > I would like to start running a recurring bug squashing day. The general
> > idea is to get more focus on bugs and stability. You can find the details
> > here: https://etherpad.openstack.org/p/keystone-office-hours Can we start
> > with Bug 968696?
>
> --
> Best wishes,
> Boris
>


Re: [openstack-dev] [All][Elections] Results of the TC Election

2015-10-10 Thread Amrith Kumar
Congratulations to the newly elected members of the TC, and thanks to all who 
ran for election, Mike, Joe, Clint, Steven, Joshua, Julien, Ed, Edgar, Carol, 
Justin, Rochelle and Maish! 

Thanks to Tristan and the rest of the election team for a smooth process. Now, 
on to Mitaka!

-amrith

> -Original Message-
> From: Tristan Cacqueray [mailto:tdeca...@redhat.com]
> Sent: Friday, October 09, 2015 8:05 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [All][Elections] Results of the TC Election
> 
> Please join me in congratulating the 6 newly elected members of the TC.
> 
> * Doug Hellmann (dhellmann)
> * Monty Taylor (mordred)
> * Anne Gentle (annegentle)
> * Sean Dague (sdague)
> * Russell Bryant (russellb)
> * Kyle Mestery (mestery)
> 
> Full results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ef58718618691a0
> 
> Thank you to all candidates who stood for election; having a good group of
> candidates helps engage the community in our democratic process.
> 
> Thank you to all who voted and who encouraged others to vote. We need to
> ensure your voice is heard.
> 
> Thanks to my fellow election official, Tony Breeds, I appreciate your help and
> perspective.
> 
> Thank you for another great round.
> 
> Here's to Mitaka,
> Tristan
> 



Re: [openstack-dev] [All][Elections] Results of the TC Election

2015-10-10 Thread Ed Leafe
On Oct 9, 2015, at 7:05 PM, Tristan Cacqueray  wrote:

> Please join me in congratulating the 6 newly elected members of the TC.
> 
> * Doug Hellmann (dhellmann)
> * Monty Taylor (mordred)
> * Anne Gentle (annegentle)
> * Sean Dague (sdague)
> * Russell Bryant (russellb)
> * Kyle Mestery (mestery)

Congratulations to all of you - it's well deserved. And it makes losing feel 
much better knowing how well we'll all be represented on the TC.


-- Ed Leafe









Re: [openstack-dev] [TripleO] Auto-abandon bot

2015-10-10 Thread Jeremy Stanley
On 2015-10-09 17:10:15 -0500 (-0500), Ben Nemec wrote:
> As discussed in the meeting a week or two ago, we would like to bring
> back the auto-abandon functionality for old, unloved gerrit reviews.
[...]
> -WIP patches are never abandoned
[...]
> -Patches that are failing CI for over a month on the same patch set
> (regardless of any followup comments - the intent is that patches
> expected to fail CI should be marked WIP).
[...]

Have you considered the possibility of switching stale changes to
WIP instead of abandoning them?

I usually have somewhere around 50-100 open changes submitted (often
more), and for some of those I might miss failures or negative
review comments for a month or so at a time depending on what else I
have going on. It's very easy to lose track of a change if someone
abandons it for me.
-- 
Jeremy Stanley

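The policy sketched in this thread (WIP patches are never touched; changes failing CI on the same patch set for over a month get acted on) plus Jeremy's suggestion of switching stale changes to WIP rather than abandoning them reduces to a small selection rule. A minimal illustration in Python, using hypothetical change dicts rather than the real Gerrit API:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)

def changes_to_mark_wip(changes, now=None):
    """Pick open changes to flip to WIP instead of abandoning:
    skip anything already WIP, and select changes whose current
    patch set has been failing CI for longer than STALE_AFTER."""
    now = now or datetime.utcnow()
    selected = []
    for change in changes:
        if change.get("wip"):            # WIP patches are never touched
            continue
        if not change.get("ci_failing"):  # only act on failing changes
            continue
        if now - change["last_patchset"] > STALE_AFTER:
            selected.append(change["id"])
    return selected
```

The `wip`, `ci_failing`, and `last_patchset` keys are assumptions for the sketch; a real bot would derive them from Gerrit query results and then set a Workflow vote rather than abandoning.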


Re: [openstack-dev] [Zaqar][cli][openstackclient] conflict in nova flavor and zaqar flavor

2015-10-10 Thread Shifali Agrawal
All right, thanks for responses, will code accordingly :)

On Wed, Oct 7, 2015 at 9:31 PM, Doug Hellmann  wrote:

> Excerpts from Steve Martinelli's message of 2015-10-06 16:09:32 -0400:
> >
> > Using `message flavor` works for me, and having two words is just fine.
>
> It might even be good to change "flavor" to "server flavor" (keeping
> flavor as a backwards-compatible alias, of course).
>
> Doug
>
> >
> > I'm in the process of collecting all of the existing "object" words and
> > putting them online; there's a lot of them. Hopefully this will reduce
> > the collisions in the future.
> >
> > Thanks,
> >
> > Steve Martinelli
> > OpenStack Keystone Core
> >
> >
> >
> > From: Shifali Agrawal
> > To: openstack-dev@lists.openstack.org
> > Date: 2015/10/06 03:40 PM
> > Subject: [openstack-dev] [Zaqar][cli][openstack-client] conflict in
> nova
> > flavor and zaqar flavor
> >
> >
> >
> > Greetings,
> >
> > I am implementing CLI commands for Zaqar flavors; the command should be
> > like:
> >
> > "openstack flavor "
> >
> > But there is already the same command present for Nova flavors. After
> > discussing with Zaqar devs we thought to change all Zaqar commands such
> > that they include the `message` word after openstack; thus the above Zaqar
> > flavor command will become:
> >
> > "openstack message flavor "
> >
> > Do openstack-client devs have something to say about this? Or do they
> > also feel it's good to add the `message` word to all Zaqar CLI
> > commands?
> >
> > Already-existing Zaqar commands will keep working with a deprecation
> > message/warning, and I will also implement them all to work with the
> > `message` word; all new commands will be implemented so that they work
> > only with the `message` word.
>
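The transition plan above (legacy Zaqar command names keep working with a deprecation warning while the new `message`-prefixed names become canonical) amounts to an alias table plus a warning. An illustrative sketch, not the real openstackclient entry-point machinery:

```python
import warnings

# Hypothetical mapping of legacy Zaqar command names to their new
# "message"-prefixed forms; the real commands are registered as
# openstackclient plugin entry points, not in a dict like this.
RENAMED = {
    "flavor list": "message flavor list",
    "flavor create": "message flavor create",
}

def resolve_command(name):
    """Return the canonical command name, warning when a deprecated
    alias is used so existing scripts keep working for a cycle."""
    if name in RENAMED:
        new = RENAMED[name]
        warnings.warn("'%s' is deprecated; use '%s'" % (name, new),
                      DeprecationWarning)
        return new
    return name
```

Keeping the alias for a deprecation cycle lets existing scripts run unchanged while nudging users toward the unambiguous names.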


Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-10 Thread Boris Bobrov
On Saturday 10 October 2015 08:42:10 Shinobu Kinjo wrote:
> So what's the procedure?

You go to #openstack-keystone on Friday, choose a bug, and talk to one of the
core reviewers. After talking to them, fix the bug.

> Shinobu
> 
> - Original Message -
> From: "Adam Young" 
> To: openstack-dev@lists.openstack.org
> Sent: Saturday, October 10, 2015 12:11:35 PM
> Subject: Re: [openstack-dev] [keystone] Let's get together and fix all the
> bugs
> 
> On 10/09/2015 11:04 PM, Chen, Wei D wrote:
> 
> 
> 
> 
> 
> Great idea! A core reviewer's advice is definitely important and valuable
> before proposing a fix. I have always thought it would save us if we
> can reach some agreement at some point.
> 
> 
> 
> 
> 
> Best Regards,
> 
> Dave Chen
> 
> 
> 
> 
> From: David Stanek [ mailto:dsta...@dstanek.com ]
> Sent: Saturday, October 10, 2015 3:54 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [keystone] Let's get together and fix all the bugs
> 
> 
> 
> 
> 
> I would like to start running a recurring bug squashing day. The general
> idea is to get more focus on bugs and stability. You can find the details
> here: https://etherpad.openstack.org/p/keystone-office-hours Can we start
> with Bug 968696?

-- 
Best wishes,
Boris



Re: [openstack-dev] Different OpenStack components

2015-10-10 Thread Jeremy Stanley
On 2015-10-09 23:13:48 + (+), Fox, Kevin M wrote:
> On 2015-10-09 22:20:43 + (+), Amrith Kumar wrote:
> [...]
> > A google search produced this as result #2.
> > 
> > http://governance.openstack.org/reference/projects/index.html
> > 
> > Looks pretty complete to me.
> 
> The official list is
> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

They're both official. The former is automatically rendered and
published from the latter at every update by a CI job.
-- 
Jeremy Stanley



Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-10 Thread Shinobu Kinjo
So what's the procedure?

Shinobu

- Original Message -
From: "Adam Young" 
To: openstack-dev@lists.openstack.org
Sent: Saturday, October 10, 2015 12:11:35 PM
Subject: Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

On 10/09/2015 11:04 PM, Chen, Wei D wrote: 





Great idea! A core reviewer's advice is definitely important and valuable
before proposing a fix. I have always thought it would save us if we can
reach some agreement at some point.





Best Regards, 

Dave Chen 




From: David Stanek [ mailto:dsta...@dstanek.com ] 
Sent: Saturday, October 10, 2015 3:54 AM 
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [keystone] Let's get together and fix all the bugs 





I would like to start running a recurring bug squashing day. The general idea 
is to get more focus on bugs and stability. You can find the details here: 
https://etherpad.openstack.org/p/keystone-office-hours 
Can we start with Bug 968696? 

-- 


David 
blog: http://www.traceback.org 
twitter: http://twitter.com/dstanek 


www: http://dstanek.com 




Re: [openstack-dev] [keystone] Let's get together and fix all thebugs

2015-10-10 Thread Brad Topol
Great idea David!!!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:    Samuel de Medeiros Queiroz 
To:      "OpenStack Development Mailing List (not for usage questions)"
Date:    10/10/2015 05:57 AM
Subject: Re: [openstack-dev] [keystone] Let's get together and fix all
the bugs



Hi,

Thanks for the initiative, David.

Putting more focus on solving bugs is a great idea, as it will both improve
the stability of the project and help new contributors to get involved.

Count me in!

Regards,
Samuel


On Sat, Oct 10, 2015 at 12:11 AM, Adam Young  wrote:
  On 10/09/2015 11:04 PM, Chen, Wei D wrote:


Great idea! A core reviewer's advice is definitely important and
valuable before proposing a fix. I have always thought it would
save us if we can reach some agreement at some point.








Best Regards,


Dave Chen





From: David Stanek [mailto:dsta...@dstanek.com]
Sent: Saturday, October 10, 2015 3:54 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [keystone] Let's get together and fix all
the bugs





I would like to start running a recurring bug squashing day. The
general idea is to get more focus on bugs and stability. You can
find the details here:
https://etherpad.openstack.org/p/keystone-office-hours
  Can we start with Bug 968696?








--


David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


www: http://dstanek.com







Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-10 Thread Jeremy Stanley
On 2015-10-09 18:51:51 + (+), Tim Bell wrote:
> There is a need to distinguish between server side py26 support
> which is generally under the control of the service provider and
> py26 support on the client side. For a service provider to push
> all of their hypervisors and service machines to RHEL 7 is under
> their control but requiring all of their users to do the same is
> much more difficult.
[...]

Agreed, this is why the master branches of clients/libraries opted
to continue testing with Python 2.6 throughout the supported
lifetime of the stable/juno branches. But much like for the service
providers and users, continuing to run older Linux distributions
(CentOS 6.x, Ubuntu 12.04.x) in our test infrastructure comes at a
significant cost and at some point we need to free up our limited
sysadmin resources to focus on better support for more recent distro
releases.

Once new patches to OpenStack projects are no longer actively tested
on these older platforms, we have to expect that the ability to run
new versions of the projects on them will quickly vanish. However,
users on those older distro releases can opt to continue using
whatever versions last supported them or perhaps fall back on the
versions provided by their distro's package manager.
-- 
Jeremy Stanley

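One concrete way a client library signals such a support drop is through its packaging metadata: once Python 2.6 is no longer tested, its classifier goes away and users on older distros pin the last release that still advertised it. A hedged sketch (the real clients declare these through pbr and setup.cfg, so this plain list is illustrative only):

```python
# Illustrative trove classifiers for a client library after dropping
# Python 2.6; in the actual projects this metadata lives in setup.cfg
# and is handled by pbr.
CLASSIFIERS = [
    "Programming Language :: Python :: 2.7",
    "Programming Language :: Python :: 3.4",
    # "Programming Language :: Python :: 2.6" removed once stable/juno
    # support ends; users on CentOS 6 / Ubuntu 12.04 pin older releases.
]

def supports(version):
    """Check whether a version string appears in the classifiers."""
    return any(c.endswith(version) for c in CLASSIFIERS)
```

Tools like pip (with `python_requires` in later packaging standards) can then refuse to install releases on unsupported interpreters instead of failing at runtime.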


Re: [openstack-dev] [Horizon] Suggestions for handling new panels and refactors in the future

2015-10-10 Thread Rob Cresswell (rcresswe)
Personally, I'd still vote for feature branching: I'd really love to see
an Angular Panels feature branch worked on separately and then merged back
in; effectively, Travis' 2nd option. Pick 3(?) panels, like Users, Images,
and the System Information one I've seen around, iterate quickly, merge in
M-1/M-2. There's not really any risk there, and it's very easy for people
to toggle on or off in their existing workflows. Also, there isn't any
spin-up time in terms of setting up new repos and adding new core groups
through infra etc. The ideal time to do this would be at the start of the
cycle, too.

New repos sounds like too big a split, and I'm worried about where that
leaves the community as a whole. Also, I'm failing to see how mixing the
angular work in with plugin challenges is going to speed it up… sounds
like a bit of a recipe for disaster, IMO. We should keep those concerns
separate for now; there are multiple plugins being written in both
AngularJS and Python already, so I think issues will arise organically as
is.

Rob



On 09/10/2015 21:35, "Tripp, Travis S"  wrote:

>Hi Doug!
>
>I think this is a great discussion topic and you summarize your points
>very nicely!
>
> I wish you'd responded to this thread, though:
>https://openstack.nimeyo.com/58582/openstack-dev-horizon-patterns-for-angu
>lar-panels, because it is talking about the same problem. This is option
>3 I mentioned there and I do think this is still a viable option to
>consider, but we should discuss all the options.
>
>Please consider that thread as my initial response to your email… and
>let's keep discussing!
>
>Thanks,
>Travis
>
>From: Douglas Fish
>Reply-To: OpenStack List
>Date: Friday, October 9, 2015 at 8:42 AM
>To: OpenStack List
>Subject: [openstack-dev] [Horizon] Suggestions for handling new panels
>and refactors in the future
>
>I have two suggestions for handling both new panels and refactoring
>existing panels that I think could benefit us in the future:
>1) When we are creating a panel that's a major refactor of an existing one,
>it should be a new separate panel, not a direct code replacement of the
>existing panel
>2) New panels (include the refactors of existing panels) should be
>developed in an out of tree gerrit repository.
>
>Why make refactors a separate panel?
>
>I was taken a bit off guard after we merged the Network
>Topology->Curvature improvement: this was a surprise to some people
>outside of the Horizon community (though it had been discussed within
>Horizon for as long as I've been on the project). In retrospect, I think
>it would have been better to keep both the old Network Topology and new
>curvature based topology in our Horizon codebase. Doing so would have
>allowed operators to perform A-B/ Red-Black testing if they weren't
>immediately convinced of the awesomeness of the panel. It also would have
>allowed anyone with a customization of the Network Topology panel to have
>some time to configure their Horizon instance to continue to use the
>Legacy panel while they updated their customization to work with the new
>panel.
>
>Perhaps we should treat panels more like an API element and take them
>through a deprecation cycle before removing them completely. Giving time
>for customizers to update their code is going to be especially important
>as we build angular replacements for python panels. While we have much
>better plugin support for angular there is still a learning curve for
>those developers.
>
>Why build refactors and new panels out of tree?
>
>First off, it appears to me trying to build new panels in tree has been
>fairly painful. I've seen big long lived patches pushed along without
>being merged. It's quite acceptable and expected to quickly merge
>half-complete patches into a brand new repository - but you can't behave
>that way working in tree in Horizon. Horizon needs to be kept
>production/operator ready. External repositories do not. Merging code
>quickly can ease collaboration and avoid this kind of long lived patch
>set.
>
>Secondly, keeping new panels/plugins in a separate repository
>decentralizes decisions about which panels are "ready" and which aren't.
>If one group feels a plugin is "ready" they can make it their default
>version of the panel, and perhaps put resources toward translating it. If
>we develop these panels in-tree we need to make a common decision about
>what "ready" means - and once it's in everyone who wants a translated
>Horizon will need to translate it.
>
>Finally, I believe developing new panels out of tree will help improve
>our plugin support in Horizon. It's this whole "eating your own dog food"
>idea. As soon as we start using our own Horizon plugin mechanism for our
>own development we are going to become aware of its shortcomings (like
>quotas) and will be sufficiently motivated to fix them.
>
>Looking forward to further discussion and other ideas on this!
>
>Doug Fish
>
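Doug's suggestion of keeping both the legacy and refactored panels available maps naturally onto Horizon's pluggable-settings ("enabled") files, where an operator picks which implementation a panel slot loads. A rough sketch, with the file name and module paths as assumptions rather than actual Horizon code:

```python
# _3030_project_network_topology.py: an illustrative Horizon "enabled"
# file. Keeping both panel versions in tree would let operators switch
# between a legacy panel and its refactor without carrying local patches.
PANEL = 'network_topology'
PANEL_DASHBOARD = 'project'
PANEL_GROUP = 'network'

# Hypothetical module paths; flip ADD_PANEL between the two (and restart
# Horizon) to A/B test the old and new implementations.
LEGACY = ('openstack_dashboard.dashboards.project.'
          'network_topology.panel.NetworkTopology')
CURVATURE = ('openstack_dashboard.dashboards.project.'
             'curvature_topology.panel.NetworkTopology')

ADD_PANEL = CURVATURE
```

This is also the mechanism customizers would use during a deprecation cycle: keep pointing at the legacy panel until their customizations work against the new one.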

Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-10 Thread Samuel de Medeiros Queiroz
Hi,

Thanks for the initiative, David.

Putting more focus on solving bugs is a great idea, as it will both improve
the stability of the project and help new contributors to get involved.

Count me in!

Regards,
Samuel


On Sat, Oct 10, 2015 at 12:11 AM, Adam Young  wrote:

> On 10/09/2015 11:04 PM, Chen, Wei D wrote:
>
> Great idea! A core reviewer's advice is definitely important and
> valuable before proposing a fix. I have always thought it would save
> us if we can reach some agreement at some point.
>
>
>
>
>
> Best Regards,
>
> Dave Chen
>
>
>
> *From:* David Stanek [mailto:dsta...@dstanek.com ]
> *Sent:* Saturday, October 10, 2015 3:54 AM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [keystone] Let's get together and fix all the
> bugs
>
>
>
> I would like to start running a recurring bug squashing day. The general
> idea is to get more focus on bugs and stability. You can find the details
> here: https://etherpad.openstack.org/p/keystone-office-hours
>
> Can we start with Bug 968696?
>
>
>
>
>
> --
>
> David
> blog: http://www.traceback.org
> twitter: http://twitter.com/dstanek
>
> www: http://dstanek.com
>
>


[openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-10 Thread Germy Lure
Hi all,

After restarting, agents reload data from Neutron via RPC. What about
3rd-party controllers? They can only re-gather data via the NBI. Right?

Is it possible to provide some mechanism for those controllers and agents
to sync data? Or is there something else I missed?

Thanks
Germy


[openstack-dev] help for test

2015-10-10 Thread 方金星
help for test


[openstack-dev] [neutron]Anyone tried to mix-use openstack components or projects?

2015-10-10 Thread Germy Lure
Hi all,

As you know, OpenStack projects are developed separately, and
theoretically people can create networks with a Kilo-version Neutron for
a Havana-version Nova.

Has anyone tried it?
Do we have some pages showing which combinations can work together?

Thanks.
Germy


Re: [openstack-dev] [nova-compute][nova][libvirt] Extending Nova-Compute for image prefetching

2015-10-10 Thread Alberto Geniola
Hi Michael,

and thank you for your answer.

Indeed, what I want to do is add methods to the ImageCache.py module
(listing, adding, deleting). So far, this module only takes care of image
deletion: it represents the "cache" of images. Now, I want to populate
the cache with some images on the hypervisor (as you mention) without
having any instance running them yet. The methods I would like to add
should call the appropriate method on the hypervisor driver (let's say
libvirt) to trigger the image download/creation without starting an
instance (I guess something like calling _create_image() should do the
trick).

Your question is a good one: how will a user be able to trigger this
image caching mechanism?
My idea is to extend the HTTP Nova Compute API. I would like to add a
resource, let's say "CachedImages", to the API tree. In this way,
interacting via CRUD operations we should be able to CALL/CAST RPC APIs
and interact with the imagecache module.

Is it clearer now? Do you see any problem in this approach?

Regards,

Alberto.




On Thu, Oct 8, 2015 at 11:45 PM, Michael Still  wrote:

> I think I'd rephrase your definition of pre-fetched to be honest --
> something more like "images on this hypervisor node without a currently
> running instance". So, your operations would become:
>
>  - trigger an image prefetching
>  - list unused base images (and perhaps when they were last used)
>  - delete an unused image
>
> All of that would need to tie into the image cache management code so that
> it's not stomping on your images. In fact, you're probably best off adding
> all of this as tweaks to the image cache manager anyway.
>
> One question though -- who is calling these APIs? Are you adding a central
> service to orchestrate these calls?
>
> Michael
>
>
>
> On Thu, Oct 8, 2015 at 10:50 PM, Alberto Geniola  > wrote:
>
>> Hi all,
>>
>> I'm considering extending the Nova-Compute API in order to provide
>> image-prefetching capabilities to OS.
>>
>> In order to allow image prefetching, I ended up with the need to add
>> three different APIs on the nova-compute nodes:
>>
>>   1. Trigger an image prefetching
>>
>>   2. List prefetched images
>>
>>   3. Delete a prefetched image
>>
>>
>>
>> Regarding point 1, I saw I can re-use the libvirt driver function
>> _create_image() to trigger the image prefetching. However, this approach
>> will not store any information about the fetched image locally. This leads
>> to an issue: how do I retrieve a list of already-fetched images? A quick
>> and simple possibility would be having a local file storing information
>> about the fetched images. Would that be acceptable? Is there any best
>> practice in the OS community?
>>
>>
>>
>> Any ideas?
>>
>>
>> Ty,
>>
>> Al.
>>
>> --
>> Dott. Alberto Geniola
>>
>>   albertogeni...@gmail.com
>>   +39-346-6271105
>>   https://www.linkedin.com/in/albertogeniola
>>
>> Web: http://www.hw4u.it
>>
>>
>>
>
>
> --
> Rackspace Australia
>
>
>


-- 
Dott. Alberto Geniola

  albertogeni...@gmail.com
  +39-346-6271105
  https://www.linkedin.com/in/albertogeniola

Web: http://www.hw4u.it
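The "local file storing information about the fetched images" discussed in this thread could be as small as a JSON manifest supporting the three operations (add on prefetch, list, delete). A minimal sketch under that assumption - this is not the actual ImageCache code, and it ignores the locking and concurrency a real nova-compute would need:

```python
import json
import os

class PrefetchManifest:
    """Minimal sketch of a local manifest tracking prefetched images.
    A real implementation would live alongside the image cache manager
    so periodic cleanup does not evict prefetched entries."""

    def __init__(self, path):
        self.path = path

    def _load(self):
        # Missing file means nothing has been prefetched yet.
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

    def _save(self, data):
        with open(self.path, "w") as f:
            json.dump(data, f)

    def add(self, image_id, backing_file):
        """Record an image after the driver finishes fetching it."""
        data = self._load()
        data[image_id] = backing_file
        self._save(data)

    def list(self):
        """Return the recorded image ids, sorted for stable output."""
        return sorted(self._load())

    def delete(self, image_id):
        """Forget an image (the backing file is removed separately)."""
        data = self._load()
        data.pop(image_id, None)
        self._save(data)
```

The three methods line up with the three proposed nova-compute APIs (trigger prefetch, list prefetched images, delete a prefetched image); the backing-file paths stored here are illustrative.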