Re: [openstack-dev] [Congress] meeting time change

2015-08-02 Thread Rui Chen
Converted to Asian time zones, the new time is easy for us to remember :)

For CST (UTC+8:00):
Thursday 08:00 AM

For JST (UTC+9:00):
Thursday 09:00 AM

http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150806T00&p1=1440


2015-08-01 1:20 GMT+08:00 Tim Hinrichs t...@styra.com:

 Peter pointed out that no one uses #openstack-meeting-2.  So we'll go with
 #openstack-meeting.  Here are the updated meeting details.

 Room: #openstack-meeting
 Time: Wednesday 5p Pacific = Thursday midnight UTC

 There's a change out for review that will update the meeting website once
 it merges.
 http://eavesdrop.openstack.org/#Congress_Team_Meeting
 https://review.openstack.org/#/c/207981/

 Tim

 On Fri, Jul 31, 2015 at 9:24 AM Tim Hinrichs t...@styra.com wrote:

 Hi all,

 We managed to find a day/time where all the active contributors can
 attend (without being up too early/late).  The room, day, and time have all
 changed.

 Room: #openstack-meeting-2
 Time: Wednesday 5p Pacific = Thursday midnight UTC

 Next week we begin with this new schedule.

 And don't forget that next week Thu/Fri is our Mid-cycle sprint.  Hope to
 see you there!

 Tim


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][Fernet]multisite identity service management

2015-08-02 Thread joehuang
Hi,

Glad to know you guys are talking about key distribution and rotation for the 
Fernet token. Hans and I did a prototype for multisite identity service 
management and ran into a similar issue.

The use case is: a user should, using a single authentication point, be able to 
manage virtual resources spread over multiple OpenStack regions 
(https://etherpad.opnfv.org/p/multisite_identity_management).

We prototyped Fernet tokens with multiple Keystone clusters for multiple 
OpenStack instances installed across multiple sites: "write" is only allowed in 
the master Keystone cluster, and the slave Keystone clusters are read-only 
(https://github.com/hafe/dockers; note that the slave Galera cluster should be 
configured with replicate_do_db=keystone, not binlog_do_db=keystone. Hans may 
not have updated the script yet. The prototype is for candidate solution 2).
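
For reference, a minimal sketch of the slave-side MySQL setting mentioned above
(the server-id and file layout are placeholders; the exact file depends on the
Galera packaging you use):

    [mysqld]
    server-id       = 2          # placeholder, unique per node
    read_only       = 1          # slave cluster is read-only
    # Filter on the replication (slave) side; binlog_do_db only filters what
    # the master writes to its own binary log, which is not what we want here.
    replicate_do_db = keystone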

From the prototype we found that Fernet token validation can be done successfully 
by the local Keystone server against the async-replicated DB. This means that if 
we have many sites with OpenStack installed, we can deploy a fully distributed 
Keystone service in each site and validate tokens locally, achieving both high 
performance and high availability.

After the prototype, I think candidate solution 3 would be the better solution 
for multisite identity service management.

“Candidate solution 3”: distributed Keystone service with Fernet tokens + async 
replication (star mode).
One master Keystone cluster with Fernet tokens spans two sites (for site-level 
high availability); every other site is installed with at least two slave nodes, 
each configured with async DB replication from the master cluster members: one 
slave replicates from the master node in site 1, the other from the master node 
in site 2.

Only the master cluster nodes are allowed to write; the other slave nodes wait 
for replication from the master cluster members (with very little delay).

Pros.
1) Why a cluster in the master sites? With several master nodes in the cluster, 
more slaves can run async replication from it in parallel.
2) Why two sites for the master cluster? To provide higher (site-level) 
reliability for write requests.
3) Why multiple slaves in the other sites? A slave has no knowledge of the other 
slaves, so multiple slaves in one site are easier to manage than a cluster, and 
the slaves work independently while still providing multi-instance redundancy 
(like a cluster, but independent).

Cons. The distribution/rotation of the keys still has to be managed.



I appreciate the newly introduced Fernet token very much for addressing the 
multi-site cloud identity management scenario, but it brings a new challenge: how 
to handle key distribution and rotation in a multi-site cloud. Should key 
distribution/rotation management be the responsibility of a new service or of 
Keystone itself? It's tough to depend on scripts to manage many sites (lots of 
sites, not only 3 or 5).

Best Regards
Chaoyi Huang ( Joe Huang )

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Tuesday, July 28, 2015 3:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys



On Mon, Jul 27, 2015 at 2:03 PM, Clint Byrum 
cl...@fewbar.commailto:cl...@fewbar.com wrote:
Excerpts from Dolph Mathews's message of 2015-07-27 11:48:12 -0700:
 On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum 
 cl...@fewbar.commailto:cl...@fewbar.com wrote:

  Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
   Greetings!
  
   I'd like to discuss pro's and contra's of having Fernet encryption keys
   stored in a database backend.
   The idea itself emerged during discussion about synchronizing rotated
  keys
   in HA environment.
   Now Fernet keys are stored in the filesystem that has some availability
   issues in unstable cluster.
   OTOH, making SQL highly available is considered easier than that for a
   filesystem.
  
 
  I don't think HA is the root of the problem here. The problem is
  synchronization. If I have 3 keystone servers (n+1), and I rotate keys on
  them, I must very carefully restart them all at the exact right time to
  make sure one of them doesn't issue a token which will not be validated
  on another. This is quite a real possibility because the validation
  will not come from the user, but from the service, so it's not like we
  can use simple persistence rules. One would need a layer 7 capable load
  balancer that can find the token ID and make sure it goes back to the
  server that issued it.
 

 This is not true (or if it is, I'd love see a bug report). keystone-manage
 fernet_rotate uses a three phase rotation strategy (staged - primary -
 secondary) that allows you to distribute a staged key (used only for token
 validation) throughout your cluster before it becomes a primary key (used
 for token creation and validation) anywhere. 
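
 To illustrate the three phases with the on-disk layout (a rough sketch of the
 key repository, e.g. /etc/keystone/fernet-keys/, where file names are just the
 key indices):

     # after initial setup:    0 (staged)        1 (primary)
     # after one rotation:     0 (new staged)    1 (secondary)    2 (primary)
     # after another rotation: 0 (new staged)    1,2 (secondary)  3 (primary)
     #
     # Key 0 can be distributed to every node while it is still only used for
     # validation; a later rotation promotes it to the primary (signing) key.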

Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-02 Thread Joshua Harlow

Daniel Comnea wrote:

 From the operators' point of view I'd love to see less technology
proliferation in OpenStack; if you wear the developer hat please don't
be selfish, take the others into account :)

ZK is a robust technology but hey, it is a beast like Rabbit; there is a lot
to massage, and over 2 data centers ZK is not very efficient.


I very much understand the operator view here. IMHO, judging by the current state 
described in http://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/, the 
operators of Cinder are in a much worse boat right now, and adding a robust 
technology that could help the current state doesn't exactly seem that bad.


IMHO if you are planning to run (or are running) a cloud, you are likely 
going to have to run zookeeper or a similar service soon if you 
aren't already, because most cloudy projects already depend on 
such services (for service discovery, configuration 
discovery/management, DLM locking, fault detection, leader election...)
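
As a concrete (and hedged) illustration of the DLM piece, OpenStack already has
Tooz as an abstraction over these backends; a minimal locking sketch, with the
backend URL, member id and lock name as placeholders, looks roughly like:

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://zk1.example.com:2181', b'cinder-volume-host-1')
    coordinator.start()

    # Only one member across the whole cluster can hold this named lock at a
    # time; the name would be derived from the resource (e.g. a volume UUID).
    with coordinator.get_lock(b'volume-1234'):
        pass  # critical section

    coordinator.stop()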


As for the 2 data centers, afaik the following is making this better:

https://zookeeper.apache.org/doc/trunk/zookeeperObservers.html
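
Roughly, an observer is just a tagged entry in zoo.cfg (host names below are
placeholders); observers receive the replicated state but do not vote, so a
second data center does not slow down the quorum:

    peerType=observer                              # only on the observer node
    server.1=zk1.dc1.example.com:2888:3888
    server.2=zk2.dc1.example.com:2888:3888
    server.3=zk3.dc1.example.com:2888:3888
    server.4=zk1.dc2.example.com:2888:3888:observer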




On Sat, Aug 1, 2015 at 4:27 AM, Joshua Harlow harlo...@outlook.com
mailto:harlo...@outlook.com wrote:

Monty Taylor wrote:

On 08/01/2015 03:40 AM, Mike Perez wrote:

On Fri, Jul 31, 2015 at 8:56 AM, Joshua
Harlowharlo...@outlook.com mailto:harlo...@outlook.com
wrote:

...random thought here, skip as needed... in all honesty
orchestration
solutions like mesos

(http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
map-reduce solutions like hadoop, stream processing
systems like apache
storm (...), are already using zookeeper and I'm not
saying we should just
use it cause they are, but the likelihood that they just
picked it for no
reason are imho slim.

I'd really like to see focus cross project. I don't want
Ceilometer to
depend on Zoo Keeper, Cinder to depend on etcd, etc. This is
not ideal
for an operator to have to deploy, learn and maintain each
of these
solutions.

I think this is difficult when you consider everyone wants
options of
their preferred DLM. If we went this route, we should pick one.

Regardless, I want to know if we really need a DLM. Does
Ceilometer
really need a DLM? Does Cinder really need a DLM? Can we
just use a
hash ring solution where operators don't even have to know
or care
about deploying a DLM and running multiple instances of
Cinder manager
just works?


I'd like to take that one step further and say that we should
also look
holistically at the other things that such technology are often
used for
in distributed systems and see if, in addition to Does Cinder
need a
DLM - ask does Cinder need service discover and does Cinder need
distributed KV store and does anyone else?

Adding something like zookeeper or etcd or consul has the
potential to
allow us to design an OpenStack that works better. Adding all of
them in
an ad-hoc and uncoordinated manner is a bit sledgehammery.

The Java community uses zookeeper a lot
The container orchestration community seem to all love etcd
I hear tell that there a bunch of ops people who are in love
with consul

I'd suggest we look at more than lock management.


Oh I very much agree, but gotta start somewhere :)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack 

Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-02 Thread Morgan Fainberg


 On Aug 1, 2015, at 09:51, Monty Taylor mord...@inaugust.com wrote:
 
 On 08/01/2015 03:40 AM, Mike Perez wrote:
 On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlow harlo...@outlook.com wrote:
 ...random thought here, skip as needed... in all honesty orchestration
 solutions like mesos
 (http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
 map-reduce solutions like hadoop, stream processing systems like apache
 storm (...), are already using zookeeper and I'm not saying we should just
 use it cause they are, but the likelihood that they just picked it for no
 reason are imho slim.
 
 I'd really like to see focus cross project. I don't want Ceilometer to
 depend on Zoo Keeper, Cinder to depend on etcd, etc. This is not ideal
 for an operator to have to deploy, learn and maintain each of these
 solutions.
 
 I think this is difficult when you consider everyone wants options of
 their preferred DLM. If we went this route, we should pick one.
 
 Regardless, I want to know if we really need a DLM. Does Ceilometer
 really need a DLM? Does Cinder really need a DLM? Can we just use a
 hash ring solution where operators don't even have to know or care
 about deploying a DLM and running multiple instances of Cinder manager
 just works?
 
 I'd like to take that one step further and say that we should also look
 holistically at the other things that such technology are often used for
 in distributed systems and see if, in addition to Does Cinder need a
 DLM - ask does Cinder need service discover and does Cinder need
 distributed KV store and does anyone else?
 
 Adding something like zookeeper or etcd or consul has the potential to
 allow us to design an OpenStack that works better. Adding all of them in
 an ad-hoc and uncoordinated manner is a bit sledgehammery.
 
 The Java community uses zookeeper a lot
 The container orchestration community seem to all love etcd
 I hear tell that there a bunch of ops people who are in love with consul
 
 I'd suggest we look at more than lock management.
 

From the perspective of what zookeeper, consul, or etcd (in no particular order 
of preference) bring to the table, I would like to see a hard look taken at 
incorporating at least one of them this way. 

I see it as a huge win (especially from the keystone side and with distributed 
key-value-store capabilities). There are so many things we can do to really 
improve OpenStack across the board. 

Utilizing consul or similar to help manage the keystone catalog, or to source 
the individual endpoint policy.json without needing to copy it to horizon, is 
just a start beyond the proposed DLM uses in this thread. 

There is a lot we can benefit from with one of these tools being generally 
available for OpenStack deployments. 

--morgan

Sent via mobile
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-02 Thread Daniel Comnea
From the operators' point of view I'd love to see less technology proliferation
in OpenStack; if you wear the developer hat please don't be selfish, take
the others into account :)

ZK is a robust technology but hey, it is a beast like Rabbit; there is a lot to
massage, and over 2 data centers ZK is not very efficient.


On Sat, Aug 1, 2015 at 4:27 AM, Joshua Harlow harlo...@outlook.com wrote:

 Monty Taylor wrote:

 On 08/01/2015 03:40 AM, Mike Perez wrote:

 On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlowharlo...@outlook.com
 wrote:

 ...random thought here, skip as needed... in all honesty orchestration
 solutions like mesos
 (http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
 map-reduce solutions like hadoop, stream processing systems like apache
 storm (...), are already using zookeeper and I'm not saying we should
 just
 use it cause they are, but the likelihood that they just picked it for
 no
 reason are imho slim.

 I'd really like to see focus cross project. I don't want Ceilometer to
 depend on Zoo Keeper, Cinder to depend on etcd, etc. This is not ideal
 for an operator to have to deploy, learn and maintain each of these
 solutions.

 I think this is difficult when you consider everyone wants options of
 their preferred DLM. If we went this route, we should pick one.

 Regardless, I want to know if we really need a DLM. Does Ceilometer
 really need a DLM? Does Cinder really need a DLM? Can we just use a
 hash ring solution where operators don't even have to know or care
 about deploying a DLM and running multiple instances of Cinder manager
 just works?


 I'd like to take that one step further and say that we should also look
 holistically at the other things that such technology are often used for
 in distributed systems and see if, in addition to Does Cinder need a
 DLM - ask does Cinder need service discover and does Cinder need
 distributed KV store and does anyone else?

 Adding something like zookeeper or etcd or consul has the potential to
 allow us to design an OpenStack that works better. Adding all of them in
 an ad-hoc and uncoordinated manner is a bit sledgehammery.

 The Java community uses zookeeper a lot
 The container orchestration community seem to all love etcd
 I hear tell that there a bunch of ops people who are in love with consul

 I'd suggest we look at more than lock management.


 Oh I very much agree, but gotta start somewhere :)



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] [murano] An online YAQL evaluator

2015-08-02 Thread ELISHA, Moshe (Moshe)
Hey,

A couple of weeks ago we had a Python-oriented Hackathon at Alcatel-Lucent 
CloudBand.
We divided into groups and each group had to think of an original / innovative 
/ creative idea to be completed in 24 hours.

My group decided to create an online YAQL evaluator. Although we did not win 
first place, we got really good feedback and decided to publish it [1].

Some features:
* Provide a YAML document and a YAQL expression, and view the evaluation result.
* A catalog of common OpenStack API responses.
* YAQL auto complete (very basic for now)
* Currently using yaql-0.2.7 - we will upgrade it to yaql-1.0 once Mistral and 
Murano upgrade as well.
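
For example (made-up data, yaql-0.2 syntax, so treat it as a sketch), pasting a
YAML document and a YAQL expression like these:

    # YAML:
    servers:
      - {name: web-1, status: ACTIVE}
      - {name: db-1, status: ERROR}

    # YAQL:
    $.servers.where($.status = 'ACTIVE').select($.name)

should evaluate to something like ['web-1'].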

All the source code of the project is available on GitHub [2].

Hope it will be as useful for you as it is for us.

Best regards.

[1] - http://yaqluator.com/
[2] - https://github.com/ALU-CloudBand/yaqluator

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [murano] An online YAQL evaluator

2015-08-02 Thread Nikolay Makhotkin
Hi guys!

That's awesome! It is very useful for all of us!

-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-02 Thread Daniel Comnea
I couldn't put it better, nice write up Morgan!! +1

On Sun, Aug 2, 2015 at 10:28 AM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:




  On Aug 2, 2015, at 16:00, Daniel Comnea comnea.d...@gmail.com wrote:
 
   From the operators' point of view I'd love to see less technology
  proliferation in OpenStack; if you wear the developer hat please don't be
  selfish, take the others into account :)
  
   ZK is a robust technology but hey, it is a beast like Rabbit; there is a lot
  to massage, and over 2 data centers ZK is not very efficient.
 

 Sure, let's evaluate the more far-reaching benefits of running the new
 service for all OpenStack deployments. This is not a "hey, neat tech"
 debate, it is a "let's see if this tool solves enough issues that it is
 worth using an 'innovation token' on" debate. I think it is worth it personally,
 but it should be a consistent choice with a strong reason and added value
 beyond a single one-off use case.

 --morgan

 Sent via mobile
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-02 Thread Morgan Fainberg



 On Aug 2, 2015, at 16:00, Daniel Comnea comnea.d...@gmail.com wrote:
 
 From the operators' point of view I'd love to see less technology proliferation in 
 OpenStack; if you wear the developer hat please don't be selfish, take 
 the others into account :)
 
 ZK is a robust technology but hey, it is a beast like Rabbit; there is a lot to 
 massage, and over 2 data centers ZK is not very efficient.
 

Sure, let's evaluate the more far-reaching benefits of running the new service 
for all OpenStack deployments. This is not a "hey, neat tech" debate, it is a 
"let's see if this tool solves enough issues that it is worth using an 
'innovation token' on" debate. I think it is worth it personally, but it should 
be a consistent choice with a strong reason and added value beyond a single 
one-off use case. 

--morgan

Sent via mobile
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-02 Thread Gorka Eguileor
On Fri, Jul 31, 2015 at 01:47:22AM -0700, Mike Perez wrote:
 On Mon, Jul 27, 2015 at 12:35 PM, Gorka Eguileor gegui...@redhat.com wrote:
  I know we've all been looking at the HA Active-Active problem in Cinder
  and trying our best to figure out possible solutions to the different
  issues, and since current plan is going to take a while (because it
  requires that we finish first fixing Cinder-Nova interactions), I've been
  looking at alternatives that allow Active-Active configurations without
  needing to wait for those changes to take effect.
 
  And I think I have found a possible solution, but since the HA A-A
  problem has a lot of moving parts I ended up upgrading my initial
  Etherpad notes to a post [1].
 
  Even if we decide that this is not the way to go, which we'll probably
  do, I still think that the post brings a little clarity on all the
  moving parts of the problem, even some that are not reflected on our
  Etherpad [2], and it can help us not miss anything when deciding on a
  different solution.
 
 Based on IRC conversations in the Cinder room and hearing people's
 opinions in the spec reviews, I'm not convinced the complexity that a
 distributed lock manager adds to Cinder for both developers and the
 operators who ultimately are going to have to learn to maintain things
 like Zoo Keeper as a result is worth it.
 
 **Key point**: We're not scaling Cinder itself, it's about scaling to
 avoid build up of operations from the storage backend solutions
 themselves.
 
 Whatever people think ZooKeeper scaling level is going to accomplish
 is not even a question. We don't need it, because Cinder isn't as
 complex as people are making it.
 
  I'd like to think the Cinder team is great at recognizing potential
  cross-project initiatives. Look at what Thang Pham has done with
 Nova's version object solution. He made a generic solution into an
 Oslo solution for all, and Cinder is using it. That was awesome, and
 people really appreciated that there was a focus for other projects to
 get better, not just Cinder.
 
  Have people considered Ironic's hash ring solution? The project Akanda
 is now adopting it [1], and I think it might have potential. I'd
 appreciate it if interested parties could have this evaluated before
 the Cinder midcycle sprint next week, to be ready for discussion.
 
 [1] - https://review.openstack.org/#/c/195366/
 
 -- Mike Perez
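
(For anyone who has not looked at the hash ring approach yet, the core idea is
simple enough to sketch. This is a deliberately simplified toy in plain Python,
not Ironic's actual implementation: each resource deterministically maps to one
live host, so no lock is needed to decide ownership.)

    import bisect
    import hashlib

    class HashRing(object):
        def __init__(self, hosts, replicas=100):
            # place each host at many pseudo-random points on the ring
            self._ring = sorted(
                (self._hash('%s-%d' % (host, i)), host)
                for host in hosts for i in range(replicas))
            self._keys = [key for key, _host in self._ring]

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode('utf-8')).hexdigest(), 16)

        def get_host(self, resource_id):
            # first point on the ring at or after the resource's hash
            idx = bisect.bisect(self._keys, self._hash(resource_id)) % len(self._ring)
            return self._ring[idx][1]

    ring = HashRing(['cinder-volume-1', 'cinder-volume-2', 'cinder-volume-3'])
    print(ring.get_host('volume-1234'))  # always the same host for this volume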

Hi all,

Since my original proposal was more complex than it needed to be, I have a
new proposal for a simpler solution, and I describe how we can do it with
or without a DLM, since we don't seem to have reached an agreement on that.

The solution description was more rushed than the previous one, so I may have
missed some things.

http://gorka.eguileor.com/simpler-road-to-cinder-active-active/

Cheers,
Gorka.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [murano] An online YAQL evaluator

2015-08-02 Thread Stan Lagun
Guys, this is awesome!!!

Happy to see yaql gets attention. One more initiative that you may find
interesting is https://review.openstack.org/#/c/159905/
This is an attempt to port yaql 1.0 from Python to JS so that the same can
be done in the browser.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

sla...@mirantis.com

On Sun, Aug 2, 2015 at 5:30 PM, Nikolay Makhotkin nmakhot...@mirantis.com
wrote:

 Hi guys!

 That's awesome! It is very useful for all of us!

 --
 Best Regards,
 Nikolay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How should we expose host capabilities to the scheduler

2015-08-02 Thread Dugger, Donald D
As we discussed at the mid-cycle meetup there is a bit of an issue related to 
host capabilities.  Currently, we are overloading the flavor extra_specs with a 
whole lot of meaning, including requirements for specific host capabilities.  
Although this has allowed some impressive extension capabilities we are 
effectively creating an uncontrolled API that is ad hoc, ill defined and 
subject to arbitrary change.  I think it's time to consider this in more detail 
and come up with a better solution.

The problem is that we have hosts with different capabilities, and both users 
and operators need to be able to discover and control access to machines with 
different capabilities.  Also note that, although many capabilities can be 
represented by simple key/value pairs (e.g. the presence of a specific special 
instruction), that is not true for all capabilities (e.g. NUMA topology doesn't 
really fit into this model), and even the need to specify ranges of values (more 
than x bytes of RAM, less than y bandwidth utilization) makes things more 
complex.

We need to rethink how we represent and discover this information, and this will 
potentially involve an externally visible API change, so it'll probably take 
multiple release cycles to implement something; but we need to start thinking 
about this now.

Without going into the solution space, the first thing we need to do is make 
sure we know what the requirements are for exposing host capabilities.  At a 
minimum we need to:


1)  Enumerate the capabilities.  This will involve both quantitative values 
(amount of RAM, amount of disk, ...) and Boolean values (magic instruction 
present).  Also, there will be static capabilities that are discovered at boot 
time and don't change afterwards, and dynamic capabilities that vary during node 
operation.

2)  Expose the capabilities to both users and operators.

3)  Request specific capabilities.  A way of requesting an instance with an 
explicit list of specific capabilities is a minimal requirement.  It would 
probably also be good to have a way to easily specify an aggregate that 
encompasses a set of capabilities.
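
To make the discussion concrete, here is one purely illustrative shape such a 
record could take (not an existing Nova structure; names and values are made up):

    host_capabilities = {
        # static, quantitative - discovered at boot and fixed afterwards
        'memory_mb':      {'kind': 'quantitative', 'static': True,  'value': 262144},
        # static, Boolean - e.g. a magic CPU instruction
        'cpu_flags.avx2': {'kind': 'boolean',      'static': True,  'value': True},
        # dynamic, quantitative - varies during node operation
        'bw_utilization': {'kind': 'quantitative', 'static': False, 'value': 0.35},
        # structured - does not fit a flat key/value pair
        'numa_topology':  {'kind': 'structured',   'static': True,
                           'value': {'cells': 2, 'cpus_per_cell': 12}},
    }

    # a request could then name capabilities and ranges explicitly:
    requested = {'cpu_flags.avx2': True, 'memory_mb': {'min': 65536}}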

Note that I'm not saying we should remove flavors, but we might need a 
different way to specify what makes up a flavor.

As I said, I don't have the answer to how to do this but I want to start a 
discussion on where we go from here.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova-scheduler] Scheduler sub-group meeting - Agenda 8/3

2015-08-02 Thread Dugger, Donald D
Meeting on #openstack-meeting-alt at 1400 UTC (8:00AM MDT)



1)  Liberty specs - 
https://etherpad.openstack.org/p/liberty-nova-priorities-tracking

2)  CPU feature representation - follow up from the mid-cycle

3)  Opens


--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] OS_SERVICE_TOKEN usage in Fuel

2015-08-02 Thread Adam Heczko
Agree that SERVICE_TOKEN usage eradication will probably be a long-standing
process, but IMO radosgw should follow the usual way of managing OpenStack
service interactions. Usually, when a service wants to integrate with
OpenStack, an appropriate user with the admin role is created. I believe that
for radosgw a user "radosgw" or something similar should probably be created.
Of course, the requirement for its admin-ness and role assignment is a different
topic.
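
Roughly, with the unified client this would be the usual two commands (user,
project and role names here are only examples):

    openstack user create --project services --password SECRET radosgw
    openstack role add --project services --user radosgw admin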

Regards,

Adam

On Thu, Jul 30, 2015 at 4:33 PM, Oleksiy Molchanov omolcha...@mirantis.com
wrote:

 Update from Radoslaw Zarzynski
 ---

 Hi,

  I'm afraid that eradication of OS_SERVICE_TOKEN won't be a quick
  or painless process due to dependencies. We would need to identify
  and fix all applications that require this auth method.

 For example, Ceph RADOS Gateway (radosgw) currently requires [1]
 it in order to provide Keystone integration in its S3 API implementation.
 We have customers using that in production.

 Best regards,
 Radoslaw Zarzynski

 [1] https://github.com/ceph/ceph/blob/master/src/rgw/rgw_rest_s3.cc#L

 On Wed, Jul 29, 2015 at 6:38 PM, Konstantin Danilov kdani...@mirantis.com
  wrote:

 Would send ceph estimation tomorrow.
 Yet estimation != ETTA

 On Wed, Jul 29, 2015 at 12:27 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  Hi,
 
  Let's ask our Ceph developers how much time/resources they need to
 implement
  such functionality.
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
  On Tue, Jul 28, 2015 at 11:21 PM, Andrew Woodward 
 awoodw...@mirantis.com
  wrote:
 
  It's literally how radosgw goes about verifying users; it has no scheme of
  using a user or working with auth-tokens. It would have to be fixed in the
  ceph-radosgw codebase. PKI tokens (which we don't use) rely on this less,
  but it's still used.
 
  On Tue, Jul 28, 2015 at 2:16 PM Sergii Golovatiuk
  sgolovat...@mirantis.com wrote:
 
  Why can't radosgw use own own credentials? If it's technical debt we
 need
  to put it on plate to address in next release.
 
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
  On Tue, Jul 28, 2015 at 10:21 PM, Andrew Woodward xar...@gmail.com
  wrote:
 
  Keystone authtoken is also used by radosgw to validate users
 
  On Tue, Jul 28, 2015 at 10:31 AM Andrew Woodward
  awoodw...@mirantis.com wrote:
 
  IIRC the puppet modules, and even the heat domain create script make
  use of the token straight from the config file. It not being
 present could
  cause problems for some of the manifests. We would need to ensure
 that their
  usage is minimized or removed.
 
  On Tue, Jul 28, 2015 at 9:29 AM Sergii Golovatiuk
  sgolovat...@mirantis.com wrote:
 
  Hi Oleksiy,
 
  Good catch. Also OSTF should get endpoints from hiera as some
 plugins
  may override the initial deployment settings. There may be cases
 when
  keystone is detached by plugin.
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
  On Tue, Jul 28, 2015 at 5:26 PM, Oleksiy Molchanov
  omolcha...@mirantis.com wrote:
 
  Hello all,
 
  We need to discuss removal of OS_SERVICE_TOKEN usage in Fuel after
  deployment. This came from
 https://bugs.launchpad.net/fuel/+bug/1430619. I
  guess not all of us have an access to this bug, so to be short:
 
  # A shared secret that can be used to bootstrap Keystone.
  # This token does not represent a user, and carries no
  # explicit authorization. To disable in production (highly
  # recommended), remove AdminTokenAuthMiddleware from your
  # paste application pipelines (for example, in keystone-
  # paste.ini). (string value)
 
  After removing this and testing we found out that OSTF fails
 because
  it uses admin token.
 
  What do you think if we create ostf user like for workloads, but
 with
  wider permissions?
 
  BR,
  Oleksiy.
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  --
  --
  Andrew Woodward
  Mirantis
  Fuel Community Ambassador
  Ceph Community
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  --
 
  --
 
  Andrew Woodward
 
  Mirantis
 
  Fuel Community Ambassador
 
  Ceph Community
 
 
 
 
 __
  

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-02 Thread David Medberry
Glad to see you weighed in on this. -d

On Sat, Aug 1, 2015 at 3:50 PM, Matt Fischer m...@mattfischer.com wrote:

 Agree that you guys are way overthinking this. You don't need to rotate
 keys at exactly the same time; we do it within one or two hours
 typically, based on how our regions are set up. We do it with puppet: puppet
 runs on one keystone node at a time and drops the keys into place. The
 actual rotation and generation we handle with a script that then proposes
 the new key structure as a review, which is then approved and deployed via
 the normal process. For this process I always drop keys 0, 1, 2 into place;
 I'm not bumping the numbers like the normal rotations do.

 We had also considered ansible, which would be perfect for this, but that
 makes our ability to set up throwaway environments with a single click a
 bit more complicated. If you don't have that requirement, a simple ansible
 script is what you should do.


 On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
  On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
   I suggest to use pacemaker multistate clone resource to rotate and
  rsync
   fernet tokens from local directories across cluster nodes. The
 resource
   prototype is described here
   https://etherpad.openstack.org/p/fernet_tokens_pacemaker Pros:
  Pacemaker
   will care about CAP/split-brain stuff for us, we just design rotate
 and
   rsync logic. Also no shared FS/DB involved but only Corosync CIB - to
  store
   few internal resource state related params, not tokens. Cons: Keystone
   nodes hosting fernet tokens directories must be members of pacemaker
   cluster. Also custom OCF script should be created to implement this.
 __
   Regards,
   Bogdan Dobrelya.
   IRC: bogdando
 
  Looks complex.
 
  I suggest this kind of bash or python script, running on Fuel master
 node:
 
  0. Check that all controllers are online;
  1. Go to one of the controllers, rotate keys there;
  2. Fetch key 0 from there;
  3. For each other controller rotate keys there and put the 0-key
 instead of
  their new 0-key.
  4. If any of the nodes fail to get new keys (because they went offline
 or for
  some other reason) revert the rotate (move the key with the biggest
 index
  back to 0).
 
  The script can be launched by cron or by button in Fuel.
 
  I don't see anything critically bad if one rotation/sync event fails.
 

 This too is overly complex and will cause failures. If you replace key 0,
 you will stop validating tokens that were encrypted with the old key 0.

 You simply need to run rotate on one, and then rsync that key repository
 to all of the others. You _must not_ run rotate again until you rsync to
 all of the others, since the key 0 from one rotation becomes the primary
 token encrypting key going forward, so you need it to get pushed out to
 all nodes as 0 first.

  Don't overthink it. Just read http://lbragstad.com/?p=133 and it will
  remain simple.
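
  In shell terms the whole procedure is roughly the following (host names and
  paths are placeholders, and the keystone-manage flags may differ by release):

      # run on ONE node only
      keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone

      # push the whole repository (including the new staged key 0) to every
      # other node BEFORE the next rotation is ever run
      for host in keystone02 keystone03; do
          rsync -a --delete /etc/keystone/fernet-keys/ $host:/etc/keystone/fernet-keys/
      done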

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] encryption is not supported in ceph volume

2015-08-02 Thread Adam Heczko
Indeed, it works only for iSCSI Cinder backends.
I believe there are at least two ways in which volume encryption for Ceph
could be achieved:
- by implementing encryption at the librbd level (user space)
- by rewriting Ceph's Cinder plugin to attach RBD images not through
libvirt/librbd but through the native Linux kernel RBD driver,
stacking LUKS atop the RBD device (the device-mapper way)
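
The second option is essentially what you would do by hand today (pool, image
and client names below are examples only):

    # map the image through the kernel RBD driver instead of librbd
    rbd map volumes/volume-1234 --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring

    # stack LUKS on top of the block device
    cryptsetup luksFormat /dev/rbd/volumes/volume-1234
    cryptsetup luksOpen /dev/rbd/volumes/volume-1234 crypt-volume-1234
    # attach /dev/mapper/crypt-volume-1234 to the VM rather than the raw RBD device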

Regards,

Adam

On Thu, Jul 30, 2015 at 8:02 AM, Li, Xiaoyan xiaoyan...@intel.com wrote:

 Hi all,

  I created an encryption type, and created a volume in Ceph with that volume
  type.
   cinder encryption-type-create

  But I failed to attach it to a VM. The error message shows that there is no
  device_path in connection_info.

  2015-07-30 05:55:57.117 TRACE oslo_messaging.rpc.dispatcher self.symlink_path =
  connection_info['data']['device_path']
  2015-07-30 05:55:57.117 TRACE oslo_messaging.rpc.dispatcher KeyError: 'device_path'

  Two questions:
  1. Is it not supported to create a volume in Ceph with an encrypted volume type?
  2. If so, should we prohibit creating a Ceph volume with an encrypted
  volume type?

 Best wishes
 Lisa


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][oslo][policy] oslo.policy adoption in Nova.

2015-08-02 Thread Davanum Srinivas
Sean, Nova-cores,

The following Nova review is stuck:
https://review.openstack.org/#/c/198065/

What are the minimum features that we have to add in
oslo.policy to unblock that work?

If we get "Nova wants to put defaults in code" working, is that enough
to get oslo.policy into Nova for Liberty?
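
For context, the part of oslo.policy Nova would consume today is just the
Enforcer; a hedged sketch (rule name, target and credentials below are
examples), with whatever "defaults in code" mechanism ends up being added
sitting on top of this:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)  # loads rules from the configured policy file
    enforcer.enforce('compute:get_all',
                     {'project_id': 'demo'},                       # target
                     {'project_id': 'demo', 'roles': ['member']},  # credentials
                     do_raise=True)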

Thanks,
dims


-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking to Neutron

2015-08-02 Thread Mohammad Banikazemi

Please note that the meeting will be held in #openstack-meeting-4 channel
on Freenode starting this Monday August 3rd at 1500 UTC.
The agenda and other related information will be kept at:
https://wiki.openstack.org/wiki/Meetings/Kuryr

Best,

Mohammad



From:   Gal Sagie gal.sa...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: Eran Gampel eran.gam...@toganetworks.com, Irena Berezovsky
ir...@midokura.com
Date:   07/29/2015 01:12 PM
Subject:Re: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers
networking  to Neutron



Hi Yalei,

We set a date/time; it's going to be Monday at 15:00 UTC
If you can't attend, i will be happy to update you on IRC when i am around,
and you
can also approach Antoni (apuimedo in IRC) if you have any questions.

Hope to see you involved!

Gal.


On Wed, Jul 29, 2015 at 12:53 PM, Wang, Yalei yalei.w...@intel.com wrote:
  Interested in this; has the time been determined?





  /Yalei





  From: Antoni Segura Puimedon [mailto:toni+openstac...@midokura.com]
  Sent: Friday, July 24, 2015 3:30 AM
  To: Stephen Wong
  Cc: Eran Gampel; OpenStack Development Mailing List (not for usage
  questions); Irena Berezovsky
  Subject: Re: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers
  networking to Neutron











  On Thu, Jul 23, 2015 at 8:10 PM, Stephen Wong stephen.kf.w...@gmail.com
  wrote:


  +1 for Monday








  +1 for Monday UTC. Maybe at 14 or 15h ?



  On Wed, Jul 22, 2015 at 10:29 PM, Mohammad Banikazemi m...@us.ibm.com
  wrote:


  Gal, This time conflicts with the Neutron ML2 weekly meeting time [1]. I
  realize there are several networking related weekly meetings, but I would
  like to see if we can find a different time. I suggest the same time,
  that is 1600 UTC but either on Mondays or Thursdays.

  Please note there is the Ironic/Neutron Integration meeting at the same
  time on Mondays but that conflict may be a smaller conflict. Looking at
  the meetings listed at [2] I do not see any other conflict.

  Best,

  Mohammad

  [1] https://wiki.openstack.org/wiki/Meetings/ML2
  [2] http://eavesdrop.openstack.org



  From: Gal Sagie gal.sa...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org, Eran Gampel 
  eran.gam...@toganetworks.com, Antoni Segura Puimedon t...@midokura.com
  , Irena Berezovsky ir...@midokura.com
  Date: 07/22/2015 12:31 PM
  Subject: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking
  to Neutron









  Hello Everyone,

  Project Kuryr is now officially part of Neutron's big tent.
  Kuryr is aimed to be used as a generic docker remote driver that connects
  docker to Neutron API's
  and provide containerised images for the common Neutron plugins.
  We also plan on providing common additional networking services API's
  from other sub projects
  in the Neutron big tent.

  We hope to get everyone on board with this project and leverage this
  joint effort for the many different solutions out there (instead of
  everyone re-inventing the wheel for each different project).

  We want to start doing a weekly IRC meeting to coordinate the different
  requirements and
  tasks, so anyone that is interested to participate please share your time
  preference
  and we will try to find the best time for the majority.

  Remember we have people in Europe, Tokyo and US, so we won't be able to
  find time that fits
  everyone.

  The currently proposed time is  Wedensday at 16:00 UTC

  Please reply with your suggested time/day,
  Hope to see you all, we have an interesting and important project ahead
  of us

  Thanks


  Gal.
  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best Regards ,

The G.
__
OpenStack Development Mailing 

[openstack-dev] [Magnum]horizontal scalability

2015-08-02 Thread 王华
Hi all,

As discussed at the Vancouver Summit, we are going to drop the bay lock
implementation. Instead, each conductor will call Heat concurrently and
rely on Heat for concurrency control. However, I think we need an approach
for state convergence from Heat to Magnum. Either a periodic task [1] or Heat
notifications [2] look like candidates.

[1] https://blueprints.launchpad.net/magnum/+spec/add-periodic-task
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-March/058898.html
--hongbin

If we use a periodic task to sync state from Heat to Magnum, I think we
should make the periodic task an independent process so that magnum-conductor
only operates on Heat stacks.

How do we make the periodic task highly available?

1. We can run several periodic tasks.

2. Or we can use a leader election mechanism so that only one periodic task
runs while the other periodic tasks wait.

Shall we make the periodic task an independent process? How do we make it
highly available?
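
To make the discussion concrete, a hedged sketch of the sync loop itself (the
heatclient call is real, but list_bays/update_bay are placeholders for whatever
Magnum exposes; a leader-election guard would simply wrap the call to
sync_bay_states):

    import time

    SYNC_INTERVAL = 60  # seconds, placeholder

    def sync_bay_states(heat, list_bays, update_bay):
        for bay in list_bays():
            stack = heat.stacks.get(bay.stack_id)       # python-heatclient
            if stack.stack_status != bay.status:
                update_bay(bay.id, status=stack.stack_status)

    def run_forever(heat, list_bays, update_bay):
        while True:
            sync_bay_states(heat, list_bays, update_bay)
            time.sleep(SYNC_INTERVAL)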

Regards,

Hua Wang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev