Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-11-27 Thread Jastrzebski, Michal
On 11/27/2014 at 5:15 AM, Angus Salkeld wrote: On Thu, Nov 27, 2014
at 12:20 PM, Zane Bitter zbit...@redhat.com wrote:
 
 A bunch of us have spent the last few weeks working independently on
 proof of concept designs for the convergence architecture. I think
 those efforts have now reached a sufficient level of maturity that
 we should start working together on synthesising them into a plan
 that everyone can forge ahead with. As a starting point I'm going to
 summarise my take on the three efforts; hopefully the authors of the
 other two will weigh in to give us their perspective.
 
 
 Zane's Proposal
 ===
 
 
 https://github.com/zaneb/heat-convergence-prototype/tree/distributed-graph
 
 I implemented this as a simulator of the algorithm rather than using
 the Heat codebase itself in order to be able to iterate rapidly on
 the design, and indeed I have changed my mind many, many times in
 the process of implementing it. Its notable departure from a
 realistic simulation is that it runs only one operation at a time -
 essentially giving up the ability to detect race conditions in
 exchange for a completely deterministic test framework. You just
 have to imagine where the locks need to be. Incidentally, the test
 framework is designed so that it can easily be ported to the actual
 Heat code base as functional tests so that the same scenarios could
 be used without modification, allowing us to have confidence that
 the eventual implementation is a faithful replication of the
 simulation (which can be rapidly experimented on, adjusted and
 tested when we inevitably run into implementation issues).
 
 This is a complete implementation of Phase 1 (i.e. using existing
 resource plugins), including update-during-update, resource
 clean-up, replace on update and rollback; with tests.
 
 Some of the design goals which were successfully incorporated:
 - Minimise changes to Heat (it's essentially a distributed version
 of the existing algorithm), and in particular to the database
 - Work with the existing plugin API
 - Limit total DB access for Resource/Stack to O(n) in the number of
 resources
 - Limit overall DB access to O(m) in the number of edges
 - Limit lock contention to only those operations actually contending
 (i.e. no global locks)
 - Each worker task deals with only one resource
 - Only read resource attributes once
 
 Open questions:
 - What do we do when we encounter a resource that is in progress
 from a previous update while doing a subsequent update? Obviously we
 don't want to interrupt it, as it will likely be left in an unknown
 state. Making a replacement is one obvious answer, but in many cases
 there could be serious down-sides to that. How long should we wait
 before trying it? What if it's still in progress because the engine
 processing the resource already died?
 
 
 Also, how do we implement resource level timeouts in general?
 
 
 Michał's Proposal
 =
 
 https://github.com/inc0/heat-convergence-prototype/tree/iterative
 
 Note that a version modified by me to use the same test scenario
 format (but not the same scenarios) is here:
 
 
 https://github.com/zaneb/heat-convergence-prototype/tree/iterative-adapted
 
 This is based on my simulation framework after a fashion, but with
 everything implemented synchronously and a lot of handwaving about
 how the actual implementation could be distributed. The central
 premise is that at each step of the algorithm, the entire graph is
 examined for tasks that can be performed next, and those are then
 started. Once all are complete (it's synchronous, remember), the
 next step is run. Keen observers will be asking how we know when it
 is time to run the next step in a distributed version of this
 algorithm, where it will be run and what to do about resources that
 are in an intermediate state at that time. All of these questions
 remain unanswered.

Q1. When is it time to run the next step?
There is really no next step or previous step, only the step that can be run
*right now*, which effectively means when the current step finishes, because then
and only then are all requirements met and we can proceed.
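
To illustrate, here is a minimal sketch of that synchronous stepping (just the
idea described above, not the prototype code):

# Sketch: at each step, process every resource whose requirements are
# already complete; the next step starts only when the current one finishes.
def converge(graph):
    # graph maps resource name -> set of resource names it depends on
    done = set()
    while len(done) < len(graph):
        runnable = [res for res, deps in graph.items()
                    if res not in done and deps <= done]
        if not runnable:
            raise RuntimeError("dependency cycle or stuck resource")
        for res in runnable:        # in the real system these would run
            process_resource(res)   # concurrently on worker tasks
        done.update(runnable)       # step finished -> next step may start

def process_resource(res):
    print("converging %s" % res)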

Q2. I can see 3 main processes there:

* Converger (I'd assume that's the current engine):
A process which parses the graph and schedules next steps; it will be run after
a change in reality is detected.

* Observer:
A process which keeps reality_db and actual resources aligned. It's mostly for
phase 2. This one

Re: [openstack-dev] [stable] Proposal to add Dave Walker back to stable-maint-core

2014-11-27 Thread Thierry Carrez
Adam Gandelman wrote:
 Daviey was an original member of the stable-maint team and one of the
 driving forces behind the creation of the team and branches back in the
 early days. He was removed from the team later on during a pruning of
 inactive members. Recently, he has began focusing on the stable branches
 again and has been providing valuable reviews across both branches:
 
 https://review.openstack.org/#/q/reviewer:%22Dave+Walker+%253Cemail%2540daviey.com%253E%22++branch:stable/icehouse,n,z
 
 https://review.openstack.org/#/q/reviewer:%22Dave+Walker+%253Cemail%2540daviey.com%253E%22++branch:stable/juno,n,z
 
 I think his understanding of policy, attention to detail and willingness
 to question the appropriateness of proposed backports would make him a
 great member of the team (again!).  Having worked with him in
 Ubuntu-land, I also think he'd be a great candidate to help out with the
 release management aspect of things (if he wanted to).

+1 for current openstack-stable-maint (and upcoming stable-maint-core)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

2014-11-27 Thread Samuel Bercovici
Hi,

I apologize for the long delays; it is a hectic time for me at work.

Stephen, I agree that concluding the status discussion will lead to an easier
discussion around relationships and sharing, so let's do this first if that is
acceptable to everyone.
Brandon, you got this right. This is the concept I was shooting for.

We may want to be able to consume different compound status/stats properties
via different calls rather than a single one (e.g. provisioning status, backend
status and admin status).
Also, a status tree may optionally be filtered to only return error statuses,
so if all is fine you only get back the top-level green status.
I think there are three distinct compound status properties and one compound
stats property. This creates a clear definition of who writes into each
field (tenant via API, backend, configuration process, etc.).
The compound fields I think should be available (via a single API call or
distinct API calls) are:
1. Provisioning properties (on LB) - reflects the LB provisioning status; this
may be a single global compound status or a per-task-ID compound status. The
task-ID-based approach may work better if we allow async concurrent API calls,
so that each call has a task ID and the compound status relates to that task ID.
2. Backend status properties (on LB) - this is where the back end can report
different statuses such as availability (calculated from health checks). For
example, if a health monitor detects that a node is unavailable, this status
will be returned by the back end and written so that it can be returned via an
API call. It is similar to the OFFLINE info Brandon has shown below, but it
should be a different field than the provisioning status.
3. User/operator defined state - this field makes sense to be part of the
logical objects, so that if a node needs to be taken down for maintenance
reasons, the user/operator can mark this on the logical object. As a result,
a provisioning process takes place whose status will be reflected via the
provisioning properties, and a failure can be detected there. If it succeeded
on a specific LB (if shared), the backend should report a status of "disabled"
in the backend properties field, in the place relating to that node in the LB.
4. Statistics properties (on LB) - this is where all the statistics related to
an LB and the logical objects on it should be returned, in a similar way to 1
and 2.

Regards,
-Sam.
 

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Tuesday, November 25, 2014 12:23 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use 
Cases that led us to adopt this.

My impression is that the statuses of each entity will be shown on a detailed
info request of a loadbalancer.  The root level objects would not have any
statuses.  For example, a user makes a GET request to /loadbalancers/{lb_id} and
the status of every child of that load balancer is shown in a status_tree JSON
object.  For example:

{"name": "loadbalancer1",
 "status_tree":
   {"listeners":
     [{"name": "listener1", "operating_status": "ACTIVE",
       "default_pool":
         {"name": "pool1", "status": "ACTIVE",
          "members":
            [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}}

Sam, correct me if I am wrong.

I generally like this idea.  I do have a few reservations with this:

1) Creating and updating a load balancer requires a full tree configuration 
with the current extension/plugin logic in neutron.  Since updates will require 
a full tree, it means the user would have to know the full tree configuration 
just to simply update a name.  Solving this would require nested child 
resources in the URL, which the current neutron extension/plugin does not 
allow.  Maybe the new one will.

2) The status_tree can get quite large depending on the number of listeners and 
pools being used.  This is a minor issue really as it will make horizon's (or 
any other UI tool's) job easier to show statuses.

Thanks,
Brandon

On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote:
 Hi Samuel,
 
 
 We've actually been avoiding having a deeper discussion about status 
 in Neutron LBaaS since this can get pretty hairy as the back-end 
 implementations get more complicated. I suspect managing that is 
 probably one of the bigger reasons we have disagreements around object 
 sharing. Perhaps it's time we discussed representing state correctly 
 (whatever that means), instead of a round-a-bout discussion about 
 object sharing (which, I think, is really just avoiding this issue)?
 
 
 Do you have a proposal about how status should be represented 
 (possibly including a description of the state machine) if we collapse 
 everything down to be logical objects except the loadbalancer object?
 (From what you're proposing, I suspect it might be too general to, for 
 example, represent the UP/DOWN status of members of a given pool.)
 
 
 Also, from an 

Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

2014-11-27 Thread Samuel Bercovici
Brandon, can you please explain further (1) below?

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Tuesday, November 25, 2014 12:23 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use 
Cases that led us to adopt this.

My impression is that the statuses of each entity will be shown on a detailed
info request of a loadbalancer.  The root level objects would not have any
statuses.  For example, a user makes a GET request to /loadbalancers/{lb_id} and
the status of every child of that load balancer is shown in a status_tree JSON
object.  For example:

{"name": "loadbalancer1",
 "status_tree":
   {"listeners":
     [{"name": "listener1", "operating_status": "ACTIVE",
       "default_pool":
         {"name": "pool1", "status": "ACTIVE",
          "members":
            [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}}

Sam, correct me if I am wrong.

I generally like this idea.  I do have a few reservations with this:

1) Creating and updating a load balancer requires a full tree configuration 
with the current extension/plugin logic in neutron.  Since updates will require 
a full tree, it means the user would have to know the full tree configuration 
just to simply update a name.  Solving this would require nested child 
resources in the URL, which the current neutron extension/plugin does not 
allow.  Maybe the new one will.

2) The status_tree can get quite large depending on the number of listeners and 
pools being used.  This is a minor issue really as it will make horizon's (or 
any other UI tool's) job easier to show statuses.

Thanks,
Brandon

On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote:
 Hi Samuel,
 
 
 We've actually been avoiding having a deeper discussion about status 
 in Neutron LBaaS since this can get pretty hairy as the back-end 
 implementations get more complicated. I suspect managing that is 
 probably one of the bigger reasons we have disagreements around object 
 sharing. Perhaps it's time we discussed representing state correctly 
 (whatever that means), instead of a round-a-bout discussion about 
 object sharing (which, I think, is really just avoiding this issue)?
 
 
 Do you have a proposal about how status should be represented 
 (possibly including a description of the state machine) if we collapse 
 everything down to be logical objects except the loadbalancer object?
 (From what you're proposing, I suspect it might be too general to, for 
 example, represent the UP/DOWN status of members of a given pool.)
 
 
 Also, from an haproxy perspective, sharing pools within a single 
 listener actually isn't a problem. That is to say, having the same 
 L7Policy pointing at the same pool is OK, so I personally don't have a 
 problem allowing sharing of objects within the scope of parent 
 objects. What do the rest of y'all think?
 
 
 Stephen
 
 
 
 On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici 
 samu...@radware.com wrote:
 Hi Stephen,
 
  
 
 1.  The issue is that if we do 1:1 and allow status/state
 to proliferate throughout all objects, we will then have an
 issue to fix later; hence, even if we do not do sharing, I
 would still like to have all objects besides LB be treated as
 logical.
 
 2.  The 3rd use case below will not be reasonable without
 pool sharing between different policies. Specifying different
 pools which are the same for each policy makes it a non-starter
 for me.
 
  
 
 -Sam.
 
  
 
  
 
  
 
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Friday, November 21, 2014 10:26 PM
 To: OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects
 in LBaaS - Use Cases that led us to adopt this.
 
  
 
 I think the idea was to implement 1:1 initially to reduce the
 amount of code and operational complexity we'd have to deal
 with in initial revisions of LBaaS v2. Many to many can be
 simulated in this scenario, though it does shift the burden of
 maintenance to the end user. It does greatly simplify the
 initial code for v2, in any case, though.
 
 
  
 
 
 Did we ever agree to allowing listeners to be shared among
 load balancers?  I think that still might be a N:1
 relationship even in our latest models.
 
  
 
 
 There's also the difficulty introduced by supporting different
 flavors:  Since flavors are essentially an association between
 a load balancer object and a driver (with parameters), once
 flavors are introduced, any sub-objects of a given load
 

Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-27 Thread Dmitriy Shulyak
Is it possible to send HTTP requests from monit, e.g. for creating
notifications?
I scanned through the docs and found only alerts for sending mail.
Also, where will the token (username/pass) for monit be stored?

Or maybe there is another plan, without any API interaction?

On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

  This I didn't know. It's true in fact, I checked the manifests. Though
 monit is not deployed yet because of lack of packages in Fuel ISO. Anyways,
 I think the argument about using yet another monitoring service is now
 rendered invalid.

 So +1 for monit? :)

 P.


 On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:

 Monit is easy and is used to control states of Compute nodes. We can adopt
 it for master node.

  --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 As for me - zabbix is overkill for one node. Zabbix Server + Agent +
 Frontend + DB + HTTP server, and all of it for one node? Why not use
 something that was developed for monitoring one node, doesn't have many
 deps and work out of the box? Not necessarily Monit, but something similar.

 On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.

 P.


 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's used
 already?

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-27 Thread Przemyslaw Kaminski
I mean that with monit you can execute arbitrary scripts, so use curl? Or save
the notifications directly to the DB?


http://omgitsmgp.com/2013/09/07/a-monit-primer/

I guess some data has to be stored in a configuration file (either DB
credentials or the Nailgun API URL at least, if we were to create
notifications via the API). I proposed a hand-crafted solution


https://review.openstack.org/#/c/135314/

that lives in the fuel-web code and uses settings.yaml, so no config file is
necessary. It has the drawback, though, that the nailgun code lives inside
a Docker container, so the monitoring data isn't reliable.
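
For example, a minimal sketch of the kind of script monit could exec to create
a notification via the Nailgun API - the endpoint path and payload here are
hypothetical and only illustrate the idea:

#!/usr/bin/env python
# Hypothetical helper monit could exec on an alert: POST a notification
# to the Nailgun API. URL and payload format are assumptions.
import json
import sys

import requests

NAILGUN_URL = "http://127.0.0.1:8000/api/notifications"  # assumed endpoint

def send_notification(message, topic="warning"):
    payload = {"topic": topic, "message": message}
    resp = requests.post(NAILGUN_URL,
                         data=json.dumps(payload),
                         headers={"Content-Type": "application/json"})
    resp.raise_for_status()

if __name__ == "__main__":
    # monit passes no arguments by default; the message could instead be
    # baked into the 'exec' line of the monit configuration.
    send_notification(sys.argv[1] if len(sys.argv) > 1 else "disk space low")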


P.

On 11/27/2014 09:53 AM, Dmitriy Shulyak wrote:
Is it possible to send http requests from monit, e.g for creating 
notifications?

I scanned through the docs and found only alerts for sending mail,
also where token (username/pass) for monit will be stored?

Or maybe there is another plan? without any api interaction

On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski 
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:


This I didn't know. It's true in fact, I checked the manifests.
Though monit is not deployed yet because of lack of packages in
Fuel ISO. Anyways, I think the argument about using yet another
monitoring service is now rendered invalid.

So +1 for monit? :)

P.


On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:

Monit is easy and is used to control states of Compute nodes. We
can adopt it for master node.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin
sbogat...@mirantis.com mailto:sbogat...@mirantis.com wrote:

As for me - zabbix is overkill for one node. Zabbix Server +
Agent + Frontend + DB + HTTP server, and all of it for one
node? Why not use something that was developed for monitoring
one node, doesn't have many deps and work out of the box? Not
necessarily Monit, but something similar.

On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:

We want to monitor Fuel master node while Zabbix is only
on slave nodes and not on master. The monitoring service
is supposed to be installed on Fuel master host (not
inside a Docker container) and provide basic info about
free disk space, etc.

P.


On 11/26/2014 02:58 PM, Jay Pipes wrote:

On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

So then in the end, there will be 3 monitoring
systems to learn,
configure, and debug? Monasca for cloud users,
zabbix for most of the
physical systems, and sensu or monit to be small?

Seems very complicated.

If not just monasca, why not the zabbix thats
already being deployed?


Yes, I had the same thoughts... why not just use
zabbix since it's used already?

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org  
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-27 Thread Aleksandr Didenko
Hi,

Dmitriy, first of all, monit can provide an HTTP interface for communication,
so it's possible to poll this interface to get info or even control
monit (stop/start/restart a service, stop/start monitoring of a service,
etc.). Secondly, you can configure different triggers in monit and set
appropriate actions, for example running some script - and that script can
do whatever you like. As for a built-in possibility to send HTTP requests,
I've never heard of it.
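
A minimal sketch of polling that HTTP interface from Python - the port,
credentials and status path below are assumptions that would need checking
against the monit documentation:

# Assumes monit's HTTP interface has been enabled in monitrc,
# e.g. with "set httpd port 2812" plus basic-auth credentials.
import requests

MONIT_STATUS_URL = "http://127.0.0.1:2812/_status?format=xml"  # assumed path

def fetch_monit_status(user="admin", password="monit"):
    resp = requests.get(MONIT_STATUS_URL, auth=(user, password))
    resp.raise_for_status()
    return resp.text  # XML describing monitored services and their states

if __name__ == "__main__":
    print(fetch_monit_status())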

Regards,
Alex

On Thu, Nov 27, 2014 at 10:53 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Is it possible to send http requests from monit, e.g for creating
 notifications?
 I scanned through the docs and found only alerts for sending mail,
 also where token (username/pass) for monit will be stored?

 Or maybe there is another plan? without any api interaction

 On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

  This I didn't know. It's true in fact, I checked the manifests. Though
 monit is not deployed yet because of lack of packages in Fuel ISO. Anyways,
 I think the argument about using yet another monitoring service is now
 rendered invalid.

 So +1 for monit? :)

 P.


 On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:

 Monit is easy and is used to control states of Compute nodes. We can
 adopt it for master node.

  --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 As for me - zabbix is overkill for one node. Zabbix Server + Agent +
 Frontend + DB + HTTP server, and all of it for one node? Why not use
 something that was developed for monitoring one node, doesn't have many
 deps and work out of the box? Not necessarily Monit, but something similar.

 On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.

 P.


 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's
 used already?

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposal to add Dave Walker back to stable-maint-core

2014-11-27 Thread Alan Pevec
+1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-27 Thread Simon Pasquier
I've added another option to the Etherpad: collectd can do basic threshold
monitoring and run any kind of script on alert notifications. The other
advantage of collectd would be the RRD graphs for (almost) free.
Of course, since monit is already supported in Fuel, that is the fastest
path to getting something done.
Simon

On Thu, Nov 27, 2014 at 9:53 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Is it possible to send http requests from monit, e.g for creating
 notifications?
 I scanned through the docs and found only alerts for sending mail,
 also where token (username/pass) for monit will be stored?

 Or maybe there is another plan? without any api interaction

 On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

  This I didn't know. It's true in fact, I checked the manifests. Though
 monit is not deployed yet because of lack of packages in Fuel ISO. Anyways,
 I think the argument about using yet another monitoring service is now
 rendered invalid.

 So +1 for monit? :)

 P.


 On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:

 Monit is easy and is used to control states of Compute nodes. We can
 adopt it for master node.

  --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 As for me - zabbix is overkill for one node. Zabbix Server + Agent +
 Frontend + DB + HTTP server, and all of it for one node? Why not use
 something that was developed for monitoring one node, doesn't have many
 deps and work out of the box? Not necessarily Monit, but something similar.

 On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.

 P.


 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's
 used already?

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova host-update gives error 'Virt driver does not implement host disabled status'

2014-11-27 Thread Vineet Menon
Thanks Vladik for the reply. Jay Lau directed me to your patch.

But what I don't get is.. Shouldn't NotImplementedException be removed?
I mean, if other drivers are implementing set_host_enabled method,
shouldn't libvirt also implement the same?

Regards,

Vineet Menon


On 26 November 2014 at 16:07, Vladik Romanovsky 
vladik.romanov...@enovance.com wrote:



 - Original Message -
  From: Vineet Menon mvineetme...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, 26 November, 2014 5:14:09 AM
  Subject: Re: [openstack-dev] [nova] nova host-update gives error 'Virt
 driver does not implement host disabled
  status'
 
  Hi Kevin,
 
  Oh. Yes. That could be the problem.
  Thanks for pointing that out.
 
 
  Regards,
 
  Vineet Menon
 
 
  On 26 November 2014 at 02:02, Chen CH Ji  jiche...@cn.ibm.com  wrote:
 
 
 
 
 
  are you using libvirt ? it's not implemented
  ,guess your bug are talking about other hypervisors?
 
  the message was printed here:
 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/hosts.py#n236
 
  Best Regards!
 
  Kevin (Chen) Ji 纪 晨
 
  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
  Phone: +86-10-82454158
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
  Beijing 100193, PRC
 
  Vineet Menon ---11/26/2014 12:10:39 AM---Hi, I'm trying to reproduce the
 bug
  https://bugs.launchpad.net/nova/+bug/1259535 .
 
  From: Vineet Menon  mvineetme...@gmail.com 
  To: openstack-dev  openstack-dev@lists.openstack.org 
  Date: 11/26/2014 12:10 AM
  Subject: [openstack-dev] [nova] nova host-update gives error 'Virt driver
  does not implement host disabled status'
 
 
 Hi Vineet,

 There are two methods in the API for changing the service/host status:
 nova host-update and nova service-update.

 Currently, in order to disable the service one should use the nova
 service-update command,
 which maps to the service_update method in the manager class.

 nova host-update maps to the set_host_enabled() method in the virt drivers,
 which is not implemented
 in the libvirt driver.
 I'm not sure what the purpose of this method is, but the libvirt driver
 doesn't implement it.

 For a short period of time this method was implemented, for the wrong
 reason, which was causing the bug in the title;
 however, it was fixed with https://review.openstack.org/#/c/61016
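
 For illustration only, the kind of hook a driver has to provide looks roughly
 like the sketch below; this is not actual nova code and the real ComputeDriver
 signature should be checked:

 # Hypothetical sketch: what a virt driver would need to override for
 # "nova host-update --status" to succeed. The libvirt driver leaves the
 # base-class implementation in place, which is why the request fails.
 class ExampleDriver(object):
     def set_host_enabled(self, host, enabled):
         # a real driver would toggle whether the hypervisor host
         # accepts new instances here
         return 'enabled' if enabled else 'disabled'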

 Let me know if you have any questions.

 Thanks,
 Vladik



 
 
  Hi,
 
  I'm trying to reproduce the bug
 https://bugs.launchpad.net/nova/+bug/1259535
  .
  While trying to issue the command, nova host-update --status disable
  machine1, an error is thrown saying,
 
 
  ERROR (HTTPNotImplemented): Virt driver does not implement host disabled
  status. (HTTP 501) (Request-ID: req-1f58feda-93af-42e0-b7b6-bcdd095f7d8c)
 
  What is this error about?
 
  Regards,
  Vineet Menon
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposal to add Dave Walker back to stable-maint-core

2014-11-27 Thread Ihar Hrachyshka

+2

On 27/11/14 10:15, Alan Pevec wrote:
 +1
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Allowed actions interface in cinder

2014-11-27 Thread Duncan Thomas
Hi

At the cinder mid-cycle meetup, we had a chat with some horizon devs, and
one issue that was brought up is that horizon keeps a copy of the cinder
policy.json and attempts to check it to see if certain operations are allowed
by a specific user.

Since this was universally seen as a sub-optimal idea, we talked about
adding an API for horizon to find out such things. A patch has now been
proposed, and it would be great if the horizon folks could take a look and
see if it meets their needs. Once we have some agreement, we can then look
at proposing a similar approach for other projects.

https://review.openstack.org/#/c/136980/

-- 
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Proposal for a StorPool image backend driver

2014-11-27 Thread Peter Penchev
Hi,

Hm, looks like I was a bit confused as to the procedure of submitting
a blueprint and a spec for a new Glance image backend driver - and
yeah, the actual text of the Reviewers section of the spec should
have made me at least ask on the list...  So here I am, asking, and
sorry for the prematurely filed blueprint.

So, what do you people think about a new image backend driver, using
snapshots in a StorPool distributed storage cluster?  The idea is that
since we already have an approved Cinder blueprint for volumes and we
are on the way to getting an approved Nova blueprint for attaching
StorPool-backed Cinder volumes, it might be nice to also have Glance
support, so that creating volumes from images and creating images from
volumes would be a breeze.  This will also help with another Nova spec
that I'll file today - using StorPool volumes instead of local files
for the Nova VM images; then cloning a Glance image to a Nova VM image
will also be a breeze.

Here's some more information:

- StorPool distributed storage: http://www.storpool.com/

- my prematurely filed blueprint:
https://blueprints.launchpad.net/glance/+spec/storpool-image-store

- my prematurely filed spec: https://review.openstack.org/#/c/137360/

- the Cinder volume support blueprint:
https://blueprints.launchpad.net/cinder/+spec/storpool-block-driver

- the Nova attach StorPool-backed Cinder volumes blueprint:
https://blueprints.launchpad.net/nova/+spec/libvirt-storpool-volume-attach

Please feel free to ask for any additional information and
clarifications, and once again, sorry for filing a blueprint against
the rules!

G'luck,
Peter

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Start Contributing to OpenStack - The Easy Way with a docker container

2014-11-27 Thread Maish Saidel-Keesing
I would like to share a small tool that should make things a lot
easier to start getting involved in contributing code into OpenStack.

The OpenStack-git-env docker[1] container.

Simple.

docker pull maishsk/openstack-git-env

More details can be found on this blog post[2].

[1] https://registry.hub.docker.com/u/maishsk/openstack-git-env/
[2]
http://technodrone.blogspot.com/2014/11/start-contributing-to-openstack-easy.html

Feedback is always welcome

-- 
Maish Saidel-Keesing


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Proposal for a StorPool image backend driver

2014-11-27 Thread Flavio Percoco

On 27/11/14 11:34 +0200, Peter Penchev wrote:

Hi,

Hm, looks like I was a bit confused as to the procedure of submitting
a blueprint and a spec for a new Glance image backend driver - and
yeah, the actual text of the Reviewers section of the spec should
have made me at least ask on the list...  So here I am, asking, and
sorry for the prematurely filed blueprint.

So, what do you peope think about a new image backend driver, using
snapshots in a StorPool distributed storage cluster?  The idea is that
since we already have an approved Cinder blueprint for volumes and we
are on the way of getting an approved Nova blueprint for attaching
StorPool-backed Cinder volumes, it might be nice to also have Glance
support, so that creating volumes from images and creating images from
volumes would be a breeze.  This will also help with another Nova spec
that I'll file today - using StorPool volumes instead of local files
for the Nova VM images; then cloning a Glance image to a Nova VM image
will also be a breeze.

Here's some more information:

- StorPool distributed storage: http://www.storpool.com/

- my prematurely filed blueprint:
https://blueprints.launchpad.net/glance/+spec/storpool-image-store

- my prematurely filed spec: https://review.openstack.org/#/c/137360/

- the Cinder volume support blueprint:
https://blueprints.launchpad.net/cinder/+spec/storpool-block-driver

- the Nova attach StorPool-backed Cinder volumes blueprint:
https://blueprints.launchpad.net/nova/+spec/libvirt-storpool-volume-attach

Please feel free to ask for any additional information and
clarifications, and once again, sorry for filing a blueprint against
the rules!


Hey Peter,

Thanks for reaching out and sending this to the mailing list.

As you may probably already know, Glance's store drivers live in a
library called glance_store. This library allows glance to store and
get data from the different stores.

A good thing about this library is that it allows for external drivers
to exist. While I'm definitely not opposed to adding this store driver
in the code base, I'd prefer to take advantage of this plugin module
first and have the StorPool driver implemented outside the codebase.
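
As a very rough illustration of what an out-of-tree driver skeleton could look
like - the base class, module path and method signatures below are assumptions
about glance_store at the time and must be checked against the real library:

# Hedged skeleton of an external glance_store driver; everything here is
# illustrative and left unimplemented.
from glance_store import driver

class StorPoolStore(driver.Store):

    def get_schemes(self):
        # URI scheme(s) this store would claim, e.g. storpool://...
        return ('storpool',)

    def add(self, image_id, image_file, image_size, context=None):
        raise NotImplementedError("write image data to a StorPool snapshot")

    def get(self, location, offset=0, chunk_size=None, context=None):
        raise NotImplementedError("stream image data back from StorPool")

    def delete(self, location, context=None):
        raise NotImplementedError("remove the backing StorPool snapshot")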

Once it's been implemented and maintained for at least one cycle -
Kilo - it'd be nice to have the spec proposed and the driver pulled
into glance_store.

It's not a hard rule, really. It's a personal opinion on a process
that has been quite broken in Glance. In addition to this, we're
about to refactor glance_store's API and it may require refactoring
the driver as well.

I'll comment on the spec as well but I thought I'd also provide
feedback on your email.

Thanks,
Flavio




G'luck,
Peter

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-27 Thread Duncan Thomas
I'm not sure about making it mandatory, but I can certainly see the
benefits of doing this in some cases. Maybe we can start by creating the
area and making the doc optional, and allow reviewers to ask for it to be
added where they consider it useful?

Sometimes (often in cinder), a feature gets written well before the
cinder-cli part gets written, but I guess you can still document via curl
or whatever testing mechanism you used - a separate patch can improve the
doc later once the cli part is written.

My one big worry, as with all documentation, is that we'll end up with a
large amount of stale documentation and nobody motivated to fix it.

On 27 November 2014 at 06:49, Deepak Shetty dpkshe...@gmail.com wrote:

 But doesn't *-specs come very early in the process, when you have an
 idea/proposal for a feature but don't have it implemented yet? Hence specs
 just end up with paragraphs on how the feature is supposed to work, but don't
 include any real-world screen shots, as the code is not yet ready at that
 point in time. Along with the patch it would make more sense, since the author
 would have tested it, so it isn't a big overhead to capture those CLI screen
 shots and put them in a .txt or .md file so that patch reviewers can see the
 patch in action and hence can review more effectively.

 thanx,
 deepak


 On Thu, Nov 27, 2014 at 8:30 AM, Dolph Mathews dolph.math...@gmail.com
 wrote:


 On Wed, Nov 26, 2014 at 1:15 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Deepak Shetty dpkshe...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
  Hi stackers,
 I was having this thought which i believe applies to all projects of
  openstack (Hence All in the subject tag)
 
  My proposal is to have examples or usecase folder in each project
 which has
  info on how to use the feature/enhancement (which was submitted as
 part of
  a gerrit patch)
  In short, a description with screen shots (cli, not GUI) which should
 be
   submitted (optionally or mandatory) along with patch (like how
 testcases
  are now enforced)
 
  I would like to take an example to explain. Take this patch @
  https://review.openstack.org/#/c/127587/ which adds a default volume
 type
  in Manila
 
   Now it would have been good if we could have a .txt or .md file along
 with
  the patch that explains :
 
  1) What changes are needed in manila.conf to make this work
 
  2) How to use the cli with this change incorporated
 
   3) Some screen shots of actual usage (now the author/submitter would
 have
  tested in devstack before sending patch, so just copying those cli
 screen
  shots wouldn't be too big of a deal)
 
  4) Any caution/caveats that one has to keep in mind while using this
 
  It can be argued that some of the above is satisfied via commit msg and
  lookign at test cases.
  But i personally feel that those still doesn't give a good
 visualization of
  how a feature patch works in reality
 
  Adding such a example/usecase file along with patch helps in multiple
 ways:
 
  1) It helps the reviewer get a good picture of how/which clis are
 affected
  and how this patch fits in the flow
 
  2) It helps documentor get a good view of how this patch adds value,
 hence
  can document it better
 
  3) It may help the author or anyone else write a good detailed blog
 post
  using the examples/usecase as a reference
 
  4) Since this becomes part of the patch and hence git log, if the
  feature/cli/flow changes in future, we can always refer to how the
 feature
  was designed, worked when it was first posted by looking at the example
  usecase
 
  5) It helps add a lot of clarity to the patch, since we know how the
 author
  tested it and someone can point missing flows or issues (which
 otherwise
  now has to be visualised)
 
  6) I feel this will help attract more reviewers to the patch, since
 now its
  more clear what this patch affects, how it affects and how flows are
  changing, even a novice reviewer can feel more comfortable and be
 confident
  to provide comments.
 
  Thoughts ?

 I would argue that for the projects that use *-specs repositories this
 is the type of detail we would like to see in the specifications associated
 with the feature themselves rather than creating another separate
 mechanism. For the projects that don't use specs repositories (e.g. Manila)
 maybe this demand is an indication they should be considering them?


 +1 this is describing exactly what I expect out of *-specs.



 -Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [TripleO] [CI] Cinder/Ceph CI setup

2014-11-27 Thread Duncan Thomas
I'd suggest starting by making it an extra job, so that it can be monitored
for a while for stability without affecting what is there.

I'd be supportive of making it the default HA job in the longer term as
long as the LVM code is still getting tested somewhere - LVM is still the
reference implementation in cinder and after discussion there was strong
resistance to changing that.

I've no strong opinions on the node layout, I'll leave that to more
knowledgable people to discuss.

Is the ceph/tripleO code in a working state yet? Is there a guide to using
it?


On 26 November 2014 at 13:10, Giulio Fidente gfide...@redhat.com wrote:

 hi there,

 while working on the TripleO cinder-ha spec meant to provide HA for Cinder
 via Ceph [1], we wondered how to (if at all) test this in CI, so we're
 looking for some feedback

 first of all, shall we make Cinder/Ceph the default for our (currently
 non-voting) HA job? (check-tripleo-ironic-overcloud-precise-ha)

 current implementation (under review) should permit for the deployment of
 both the Ceph monitors and Ceph OSDs on either controllers, dedicated
 nodes, or to split them up so that only OSDs are on dedicated nodes

 what would be the best scenario for CI?

 * a single additional node hosting a Ceph OSD with the Ceph monitors
 deployed on all controllers (my preference is for this one)

 * a single additional node hosting a Ceph OSD and a Ceph monitor

 * no additional nodes with controllers also service as Ceph monitor and
 Ceph OSD

 more scenarios? comments? Thanks for helping

 1. https://blueprints.launchpad.net/tripleo/+spec/tripleo-kilo-cinder-ha
 --
 Giulio Fidente
 GPG KEY: 08D733BA

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting Nov 27 1800 UTC

2014-11-27 Thread Sergey Lukjanov
Let's keep this meeting; if there are not enough people it'll be very
short :)

On Wed, Nov 26, 2014 at 11:36 PM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Thanks for the note, it sounds like we could cancel the meeting this week
 because of it... Anybody except Russian team folks planning to attend the
 meeting this week?

 On Wed, Nov 26, 2014 at 9:44 PM, Matthew Farrellee m...@redhat.com
 wrote:

 On 11/26/2014 01:10 PM, Sergey Lukjanov wrote:

 Hi folks,

 We'll be having the Sahara team meeting as usual in
 #openstack-meeting-alt channel.

 Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#
 Next_meetings

 http://www.timeanddate.com/worldclock/fixedtime.html?msg=
 Sahara+Meetingiso=20141127T18

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.


 fyi, it's the Thanksgiving holiday for folks in the US, so we'll be
 absent.

 best,


 matt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-27 Thread Duncan Thomas
The question about upgrade was purely to see if it had been thought about -
I agree that the code is sufficiently broken that it is certainly not
necessary, and quite possibly impossible - I just wondered if I was right
in that analysis.

On 26 November 2014 at 06:43, John Griffith john.griffi...@gmail.com
wrote:

 On Mon, Nov 24, 2014 at 10:53 AM, Monty Taylor mord...@inaugust.com
 wrote:
  On 11/24/2014 10:14 AM, Drew Fisher wrote:
 
 
  On 11/17/14 10:27 PM, Duncan Thomas wrote:
  Is the new driver drop-in compatible with the old one? IF not, can
  existing systems be upgraded to the new driver via some manual steps,
 or
  is it basically a completely new driver with similar functionality?
 
  Possibly none of my business- but if the current driver is actually just
  flat broken, then upgrading from it to the new solaris ZFS driver seems
  unlikely to be possibly, simply because the from case is broken.

  Most certainly is your business as much as anybody else's, and a completely
  valid point.

 IMO upgrade is a complete non-issue, drivers that are no longer
 maintained and obviously don't work should be marked as such in Kilo
 and probably removed as well.  Removal question etc is up to PTL and
 Core but my two cents is they're useless anyway for the most part.

 
  The driver in san/solaris.py focuses entirely on iSCSI.  I don't think
  existing systems can be upgraded manually but I've never really tried.
  We started with a clean slate for Solaris 11 and Cinder and added local
  ZFS support for single-system and demo rigs along with a fibre channel
  and iSCSI drivers.
 
  The driver is publically viewable here:
 
 
 https://java.net/projects/solaris-userland/sources/gate/content/components/openstack/cinder/files/solaris/zfs.py
 
  Please note that this driver is based on Havana.  We know it's old and
  we're working to get it updated to Juno right now.  I can try to work
  with my team to get a blueprint filed and start working on getting it
  integrated into trunk.
 
  -Drew
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Asia friendly IRC meeting time

2014-11-27 Thread Sergey Lukjanov
If there are no objections before next week, we'll try the new time
(14:00 UTC) next week.

On Thu, Nov 27, 2014 at 4:06 AM, Zhidong Yu zdyu2...@gmail.com wrote:

 Thank you all!

 On Thu, Nov 27, 2014 at 1:15 AM, Sergey Lukjanov slukja...@mirantis.com
 wrote:

 I think that 6 am for US west works much better than 3 am for Saratov.

 So, I'm ok with keeping current time and add 1400 UTC.

18:00UTC: Moscow (9pm)  China(2am)  US West(10am)/US East (1pm)
14:00UTC: Moscow (5pm)  China(10pm)  US (W 6am / E 9am)

 I think it's the best option to make all of us able to join.


 On Wed, Nov 26, 2014 at 8:33 AM, Zhidong Yu zdyu2...@gmail.com wrote:

 If 6am works for people in US west, then I'm fine with Matt's suggestion
 (UTC14:00).

 Thanks, Zhidong

 On Tue, Nov 25, 2014 at 11:26 PM, Matthew Farrellee m...@redhat.com
 wrote:

 On 11/25/2014 02:37 AM, Zhidong Yu wrote:

  Current meeting time:
  18:00UTC: Moscow (9pm)China(2am) US West(10am)

 My proposal:
  18:00UTC: Moscow (9pm)China(2am) US West(10am)
  00:00UTC: Moscow (3am)China(8am) US West(4pm)


  fyi, a number of us are US East (US West + 3 hours), so...

 current meeting time:
  18:00UTC: Moscow (9pm)  China(2am)  US West(10am)/US East (1pm)

 and during daylight savings it's US West(11am)/US East(2pm)

 so the proposal is:
  18:00UTC: Moscow (9pm)  China(2am)  US (W 10am / E 1pm)
  00:00UTC: Moscow (3am)  China(8am)  US (W 4pm / E 7pm)

 given it's literally impossible to schedule a meeting during business
 hours across saratov, china and the us, that's a pretty reasonable
 proposal. my concern is that 00:00UTC may be thin on saratov  US
 participants.

 also consider alternating the existing schedule w/ something that's ~4
 hours earlier...
  14:00UTC: Moscow (5pm)  China(10pm)  US (W 6am / E 9am)

 best,


 matt


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] suds-jurko, new in our global-requirements.txt: what is the point?!?

2014-11-27 Thread Thomas Goirand
On 11/27/2014 12:31 AM, Donald Stufft wrote:
 
 On Nov 26, 2014, at 10:34 AM, Thomas Goirand z...@debian.org wrote:

 Hi,

 I tried to package suds-jurko. I was first happy to see that there was
 some progress to make things work with Python 3. Unfortunately, the
 reality is that suds-jurko has many issues with Python 3. For example,
 it has many:

 except Exception, e:

 as well as many:

 raise Exception, 'Duplicate key %s found' % k

 This is clearly not Python3 code. I tried quickly to fix some of these
 issues, but as I fixed a few, others appear.

 So I wonder, what is the point of using suds-jurko, which is half-baked,
 and which will conflict with the suds package?

 It looks like it uses 2to3 to become Python 3 compatible.

Ouch! That's horrible.

I think it'd be best if someone spent some time on writing real code
rather than using a hack like 2to3. Thoughts, anyone?
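
For reference, the Python 3 compatible spellings of the two constructs quoted
above look like this (just an illustration, not a patch to suds-jurko):

# Python 2-only syntax as found in suds-jurko:
#     except Exception, e:
#     raise Exception, 'Duplicate key %s found' % k
# The equivalent that works on both Python 2.6+ and Python 3:
def add_key(seen, k):
    if k in seen:
        raise Exception('Duplicate key %s found' % k)  # parenthesised raise
    seen.add(k)

seen = {'foo'}
try:
    add_key(seen, 'foo')
except Exception as e:                                 # 'as' instead of ','
    print(e)  # prints: Duplicate key foo found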

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][sahara] Sahara is broken in stable/juno

2014-11-27 Thread Sergey Lukjanov
Already merged into both master and stable/juno.

On Thu, Nov 27, 2014 at 12:45 AM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi,

 Sahara is broken in stable/juno now by new alembic release (unit tests).
 The patch is already done and approved for master [0] and I've already
 backported it to the stable/juno branch [1].

 [0] https://review.openstack.org/#/c/137035/
 [1] https://review.openstack.org/#/c/137469/

 P.S. If anyone from stable team will approve it, it'll be great :)

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reply: [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-27 Thread marios
On 27/11/14 03:59, Linchengyong wrote:
 My name is also on the etherpad; I'd like to take part in this work.

Hi, so far we have this going
https://etherpad.openstack.org/p/restructure-l2-agent

thanks, marios

 -----Original Message-----
 From: marios [mailto:mar...@redhat.com]
 Sent: 19 November 2014 18:10
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent 
 debt repayment task force
 
 On 19/11/14 11:34, Rossella Sblendido wrote:
 My name is also on the etherpad, I'd like to work on improving RPC and 
 ovslib. I am willing to take care of the BP, if somebody else is 
 interested we can do it together.
 
 sure I can help - (let's sync up on irc to split out the sections and get an 
 etherpad going which we can then just paste into a spec - we can just use the 
 l3 one as a template?)
 
 thanks, marios
 

 cheers,

 Rossella


 On 11/19/2014 12:01 AM, Kevin Benton wrote:
 My name is already on the etherpad in the RPC section, but I'll 
 reiterate here that I'm very interested in optimizing a lot of the 
 expensive ongoing communication between the L2 agent and the server 
 on the message bus.

 On Tue, Nov 18, 2014 at 9:12 AM, Carl Baldwin c...@ecbaldwin.net 
 mailto:c...@ecbaldwin.net wrote:

 At the recent summit, we held a session about debt repayment in the
 Neutron agents [1].  Some work was identified for the L2 agent.  We
 had a discussion in the Neutron meeting today about bootstrapping that
 work.

 The first order of business will be to generate a blueprint
 specification for the work similar, in purpose, to the one that is
 under discussion for the L3 agent [3].  I personally am at or over
 capacity for BP writing this cycle.  We need a volunteer to take this
 on coordinating with others who have been identified on the etherpad
 for L2 agent work (you know who you are) and other volunteers who have
 yet to be identified.

 This task force will use the weekly Neutron meeting, the ML, and IRC
 to coordinate efforts.  But first, we need to bootstrap the task
 force.  If you plan to participate, please reply to this email and
 describe how you will contribute, especially if you are willing to be
 the lead author of a BP.  I will reconcile this with the etherpad to
 see where gaps have been left.

 I am planning to contribute as a core reviewer of blueprints and code
 submissions only.

 Carl

 [1] https://etherpad.openstack.org/p/kilo-neutron-agents-technical-debt
 [2]
 
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-11-18-14.02.html
 [3] https://review.openstack.org/#/c/131535/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] How to perform VMs live migration?

2014-11-27 Thread Timur Nurlygayanov
Hi Danny,

Probably the information on the following pages can help:
http://docs.openstack.org/admin-guide-cloud/content/section_configuring-compute-migrations.html
http://kimizhang.wordpress.com/2013/08/26/openstack-vm-live-migration/
https://www.mirantis.com/blog/tutorial-openstack-live-migration-with-kvm-hypervisor-and-nfs-shared-storage/
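
For a quick lab setup, those guides essentially boil down to NFS-backed shared
storage for the instances directory. A rough sketch (host names and paths are
illustrative, and you still need the libvirt/nova migration flags covered in
the first link):

    # On the NFS server (e.g. the controller): export the instances directory
    sudo apt-get install -y nfs-kernel-server
    echo '/var/lib/nova/instances *(rw,sync,no_root_squash,no_subtree_check)' | \
        sudo tee -a /etc/exports
    sudo exportfs -ra

    # On every compute node: mount the same export at the same path
    sudo apt-get install -y nfs-common
    sudo mount -t nfs <controller>:/var/lib/nova/instances /var/lib/nova/instances
    # The nova user must own the directory and have the same uid/gid on all nodes.

With that in place, both computes see the same instance files and the "not on
shared storage" error should go away.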



On Thu, Nov 27, 2014 at 4:06 AM, Danny Choi (dannchoi) dannc...@cisco.com
wrote:

  Hi,

  When I try “nova host-evacuate-live”, I got the following error:

  *Error while live migrating instance: tme209 is not on shared storage:
  Live migration can not be used without shared storage.*

  What do I need to do to create a shared storage?

  Thanks,
 Danny

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc

My OpenStack summit schedule:
http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pecan] [WSME] Different content-type in request and response

2014-11-27 Thread Renat Akhmerov
Doug, thanks for your answer! 

My explanations below..


 On 26 Nov 2014, at 21:18, Doug Hellmann d...@doughellmann.com wrote:
 
 
 On Nov 26, 2014, at 3:49 AM, Renat Akhmerov rakhme...@mirantis.com 
 mailto:rakhme...@mirantis.com wrote:
 
 Hi,
 
 I traced the WSME code and found a place [0] where it tries to get arguments 
 from request body based on different mimetype. So looks like WSME supports 
 only json, xml and “application/x-www-form-urlencoded”.
 
 So my question is: Can we fix WSME to also support “text/plain” mimetype? I 
 think the first snippet that Nikolay provided is valid from WSME standpoint.
 
 WSME is intended for building APIs with structured arguments. It seems like 
 the case of wanting to use text/plain for a single input string argument just 
 hasn’t come up before, so this may be a new feature.
 
 How many different API calls do you have that will look like this? Would this 
 be the only one in the API? Would it make sense to consistently use JSON, 
 even though you only need a single string argument in this case?

We have 5-6 API calls where we need it.

And let me briefly explain the context. In Mistral we have a language (we call 
it DSL) to describe different object types: workflows, workbooks, actions. So 
currently when we upload say a workbook we run in a command line:

mistral workbook-create my_wb.yaml

where my_wb.yaml contains that DSL. The result is a table representation of 
actually create server side workbook. From technical perspective we now have:

Request:

POST /mistral_url/workbooks

{
  "definition": "<escaped content of my_wb.yaml>"
}

Response:

{
  "id": "1-2-3-4",
  "name": "my_wb_name",
  "description": "my workbook",
  ...
}

The point is that if we use, for example, something like curl, we have to produce 
that escaped content of my_wb.yaml every time and build a synthetic JSON body just 
to be able to send it to the server side.

So for us it would be much more convenient if we could just send plain text but 
still receive JSON as the response. I personally don't want to switch to some 
other technology, because generally WSME does its job and I like the concept of 
REST resources defined as classes. If it supported text/plain it would be the 
best fit for us.
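
To make the difference concrete, here is roughly how the two styles look with
curl (URLs are placeholders; the second call is the requested behaviour and
does not work today):

    # Today: escape my_wb.yaml and wrap it in a synthetic JSON body
    curl -X POST http://<mistral_url>/workbooks \
         -H 'Content-Type: application/json' \
         -d '{"definition": "<escaped content of my_wb.yaml>"}'

    # Requested: post the DSL as-is and still get the JSON resource back
    curl -X POST http://<mistral_url>/workbooks \
         -H 'Content-Type: text/plain' \
         --data-binary @my_wb.yaml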

 
 Or if we don’t understand something in WSME philosophy then it’d nice to 
 hear some explanations from WSME team. Will appreciate that.
 
 
 Another issue that previously came across is that if we use WSME then we 
 can’t pass arbitrary set of parameters in a url query string, as I 
 understand they should always correspond to WSME resource structure. So, in 
 fact, we can’t have any dynamic parameters. In our particular use case it’s 
 very inconvenient. Hoping you could also provide some info about that: how 
 it can be achieved or if we can just fix it.
 
 Ceilometer uses an array of query arguments to allow an arbitrary number.
 
 On the other hand, it sounds like perhaps your desired API may be easier to 
 implement using some of the other tools being used, such as JSONSchema. Are 
 you extending an existing API or building something completely new?

We want to improve our existing Mistral API. Basically, the idea is to be able 
to apply dynamic filters when we’re requesting a collection of objects using 
url query string. Yes, we could use JSONSchema if you say it’s absolutely 
impossible to do and doesn’t follow WSME concepts, that’s fine. But like I said 
generally I like the approach that WSME takes and don’t feel like jumping to 
another technology just because of this issue.

Thanks for mentioning Ceilometer, we’ll look at it and see if that works for us.

Renat___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-11-27 Thread Murugan, Visnusaran
Hi Zane,

At this stage our implementation (as mentioned in the wiki: 
https://wiki.openstack.org/wiki/Heat/ConvergenceDesign) achieves your 
design goals.


1.   In case of a parallel update, our implementation adjusts graph 
according to new template and waits for dispatched resource tasks to complete.

2.   Reason for basing our PoC on Heat code:

a.   To solve contention processing parent resource by all dependent 
resources in parallel.

b.  To avoid porting issue from PoC to HeatBase. (just to be aware of 
potential issues asap)

3.   Resource timeouts would be helpful, but I guess they are resource-specific 
and have to come from the template, with default values from the plugins.

4.   We see the problem area as aggregating resource notifications and processing 
the next level of resources without contention and with minimal DB usage. We 
are working on the following approaches in parallel:

a.   Use a queue per stack to serialize notifications.

b.  Get the parent ProcessLog (ResourceID, EngineID) and initiate convergence 
upon the first child notification. Subsequent children that fail to get the 
parent resource lock will directly send a message to the waiting parent task 
(topic=stack_id.parent_resource_id).
Based on performance/feedback we can select either one, or a blend of the two.
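
To illustrate approach (b), a rough, purely illustrative sketch (all names are
hypothetical; this is neither Heat nor our PoC code): the first child to report
completion takes the parent's lock and triggers convergence, while later
children just notify the already-running parent task instead of contending:

    import threading

    parent_locks = {}           # parent resource id -> lock
    pending_notifications = {}  # parent resource id -> children that lost the race

    def converge_parent(parent_id):
        # Hypothetical stand-in for "process the parent resource".
        late = pending_notifications.pop(parent_id, [])
        print('converging %s; late notifications: %s' % (parent_id, late))

    def on_child_complete(parent_id, child_id, all_children_done):
        lock = parent_locks.setdefault(parent_id, threading.Lock())
        if lock.acquire(False):
            # First child to report: it is responsible for the parent.
            try:
                if all_children_done():
                    converge_parent(parent_id)
            finally:
                lock.release()
        else:
            # Another child already holds the lock; send a message to the
            # waiting parent task instead (topic=stack_id.parent_resource_id
            # in the real design).
            pending_notifications.setdefault(parent_id, []).append(child_id)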

Advantages:

1.   Failed Resource tasks can be re-initiated after ProcessLog table 
lookup.

2.   One worker == one resource.

3.   Supports concurrent updates

4.   Delete == update with empty stack

5.   Rollback == update to previous known good/completed stack.

Disadvantages:

1.   Still holds stackLock (WIP to remove with ProcessLog)

I completely understand your concern about reviewing our code, since the commits 
are numerous and the approach changes course in places.  Our starting commit is 
[c1b3eb22f7ab6ea60b095f88982247dd249139bf], though this might not help ☺

Your Thoughts.

Happy Thanksgiving.
Vishnu.

From: Angus Salkeld [mailto:asalk...@mirantis.com]
Sent: Thursday, November 27, 2014 9:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

On Thu, Nov 27, 2014 at 12:20 PM, Zane Bitter 
zbit...@redhat.commailto:zbit...@redhat.com wrote:
A bunch of us have spent the last few weeks working independently on proof of 
concept designs for the convergence architecture. I think those efforts have 
now reached a sufficient level of maturity that we should start working 
together on synthesising them into a plan that everyone can forge ahead with. 
As a starting point I'm going to summarise my take on the three efforts; 
hopefully the authors of the other two will weigh in to give us their 
perspective.


Zane's Proposal
===

https://github.com/zaneb/heat-convergence-prototype/tree/distributed-graph

I implemented this as a simulator of the algorithm rather than using the Heat 
codebase itself in order to be able to iterate rapidly on the design, and 
indeed I have changed my mind many, many times in the process of implementing 
it. Its notable departure from a realistic simulation is that it runs only one 
operation at a time - essentially giving up the ability to detect race 
conditions in exchange for a completely deterministic test framework. You just 
have to imagine where the locks need to be. Incidentally, the test framework is 
designed so that it can easily be ported to the actual Heat code base as 
functional tests so that the same scenarios could be used without modification, 
allowing us to have confidence that the eventual implementation is a faithful 
replication of the simulation (which can be rapidly experimented on, adjusted 
and tested when we inevitably run into implementation issues).

This is a complete implementation of Phase 1 (i.e. using existing resource 
plugins), including update-during-update, resource clean-up, replace on update 
and rollback; with tests.

Some of the design goals which were successfully incorporated:
- Minimise changes to Heat (it's essentially a distributed version of the 
existing algorithm), and in particular to the database
- Work with the existing plugin API
- Limit total DB access for Resource/Stack to O(n) in the number of resources
- Limit overall DB access to O(m) in the number of edges
- Limit lock contention to only those operations actually contending (i.e. no 
global locks)
- Each worker task deals with only one resource
- Only read resource attributes once


Open questions:
- What do we do when we encounter a resource that is in progress from a 
previous update while doing a subsequent update? Obviously we don't want to 
interrupt it, as it will likely be left in an unknown state. Making a 
replacement is one obvious answer, but in many cases there could be serious 
down-sides to that. How long should we wait before trying it? What if it's 
still in progress because the engine processing the resource already died?

Also, how do we implement resource level 

Re: [openstack-dev] [neutron] - the setup of a DHCP sub-group

2014-11-27 Thread Richard Woo
Don, the spec is location at:
http://git.openstack.org/cgit/openstack/neutron-specs/


On Wed, Nov 26, 2014 at 11:55 PM, Don Kehn dek...@gmail.com wrote:

 Sure, will try and get to it over the holiday. Do you have a link to the
 spec repo?

 On Mon, Nov 24, 2014 at 3:27 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 Don,

 Could the spec linked to your BP be moved to the specs repository?
 I'm hesitant to start reading it as a google doc when I know I'm going
 to want to make comments and ask questions.

 Carl

 On Thu, Nov 13, 2014 at 9:19 AM, Don Kehn dek...@gmail.com wrote:
  If this shows up twice sorry for the repeat:
 
  Armando, Carl:
   During the Summit, Armando and I had a very quick conversation concerning a
   blueprint that I submitted,
   https://blueprints.launchpad.net/neutron/+spec/dhcp-cpnr-integration, and
   Armando had mentioned the possibility of getting together a sub-group tasked
   with DHCP Neutron concerns. I have talked with Infoblox folks (see
   https://blueprints.launchpad.net/neutron/+spec/neutron-ipam), and everyone
   seems to be in agreement that there is synergy, especially concerning the
   development of a relay and potentially looking into how DHCP is handled. In
   addition, during the Friday meetup session on DHCP that I gave, there seemed
   to be some general interest from some of the operators as well.
 
  So what would be the formality in going forth to start a sub-group and
  getting this underway?
 
  DeKehn
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 
 Don Kehn
 303-442-0060

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-11-27 Thread Osanai, Hisashi

Hi,

I think it is a good idea to have the object-replicator's failure info 
in recon, like the other replicators.

I think the following info can be added for the object-replicator, in addition to 
object_replication_last and object_replication_time.

Unless there is a technical reason not to add them, I can make the change. What do 
you think?

{
    "replication_last": 1416334368.60865,
    "replication_stats": {
        "attempted": 13346,
        "empty": 0,
        "failure": 870,
        "failure_nodes": {"192.168.0.1": 3,
                          "192.168.0.2": 860,
                          "192.168.0.3": 7},
        "hashmatch": 0,
        "remove": 0,
        "start": 1416354240.9761429,
        "success": 1908,
        "ts_repl": 0
    },
    "replication_time": 2316.5563162644703,
    "object_replication_last": 1416334368.60865,
    "object_replication_time": 2316.5563162644703
}
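
Assuming those fields land in the object replicator's recon cache, spotting a
bad node would be as simple as something like the following (the cache path is
the usual default; the failure/failure_nodes keys are the proposed additions,
not something current Swift emits):

    import json

    # Read the object replicator's recon cache directly on a storage node.
    with open('/var/cache/swift/object.recon') as f:
        recon = json.load(f)

    stats = recon.get('replication_stats', {})
    print('last pass completed: %s' % recon.get('replication_last'))
    print('failures: %s' % stats.get('failure'))
    for node, count in sorted(stats.get('failure_nodes', {}).items()):
        print('  %s: %d failed' % (node, count))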

Cheers,
Hisashi Osanai

On Tuesday, November 25, 2014 4:37 PM, Matsuda, Kenichiro 
[mailto:matsuda_keni...@jp.fujitsu.com] wrote:
  I understood that the logs are necessary to judge whether there were any
  failures on the object-replicator.
  I also thought that having failure info in the object-replicator's recon data
  (just like the recon info of the account-replicator and container-replicator)
  would be useful.
  Is there any reason failure is not included in recon?

On Tuesday, November 25, 2014 5:53 AM, Clay Gerrard 
[mailto:clay.gerr...@gmail.com] wrote:
  replication logs

On Friday, November 21, 2014 4:22 AM, Clay Gerrard 
[mailto:clay.gerr...@gmail.com] wrote:
 You might check if the swift-recon tool has the data you're looking for.  It 
 can report 
 the last completed replication pass time across nodes in the ring.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Compute Node lost the net-connection after spawning vm

2014-11-27 Thread Aman Kumar
Hi,

I have been using DevStack for 4 months and it was working fine, but 2 days ago
I ran into a problem and tried to re-install DevStack by cloning it again
from git. It installed successfully and both of my compute nodes came up.

After that I spawned a VM from Horizon; the spawned VM got an IP and is
running successfully,
but the compute node lost its network connection and I am not able to SSH to
that node from PuTTY.

I checked all the settings and there is no problem with my VM settings; I think
there is some problem with DevStack, because I tried 5-6 times with my old
setup and also with new VM configurations. Every time only the compute node
loses its network connection, while the spawned VM keeps running and the
compute node still shows as enabled.

Can anyone please help me? Thanks in advance.

Regards
Aman Kumar
HP Software India
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [CI] Cinder/Ceph CI setup

2014-11-27 Thread Derek Higgins
On 27/11/14 10:21, Duncan Thomas wrote:
 I'd suggest starting by making it an extra job, so that it can be
 monitored for a while for stability without affecting what is there.

we have to be careful here; adding an extra job for this is probably the
safest option, but tripleo CI resources are a constraint. For that reason
I would add it to the HA job (which is currently non-voting) and once
it's stable we should make it voting.

 
 I'd be supportive of making it the default HA job in the longer term as
 long as the LVM code is still getting tested somewhere - LVM is still
 the reference implementation in cinder and after discussion there was
 strong resistance to changing that.
We are using LVM for our non-HA jobs and would continue to do so. If I
understand it correctly, the tripleo LVM support isn't HA, so continuing
to test it on our HA job doesn't achieve much.

 
 I've no strong opinions on the node layout, I'll leave that to more
 knowledgable people to discuss.
 
 Is the ceph/tripleO code in a working state yet? Is there a guide to
 using it?
 
 
 On 26 November 2014 at 13:10, Giulio Fidente gfide...@redhat.com
 mailto:gfide...@redhat.com wrote:
 
 hi there,
 
 while working on the TripleO cinder-ha spec meant to provide HA for
 Cinder via Ceph [1], we wondered how to (if at all) test this in CI,
 so we're looking for some feedback
 
 first of all, shall we make Cinder/Ceph the default for our
 (currently non-voting) HA job?
  (check-tripleo-ironic-overcloud-precise-ha)
 
 current implementation (under review) should permit for the
 deployment of both the Ceph monitors and Ceph OSDs on either
 controllers, dedicated nodes, or to split them up so that only OSDs
 are on dedicated nodes
 
 what would be the best scenario for CI?
 
 * a single additional node hosting a Ceph OSD with the Ceph monitors
 deployed on all controllers (my preference is for this one)

I would be happy with this so long as it didn't drastically increase the
time to run the HA job.

 
 * a single additional node hosting a Ceph OSD and a Ceph monitor
 
 * no additional nodes with controllers also service as Ceph monitor
 and Ceph OSD
 
 more scenarios? comments? Thanks for helping
 
 1.
  https://blueprints.launchpad.net/tripleo/+spec/tripleo-kilo-cinder-ha
 -- 
 Giulio Fidente
 GPG KEY: 08D733BA
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Duncan Thomas
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] IaaS Devroom at FOSDEM 15: Call for Participation

2014-11-27 Thread Thierry Carrez
Last days to submit a presentation for FOSDEM'15 IaaS devroom (Brussels,
January 31, 2015):

 =
 
 FOSDEM '15 Infrastructure-as-a-Service devroom
 
 -
 Important Dates and Info!
 -
 
 Submission deadline: 1 December 2014
 Acceptance notifications: 15 December 2014
 Final schedule announcement: 9 January 2015
 Devroom: 31 January 2015
 
 -
 Call for Participation
 -
 
 The open source IaaS devroom will host sessions around open source
 Infrastructure-as-a-Service projects such as (but not limited to)
 Apache CloudStack, OpenStack, oVirt, OpenNebula, and Ganeti.
 
 This room will focus on collaboration between projects on common
 problems and software, such as shared storage, virtualized networking,
 interfacing with multiple hypervisors, and scaling across hundreds or
 thousands of servers.
 
 Organizers are seeking topics that are interesting to multiple projects,
 and hope to encourage developers to share experience solving problems
 with their own projects.
 
 -
 Call for Volunteers
 -
 
 We are also looking for volunteers to help run the devroom. We need
 assistance watching time for the speakers, and helping with video
 for the devroom. Please contact Joe Brockmeier (jzb at redhat.com) for
 more information here.
 
 -
 Details: READ CAREFULLY
 -
 
 This year at FOSDEM there will be a one-day devroom to focus on IaaS
 projects. If your project is related to IaaS, we would love to see
 your submissions.
 
 Please note that we expect more proposals than we can possibly accept,
 so it is vitally important that you submit your proposal on or before
 the deadline. Late submissions are unlikely to be considered.
 
 All slots are 40 minutes, with 30 minutes planned for presentations, and
 10 minutes for Q&A.
 
 All presentations *will* be recorded and made available under Creative
 Commons licenses. Please indicate when submitting your talk that your
 presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0
 license when submitting the talk and that you agree to have your
 presentation recorded. For example:
 
   If my presentation is accepted for FOSDEM, I hereby agree to license
all recordings, slides, and other associated materials under the
Creative Commons Attribution Share-Alike 4.0 International License.
Sincerely, NAME.
 
 Also, in the notes field, please confirm that if your talk is accepted
 that you *will* be able to attend FOSDEM and deliver your presentation.
 We will not consider proposals from prospective speakers unsure whether
 they will be able to secure funds for travel and lodging to attend
 FOSDEM. (Sadly, we are not able to offer travel funding for prospective
 speakers.)
 
 -
 How to Submit
 -
 
 All submissions are made via the Pentabarf event planning site:
 
 https://penta.fosdem.org/submission/FOSDEM15
 
 If you have not used Pentabarf before, you will need to create an account.
 
 After creating the account, select Create Event and then be sure to
 select Infrastructure as a service devroom from the options under
 Track.
 
 -
 Questions
 -
 
 If you have any questions about this devroom, please send your questions
 to:
 
 iaas-virt-devr...@lists.fosdem.org
 
 We will respond as quickly as possible. Thanks!
 
 

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Compute Node lost the net-connection after spawning vm

2014-11-27 Thread Vineet Menon
Please post the all the relevant files including local.conf.

If you believe that it's a bug, then you can report a bug as well..

Regards,

Vineet Menon


On 27 November 2014 at 14:15, Aman Kumar amank3...@gmail.com wrote:

 Hi,

 I have been using DevStack for 4 months and it was working fine, but 2 days ago
 I ran into a problem and tried to re-install DevStack by cloning it again
 from git. It installed successfully and both of my compute nodes came up.

 After that I spawned a VM from Horizon; the spawned VM got an IP and is
 running successfully,
 but the compute node lost its network connection and I am not able to SSH to
 that node from PuTTY.

 I checked all the settings and there is no problem with my VM settings; I think
 there is some problem with DevStack, because I tried 5-6 times with my old
 setup and also with new VM configurations. Every time only the compute node
 loses its network connection, while the spawned VM keeps running and the
 compute node still shows as enabled.

 Can anyone please help me? Thanks in advance.

 Regards
 Aman Kumar
 HP Software India

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] suds-jurko, new in our global-requirements.txt: what is the point?!?

2014-11-27 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 27/11/14 12:09, Thomas Goirand wrote:
 On 11/27/2014 12:31 AM, Donald Stufft wrote:
 
 On Nov 26, 2014, at 10:34 AM, Thomas Goirand z...@debian.org
 wrote:
 
 Hi,
 
 I tried to package suds-jurko. I was first happy to see that
 there was some progress to make things work with Python 3.
 Unfortunately, the reality is that suds-jurko has many issues
 with Python 3. For example, it has many:
 
 except Exception, e:
 
 as well as many:
 
 raise Exception, 'Duplicate key %s found' % k
 
 This is clearly not Python3 code. I tried quickly to fix some
 of these issues, but as I fixed a few, others appear.
 
 So I wonder, what is the point of using suds-jurko, which is
 half-baked, and which will conflict with the suds package?
 
 It looks like it uses 2to3 to become Python 3 compatible.
 
 Outch! That's horrible.
 
 I think it'd be best if someone spent some time on writing real
 code rather than using such a hack as 2to3. Thoughts anyone?

That sounds very subjective. If upstream is able to support multiple
python versions from the same codebase, then I see no reason for them
to split the code into multiple branches and introduce additional
burden syncing fixes between those.

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUd0wMAAoJEC5aWaUY1u57gVMIAI70aQjReaa32WVJExL18c4t
QfJ3U+4yGxURIwqu/VKfpujN+KNQ7JWR+zqSUpv1gGxRTQcwFYLLKeW9XRxBnETw
0YxLvCrju3MInFDCFZrJzm3mTMlnQSosQSoK08Phn++cyRs1R/isaWPU7UHMbiSM
jIqRQkLYYPoSnhiTm1LkOoWg3oP82g3vxOPQmAlTAlx38lJ81ioBq7z9rRQzW+CX
DcZy+64t+iePY9w0P4mvXdl/saDAlh7Hl/nu7RKcC5ycoa2un07N6SAazycMPvln
naQvaXFfjPjGP5ToLNWIDwhRWmMkUa581ng37+6LewvbFNUDttKCHobp8cvVqy8=
=d0S7
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About the DEFAULT_PAGE_SIZE

2014-11-27 Thread Qiming Teng
It looks like those are constants that are not yet used for pagination.

(refer to: heatclient/v1/stacks.py: StackManager.list())
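
For anyone wondering how such a constant is typically used once pagination is
wired up, the usual limit/marker loop looks roughly like this (illustrative
only, not the actual heatclient code):

    DEFAULT_PAGE_SIZE = 20

    def list_paginated(fetch_page, page_size=DEFAULT_PAGE_SIZE):
        # fetch_page(limit, marker) stands in for an API call such as
        # StackManager.list(); items are assumed to have an 'id' attribute.
        marker = None
        while True:
            page = fetch_page(limit=page_size, marker=marker)
            if not page:
                return
            for item in page:
                yield item
            if len(page) < page_size:
                return
            marker = page[-1].id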

Regards,
  Qiming

On Thu, Nov 27, 2014 at 03:56:07PM +0800, Baohua Yang wrote:
 Hi, all
  Just noticed there are several DEFAULT_PAGE_SIZE=20 lines inside the
 latest python-heatclient package, e.g.,
 
 $ grep DEFAULT_PAGE_SIZE . -r
 ./heatclient/v1/actions.py:DEFAULT_PAGE_SIZE = 20
 ./heatclient/v1/events.py:DEFAULT_PAGE_SIZE = 20
 ./heatclient/v1/resources.py:DEFAULT_PAGE_SIZE = 20
 
    What is that for? I checked the source code, but nothing seems to use
  it.
    From Google, it seems like a Java thing?
 
   Thanks!
 
 -- 
 Best wishes!
 Baohua

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-27 Thread Vitaly Kramskikh
Folks,

In the 6.0 release we'll support simple plugins for Fuel. The current
architecture allows to create only very simple plugins and doesn't allow to
pluginize complex features like Ceph, vCenter, etc. I'd like to propose
some changes to make it possible. They are subtle enough and the plugin
template still can be autogenerated by Fuel Plugin Builder. Here they are:

https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf

   1. environment_config.yaml should contain exact config which will be
   mixed into cluster_attributes. No need to implicitly generate any controls
   like it is done now.
   2. metadata.yaml should also contain is_removable field. This field is
   needed to determine whether it is possible to remove installed plugin. It
   is impossible to remove plugins in the current implementation. This
   field should contain an expression written in our DSL which we already use
   in a few places. The LBaaS plugin also uses it to hide the checkbox if
   Neutron is not used, so even simple plugins like this need to utilize it.
   This field can also be autogenerated, for more complex plugins plugin
   writer needs to fix it manually. For example, for Ceph it could look like
   settings:storage.volumes_ceph.value == false and
   settings:storage.images_ceph.value == false.
   3. For every task in tasks.yaml there should be added new condition
   field with an expression which determines whether the task should be run.
   In the current implementation tasks are always run for specified roles. For
   example, vCenter plugin can have a few tasks with conditions like
   settings:common.libvirt_type.value == 'vcenter' or
   settings:storage.volumes_vmdk.value == true. Also, AFAIU, similar
   approach will be used in implementation of Granular Deployment feature.
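
To make these concrete, a hypothetical fragment for a Ceph-like plugin might
look as follows (only is_removable and condition are the fields proposed above;
the surrounding keys and values are illustrative):

    # metadata.yaml (fragment) -- is_removable reuses the existing expression DSL
    name: ceph_plugin
    is_removable: "settings:storage.volumes_ceph.value == false and settings:storage.images_ceph.value == false"

    # tasks.yaml (fragment) -- today such a task would always run for the roles
    - role: ['controller']
      stage: post_deployment
      type: puppet
      condition: "settings:storage.volumes_ceph.value == true"
      parameters:
        puppet_manifest: deploy_ceph.pp
        puppet_modules: puppet/modules
        timeout: 360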

These simple changes will allow to write much more complex plugins. What do
you think?
-- 
Vitaly Kramskikh,
Software Engineer,
Mirantis, Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-27 Thread Jay Pipes

On 11/19/2014 01:25 PM, Mike Bayer wrote:

OK so here is why EngineFacade as described so far doesn’t work, because if it 
is like this:

def some_api_operation ->

    novaobject1.save() ->

        @writer
        def do_some_db_thing()

    novaobject2.save() ->

        @writer
        def do_some_other_db_thing()

then yes, those two @writer calls aren’t coordinated.   So yes, I think 
something that ultimately communicates the same meaning as @writer needs to be 
at the top:

@something_that_invokes_writer_without_exposing_db_stuff
def some_api_operation ->

 # … etc

If my decorator is not clear enough, let me clarify that a decorator that is 
present at the API/ nova objects layer will interact with the SQL layer through 
some form of dependency injection, and not any kind of explicit import; that 
is, when the SQL layer is invoked, it registers some kind of state onto the 
@something_that_invokes_writer_without_exposing_db_stuff system that causes its 
“cleanup”, in this case the commit(), to occur at the end of that topmost 
decorator.



I think the following pattern would solve it:

@remotable
def save():
    session = <insert magic here>
    try:
        r = self._save(session)
        session.commit()  # (or reader/writer magic from oslo.db)
        return r
    except Exception:
        session.rollback()  # (or reader/writer magic from oslo.db)
        raise

@definitelynotremotable
def _save(session):
    # previous contents of save() move here
    # session is explicitly passed to db api calls
    # cascading saves call object._save(session)


so again with EngineFacade rewrite, the @definitelynotremotable system should 
also interact such that if @writer is invoked internally, an error is raised, 
just the same as when @writer is invoked within @reader.
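
As a purely illustrative aside (this is neither Nova's object API nor the
proposed oslo.db facade), the save()/_save() split above boils down to: the
public entry point owns the transaction, and everything below it works against
an explicitly passed session. A minimal sketch with plain SQLAlchemy:

    import sqlalchemy as sa
    from sqlalchemy import orm

    engine = sa.create_engine('sqlite://')   # stand-in for the real database
    Session = orm.sessionmaker(bind=engine)

    class ExampleObject(object):
        def save(self):
            # Top-level entry point: owns the transaction boundary.
            session = Session()
            try:
                result = self._save(session)
                session.commit()
                return result
            except Exception:
                session.rollback()
                raise
            finally:
                session.close()

        def _save(self, session):
            # All DB work happens against the session handed down by save();
            # cascading saves would call other._save(session) so that
            # everything shares one transaction.
            session.execute(sa.text('SELECT 1'))
            return True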


My impression after reading the EngineFacade spec (and the reason I 
supported it, and still support the idea behind it) was that the API 
call referred to in the EngineFacade spec was the *nova-conductor* API 
call, not the *nova-api* API call. We need a way to mark an RPC API call 
on the nova-conductor as involving a set of writer or reader DB calls, 
and that's what I thought we were referring to in that spec. I 
specifically did not think that we were leaving the domain of the 
nova-conductor, because clearly we would be leaving the domain of a 
single RPC call in that case, and in order to do transactional 
containers, we'd need to use two-phase commit, which is definitely not 
something I recommend...


So, in short, for the EngineFacade effort, I believe the @reader and 
@writer decorators should be on the conductor RPC API calls.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] suds-jurko, new in our global-requirements.txt: what is the point?!?

2014-11-27 Thread Thomas Goirand
On 11/28/2014 12:06 AM, Ihar Hrachyshka wrote:
 On 27/11/14 12:09, Thomas Goirand wrote:
 On 11/27/2014 12:31 AM, Donald Stufft wrote:

 On Nov 26, 2014, at 10:34 AM, Thomas Goirand z...@debian.org
 wrote:

 Hi,

 I tried to package suds-jurko. I was first happy to see that
 there was some progress to make things work with Python 3.
 Unfortunately, the reality is that suds-jurko has many issues
 with Python 3. For example, it has many:

 except Exception, e:

 as well as many:

 raise Exception, 'Duplicate key %s found' % k

 This is clearly not Python3 code. I tried quickly to fix some
 of these issues, but as I fixed a few, others appear.

 So I wonder, what is the point of using suds-jurko, which is
 half-baked, and which will conflict with the suds package?

 It looks like it uses 2to3 to become Python 3 compatible.
 
 Outch! That's horrible.
 
 I think it'd be best if someone spent some time on writing real
 code rather than using such a hack as 2to3. Thoughts anyone?
 
 That sounds very subjective. If upstream is able to support multiple
 python versions from the same codebase, then I see no reason for them
 to split the code into multiple branches and introduce additional
 burden syncing fixes between those.
 
 /Ihar

Objectively, using 2to3 sux, and it's much better to fix the code,
rather than using such a band-aid. It is possible to support multiple
version of Python with a single code base. So many projects are able to
do it, I don't see why suds would be any different.
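
For reference, the two problematic constructs quoted above have single-codebase
equivalents that run on both Python 2 and 3 without 2to3:

    # Python 2-only syntax:
    #     except Exception, e:
    #     raise Exception, 'Duplicate key %s found' % k
    # Works on both Python 2 and 3:
    try:
        k = 'foo'
        raise Exception('Duplicate key %s found' % k)
    except Exception as e:
        print(e)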

Cheers,

Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-11-27 Thread Sullivan, Jon Paul
 -Original Message-
 From: Ben Nemec [mailto:openst...@nemebean.com]
 Sent: 26 November 2014 17:03
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [diskimage-builder] Tracing levels for
 scripts (119023)
 
 On 11/25/2014 10:58 PM, Ian Wienand wrote:
  Hi,
 
  My change [1] to enable a consistent tracing mechanism for the many
  scripts diskimage-builder runs during its build seems to have hit a
  stalemate.
 
  I hope we can agree that the current situation is not good.  When
  trying to develop with diskimage-builder, I find myself constantly
  going and fiddling with set -x in various scripts, requiring me
  re-running things needlessly as I try and trace what's happening.
  Conversley some scripts set -x all the time and give output when you
  don't want it.
 
  Now nodepool is using d-i-b more, it would be even nicer to have
  consistency in the tracing so relevant info is captured in the image
  build logs.
 
  The crux of the issue seems to be some disagreement between reviewers
  over having a single trace everything flag or a more fine-grained
  approach, as currently implemented after it was asked for in reviews.
 
  I must be honest, I feel a bit silly calling out essentially a
  four-line patch here.
 
 My objections are documented in the review, but basically boil down to
 the fact that it's not a four line patch, it's a 500+ line patch that
 does essentially the same thing as:
 
 set +e
 set -x
 export SHELLOPTS

I don't think this is true, as there are many more things in SHELLOPTS than 
just xtrace.  I think it is wrong to call the two approaches equivalent.

 
 in disk-image-create.  You do lose set -e in disk-image-create itself on
 debug runs because that's not something we can safely propagate,
 although we could work around that by unsetting it before calling hooks.
  FWIW I've used this method locally and it worked fine.

So this does say that your alternative implementation has a difference from the 
proposed one.  And that the difference has a negative impact.

 
 The only drawback is it doesn't allow the granularity of an if block in
 every script, but I don't personally see that as a particularly useful
 feature anyway.  I would like to hear from someone who requested that
 functionality as to what their use case is and how they would define the
 different debug levels before we merge an intrusive patch that would
 need to be added to every single new script in dib or tripleo going
 forward.

So currently we have boilerplate to be added to all new elements, and that 
boilerplate is:

set -eux
set -o pipefail

This patch would change that boilerplate to:

if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
    set -x
fi
set -eu
set -o pipefail

So it's adding 3 lines.  It doesn't seem onerous, especially as most people 
creating a new element will either copy an existing one or copy/paste the 
header anyway.

I think that giving control over what is effectively debug or non-debug output 
is a desirable feature.

We have a patch that implements that desirable feature.

I don't see a compelling technical reason to reject that patch.

 
 -Ben
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2. 
Registered Number: 361933
 
The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error you should 
delete it from your system immediately and advise the sender.

To any recipient of this message within HP, unless otherwise stated, you should 
consider this message and attachments as HP CONFIDENTIAL.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] REST and Django

2014-11-27 Thread Tripp, Travis S
Hi Richard,

You are right, we should put this out on the main ML, so I'm copying the thread 
out there.  ML: FYI, this started when some impromptu IRC discussions about a 
specific patch led into an impromptu Google Hangout discussion with all the 
people on the thread below.

As I mentioned in the review[1], Thai and I were mainly discussing the possible 
performance implications of network hops from client to horizon server and 
whether or not any aggregation should occur server side.   In other words, some 
views  require several APIs to be queried before any data can displayed and it 
would eliminate some extra network requests from client to server if some of 
the data was first collected on the server side across service APIs.  For 
example, the launch instance wizard will need to collect data from quite a few 
APIs before even the first step is displayed (I’ve listed those out in the 
blueprint [2]).

The flip side to that (as you also pointed out) is that if we keep the API’s 
fine grained then the wizard will be able to optimize in one place the calls 
for data as it is needed. For example, the first step may only need half of the 
API calls. It also could lead to perceived performance increases just due to 
the wizard making a call for different data independently and displaying it as 
soon as it can.

I tend to lean towards your POV and starting with discrete API calls and 
letting the client optimize calls.  If there are performance problems or other 
reasons then doing data aggregation on the server side could be considered at 
that point.  Of course if anybody is able to do some performance testing 
between the two approaches then that could affect the direction taken.

[1] 
https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py
[2] https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign
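
For illustration, a "thin" endpoint in that style is little more than a
pass-through from an AJAX URL to the existing api wrapper. A hypothetical
sketch (not the code under review; it assumes the existing
openstack_dashboard.api.keystone module and Django 1.7's JsonResponse):

    from django.http import JsonResponse
    from django.views import generic

    from openstack_dashboard import api

    class Users(generic.View):
        # Thin proxy: client-side code calls /api/keystone/users/ via AJAX.
        def get(self, request):
            # No server-side aggregation or reshaping; just relay the call
            # and hand the raw list back to the client.
            users = api.keystone.user_list(request)
            return JsonResponse({'items': [u.to_dict() for u in users]})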

-Travis

From: Richard Jones r1chardj0...@gmail.commailto:r1chardj0...@gmail.com
Date: Wednesday, November 26, 2014 at 11:55 PM
To: Travis Tripp travis.tr...@hp.commailto:travis.tr...@hp.com, Thai Q 
Tran/Silicon Valley/IBM tqt...@us.ibm.commailto:tqt...@us.ibm.com, David 
Lyle dkly...@gmail.commailto:dkly...@gmail.com, Maxime Vidori 
maxime.vid...@enovance.commailto:maxime.vid...@enovance.com, Wroblewski, 
Szymon szymon.wroblew...@intel.commailto:szymon.wroblew...@intel.com, 
Wood, Matthew David (HP Cloud - Horizon) 
matt.w...@hp.commailto:matt.w...@hp.com, Chen, Shaoquan 
sean.ch...@hp.commailto:sean.ch...@hp.com, Farina, Matt (HP Cloud) 
matthew.far...@hp.commailto:matthew.far...@hp.com, Cindy Lu/Silicon 
Valley/IBM c...@us.ibm.commailto:c...@us.ibm.com, Justin 
Pomeroy/Rochester/IBM jpom...@us.ibm.commailto:jpom...@us.ibm.com, Neill 
Cox neill@ingenious.com.aumailto:neill@ingenious.com.au
Subject: Re: REST and Django

I'm not sure whether this is the appropriate place to discuss this, or whether 
I should be posting to the list under [Horizon] but I think we need to have a 
clear idea of what goes in the REST API and what goes in the client (angular) 
code.

In my mind, the thinner the REST API the better. Indeed if we can get away with 
proxying requests through without touching any *client code, that would be 
great.

Coding additional logic into the REST API means that a developer would need to 
look in two places, instead of one, to determine what was happening for a 
particular call. If we keep it thin then the API presented to the client 
developer is very, very similar to the API presented by the services. Minimum 
surprise.

Your thoughts?


 Richard


On Wed Nov 26 2014 at 2:40:52 PM Richard Jones 
r1chardj0...@gmail.commailto:r1chardj0...@gmail.com wrote:
Thanks for the great summary, Travis.

I've completed the work I pledged this morning, so now the REST API change set 
has:

- no rest framework dependency
- AJAX scaffolding in openstack_dashboard.api.rest.utils
- code in openstack_dashboard/api/rest/
- renamed the API from identity to keystone to be consistent
- added a sample of testing, mostly for my own sanity to check things were 
working

https://review.openstack.org/#/c/136676


  Richard

On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S 
travis.tr...@hp.commailto:travis.tr...@hp.com wrote:
Hello all,

Great discussion on the REST urls today! I think that we are on track to come 
to a common REST API usage pattern.  To provide quick summary:

We all agreed that going to a straight REST pattern rather than through tables 
was a good idea. We discussed using direct get / post in Django views like what 
Max originally used[1][2] and Thai also started[3] with the identity table 
rework or to go with djangorestframework [5] like what Richard was prototyping 
with[4].

The main things we would use from Django Rest Framework were built in JSON 
serialization (avoid boilerplate), better exception handling, and some request 
wrapping.  However, we all weren’t sure about the need for a full new framework 
just for that. At the end of the 

[openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-11-27 Thread Mohammad Hanif
Folks,

Recently, as part of the L2 gateway thread, there was some discussion on 
BGP/MPLS/Edge VPN and how to bridge any overlay networks to the neutron 
network.  Just to update everyone in the community, Ian and I have separately 
submitted specs which make an attempt to address the cloud edge connectivity.  
Below are the links describing it:

Edge-Id: https://review.openstack.org/#/c/136555/
Edge-VPN: https://review.openstack.org/#/c/136929 .  This is a resubmit of 
https://review.openstack.org/#/c/101043/ for the kilo release under the “Edge 
VPN” title.  “Inter-datacenter connectivity orchestration” was just too long 
and just too generic of a title to continue discussing about :-(

I had discussions with some of you (who are active on this mailing lis) on edge 
VPN connectivity during the Paris summit and also went over it during the 
Neutron lightning talks session.  Please take the time to see if you can review 
the above mentioned specs and provide your valuable comments.

For those of you in US, have a happy Thanksgiving holidays!

Thanks,
—Hanif.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-27 Thread Michael Still
On Fri, Nov 28, 2014 at 2:59 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 11/26/2014 04:24 PM, Mike Bayer wrote:

 Precisely. Why is the RDBMS the thing that is used for
 archival/audit logging? Why not a NoSQL store or a centralized log
 facility? All that would be needed would be for us to standardize
 on the format of the archival record, standardize on the things to
 provide with the archival record (for instance system metadata,
 etc), and then write a simple module that would write an archival
 record to some backend data store.

 Then we could rid ourselves of the awfulness of the shadow tables
 and all of the read_deleted=yes crap.



 +1000 - if we’re really looking to “do this right”, as the original
 message suggested, this would be “right”.  If you don’t need these
 rows in the app (and it would be very nice if you didn’t), dump it
 out to an archive file / non-relational datastore.   As mentioned
 elsewhere, this is entirely acceptable for organizations that are
 “obliged” to store records for auditing purposes.   Nova even already
 has a dictionary format for everything set up with nova objects, so
 dumping these dictionaries out as JSON would be the way to go.


 OK, spec added:

 https://review.openstack.org/137669

At this point I don't think we should block the cells reworking effort
on this spec. I'm happy for people to pursue this, but I think its
unlikely to be work that is completed in kilo. We can transition the
new cells databases at the same time we fix the main database.

Michael

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] is there a way to simulate thousands or millions of compute nodes?

2014-11-27 Thread Michael Still
I would say that supporting millions of compute nodes is not a current
priority for nova... We are actively working on improving support for
thousands of compute nodes, but that is via cells (so each nova deploy
except the top is still in the hundreds of nodes).

I worry about pursuing academic goals such as millions of compute
nodes -- as I think it distracts from solving the problems affecting
our real world users. Do you have a deployment in mind at this scale?

Michael

On Thu, Nov 27, 2014 at 2:29 AM, Gareth academicgar...@gmail.com wrote:
 Hi all,

 Is there a way to simulate thousands or millions of compute nodes? Maybe we
 could have many fake nova-compute services on one physical machine. That way,
 other nova components would be under pressure from thousands of compute
 services, and this could help us find more problems with (fake ;-))
 large-scale deployments.

 I know there is a fake virt driver in nova, but that is not very realistic.
 Maybe we need a fake driver that could sleep 20s (which is close to a real
 boot time) in its 'spawn' function.

 --
 Gareth

 Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
 OpenStack contributor, kun_huang@freenode
 My promise: if you find any spelling or grammar mistakes in my email from
 Mar 1 2013, notify me
 and I'll donate $1 or ¥1 to an open organization you specify.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] the hostname regex pattern fix also changed behaviour :(

2014-11-27 Thread Angus Lees
Context: https://review.openstack.org/#/c/135616

As far as I can make out, the fix for CVE-2014-7821 removed a backslash
that effectively disables the negative look-ahead assertion that verifies
that hostname can't be all-digits. Worse, the new version now rejects
hostnames where a component starts with a digit.

This certainly addressed the immediate issue of that regex was expensive,
but the change in behaviour looks like it was unintended.  Given that we
backported this DoS fix to released versions of neutron, what do we want to
do about it now?

In general this regex is crazy complex for what it verifies.  I can't see
any discussion of where it came from nor precisely what it was intended to
accept/reject when it was introduced in patch 16 of
https://review.openstack.org/#/c/14219.

If we're happy disabling the check for components being all-digits, then a
minimal change to the existing regex that could be backported might be
something like
  
r'(?=^.{1,254}$)(^(?:[a-zA-Z0-9_](?:[a-zA-Z0-9_-]{,61}[a-zA-Z0-9])\.)*(?:[a-zA-Z]{2,})$)'
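
A quick way to sanity-check that candidate (or any replacement) against the
cases discussed here (just a test harness, not a proposed fix):

    import re

    # The minimal backport candidate from above.
    PATTERN = (r'(?=^.{1,254}$)'
               r'(^(?:[a-zA-Z0-9_](?:[a-zA-Z0-9_-]{,61}[a-zA-Z0-9])\.)*'
               r'(?:[a-zA-Z]{2,})$)')

    for name in ('host1.example.org',   # ordinary name
                 '1host.example.org',   # component starting with a digit
                 '123.example.org',     # all-digit component
                 'a' * 300):            # longer than 254 characters
        print('%-40s %s' % (name[:40], bool(re.match(PATTERN, name))))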

Alternatively (and clearly preferable for Kilo), Kevin has a replacement
underway that rewrites this entirely to conform to modern RFCs in
I003cf14d95070707e43e40d55da62e11a28dfa4e
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] is there a way to simulate thousands or millions of compute nodes?

2014-11-27 Thread Sandy Walsh
From: Michael Still [mi...@stillhq.com] Thursday, November 27, 2014 6:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] is there a way to simulate thousands or 
millions of compute nodes?

I would say that supporting millions of compute nodes is not a current
priority for nova... We are actively working on improving support for
thousands of compute nodes, but that is via cells (so each nova deploy
except the top is still in the hundreds of nodes).

ramble on

Agreed, it wouldn't make much sense to simulate this on a single machine. 

That said, if one *was* to simulate this, there are the well known bottlenecks:

1. the API. How much can one node handle with given hardware specs? Which 
operations hit the DB the hardest?
2. the Scheduler. There's your API bottleneck and big load on the DB for Create 
operations. 
3. the Conductor. Shouldn't be too bad, essentially just a proxy. 
4. child-to-global-cell updates. Assuming a two-cell deployment. 
5. the virt driver. YMMV. 
... and that's excluding networking, volumes, etc. 

The virt driver should be load tested independently. So FakeDriver would be 
fine (with some delays added for common operations as Gareth suggests). 
Something like Bees-with-MachineGuns could be used to get a baseline metric for 
the API. Then it comes down to DB performance in the scheduler and conductor 
(for a single cell). Finally, inter-cell loads. Who blows out the queue first?
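
If someone does want to experiment with the FakeDriver-plus-delays idea, a
minimal sketch might be (it assumes nova.virt.fake.FakeDriver is importable;
the 20s figure is just the suggestion quoted above):

    import time

    from nova.virt import fake

    class SlowFakeDriver(fake.FakeDriver):
        # A fake driver that takes roughly as long as a real boot to spawn,
        # so schedulers/conductors see realistic in-flight load.
        def spawn(self, *args, **kwargs):
            time.sleep(20)
            return super(SlowFakeDriver, self).spawn(*args, **kwargs)

Pointing the compute_driver option in nova.conf at a class like this would let
the fake computes hold "building" state for a realistic amount of time.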

All-in-all, I think you'd be better off load testing each piece independently 
on a fixed hardware platform and faking out all the incoming/outgoing services. 
Test the API with fake everything. Test the Scheduler with fake API calls and 
fake compute nodes. Test the conductor with fake compute nodes (not 
FakeDriver). Test the compute node directly. 

Probably all going to come down to the DB and I think there is some good 
performance data around that already?

But I'm just spit-ballin' ... and I agree, not something I could see the Nova 
team taking on in the near term ;)

-S




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] REST and Django

2014-11-27 Thread Richard Jones
On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S travis.tr...@hp.com
wrote:

  Hi Richard,

  You are right, we should put this out on the main ML, so copying thread
 out to there.  ML: FYI that this started after some impromptu IRC
 discussions about a specific patch led into an impromptu google hangout
 discussion with all the people on the thread below.


Thanks Travis!



 As I mentioned in the review[1], Thai and I were mainly discussing the
 possible performance implications of network hops from client to horizon
 server and whether or not any aggregation should occur server side.   In
 other words, some views  require several APIs to be queried before any data
 can displayed and it would eliminate some extra network requests from
 client to server if some of the data was first collected on the server side
 across service APIs.  For example, the launch instance wizard will need to
 collect data from quite a few APIs before even the first step is displayed
 (I’ve listed those out in the blueprint [2]).

  The flip side to that (as you also pointed out) is that if we keep the
 API’s fine grained then the wizard will be able to optimize in one place
 the calls for data as it is needed. For example, the first step may only
 need half of the API calls. It also could lead to perceived performance
 increases just due to the wizard making a call for different data
 independently and displaying it as soon as it can.


Indeed, looking at the current launch wizard code it seems like you
wouldn't need to load all that data for the wizard to be displayed, since
only some subset of it would be necessary to display any given panel of the
wizard.



 I tend to lean towards your POV of starting with discrete API calls and
 letting the client optimize how it calls them.  If there are performance
 problems or other reasons, then data aggregation on the server side could
 be considered at that point.


I'm glad to hear it. I'm a fan of optimising when necessary, and not
beforehand :)



 Of course if anybody is able to do some performance testing between the
 two approaches then that could affect the direction taken.


I would certainly like to see us take some measurements when performance
issues pop up. Optimising without solid metrics is a bad idea :)


Richard



  [1]
 https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py
 [2]
 https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign

  -Travis

   From: Richard Jones r1chardj0...@gmail.com
 Date: Wednesday, November 26, 2014 at 11:55 PM
 To: Travis Tripp travis.tr...@hp.com, Thai Q Tran/Silicon Valley/IBM 
 tqt...@us.ibm.com, David Lyle dkly...@gmail.com, Maxime Vidori 
 maxime.vid...@enovance.com, Wroblewski, Szymon 
 szymon.wroblew...@intel.com, Wood, Matthew David (HP Cloud - Horizon) 
 matt.w...@hp.com, Chen, Shaoquan sean.ch...@hp.com, Farina, Matt
 (HP Cloud) matthew.far...@hp.com, Cindy Lu/Silicon Valley/IBM 
 c...@us.ibm.com, Justin Pomeroy/Rochester/IBM jpom...@us.ibm.com, Neill
 Cox neill@ingenious.com.au
 Subject: Re: REST and Django

  I'm not sure whether this is the appropriate place to discuss this, or
 whether I should be posting to the list under [Horizon], but I think we need
 to have a clear idea of what goes in the REST API and what goes in the
 client (Angular) code.

  In my mind, the thinner the REST API the better. Indeed, if we can get
 away with proxying requests straight through without touching any *client
 code, that would be great.

  Coding additional logic into the REST API means that a developer would
 need to look in two places, instead of one, to determine what was happening
 for a particular call. If we keep it thin then the API presented to the
 client developer is very, very similar to the API presented by the
 services. Minimum surprise.
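
 To illustrate what I mean by thin, a minimal sketch of a proxy view might
 look like the following (the decorator name and the keystone wrapper call
 are assumptions based on the AJAX scaffolding mentioned elsewhere in this
 thread, not the final Horizon code):

from django.views import generic

from openstack_dashboard import api
from openstack_dashboard.api.rest import utils as rest_utils


class Users(generic.View):
    """Proxy straight through to the identity service's user list."""

    url_regex = r'keystone/users/$'

    @rest_utils.ajax()
    def get(self, request):
        # No aggregation, no extra logic: what Keystone returns is what
        # the Angular client sees, so there is one place to look.
        users = api.keystone.user_list(request)
        return {'items': [u.to_dict() for u in users]}

 The client-side wrapper then mirrors the service API almost one-for-one,
 which is the minimum-surprise property described above.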

  Your thoughts?


   Richard


 On Wed Nov 26 2014 at 2:40:52 PM Richard Jones r1chardj0...@gmail.com
 wrote:

 Thanks for the great summary, Travis.

  I've completed the work I pledged this morning, so now the REST API
 change set has:

  - no rest framework dependency
 - AJAX scaffolding in openstack_dashboard.api.rest.utils
 - code in openstack_dashboard/api/rest/
 - renamed the API from identity to keystone to be consistent
 - added a sample of testing, mostly for my own sanity to check things
 were working

  https://review.openstack.org/#/c/136676


Richard

 On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S travis.tr...@hp.com
 wrote:

  Hello all,

  Great discussion on the REST URLs today! I think that we are on track
 to come to a common REST API usage pattern.  To provide a quick summary:

  We all agreed that going to a straight REST pattern, rather than
 through tables, was a good idea. We discussed either using direct GET/POST
 handling in Django views, like what Max originally used [1][2] and Thai
 also started [3] with the identity table rework, or going with
 djangorestframework [5], like what Richard was prototyping with [4].

  The main things we would 

Re: [openstack-dev] [Heat] About the DEFAULT_PAGE_SIZE

2014-11-27 Thread Baohua Yang
Thanks, Qiming!


On Fri, Nov 28, 2014 at 12:14 AM, Qiming Teng teng...@linux.vnet.ibm.com
wrote:

 It looks like these are constants intended for pagination but not yet used.

 (refer to: heatclient/v1/stacks.py: StackManager.list())
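
 As a rough illustration only (hypothetical code, not the current heatclient
 implementation), a constant like that would typically be consumed along
 these lines once pagination is wired up:

DEFAULT_PAGE_SIZE = 20


def paginated_list(fetch_page, page_size=DEFAULT_PAGE_SIZE):
    """Yield items from fetch_page(limit=..., marker=...) page by page."""
    marker = None
    while True:
        page = fetch_page(limit=page_size, marker=marker)
        for item in page:
            yield item
        if len(page) < page_size:
            return  # a short page means we reached the end
        marker = page[-1]['id']  # assumes each item carries an 'id'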

 Regards,
   Qiming

 On Thu, Nov 27, 2014 at 03:56:07PM +0800, Baohua Yang wrote:
  Hi, all
    Just noticed there are several DEFAULT_PAGE_SIZE=20 lines inside the
   latest python-heatclient package, e.g.,
 
  $ grep DEFAULT_PAGE_SIZE . -r
  ./heatclient/v1/actions.py:DEFAULT_PAGE_SIZE = 20
  ./heatclient/v1/events.py:DEFAULT_PAGE_SIZE = 20
  ./heatclient/v1/resources.py:DEFAULT_PAGE_SIZE = 20
 
What is that for? I checked the source code, but nothing uses it.
From Google, it seems to be a Java thing?
 
Thanks!
 
  --
  Best wishes!
  Baohua





-- 
Best wishes!
Baohua
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] CDH plugin integration test in Mirantis CI

2014-11-27 Thread Zhidong Yu
Hi Sergey,

May I know why the CDH integration test cases are 'non-voting' in the
Mirantis CI system?

We have a plan to harden the integration tests and also make them voting.

Thanks, Zhidong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Rework auto-scaling support in Heat

2014-11-27 Thread Qiming Teng
Dear all,

Auto-scaling is an important feature supported by Heat and needed by
many users we have talked to.  There are two flavors of AutoScalingGroup
resources in Heat today: the AWS-based one and the Heat-native one.  As
more requests come in, the team has proposed to split auto-scaling
support out into a separate service so that people who are interested in
it can jump onto it.  At the same time, the Heat engine (especially the
resource type code) would be drastically simplified.  The separated AS
service could move forward more rapidly and efficiently.

This work was proposed a while ago with the following wiki and
blueprints (mostly approved during the Havana cycle), but progress has
been slow.  A group of developers has now volunteered to take over this
work and move it forward.

wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling
BPs:
 - https://blueprints.launchpad.net/heat/+spec/as-lib-db
 - https://blueprints.launchpad.net/heat/+spec/as-lib
 - https://blueprints.launchpad.net/heat/+spec/as-engine-db
 - https://blueprints.launchpad.net/heat/+spec/as-engine
 - https://blueprints.launchpad.net/heat/+spec/autoscaling-api
 - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client
 - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource
 - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource
 - https://blueprints.launchpad.net/heat/+spec/as-api-webhook-trigger-resource
 - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources

Once this whole thing lands, the Heat engine will talk to the AS engine
in terms of ResourceGroups, ScalingPolicies and Webhooks.  The Heat
engine won't care how auto-scaling is implemented, although the AS
engine may in turn ask Heat to create/update stacks for scaling
purposes.  In theory, the AS engine can create/destroy resources by
directly invoking other OpenStack services.  This new AutoScaling
service may eventually have its own DB, engine, API and API client.  We
can definitely aim high while working hard on real code.
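
Purely as an illustration of that interface (none of these method names,
topics or versions are settled; this is an assumption-laden sketch using an
oslo.messaging-style RPC client), the Heat side could end up looking roughly
like:

import oslo_messaging as messaging


class AutoScalingClient(object):
    """Hypothetical RPC client the Heat resource plugins would use."""

    RPC_API_VERSION = '1.0'

    def __init__(self, transport):
        target = messaging.Target(topic='autoscaling-engine',
                                  version=self.RPC_API_VERSION)
        self._client = messaging.RPCClient(transport, target)

    def create_scaling_group(self, ctxt, group_def):
        return self._client.call(ctxt, 'create_scaling_group',
                                 group_def=group_def)

    def attach_scaling_policy(self, ctxt, group_id, policy_def):
        return self._client.call(ctxt, 'attach_scaling_policy',
                                 group_id=group_id, policy_def=policy_def)

    def trigger_webhook(self, ctxt, webhook_id):
        # Webhook delivery would likely be asynchronous, hence cast().
        return self._client.cast(ctxt, 'trigger_webhook',
                                 webhook_id=webhook_id)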

After reviewing the BPs/wiki and some discussion, we have come up with
two options for pushing this forward.  I'm writing this to solicit ideas
and comments from the community.

Option A: Top-Down Quick Split
--

This means we would follow the roadmap shown below, which is still very
rough and not 100% accurate:

  1) Get the separated REST service in place and working
  2) Switch Heat resources to use the new REST service

Pros:
  - Separate code base means faster review/commit cycle
  - Less code churn in Heat
Cons:
  - A new service needs to be installed/configured/launched
  - Needs commitments from dedicated, experienced developers from the very
beginning

Option B: Bottom-Up Slow Growth
---

The roadmap is more conservative, with many (yes, many) incremental
patches to migrate things carefully.

  1) Separate some of the autoscaling logic into libraries in Heat
  2) Augment heat-engine with new AS RPCs
  3) Switch AS related resource types to use the new RPCs
  4) Add new REST service that also talks to the same RPC
 (create new GIT repo, API endpoint and client lib...) 

Pros:
  - Less risk of breaking user land, since each revision is well tested
  - Smoother transition for users in terms of upgrades

Cons:
  - A lot of churn within Heat code base, which means long review cycles
  - Still need commitments from cores to supervise the whole process

There could be option C, D... but the two above are what we came up with
during the discussion.

Another important thing we talked about is how to keep the discussion
open.  The OpenStack wiki seems a good place to document settled designs,
but not for interactive discussions.  We should probably leverage
etherpad and the mailing list as we move forward.  Suggestions on this
are also welcome.

Thanks.

Regards,
 Qiming


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev