Re: [openstack-dev] [cinder] Cinder with remote LVM proposal

2017-02-01 Thread Philipp Marek
Hi everybody,

> > Hi, I'd like to know if it is possible to use openstack-cinder-volume with
> > a remote LVM. This could be a new feature proposal if the idea is good.
> > More precisely, I'm thinking of a solution where openstack-cinder-volume runs
> > on a dedicated node and LVM on another node (called "storage node").  On
> > the storage node I need to have a volume group (normally named
> > cinder-volumes) and the targetcli package, so the iscsi_ip_address in
> > cinder.conf should be an address associated with the storage node.
> > Advantages of this solution are: 1) When I need to upgrade OpenStack I can
> > leave out the storage node from the process (because it has only LVM and
> > targetcli or another daemon used for iSCSI targets). 2) Downtime of the
> > cinder-volume node cannot cause problems for the iSCSI part exposed to
> > VMs. 3) Support for the open source world in Cinder: LVM is the common
> > solution for Cinder in low-budget environments, and performance is good if
> > the storage node is powerful enough
> >
> > In my idea, the "interface" between openstack-cinder-volume and lvm can be
> > SSH. Basically we need to create/remove/manage logical volumes on a remote
> > node and the same for the iscsi targets
> >
> > Please, let me know if this can be a valid solution.
> What you are proposing is almost like creating an LVM storage box. I
> haven't seen any real benefit from the advantages you listed. For 1), the
> same problems you can have upgrading the services within the same node will
> happen if the LVM services are not on the same host. For 2), now you have
> 2 nodes to manage instead of 1, which doubles the chances of having
> problems. And for 3), I really didn't get the advantage related to the
> solution you are proposing.
> 
> If you have real deployments cases where this could help (or if there are
> other people interested), please list it here so people can see more
> concrete benefits of using this solution.

please let me suggest looking at the DRBD Cinder driver that's already 
upstream.
Basically, this allows you to use one or more storage boxes and to export 
their storage (via LV and DRBD) to the compute nodes.

If you're using the DRBD protocol instead of iSCSI (and configure the DRBD 
Cinder driver to store 2 copies of the data), you'll even benefit from 
redundancy - you can maintain one of the storage nodes while the other is 
serving data, and then (as soon as the data is synchronized again) do 
your maintenance on the other storage node.


See here for more details:
http://www.drbd.org/en/doc/users-guide-90/ch-openstack
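For illustration, roughly what enabling it in cinder.conf looks like - the
backend section name is made up, and the driver class path should be checked
against the in-tree driver before use:

  [DEFAULT]
  enabled_backends = drbd-1

  [drbd-1]
  # illustrative only - verify the exact class name in cinder/volume/drivers/
  volume_driver = cinder.volume.drivers.drbdmanagedrv.DrbdManageDriver
  volume_backend_name = drbd-1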



Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-08 Thread Philipp Marek
> Ok, to turn the question around, we (the cinder team) have recognised a
> definite and strong need to have somewhere for vendors to share patches on
> versions of Cinder older than the stable branch policy allows.
> 
> Given this need, what are our options?
> 
> 1. We could do all this outside Openstack infrastructure. There are
> significant downsides to doing so from organisational, maintenance, cost
> etc points of view. Also means that the place vendors go for these patches
> is not obvious, and the process for getting patches in is not standard.
And if some people have 2 or more different storage backends, they might need to 
run separate Cinder processes, because the vendors' stable trees diverge and 
cannot easily be merged again.

> 2. We could have something not named 'stable' that has looser rules than
> stable branches, maybe just pep8 / unit / cinder in-tree tests. No
> devstack.
+1 from me.
Name it "long-term-driver-only-updates" or so ;)


> 3. We go with the Neutron model and take drivers out of tree. This is not
> something the cinder core team are in favour of - we see significant value
> in the code review that drivers currently get - the code quality
> improvements between when a driver is submitted and when it is merged are
> sometimes very significant. Also, taking the code out of tree makes it
> difficult to get all the drivers checked out in one place to analyse e.g.
> how a certain driver call is implemented across all the drivers, when
> reasoning or making changes to core code.
-1

> Given we've identified a clear need, and have repeated rejected one
> solution (take drivers out of tree - it has been discussed at every summit
> and midcycle for 3+ cycles), what positive suggestions can people make?
Number 2 - a centralized branch (i.e. on openstack.org) that *only* takes 
driver updates (and doesn't need CI).
If a driver is broken, too bad for that vendor - it must have been the last 
driver update, as the Cinder code didn't change since EOL...




Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Philipp Marek
> I want to propose
> we officially make a change to our stable policy to call out that
> drivers bugfixes (NOT new driver features) be allowed at any time.
Emphatically +1 from me.


With the small addendum that "bugfixes" should include compatibility
changes for libraries used.


Thanks for bringing that up!



Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-02 Thread Philipp Marek
Hi Preston,

 
> The benchmark scripts are in:
> 
>   https://github.com/pbannister/openstack-bootstrap
in case that might help, here are a few notes and hints about doing 
benchmarks for the DRBD block device driver:

http://blogs.linbit.com/p/897/benchmarking-drbd/

Perhaps there's something interesting for you.


> Found that if I repeatedly scanned the same 8GB volume from the physical
> host (with 1/4TB of memory), the entire volume was cached in (host) memory
> (very fast scan times).
If the iSCSI target (or QEMU, for direct access) is set up to use buffer 
cache, yes.
Whether you really want that is up for discussion - it might be much more 
beneficial to move that RAM from the Hypervisor to the VM, which should 
then be able to do more efficient caching of the filesystem contents that 
it should operate on.
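Side note: if you want numbers that aren't helped by the host's page cache,
dropping the cache between runs (on the host, as root) makes them comparable:

  sync
  echo 3 > /proc/sys/vm/drop_caches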


> Scanning the same volume from within the instance still gets the same
> ~450MB/s that I saw before. 
Hmmm, with iSCSI in between that could be the TCP memcpy limitation.

> The "iostat" numbers from the instance show ~44 %iowait, and ~50 %idle.
> (Which to my reading might explain the ~50% loss of performance.) Why so
> much idle/latency?
> 
> The in-instance "dd" CPU use is ~12%. (Not very interesting.)
Because your "dd" test case will be single-threaded, with an I/O depth of 1.
And that means synchronous access - each I/O has to wait for the preceding 
one to finish...
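For comparison, a quick sketch (assuming fio is available in the guest; the
device path is just an example) of driving the same volume with more requests
in flight:

  # dd: synchronous, effectively one outstanding request at a time
  dd if=/dev/vdb of=/dev/null bs=1M

  # fio: same sequential read, but with 32 requests in flight
  fio --name=seqread --filename=/dev/vdb --rw=read --bs=1M \
      --direct=1 --ioengine=libaio --iodepth=32

If throughput scales with the queue depth, the limit is per-request latency
rather than raw bandwidth.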


> Not sure from where the (apparent) latency comes. The host iSCSI target?
> The QEMU iSCSI initiator? Onwards...
Thread scheduling, inter-CPU cache thrashing (if the iSCSI target is on 
a different physical CPU package/socket than the VM), ...


Benchmarking is a dark art.




[openstack-dev] [ThirdParty][CI] [patch] Status page at http://ci-watch.tintri.com/project

2015-12-21 Thread Philipp Marek
Hi all,

I quite like the page at http://ci-watch.tintri.com/project - it gives 
a very quick overview about the failures one should look into, and which to 
ignore ;)


Please let me state before anything else that I don't know any of the 
restrictions that may have led to the current design - it's very likely 
that I'm just missing a few points, and that some or all of my comments 
below are invalid anyway. As always, take enough salt!


One thing about that page that is bothering me is the performance... my 
(current) Firefox asks me several times whether I'd like to stop the JS,
or whether it should be allowed to continue.

With this patch (and a local exported copy of the page) I don't get asked 
about that any more; it seems to give me a speedup of ~200, as no 
intermediate lists need to be built and filtered any more:

$ diff -u verified.js.orig verified.js
--- verified.js.orig2015-12-21 15:03:45.614529924 +0100
+++ verified.js 2015-12-21 15:03:36.114432601 +0100
@@ -33,9 +33,9 @@
 $(document).ready(function () {
   $("colgroup").each(function (i, elem) {
 if ($(elem).hasClass("verified-1")) {
-  $("#results").find("td").filter(":nth-child(" + (i + 1) + 
")").addClass("verified-1");
+  $("#results td:nth-child(" + (i + 1) + ")").addClass("verified-1");
 } else if ($(elem).hasClass("verified1")) {
-  $("#results").find("td").filter(":nth-child(" + (i + 1) + 
")").addClass("verified1");
+  $("#results td:nth-child(" + (i + 1) + ")").addClass("verified1");
 }
   });
   $("#verified1-button").on("click", toggle_verified_plus);


Furthermore, I'm wondering whether the current markup

    [HTML snippet lost in the list archive]

couldn't be simplified to

    [HTML snippet lost in the list archive]

with the rest being done via CSS? Perhaps an inner element would be needed within 
the cell to get the vertical size right, but everything else should be 
possible via CSS, I believe.

This change should reduce the size of the generated HTML by some 50% or 
so, too.



Thanks for listening - if you disagree, please ignore and continue working 
on something else ;)


Regards,

Phil




[openstack-dev] [Cinder][DRBD] questions about pep8/flake8 etc.

2015-12-21 Thread Philipp Marek
Hi everybody,

in the current patch https://review.openstack.org/#/c/259973/1 the test 
script needs to use a lot of the constant definitions of the backend driver 
it's using (DRBDmanage).

As the DRBDmanage libraries need not be installed on the CI nodes, I'm 
providing a minimal set of upstream files, accumulated in a separate directory 
- they get imported and "fixed up" to the expected location, so that the 
driver under test runs as if DRBDmanage were installed.


My problem is now that the upstream project doesn't follow all the pep8 
conventions the way Openstack does; so the CI run 

http://logs.openstack.org/73/259973/1/check/gate-cinder-pep8/5032b16/console.html

gives a lot of messages like "E221 multiple spaces before operator" and 
similar. (It even crashes during AST parsing ;)


So, I can see these options now:

  * Make pep8 ignore these files - they're only used by one test script,
and are never used in production anyway.
+ Simple
+ New upstream files can simply be dropped in as needed
- bad example?
  
  * Reformat the files to conform to pep8
- some work for every new version that needs to be incorporated
- can't be compared for equality with upstream any more
- might result in mismatches later on, ie. production code uses
  different values from test code

  * Throw upstream files away, and do "manual" fakes
- A lot of work
- Work needed for every new needed constant
- lots of duplicated code
- might result in mismatches later on, ie. production code uses
  different values from test code
+ whole checkout still "clean" for pep8

  * Require DRBDmanage to be installed
+ uses same values as upstream and production
- Need to get it upstream into PyPi
- Meaning delay
- delay for every new release of DRBDmanage
- Might not even be compatible with every used distribution/CI
  out there


I would prefer the first option - make pep8 ignore these files.
But I'm only a small player here, what's the opinion of the Cinder cores?
Would that be acceptable?
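For reference, the first option could be as small as an exclude entry in the
flake8 configuration - a sketch only, the directory name is made up:

  # hypothetical tox.ini fragment
  [flake8]
  exclude = cinder/tests/unit/fake_drbdmanage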


Regards,

Phil



Re: [openstack-dev] [cinder]Do we have project scope for cinder?

2015-11-29 Thread Philipp Marek
Hi Hao Wang,

> In fact, there is a reason that I ask this question. Recently I have been
> wondering whether Cinder should provide the ability of Disaster
> Recovery to storage resources, like volumes. I mean, we have volume
> replication v1, but for DR, especially DR between two independent
> OpenStack sites (production and DR site), I feel we still need more
> features to support it, for example consistency groups for replication,
> etc.
I'm currently developing consistency groups for the DRBD volume driver.


The only way it can do replication *is* for a consistency group as a whole, 
so that feature will be available soonish - if you don't care that 
Cinder/Openstack knows nothing about the ongoing replication.


> I'm not sure if those features belong in Cinder or some new
> project for DR.
IMO this belongs to Cinder.




Re: [openstack-dev] [cinder][glance]Upload encrypted volumes to images

2015-11-22 Thread Philipp Marek
> About uploading encrypted volumes to images, there are three options:
> 1. Glance only keeps non-encrypted images. So when uploading an encrypted
>    volume to an image, Cinder decrypts the data and uploads it.
> 2. Glance maintains encrypted images. Cinder just uploads the encrypted
>    data to the image.
> 3. Just prevent uploading encrypted volumes to images.
>
> Option 1: No changes needed in Glance. But it may not be safe, as we decrypt
> the data and upload it as an image.
> Option 2: This imports encryption into Glance, which needs to manage the
> encryption metadata.
>
> Please add more if you have other suggestions. Which one do you think is
> preferred?
Well, IMO only option 1 is useful.

Option 2 means that the original volume, the image, and all derived volumes 
will share the same key, right?
That's not good. (Originally: "unacceptable")




Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-15 Thread Philipp Marek
> > I'm currently trying to work around an issue where activating LVM
> > snapshots created through cinder takes potentially a long time. 
[[ thick LVM snapshot performance problem ]]
> > Given the above, is there any reason why we couldn't make thin
> > provisioning the default?
> 
> My intention is to move toward thin-provisioned LVM as the default -- it
> is definitely better suited to our use of LVM.
...
> The other issue preventing using thin by default is that we default the
> max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
> the reference implementation, since it means that people who deploy
> Cinder LVM on smaller storage configurations can easily fill up their
> volume group and have things grind to a halt.  I think we want something
> closer to the semantics of thick LVM for the default case.
The DRBDmanage backend has to deal with the same problem.

We decided to provide 3 different storage strategies:

 * Use Thick LVs - with the known performance implications when using
   snapshots.
 * Use one Thin Pool for the volumes - this uses the available space
   "optimally", but gives the oversubscription problem mentioned above.
 * Use multiple Thin Pools, one for each volume.
   This provides efficient snapshots *and* space reservation for each
   volume.
   
The last strategy is no panacea, though - something needs to check the free 
space in the pool, because the snapshots can still fill it up...
without impacting the other volumes, at least.
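For illustration, roughly what the three strategies look like at the plain
LVM level (volume group name and sizes are made up):

  # thick LV: space fully reserved, but snapshots are expensive
  lvcreate -L 10G -n vol1 cinder-volumes

  # one shared thin pool: efficient snapshots, oversubscription risk
  lvcreate -L 100G -T cinder-volumes/pool
  lvcreate -V 10G -T cinder-volumes/pool -n vol1

  # one thin pool per volume: efficient snapshots *and* per-volume reservation
  lvcreate -L 12G -T cinder-volumes/pool_vol1
  lvcreate -V 10G -T cinder-volumes/pool_vol1 -n vol1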





Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-05 Thread Philipp Marek

>> Well, is it already decided that Pacemaker would be chosen to provide HA in
>> Openstack? There's been a talk "Pacemaker: the PID 1 of Openstack", IIRC.
>>
>> I know that Pacemaker's been pushed aside in an earlier ML post, but IMO
>> there's already been *so much* done for HA in Pacemaker that Openstack
>> should just use it.
>>
>> All HA nodes need to participate in a Pacemaker cluster - and if one node
>> loses connection, all services will get stopped automatically (by
>> Pacemaker) - or the node gets fenced.
>>
>>
>> No need to invent some sloppy scripts to do exactly the tasks (badly!) that
>> the Linux HA Stack has been providing for quite a few years.
> So just a piece of information, but yahoo (the company I work for, with VMs
> in the tens of thousands, baremetal in much more than that...) hasn't
> used Pacemaker, and in all honesty this is the first project (OpenStack)
> that I have heard of that needs such a solution. I feel that we really should
> be building our services better so that they can be A-A vs having to depend
> on another piece of software to get around our 'sloppiness' (for lack of a
> better word).
>
> Nothing against Pacemaker personally... IMHO it just doesn't feel like we
> are doing this right if we need such a product in the first place.
Well, Pacemaker is *the* Linux HA Stack.

So, before trying to achieve similar goals by self-written scripts (and 
having to re-discover all the gotchas involved), it would be much better to 
learn from previous experiences - even if they are not one's own.

Pacemaker has eg. the concept of clones[1] - these define services that run 
multiple instances within a cluster. And behold! the instances get some 
Pacemaker-internal unique id[2], which can be used to do sharding.
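As an illustration only (crmsh syntax; the resource names are made up, and
whether cinder-volume can really just be cloned like this is exactly what's
being discussed here):

  primitive p_cinder-volume systemd:openstack-cinder-volume \
      op monitor interval=30s
  clone cl_cinder-volume p_cinder-volume \
      meta clone-max=3 clone-node-max=1 globally-unique=true

Each instance of such a clone sees its own instance number via
OCF_RESKEY_CRM_meta_clone (see [2] below), which is what could serve as a
sharding key.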


Yes, that still means that upon service or node crash the failed instance 
has to be started on some other node; but as that'll typically be up and 
running already, the startup time should be in the range of seconds.


We'd instantly get
 * a supervisor to start/stop/restart/fence/monitor the service(s)
 * node/service failure detection
 * only small changes needed in the services
 * and all that in a tested software that's available in all distributions,
   and that already has its own testsuite...


If we decide that this solution won't fulfill all our expectations, fine -
let's use something else.

But I don't think it makes *any* sense to try to redo some (existing) 
High-Availability code in some quickly written scripts, just because it 
looks easy - there are quite a few traps for the unwary.


Ad 1: 
http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-clone.html
Ad 2: OCF_RESKEY_CRM_meta_clone; that's not guaranteed to be an unbroken 
sequence, though.




Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-05 Thread Philipp Marek
> [...]
>> Pacemaker is *the* Linux HA Stack.
> [...]
>
> Can you expand on this assertion? It doesn't look to me like it's
> part of the Linux source tree and I see strong evidence to suggest
> it's released and distributed completely separately from the kernel.
If you read Linux as GNU/Linux or Linux platform, instead of
Linux kernel, it's what I meant.


> Statements like this one make the rest of your messages look even
> more like a marketing campaign, so I'd love to understand what you
> really mean (I seriously doubt you're campaigning for this specific
> piece of software, after all, but that's the way it comes across).
Sorry for not being entirely clear.


I thought that my message was good enough, as the OpenStack documentation 
itself already talks about Pacemaker:

  http://docs.openstack.org/high-availability-guide/content/ch-pacemaker.html

  OpenStack infrastructure high availability relies on the Pacemaker 
   cluster stack, the state-of-the-art high availability and load 
   balancing stack for the Linux platform. Pacemaker is storage and 
   application-agnostic, and is in no way specific to OpenStack.


Expanding on what we have, what GNU/Linux already has, and what is 
being used for Linux (platform) HA, I wanted to point out that most of the 
parts for _one_ possible solution already exist.


Whether we want to go *that* route is yet to be decided, of course.



Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-05 Thread Philipp Marek
>>> [...]
>>>> Pacemaker is *the* Linux HA Stack.
>>> [...]
>>>
>>> Can you expand on this assertion? It doesn't look to me like it's
>>> part of the Linux source tree and I see strong evidence to suggest
>>> it's released and distributed completely separately from the kernel.
>>
>> If you read Linux as GNU/Linux or Linux platform, instead of
>> Linux kernel, it's what I meant.
> [...]
>
> Okay, that makes slightly more sense. So you're implying that
> Pacemaker is the only HA stack available for Linux-based platforms,
> or that it's the most popular, or... I guess I'm mostly thrown by
> your use of the definite article the (which you emphasized, so it
> seems like you must mean there are effectively no others?).
Well, SUSE and Redhat (7) use Pacemaker by default, Debian/Ubuntu have it 
(along with others)...

That gives it quite some market share, wouldn't you think?


Yes, I guess the most popular meaning is a good match here.



Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-05 Thread Philipp Marek
>> Well, SUSE and Redhat (7) use Pacemaker by default, Debian/Ubuntu have it
>> (along with others)...
>>
>> That gives it quite some market share, wouldn't you think?
>>
>> Yes, I guess the most popular meaning is a good match here.
>
> I see, so in the same way that nano is *the* Linux text editor
> (Debian/Ubuntu configure it as the default, SUSE and Redhat have it
> packaged).
Along with quite a few alternatives.

How many cluster stack alternatives can you see in SUSE?
How many cluster stack alternatives are available in _every_ major 
distribution?


> Popularity alone doesn't seem like a great criterion for
> making these sorts of technology choices.
Popularity _alone_ is not the sole criterion, right.

But to write something new just because of NIH is the wrong approach, IMO.




[[ I'm going to stop arguing now. ]]



Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-04 Thread Philipp Marek
> If we end up using a DLM then we have to detect when the connection to
> the DLM is lost on a node and stop all ongoing operations to prevent
> data corruption.
>
> It may not be trivial to do, but we will have to do it in any solution
> we use, even on my last proposal that only uses the DB in Volume Manager
> we would still need to stop all operations if we lose connection to the
> DB.

Well, is it already decided that Pacemaker would be chosen to provide HA in 
Openstack? There's been a talk "Pacemaker: the PID 1 of Openstack", IIRC.

I know that Pacemaker's been pushed aside in an earlier ML post, but IMO 
there's already been *so much* done for HA in Pacemaker that Openstack 
should just use it.

All HA nodes need to participate in a Pacemaker cluster - and if one node 
loses connection, all services will get stopped automatically (by 
Pacemaker) - or the node gets fenced.


No need to invent some sloppy scripts to do exactly the tasks (badly!) that 
the Linux HA Stack has been providing for quite a few years.


Yes, Pacemaker needs learning - but not more than any other involved 
project, and there are already quite a few of those here, which any operator 
or developer has to know already.


(BTW, LINBIT sells training for the Linux HA Cluster Stack - and yes,
 I work for them ;)



Re: [openstack-dev] [infra] [cinder] CI via infra for the DRBD Cinder driver

2015-06-22 Thread Philipp Marek
> this is a reflection of the discussion I just had on #openstack-infra; it's
> about (re-)using the central CI infrastructure for our Open-Source DRBD
> driver too.
As this is done now, I'd like to discuss a few points.


First of all -- thanks to everybody involved; if you happen to find someone 
(Europe is a bad timezone for Openstack, I guess), people on IRC are *very*
helpful.


Secondly,
   «Yup, so the two things I will start with is that multinode testing is 
still really rudimentary, we only just got tempest sort of working with 
it. ... »
I didn't follow all the conversations on the ML - is there an update for 
multi-node testing?
What's the policy? Good idea, but not needed or A must-have?

Especially for distributed/replicated storage drivers (like DRBD ;) I guess 
it would make sense to have some.


Third point, mostly of interest for other driver maintainers: we're 
following the various CI runs, and sporadically check failures to see 
whether we can improve our driver or the devstack installation script.
For that we use Kibana on http://logstash.openstack.org, with a filter 
string of

project:openstack/cinder AND build_status:FAILURE AND
build_name:check-tempest-dsvm-full-drbd-devstack-nv AND Detailed logs

This shows the initial line of the failed devstack runs (the one including 
the line with the log URL), which we can then paste into a script that 
fetches the (for us) relevant files for further analysis.
The newest failures are already at the top.

Another nice feature is using the existing Graphite installation to get 
a visual display of the success/failure ratio over time[1]; here we can see 
the impact of individual changes. E.g. on June 19th we diagnosed (and fixed) 
a udev/blkid race with the kernel-attach; since then the number of 
failures has clearly gone down. I just pushed another patch that should 
bring us even more into the green area ;)

One more idea is to watch http://status.openstack.org/zuul/? with a filter 
string of Cinder, and to open reviews that will finish soon, so that 
current behaviour can be easily checked.


I hope that this helps other people a bit.


Regards,

Phil



Ad 1:
http://graphite.openstack.org/render/?width=600&height=344&_salt=1434709688.361&from=-7days&title=DRBD%20Cinder%2FDevstack%20stats&colorList=red%2Cgreen%2Cblue&target=stats_counts.zuul.pipeline.check.job.check-tempest-dsvm-full-drbd-devstack-nv.FAILURE&target=stats_counts.zuul.pipeline.check.job.check-tempest-dsvm-full-drbd-devstack-nv.SUCCESS

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [Manila] Ask for help on supportting the 3-rd party CI for HDFS driver

2015-06-12 Thread Philipp Marek
>> It doesn't make sense to require people to learn about things they
>> will never use again - and the amount of time spent answering the
>> questions, diagnosing problems and so on is quite a bit higher
>> than doing it simply right the first time.
>
> This is, I think, also a common misconception. The people who write
> these jobs to run in our CI need to stick around or induct
> successors to help maintain them and avoid bitrot as our systems
> constantly change and evolve. I know the same goes for the drivers
> themselves... if people don't keep them current with the OpenStack
> software into which they're integrating, support will quickly be
> dropped due to quality control concerns.
Maintaining and supporting the drivers isn't the problem - that has to 
happen more or less regularly, if only to add features and/or to stay 
compatible with the rest of the Openstack code.

It's also much nearer to the contributor's normal work, so not that likely 
to be forgotten - whereas the job definitions are looked at (perhaps) never 
again, and so the knowledge about that will be lost (from the contributor's 
brain, that is).


To repeat the important point: Thanks to all people that are helping on 
IRC, this is one of the most important assets Openstack has today!


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [Manila] Ask for help on supportting the 3-rd party CI for HDFS driver

2015-06-10 Thread Philipp Marek
Hi all,

>> Thanks Jeremy.  I assume Chen could follow this example to add a
>> job for the HDFS driver?
>>
>> https://review.openstack.org/#/c/188744/
>
> That's a fine short-form answer. The longer answer is to solicit
> input from some of the people who have done similar work, so that
> mistakes can be learned from rather than repeated.
As requested, here's some input from me.

Oh well, where to start? Perhaps at the beginning ...

Openstack has quite a lot of documentation - in Wiki pages, code examples, 
and so on. Navigating all of that (well, even finding the relevant pieces) 
is not that easy - and some parts are wrong in details or contradict each 
other. Unavoidable for a project of this size, but still annoying for 
newcomers.

What I'd very much have liked to see was some big-picture overview of how the 
processes and configurations work[1]. Some short description of what the files 
in project-config mean, what they are used for, etc., to get a basic 
understanding of how that works _without_ needing to learn all the auxiliary 
tools (of which there are quite a few - Zuul, Jenkins, Puppet, ...).

What *I* did get wrong (and this is an important point to learn from!) is that 
even things like the devstack plugin name matter. This isn't written down 
anywhere, but if you get it wrong then the various jobs won't match the 
generated check & gate scriptlet names anymore, and then there are some 
more hoops to jump through...[2]


I still stand by my opinion (as voiced in Vancouver) that for such one-off 
things (that contributors are not likely to repeat over and over again) it 
might make sense to have -infra simply *do* them[3].


But let's get back to the topic - what I have learned, or rather what I did 
wrong, so that others can learn.

The quoted change number above is not enough; I've pushed a few updates 
already, so to get a more complete example it might be better to take 
a checkout and to filter by my changes:

# git log --committer philipp.ma...@linbit.com

Looking at the *combined* diff of that might be a first step. Or perhaps 
not, because of the wrong name...


Another idea that I can pass on: as soon as you've had the first CI 
run, navigate the logfile directory to fetch the local.conf and 
tempest.conf files (which will be compressed), and look at them. Some 
small errors might be diagnosed from there[4], without needing to spend 
time looking at what goes wrong and why.


Anything else? Hmmm... yeah, the last point I can mention is that people on 
IRC (-infra, for that topic) are very helpful. If you're unlucky, they're 
stressed by other problems and won't have that much time; but generally, 
you'll receive answers quite fast.
(Unless you're in the wrong timezone. AJaeger is very helpful in the CET 
zones - but he's alone [I guess], and so it's a game of luck again. Having 
some more -infra cores distributed around the globe would be nice.)

Well - you have to know to ask the right questions, right. While sometimes 
you'll get extensive answers, in busier times you might get the *exact* 
answer to your question - whether it's helpful in the long term, or not.


Overall, it's been both more awful than expected, and at the same time more 
people help than with other Open-Source projects; I guess I'll have to stop 
here, to avoid starting a rant about another topic.


I hope that helps - at least a little bit.


Regards,

Phil


==

Ad 1: In the last 8 weeks (since starting the CI in -infra) I got some 
ideas; I'm not sure how much is correct, but if there's interest, I can try 
to put them down into a wiki (please point me to some suitable location).

Ad 2: Another thing that is not written down (but could be guessed) is that 
some lists have to be kept in alphabetical order...

Ad 3: Eg. for free if it's an Open Source project, and for a small fee like 
$200 or so for proprietary ones - that's still some major savings for the 
company, compared to spending tens of man-hours trying to get that right.
  It doesn't make sense to require people to learn about things they will
never use again - and the amount of time spent answering the questions, 
diagnosing problems and so on is quite a bit higher than doing it simply 
right the first time.
And if it's *that* often needed, why not write a small script that, given 
a name, does the needed changes, so that only a commit & review is needed?

Ad 4: E.g., the Cinder multi-backend tests failed for my driver.
After looking at tempest.conf I figured out (thanks, jgriffith, for telling 
me that too) that they were *wrongly* activated - because I had 
  -  CINDER_ENABLED_BACKENDS=,drbd:drbdmanage
  +  CINDER_ENABLED_BACKENDS=drbd:drbdmanage
A small comma in the setup definition - and several tests fail, instead of 
them getting skipped...

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-19 Thread Philipp Marek
> So others have/will chime in here... one thing I think is kinda missing in
> the statement above is the "single host", that's actually the whole point
> of Ceph and other vendor driven clustered storage technologies out there.
> There's a ton to choose from at this point, open source as well as
> proprietary and a lot of them are really really good.  This is also very
> much what DRBD aims to solve for you.  You're not tying data access to a
> single host/node, that's kinda the whole point.
Current status of the DRBD driver is: you can have redundant (replicated) 
storage 
in Cinder, but the connection to Nova is still done via iSCSI.

> Granted in the case of DRBD we've still got a ways to go and something we
> haven't even scratched the surface on much is virtual/shared IP's for
> targets but we're getting there albeit slowly (there are folks who are
> doing this already but haven't contributed their work back upstream), so in
> that case yes we still have a shortcoming in that if the node that's acting
> as your target server goes down you're kinda hosed.
The WIP is that the Nova nodes use DRBD as a transport protocol to the 
storage nodes, too; that would implicitly be a multi-connection setup.

The Nova side
https://review.openstack.org/#/c/149244/
got delayed to L, sadly, and so the Cinder side
https://review.openstack.org/#/c/156212/
is on hold, too.

(We've got github repositories where I try to keep these branches 
up-to-date for people who want to test, BTW.)


Of course, if the hypervisor crashes, you'll have to restart the VMs (or 
create new ones).


If you've got any questions, please don't hesitate to ask me (or drbd-user, 
if you prefer that).



Regards,

Phil


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [cinder] qemu-img disk corruption bug to patch or not?

2015-02-20 Thread Philipp Marek
>> Potentially corrupted images are bad; depending on the affected data it
>> might only be diagnosed some time after installation, so IMO the fix is
>> needed.
>
> Sure, but there is a (potentially hefty) performance impact.
Well, do you want fast or working/consistent images?

>> Only a minor issue: I'd like to see it have a check for the qemu version,
>> and only then kick in.
>
> That doesn't work as $distro may have a qemu 2.0.x that has the fix backported.
> That's why the workarounds group was created.  You specify the safe default and
> if a distributor knows its package is safe it can alter the default.
Oh, okay... with the safe default being derived from the qemu version ;)


 At some point you can invert that or remove the option altogether.
Yeah, 5 years down the line ... ;[



[openstack-dev] [infra] [cinder] CI via infra for the DRBD Cinder driver

2015-02-20 Thread Philipp Marek
Hi all,

this is a reflection of the discussion I just had on #openstack-infra; it's 
about (re-)using the central CI infrastructure for our Open-Source DRBD 
driver too.


The current status is:
 * The DRBD driver is already in Cinder, so DRBD-replicated Cinder storage
   using iSCSI to the hypervisors does work out-of-the-box.
 * The Nova-parts didn't make it in for Kilo; we'll try to get them into L.
 * I've got a lib/backends/drbd for devstack that, together with a matching 
   local.conf, can set up a node - at least for a limited set of 
   distributions (as DRBD needs a kernel module, Ubuntu/Debian via DKMS are 
   the easy way). A rough local.conf sketch follows below.
   [Please note that package installation is not yet done in this script 
   - I'm not sure whether I can/may/should simply add an 
   apt-repository.]
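A rough sketch of the matching local.conf fragment (the backend name is the
one our CI uses, everything else is illustrative):

  [[local|localrc]]
  # enable the DRBD backend provided by lib/backends/drbd
  CINDER_ENABLED_BACKENDS=drbd:drbdmanage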


Now, clarkb told me about two caveats:

  «Yup, so the two things I will start with is that multinode testing is 
   still really rudimentary, we only just got tempest sort of working with 
   it. So I might suggest running on a single node first to get the general 
   thing working.

   The other thing is that we don't have the zuul code to vote with 
   a different account deployed/merged yet. So initially you could run your 
   job but it wouldn't vote against, say, cinder.»


Cinder has a deadline for CI: March 19th; upon relaying that fact (or rather, 
a nearly correct date) clarkb said

  «thats about 3 weeks... probably at least for the zuul thing.»

So, actually it's nearly 4 weeks, let's hope that it all works out.


Actually, the multi-node testing will only be needed when we get the Nova 
parts in, because then it would make sense to test (Nova) via both iSCSI 
and the DRBD transport; for Cinder CI a single-node setup is sufficient.


My remaining questions are:
 * Is it possible to have our driver tested via the common infrastructure?
 * Is it okay to set up another apt-repository during the devstack run,
   to install the needed packages? I'm not sure whether our servers
   would simply be accessible, some firewall or filtering proxy could
   break such things easily.
 * Apart from the cinder-backend script in devstack (which I'll have to 
   finish first, see eg. package installation), is any other information 
   needed from us?


Thank you for your feedback and any help you can offer!


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [cinder] qemu-img disk corruption bug to patch or not?

2015-02-19 Thread Philipp Marek
>> In September I started working on a simple (but controversial) fix in
>> nova[1] to avoid some disk corruption issues.  John kindly ran with a
>> similar fix to cinder[2].
>> John's review ended up in merge conflict so I tried my own version [3].
>
> Thanks for sending this out, and for the complete summary.  I'm
> inclined to lean towards working the option for Cinder personally.
> Granted it looks like the update is in the distro packages to fix
> this, but I'm wondering about backport and cases for installs that are
> on say 12.04?  Maybe that's not an issue?
>
> Anybody else have thoughts on this?
Potentially corrupted images are bad; depending on the affected data it 
might only be diagnosed some time after installation, so IMO the fix is 
needed.

Only a minor issue: I'd like to see it have a check for the qemu version, 
and only then kick in.


Regards,

Phil


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-02-16 Thread Philipp Marek
Hi all,

Nikola just told me that I need an FFE for the code as well.

Here it is: please grant a FFE for 

https://review.openstack.org/#/c/149244/

which is the code for the spec at

https://review.openstack.org/#/c/134153/


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-02-16 Thread Philipp Marek
Hi Daniel,
 
> The current feature freeze exceptions are being gathered for code
> reviews whose specs/blueprints are already approved. Since your
> spec does not appear to be approved, you would have to first
> request a spec freeze exception, but I believe it is too late for
> that now in Kilo.
I sent the Spec Freeze Exception request in January:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054225.html


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [nova][cinder][neutron][security] Rootwrap on root-intensive nodes

2015-02-05 Thread Philipp Marek
Hi Daniel,

>> Yes, there's some semantic meaning at that level. But this level already
>> exists at the current rootwrap caller site, too - and if that one can be
>> tricked to do something against image.img && rm -rf /, then the additional
>> layer can be tricked, too.
>
> No, that is really not correct. If you are passing full command strings to
> rootwrap then the caller can trick rootwrap into running commands with those
> shell metacharacter exploits.  If you have a formal API like the one I describe
> and correctly implement it, there would be no shell involved at all. ie the
> nova-compute-worker program would directly invoke the system call
>
>    execve(/usr/bin/qemu-img, create, image.img && rm -rf /)
>
> and this would at worst create a file called 'image.img && rm -rf /' and
> *not* invoke the rm command as you get when you use shell.
>
> This is really just another example of why rootwrap/sudo as a concept is a
> bad idea. The shell should never be involved in executing any external
> commands that Nova/Neutron/etc need to run, because it is impractical to
> correctly validate shell commands anywhere in the stack. The only safe
> thing to do is to take shell out of the picture entirely.
that was just an example.

If cinder calls rootwrap with a list of arguments, and this is then called 
via sudo, there's no shell in between either.


But now say there's something else, like the bash functions bug, and 
some codepath runs a command in a way that sets the environment - then any 
grandchild processes might cause a breach.


I'm not arguing that *this exact string* is a problem.
My point is that the cinder/nova code already has as much information as 
possible, and *any* library abstraction can just remove some available 
data.
So, besides increasing the LOC, it won't get more secure.


I'm arguing that the rootwrap call sites need to be fixed, i.e. they need to 
prove in some way that the arguments they pass on are sane - that's more 
or less the same thing that this new library would need to do too.
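To illustrate the kind of call-site check I mean - a sketch only, names and
patterns are made up, and 'execute' stands in for the project's
rootwrap-backed execute helper:

  import re

  VOLUME_RE = re.compile(r'^volume-[0-9a-f-]+$')

  def remove_volume(execute, vol_name):
      # refuse anything that doesn't look like a Cinder volume name
      if not VOLUME_RE.match(vol_name):
          raise ValueError('suspicious volume name: %r' % vol_name)
      execute('lvremove', '-f', 'cinder-volumes/%s' % vol_name,
              run_as_root=True)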



Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [nova][cinder][neutron][security] Rootwrap on root-intensive nodes

2015-02-04 Thread Philipp Marek
Here are my 2¢.

>>> (1) we could get our act together and audit and fix those filter
>>> definitions. Remove superfluous usage of root rights, make use of
>>> advanced filters for where we actually need them. We have been preaching
>>> for that at many many design summits. This is a lot of work though...
>>> There were such efforts in the past, but they were never completed for
>>> some types of nodes. Worse, the bad filter definitions kept coming back,
>>> since developers take shortcuts, reviewers may not have sufficient
>>> security awareness to detect crappy filter definitions, and I don't
>>> think we can design a gate test that would have such awareness.
Sounds like a lot of work... ongoing, too.


>>> (2) bite the bullet and accept that some types of nodes actually need
>>> root rights for so many different things, they should just run as root
>>> anyway. I know a few distributions which won't be very pleased by such a
>>> prospect, but that would be a more honest approach (rather than claiming
>>> we provide efficient isolation when we really don't). An added benefit
>>> is that we could replace a number of shell calls by Python code, which
>>> would simplify the code and increase performance.
Practical, but unsafe.

I'd very much like to have some best-effort filter against bugs in my 
programming - even more so during development.


> (4) I think that ultimately we need to ditch rootwrap and provide a proper
> privilege separated, formal RPC mechanism for each project.
...
> we should have a nova-compute-worker daemon running as root, that accepts
> an RPC command from nova-compute running unprivileged. eg
>
>    CreateImage(instance0001, qcow2, disk.qcow)
...
> This is certainly a lot more work than trying to patch up rootwrap, but
> it would provide a level of security that rootwrap can never achieve IMHO.
A lot of work, and if input sanitization didn't work in one piece of code, 
why should it here?

I think this only leads to _more_ work, without any real benefit.
If we can't get the filters right the first round, we won't make it here 
either.


Regarding the idea of using containers ... take Cinder as an example.
If the cinder container can access *all* the VM data, why should someone 
bother to get *out* of the container? Everything that she wants is already 
here...
I'm not sure what the containers would buy us, but perhaps I just don't 
understand something here.


So, IMO, solution 1 (one) would be the way to go ... it gets to security 
asymptotically (and might never reach it), but at least it provides a bit 
of help.

And if the rootwrap filter specification were linked to in the rootwrap 
config files, it would help newcomers to see the available syntax, instead 
of simply copying a bad example ;P
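For newcomers, a tiny sketch of what such filter definitions look like
(filter names, commands and the regular expressions are made up; only the
general syntax is real):

  [Filters]
  # weakest form: any arguments are accepted once the command matches
  lvs: CommandFilter, lvs, root
  # stricter form: every argument has to match the corresponding pattern
  lvremove: RegExpFilter, lvremove, root, lvremove, -f, cinder-volumes/.+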



-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [nova][cinder][neutron][security] Rootwrap on root-intensive nodes

2015-02-04 Thread Philipp Marek
>>> (4) I think that ultimately we need to ditch rootwrap and provide a proper
>>> privilege separated, formal RPC mechanism for each project.
>> ...
>>> we should have a nova-compute-worker daemon running as root, that accepts
>>> an RPC command from nova-compute running unprivileged. eg
>>>
>>>    CreateImage(instance0001, qcow2, disk.qcow)
>> ...
>>> This is certainly a lot more work than trying to patch up rootwrap, but
>>> it would provide a level of security that rootwrap can never achieve IMHO.
>> A lot of work, and if input sanitization didn't work in one piece of code,
>> why should it here?
>>
>> I think this only leads to _more_ work, without any real benefit.
>> If we can't get the filters right the first round, we won't make it here
>> either.
>
> The difference is that the API I illustrate here has *semantic* context
> about the operation. In the API example the caller is not permitted to
> provide a directory path - only the name of the instance and the name
> of the disk image. The privileged nova-compute-worker program can thus
> enforce exactly what directory the image is created in, and ensure it
> doesn't clash with a disk image from another VM.  This kind of validation
> is impractical when you are just given 'qemu-img' command line args
> with a full directory path, so there is no semantic context for the privileged
> rootwrap to know whether it is reasonable to create the disk image in that
> particular location.
Sorry about being unclear.

Yes, there's some semantic meaning at that level. But this level already 
exists at the current rootwrap caller site, too - and if that one can be 
tricked to do something against image.img && rm -rf /, then the additional 
layer can be tricked, too.


I'm trying to get at the point
  everything that can be forgotten to check at the rootwrap call site now,
  can be forgotten in the additional API too.

So let's get the current call sites tight, and we're done.  (Ha!)


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-01-30 Thread Philipp Marek
Hi all,

> in Paris (and later on, on IRC and the mailing list) I began to ask around
> about providing a DRBD storage driver for Nova.
> This is an alternative to using iSCSI for block storage access, and would
> be especially helpful for backends already using DRBD for replicated
> storage.
any news about this?
https://review.openstack.org/#/c/134153/


To reiterate:
  * Spec was submitted in time (Nov 13)
  * Spec wasn't approved, because the Cinder implementation
(https://review.openstack.org/#/c/140451/) merge got delayed,
because its prerequisite (https://review.openstack.org/#/c/135139/,
Transition LVM Driver to use Target Objects) wasn't merged in time,
because on deadline day (Dec. 17) Gerrit was so much used that
this devstack run got timeouts against some python site during
setup
  * Spec Freeze Exception was submitted in time
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054225.html
  * Having DRBD for Cinder is good, but using the *same* protocol
to the Nova nodes should really help performance (and reliability);
for example, the transitions
  Network -> Kernel -> iSCSI daemon -> Kernel -> Block Device
wouldn't be needed anymore; the Kernel could directly respond to the
queries, and in the near future even using RDMA (where available).
Reliability should be improved as the Nova node can access multiple
storage nodes _at the same time_, so it wouldn't matter if one of them
crashes for whatever reason.


Please help us *now* to get the change in. It's only a few lines in
a separate driver, so until it gets configured it won't even be
noticed!


And yes, of course we're planning to do CI for that Nova driver, too.


Regards,
 
Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [cinder] [nova] [scheduler] Nova node name passed to Cinder

2015-01-28 Thread Philipp Marek
Hello Vishvananda,

> Initialize connection passes that data to cinder in the call. The connector
> dictionary in the call should contain the info from nova:
>
> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L1051
Ah yes, I see.

Thank you very much!
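For the archives, a minimal sketch of where that shows up on the driver side
(the key names 'host', 'ip' and 'initiator' are what Nova commonly puts into
the connector; everything else is illustrative):

  # sketch only - not the actual DRBD driver code
  def initialize_connection(self, volume, connector):
      nova_host = connector.get('host')       # Nova node that will attach
      nova_ip = connector.get('ip')
      initiator = connector.get('initiator')  # iSCSI IQN, if applicable
      # ... prepare the export for exactly that host ...
      return {'driver_volume_type': 'iscsi', 'data': {}}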


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [cinder] [nova] [scheduler] Nova node name passed to Cinder

2015-01-26 Thread Philipp Marek
Hello Vish,

> Nova passes ip, iqn, and hostname into initialize_connection. That should
> give you the info you need.
thank you, but that is on the _Nova_ side.

I need to know that on the Cinder node already:

  For that the cinder volume driver needs to know at
...
  time which Nova host will be used to access the data.

but it's not passed in there:

  The arguments passed to these functions already include an
  attached_host value, sadly it's currently given as None...


Therefore my question where/when that value is calculated...


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



[openstack-dev] [cinder] [nova] [scheduler] Nova node name passed to Cinder

2015-01-26 Thread Philipp Marek
Hello everybody,

I'm currently working on providing DRBD as a block storage protocol.
For that the cinder volume driver needs to know at

initialize_connection,
create_export, and
ensure_export

time which Nova host will be used to access the data.

I'd like to ask for a bit of help; can somebody tell me which part of
the code decides that, and where the data flows?
Is it already known at that time which node will receive the VM?

The arguments passed to these functions already include an
attached_host value; sadly it's currently given as None...
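
For context, stubs of the three calls and the arguments they receive -
just a sketch, the signatures only approximate the driver interface as I
read it:

    # Stubs only; details may differ from current Cinder.
    class SketchDriver(object):
        def create_export(self, context, volume):
            # no connector argument here, so no Nova host to look at
            pass

        def ensure_export(self, context, volume):
            # re-creates an export (eg. after a restart); again no connector
            pass

        def initialize_connection(self, volume, connector):
            # this call does get a connector dict from the attaching side -
            # the question is whether/where that carries the Nova host
            return {'driver_volume_type': 'drbd', 'data': {}}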


Thank you for any tips, ideas, and pointers into the code - and,
of course, even more so for full-blown patches on review.openstack.org ;)


Regards,

Phil


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-01-12 Thread Philipp Marek
Hello all,

in Paris (and later on, on IRC and the mailing list) I began to ask around 
about providing a DRBD storage driver for Nova.
This is an alternative to using iSCSI for block storage access, and would 
be especially helpful for backends already using DRBD for replicated 
storage.


The spec at

https://review.openstack.org/#/c/134153/

was not approved in December on the grounds that the DRBD Cinder driver 

https://review.openstack.org/#/c/140451/

should be merged first; because of (network) timeouts during the K-1 
milestone (and then merge conflicts, rebased dependencies, etc.) it wasn't
merged until recently (Jan 5th).

Now that the Cinder driver is already upstream, we'd like to ask for 
approval of the Nova driver - it would provide quite some performance boost 
over having all block storage data go through iSCSI.


Thank you for your kind consideration!


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Consistency groups?

2014-11-17 Thread Philipp Marek
Hello Xing,

 Do you have a libvirt volume driver on the Nova side for DRBD?
No, we don't. We'd just use the existing DRBD 9 kernel module to
provide the local block devices.


 Regarding getting consistency group information to the Nova nodes, can 
 you help me understand the steps you need to go through?
 
 1. Create a consistency group
 2. Create a volume and add volume to the group
Repeat the above step until all volumes are created and added to the 
group
 3. Attach volume in the group
 4. Create a snapshot of the consistency group
The question I'm asking right now isn't about snapshots.

 Do you setup the volume on the Nova side at step 3?  We currently don't 
 have a group level API that setup all volumes in a group.  Is it 
 possible for you to detect whether a volume is in a group or not when 
 attaching one volume and setup all volumes in the same group?  
Well, our Cinder driver passes some information to the Nova nodes; within 
that information block we can pass the consistency group (which will be the 
DRBD resource name) as well, to detect that case.

 Otherwise, it sounds like we need to add a group level API for this 
 purpose.
Perhaps just adding a "volume is in consistency group X" data item would be 
enough, too?
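
Just to illustrate the kind of data item I mean - the key names here are
invented for the example, this is not what the driver sends today:

    # Sketch: connection info for one volume, optionally tagged with the
    # consistency group it belongs to - for us that would simply be the
    # DRBD resource name shared by all volumes of the group.
    def build_connection_info(drbd_resource, volume_nr, group_id=None):
        data = {
            'device_path': '/dev/drbd/by-res/%s/%d' % (drbd_resource,
                                                        volume_nr),
            'drbd_resource': drbd_resource,
        }
        if group_id is not None:
            data['consistency_group'] = group_id
        return {'driver_volume_type': 'drbd', 'data': data}

    # Two volumes of the same group carry the same resource/group id, so
    # the Nova side can recognise that they belong together.
    print(build_connection_info('group-X', 0, group_id='group-X'))
    print(build_connection_info('group-X', 1, group_id='group-X'))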


Sorry about being so vague; I'm just not familiar enough with all the 
interdependencies from Cinder to Nova.


Regards,

Phil


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [nova] Consistency groups?

2014-11-13 Thread Philipp Marek
Hi,

I'm working on the DRBD Cinder driver, and am looking at the Nova side, 
too. Is there any idea how Cinder's consistency groups should be used on 
the Nova nodes?

DRBD has easy support for consistency groups (a DRBD resource is 
a collection of DRBD volumes that share a single, serialized connection) 
and so can guarantee write consistency across multiple volumes. 

[ Which does make sense anyway; eg. with multiple iSCSI
  connections one could break down because of STP or
  other packet loss, and then the storage backend and/or
  snapshots/backups/etc. wouldn't be consistent anymore.]


What I'm missing now is a way to get the consistency group information 
to the Nova nodes. I can easily put such a piece of data into the 
transmitted transport information (along with the storage nodes' IP 
addresses etc.) and use it on the Nova side; but that also means that
on the Nova side there'll be several calls to establish the connection,
and several for tear down - and (to exactly adhere to the API contract)
I'd have to make sure that each individual volume is set up (and closed)
in exactly that order again.

That means quite a few unnecessary external calls, and so on.


Is there some idea, proposal, etc., that says that
   *within a consistency group*
all volumes *have* to be set up and shut down 
   *as a single logical operation*?
[ well, there is one now ;]


Because in that case all volume transport information can (optionally) be 
transmitted in a single data block, with several iSCSI/DRBD/whatever
volumes being set up in a single operation; and later calls (for the other 
volumes in the same group) can be simply ignored as long as they have the
same transport information block in them.
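
A very rough sketch of that "ignore later calls" idea on the Nova side -
all names are invented, nothing of this exists yet:

    # One transport block describes the whole DRBD resource (= the group),
    # so it is identical for every volume in that group; remember which
    # resources are already active and turn the repeat calls into no-ops.
    _active_resources = set()

    def connect_volume(transport_info):
        key = transport_info['drbd_resource']
        if key in _active_resources:
            return                   # a sibling volume already set this up
        _active_resources.add(key)
        # ... establish the DRBD connection for the whole resource here ...

    def disconnect_volume(transport_info, volumes_still_attached):
        # only tear down once the last volume of the group is detached
        if volumes_still_attached == 0:
            _active_resources.discard(transport_info['drbd_resource'])
            # ... drop the DRBD connection here ...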


Thank you for all pointers to existing proposals, ideas, opinions, etc.


Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstack/requirements and tarball subdirs

2014-07-14 Thread Philipp Marek
  It might be better to work with the python-dbus authors to get a
  pip-installable package released to PyPI, so that it can be included
  in the tox virtualenv (though for DevStack we'd still want to use
  distro packages instead of PyPI, I think).
 I sent Simon an email about that now.
I talked to him today.

The recommendation is to use the available (distribution) packages, because 
he's had bad experiences with the distutils build system, so he wants to 
stick to the Autotools - and the general issues (needing a C compiler, some 
header files) would be the same with other libraries, too.



   You'll also need to modify cinder's tox.ini to set sitepackages =
   True so the virtualenvs created for the unit tests can see the global
   site-packages directory. Nova does the same thing for some of its
   dependencies.
  ... I'm a little worried about taking on
  sitepackages=True in more projects given the headaches it causes
  (conflicts between versions in your virtualenv and system-installed
  python modules which happen to be dependencies of the operating
  system, for example the issues we ran into with Jinja2 on CentOS 6
  last year).
 But such a change would affect _all_ people, right?
 Hmmm... Do you think such a change will be accepted?
So we're back to this question now.


While I don't have enough knowledge about the interactions to just change 
the virtual-env setup in DevStack, I can surely create an issue on 
https://github.com/openstack-dev/devstack.


How would this requirement be done for production setups? Should 
installers read the requirements.txt and install matching distribution 
packages?

Or is that out of scope of OpenStack/Cinder development anyway, and so 
I can/should ignore that?


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][replication-api] Questions about #64026

2014-07-11 Thread Philipp Marek
Hi Ronen,
hello everybody else,

now that I'm trying to write a DRBD implementation for the Replication API 
(https://review.openstack.org/#/c/64026/) a few questions pop up.

As requested by Ronan I'll put them here on -dev, so that the questions 
(and, hopefully, the answers ;) can be easily found.

To provide a bit of separation I'll do one question per mail, and each in 
a subthread of this mail.


Thanks for the patience, I'm looking forward to hearing your helpful ideas!


Regards,

Phil


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][replication-api] extra_specs too constant

2014-07-11 Thread Philipp Marek
I think that extra_specs in the database is too static, too hard to 
change.


In the case of eg. DRBD, where many nodes may provide some storage space, the 
"replication_partners" list is likely to change often, even if only newly 
added nodes have to be entered[1].

This means that
  a) the admin has to add each node manually
  b) volume_type_extra_specs:value is a VARCHAR(255), which can only hold 
     a few host names (with FQDNs even fewer).

What if, instead of a list of hosts, each backend said "I'm product XYZ, 
version compat N-M" (eg. via get_volume_stats), and all nodes that report 
the same product with an overlapping version range were considered eligible 
for replication?
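
As a sketch of that matching (the field names are made up for this
example; they'd have to come out of get_volume_stats):

    def replication_compatible(stats_a, stats_b):
        # same product, and the "compat N-M" version ranges overlap
        if stats_a['product'] != stats_b['product']:
            return False
        lo_a, hi_a = stats_a['compat_versions']
        lo_b, hi_b = stats_b['compat_versions']
        return lo_a <= hi_b and lo_b <= hi_a

    print(replication_compatible(
        {'product': 'DRBD', 'compat_versions': (9, 10)},
        {'product': 'DRBD', 'compat_versions': (10, 11)}))   # True
    print(replication_compatible(
        {'product': 'DRBD', 'compat_versions': (9, 9)},
        {'product': 'DRBD', 'compat_versions': (10, 11)}))   # False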


Furthermore, replication_rpo_range might depend on other circumstances 
too... if the network connection to the second site is heavily loaded, the 
RPO will vary, too - from a few seconds to a few hours.

So, should we announce a range of (0,7200)?


Ad [1]: because OpenStack sees by itself which nodes are available.


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][replication-api] replication_rpo_range - why two values?

2014-07-11 Thread Philipp Marek
replication_rpo_range currently gets set with two values - a lower and an 
upper bound. File cinder/scheduler/filter_scheduler.py:118 has
if target_rpo < rpo_range[0] or target_rpo > rpo_range[1]:

Why do we check for target_rpo > rpo_range[1]?

Don't use that one if replication is too fast? Because we'd like to 
revert to an older state?
I believe that using snapshots would be more sane for that use case.

Or I just don't understand the reason, which is very likely, too.
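
For reference, the check as I read it - this is my reconstruction of the
two comparisons, paraphrased, not a copy of the review's code:

    def rpo_filter_passes(target_rpo, rpo_range):
        lower, upper = rpo_range
        if target_rpo < lower:
            return False   # backend can't replicate tightly enough - clear
        if target_rpo > upper:
            return False   # backend replicates "too fast"? that's my question
        return True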


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-09 Thread Philipp Marek
 It might be better to work with the python-dbus authors to get a
 pip-installable package released to PyPI, so that it can be included
 in the tox virtualenv (though for DevStack we'd still want to use
 distro packages instead of PyPI, I think).
I sent Simon an email about that now.


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-08 Thread Philipp Marek
  The 1.2.0 release from
  http://dbus.freedesktop.org/releases/dbus-python/ doesn't look like an
  sdist, either (I see a configure script, so I think they've switched
  to autoconf). Uploading that version to PyPI isn't going to give you
  something you can install with pip. Are there system packages for
  dbus-python for the distros we support directly?
  Yes; RHEL6 and Ubuntu 12.04 include python-dbus packages.
 
 How about SuSE and Debian?
Ubuntu got the package from Debian AFAIK; it's available.

A google search seems to indicate a 
dbus-1-python-devel-0.83.0-27.1.43.x86_64.rpm
for SLES11-SP3.


  That release also appears to be just over a year old. Do you know if
  dbus-python is being actively maintained any more? Are there other
  libraries for talking to dbus?
  AFAIK dbus-python is the most current and preferred one.
 
  http://dbus.freedesktop.org/doc/dbus-python/ lists two alternatives, but as
  these are not packaged (yet) I chose python-dbus instead.
 
 
  Can Jenkins use the pre-packaged versions instead of downloading and
  compiling the tarball?
 
 If dbus-python is indeed the best library, that may be the way to go.
 System-level dependencies can be installed via devstack, so you could
 submit a patch to devstack to install this library for cinder's use by
 editing files/*/cinder.
Within the devstack repository, I guess.


 You'll also need to modify cinder's tox.ini to set sitepackages =
 True so the virtualenvs created for the unit tests can see the global
 site-packages directory. Nova does the same thing for some of its
 dependencies.
But such a change would affect _all_ people, right? 

Hmmm... Do you think such a change will be accepted?


Thank you for your help!


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstack/requirements and tarball subdirs

2014-07-07 Thread Philipp Marek
Hi everybody,

I'm trying to get
https://review.openstack.org/#/c/99013/
through Jenkins, but keep failing.


The requirement I'm trying to add is
 dbus-python>=0.83 # MIT License


The logfile at

http://logs.openstack.org/13/99013/2/check/check-requirements-integration-dsvm/d6e5418/console.html.gz
says this:

 Downloading/unpacking dbus-python>=0.83 (from -r /tmp/tmpFt8D8L (line 13))
Loads the tarball from
  
https://pypi.python.org/packages/source/d/dbus-python/dbus-python-0.84.0.tar.gz.
   Using download cache from /tmp/tmp.JszD7LLXey/download/...
   Running setup.py (path:/tmp/...) egg_info for package dbus-python

but then fails
Traceback (most recent call last):
  File "<string>", line 17, in <module>
IOError: [Errno 2] No such file or directory: 
'/tmp/tmpH1D5G3/build/dbus-python/setup.py'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):

  File "<string>", line 17, in <module>

 IOError: [Errno 2] No such file or directory:
   '/tmp/tmpH1D5G3/build/dbus-python/setup.py'

I guess the problem is that the subdirectory within that tarball includes 
the version number, as in dbus-python-0.84.0/. How can I tell the extract 
script that it should look into that one?


Thank you for your help!


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-07 Thread Philipp Marek

Hello Doug,


thank you for your help.

  I guess the problem is that the subdirectory within that tarball includes
  the version number, as in dbus-python-0.84.0/. How can I tell the extract
  script that it should look into that one?
 
 It looks like that package wasn't built correctly as an sdist, so pip
 won't install it. Have you contacted the author to report the problem
 as a bug?
No, not yet.

I thought that it was okay, being hosted on python.org and so on.

The other tarballs in that hierarchy follow the same scheme; perhaps the 
cached download is broken?


The most current releases are available on
http://dbus.freedesktop.org/releases/dbus-python/
though; perhaps the 1.2.0 release works better?

But how could I specify to use _that_ source URL?


Thank you!


Regards,

Phil


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-10 Thread Philipp Marek
Hello Duncan,

 The best thing to do with the code is push up a gerrit review! No need
 to be shy, and you're very welcome to push up code before the
 blueprint is in, it just won't get merged.
thank you for your encouragement!


I pushed another fix for Cinder last week (2 lines, making it possible to 
start the services via pudb) by committing


commit 7b6c6685ba3fb40b6ed65d8e3697fa9aac899d85
Author: Philipp Marek philipp.ma...@linbit.com
Date:   Fri Jun 6 11:48:52 2014 +0200

Make starting cinder services possible with pudb, too.


I had that rebased to be on top of
6ff7d035bf507bf2ec9d066e3fcf81f29b4b481c
(the then-master HEAD), and pushed to
refs/for/master
on
ssh://phma...@review.openstack.org:29418/openstack/cinder

but couldn't find the commit in Gerrit anywhere ..

Even a search
https://review.openstack.org/#/q/owner:self,n,z
is empty.


Clicking around I found
https://review.openstack.org/#/admin/projects/openstack/cinder
which says 
"Require a valid contributor agreement to upload: TRUE"
but to the best of my knowledge this should be done:
https://review.openstack.org/#/settings/agreements
says
"Verified   ICLA   OpenStack Individual Contributor License Agreement"


So I'm a bit confused right now - what am I doing wrong?



 I'm very interested in this code.
As soon as I've figured out how this Gerrit thing works you can take a 
look ... (or even sooner, see the github link in my previous mail.)



Regards,

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-10 Thread Philipp Marek
So, I now tried to push the proof-of-concept driver to Gerrit,
and got this:

 Downloading/unpacking dbus (from -r /home/jenkins/workspace/gate-
cinder-pep8/requirements.txt (line 32))
   http://pypi.openstack.org/openstack/dbus/ uses an insecure transport 
scheme (http). Consider using https if pypi.openstack.org has it 
available
   Could not find any downloads that satisfy the requirement dbus (from 
-r /home/jenkins/workspace/gate-cinder-pep8/requirements.txt (line 32))


So, how would I get additional modules (dbus and its dependencies) onto 
pypi.openstack.org? I couldn't find a process for that.


Regards,

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-10 Thread Philipp Marek
Hrmpf, sent too fast again.

I guess https://wiki.openstack.org/wiki/Requirements is the link I was 
looking for.


Sorry for the noise.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-03 Thread Philipp Marek
Hi everybody,

at the Juno Design Summit we held a presentation about using DRBD 9 
within OpenStack.

Here's an overview about the situation; I apologize in advance that the 
mail got a bit longer, but I think it makes sense to capture all that 
information in a single piece.



 WHAT WE HAVE


Design Summit notes:
https://etherpad.openstack.org/p/juno-cinder-DRBD


As promised we've got a proof-of-concept implementation for the simplest 
case, using DRBD to access data on all nodes - the DRBDmanage volume 
driver as per the Etherpad notes (link see below).


As both DRBD 9 and DRBDmanage are still in heavy development, there are 
quite a few rough edges; in case anyone's interested in setting that up 
on some testsystem, I can offer RPMs and DEBs of drbd-utils and 
drbdmanage, and for the DRBD 9 kernel module for a small set of kernel 
versions:

Ubuntu 12.04       3.8.0-34-generic
RHEL6 (& compat)   2.6.32_431.11.2.el6.x86_64

If there's consensus that some specific kernel version should be used 
for testing instead I can try to build packages for that, too.


There's a cinder git clone with our changes at
https://github.com/phmarek/cinder
so that all developments can be discussed easily.
(Should I use some branch in github.com/OpenStack/Cinder instead?)



 FUTURE PLANS


The (/our) plans are:

 * LINBIT will continue DRBD 9 and DRBDmanage development,
   so that these get production-ready ASAP.
   Note: DRBDmanage is heavily influenced by outside
   requirements, eg. OpenStack Cinder Consistency Groups...
   So the sooner we're aware of such needs the better;
   I'd like to avoid changing the DBUS api multiple times ;)

 * LINBIT continues to work on the DRBD Cinder volume driver,
   as this is 

 * LINBIT starts to work to provide DRBD 9 integration
   between the LVM and iSCSI layer.
   That needs the Replication API to be more or less finished.

There are a few dependencies, though ... please see below.


All help - ideas, comments (both for design and code), all feedback, 
and, last but not least, patches or pull requests - are *really* 
welcome, of course.

(For real-time communication I'm available in the #openstack-cinder 
channel too, mostly during European working hours; I'm flip\d+.)



 WHAT WE NEED


Now, while I filled out the CLA, I haven't read through all the 
documentation regarding Processes & Workflow yet ... and that'll take 
some time, I gather.


Furthermore, on the technical side there's a lot to discuss, too;
eg. regarding snapshots there are quite a few things to decide.

 * Should snapshots be taken on _one_ of the storage nodes,
 * on some subset, or
 * on all of them?

I'm not sure whether the same redundancy that's defined for the volume 
is wanted for the snapshots, too.
(I guess one usecase that should be possible is to take at least one 
snapshot of the volume in _each_ data center?)


Please note that volume groups would be good to have (if not 
essential) for a DRBD integration, because only then could DRBD ensure 
data integrity *across* volumes (by using a single resource for all of 
them).
See also 
https://etherpad.openstack.org/p/juno-cinder-cinder-consistency-groups; 
basically, the volume driver just needs to get an 
additional value to associate volumes into such a group.



 EULA


Now, there'll be quite a few things I forgot to mention, or that I'm 
simply missing. Please bear with me, I'm fairly new to OpenStack.


So ... ideas, comments, other feedback?


Regards,

Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev