Hi everybody,
> > Hi, I'd like to know if it is possible to use openstack-cinder-volume with
> > a remote LVM. This could be a new feature proposal if the idea is good.
> > More precisely, I'm thinking of a solution where openstack-cinder-volume
> > runs on a dedicated node and LVM runs on another node.
> Ok, to turn the question around, we (the cinder team) have recognised a
> definite and strong need to have somewhere for vendors to share patches on
> versions of Cinder older than the stable branch policy allows.
>
> Given this need, what are our options?
>
> 1. We could do all this outside
> I want to propose
> we officially make a change to our stable policy to call out that
> driver bugfixes (NOT new driver features) be allowed at any time.
Emphatically +1 from me.
With the small addendum that "bugfixes" should include compatibility
changes for libraries used.
Thanks for
Hi Preston,
> The benchmark scripts are in:
>
> https://github.com/pbannister/openstack-bootstrap
In case that might help, here are a few notes and hints about doing
benchmarks for the DRBD block device driver:
http://blogs.linbit.com/p/897/benchmarking-drbd/
Perhaps there's something
Hi all,
I quite like the page at http://ci-watch.tintri.com/project - it gives
a very quick overview of the failures one should look into, and of the
ones to ignore ;)
Please let me state before anything else that I don't know any of the
restrictions that may have led to the current design -
Hi everybody,
in the current patch https://review.openstack.org/#/c/259973/1 the test
script needs to use a lot of the constant definitions of the backend driver
it's using (DRBDmanage).
As the DRBDmanage libraries need not be installed on the CI nodes, I'm
providing a minimum of upstream
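A common pattern for such stand-in definitions is a fallback import; the constant names below are invented placeholders, not real DRBDmanage identifiers:

```python
# Sketch: use the real DRBDmanage constants when the library is present,
# otherwise define the small subset the test script needs.
try:
    from drbdmanage import consts  # real library, if installed on this node
except ImportError:
    class consts(object):
        # minimal stand-in definitions for CI nodes without DRBDmanage;
        # names and values here are hypothetical
        FLAG_DISKLESS = "diskless"
        TSTATE_PREFIX = "tstate:"

def is_diskless(flags):
    """Check a flag list against the (possibly stand-in) constant."""
    return consts.FLAG_DISKLESS in flags
```

This keeps the test script importable everywhere, while still using the authoritative values wherever the driver's library happens to be installed.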
Hi Hao Wang,
> In fact, there is a reason that I ask this question. Recently I have been
> unsure whether Cinder should provide Disaster Recovery for storage
> resources, like volumes. I mean, we have volume replication v1, but for
> DR, especially DR between two independent
>
> About uploading encrypted volumes to image, there are three options:
> 1. Glance only keeps non-encrypted images. So when uploading encrypted
> volumes to an image, Cinder decrypts the data and uploads it.
> 2. Glance maintains encrypted images. Cinder just uploads the encrypted
> data to the image.
> > I'm currently trying to work around an issue where activating LVM
> > snapshots created through cinder can potentially take a long time.
[[ thick LVM snapshot performance problem ]]
> > Given the above, is there any reason why we couldn't make thin
> > provisioning the default?
>
> My intention
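For comparison, here's the difference on the command line (a sketch only; the VG and LV names are made up, and this needs root on a scratch machine, not production):

```shell
# Thick (classic) snapshot: a fixed-size COW area must be preallocated,
# and every origin write is copied into each active snapshot - which is
# what makes activation and IO slow once snapshots pile up.
lvcreate -s -L 1G -n snap-thick cinder-volumes/vol1

# Thin provisioning: volumes and snapshots share one pool; a thin
# snapshot needs no preallocated size and stays cheap to create
# and activate regardless of how much the origin has changed.
lvcreate -L 10G -T cinder-volumes/pool
lvcreate -V 1G -T cinder-volumes/pool -n vol2
lvcreate -s -n snap-thin cinder-volumes/vol2
```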
Well, has it already been decided that Pacemaker will be chosen to provide HA
in OpenStack? There was a talk "Pacemaker: the PID 1 of OpenStack", IIRC.
I know that Pacemaker was pushed aside in an earlier ML post, but IMO
*so much* has already been done for HA in Pacemaker that OpenStack
[...]
Pacemaker is *the* Linux HA Stack.
[...]
Can you expand on this assertion? It doesn't look to me like it's
part of the Linux source tree and I see strong evidence to suggest
it's released and distributed completely separately from the kernel.
If you read Linux as GNU/Linux or Linux
Well, SUSE and Red Hat (7) use Pacemaker by default, and Debian/Ubuntu have it
(along with others)...
That gives it quite some market share, wouldn't you think?
Yes, I guess the most popular meaning is a good match here.
I see, so in the same way that nano is *the* Linux text editor
If we end up using a DLM then we have to detect when the connection to
the DLM is lost on a node and stop all ongoing operations to prevent
data corruption.
It may not be trivial to do, but we will have to do it in any solution
we use, even on my last proposal that only uses the DB in
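A generic shape for that watchdog, independent of whichever DLM library ends up being used (the `ping` and `abort_all` callables are hypothetical placeholders, not a real API):

```python
import threading
import time

class DlmWatchdog(object):
    """Sketch: abort local operations once the DLM connection is presumed
    lost, before the cluster could hand our locks to another node."""

    def __init__(self, ping, abort_all, timeout=5.0, interval=1.0):
        self._ping = ping            # returns True while the DLM answers
        self._abort_all = abort_all  # stops every ongoing volume operation
        self._timeout = timeout
        self._interval = interval
        self._last_ok = time.time()
        self._stop = threading.Event()

    def _run(self):
        while not self._stop.is_set():
            if self._ping():
                self._last_ok = time.time()
            elif time.time() - self._last_ok > self._timeout:
                # Connection considered lost: fail fast instead of
                # risking two nodes writing to the same volume.
                self._abort_all()
                return
            self._stop.wait(self._interval)

    def start(self):
        t = threading.Thread(target=self._run)
        t.daemon = True
        t.start()
        return t
```

The hard part is of course what `abort_all` has to do per backend, but the detection loop itself stays the same for every solution discussed here.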
--
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :
DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
on IRC, this is one of the most important assets OpenStack has today!
getting skipped...
Of course, if the hypervisor crashes, you'll have to restart the VMs (or
create new ones).
If you've got any questions, please don't hesitate to ask me (or drbd-user,
if you prefer that).
Regards,
Phil
Potentially corrupted images are bad; depending on the affected data it
might only be diagnosed some time after installation, so IMO the fix is
needed.
Sure but there is a (potentially hefty) performance impact.
Well, do you want fast or working/consistent images?
* Apart from the cinder-backend script in devstack (which I'll have to
finish first, see e.g. package installation), is any other information
needed from us?
Thank you for your feedback and any help you can offer!
Regards,
Phil
on this?
Potentially corrupted images are bad; depending on the affected data it
might only be diagnosed some time after installation, so IMO the fix is
needed.
Only a minor issue: I'd like to see it check the qemu version first,
and only kick in where needed.
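Such a guard might look like the sketch below; the "first fixed" version number is a made-up assumption, not the actual qemu release that resolved the issue:

```python
import re
import subprocess

def qemu_img_version():
    """Parse a version tuple out of `qemu-img --version` output.
    Returns None if qemu-img is missing or the output is unexpected."""
    try:
        out = subprocess.check_output(["qemu-img", "--version"])
    except (OSError, subprocess.CalledProcessError):
        return None
    m = re.search(r"version (\d+)\.(\d+)(?:\.(\d+))?",
                  out.decode("utf-8", "replace"))
    if not m:
        return None
    return tuple(int(x or 0) for x in m.groups())

def needs_workaround(version, first_fixed=(2, 2, 0)):
    # Only kick in for versions older than the (hypothetical) fixed
    # release; an unknown version means we stay out of the way.
    return version is not None and version < first_fixed
```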
Regards,
Phil
Hi all,
Nikola just told me that I need an FFE for the code as well.
Here it is: please grant an FFE for
https://review.openstack.org/#/c/149244/
which is the code for the spec at
https://review.openstack.org/#/c/134153/
Regards,
Phil
sent the Spec Freeze Exception request in January:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054225.html
Regards,
Phil
more secure.
I'm arguing that the rootwrap call sites need to be fixed, i.e. they need to
prove in some way that the arguments they pass on are sane - that's more
or less the same thing that this new library would need to do, too.
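As a sketch of what "proving the arguments are sane" could mean at a call site (the path pattern below is an illustrative assumption, not Cinder's actual policy):

```python
import re

# The unprivileged code validates its arguments *before* they ever
# reach a rootwrap'd command line, so shell metacharacters or "../"
# tricks never make it to the privileged helper.
_SAFE_DEV = re.compile(r"^/dev/mapper/[A-Za-z0-9][A-Za-z0-9._-]*$")

def checked_dev_path(path):
    """Return `path` unchanged if it looks like a device-mapper node;
    raise ValueError otherwise."""
    if not _SAFE_DEV.match(path):
        raise ValueError("refusing suspicious device path: %r" % path)
    return path
```

A call site would then do something like `execute('dd', 'if=' + checked_dev_path(dev), ...)` - and whatever the new library ends up being, it would have to perform essentially this check on its side as well.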
Regards,
Phil
syntax, instead
of simply copying a bad example ;P
at the rootwrap call site now,
can be forgotten in the additional API too.
So let's get the current call sites tight, and we're done. (Ha!)
for whatever reason.
Please help us *now* to get the change in. It's only a few lines in
a separate driver, so until it gets configured it won't even be
noticed!
And yes, of course we're planning to do CI for that Nova driver, too.
Regards,
Phil
will be used to access the data.
but it's not passed in there:
the arguments passed to this function already include an
attached_host value; sadly, it's currently given as None...
Hence my question: where/when is that value calculated?
Regards,
Phil
, and pointers into the code - and,
of course, even more so for full-blown patches on review.openstack.org ;)
Regards,
Phil
is in consistency group X data item would be
enough, too?
Sorry about being so vague; I'm just not familiar enough with all the
interdependencies between Cinder and Nova.
Regards,
Phil
, opinions, etc.
Phil
would this requirement be handled for production setups? Should
installers read the requirements.txt and install matching distribution
packages?
Or is that out of scope for OpenStack/Cinder development anyway, so that
I can/should ignore it?
Regards,
Phil
found.
To provide a bit of separation I'll ask one question per mail, each in
a subthread of this one.
Thanks for your patience - I'm looking forward to your helpful ideas!
Regards,
Phil
to a few hours.
So, should we announce a range of (0,7200)?
Ad 1: because OpenStack sees by itself which nodes are available.
? Because we'd like to
revert to an older state?
I believe that using snapshots would be saner for that use case.
Or maybe I just don't understand the reason, which is quite likely, too.
see the global
site-packages directory. Nova does the same thing for some of its
dependencies.
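If I understand correctly, the knob in question would be tox's site-packages switch; that this is the mechanism Nova uses is an assumption on my part, so the snippet below is just a sketch:

```ini
# tox.ini - let the virtualenv fall through to the global
# site-packages, so distro-installed modules (e.g. dbus)
# become importable inside the venv:
[testenv]
sitepackages = True
```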
But such a change would affect _all_ people, right?
Hmmm... do you think such a change would be accepted?
Thank you for your help!
Regards,
Phil
can I tell the extract
script that it should look into that one?
Thank you for your help!
Regards,
Phil
releases are available on
http://dbus.freedesktop.org/releases/dbus-python/
though; perhaps the 1.2.0 release works better?
But how could I specify to use _that_ source URL?
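For reference: newer pip understands direct-reference requirement lines (PEP 508); whether the gate's requirements machinery would accept one is an assumption on my part:

```
# requirements.txt - pin an exact source tarball instead of an index lookup:
dbus-python @ http://dbus.freedesktop.org/releases/dbus-python/dbus-python-1.2.0.tar.gz
```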
Thank you!
Regards,
Phil
to start
the services via pudb) by committing
commit 7b6c6685ba3fb40b6ed65d8e3697fa9aac899d85
Author: Philipp Marek <philipp.ma...@linbit.com>
Date:   Fri Jun 6 11:48:52 2014 +0200

    Make starting cinder services possible with pudb, too.
I had that rebased to be on top
So, I now tried to push the proof-of-concept driver to Gerrit,
and got this:
Downloading/unpacking dbus (from -r /home/jenkins/workspace/gate-cinder-pep8/requirements.txt (line 32))
http://pypi.openstack.org/openstack/dbus/ uses an insecure transport
scheme (http). Consider using https if
Hrmpf, sent too fast again.
I guess https://wiki.openstack.org/wiki/Requirements is the link I was
looking for.
Sorry for the noise.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
Hi everybody,
at the Juno Design Summit we gave a presentation on using DRBD 9
within OpenStack.
Here's an overview of the situation; I apologize in advance that the
mail got a bit long, but I think it makes sense to capture all that
information in a single piece.
WHAT WE HAVE