Re: [openstack-dev] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-12-04 Thread Dan Genin

Hello Dmitry,

This is off the topic of the original email, but related to RBD.

Looking through the code of LibvirtDriver._get_instance_disk_info() I 
noticed that it only returns data for disks with a source file=... or 
source dev=... element, but not source protocol=..., so RBD-backed 
instances would be incorrectly reported to have no ephemeral disks. I 
don't know whether this is already handled somewhere in the RBD code, 
but I wanted to bring it to your attention since you seem to be working 
on RBD support.
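For illustration, here is a minimal, hypothetical sketch of the filtering behavior described above (not Nova's actual implementation): only disks whose source element carries a file or dev attribute are collected, so a network-backed RBD disk is silently skipped. The sample domain XML and paths are invented.

```python
# Hypothetical sketch of the behavior described above -- NOT Nova's actual
# code. Disks whose <source> carries a 'file' or 'dev' attribute are
# collected; an RBD disk (<source protocol='rbd' .../>) has neither
# attribute and is silently dropped from the result.
from xml.etree import ElementTree

DOMAIN_XML = """
<domain>
  <devices>
    <disk type='file'><source file='/var/lib/nova/instances/x/disk'/></disk>
    <disk type='block'><source dev='/dev/sdb'/></disk>
    <disk type='network'><source protocol='rbd' name='vms/x_disk'/></disk>
  </devices>
</domain>
"""

def get_instance_disk_info(domain_xml):
    disks = []
    for source in ElementTree.fromstring(domain_xml).findall(
            './devices/disk/source'):
        path = source.get('file') or source.get('dev')
        if path:  # network-backed sources have neither 'file' nor 'dev'
            disks.append(path)
    return disks

# The RBD-backed disk is absent from the result:
print(get_instance_disk_info(DOMAIN_XML))
```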


Best regards,
Dan

On 07/16/2014 05:17 PM, Dmitry Borodaenko wrote:

I've got a bit of good news and bad news about the state of landing
the rbd-ephemeral-clone patch series for Nova in Juno.

The good news is that the first patch in the series
(https://review.openstack.org/91722 fixing a data loss inducing bug
with live migrations of instances with RBD backed ephemeral drives)
was merged yesterday.

The bad news is that after two months of sitting in the review queue, and
only getting its first +1 from a core reviewer on the spec approval
freeze day, the spec for the blueprint rbd-clone-image-handler
(https://review.openstack.org/91486) wasn't approved in time. Because
of that, today the blueprint was rejected along with the rest of the
commits in the series, even though the code itself was reviewed and
approved a number of times.

Our last chance to avoid putting this work on hold for yet another
OpenStack release cycle is to petition for a spec freeze exception in
the next Nova team meeting:
https://wiki.openstack.org/wiki/Meetings/Nova

If you're using Ceph RBD as a backend for ephemeral disks in Nova and
are interested in this patch series, please speak up. Since the biggest
concern raised about this spec so far has been lack of CI coverage,
please let us know if you're already using this patch series with
Juno, Icehouse, or Havana.

I've put together an etherpad with a summary of where things are with
this patch series and how we got here:
https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status

Previous thread about this patch series on ceph-users ML:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html

--
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-29 Thread Dan Genin

On 10/28/2014 02:50 PM, John Griffith wrote:



On Tue, Oct 28, 2014 at 12:37 PM, Dan Genin daniel.ge...@jhuapl.edu wrote:


Great, thank you, Duncan. I will then proceed with the shared
volume group.

Dan


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Dan Genin
Duncan, I don't think it's possible to have multiple volume groups using 
the same physical volume[1]. In fact, counter-intuitively (at least to 
me), the nesting actually goes the other way: multiple physical volumes 
comprise a single volume group. The LVM naming scheme actually makes 
more sense with this hierarchy.


So this brings us back to the original proposal of having separate 
backing files for Cinder and Nova which Dean thought might take too much 
space.


Duncan, could you please elaborate on the pain a single volume group is 
likely to cause for Cinder? Is it a show stopper?


Thank you,
Dan

1. https://wiki.archlinux.org/index.php/LVM#LVM_Building_Blocks


On 10/21/2014 03:10 PM, Duncan Thomas wrote:


Sharing the VG with Cinder is likely to cause some pain when testing 
proposed features that reconcile the Cinder backend with the Cinder DB. 
Creating a second VG sharing the same backing PV is easy and avoids 
all such problems.


Duncan Thomas



Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Dan Genin

On 10/28/2014 11:56 AM, Dean Troyer wrote:
On Tue, Oct 28, 2014 at 9:27 AM, Dan Genin daniel.ge...@jhuapl.edu wrote:


So this brings us back to the original proposal of having separate
backing files for Cinder and Nova which Dean thought might take
too much space.


Between Cinder, Nova and Swift (and Ceph, etc) everybody wants some 
loopback disk images.  DevStack's Swift and Ceph configurations assume 
loopback devices and do no sharing.


Duncan, could you please elaborate on the pain a single volume
group is likely to cause for Cinder? Is it a show stopper?


Back in the day, DevStack was built to configure Cinder (and Nova 
Volume before that) to use a specific existing volume group 
(VOLUME_GROUP_NAME) or create a loopback file if necessary.  With the 
help of VOLUME_NAME_PREFIX and volume_name_template DevStack knew 
which logical volumes belong to Cinder and could Do The Right Thing.


With three loopback files being created, all wanting larger and larger 
defaults, adding a fourth becomes Just One More Thing.  If Nova's use 
of LVM is similar enough to Cinder's (uses deterministic naming for 
the LVs) I'm betting we could make it work.


dt
Nova's disk names are of the form instance-<uuid>_<disk type>, so they are 
deterministic but, unfortunately, not necessarily predictable. It sounds 
like Duncan is saying that Cinder needs a fixed prefix for testing its 
functionality. I will be honest, I am not optimistic about convincing 
Nova to change their disk naming scheme for the sake of LVM testing. Far 
more important changes have lingered for months and sometimes longer.


It sounds like you are concerned about two issues with regard to the 
separate volume groups approach: 1) potential loop device shortage and 
2) growing space demand. The second issue, it seems to me, will arise no 
matter which of the two solutions we choose. More space will be required 
for testing Nova's LVM functionality one way or another, although, using 
a shared volume group would permit a more efficient use of the available 
space. The first issue is, indeed, a direct consequence of the choice to 
use distinct volume groups. However, the number of available loop 
devices can be increased by passing the appropriate boot parameter to 
the kernel, which can be easy or hard depending on how the test VMs are 
spun up.
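For reference, the boot parameter alluded to above is the loop driver's max_loop option; a sketch of the two usual ways to raise the limit follows (file paths and the value 16 are illustrative):

```shell
# Raising the loop device limit -- paths and numbers are illustrative.

# If the loop driver is built into the kernel, add the parameter to the
# kernel command line (e.g. in /etc/default/grub, then regenerate the
# grub config):
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet max_loop=16"

# If loop is built as a module, the equivalent module option is:
#   # /etc/modprobe.d/loop.conf
#   options loop max_loop=16

# After a reboot, `losetup -f` should report a free device beyond the
# default eight (/dev/loop0../dev/loop7).
```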


I am not saying that we should necessarily go the way of separate volume 
groups but, assuming for the moment that changing Nova's disk naming 
scheme is not an option, we need to figure out what will bring the least 
amount of pain forcing Cinder tests to work around Nova volumes or 
create separate volume groups.


Let me know what you think.
Dan



--

Dean Troyer
dtro...@gmail.com




Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Dan Genin

On 10/28/2014 12:47 PM, Duncan Thomas wrote:

Hi Dan

You're quite right, the nesting isn't as I thought it was, sorry to mislead you.

It isn't a show stopper, it just makes testing some proposed useful
functionality slightly harder. If nova were to namespace its volumes
(e.g. start all the volume names with nova-*) then that would allow
the problem to be easily worked around in the test, does that sound
reasonable?
Changing Nova disk names is a long shot. It's likely I will be doing 
something else by the time that gets merged. :) So we are left with the 
two options of 1) using a shared volume group and, thus, complicating 
life for Cinder or 2) using separate volume groups potentially causing 
headaches for DevStack. I am trying to figure out which of these two is 
the lesser evil. It seems that Dean's concerns can be addressed, though, 
he still has to weigh in on the proposed mitigation approaches. I have 
little understanding of what problems a shared Cinder-Nova volume group 
would cause for Cinder testing. How hard would it be to make the tests 
work with a shared volume group?


Dan



Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Dan Genin

Great, thank you, Duncan. I will then proceed with the shared volume group.

Dan

On 10/28/2014 02:06 PM, Duncan Thomas wrote:

Cinder volumes are always (unless you go change the default) in the
form volume-<uuid>, and since the string 'volume-' is never a valid
UUID, then I think we can work around Nova volumes fine when we come
to write our tests.

Sorry for the repeated circling on this, but I think I'm now happy.

Thanks
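A tiny, hypothetical sketch of the workaround Duncan describes: in a shared volume group, Cinder's tests could select only their own LVs via the volume-<uuid> template, since Nova's LV names (derived from the instance UUID) can never start with the literal string 'volume-'. The sample LV names are invented.

```python
# Hypothetical illustration of the naming-based workaround (sample LV
# names are invented). Cinder LVs follow the volume-<uuid> template;
# anything else in the shared VG is assumed to belong to another service.
import uuid

def cinder_volumes(lv_names):
    """Return only the LVs matching Cinder's volume-<uuid> template."""
    prefix = 'volume-'
    matches = []
    for name in lv_names:
        if not name.startswith(prefix):
            continue
        try:
            uuid.UUID(name[len(prefix):])
        except ValueError:
            continue  # prefixed, but the remainder is not a UUID
        matches.append(name)
    return matches

shared_vg = [
    'volume-6f1b4ab1-98a0-4f1e-b4d2-ae0b2c06ed58',          # Cinder volume
    'a3a2a7a2-6a62-4f4e-9f3e-1d2c3b4a5d6e_disk',            # Nova root disk
    'a3a2a7a2-6a62-4f4e-9f3e-1d2c3b4a5d6e_disk.ephemeral',  # Nova ephemeral
]
print(cinder_volumes(shared_vg))  # only the 'volume-' LV is returned
```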





[openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-21 Thread Dan Genin

Hello,

I would like to add to DevStack the ability to stand up Nova with LVM 
ephemeral storage. Below is a draft of the blueprint describing the 
proposed feature.

Suggestions on architecture, implementation and the blueprint in general 
are very welcome.

Best,
Dan


Enable LVM ephemeral storage for Nova


Currently DevStack supports only file-based ephemeral storage for Nova, 
e.g., raw and qcow2. This is an obstacle to Tempest testing of Nova with 
LVM ephemeral storage, which in the past has been inadvertently broken 
(see, for example, https://bugs.launchpad.net/nova/+bug/1373962), and to 
Tempest testing of new features based on LVM ephemeral storage, such as 
LVM ephemeral storage encryption.

To enable Nova to come up with LVM ephemeral storage it must be provided a
volume group. Based on an initial discussion with Dean Troyer, this is best
achieved by creating a single volume group for all services that potentially
need LVM storage; at the moment these are Nova and Cinder.

Implementation of this feature will:

 * move code in lib/cinder/cinder_backends/lvm to lib/lvm with appropriate
   modifications

 * rename the Cinder volume group to something generic, e.g., devstack-vg

 * modify the Cinder initialization and cleanup code appropriately to use
   the new volume group

 * initialize the volume group in stack.sh, shortly before services are
   launched

 * clean up the volume group in unstack.sh after the services have been
   shut down

The question of how large to make the common Nova-Cinder volume group 
in order to enable LVM ephemeral Tempest testing will have to be 
explored, although, given the tiny instance disks used in Nova Tempest 
tests, the current Cinder volume group size may already be adequate.

No new configuration options will be necessary, assuming the volume 
group size will not be made configurable.
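To make the proposal concrete, here is a sketch of the configuration it would produce. The localrc variable name is hypothetical (it would be settled during review); the two nova.conf options shown do exist in Nova's libvirt driver.

```shell
# Hypothetical DevStack localrc knob (final name to be decided in review):
#   NOVA_BACKEND=LVM

# What DevStack would then write into nova.conf; images_type and
# images_volume_group are real options of Nova's libvirt driver:
#   [libvirt]
#   images_type = lvm
#   images_volume_group = devstack-vg    # the shared VG proposed above
```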





Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-21 Thread Dan Genin
Did you mean EBS? I thought it was generally hard to get the same kind 
of performance from block storage that local ephemeral storage provides, 
but perhaps Amazon has found a way. Life would certainly be much simpler 
with a single ephemeral backend. Storage pools 
(https://blueprints.launchpad.net/nova/+spec/use-libvirt-storage-pools) 
should provide some of the same benefits.


On 10/21/2014 02:54 PM, Preston L. Bannister wrote:
As a side-note, the new AWS flavors seem to indicate that the Amazon 
infrastructure is moving to all ECS volumes (and all flash, possibly), 
both ephemeral and not. This makes sense, as fewer code paths and less 
interoperability complexity is a good thing.


That the same balance of concerns should apply in OpenStack, seems likely.






Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-21 Thread Dan Genin
Do you mean that Cinder will be confused by Nova's volumes in its volume 
group?


Yeah, sure that would be similarly easy to implement. Thank you for the 
suggestion!


Dan

On 10/21/2014 03:10 PM, Duncan Thomas wrote:


Sharing the VG with Cinder is likely to cause some pain when testing 
proposed features that reconcile the Cinder backend with the Cinder DB. 
Creating a second VG sharing the same backing PV is easy and avoids 
all such problems.


Duncan Thomas



Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-21 Thread Dan Genin
So then it is probably best to leave the existing Cinder LVM code in 
lib/cinder_backends/lvm alone and create a similar set of LVM scripts 
for Nova, perhaps in lib/nova_backends/lvm?

Dan

On 10/21/2014 03:10 PM, Duncan Thomas wrote:


Sharing the VG with Cinder is likely to cause some pain when testing 
proposed features that reconcile the Cinder backend with the Cinder DB. 
Creating a second VG sharing the same backing PV is easy and avoids 
all such problems.


Duncan Thomas

On Oct 21, 2014 4:07 PM, Dan Genin daniel.ge...@jhuapl.edu 
mailto:daniel.ge...@jhuapl.edu wrote:


Hello,

I would like to add to DevStack the ability to stand up Nova with
LVM ephemeral
storage. Below is a draft of the blueprint describing the proposed
feature.

Suggestions on architecture, implementation and the blueprint in
general are very
welcome.

Best,
Dan


Enable LVM ephemeral storage for Nova


Currently DevStack supports only file based ephemeral storage for
Nova, e.g.,
raw and qcow2. This is an obstacle to Tempest testing of Nova with
LVM ephemeral
storage, which in the past has been inadvertantly broken
(see for example, https://bugs.launchpad.net/nova/+bug/1373962),
and to Tempest
testing of new features based on LVM ephemeral storage, such as
LVM ephemeral
storage encryption.

To enable Nova to come up with LVM ephemeral storage it must be
provided a
volume group. Based on an initial discussion with Dean Troyer,
this is best
achieved by creating a single volume group for all services that
potentially
need LVM storage; at the moment these are Nova and Cinder.

Implementation of this feature will:

 * move code in lib/cinder/cinder_backends/lvm to lib/lvm with
appropriate
   modifications

 * rename the Cinder volume group to something generic, e.g.,
devstack-vg

 * modify the Cinder initialization and cleanup code appropriately
to use
   the new volume group

 * initialize the volume group in stack.sh, shortly before
services are
   launched

 * cleanup the volume group in unstack.sh after the services have been
   shutdown

The question of how large to make the common Nova-Cinder volume
group in order
to enable LVM ephemeral Tempest testing will have to be explored,
although, given the tiny instance disks used in Nova Tempest tests, the current
Cinder volume group size may already be adequate.
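As a back-of-the-envelope check, the sizing question reduces to simple arithmetic. Every number below is an assumption chosen for illustration, not a DevStack default:

```python
# Hypothetical sizing estimate for a shared Nova/Cinder volume group.
# All numbers are illustrative assumptions, not DevStack defaults.
GiB = 1024 ** 3

cinder_reserve = 10 * GiB          # assumed space the existing Cinder tests use
tempest_concurrency = 4            # assumed number of parallel Tempest workers
ephemeral_per_instance = 1 * GiB   # tiny flavors used in gate runs
lvm_overhead = 0.05                # slack for LVM metadata and extent rounding

nova_need = tempest_concurrency * ephemeral_per_instance
total = int((cinder_reserve + nova_need) * (1 + lvm_overhead))
print(total // GiB)  # whole GiB required for the shared volume group -> 14
```

With these assumptions Nova's ephemeral disks add only a few GiB on top of what Cinder testing already reserves, which is consistent with the guess above that the current volume group size may suffice.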

No new configuration options will be necessary, assuming the
volume group size
will not be made configurable.




[openstack-dev] [FFE] requesting FFE for LVM ephemeral storage encryption

2014-09-04 Thread Dan Genin

I would like to request a feature freeze exception for

LVM ephemeral storage encryption[1].

The spec[2] was approved early in the Juno release cycle.

This feature provides security for data at-rest on compute nodes. The
proposed feature protects user data from disclosure due to disk block reuse
and improper storage media disposal among other threats and also eliminates
the need to sanitize LVM volumes.  The feature is crucial to data security
in OpenStack as explained in the OpenStack Security Guide[3] and benefits
cloud users and operators regardless of their industry and scale.

The feature was first submitted for review on August 6, 2013 and two of the
three patches implementing this feature were merged in Icehouse[4,5]. The
remaining patch has had approval from a core reviewer for most of the Icehouse
and Juno development cycles. The code is well vetted and ready to be merged.

The main concern about accepting this feature pertains to key management.
In particular, it uses Barbican to avoid storing keys on the compute host,
and Barbican at present has no gate testing.  However, the risk of
regression in case of failure to integrate Barbican is minimal because the
feature interacts with the key manager through an *existing* abstract keymgr
interface, i.e., has no *explicit* dependence on Barbican. Moreover, the
feature provides some measure of security even with the existing
place-holder key manager, for example, against disk block reuse attack.
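The decoupling argument can be illustrated with a minimal sketch. The class and method names below are hypothetical stand-ins, not Nova's actual key manager API:

```python
# Sketch of an abstract key-manager interface: the caller depends only on the
# interface, so swapping the placeholder backend for a Barbican-backed one
# requires no caller changes. Names are hypothetical, not Nova's real API.
from abc import ABC, abstractmethod

class KeyManager(ABC):
    @abstractmethod
    def create_key(self, context):
        """Return an opaque key ID."""

    @abstractmethod
    def get_key(self, context, key_id):
        """Return the key bytes for key_id."""

class PlaceholderKeyManager(KeyManager):
    """Single fixed key: still raises the bar against disk-block reuse,
    since raw LVM blocks are no longer plaintext."""
    _KEY = b"\x00" * 32

    def create_key(self, context):
        return "static-key-id"

    def get_key(self, context, key_id):
        return self._KEY

def setup_ephemeral_encryption(keymgr, context):
    # Only the abstract interface is used here -- no explicit Barbican import.
    key_id = keymgr.create_key(context)
    return keymgr.get_key(context, key_id)

key = setup_ephemeral_encryption(PlaceholderKeyManager(), context=None)
print(len(key))  # 32-byte (256-bit) volume key
```

A Barbican-backed subclass would implement the same two methods, which is why a failure to integrate Barbican cannot regress callers written against the interface.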

For all of the above reasons I request a feature freeze exception for
LVM ephemeral storage encryption.

Best regards,
Dan

1.https://review.openstack.org/#/c/40467/
2.https://blueprints.launchpad.net/nova/+spec/lvm-ephemeral-storage-encryption
3. http://docs.openstack.org/security-guide/content/
4. https://review.openstack.org/#/c/60621/
5. https://review.openstack.org/#/c/61544/







Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-03 Thread Dan Genin

On 09/03/2014 07:31 AM, Gary Kotton wrote:


On 9/3/14, 12:50 PM, Nikola Đipanov ndipa...@redhat.com wrote:


On 09/02/2014 09:23 PM, Michael Still wrote:

On Tue, Sep 2, 2014 at 1:40 PM, Nikola Đipanov ndipa...@redhat.com
wrote:

On 09/02/2014 08:16 PM, Michael Still wrote:

Hi.

We're soon to hit feature freeze, as discussed in Thierry's recent
email. I'd like to outline the process for requesting a freeze
exception:

 * your code must already be up for review
 * your blueprint must have an approved spec
 * you need three (3) sponsoring cores for an exception to be
granted

Can core reviewers who have features up for review have this number
lowered to two (2) sponsoring cores, as they in reality then need four
(4) cores (since they themselves are one (1) core but cannot really
vote) making it an order of magnitude more difficult for them to hit
this checkbox?

That's a lot of numbers in that there paragraph.

Let me re-phrase your question... Can a core sponsor an exception they
themselves propose? I don't have a problem with someone doing that,
but you need to remember that does reduce the number of people who
have agreed to review the code for that exception.


Michael has correctly picked up on a hint of snark in my email, so let
me explain where I was going with that:

The reason many features including my own may not make the FF is not
because there was not enough buy in from the core team (let's be
completely honest - I have 3+ other core members working for the same
company that are by nature of things easier to convince), but because of
any of the following:

* Crippling technical debt in some of the key parts of the code
* that we have not been acknowledging as such for a long time
* which leads to proposed code being arbitrarily delayed once it makes
the glaring flaws in the underlying infra apparent
* and that the specs process has been completely and utterly useless in
helping uncover (not that process itself is useless, it is very useful
for other things)

I am almost positive we can turn this rather dire situation around
easily in a matter of months, but we need to start doing it! It will not
happen through pinning arbitrary numbers to arbitrary processes.

I will follow up with a more detailed email about what I believe we are
missing, once the FF settles and I have applied some soothing creme to
my burnout wounds, but currently my sentiment is:

Contributing features to Nova nowadays SUCKS!!1 (even as a core
reviewer) We _have_ to change that!

+1

Sadly what you have written above is true. The current process does not
encourage new developers in Nova. I really think that we need to work on
improving our community. I really think that maybe we should sit as a
community at the summit and talk about this.

+2

N.


Michael


 * exceptions must be granted before midnight, Friday this week
(September 5) UTC
 * the exception is valid until midnight Friday next week
(September 12) UTC when all exceptions expire

For reference, our rc1 drops on approximately 25 September, so the
exception period needs to be short to maximise stabilization time.

John Garbutt and I will both be granting exceptions, to maximise our
timezone coverage. We will grant exceptions as they come in and gather
the required number of cores, although I have also carved some time
out in the nova IRC meeting this week for people to discuss specific
exception requests.

Michael





Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-02 Thread Dan Genin
Just out of curiosity, what is the rationale behind upping the number of 
core sponsors for a feature freeze exception to three (3) if only two +2s 
are required to merge? In Icehouse, IIRC, two core sponsors were deemed 
sufficient.


Dan

On 09/02/2014 02:16 PM, Michael Still wrote:

Hi.

We're soon to hit feature freeze, as discussed in Thierry's recent
email. I'd like to outline the process for requesting a freeze
exception:

 * your code must already be up for review
 * your blueprint must have an approved spec
 * you need three (3) sponsoring cores for an exception to be granted
 * exceptions must be granted before midnight, Friday this week
(September 5) UTC
 * the exception is valid until midnight Friday next week
(September 12) UTC when all exceptions expire

For reference, our rc1 drops on approximately 25 September, so the
exception period needs to be short to maximise stabilization time.

John Garbutt and I will both be granting exceptions, to maximise our
timezone coverage. We will grant exceptions as they come in and gather
the required number of cores, although I have also carved some time
out in the nova IRC meeting this week for people to discuss specific
exception requests.

Michael








Re: [openstack-dev] [nova] Prioritizing review of potentially approvable patches

2014-08-21 Thread Dan Genin

Hear, hear!

Dan

On 08/21/2014 07:57 AM, Daniel P. Berrange wrote:

Tagged with '[nova]' but this might be relevant data / idea for other
teams too.

With my code contributor hat on, one of the things that I find the most
frustrating about the Nova code review process is that a patch can get a +2
vote from one core team member and then sit around for days, weeks, even
months without getting a second +2 vote, even if it has no negative
feedback at all and is a simple & important bug fix.

If a patch is good enough to have received one +2 vote, then compared to
the open patches as a whole, this patch is much more likely to be one
that is ready for approval & merge. It will likely be easier to review,
since it can be assumed other reviewers have already caught the majority
of the silly / tedious / time consuming bugs.

Letting these patches languish with a single +2 for too long makes it very
likely that, when a second core reviewer finally appears, there will be a
merge conflict or other bit-rot that will cause it to have to undergo yet
another rebase  re-review. This is wasting time of both our contributors
and our review team.

On this basis I suggest that core team members should consider patches
that already have a +2 to be high(er) priority items to review than open
patches as a whole.

Currently Nova has (on master branch)

   - 158 patches which have at least one +2 vote, and are not approved
   - 122 patches which have at least one +2 vote, are not approved and
 don't have any -1 code review votes.

So that's 122 patches that should be easy candidates for merging right
now. Another 30 can possibly be merged depending on whether the core
reviewer agrees with the -1 feedback given or not.

That is way more patches than we should have outstanding in that state.
It is not unreasonable to say that once a patch has a single +2 vote, we
should aim to get either a second +2 vote or further -1 review feedback
in a matter of days, and certainly no longer than a week.

If everyone on the core team looked at the list of potentially approvable
patches each day I think it would significantly improve our throughput.
It would also decrease the amount of review work overall by reducing
chance that patches bitrot & need rebase for merge conflicts. And most
importantly of all it will give our code contributors a better impression
that we care about them.

As an added carrot, working through this list will be an effective way
to improve your rankings [1] against other core reviewers, not that I
mean to suggest we should care about rankings over review quality ;-P

The next version of gerrymander[2] will contain a new command to allow
core reviewers to easily identify these patches

$ gerrymander todo-approvable -g nova --branch master

This will of course filter out patches which you yourself own since you
can't approve your own work. It will also filter out patches which you
have given feedback on already. What's left will be a list of patches
where you are able to apply the casting +2 vote to get to +A state.
If the '--strict' arg is added it will also filter out any patches which
have a -1 code review comment.
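The filtering this command performs amounts to a simple predicate over open reviews. A self-contained sketch with invented sample records (this is not gerrymander's actual data model):

```python
# Sketch of the "potentially approvable" filter described above: open patches
# with at least one +2, not yet approved, not owned by the reviewer, and (in
# strict mode) no -1 code-review votes. Record format invented for illustration.
def todo_approvable(patches, me, strict=False):
    out = []
    for p in patches:
        votes = p["code_review_votes"]
        if p["owner"] == me:        # you can't approve your own work
            continue
        if p["approved"]:
            continue
        if 2 not in votes:          # needs an existing +2
            continue
        if strict and -1 in votes:  # --strict drops anything with a -1
            continue
        out.append(p["id"])
    return out

sample = [
    {"id": "A", "owner": "danpb", "approved": False, "code_review_votes": [2]},
    {"id": "B", "owner": "core2", "approved": False, "code_review_votes": [2, -1]},
    {"id": "C", "owner": "core2", "approved": True,  "code_review_votes": [2, 2]},
    {"id": "D", "owner": "core2", "approved": False, "code_review_votes": [1]},
]

print(todo_approvable(sample, me="danpb"))               # -> ['B']
print(todo_approvable(sample, me="danpb", strict=True))  # -> []
```

The strict and non-strict results correspond to the 122 versus 158 patch counts quoted above: the delta is exactly the set of +2 patches carrying an unresolved -1.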

Regards,
Daniel

[1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
[2] 
https://github.com/berrange/gerrymander/commit/790df913fc512580d92e808f28793e29783fecd7







[openstack-dev] LVM ephemeral storage encryption

2014-03-04 Thread Dan Genin

Hello Joe,

Sorry to bug you on what is probably a very busy day but, it
being the feature freeze and all, I just wanted to ask if there was any 
chance of the LVM ephemeral storage encryption patch 
https://review.openstack.org/#/c/40467/, that you -1'ed today, making 
it into Icehouse. The patch has received a lot of attention and has gone 
through numerous revisions. It is a pretty solid piece of code at this 
point.


Regarding your point about the lack of a trunk keymanager capable of 
providing different keys for encryption, you are, of course, correct. 
However, this situation is rapidly evolving and I believe that Barbican 
keymanager may achieve incubation status by the next release.


Thank you for your input and suggestions.
Best regards,
Dan




Re: [openstack-dev] [Nova] Proposal to add Matt Riedemann to nova-core

2014-01-27 Thread Dan Genin

As a reviewee of Matt I vote

+1

On 11/23/2013 10:17 AM, Gary Kotton wrote:

+1

On 11/23/13 4:53 PM, Sean Dague s...@dague.net wrote:

+1 would be happy to have Matt on the team

On Fri, Nov 22, 2013 at 8:23 PM, Brian Elliott bdelli...@gmail.com
wrote:
 +1

 Solid reviewer!

 Sent from my iPad

 On Nov 22, 2013, at 2:53 PM, Russell Bryant rbry...@redhat.com 
wrote:


 Greetings,

 I would like to propose adding Matt Riedemann to the nova-core review
team.

 Matt has been involved with nova for a long time, taking on a wide
range
 of tasks.  He writes good code.  He's very engaged with the 
development

 community.  Most importantly, he provides good code reviews and has
 earned the trust of other members of the review team.


https://review.openstack.org/#/dashboard/6873
https://review.openstack.org/#/q/owner:6873,n,z
https://review.openstack.org/#/q/reviewer:6873,n,z

 Please respond with +1/-1, or any further comments.

 Thanks,

 --
 Russell Bryant



Re: [openstack-dev] instance migration strangeness in devstack

2014-01-16 Thread Dan Genin
Thank you for replying, Vish. I did sync and verified that the file was 
written to the host disk by mounting the LVM volume on the host.


When I tried live migration I got a Horizon blurb Error: Failed to live 
migrate instance to host but there were no errors in syslog.


I have been able to successfully migrate a Qcow2 backed instance.
Dan

On 01/16/2014 03:18 AM, Vishvananda Ishaya wrote:
This is probably more of a usage question, but I will go ahead and 
answer it.


If you are writing to the root drive you may need to run the sync 
command a few times to make sure that the data has been flushed to 
disk before you kick off the migration.


The confirm resize step should be deleting the old data, but there may 
be a bug in the lvm backend if this isn’t happening. Live (block) 
migration will probably be a bit more intuitive.


Vish
On Jan 15, 2014, at 2:40 PM, Dan Genin daniel.ge...@jhuapl.edu 
mailto:daniel.ge...@jhuapl.edu wrote:


I think this qualifies as a development question but please let me 
know if I am wrong.


I have been trying to test instance migration in devstack by setting 
up a multi-node devstack following directions at 
http://devstack.org/guides/multinode-lab.html. I tested that indeed 
there were multiple availability zones and that it was possible to 
create instances in each. The I tried migrating an instance from one 
compute node to another using the Horizon interface (I could not find 
a way to /confirm///migration, which is a necessary step, from the 
command line). I created a test file on the instance's ephemeral 
disk, before migrating it, to verify that the data was moved to the 
destination compute node. After migration, I observed an instance 
with the same id active on the destination node but the test file was 
not present.


Perhaps I misunderstand how migration is supposed to work but I 
expected that the data on the ephemeral disk would be migrated with 
the instance. I suppose it could take some time for the ephemeral 
disk to be copied but then I would not expect the instance to become 
active on the destination node before the copy operation was complete.


I also noticed that the ephemeral disk on the source compute node was 
not removed after the instance was migrated, although, the instance 
directory was. Nor was the disk removed after the instance was 
destroyed. I was using LVM backend for my tests.


I can provide more information about my setup but I just wanted to 
check whether I was doing (or expecting) something obviously stupid.


Thank you,
Dan


Re: [openstack-dev] instance migration strangeness in devstack

2014-01-16 Thread Dan Genin

Raw backed instance migration also works so this appears to be an LVM issue.

On 01/16/2014 11:04 AM, Dan Genin wrote:
Thank you for replying, Vish. I did sync and verified that the file 
was written to the host disk by mounting the LVM volume on the host.


When I tried live migration I got a Horizon blurb Error: Failed to 
live migrate instance to host but there were no errors in syslog.


I have been able to successfully migrate a Qcow2 backed instance.
Dan

On 01/16/2014 03:18 AM, Vishvananda Ishaya wrote:
This is probably more of a usage question, but I will go ahead and 
answer it.


If you are writing to the root drive you may need to run the sync 
command a few times to make sure that the data has been flushed to 
disk before you kick off the migration.


The confirm resize step should be deleting the old data, but there 
may be a bug in the lvm backend if this isn’t happening. Live (block) 
migration will probably be a bit more intuitive.


Vish
On Jan 15, 2014, at 2:40 PM, Dan Genin daniel.ge...@jhuapl.edu 
mailto:daniel.ge...@jhuapl.edu wrote:


I think this qualifies as a development question but please let me 
know if I am wrong.


I have been trying to test instance migration in devstack by setting 
up a multi-node devstack following directions at 
http://devstack.org/guides/multinode-lab.html. I tested that indeed 
there were multiple availability zones and that it was possible to 
create instances in each. Then I tried migrating an instance from one 
compute node to another using the Horizon interface (I could not 
find a way to confirm migration, which is a necessary step, from
the command line). I created a test file on the instance's ephemeral 
disk, before migrating it, to verify that the data was moved to the 
destination compute node. After migration, I observed an instance 
with the same id active on the destination node but the test file 
was not present.


Perhaps I misunderstand how migration is supposed to work but I 
expected that the data on the ephemeral disk would be migrated with 
the instance. I suppose it could take some time for the ephemeral 
disk to be copied but then I would not expect the instance to become 
active on the destination node before the copy operation was complete.


I also noticed that the ephemeral disk on the source compute node 
was not removed after the instance was migrated, although, the 
instance directory was. Nor was the disk removed after the instance 
was destroyed. I was using LVM backend for my tests.


I can provide more information about my setup but I just wanted to 
check whether I was doing (or expecting) something obviously stupid.


Thank you,
Dan


Re: [openstack-dev] instance migration strangeness in devstack

2014-01-16 Thread Dan Genin

OK, thank you for the sanity check.

Dan

On 01/16/2014 11:29 AM, Vishvananda Ishaya wrote:
In that case, this sounds like a bug to me related to lvm volumes. You 
should check the nova-compute.log from both hosts and the 
nova-conductor.log. If it isn’t obvious what the problem is, you 
should open a bug and attach as much info as possible.


Vish

On Jan 16, 2014, at 8:04 AM, Dan Genin daniel.ge...@jhuapl.edu 
mailto:daniel.ge...@jhuapl.edu wrote:


Thank you for replying, Vish. I did sync and verified that the file 
was written to the host disk by mounting the LVM volume on the host.


When I tried live migration I got a Horizon blurb Error: Failed to 
live migrate instance to host but there were no errors in syslog.


I have been able to successfully migrate a Qcow2 backed instance.
Dan

On 01/16/2014 03:18 AM, Vishvananda Ishaya wrote:
This is probably more of a usage question, but I will go ahead and 
answer it.


If you are writing to the root drive you may need to run the sync 
command a few times to make sure that the data has been flushed to 
disk before you kick off the migration.


The confirm resize step should be deleting the old data, but there 
may be a bug in the lvm backend if this isn’t happening. Live (block) 
migration will probably be a bit more intuitive.


Vish
On Jan 15, 2014, at 2:40 PM, Dan Genin daniel.ge...@jhuapl.edu 
mailto:daniel.ge...@jhuapl.edu wrote:


I think this qualifies as a development question but please let me 
know if I am wrong.


I have been trying to test instance migration in devstack by 
setting up a multi-node devstack following directions at 
http://devstack.org/guides/multinode-lab.html. I tested that indeed 
there were multiple availability zones and that it was possible to 
create instances in each. Then I tried migrating an instance from 
one compute node to another using the Horizon interface (I could 
not find a way to confirm migration, which is a necessary step,
from the command line). I created a test file on the instance's 
ephemeral disk, before migrating it, to verify that the data was 
moved to the destination compute node. After migration, I observed 
an instance with the same id active on the destination node but the 
test file was not present.


Perhaps I misunderstand how migration is supposed to work but I 
expected that the data on the ephemeral disk would be migrated with 
the instance. I suppose it could take some time for the ephemeral 
disk to be copied but then I would not expect the instance to 
become active on the destination node before the copy operation was 
complete.


I also noticed that the ephemeral disk on the source compute node 
was not removed after the instance was migrated, although, the 
instance directory was. Nor was the disk removed after the instance 
was destroyed. I was using LVM backend for my tests.


I can provide more information about my setup but I just wanted to 
check whether I was doing (or expecting) something obviously stupid.


Thank you,
Dan


[openstack-dev] instance migration strangeness in devstack

2014-01-15 Thread Dan Genin
I think this qualifies as a development question but please let me know 
if I am wrong.


I have been trying to test instance migration in devstack by setting up 
a multi-node devstack following directions at 
http://devstack.org/guides/multinode-lab.html. I tested that indeed 
there were multiple availability zones and that it was possible to 
create instances in each. Then I tried migrating an instance from one 
compute node to another using the Horizon interface (I could not find a 
way to confirm migration, which is a necessary step, from the command
line). I created a test file on the instance's ephemeral disk, before 
migrating it, to verify that the data was moved to the destination 
compute node. After migration, I observed an instance with the same id 
active on the destination node but the test file was not present.
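The verification step described above boils down to comparing a digest of the test file before and after migration. A local sketch of that check follows; the paths and the "migration" step are stand-ins, since in a real test the second read happens on the destination compute node:

```python
# Sketch of the data-integrity check: drop a marker file on the ephemeral
# disk, record its digest, and compare after migration. The temp directory
# stands in for the ephemeral disk and the migration itself is elided.
import hashlib
import os
import tempfile

def digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

mount = tempfile.mkdtemp()                    # stand-in for the ephemeral disk
marker = os.path.join(mount, "migration-canary")
with open(marker, "wb") as f:
    f.write(b"written before migration\n")

before = digest(marker)
# ... migrate the instance, then re-read the same path on the destination ...
after = digest(marker)

# A missing file or differing digest indicates lost ephemeral data.
print(before == after)
```

In the run described above the marker file was absent entirely on the destination, which fails this check in the most obvious way possible.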


Perhaps I misunderstand how migration is supposed to work but I expected 
that the data on the ephemeral disk would be migrated with the instance. 
I suppose it could take some time for the ephemeral disk to be copied 
but then I would not expect the instance to become active on the 
destination node before the copy operation was complete.


I also noticed that the ephemeral disk on the source compute node was 
not removed after the instance was migrated, although, the instance 
directory was. Nor was the disk removed after the instance was 
destroyed. I was using LVM backend for my tests.


I can provide more information about my setup but I just wanted to check 
whether I was doing (or expecting) something obviously stupid.


Thank you,
Dan






Re: [openstack-dev] [nova][documentation][devstack] Confused about how to set up a Nova development environment

2014-01-10 Thread Dan Genin

On 01/09/2014 06:14 PM, Brant Knudson wrote:




On Thu, Jan 9, 2014 at 12:21 PM, Mike Spreitzer mspre...@us.ibm.com wrote:


Brant Knudson b...@acm.org wrote on 01/09/2014 10:07:27 AM:


 When I was starting out, I ran devstack (
http://devstack.org/) on
 an Ubuntu VM. You wind up with a system where you've got a basic
 running OpenStack so you can try things out with the command-line
 utilities, and also do development because it checks out all the
 repos. I learned a lot, and it's how I still do development.

What sort(s) of testing do you do in that environment, and how?


Just running devstack exercises quite a bit of code, because it sets 
up users and projects and loads images. Now you've got a system 
that's set up so you can add your own images and boot them using 
regular OpenStack commands, and you can use the command-line utilities 
or REST API to exercise your changes. The command-line utilities and 
REST API are documented. There are some things that you aren't going 
to be able to do with devstack because it's a single node, but that 
hasn't affected my development.


 Does your code editing interfere with the running DevStack?


Code editing doesn't interfere with a running DevStack. After you make 
a change you can find the process's window in devstack's screen session 
and restart it to pick up your changes. For example, if you made a 
change that affects nova-api, you can restart the process in the n-api 
screen window.
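To make that restart step concrete, here is a small Python sketch (not DevStack code; the "stack" session and "n-api" window names are assumptions based on the default devstack setup) that sends an interrupt and a relaunch command to a service's screen window:

```python
import subprocess

def stuff_cmd(window, keys, session="stack"):
    # Build the `screen` invocation that types `keys` into the given
    # window of the named session, just as you would by hand.
    return ["screen", "-S", session, "-p", window, "-X", "stuff", keys]

def restart_devstack_service(window, command, session="stack"):
    # Ctrl-C ("\003") interrupts the running service; the command line
    # then relaunches it so it picks up your edited code.
    for keys in ("\003", command + "\n"):
        subprocess.check_call(stuff_cmd(window, keys, session))

# Example: restart_devstack_service("n-api", "/usr/local/bin/nova-api")
```

This is equivalent to attaching to the session with `screen -x stack`, selecting the n-api window, and retyping the command by hand.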


 Can you run the unit tests without interference from/to the
running DevStack?


Running unit tests doesn't interfere with DevStack. The unit tests run 
in their own processes and also run in a virtual environment.


 How do you do bigger tests?


For bigger tests I'd need a cluster which I don't have, so I don't do 
bigger tests.
It is possible to set up a multi-node devstack cloud using multiple VMs; 
see http://devstack.org/guides/multinode-lab.html. Depending on what you 
are trying to test, e.g., VM migration, this may be sufficient.


 What is the process for switching from running the merged code to
running your modified code?


I'm not sure what you mean by the merged code. I use Eclipse, so I 
create a project for each of the directories in /opt/stack so I can 
edit the code right there.


 Are the answers documented someplace I have not found?

Thanks,
Mike


Not that I know of... the wiki pages are editable, so you or I could 
update them to help out others.


- Brant








Re: [openstack-dev] [nova] Thoughs please on how to address a problem with mutliple deletes leading to a nova-compute thread pool problem

2013-10-30 Thread Dan Genin
Punishing benign users as a defense against (potentially) malicious 
users sounds like a bad strategy. This should not be a zero-sum game.


On 10/28/2013 02:49 PM, Joshua Harlow wrote:

Sure, convergence model is great and likely how it has to be done.

It's just a question of what that convergence model is :)

I agree that it's bad customer service to say 'yes you tried to delete it but
I am charging you anyway', but I think the difference is that the user
actually still has access to those resources when they have not completed
deletion (due to, say, a network partition). So this makes it a nice feature
for malicious users to take advantage of: freeing their quota while still
having access to the resources that previously existed under that quota. I'd
sure like that if I were a malicious user (free stuff!). Quotas are, as you
said, 'records of intentions', but they also permit/deny access to further
resources, and it's the further resources that are the problem, not the
record of intention (which at its simplest is just a write-ahead log).

What is stopping that write-ahead-log from being used at/in the billing
'system' and removing 'charges' for deletes that have not completed (if
this is how a deployer wants to operate)?

IMHO, I think this all goes back to having a well-defined state machine in
nova (and elsewhere), where that state machine can be altered to have
states that, say, prefer consistency vs. user happiness.

On 10/28/13 9:29 AM, Clint Byrum cl...@fewbar.com wrote:


Excerpts from Joshua Harlow's message of 2013-10-28 09:01:44 -0700:

Except I think the CAP theorem would say that you can't accurately give
back their quota under things like network partitions.

If nova-compute and the message queue have a network partition then you
can release their quota but can't actually delete their vms. I would
actually prefer to not release their quota, but then this should be a
deployer decision and not a one-size-fits-all decision (IMHO).


CAP encourages convergence models to satisfy problems with consistency.
Quotas and records of allocated resources are records of intention and
we can converge the physical resources with the expressed intentions
later. The speed with which you do that is part of the cost of network
partition failures and should be considered when assessing and mitigating
risk.

It is really bad customer service to tell somebody Yes I know you've
asked me to stop charging you, but my equipment has failed so I MUST
keep charging you. Reminds me of that gym membership I tried to
cancel... _TRIED_.
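The 'record of intention' idea both sides circle around can be sketched in a few lines of Python. This is a toy model, not Nova code, and every name in it is illustrative: a delete request frees quota immediately by logging the intention, and a later convergence pass reconciles the log with the resources that are actually reachable.

```python
class QuotaLedger:
    """Toy quota model: intentions are logged first, converged later."""

    def __init__(self, limit):
        self.limit = limit
        self.active = set()           # resources that physically exist
        self.pending_delete = set()   # write-ahead log of delete intentions

    def allocate(self, res):
        # Quota counts active resources minus those the user has already
        # asked to delete, so a pending delete frees quota immediately.
        if len(self.active) - len(self.pending_delete) >= self.limit:
            raise RuntimeError("quota exceeded")
        self.active.add(res)

    def request_delete(self, res):
        # Record the intention even if the hypervisor is unreachable.
        self.pending_delete.add(res)

    def converge(self, reachable):
        # Convergence pass: actually remove what we can reach now.
        done = {r for r in self.pending_delete if r in reachable}
        self.active -= done
        self.pending_delete -= done
```

In this toy model the deployer-policy question becomes a one-line choice: whether allocate() subtracts pending_delete (release quota at intent time) or ignores it (release only after convergence).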








[openstack-dev] Nova master appears to fail PEP8 compliance

2013-08-21 Thread Dan Genin

Attempting to submit code to Nova, I got the following error from Jenkins:

2013-08-21 17:23:55.604 | pep8 runtests: commands[0] | flake8
2013-08-21 17:23:55.615 |   /home/jenkins/workspace/gate-nova-pep8$ 
/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/bin/flake8
2013-08-21 17:25:44.400 | ./nova/tests/compute/test_compute_api.py:535:10: H202 
 assertRaises Exception too broad
2013-08-21 17:25:44.879 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/bin/flake8'
2013-08-21 17:25:44.880 | pep8 runtests: commands[1] | 
/home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh
2013-08-21 17:25:44.884 |   /home/jenkins/workspace/gate-nova-pep8$ 
/home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh
2013-08-21 17:25:48.466 | ___ summary 

2013-08-21 17:25:48.466 | ERROR:   pep8: commands failed


Since I did not modify test_compute_api.py, it seems the problem is 
with the Nova master branch. My code was branched from commit 8f47cb6399.
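For context, H202 flags tests that assert on the bare Exception base class, which can mask unrelated failures. A minimal illustration of the rule and its fix (hypothetical names, not the actual Nova test):

```python
import unittest

class WidgetError(Exception):
    """Hypothetical error type for the example."""

def load_widget(name):
    if not name:
        raise WidgetError("empty name")
    return name.upper()

class TestWidget(unittest.TestCase):
    def test_rejects_empty_name(self):
        # Too broad -- this is what H202 complains about, because any
        # accidental TypeError, KeyError, etc. would also pass:
        #   self.assertRaises(Exception, load_widget, "")
        # Assert the narrowest exception type the code actually raises:
        self.assertRaises(WidgetError, load_widget, "")
```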


Dan





Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-09 Thread Dan Genin

Thank you, that solved it.

Dan

On 08/09/2013 11:11 AM, Sean Dague wrote:

This should be addressed by the latest devstack; however, because we
moved oslo.config out of git, some install environments might still
have an oslo.config 1.1.0 somewhere that pip no longer sees (so it
can't uninstall it):

sudo pip install oslo.config
sudo pip uninstall oslo.config

rerun devstack, see if it works.

-Sean

On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:

Tried to install devstack to dedicated server, ip's are defined.

Here's the output:

2013-08-09 09:06:28 ++ echo -ne '\015'

2013-08-09 09:06:28 + NL=$'\r'
2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd /opt/stack/nova && 
/usr/local/bin/nova-api || touch /opt/stack/status/stack/n-api.failure'
2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
2013-08-09 09:06:28 Waiting for nova-api to start...
2013-08-09 09:06:28 + wait_for_service 60 http://192.168.1.6:8774
2013-08-09 09:06:28 + local timeout=60
2013-08-09 09:06:28 + local url=http://192.168.1.6:8774
2013-08-09 09:06:28 + timeout 60 sh -c 'while ! http_proxy= https_proxy= curl 
-s http://192.168.1.6:8774 > /dev/null; do sleep 1; done'
2013-08-09 09:07:28 + die 698 'nova-api did not start'
2013-08-09 09:07:28 + local exitcode=0
stack@hp:~/devstack$ 2013-08-09 09:07:28 + set +o xtrace
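The wait loop in that log (stack.sh polling the API endpoint until it answers or a timeout expires) is easy to sketch in Python; this is an illustration of the pattern, not devstack's actual wait_for_service:

```python
import socket
import time

def wait_for_service(timeout, host, port, interval=1.0):
    # Poll until a TCP connect succeeds or the deadline passes,
    # mirroring stack.sh's curl-in-a-loop readiness check.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```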

Here's the log:

2013-08-09 09:07:28 [ERROR] ./stack.sh:698 nova-api did not start
stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
cd /opt/stack/nova && /usr/local/bin/nova-api || touch /opt/stack/status/stack/n-api.failure
Traceback (most recent call last):
  File "/usr/local/bin/nova-api", line 6, in <module>
    from nova.cmd.api import main
  File "/opt/stack/nova/nova/cmd/api.py", line 29, in <module>
    from nova import config
  File "/opt/stack/nova/nova/config.py", line 22, in <module>
    from nova.openstack.common.db.sqlalchemy import session as db_session
  File "/opt/stack/nova/nova/openstack/common/db/sqlalchemy/session.py", line 279, in <module>
    deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
AttributeError: 'module' object has no attribute 'DeprecatedOpt'











