Re: [openstack-dev] [Fuel] Nominate Ilya Kutukov for fuel-plugins-core

2016-09-09 Thread Dmitry Guryanov
+1

On Fri, Sep 9, 2016 at 7:34 AM, Alexey Stepanov <astepa...@mirantis.com>
wrote:

> +1
> Best regards,
> Alexey Stepanov.
>
> чт, 8 сент. 2016 г., 12:19 Bulat Gaifullin <bgaiful...@mirantis.com>:
>
>> +1
>>
>> Regards,
>> Bulat Gaifullin
>> Mirantis Inc.
>>
>>
>>
>> On 08 Sep 2016, at 12:05, Georgy Kibardin <gkibar...@mirantis.com> wrote:
>>
>> +1
>>
>> On Thu, Sep 8, 2016 at 11:54 AM, Igor Kalnitsky <i...@kalnitsky.org>
>> wrote:
>>
>>> Hey Fuelers,
>>>
>>> I'd like to nominate Ilya for joining the fuel-plugins-core group. He's a
>>> top contributor by both reviews [1] and commits [2] over the past
>>> release cycle. Fuel cores, please share your votes.
>>>
>>> - Igor
>>>
>>> [1] http://stackalytics.com/?module=fuel-plugins&release=newton&metric=marks
>>> [2] http://stackalytics.com/?module=fuel-plugins&release=newton&metric=commits
>>>


-- 
Dmitry Guryanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Unrelated changes in patches

2016-04-04 Thread Dmitry Guryanov
Hello, colleagues!

It's often not easy to decide whether to include unrelated changes in your
patch, like fixing whitespace, renaming variables or other things that don't
change the logic. On the one hand, you see something wrong with the code and
you'd like to fix it; on the other hand, reviewers can vote -1 and you'll have
to fix your patch and upload it again, which is very annoying. You can also
create a separate review for such changes, but that requires additional effort
from you and the reviewers.

If you are a reviewer and you've noticed unrelated changes, you may hesitate
over whether to ask the author to remove them and upload a new version of the
patch. Such extra changes can also be confusing.

So I suggest creating separate patches for unrelated changes if they add new
chunks to the patch. I'd also like to ask authors to state clearly in the
subject of the commit message that a patch only fixes formatting. Reviewers
shouldn't review such patches too severely, so that they get into the repo as
soon as possible.

What do you think?


-- 
Dmitry Guryanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-25 Thread Dmitry Guryanov
On Fri, 2016-03-25 at 08:00 -0600, Alex Schultz wrote:
> 
> On Fri, Mar 25, 2016 at 7:32 AM, Dmitry Guryanov <dguryanov@mirantis.
> com> wrote:
> > Here is the bug which I'm trying to fix - https://bugs.launchpad.net/fuel/+bug/1538587.
> > 
> > In VMs (set up with fuel-virtualbox) a kernel panic occurs every
> > time you delete a node; the stack trace shows an error in the ext4
> > driver [1], the same as in the bug.
> > 
> > Here is a patch - https://review.openstack.org/297669 . I've
> > checked it with virtual box VMs and it works fine.
> > 
> > I also propose not rebooting nodes in case of a kernel panic, so
> > that we'll catch possible errors, but maybe it's too dangerous
> > before the release.
> > 
> > 
> The panic is in there to prevent controllers from staying active with
> a bad disk. If the file system on a controller goes RO, the node
> stays in the cluster and causes errors with the openstack
> deployment.  The node erase code tries to disable this prior to
> erasing the disk so if it's not working we need to fix that, not
> remove it.

With my patch there will be no filesystem errors caused by erasing disks.
The node will remain fully operable until reboot.


> Thanks,
> -Alex
>  
> > [1]
> > [13607.545119] EXT4-fs error (device dm-0) in
> > ext4_reserve_inode_write:4928: IO failure
> > [13608.157968] EXT4-fs error (device dm-0) in
> > ext4_reserve_inode_write:4928: IO failure
> > [13608.780695] EXT4-fs error (device dm-0) in
> > ext4_reserve_inode_write:4928: IO failure
> > [13609.471245] Aborting journal on device dm-0-8.
> > [13609.478549] EXT4-fs error (device dm-0) in
> > ext4_dirty_inode:5047: IO failure
> > [13610.069244] EXT4-fs error (device dm-0) in
> > ext4_dirty_inode:5047: IO failure
> > [13610.698915] Kernel panic - not syncing: EXT4-fs (device dm-0):
> > panic forced after error
> > [13610.698915] 
> > [13611.060673] CPU: 0 PID: 8676 Comm: systemd-udevd Not tainted
> > 3.13.0-83-generic #127-Ubuntu
> > [13611.236566] Hardware name: innotek GmbH VirtualBox/VirtualBox,
> > BIOS VirtualBox 12/01/2006
> > [13611.887198]  fffb 88003b6e9a08 81725992
> > 81a77878
> > [13612.527154]  88003b6e9a80 8171e80b 0010
> > 88003b6e9a90
> > [13613.037061]  88003b6e9a30 88003b6e9a50 8800367f2ad0
> > 0040
> > [13613.717119] Call Trace:
> > [13613.927162]  [] dump_stack+0x45/0x56
> > [13614.306858]  [] panic+0xc8/0x1e1
> > [13614.767154]  []
> > ext4_handle_error.part.187+0xa6/0xb0
> > [13615.187201]  [] __ext4_std_error+0x7b/0x100
> > [13615.627960]  []
> > ext4_reserve_inode_write+0x44/0xa0
> > [13616.007943]  [] ? ext4_dirty_inode+0x40/0x60
> > [13616.448084]  []
> > ext4_mark_inode_dirty+0x44/0x1f0
> > [13616.917611]  [] ?
> > __ext4_journal_start_sb+0x69/0xe0
> > [13617.367730]  [] ext4_dirty_inode+0x40/0x60
> > [13617.747567]  [] __mark_inode_dirty+0x10a/0x2d0
> > [13618.088060]  [] update_time+0x81/0xd0
> > [13618.467965]  [] file_update_time+0x80/0xd0
> > [13618.977649]  []
> > __generic_file_aio_write+0x180/0x3d0
> > [13619.467993]  []
> > generic_file_aio_write+0x58/0xa0
> > [13619.978080]  [] ext4_file_write+0xa2/0x3f0
> > [13620.467624]  [] ?
> > free_hot_cold_page_list+0x46/0xa0
> > [13621.038045]  [] ? release_pages+0x80/0x210
> > [13621.408080]  [] do_sync_write+0x5a/0x90
> > [13621.818155]  [] do_acct_process+0x4e6/0x5c0
> > [13622.278005]  [] acct_process+0x71/0xa0
> > [13622.597617]  [] do_exit+0x80f/0xa50
> > [13622.968015]  [] ? fput+0xe/0x10
> > [13623.337738]  [] do_group_exit+0x3f/0xa0
> > [13623.738020]  [] SyS_exit_group+0x14/0x20
> > [13624.137447]  [] system_call_fastpath+0x1a/0x1f
> > [13624.518044] Rebooting in 10 seconds..
> > 
> > On Tue, Mar 22, 2016 at 1:07 PM, Dmitry Guryanov <dguryanov@miranti
> > s.com> wrote:
> > > Hello,
> > > 
> > > Here is the start of the discussion -
> > > http://lists.openstack.org/pipermail/openstack-dev/2015-December/083021.html .
> > > I subscribed to this mailing list later, so I can't reply there.
> > > 
> > > Currently we clear node's disks in two places. The first one is
> > > before reboot into bootstrap image [0] and the second - just
> > > before provisioning in fuel-agent [1].
> > > 
> > > There are two problems, which should be solved with erasing first
> > > megabyte of disk data: node should not boot from hdd after reboot
> > > and n

Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-25 Thread Dmitry Guryanov
Here is the bug which I'm trying to fix -
https://bugs.launchpad.net/fuel/+bug/1538587.

In VMs (set up with fuel-virtualbox) a kernel panic occurs every time you
delete a node; the stack trace shows an error in the ext4 driver [1], the
same as in the bug.

Here is a patch - https://review.openstack.org/297669 . I've checked it
with VirtualBox VMs and it works fine.

I also propose not rebooting nodes in case of a kernel panic, so that we'll
catch possible errors, but maybe it's too dangerous before the release.

[1]
[13607.545119] EXT4-fs error (device dm-0) in
ext4_reserve_inode_write:4928: IO failure
[13608.157968] EXT4-fs error (device dm-0) in
ext4_reserve_inode_write:4928: IO failure
[13608.780695] EXT4-fs error (device dm-0) in
ext4_reserve_inode_write:4928: IO failure
[13609.471245] Aborting journal on device dm-0-8.
[13609.478549] EXT4-fs error (device dm-0) in ext4_dirty_inode:5047: IO
failure
[13610.069244] EXT4-fs error (device dm-0) in ext4_dirty_inode:5047: IO
failure
[13610.698915] Kernel panic - not syncing: EXT4-fs (device dm-0): panic
forced after error
[13610.698915]
[13611.060673] CPU: 0 PID: 8676 Comm: systemd-udevd Not tainted
3.13.0-83-generic #127-Ubuntu
[13611.236566] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS
VirtualBox 12/01/2006
[13611.887198]  fffb 88003b6e9a08 81725992
81a77878
[13612.527154]  88003b6e9a80 8171e80b 0010
88003b6e9a90
[13613.037061]  88003b6e9a30 88003b6e9a50 8800367f2ad0
0040
[13613.717119] Call Trace:
[13613.927162]  [] dump_stack+0x45/0x56
[13614.306858]  [] panic+0xc8/0x1e1
[13614.767154]  [] ext4_handle_error.part.187+0xa6/0xb0
[13615.187201]  [] __ext4_std_error+0x7b/0x100
[13615.627960]  [] ext4_reserve_inode_write+0x44/0xa0
[13616.007943]  [] ? ext4_dirty_inode+0x40/0x60
[13616.448084]  [] ext4_mark_inode_dirty+0x44/0x1f0
[13616.917611]  [] ? __ext4_journal_start_sb+0x69/0xe0
[13617.367730]  [] ext4_dirty_inode+0x40/0x60
[13617.747567]  [] __mark_inode_dirty+0x10a/0x2d0
[13618.088060]  [] update_time+0x81/0xd0
[13618.467965]  [] file_update_time+0x80/0xd0
[13618.977649]  [] __generic_file_aio_write+0x180/0x3d0
[13619.467993]  [] generic_file_aio_write+0x58/0xa0
[13619.978080]  [] ext4_file_write+0xa2/0x3f0
[13620.467624]  [] ? free_hot_cold_page_list+0x46/0xa0
[13621.038045]  [] ? release_pages+0x80/0x210
[13621.408080]  [] do_sync_write+0x5a/0x90
[13621.818155]  [] do_acct_process+0x4e6/0x5c0
[13622.278005]  [] acct_process+0x71/0xa0
[13622.597617]  [] do_exit+0x80f/0xa50
[13622.968015]  [] ? fput+0xe/0x10
[13623.337738]  [] do_group_exit+0x3f/0xa0
[13623.738020]  [] SyS_exit_group+0x14/0x20
[13624.137447]  [] system_call_fastpath+0x1a/0x1f
[13624.518044] Rebooting in 10 seconds..

On Tue, Mar 22, 2016 at 1:07 PM, Dmitry Guryanov <dgurya...@mirantis.com>
wrote:

> Hello,
>
> Here is the start of the discussion -
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/083021.html
> . I subscribed to this mailing list later, so I can't reply there.
>
> Currently we clear a node's disks in two places: the first is before the
> reboot into the bootstrap image [0], and the second is just before
> provisioning in fuel-agent [1].
>
> There are two problems that erasing the first megabyte of disk data is
> meant to solve: the node should not boot from HDD after reboot, and the
> new partitioning scheme should overwrite the previous one.
>
> The first problem can be solved by zeroing the first 512 bytes of each
> disk (not partition); 446 bytes to be precise, because the last 66 bytes
> hold the partition table, see
> https://wiki.archlinux.org/index.php/Master_Boot_Record .
>
> The second problem should be solved only after the reboot into bootstrap,
> because if we bring a new node into the cluster from somewhere else and
> boot it with the bootstrap image, it may have disks with existing
> partitions, md devices and LVM volumes. All these entities should be
> correctly cleared before provisioning, not before reboot, and fuel-agent
> does that in [1].
>
> I propose removing the erasing of the first 1M of each partition, because
> it can lead to errors in FS kernel drivers and a kernel panic. The existing
> workaround, rebooting in case of a kernel panic, is bad because the panic
> may occur just after the first partition of the first disk has been
> cleared, and after the reboot the BIOS will read the MBR of the second disk
> and boot from it instead of from the network. Let's just clear the first
> 446 bytes of each disk.
>
>
> [0]
> https://github.com/openstack/fuel-astute/blob/master/mcagents/erase_node.rb#L162-L174
> [1]
> https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L194-L221
>
>
> --
> Dmitry Guryanov
>



-- 
Dmitry Guryanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-25 Thread Dmitry Guryanov
I've carefully read the article about the bios_grub partition you linked to,
and it turns out it's only used for non-UEFI boot. In this case it's
impossible to boot without stage1 (in the MBR), because our PXE doesn't touch
hard disks. So clearing the BIOS boot partition won't bring any advantage. I
suggest we just clear the boot code in the MBR; let's not add unneeded code.

On Thu, Mar 24, 2016 at 2:13 PM, Alexander Gordeev <agord...@mirantis.com>
wrote:

>
>
> On Wed, Mar 23, 2016 at 7:49 PM, Dmitry Guryanov <dgurya...@mirantis.com>
> wrote:
>
>>
>> I have no objections against clearing bios boot partition, but could
>> you describe scenario, how non-efi system will boot with valid
>> BIOS_grub and wiped boot code in MBR
>>
>
>
> I thoroughly agree that it's impossible to boot without stage1 from the
> disk for a non-UEFI system. Besides, that doesn't mean we shouldn't wipe
> the dedicated BIOS_grub partition.
>
> But... how about network booting over PXE? I'm not quite sure if it's
> still technically possible. I read that stage1 just contains an LBA48
> pointer to stage1.5 or stage2. So I can imagine a case where somebody
> has tweaked the PXE loader so it jumps to a predefined LBA48 pointer
> where stage1.5/2 resides, bypassing stage1 entirely.
>
> I know that the partitioning layout for the first two partitions is always
> the same for all target nodes. The actual partition boundaries may vary
> slightly due to alignment depending on the h/w itself. If all nodes are
> equipped with identical h/w (which is almost true for real deployments),
> then the BIOS_grub partition resides at the same LBA48 pointer on all
> nodes. So, even if it may sound too tricky and requires a lot of manual
> steps, it's still possible. No? Did I miss something?
>


-- 
Dmitry Guryanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-23 Thread Dmitry Guryanov
On Wed, 2016-03-23 at 18:22 +0300, Alexander Gordeev wrote:
> Hello Dmitry,
> 
> .
>  
> Yep, astute needs to be fixed, as the way it wipes the disks is
> too fragile, dangerous and not always reliable due to what you
> mentioned above.
> 
> Nope, I think that zeroing 446 bytes is not enough. Why don't we
> want to wipe the bios_boot partition too? Let's wipe all grub
> leftovers such as bios_boot partitions as well. They don't contain
> any FS, so it's unlikely that the kernel or any other process will
> prevent us from wiping them. No errors or kernel panics are expected.
> 
> 
> On Tue, Mar 22, 2016 at 5:06 PM, Dmitry Guryanov <dguryanov@mirantis.
> com> wrote:
> > For GPT disks and non-UEFI boot this method will work, since the MBR
> > will still contain the first stage of the bootloader code.
> > 
> Agreed, it will work. But what about the bios_boot partition? What do
> you think?
> 

I have no objections to clearing the BIOS boot partition, but could you
describe a scenario in which a non-EFI system will boot with a valid
BIOS_grub partition and wiped boot code in the MBR?

> 
> Thanks,  Alex.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-22 Thread Dmitry Guryanov
For GPT disks and non-UEFI boot this method will work, since the MBR will
still contain the first stage of the bootloader code. For UEFI boot things
are a little more complicated: we have to find the EFI system partition,
mount it and remove or edit some files.

On Tue, Mar 22, 2016 at 4:26 PM, Bulat Gaifullin <bgaiful...@mirantis.com>
wrote:

> What about GPT [1] disks?
> As far as I know we have plans to support UEFI boot and GPT disks.
>
>
> [1] https://en.wikipedia.org/wiki/GUID_Partition_Table
>
> Regards,
> Bulat Gaifullin
> Mirantis Inc.
>
>
>
> > On 22 Mar 2016, at 13:46, Dmitry Guryanov <dgurya...@mirantis.com>
> wrote:
> >
> > On Tue, 2016-03-22 at 13:07 +0300, Dmitry Guryanov wrote:
> >> Hello,
> >>
> >>  ..
> >>
> >> [0] https://github.com/openstack/fuel-astute/blob/master/mcagents/erase_node.rb#L162-L174
> >> [1] https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L194-L221
> >
> >
> > Sorry, here is a correct link:
> > https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L228-L252
> >
> >
> >>
> >>
> >> --
> >> Dmitry Guryanov
> >
> >



-- 
Dmitry Guryanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-22 Thread Dmitry Guryanov
On Tue, 2016-03-22 at 13:07 +0300, Dmitry Guryanov wrote:
> Hello,
> 
> ..
> 
> [0] https://github.com/openstack/fuel-astute/blob/master/mcagents/erase_node.rb#L162-L174
> [1] https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L194-L221


Sorry, here is a correct link:
https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L228-L252


> 
> 
> -- 
> Dmitry Guryanov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-22 Thread Dmitry Guryanov
Hello,

Here is the start of the discussion -
http://lists.openstack.org/pipermail/openstack-dev/2015-December/083021.html
. I subscribed to this mailing list later, so I can't reply there.

Currently we clear a node's disks in two places: the first is before the
reboot into the bootstrap image [0], and the second is just before
provisioning in fuel-agent [1].

There are two problems that erasing the first megabyte of disk data is meant
to solve: the node should not boot from HDD after reboot, and the new
partitioning scheme should overwrite the previous one.

The first problem can be solved by zeroing the first 512 bytes of each disk
(not partition); 446 bytes to be precise, because the last 66 bytes hold the
partition table, see
https://wiki.archlinux.org/index.php/Master_Boot_Record .

The second problem should be solved only after the reboot into bootstrap,
because if we bring a new node into the cluster from somewhere else and boot
it with the bootstrap image, it may have disks with existing partitions, md
devices and LVM volumes. All these entities should be correctly cleared
before provisioning, not before reboot, and fuel-agent does that in [1].

I propose removing the erasing of the first 1M of each partition, because it
can lead to errors in FS kernel drivers and a kernel panic. The existing
workaround, rebooting in case of a kernel panic, is bad because the panic may
occur just after the first partition of the first disk has been cleared, and
after the reboot the BIOS will read the MBR of the second disk and boot from
it instead of from the network. Let's just clear the first 446 bytes of each
disk, as sketched below.
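For illustration, here is a minimal sketch of that proposal (a hypothetical
helper, not the actual fuel-astute/fuel-agent code; the device paths are just
examples):

import os

# The MBR boot code occupies the first 446 bytes; the remaining 66 bytes
# hold the partition table and boot signature, which we want to keep.
MBR_BOOT_CODE_SIZE = 446


def wipe_boot_code(device):
    # Zero only the MBR boot code so the node no longer boots from disk,
    # while leaving the partition table intact.
    with open(device, 'r+b') as disk:
        disk.write(b'\x00' * MBR_BOOT_CODE_SIZE)
        disk.flush()
        os.fsync(disk.fileno())


# Example: wipe the boot code on every whole disk (not on partitions).
for dev in ('/dev/sda', '/dev/sdb'):
    wipe_boot_code(dev)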


[0]
https://github.com/openstack/fuel-astute/blob/master/mcagents/erase_node.rb#L162-L174
[1]
https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L194-L221


-- 
Dmitry Guryanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] RemoteFS drivers refactoring: move code, which works with images to separate classes

2015-10-28 Thread Dmitry Guryanov



On 10/28/2015 03:55 PM, Eric Harney wrote:

On 10/28/2015 03:18 PM, Dmitry Guryanov wrote:

Hello!

Can we discuss this on the summit?

As I promised, I've written a blueprint for this change:

https://review.openstack.org/#/c/237094/


I assume we can talk about this at the Cinder contributors meetup on Friday.


Ok, I'll be there.




On 10/14/2015 03:57 AM, Dmitry Guryanov wrote:

Hello,

RemoteFS drivers combine 2 logical tasks. The first one is how to
mount a filesystem and select a proper share for a new or existing
volume. The second one is how to deal with image files in a given
directory (mount point): create, delete, create snapshot, etc.

The first part is different for each volume driver. The second is the
same for all volume drivers, but it depends on the selected volume
format: you can create a qcow2 file on NFS or SMBFS with the same code.

Since there are several volume formats (raw, qcow2, vhd and possibly
some others), I propose to move the code which works with images into
separate classes, 'VolumeFormat' handlers.

This change has 3 advantages:

1. Duplicated code from remotefs driver will be removed.
2. All drivers will support all volume formats.
3. New volume formats could be added easily, including non-qcow2
snapshots.

Here is a draft version of a patch:
https://review.openstack.org/#/c/234359/

Although there are problems in it, most of the operations with volumes
work and there are only about 10 failures in Tempest.


I'd like to discuss this approach before further work on the patch.


--
Dmitry Guryanov




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] RemoteFS drivers refactoring: move code, which works with images to separate classes

2015-10-28 Thread Dmitry Guryanov

Hello!

Can we discuss this at the summit?

As I promised, I've written a blueprint for this change:

https://review.openstack.org/#/c/237094/


On 10/14/2015 03:57 AM, Dmitry Guryanov wrote:

Hello,

RemoteFS drivers combine 2 logical tasks. The first one is how to
mount a filesystem and select a proper share for a new or existing
volume. The second one is how to deal with image files in a given
directory (mount point): create, delete, create snapshot, etc.


The first part is different for each volume driver. The second is the
same for all volume drivers, but it depends on the selected volume
format: you can create a qcow2 file on NFS or SMBFS with the same code.


Since there are several volume formats (raw, qcow2, vhd and possibly
some others), I propose to move the code which works with images into
separate classes, 'VolumeFormat' handlers.


This change has 3 advantages:

1. Duplicated code from remotefs driver will be removed.
2. All drivers will support all volume formats.
3. New volume formats could be added easily, including non-qcow2 
snapshots.


Here is a draft version of a patch:
https://review.openstack.org/#/c/234359/

Although there are problems in it, most of the operations with volumes
work and there are only about 10 failures in Tempest.



I'd like to discuss this approach before further work on the patch.


--
Dmitry Guryanov




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] RemoteFS drivers refactoring: move code, which works with images to separate classes

2015-10-19 Thread Dmitry Guryanov

On 10/14/2015 12:09 AM, Sean McGinnis wrote:

On Tue, Oct 13, 2015 at 07:01:45PM +, D'Angelo, Scott wrote:

If you create a blueprint and a spec for this, the details can be discussed in 
the spec.

Yes, for something like this we should definitely have a spec and blueprint.
Please write up a spec and propose it to the cinder-specs repo so this
can be discussed and commented on.


I've written the spec:
https://review.openstack.org/237094






-Original Message-
From: Dmitry Guryanov [mailto:dgurya...@virtuozzo.com]
Sent: Tuesday, October 13, 2015 12:57 PM
To: OpenStack Development Mailing List; Maxim Nestratov
Subject: [openstack-dev] [cinder] RemoteFS drivers refactoring: move code, 
which works with images to separate classes

Hello,

RemoteFS drivers combine 2 logical tasks. The first one is how to mount a
filesystem and select a proper share for a new or existing volume. The second
one is how to deal with image files in a given directory (mount point):
create, delete, create snapshot, etc.

The first part is different for each volume driver. The second is the same for
all volume drivers, but it depends on the selected volume format:
you can create a qcow2 file on NFS or SMBFS with the same code.

Since there are several volume formats (raw, qcow2, vhd and possibly some
others), I propose to move the code which works with images into separate
classes, 'VolumeFormat' handlers.

This change has 3 advantages:

1. Duplicated code from remotefs driver will be removed.
2. All drivers will support all volume formats.
3. New volume formats could be added easily, including non-qcow2 snapshots.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] RemoteFS drivers refactoring: move code, which works with images to separate classes

2015-10-13 Thread Dmitry Guryanov

Hello,

RemoteFS drivers combine 2 logical tasks. The first one is how to mount
a filesystem and select a proper share for a new or existing volume. The
second one is how to deal with image files in a given directory (mount
point): create, delete, create snapshot, etc.


The first part is different for each volume driver. The second is the
same for all volume drivers, but it depends on the selected volume format:
you can create a qcow2 file on NFS or SMBFS with the same code.


Since there are several volume formats (raw, qcow2, vhd and possibly
some others), I propose to move the code which works with images into
separate classes, 'VolumeFormat' handlers (a rough sketch follows the
list of advantages below).


This change has 3 advantages:

1. Duplicated code from remotefs driver will be removed.
2. All drivers will support all volume formats.
3. New volume formats could be added easily, including non-qcow2 snapshots.
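To illustrate the idea, here is a minimal sketch of such a split (the class
and method names are hypothetical, not the ones from the draft patch):

import os
import subprocess


class VolumeFormat(object):
    # Handles image files of one particular format inside a mount point.

    def create_volume(self, path, size_gb):
        raise NotImplementedError()


class Qcow2Format(VolumeFormat):
    def create_volume(self, path, size_gb):
        subprocess.check_call(
            ['qemu-img', 'create', '-f', 'qcow2', path, '%dG' % size_gb])


class RawFormat(VolumeFormat):
    def create_volume(self, path, size_gb):
        subprocess.check_call(
            ['qemu-img', 'create', '-f', 'raw', path, '%dG' % size_gb])


class SomeRemoteFSDriver(object):
    # Keeps only the mount/share-selection logic; all image-file handling
    # is delegated to a VolumeFormat instance.

    def __init__(self, mount_point, volume_format):
        self.mount_point = mount_point   # chosen by the driver-specific code
        self.format = volume_format      # e.g. Qcow2Format()

    def create_volume(self, name, size_gb):
        path = os.path.join(self.mount_point, name)
        self.format.create_volume(path, size_gb)

With a split like this, adding a new format only means adding another
VolumeFormat subclass.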

Here is a draft version of a patch:
https://review.openstack.org/#/c/234359/

Although there are problems in it, most of the operations with volumes
work and there are only about 10 failures in Tempest.



I'd like to discuss this approach before further work on the patch.


--
Dmitry Guryanov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-17 Thread Dmitry Guryanov

On 06/17/2015 12:21 AM, Matt Riedemann wrote:

The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all
very similar.

I want to extract a common base class that abstracts some of the
common code and then let the sub-classes provide overrides where
necessary.

As part of this, I'm wondering if we could just have a single
'mount_point_base' config option rather than one per backend like we
have today:

nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per
compute host right?  So it seems to make sense that we could have one
option used for all 4 different driver implementations and reduce some
of the config option noise.

I checked the os-brick change [1] proposed to nova to see if there
would be any conflicts there and so far that's not touching any of
these classes so seems like they could be worked in parallel.



os-brick has the ability to mount different filesystems; you can find it
in the os_brick/remotefs/remotefs.py file. This module is already used
in Cinder's FS volume drivers, which you've mentioned.
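For reference, a minimal sketch of how that module is used (the share and
paths are made-up examples, and this assumes the os-brick API of that time):

from os_brick.remotefs import remotefs

# root_helper would normally come from the rootwrap configuration.
client = remotefs.RemoteFsClient(
    'nfs', root_helper='sudo',
    nfs_mount_point_base='/var/lib/cinder/mnt')

share = '10.10.2.3:/public'           # example share
client.mount(share)                   # mounts it under a hashed subdirectory
print(client.get_mount_point(share))  # where the share ended up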



Are there any concerns with this?

Is a blueprint needed for this refactor?

[1] https://review.openstack.org/#/c/175569/




--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-17 Thread Dmitry Guryanov

On 06/17/2015 02:14 PM, Duncan Thomas wrote:

On 17 June 2015 at 00:21, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are
all very similar.

I want to extract a common base class that abstracts some of the
common code and then let the sub-classes provide overrides where
necessary.

As part of this, I'm wondering if we could just have a single
'mount_point_base' config option rather than one per backend like
we have today:

nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per
compute host right?  So it seems to make sense that we could have
one option used for all 4 different driver implementations and
reduce some of the config option noise.


I can't claim to have tried it, but from a cinder PoV there is nothing
stopping you having both e.g. an NFS and a gluster backend at the same
time, and I'd expect nova to work with it. If it doesn't, I'd consider
it a bug.


I agree; if two volume backends use the same share definition, like
10.10.2.3:/public, you'll get the same mount point for them.









--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 1.0.6 for LXC driver ?

2015-02-21 Thread Dmitry Guryanov
Let's put off this cleanup to the L release. There is a problem with mounting
a loop device with user namespaces enabled, so we can't commit the change and
break containers with user namespaces.

I'm going on vacation until 6th March; when I return, I'm going to study the
LXC code and figure out what should be done so that containers with user
namespaces will start from images on loop devices.



From: Dmitry Guryanov dgurya...@parallels.com
Sent: 16 February 2015 16:46
To: Daniel P. Berrange
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 
1.0.6 for LXC driver ?

On 02/16/2015 04:36 PM, Daniel P. Berrange wrote:
 On Mon, Feb 16, 2015 at 04:31:21PM +0300, Dmitry Guryanov wrote:
 On 02/13/2015 05:50 PM, Jay Pipes wrote:
 On 02/13/2015 09:20 AM, Daniel P. Berrange wrote:
 On Fri, Feb 13, 2015 at 08:49:26AM -0500, Jay Pipes wrote:
 On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:
 Historically Nova has had a bunch of code which mounted images on the
 host OS using qemu-nbd before passing them to libvirt to setup the
 LXC container. Since 1.0.6, libvirt is able to do this itself and it
 would simplify the codepaths in Nova if we can rely on that

 In general, without use of user namespaces, LXC can't really be
 considered secure in OpenStack, and this already requires libvirt
 version 1.1.1 and Nova Juno release.

 As such I'd be surprised if anyone is running OpenStack with libvirt
  LXC in production on libvirt < 1.1.1 as it would be pretty insecure,
 but stranger things have happened.

 The general libvirt min requirement for LXC, QEMU and KVM currently
 is 0.9.11. We're *not* proposing to change the QEMU/KVM min libvirt,
 but feel it is worth increasing the LXC min libvirt to 1.0.6

 So would anyone object if we increased min libvirt to 1.0.6 when
 running the LXC driver ?
 Thanks for raising the question, Daniel!

 Since there are no objections, I'd like to make 1.1.1 the minimal required
 version. Let's also make parameters uid_maps and gid_maps mandatory and
 always add them to libvirt XML.
 I think it is probably not enough prior warning to actually turn on user
 namespace by default in Kilo. So I think what we should do for Kilo is to
 issue a warning message on nova startup if userns is not enabled in the
 config, telling users that this will become mandatory in Liberty. Then
 when Liberty dev opens, we make it mandatory.

 Regards,
 Daniel

OK, seems reasonable.

--
Dmitry Guryanov



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-17 Thread Dmitry Guryanov

On 02/17/2015 06:20 AM, Steven Dake (stdake) wrote:
The initial magnum core team was founded at a meeting where several 
people committed to being active in reviews and writing code for 
Magnum.  Nearly all of the folks that made that initial commitment 
have been active in IRC, on the mailing lists, or participating in 
code reviews or code development.


Out of our core team of 9 members [1], everyone has been active in 
some way except for Dmitry.  I propose removing him from the core 
team.  Dmitry is welcome to participate in the future if he chooses 
and be held to the same high standards we have held our last 4 new 
core members to that didn’t get an initial opt-in but were voted in by 
their peers.


Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1 
from any core acts as a veto meaning Dmitry will remain in the core team.


Hello, Steven,

Sorry for being inactive for so long. I have no real objections to
removing me from magnum-core. I hope I'll return to the project in the
near future.




[1] https://review.openstack.org/#/admin/groups/473,members





--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 1.0.6 for LXC driver ?

2015-02-16 Thread Dmitry Guryanov

On 02/13/2015 05:50 PM, Jay Pipes wrote:

On 02/13/2015 09:20 AM, Daniel P. Berrange wrote:

On Fri, Feb 13, 2015 at 08:49:26AM -0500, Jay Pipes wrote:

On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:

Historically Nova has had a bunch of code which mounted images on the
host OS using qemu-nbd before passing them to libvirt to setup the
LXC container. Since 1.0.6, libvirt is able to do this itself and it
would simplify the codepaths in Nova if we can rely on that

In general, without use of user namespaces, LXC can't really be
considered secure in OpenStack, and this already requires libvirt
version 1.1.1 and Nova Juno release.

As such I'd be surprised if anyone is running OpenStack with libvirt
 LXC in production on libvirt < 1.1.1 as it would be pretty insecure,
but stranger things have happened.

The general libvirt min requirement for LXC, QEMU and KVM currently
is 0.9.11. We're *not* proposing to change the QEMU/KVM min libvirt,
but feel it is worth increasing the LXC min libvirt to 1.0.6

So would anyone object if we increased min libvirt to 1.0.6 when
running the LXC driver ?


Thanks for raising the question, Daniel!

Since there are no objections, I'd like to make 1.1.1 the minimum required
version. Let's also make the uid_maps and gid_maps parameters mandatory and
always add them to the libvirt XML (the element they map to is sketched below).
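For context, the user-namespace configuration such parameters correspond to
is libvirt's <idmap> element; a tiny illustrative snippet (the host uid/gid
range below is an arbitrary example, not a recommended value):

# Illustrative only: render the <idmap> block that enables user namespaces
# for an LXC guest in libvirt.
IDMAP_TEMPLATE = """
<idmap>
  <uid start='0' target='{target}' count='{count}'/>
  <gid start='0' target='{target}' count='{count}'/>
</idmap>
"""

print(IDMAP_TEMPLATE.format(target=100000, count=65536))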





Why not 1.1.1?


Well I was only going for what's the technical bare minimum to get
the functionality wrt disk image mounting.

If we wish to declare use of user namespace is mandatory with the
libvirt LXC driver, then picking 1.1.1 would be fine too.


Personally, I'd be +1 on 1.1.1. :)

-jay




--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 1.0.6 for LXC driver ?

2015-02-16 Thread Dmitry Guryanov

On 02/16/2015 04:36 PM, Daniel P. Berrange wrote:

On Mon, Feb 16, 2015 at 04:31:21PM +0300, Dmitry Guryanov wrote:

On 02/13/2015 05:50 PM, Jay Pipes wrote:

On 02/13/2015 09:20 AM, Daniel P. Berrange wrote:

On Fri, Feb 13, 2015 at 08:49:26AM -0500, Jay Pipes wrote:

On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:

Historically Nova has had a bunch of code which mounted images on the
host OS using qemu-nbd before passing them to libvirt to setup the
LXC container. Since 1.0.6, libvirt is able to do this itself and it
would simplify the codepaths in Nova if we can rely on that

In general, without use of user namespaces, LXC can't really be
considered secure in OpenStack, and this already requires libvirt
version 1.1.1 and Nova Juno release.

As such I'd be surprised if anyone is running OpenStack with libvirt
 LXC in production on libvirt < 1.1.1 as it would be pretty insecure,
but stranger things have happened.

The general libvirt min requirement for LXC, QEMU and KVM currently
is 0.9.11. We're *not* proposing to change the QEMU/KVM min libvirt,
but feel it is worth increasing the LXC min libvirt to 1.0.6

So would anyone object if we increased min libvirt to 1.0.6 when
running the LXC driver ?

Thanks for raising the question, Daniel!

Since there are no objections, I'd like to make 1.1.1 the minimal required
version. Let's also make parameters uid_maps and gid_maps mandatory and
always add them to libvirt XML.

I think it is probably not enough prior warning to actually turn on user
namespace by default in Kilo. So I think what we should do for Kilo is to
issue a warning message on nova startup if userns is not enabled in the
config, telling users that this will become mandatory in Liberty. Then
when Liberty dev opens, we make it mandatory.

Regards,
Daniel


OK, seems reasonable.

--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Raw vs Qcow2 images_type in nova/libvirt

2015-01-20 Thread Dmitry Guryanov

On 01/20/2015 12:18 AM, Pádraig Brady wrote:

On 19/01/15 20:41, Michael Still wrote:

Mostly.

qcow2 can do a copy on write layer, although it can be disabled IIRC.
So if COW is turned on, you get only the delta in the instance
directory when using qcow2.

Cheers,
Michael

On Tue, Jan 20, 2015 at 7:40 AM, Dmitry Guryanov
dgurya...@parallels.com wrote:

Hello,

Do I understand correctly that both the Qcow2 and Raw classes in
libvirt/imagebackend.py can work with images in qcow2 format, but Raw copies
the whole base image from the cache to the instance's dir and Qcow2 only
creates a delta (and uses the base image from the cache)?

Correct.  That Raw class should be renamed to Copy,
to clarify/distinguish from CopyOnWrite.

BTW there are some notes on these settings at:
http://www.pixelbeat.org/docs/openstack_libvirt_images/

Pádraig


Thanks! Excellent article.
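For reference, a rough sketch of the difference in terms of plain file and
qemu-img operations (the paths are made-up examples, not nova's actual code):

import shutil
import subprocess

base = '/var/lib/nova/instances/_base/abc123'        # cached base image
disk = '/var/lib/nova/instances/instance-0001/disk'  # instance disk path

# The two steps below are alternatives; a given instance uses one or the
# other depending on images_type.

# images_type=raw ("Copy"): the whole base image is copied into the
# instance directory, so the instance gets a standalone file.
shutil.copyfile(base, disk)

# images_type=qcow2 (copy-on-write): only a small qcow2 overlay is created;
# reads fall through to the shared base image in the cache.
subprocess.check_call(
    ['qemu-img', 'create', '-f', 'qcow2', '-b', base, disk])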

--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Raw vs Qcow2 images_type in nova/libvirt

2015-01-20 Thread Dmitry Guryanov

On 01/19/2015 11:41 PM, Michael Still wrote:

Mostly.

qcow2 can do a copy on write layer, although it can be disabled IIRC.
So if COW is turned on, you get only the delta in the instance
directory when using qcow2.


It seems you have to set images_type=raw (or use_cow_images=false) to
disable copy on write; the image will then be handled by the Raw class from
imagebackend.py.



Cheers,
Michael

On Tue, Jan 20, 2015 at 7:40 AM, Dmitry Guryanov
dgurya...@parallels.com wrote:

Hello,

Do I understand correctly that both the Qcow2 and Raw classes in
libvirt/imagebackend.py can work with images in qcow2 format, but Raw copies
the whole base image from the cache to the instance's dir and Qcow2 only
creates a delta (and uses the base image from the cache)?

--
Dmitry Guryanov








--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Raw vs Qcow2 images_type in nova/libvirt

2015-01-19 Thread Dmitry Guryanov

Hello,

Do I understand correctly that both the Qcow2 and Raw classes in
libvirt/imagebackend.py can work with images in qcow2 format, but Raw
copies the whole base image from the cache to the instance's dir and Qcow2
only creates a delta (and uses the base image from the cache)?


--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Why nova mounts FS for LXC container instead of libvirt?

2015-01-15 Thread Dmitry Guryanov

On 01/12/2015 06:35 PM, Daniel P. Berrange wrote:

On Mon, Jan 12, 2015 at 06:28:53PM +0300, Dmitry Guryanov wrote:

On 01/05/2015 02:30 PM, Daniel P. Berrange wrote:

On Tue, Dec 30, 2014 at 05:18:19PM +0300, Dmitry Guryanov wrote:

Hello,

Libvirt can create loop or nbd device for LXC container and mount it by
itself, for instance, you can add something like this to xml config:

<filesystem type='file'>
   <driver type='loop' format='raw'/>
   <source file='/fedora-20-raw'/>
   <target dir='/'/>
</filesystem>

But nova mounts the filesystem for the container by itself. Is this because
rhel-6 doesn't support filesystems with type='file', or are there some other
reasons?

The support for mounting using NBD in OpenStack pre-dated the support
for doing this in Libvirt. In fact the reason I added this feature to
libvirt was precisely because OpenStack was doing this.

We haven't switched Nova over to use this new syntax yet though, because
that would imply a change to the min required libvirt version for LXC.
That said we should probably make such a change, because honestly no
one should be using LXC without using user namespaces, otherwise their
cloud is horribly insecure. This would imply making the min libvirt for
LXC much much newer than it is today.


It's not very hard to replace mounting in nova with generating the proper
XML config. Can we do it before the Kilo release? Are there any people who
use OpenStack with LXC in production?

Looking at libvirt history, it would mean we mandate 1.0.6 as the min
libvirt for use with the LXC driver.

Regards,
Daniel


I've created RFC patches:

https://review.openstack.org/#/c/147535/
https://review.openstack.org/#/c/147534/
https://review.openstack.org/#/c/147533/
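Conceptually, the change is to emit a <filesystem> element like the one
quoted above instead of mounting the image in nova itself. A minimal
illustration of building that element (plain ElementTree, not the actual
patch code):

import xml.etree.ElementTree as ET


def loop_filesystem_xml(image_path, target_dir='/'):
    # Build a <filesystem> element that asks libvirt to loop-mount the
    # image for the LXC container itself.
    fs = ET.Element('filesystem', {'type': 'file'})
    ET.SubElement(fs, 'driver', {'type': 'loop', 'format': 'raw'})
    ET.SubElement(fs, 'source', {'file': image_path})
    ET.SubElement(fs, 'target', {'dir': target_dir})
    return ET.tostring(fs)


print(loop_filesystem_xml('/var/lib/nova/instances/instance-0001/disk'))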


--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Why nova mounts FS for LXC container instead of libvirt?

2015-01-12 Thread Dmitry Guryanov

On 01/05/2015 02:30 PM, Daniel P. Berrange wrote:

On Tue, Dec 30, 2014 at 05:18:19PM +0300, Dmitry Guryanov wrote:

Hello,

Libvirt can create loop or nbd device for LXC container and mount it by
itself, for instance, you can add something like this to xml config:

<filesystem type='file'>
   <driver type='loop' format='raw'/>
   <source file='/fedora-20-raw'/>
   <target dir='/'/>
</filesystem>

But nova mounts the filesystem for the container by itself. Is this because
rhel-6 doesn't support filesystems with type='file', or are there some other
reasons?

The support for mounting using NBD in OpenStack pre-dated the support
for doing this in Libvirt. In fact the reason I added this feature to
libvirt was precisely because OpenStack was doing this.

We haven't switched Nova over to use this new syntax yet though, because
that would imply a change to the min required libvirt version for LXC.
That said we should probably make such a change, because honestly no
one should be using LXC without using user namespaces, otherwise their
cloud is horribly insecure. This would imply making the min libvirt for
LXC much much newer than it is today.

Regards,
Daniel


It's not very hard to replace mounting in nova with generating the proper
XML config. Can we do it before the Kilo release? Are there any people who
use OpenStack with LXC in production?


--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Why nova mounts FS for LXC container instead of libvirt?

2014-12-30 Thread Dmitry Guryanov
Hello,

Libvirt can create a loop or nbd device for an LXC container and mount it by
itself; for instance, you can add something like this to the XML config:

<filesystem type='file'>
  <driver type='loop' format='raw'/>
  <source file='/fedora-20-raw'/>
  <target dir='/'/>
</filesystem>

But nova mounts the filesystem for the container by itself. Is this because
rhel-6 doesn't support filesystems with type='file', or are there some other
reasons?

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why does nova mount FS for LXC container instead of libvirt?

2014-12-30 Thread Dmitry Guryanov
On Tuesday 30 December 2014 17:18:19 Dmitry Guryanov wrote:
 Hello,
 
 Libvirt can create loop or nbd device for LXC container and mount it by
 itself, for instance, you can add something like this to xml config:
 
 <filesystem type='file'>
   <driver type='loop' format='raw'/>
   <source file='/fedora-20-raw'/>
   <target dir='/'/>
 </filesystem>
 
 But nova mounts the filesystem for the container by itself. Is this because
 rhel-6 doesn't support filesystems with type='file', or are there some other
 reasons?

You can define a domain with such a filesystem in rhel-6's libvirt, but the
container will use the host's root fs; probably there is a bug.


-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Why nova mounts FS for LXC container instead of libvirt?

2014-12-30 Thread Dmitry Guryanov
On Tuesday 30 December 2014 17:18:19 Dmitry Guryanov wrote:
 Hello,
 
 Libvirt can create loop or nbd device for LXC container and mount it by
 itself, for instance, you can add something like this to xml config:
 
 <filesystem type='file'>
   <driver type='loop' format='raw'/>
   <source file='/fedora-20-raw'/>
   <target dir='/'/>
 </filesystem>
 
 But nova mounts the filesystem for the container by itself. Is this because
 rhel-6 doesn't support filesystems with type='file', or are there some other
 reasons?

Sorry, forgot to add [Nova] prefix in the first message.


-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname)

2014-12-19 Thread Dmitry Guryanov
Hello,

If I understood correctly, there are 3 ways to provide the guest OS with
some data (SSH keys, for example):

1. mount guest root fs on host (with libguestfs) and copy data there.
2. config drive and cloud-init
3. nova metadata service and cloud-init


All 3 methods do almost the same thing and can be enabled or disabled in the
nova config file. So which one is preferred? How do people usually configure
their OpenStack clusters?

I'm asking because we are going to extend the nova/libvirt driver to support
our virtualization solution (the parallels driver in libvirt) and it seems it
will not work as is and requires some development. Which method is
first-priority and used by most people?

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: Re: [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname)

2014-12-19 Thread Dmitry Guryanov
--  Forwarded Message  --

Subject: Re: [openstack-dev] [Nova] Providing instance's guest OS with data 
(ssh keys, root password, hostname)
Date: Friday 19 December 2014, 14:17:34
From: Daniel P. Berrange berra...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) openstack-
d...@lists.openstack.org

On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote:
 Hello,
 
 If I understood correctly, there are 3 ways to provide the guest OS with
 some data (SSH keys, for example):
 
 1. mount guest root fs on host (with libguestfs) and copy data there.
 2. config drive and cloud-init
 3. nova metadata service and cloud-init
 
 
 All 3 methods do almost the same thing and can be enabled or disabled in
 the nova config file. So which one is preferred? How do people usually
 configure their OpenStack clusters?
 
 I'm asking because we are going to extend the nova/libvirt driver to
 support our virtualization solution (the parallels driver in libvirt) and
 it seems it will not work as is and requires some development. Which
 method is first-priority and used by most people?

I'd probably prioritize in this order:

  1. config drive and cloud-init
  2. nova metadata service and cloud-init
  3. mount guest root fs on host (with libguestfs) and copy data there.

but there's not much to choose between 1 & 2.

NB, option 3 isn't actually hardcoded to use libguestfs - it falls back
to using loop devices / local mounts, albeit less secure, so not really
recommended. At some point option 3 may be removed from Nova entirely
since the first two options are preferred & more reliable in general.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname)

2014-12-19 Thread Dmitry Guryanov
On Friday 19 December 2014 14:17:34 Daniel P. Berrange wrote:
 On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote:
  Hello,
  
  If I understood correctly, there are 3 ways to provide guest OS with some
  data (SSH keys, for example):
  
  1. mount guest root fs on host (with libguestfs) and copy data there.
  2. config drive and cloud-init
  3. nova metadata service and cloud-init
  
  
  All 3 methods do almost the same thing and can be enabled or disabled in
  nova config file. So which one is preferred? How do people usually
  configure their openstack clusters?
  
  I'm asking, because we are going to extend nova/libvirt driver to support
  our virtualization solution (parallels driver in libvirt) and it seems it
  will not work as is and requires some development. Which method is
  first-priority and used by most people?
 
 I'd probably prioritize in this order:
 
   1. config drive and cloud-init
   2. nova metadata service and cloud-init
   3. mount guest root fs on host (with libguestfs) and copy data there.
 
 but there's not much to choose between 1 & 2.

Thanks! Config drive already works for VMs; I need to check how it will work
with containers, since we can't add a cdrom there.

 
 NB, option 3 isn't actually hardcoded to use libguestfs - it falls back
 to using loop devices / local mounts, albeit less secure, so not really
 recommended. At some point option 3 may be removed from Nova entirely
 since the first two options are preferred & more reliable in general.

I see!

I actually know that libguestfs is optional; I just provided it as an example
of how nova mounts disks. BTW it will not reduce the security level for
containers, because we mount the root fs on the host to start them.

 
 Regards,
 Daniel

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Re: [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname)

2014-12-19 Thread Dmitry Guryanov
On Friday 19 December 2014 17:27:18 Dmitry Guryanov wrote:

Sorry, forwarded to the wrong list.


 --  Forwarded Message  --
 
 Subject: Re: [openstack-dev] [Nova] Providing instance's guest OS with data
 (ssh keys, root password, hostname)
 Date: Friday 19 December 2014, 14:17:34
 From: Daniel P. Berrange berra...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) openstack-
 d...@lists.openstack.org
 
 On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote:
  Hello,
  
  If I understood correctly, there are 3 ways to provide guest OS with some
  data (SSH keys, for example):
  
  1. mount guest root fs on host (with libguestfs) and copy data there.
  2. config drive and cloud-init
  3. nova metadata service and cloud-init
  
  All 3 methods do almost the same thing and can be enabled or disabled in
  nova config file. So which one is preferred? How do people usually configure
  their openstack clusters?
  
  I'm asking, because we are going to extend nova/libvirt driver to support
  our virtualization solution (parallels driver in libvirt) and it seems it will
  not work as is and requires some development. Which method is first-priority
  and used by most people?
 
 I'd probably prioritize in this order:
 
   1. config drive and cloud-init
   2. nova metadata service and cloud-init
   3. mount guest root fs on host (with libguestfs) and copy data there.
 
 but there's not much to choose between 1 & 2.
 
 NB, option 3 isn't actually hardcoded to use libguestfs - it falls back
 to using loop devices / local mounts, albeit less secure, so not really
 recommended. At some point option 3 may be removed from Nova entirely
 since the first two options are preferred & more reliable in general.
 
 Regards,
 Daniel

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname)

2014-12-19 Thread Dmitry Guryanov
On Friday 19 December 2014 14:38:29 Daniel P. Berrange wrote:
 On Fri, Dec 19, 2014 at 05:34:19PM +0300, Dmitry Guryanov wrote:
  On Friday 19 December 2014 14:17:34 Daniel P. Berrange wrote:
   On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote:
Hello,

    If I understood correctly, there are 3 ways to provide guest OS with
    some data (SSH keys, for example):

    1. mount guest root fs on host (with libguestfs) and copy data there.
    2. config drive and cloud-init
    3. nova metadata service and cloud-init

    All 3 methods do almost the same thing and can be enabled or disabled
    in nova config file. So which one is preferred? How do people usually
    configure their openstack clusters?

    I'm asking, because we are going to extend nova/libvirt driver to
    support our virtualization solution (parallels driver in libvirt) and it
    seems it will not work as is and requires some development. Which method
    is first-priority and used by most people?
   
   I'd probably prioritize in this order:
 1. config drive and cloud-init
 2. nova metadata service and cloud-init
 3. mount guest root fs on host (with libguestfs) and copy data there.
   
   but there's not much to choose between 1 & 2.
  
  Thanks! Config drive already works for VMs; I need to check how it will work
  with containers, since we can't add a cdrom there.
 
 There are currently two variables wrt config drive
 
  - device type - cdrom vs disk
  - filesystem  - vfat vs iso9660
 
 For your fully virt machines I'd probably just stick with the default
 that libvirt already supports.
 
 When we discussed this for LXC, we came to the conclusion that for
 containers we shouldn't try to expose a block device at all. Instead
 just mount the contents of the config drive at the directory location
 that cloud-init wants the data (it was somewhere under /var/ but I
 can't remember where right now).  I think the same makes sense for
 parallels' container based guests.

That's good news: we already have functions to mount a host directory into a
container in PCS, so we just need to add that to the libvirt driver.
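
Just to sketch the idea on the nova side (purely illustrative, not working
driver code; the helper name and the guest path are made up, and as you say the
exact directory cloud-init expects still needs to be checked):

    from nova.virt.libvirt import config as vconfig

    def config_drive_as_mount(host_dir):
        # Illustrative only: expose the unpacked config drive contents as a
        # filesystem mount in the container's domain XML instead of a cdrom.
        fs = vconfig.LibvirtConfigGuestFilesys()
        fs.source_type = 'mount'
        fs.source_dir = host_dir          # where the config drive was unpacked on the host
        fs.target_dir = '/var/lib/cloud'  # guess at the guest path; to be confirmed
        return fs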

 
 Regards,
 Daniel

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-16 Thread Dmitry Guryanov
On Tuesday 09 December 2014 18:15:01 Markus Zoeller wrote:
   On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote:
   
   Hello!
   
   There is a feature in HypervisorSupportMatrix
   (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called Get Guest
   Info. Does anybody know, what does it mean? I haven't found anything like
   this neither in nova api nor in horizon and nova command line.
 
 I think this maps to the nova driver function get_info:
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4054
 
 I believe (and didn't double-check) that this is used e.g. by the
 Nova CLI via `nova show [--minimal] server` command.
 

It seems Driver.get_info is used only for obtaining the instance's power state.
It's strange. I think we can clean up the code, rename get_info to
get_power_state and return only the power state from this function.
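
Something along these lines (just a hypothetical sketch; get_power_state and
the domain lookup helper don't exist in nova, the point is only that the power
state is the single useful piece of data):

    from nova.compute import power_state
    from nova.virt import driver

    class SomeDriver(driver.ComputeDriver):
        def get_power_state(self, instance):
            # Hypothetical replacement for get_info(): report only the
            # power state instead of the whole info dict.
            dom = self._lookup_domain(instance.name)  # driver-specific lookup, made up here
            if dom is None:
                return power_state.NOSTATE
            return power_state.RUNNING if dom.is_active() else power_state.SHUTDOWN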

 I tried to map the features of the hypervisor support matrix to
 specific nova driver functions on this wiki page:
 https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DriverAPI
 

Thanks!

  On Tue Dec 9 15:39:35 UTC 2014, Daniel P. Berrange wrote:
  I've pretty much no idea what the intention was for that field. I've
  been working on formally documenting all those things, but draw a blank
  for that
  
  FYI:
  
  https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini
  
  Regards, Daniel
 
 Nice! I will keep an eye on that :)
 
 
 Regards,
 Markus Zoeller
 IRC: markus_z
 Launchpad: mzoeller
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-16 Thread Dmitry Guryanov
On Tuesday 09 December 2014 15:39:35 Daniel P. Berrange wrote:
 On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote:
  Hello!
  
  There is a feature in HypervisorSupportMatrix
  (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called Get
  Guest
  Info. Does anybody know, what does it mean? I haven't found anything like
  this neither in nova api nor in horizon and nova command line.
 
 I've pretty much no idea what the intention was for that field. I've
 been working on formally documenting all those things, but draw a blank
 for that
 
 FYI:
 
   https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini
 
 

Thanks, looks much better than the previous one.


I think "Auto configure disk" refers to resizing the filesystem on the root
disk according to the value given in the flavor.


 Regards,
 Daniel

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-09 Thread Dmitry Guryanov
Hello!

There is a feature in HypervisorSupportMatrix 
(https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called Get Guest 
Info. Does anybody know what it means? I haven't found anything like it 
either in the nova API, in horizon, or in the nova command line.

-- 
Thanks,
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] I've published parallels SDK

2014-08-18 Thread Dmitry Guryanov
Hello!

I've published parallels-sdk:

https://github.com/Parallels/parallels-sdk

-- 
Dmitry Guryanov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] I've published parallels SDK

2014-08-18 Thread Dmitry Guryanov
On Monday 18 August 2014 22:45:17 Dmitry Guryanov wrote:
 Hello!
 
 I've published parallels-sdk:
 
 https://github.com/Parallels/parallels-sdk

Sorry, I've sent this mail to the wrong list :( Please ignore it.

-- 
Dmitry Guryanov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Why people don't close bugs?

2014-08-04 Thread Dmitry Guryanov
Hello!

I looked through the launchpad bugs and it seems there are a lot of bugs 
which are already fixed but still open; here are 3 of them:

https://bugs.launchpad.net/nova/+bug/909096
https://bugs.launchpad.net/nova/+bug/1206762
https://bugs.launchpad.net/nova/+bug/1208743

I've posted comments on these bugs, but nobody replied. How can I get them 
closed?

-- 
Dmitry Guryanov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why people don't close bugs?

2014-08-04 Thread Dmitry Guryanov
On Monday 04 August 2014 17:53:11 Tom Fifield wrote:
 On 04/08/14 17:46, Dmitry Guryanov wrote:
  Hello!
  
  I looked through launchpad bugs and it seems there are a lot of bugs,
  which are fixed already, but still open, here are 3 ones:
  
  https://bugs.launchpad.net/nova/+bug/909096
  
  https://bugs.launchpad.net/nova/+bug/1206762
  
  https://bugs.launchpad.net/nova/+bug/1208743
  
  I've posted comments on these bugs, but nobody replied. How is it
  possible, to close them?
 
 Hi Dmitry,
 
 Thanks for looking into the bug tracker. We definitely always need more
 people helping with triage (https://wiki.openstack.org/BugTriage).
 
 If you join the Nova Bug Team (https://launchpad.net/~nova-bugs) you
 will be able to change the bugs' status as appropriate.
 
 Regards,
 
 

Thanks, I've joined this team. I'll try to help with such outdated issues.


 Tom
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Dmitry Guryanov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] PCS support spec

2014-07-31 Thread Dmitry Guryanov
Hello,

I've written a spec for PCS support in nova/libvirt:



==
Parallels Cloud Server support in nova/libvirt driver
==

https://blueprints.launchpad.net/nova/+spec/pcs-support

This specification proposes to make changes in nova/libvirt driver in order to
support Parallels Cloud Server.

Problem description
===

Parallels Cloud Server (PCS) is a virtualization solution which enables hosters
to use container and hypervisor virtualization over the same API. PCS is
supported by libvirt, but OpenStack can't use it because of some differences in
domain configuration and supported features.


Proposed change
===

To implement this feature we need to make a set of small changes in the
nova/libvirt driver so that it will create PCS domains correctly. The end user
will be able to configure nova to use PCS by setting the libvirt.virt_type
option to 'parallels'.

Alternatives


The alternative way is to use a separate nova driver:
https://github.com/parallels/pcs-nova-driver

pros:
* There is no middle layer between OpenStack and PCS; pcs-nova-driver uses
PCS's python API.
* Changes in pcs-nova-driver will not affect nova/libvirt's code.
* Nova uses a small set of virtualization APIs to run instances, so it's more
convenient to implement a small driver in the nova/virt directory (like xen or
hyperv).

cons:
* It's hard to maintain an out-of-tree driver.
* The Nova core team is unlikely to accept pcs-nova-driver into nova's tree.

Data model impact
-

None.

REST API impact
---

None.

Security impact
---

None.

Notifications impact


None.

Other end user impact
-

None.

Performance Impact
--

None.

Other deployer impact
-

In order to use PCS as an OpenStack compute node, the deployer must install
the nova-compute packages on the PCS node and set the libvirt.virt_type config
option to 'parallels'.
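
For example, the relevant part of nova.conf would look roughly like this
(a sketch, using the option value proposed above)::

    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver

    [libvirt]
    virt_type = parallels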

Developer impact


None

Implementation
==

Assignee(s)
---

Primary assignee:
  dguryanov

Work Items
--

To be filled


Dependencies


None

Testing
===

To be filled

Documentation Impact


None

References
==

Parallels Cloud Server: http://www.parallels.com/products/pcs/.



-- 
Dmitry Guryanov
==
Parallels Cloud Server support in nova/libvirt driver
==

https://blueprints.launchpad.net/nova/+spec/pcs-support

This specification proposes to make changes in nova/libvirt driver in order to
support Parallels Cloud Server.

Problem description
===

Parallels Cloud Server (PCS) is a virtualization solution, which enables hosters
to use container and hypervisor virtualization over the same API. PCS is
supported by libvirt, but OpenStack can't use it because of some differences in
domains configration and supported features.


Proposed change
===

To implement this feature we need to make a set of small changes in nova/libvirt
driver so that it will create PCS domains correctly. The end user will be able
to configure nova to use PCS by setting libvirt.virt_type option to parallels.

Alternatives


The alternate way is to use separate nova driver
https://github.com/parallels/pcs-nova-driver

pros:
* There is no middle layer between OpenStack and PCS, pcs-nova-driver uses
PCS's python API.
* Changes in pcs-nova-driver will not affect nova/libvirt's code.
* Nova uses a small set of virtualization API to run instances so it's more
convenient to implement small driver in nova/virt directory (like xen or
hyperv).

cons:
* It's hard to maintain out-of-tree driver.
* Nova core team is unlikely to accept pcs-novadriver into nova's tree.

Data model impact
-

None.

REST API impact
---

None.

Security impact
---

None.

Notifications impact


None.

Other end user impact
-

None.

Performance Impact
--

None.

Other deployer impact
-

In order to use PCS as Openstack compute node, deployer must install
nova-compute packages on PCS node and set libvirt.virt_type config option
to parallels.

Developer impact


None

Implementation
==

Assignee(s)
---

Primary assignee:
  dguryanov

Work Items
--

To be filled


Dependencies


None

Testing
===

To be filled

Documentation Impact


None

References
==

Parallels Cloud Server: http://www.parallels.com/products/pcs/.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Dmitry Guryanov
On Monday 07 July 2014 16:11:21 Joe Gordon wrote:
 On Jul 3, 2014 11:43 AM, Dmitry Guryanov dgurya...@parallels.com wrote:
  Hi, All!
  
  As far as I know, there are some requirements, which virt driver must
  meet to use Openstack 'label'. For example, it's not allowed to mount
  cinder volumes inside host OS.
 
 I am a little unclear on what your question is. If it is simply about the
 OpenStack label then:
 
 'OpenStack' is a trademark that is enforced by the OpenStack foundation.
 You should check with the foundation to get a formal answer on commercial
 trademark usage. (As an OpenStack developer, my personal view is having out
 of tree drivers is a bad idea, but that decision isn't up to me.)
 
 If this is about contributing your driver to nova (great!), then this is
 the right forum to begin that discussion. We don't have a formal list of
 requirements for contributing new drivers to nova besides the need for CI
 testing. If you are interested in contributing a new nova driver, can you
 provide a brief overview along with your questions to get the discussion
 started.

OK, thanks!

Actually, we are discussing how to implement containers support in the
nova-containers team.

I have a question about mounts: in the OpenVZ project each container has its
own filesystem in an image file, so to start a container we mount this
filesystem in the host OS (because all containers share the same linux kernel).
Is this a security problem from the OpenStack developers' point of view?


I'm asking because nova's libvirt driver uses libguestfs to copy files into the
guest filesystem instead of simply mounting it on the host. Mounting with
libguestfs is slower than mounting on the host, so there should be strong
reasons why the libvirt driver does it.
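
For context, this is roughly what the libguestfs-based injection looks like
with the python bindings (a simplified sketch; the image path and key data are
placeholders, and nova's actual code goes through its own VFS layer rather than
calling guestfs directly like this):

    import guestfs

    g = guestfs.GuestFS(python_return_dict=True)
    g.add_drive_opts('/path/to/root.img', readonly=0)  # placeholder image
    g.launch()                 # boots a small helper appliance
    g.mount('/dev/sda1', '/')  # the mount happens inside the appliance,
                               # not on the host kernel
    g.write('/root/.ssh/authorized_keys', 'ssh-rsa AAAA...example')
    g.shutdown()
    g.close()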


 
 Also there are existing efforts to add container support into nova and I
 hear they are making excellent progress; do you plan on collaborating with
 those folks?
 
  Are there any documents, describing all such things? How can I determine,
  if my virtualization driver for nova (developed outside of nova mainline)
  works correctly and meet nova's security requirements?
  
  
  --
  Dmitry Guryanov
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Dmitry Guryanov
On Thursday 10 July 2014 14:47:11 Daniel P. Berrange wrote:
 On Thu, Jul 10, 2014 at 05:36:59PM +0400, Dmitry Guryanov wrote:
  I have a question about mounts - in OpenVZ project each container has its
  own filesystem in an image file. So to start a container we mount this
  filesystem in host OS (because all containers share the same linux
  kernel). Is it a security problem from the Openstack's developers vision?
  
  
  I have this question, because libvirt's driver uses libguestfs to copy
  some
  files into guest filesystem instead of simple mount on host. Mounting with
  libguestfs is slower, then mount on host, so there should be strong
  reasons, why libvirt driver does it.
 
 We consider mounting untrusted filesystems on the host kernel to be
 an unacceptable security risk. A user can craft a malicious filesystem
 that exploits bugs in the kernel filesystem drivers. This is particularly
 bad if you allow the kernel to probe for filesystem type since Linux
 has many many many filesystem drivers most of which are likely not
 audited enough to be considered safe against malicious data. Even the
 mainstream ext4 driver had a crasher bug present for many years
 
   https://lwn.net/Articles/538898/
   http://libguestfs.org/guestfs.3.html#security-of-mounting-filesystems
 
 Now that all said, there are no absolutes in security. You have to
 decide what risks are important to you and which are not. In the case
 of KVM, I think this host filesystem risk is unacceptable because you
 presumably chose to use machine based virt in order get strong separation
 of kernels. If you have explicitly made the decision to use a container
 based virt solution (which inherently has a shared kernel between host
 and guest) then I think it would be valid for you to say this filesystem
 risk is one you are prepared to accept, as it is not much worse than
 the risk you already have by using a single shared kernel for all tenants.
 

Thanks, Daniel, it seems you've answered this question for the second time :)

 So, IMHO, OpenStack should not dictate the security policy for things
 like this. Different technologies within openstack will provide protection
 against different attack scenarios. It is a deployment decision for the
 cloud administrator which of those risks they want to mitigate in their
 usage.  This is why we still kept the option of using a non-libguestfs
 approach for file injection.
 

That's exactly what I'd like to know.
I've also found the spec about starting an LXC container from a block device: 
https://github.com/openstack/nova-specs/blob/master/specs/juno/libvirt-start-lxc-from-block-devices.rst

Is it up-to-date?


 Regards,
 Daniel

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Containers] Nova virt driver requirements

2014-07-03 Thread Dmitry Guryanov
Hi, All!

As far as I know, there are some requirements which a virt driver must meet to 
use the OpenStack 'label'. For example, it's not allowed to mount cinder volumes 
inside the host OS.

Are there any documents describing all such things? How can I determine if 
my virtualization driver for nova (developed outside of the nova mainline) works 
correctly and meets nova's security requirements?


-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev