Re: Proposed F19 Feature: High Availability Container Resources
On 02/01/2013 03:55 PM, David Vossel wrote:
> On Friday, February 1, 2013 10:09:27 AM, Daniel J Walsh <dwa...@redhat.com> wrote:
>> On 01/29/2013 03:17 PM, Glauber Costa wrote:
>>>>>> = Features/ High Availability Container Resources =
>>>>>> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
>>>>>> Feature owner(s): David Vossel <dvos...@redhat.com>
>>>>>>
>>>>>> The Container Resources feature allows the HA stack (Pacemaker +
>>>>>> Corosync) residing on a host machine to extend management of
>>>>>> resources into virtual guest instances (KVM/LXC).
>>>>>
>>>>> Is this about LXC or libvirt-lxc? These two are entirely different
>>>>> projects, sharing no code, which makes me wonder which project is
>>>>> meant here?
>>>>
>>>> Yep, I left that vague and should have used the term "linux
>>>> containers" instead of LXC. I'm going to update the page to reflect
>>>> this.
>>>>
>>>> Architecturally, this feature doesn't care which project
>>>> manages/initiates the container. All we care about is that the
>>>> container has its own isolated network namespace that is reachable
>>>> from the host (or whatever node is remotely managing the resources
>>>> within the container). I intentionally chose tcp/tls as the first
>>>> transport we will support to avoid locking this feature into any
>>>> specific virt technology.
>>>>
>>>> With that said, I'm likely going to focus my test cases on
>>>> libvirt-lxc, just because it seems to have better Fedora support.
>>>> The LXC project appears to be moving all over the place. Part of the
>>>> project is really to identify good use-cases for linux containers in
>>>> an HA environment. The kvm use-case is fairly straightforward and
>>>> well understood, though. I'll update the page to list the linux
>>>> container use-case as a possible risk.
>>>
>>> Please also keep in mind that LXC usually refers to a specific
>>> project, either the original lxc code or libvirt-lxc. We have other
>>> container solutions in Fedora, like OpenVZ. You may be able to reach
>>> a broader base by making your solution work on those too (and of
>>> course, I'd be more than happy to help trim any issues you may find).
>>>
>>> -- E Mare, Libertas
>>
>> I would like to also understand how we can work together with
>> virt-sandbox (Secure Linux Containers).
>
> Really interesting idea. Integrating with virt-sandbox would allow the
> cluster to dynamically launch resources in a contained environment. My
> understanding is that this contained environment would give users the
> ability to automatically set cpu and memory usage limits for a
> resource, as well as isolate that resource's access to the rest of the
> system. Everywhere that resource gets launched in the cluster, it gets
> the exact same environment.
>
> For the HA config we could do this in a really slick way. We could just
> allow people to start defining environment details (number of CPUs,
> memory usage, network settings) in the resource definition. Then when
> it's time to launch the resource, if we have certain environment
> details associated with it, we'll launch the resource in a dynamically
> created guest sandbox environment instead of directly within the host.
> This is really brilliant... Conceptually, it is like we are creating a
> virtual machine image on the fly for a resource to start in, one that
> follows the resource wherever it goes in the cluster. This would be fun
> to talk through sometime.
>
> The remote LRMD daemon I'm working on would be the piece of the puzzle
> that allows the HA stack to reach into the contained environment to
> start/stop/monitor the resource living in the container.
>
> -- Vossel

I think you would also want to talk to Dan Berrange and Vivek Goyal, who are designing some higher-level concepts for resource controls.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
Re: Proposed F19 Feature: High Availability Container Resources
On 01/29/2013 03:17 PM, Glauber Costa wrote:
>>>> = Features/ High Availability Container Resources =
>>>> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
>>>> Feature owner(s): David Vossel <dvos...@redhat.com>
>>>>
>>>> The Container Resources feature allows the HA stack (Pacemaker +
>>>> Corosync) residing on a host machine to extend management of
>>>> resources into virtual guest instances (KVM/LXC).
>>>
>>> Is this about LXC or libvirt-lxc? These two are entirely different
>>> projects, sharing no code, which makes me wonder which project is
>>> meant here?
>>
>> Yep, I left that vague and should have used the term "linux
>> containers" instead of LXC. I'm going to update the page to reflect
>> this.
>>
>> Architecturally, this feature doesn't care which project
>> manages/initiates the container. All we care about is that the
>> container has its own isolated network namespace that is reachable
>> from the host (or whatever node is remotely managing the resources
>> within the container). I intentionally chose tcp/tls as the first
>> transport we will support to avoid locking this feature into any
>> specific virt technology.
>>
>> With that said, I'm likely going to focus my test cases on
>> libvirt-lxc, just because it seems to have better Fedora support. The
>> LXC project appears to be moving all over the place. Part of the
>> project is really to identify good use-cases for linux containers in
>> an HA environment. The kvm use-case is fairly straightforward and well
>> understood, though. I'll update the page to list the linux container
>> use-case as a possible risk.
>
> Please also keep in mind that LXC usually refers to a specific project,
> either the original lxc code or libvirt-lxc. We have other container
> solutions in Fedora, like OpenVZ. You may be able to reach a broader
> base by making your solution work on those too (and of course, I'd be
> more than happy to help trim any issues you may find).
>
> -- E Mare, Libertas

I would like to also understand how we can work together with virt-sandbox (Secure Linux Containers).
Re: Proposed F19 Feature: High Availability Container Resources
On Friday, February 1, 2013 10:09:27 AM, Daniel J Walsh <dwa...@redhat.com> wrote:
> On 01/29/2013 03:17 PM, Glauber Costa wrote:
>>>>> = Features/ High Availability Container Resources =
>>>>> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
>>>>> Feature owner(s): David Vossel <dvos...@redhat.com>
>>>>>
>>>>> The Container Resources feature allows the HA stack (Pacemaker +
>>>>> Corosync) residing on a host machine to extend management of
>>>>> resources into virtual guest instances (KVM/LXC).
>>>>
>>>> Is this about LXC or libvirt-lxc? These two are entirely different
>>>> projects, sharing no code, which makes me wonder which project is
>>>> meant here?
>>>
>>> Yep, I left that vague and should have used the term "linux
>>> containers" instead of LXC. I'm going to update the page to reflect
>>> this.
>>>
>>> Architecturally, this feature doesn't care which project
>>> manages/initiates the container. All we care about is that the
>>> container has its own isolated network namespace that is reachable
>>> from the host (or whatever node is remotely managing the resources
>>> within the container). I intentionally chose tcp/tls as the first
>>> transport we will support to avoid locking this feature into any
>>> specific virt technology.
>>>
>>> With that said, I'm likely going to focus my test cases on
>>> libvirt-lxc, just because it seems to have better Fedora support. The
>>> LXC project appears to be moving all over the place. Part of the
>>> project is really to identify good use-cases for linux containers in
>>> an HA environment. The kvm use-case is fairly straightforward and
>>> well understood, though. I'll update the page to list the linux
>>> container use-case as a possible risk.
>>
>> Please also keep in mind that LXC usually refers to a specific
>> project, either the original lxc code or libvirt-lxc. We have other
>> container solutions in Fedora, like OpenVZ. You may be able to reach a
>> broader base by making your solution work on those too (and of course,
>> I'd be more than happy to help trim any issues you may find).
>>
>> -- E Mare, Libertas
>
> I would like to also understand how we can work together with
> virt-sandbox (Secure Linux Containers).

Really interesting idea. Integrating with virt-sandbox would allow the cluster to dynamically launch resources in a contained environment. My understanding is that this contained environment would give users the ability to automatically set cpu and memory usage limits for a resource, as well as isolate that resource's access to the rest of the system. Everywhere that resource gets launched in the cluster, it gets the exact same environment.

For the HA config we could do this in a really slick way. We could just allow people to start defining environment details (number of CPUs, memory usage, network settings) in the resource definition. Then when it's time to launch the resource, if we have certain environment details associated with it, we'll launch the resource in a dynamically created guest sandbox environment instead of directly within the host. This is really brilliant... Conceptually, it is like we are creating a virtual machine image on the fly for a resource to start in, one that follows the resource wherever it goes in the cluster. This would be fun to talk through sometime.

The remote LRMD daemon I'm working on would be the piece of the puzzle that allows the HA stack to reach into the contained environment to start/stop/monitor the resource living in the container.

-- Vossel
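[Editorial note: Vossel's "environment details in the resource definition" idea can be made concrete with a configuration sketch. To be clear, the `sandbox-*` meta attributes below are invented purely for illustration; no such Pacemaker/pcs syntax existed at the time of this thread.]

```shell
# Hypothetical sketch only: the sandbox-* meta attributes are invented to
# illustrate attaching environment details (CPUs, memory, network) to a
# resource definition. Wherever the cluster places this resource, it would
# first build a matching virt-sandbox environment and start the resource
# inside it, instead of directly on the host.
pcs resource create web-db mysql \
    meta sandbox-cpus=2 \
         sandbox-memory=512M \
         sandbox-network=isolated
```

The point of the sketch is that the sandbox becomes part of the resource's definition, so the identical contained environment follows the resource to whichever node runs it.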
Re: Proposed F19 Feature: High Availability Container Resources
On 29/01/2013, at 7:46 AM, David Vossel <dvos...@redhat.com> wrote:
> On Monday, January 28, 2013 1:18:06 PM, Bill Nottingham <nott...@redhat.com> wrote:
>> Jaroslav Reznik (jrez...@redhat.com) said:
>>> = Features/ High Availability Container Resources =
>>> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
>>> Feature owner(s): David Vossel <dvos...@redhat.com>
>>>
>>> The Container Resources feature allows the HA stack (Pacemaker +
>>> Corosync) residing on a host machine to extend management of
>>> resources into virtual guest instances (KVM/LXC).
>>>
>>> == Detailed description ==
>>> This feature is in response to the growing desire for high
>>> availability functionality to be extended outside of the host into
>>> virtual guest instances. Pacemaker is currently capable of managing
>>> virtual guests, meaning Pacemaker can start/stop/monitor/migrate
>>> virtual guests anywhere in the cluster, but Pacemaker has no ability
>>> to manage the resources that live within the virtual guests. At the
>>> moment these virtual guests are very much a black box to Pacemaker.
>>> The Container Resources feature changes this by giving Pacemaker the
>>> ability to reach into the virtual guests and manage resources in the
>>> exact same way resources are managed on the host nodes. Ultimately
>>> this gives the HA stack the ability to manage resources across all
>>> the nodes in the cluster, as well as any virtual guests that reside
>>> within those cluster nodes.
>>
>> Does this require the management to live on the virtual host, or can
>> it be done entirely remotely, with the cluster management server
>> residing elsewhere and talking to all the virtual guest instances
>> directly?
>
> Management can be done entirely remotely from any cluster node running
> the HA stack. There are no location restrictions. We are not restricted
> to the remote instance being a virtual guest either; it could be bare
> metal for all we care.
>
> Initially the CLI management tools and documentation will focus on the
> virtual-guest use case, where the management is performed on the
> virtual host machine. This just means we are planning on making that a
> very easy use-case to configure. The tools will work outside of this
> use-case too; it will just take a little more knowledge from the user.

Just to explicitly call this out, we will be supporting two use-cases:

- whitebox, where a remote agent is installed on the guest (or non-clustered machine)
- blackbox, where there is _nothing_ installed on the guest (or non-clustered machine)

For the blackbox case you are obviously limiting yourself to testing externally exposed APIs to determine status, and not being able to start/stop the services directly. We are adding support for nagios scripts, which seem popular for this task.

-- Andrew
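[Editorial note: the blackbox case Andrew describes (probe externally exposed APIs, since nothing is installed in the guest) is what nagios-style plugins do. Plugins report health through conventional exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). A minimal sketch follows; the HTTP endpoint, placeholder address, and use of curl are illustrative assumptions, not something from this thread.]

```shell
#!/bin/sh
# Minimal nagios-style check for the blackbox case: nothing runs inside
# the guest, so we probe an externally exposed HTTP endpoint from outside.
# Nagios plugin convention: exit 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
check_service() {
    url="$1"
    if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
        echo "OK - $url is responding"
        return 0
    else
        echo "CRITICAL - $url is not responding"
        return 2
    fi
}

# Placeholder guest address; a real plugin would finish with `exit $rc`
# so the cluster's monitor operation sees the nagios status code.
check_service "${1:-http://192.0.2.10/}" || rc=$?
```

The cluster would invoke a script like this on a monitor interval and treat CRITICAL as a failed resource, even though it can only observe the service from outside.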
Re: Proposed F19 Feature: High Availability Container Resources
On Monday, January 28, 2013 4:10:00 PM, Lennart Poettering <mzerq...@0pointer.de> wrote:
> On Sun, 27.01.13 17:32, Jaroslav Reznik (jrez...@redhat.com) wrote:
>> = Features/ High Availability Container Resources =
>> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
>> Feature owner(s): David Vossel <dvos...@redhat.com>
>>
>> The Container Resources feature allows the HA stack (Pacemaker +
>> Corosync) residing on a host machine to extend management of resources
>> into virtual guest instances (KVM/LXC).
>
> Is this about LXC or libvirt-lxc? These two are entirely different
> projects, sharing no code, which makes me wonder which project is meant
> here?
>
> Lennart
>
> --
> Lennart Poettering - Red Hat, Inc.

Yep, I left that vague and should have used the term "linux containers" instead of LXC. I'm going to update the page to reflect this.

Architecturally, this feature doesn't care which project manages/initiates the container. All we care about is that the container has its own isolated network namespace that is reachable from the host (or whatever node is remotely managing the resources within the container). I intentionally chose tcp/tls as the first transport we will support to avoid locking this feature into any specific virt technology.

With that said, I'm likely going to focus my test cases on libvirt-lxc, just because it seems to have better Fedora support. The LXC project appears to be moving all over the place. Part of the project is really to identify good use-cases for linux containers in an HA environment. The kvm use-case is fairly straightforward and well understood, though. I'll update the page to list the linux container use-case as a possible risk.

(Lennart, sorry, you're getting this response twice.)

-- Vossel
Re: Proposed F19 Feature: High Availability Container Resources
>>> = Features/ High Availability Container Resources =
>>> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
>>> Feature owner(s): David Vossel <dvos...@redhat.com>
>>>
>>> The Container Resources feature allows the HA stack (Pacemaker +
>>> Corosync) residing on a host machine to extend management of resources
>>> into virtual guest instances (KVM/LXC).
>>
>> Is this about LXC or libvirt-lxc? These two are entirely different
>> projects, sharing no code, which makes me wonder which project is meant
>> here?
>
> Yep, I left that vague and should have used the term "linux containers"
> instead of LXC. I'm going to update the page to reflect this.
>
> Architecturally, this feature doesn't care which project
> manages/initiates the container. All we care about is that the container
> has its own isolated network namespace that is reachable from the host
> (or whatever node is remotely managing the resources within the
> container). I intentionally chose tcp/tls as the first transport we will
> support to avoid locking this feature into any specific virt technology.
>
> With that said, I'm likely going to focus my test cases on libvirt-lxc,
> just because it seems to have better Fedora support. The LXC project
> appears to be moving all over the place. Part of the project is really
> to identify good use-cases for linux containers in an HA environment.
> The kvm use-case is fairly straightforward and well understood, though.
> I'll update the page to list the linux container use-case as a possible
> risk.

Please also keep in mind that LXC usually refers to a specific project, either the original lxc code or libvirt-lxc. We have other container solutions in Fedora, like OpenVZ. You may be able to reach a broader base by making your solution work on those too (and of course, I'd be more than happy to help trim any issues you may find).

-- E Mare, Libertas
Re: Proposed F19 Feature: High Availability Container Resources
Jaroslav Reznik (jrez...@redhat.com) said:
> = Features/ High Availability Container Resources =
> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
> Feature owner(s): David Vossel <dvos...@redhat.com>
>
> The Container Resources feature allows the HA stack (Pacemaker +
> Corosync) residing on a host machine to extend management of resources
> into virtual guest instances (KVM/LXC).
>
> == Detailed description ==
> This feature is in response to the growing desire for high availability
> functionality to be extended outside of the host into virtual guest
> instances. Pacemaker is currently capable of managing virtual guests,
> meaning Pacemaker can start/stop/monitor/migrate virtual guests anywhere
> in the cluster, but Pacemaker has no ability to manage the resources
> that live within the virtual guests. At the moment these virtual guests
> are very much a black box to Pacemaker. The Container Resources feature
> changes this by giving Pacemaker the ability to reach into the virtual
> guests and manage resources in the exact same way resources are managed
> on the host nodes. Ultimately this gives the HA stack the ability to
> manage resources across all the nodes in the cluster, as well as any
> virtual guests that reside within those cluster nodes.

Does this require the management to live on the virtual host, or can it be done entirely remotely, with the cluster management server residing elsewhere and talking to all the virtual guest instances directly?

Bill
Re: Proposed F19 Feature: High Availability Container Resources
On Monday, January 28, 2013 1:18:06 PM, Bill Nottingham <nott...@redhat.com> wrote:
> Jaroslav Reznik (jrez...@redhat.com) said:
>> = Features/ High Availability Container Resources =
>> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
>> Feature owner(s): David Vossel <dvos...@redhat.com>
>>
>> The Container Resources feature allows the HA stack (Pacemaker +
>> Corosync) residing on a host machine to extend management of resources
>> into virtual guest instances (KVM/LXC).
>>
>> == Detailed description ==
>> This feature is in response to the growing desire for high
>> availability functionality to be extended outside of the host into
>> virtual guest instances. Pacemaker is currently capable of managing
>> virtual guests, meaning Pacemaker can start/stop/monitor/migrate
>> virtual guests anywhere in the cluster, but Pacemaker has no ability
>> to manage the resources that live within the virtual guests. At the
>> moment these virtual guests are very much a black box to Pacemaker.
>> The Container Resources feature changes this by giving Pacemaker the
>> ability to reach into the virtual guests and manage resources in the
>> exact same way resources are managed on the host nodes. Ultimately
>> this gives the HA stack the ability to manage resources across all the
>> nodes in the cluster, as well as any virtual guests that reside within
>> those cluster nodes.
>
> Does this require the management to live on the virtual host, or can it
> be done entirely remotely, with the cluster management server residing
> elsewhere and talking to all the virtual guest instances directly?
>
> Bill

Management can be done entirely remotely from any cluster node running the HA stack. There are no location restrictions. We are not restricted to the remote instance being a virtual guest either; it could be bare metal for all we care.

Initially the CLI management tools and documentation will focus on the virtual-guest use case, where the management is performed on the virtual host machine. This just means we are planning on making that a very easy use-case to configure. The tools will work outside of this use-case too; it will just take a little more knowledge from the user.

-- Vossel
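[Editorial note: a sketch of the "easy" virtual-guest use case described above, using pcs (Fedora's Pacemaker CLI). It assumes Pacemaker's remote-node meta attribute, the mechanism by which a managed VM is also treated as a place to run resources via the remote LRMD (pacemaker_remote). The guest name and libvirt config path are placeholders, and exact syntax depends on the Pacemaker/pcs versions.]

```shell
# The cluster manages the VM itself as a VirtualDomain resource, and
# remote-node tells Pacemaker it may also run resources inside that
# guest through the remote LRMD. Names and paths are placeholders.
pcs resource create guest1-vm VirtualDomain \
    hypervisor="qemu:///system" \
    config="/etc/libvirt/qemu/guest1.xml" \
    meta remote-node=guest1

# Once guest1 is up, resources can be placed inside it like on any node:
pcs resource create guest1-httpd apache
pcs constraint location guest1-httpd prefers guest1
```

This is what "no location restrictions" buys: from the cluster's point of view, guest1 is just another node that resources can prefer, avoid, or fail over to.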
Re: Proposed F19 Feature: High Availability Container Resources
On Sun, 27.01.13 17:32, Jaroslav Reznik (jrez...@redhat.com) wrote:
> = Features/ High Availability Container Resources =
> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
> Feature owner(s): David Vossel <dvos...@redhat.com>
>
> The Container Resources feature allows the HA stack (Pacemaker +
> Corosync) residing on a host machine to extend management of resources
> into virtual guest instances (KVM/LXC).

Is this about LXC or libvirt-lxc? These two are entirely different projects, sharing no code, which makes me wonder which project is meant here?

Lennart

-- 
Lennart Poettering - Red Hat, Inc.