[lxc-devel] [lxc/lxc] c5e7a7: Revert "cgfsng: avoid tiny race window"
Branch: refs/heads/master
Home: https://github.com/lxc/lxc

Commit: c5e7a7acbf23f0c267179b3318af41423b39493a
    https://github.com/lxc/lxc/commit/c5e7a7acbf23f0c267179b3318af41423b39493a
Author: Stéphane Graber
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/lxc/cgroups/cgfsng.c
  M src/lxc/utils.c

Log Message:
---
Revert "cgfsng: avoid tiny race window"

This reverts commit 17e55991744576bca20e370a6d829da99c3fc801.

Signed-off-by: Stéphane Graber

**NOTE:** This service has been marked for deprecation: https://developer.github.com/changes/2018-04-25-github-services-deprecation/

Functionality will be removed from GitHub.com on January 31st, 2019.

___
lxc-devel mailing list
lxc-devel@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-devel
[lxc-devel] [lxd/master] candid: Cleanup code a bit
The following pull request was submitted through Github.
It can be accessed and reviewed at: https://github.com/lxc/lxd/pull/5095

This e-mail was sent by the LXC bot, direct replies will not reach the author unless they happen to be subscribed to this list.

=== Description (from pull-request) ===
Signed-off-by: Stéphane Graber

From f12d912d23ad90e277286999d76d6fdef93bf19f Mon Sep 17 00:00:00 2001
From: Stéphane Graber
Date: Tue, 2 Oct 2018 17:26:43 -0400
Subject: [PATCH] candid: Cleanup code a bit
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Stéphane Graber
---
 lxc/remote.go |  3 +--
 lxd/daemon.go | 18 +++---
 2 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/lxc/remote.go b/lxc/remote.go
index 3a418af9b0..d359a38728 100644
--- a/lxc/remote.go
+++ b/lxc/remote.go
@@ -235,8 +235,7 @@ func (c *cmdRemoteAdd) Run(cmd *cobra.Command, args []string) error {
 				uri.RawQuery = query.Encode()
 			}

-			fmt.Println(uri)
-			return nil
+			return httpbakery.OpenWebBrowser(uri)
 		},
 	},
 })

diff --git a/lxd/daemon.go b/lxd/daemon.go
index 2a94c65cf9..5f670d29a8 100644
--- a/lxd/daemon.go
+++ b/lxd/daemon.go
@@ -182,9 +182,13 @@ func (d *Daemon) checkTrustedClient(r *http.Request) error {
 	if d.externalAuth != nil && r.Header.Get(httpbakery.BakeryProtocolHeader) != "" {
 		ctx := httpbakery.ContextWithRequest(context.TODO(), r)
-		authChecker := d.externalAuth.bakery.Checker.Auth(
-			httpbakery.RequestMacaroons(r)...)
-		ops := getBakeryOps(r)
+		authChecker := d.externalAuth.bakery.Checker.Auth(httpbakery.RequestMacaroons(r)...)
+
+		ops := []bakery.Op{{
+			Entity: r.URL.Path,
+			Action: r.Method,
+		}}
+
 		_, err := authChecker.Allow(ctx, ops...)
 		return err
 	}

@@ -198,14 +202,6 @@ func (d *Daemon) checkTrustedClient(r *http.Request) error {
 	return fmt.Errorf("unauthorized")
 }

-// Return the bakery operations implied by the given HTTP request
-func getBakeryOps(r *http.Request) []bakery.Op {
-	return []bakery.Op{{
-		Entity: r.URL.Path,
-		Action: r.Method,
-	}}
-}
-
 func writeMacaroonsRequiredResponse(b *identchecker.Bakery, r *http.Request, w http.ResponseWriter, derr *bakery.DischargeRequiredError, expiry int64) {
 	ctx := httpbakery.ContextWithRequest(context.TODO(), r)
 	caveats := append(derr.Caveats,
[lxc-devel] [lxc/lxc] c7f493: utils: fix lxc_set_death_signal()
Branch: refs/heads/master
Home: https://github.com/lxc/lxc

Commit: c7f493aee01806ec154d2af5c84a41a9baeecbe2
    https://github.com/lxc/lxc/commit/c7f493aee01806ec154d2af5c84a41a9baeecbe2
Author: Christian Brauner
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/lxc/start.c
  M src/lxc/utils.c
  M src/lxc/utils.h

Log Message:
---
utils: fix lxc_set_death_signal()

Signed-off-by: Christian Brauner

Commit: a153a470b30e1b5061b4e8986018b4d7bf600602
    https://github.com/lxc/lxc/commit/a153a470b30e1b5061b4e8986018b4d7bf600602
Author: Stéphane Graber
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/lxc/start.c
  M src/lxc/utils.c
  M src/lxc/utils.h

Log Message:
---
Merge pull request #2669 from brauner/2018-10-02/bugfixes

utils: fix lxc_set_death_signal()

Compare: https://github.com/lxc/lxc/compare/54b38b25b195...a153a470b30e
[lxc-devel] [lxc/master] utils: fix lxc_set_death_signal()
The following pull request was submitted through Github.
It can be accessed and reviewed at: https://github.com/lxc/lxc/pull/2669

This e-mail was sent by the LXC bot, direct replies will not reach the author unless they happen to be subscribed to this list.

=== Description (from pull-request) ===
Signed-off-by: Christian Brauner

From c7f493aee01806ec154d2af5c84a41a9baeecbe2 Mon Sep 17 00:00:00 2001
From: Christian Brauner
Date: Tue, 2 Oct 2018 20:59:34 +0200
Subject: [PATCH] utils: fix lxc_set_death_signal()

Signed-off-by: Christian Brauner
---
 src/lxc/start.c | 6 +++---
 src/lxc/utils.c | 9 +++--
 src/lxc/utils.h | 2 +-
 3 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/src/lxc/start.c b/src/lxc/start.c
index 4f805525b..5899ea07b 100644
--- a/src/lxc/start.c
+++ b/src/lxc/start.c
@@ -1068,7 +1068,7 @@ static int do_start(void *data)
 	 * exit before we set the pdeath signal leading to a unsupervized
 	 * container.
 	 */
-	ret = lxc_set_death_signal(SIGKILL);
+	ret = lxc_set_death_signal(SIGKILL, 0);
 	if (ret < 0) {
 		SYSERROR("Failed to set PR_SET_PDEATHSIG to SIGKILL");
 		goto out_warn_father;
@@ -1146,7 +1146,7 @@ static int do_start(void *data)
 		goto out_warn_father;

 	/* set{g,u}id() clears deathsignal */
-	ret = lxc_set_death_signal(SIGKILL);
+	ret = lxc_set_death_signal(SIGKILL, 0);
 	if (ret < 0) {
 		SYSERROR("Failed to set PR_SET_PDEATHSIG to SIGKILL");
 		goto out_warn_father;
@@ -1388,7 +1388,7 @@ static int do_start(void *data)
 	}

 	if (handler->conf->monitor_signal_pdeath != SIGKILL) {
-		ret = lxc_set_death_signal(handler->conf->monitor_signal_pdeath);
+		ret = lxc_set_death_signal(handler->conf->monitor_signal_pdeath, 0);
 		if (ret < 0) {
 			SYSERROR("Failed to set PR_SET_PDEATHSIG to %d",
 				 handler->conf->monitor_signal_pdeath);

diff --git a/src/lxc/utils.c b/src/lxc/utils.c
index af0190fa3..1af6f512c 100644
--- a/src/lxc/utils.c
+++ b/src/lxc/utils.c
@@ -1652,7 +1652,7 @@ uint64_t lxc_find_next_power2(uint64_t n)
 	return n;
 }

-int lxc_set_death_signal(int signal)
+int lxc_set_death_signal(int signal, pid_t parent)
 {
 	int ret;
 	pid_t ppid;
@@ -1662,11 +1662,8 @@ int lxc_set_death_signal(int signal)
 	/* Check whether we have been orphaned. */
 	ppid = (pid_t)syscall(SYS_getppid);
-	if (ppid == 1) {
-		pid_t self;
-
-		self = lxc_raw_getpid();
-		ret = kill(self, SIGKILL);
+	if (ppid != parent) {
+		ret = raise(SIGKILL);
 		if (ret < 0)
 			return -1;
 	}

diff --git a/src/lxc/utils.h b/src/lxc/utils.h
index 6e53f71a1..7bb361cfb 100644
--- a/src/lxc/utils.h
+++ b/src/lxc/utils.h
@@ -434,7 +434,7 @@ static inline pid_t lxc_raw_gettid(void)
 }

 /* Set a signal the child process will receive after the parent has died. */
-extern int lxc_set_death_signal(int signal);
+extern int lxc_set_death_signal(int signal, pid_t parent);
 extern int fd_cloexec(int fd, bool cloexec);
 extern int recursive_destroy(char *dirname);
 extern int lxc_setup_keyring(void);
[lxc-devel] [lxc/lxc] ee455b: cgfsng: do not reuse another monitor's cgroup
Branch: refs/heads/master
Home: https://github.com/lxc/lxc

Commit: ee455be41cd127dd9fb37f35a2ee826e637bb76d
    https://github.com/lxc/lxc/commit/ee455be41cd127dd9fb37f35a2ee826e637bb76d
Author: Christian Brauner
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/lxc/cgroups/cgfsng.c

Log Message:
---
cgfsng: do not reuse another monitor's cgroup

Otherwise we will create a race.

Signed-off-by: Christian Brauner

Commit: 17e55991744576bca20e370a6d829da99c3fc801
    https://github.com/lxc/lxc/commit/17e55991744576bca20e370a6d829da99c3fc801
Author: Christian Brauner
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/lxc/cgroups/cgfsng.c
  M src/lxc/utils.c

Log Message:
---
cgfsng: avoid tiny race window

Signed-off-by: Christian Brauner

Commit: 54b38b25b195cc699b41fd2ad9aba2fdfaba6d55
    https://github.com/lxc/lxc/commit/54b38b25b195cc699b41fd2ad9aba2fdfaba6d55
Author: Stéphane Graber
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/lxc/cgroups/cgfsng.c
  M src/lxc/utils.c

Log Message:
---
Merge pull request #2668 from brauner/2018-10-02/cgroups_monitor_fixes

cgfsng: do not reuse another monitor's cgroup

Compare: https://github.com/lxc/lxc/compare/7040a77e8fb7...54b38b25b195
[lxc-devel] [lxd/master] Document LVM support for storage quotas
The following pull request was submitted through Github.
It can be accessed and reviewed at: https://github.com/lxc/lxd/pull/5094

This e-mail was sent by the LXC bot, direct replies will not reach the author unless they happen to be subscribed to this list.

=== Description (from pull-request) ===
The LVM backend supports storage quotas, while the documentation still refers to this feature as unsupported:

ac1740d7092bf0dfb5e70cc66330e60b68e95613
f88e381d2d7c63c3a35f82691cd3627bf029c2b8
https://github.com/lxc/lxd/issues/1205

Signed-off-by: Dmitrii Shcherbakov

From f9cddab396290799e798cca0b6881cc035686908 Mon Sep 17 00:00:00 2001
From: Dmitrii Shcherbakov
Date: Tue, 2 Oct 2018 21:11:54 +0300
Subject: [PATCH] Document LVM support for storage quotas

The LVM backend supports storage quotas, while the documentation still refers to this feature as unsupported:

ac1740d7092bf0dfb5e70cc66330e60b68e95613
f88e381d2d7c63c3a35f82691cd3627bf029c2b8
https://github.com/lxc/lxd/issues/1205

Signed-off-by: Dmitrii Shcherbakov
---
 doc/storage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/storage.md b/doc/storage.md
index 58946c6bba..2381fe609f 100644
--- a/doc/storage.md
+++ b/doc/storage.md
@@ -69,7 +69,7 @@ Block based | no | no | yes | no |
 Instant cloning | no | yes | yes | yes | yes
 Storage driver usable inside a container | yes | yes | no | no | no
 Restore from older snapshots (not latest) | yes | yes | yes | no | yes
-Storage quotas | no | yes | no | yes | no
+Storage quotas | no | yes | yes | yes | no

 ## Recommended setup
 The two best options for use with LXD are ZFS and btrfs.
Re: [lxc-devel] Integration Kubernetes and LXD/LXC
On Tue, 02 Oct 2018 18:04:12 +0200
Free Ekanayaka wrote:

> Yeah, that's probably the key point indeed. The design is based on the
> "pets vs cattle" idea, so the expectation is that you are fine with
> any of your pods being restarted at any time. And if you use
> kubernetes, you'll have a hard time going against this design
> principle.

Yes, but it is possible. Choosing the right container engine to avoid restarts is important.

> That being said, one thing that is still not clear to me is why k8s
> would issue disruptive restart requests against your (MySQL?) pods.
> Does it happen as a consequence of some manifest/spec change? I'd
> expect k8s to not restart stuff unless there's a good reason to do
> that.

The good reason is e.g. because it's easy: if you hash your pod attributes to calculate the pod's name and you extend your pod data structure in the next kubernetes release, you will get a new hash, which will be different. So for the kubelet, this *is* a good reason, because they needed a simple implementation to compare current state and desired state. For the purpose of being online 24/7, this is a bad reason of course.

So they have good reasons to do what they do - but they thought from the point of view that restarting stuff is not a big deal. For a production SQL master server, this *is* a big deal. What we want to achieve is some awareness of the requirement to be online 24/7.

> If you have a cluster of stateful pods that need special orchestration
> to handle restarts (say the master must first perform a graceful
> failover or things like that), I think the recommended way these days
> would be to implement your own "operator" which drives the dance in
> the way you want. See:
>
> https://coreos.com/operators/

Exactly, that is the way to go: controllers for operations - even if I can't really say that we have experience with the CoreOS operators. However: that is the pattern.

> I don't know the details, but I would expect such an operator to handle
> any operation gracefully, including possible restarts (but, again, I
> think those shouldn't really happen quite that often in a stable
> production environment).

There are some problems with that approach as well, but we use that pattern.

Best Regards
Oli
Re: [lxc-devel] Integration Kubernetes and LXD/LXC
Oliver Schad writes:

> On Tue, 02 Oct 2018 16:49:36 +0200
> Free Ekanayaka wrote:
>
>> I know that folks do run stateful services on k8s, PostgreSQL is one
>> of those IIRC. I wouldn't expect MySQL to be fundamentally different.
>
> Sorry, I have to repeat my point: if the container engine isn't made
> to run 24/7, which includes that a container must be mutable(!), you're
> out of business.
>
> So the foundation of a stateful (and high quality) service is the
> container engine.
>
> In this area, Docker and cri-o are broken by design for that purpose.
>
>> Although LXE might be an approach that solves your immediate needs, it
>> feels like a band aid. If you haven't already, I'd recommend
>> approaching the k8s team/community describing the issues that you're
>> seeing when using standard CRI implementations such as
>> containerd/docker and cri-o.
>
> We did, no answer. But we know the answer for a lot of topics we
> researched: the design goal was to deal without state.
>
> All the stuff with state is just a gimmick for in fact restartable
> services.
>
> We did a lot of kubernetes, read a lot of code, read a lot of comments
> on a lot of issues and the result is: 24/7 always online is *not* a
> design goal of docker or cri-o, nor of CRI, nor of Kubernetes.

Yeah, that's probably the key point indeed. The design is based on the "pets vs cattle" idea, so the expectation is that you are fine with any of your pods being restarted at any time. And if you use kubernetes, you'll have a hard time going against this design principle.

That being said, one thing that is still not clear to me is why k8s would issue disruptive restart requests against your (MySQL?) pods. Does it happen as a consequence of some manifest/spec change? I'd expect k8s to not restart stuff unless there's a good reason to do that.

If you have a cluster of stateful pods that need special orchestration to handle restarts (say the master must first perform a graceful failover or things like that), I think the recommended way these days would be to implement your own "operator" which drives the dance in the way you want. See:

https://coreos.com/operators/

(the example operators include stateful services such as etcd and vault). For MySQL specifically, Oracle itself seems to have written an operator:

https://medium.com/oracledevs/introducing-the-oracle-mysql-operator-for-kubernetes-b06bd0608726

I don't know the details, but I would expect such an operator to handle any operation gracefully, including possible restarts (but, again, I think those shouldn't really happen quite that often in a stable production environment).

Free
[lxc-devel] [lxc/master] cgfsng: do not reuse another monitor's cgroup
The following pull request was submitted through Github.
It can be accessed and reviewed at: https://github.com/lxc/lxc/pull/2668

This e-mail was sent by the LXC bot, direct replies will not reach the author unless they happen to be subscribed to this list.

=== Description (from pull-request) ===
Otherwise we will create a race.

Signed-off-by: Christian Brauner

From ee455be41cd127dd9fb37f35a2ee826e637bb76d Mon Sep 17 00:00:00 2001
From: Christian Brauner
Date: Tue, 2 Oct 2018 17:27:55 +0200
Subject: [PATCH] cgfsng: do not reuse another monitor's cgroup

Otherwise we will create a race.

Signed-off-by: Christian Brauner
---
 src/lxc/cgroups/cgfsng.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/lxc/cgroups/cgfsng.c b/src/lxc/cgroups/cgfsng.c
index e248db31a..0fc9b11d2 100644
--- a/src/lxc/cgroups/cgfsng.c
+++ b/src/lxc/cgroups/cgfsng.c
@@ -1262,8 +1262,10 @@ static bool monitor_create_path_for_hierarchy(struct hierarchy *h, char *cgname)
 	int ret;

 	h->monitor_full_path = must_make_path(h->mountpoint, h->container_base_path, cgname, NULL);
-	if (dir_exists(h->monitor_full_path))
-		return true;
+	if (dir_exists(h->monitor_full_path)) {
+		ERROR("The cgroup \"%s\" already existed", h->monitor_full_path);
+		return false;
+	}

 	if (!cg_legacy_handle_cpuset_hierarchy(h, cgname)) {
 		ERROR("Failed to handle legacy cpuset controller");
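The patch above turns silent reuse of an existing monitor cgroup into a hard error. A sketch of the same policy, but letting mkdir() itself do the existence check so that test and creation become one atomic step (the function name and error text below are illustrative, not LXC's implementation):

```c
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

/* Illustrative sketch, not LXC's code: create the monitor cgroup
 * directory and refuse to proceed if it already exists.  Because
 * mkdir() fails with EEXIST atomically, two monitors racing on the
 * same path can never both believe they own it. */
static int create_monitor_cgroup(const char *path)
{
	if (mkdir(path, 0755) == 0)
		return 0;

	if (errno == EEXIST)
		fprintf(stderr, "The cgroup \"%s\" already existed\n", path);

	return -1;
}
```

A separate dir_exists()-then-create sequence, as in the hunk above, still leaves a tiny window between the check and the creation; relying on mkdir()'s EEXIST is one way to remove it entirely.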
[lxc-devel] [lxc/lxc] 2291ea: parse: prefault config file with MAP_POPULATE
Branch: refs/heads/master
Home: https://github.com/lxc/lxc

Commit: 2291ea4a1a40f67d76a108e96cee83cb65038f9a
    https://github.com/lxc/lxc/commit/2291ea4a1a40f67d76a108e96cee83cb65038f9a
Author: Christian Brauner
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/lxc/parse.c

Log Message:
---
parse: prefault config file with MAP_POPULATE

When we call lxc_file_for_each_line_mmap() we will always parse the
whole config file. Prefault it in case it is really long to optimize
performance.

Signed-off-by: Christian Brauner

Commit: 7040a77e8fb7a9f4311433d30b8dcc2028acf694
    https://github.com/lxc/lxc/commit/7040a77e8fb7a9f4311433d30b8dcc2028acf694
Author: Stéphane Graber
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/lxc/parse.c

Log Message:
---
Merge pull request #2667 from brauner/2018-10-02/prefault_mmaped_config_file

parse: prefault config file with MAP_POPULATE

Compare: https://github.com/lxc/lxc/compare/907e1332012e...7040a77e8fb7
Re: [lxc-devel] Integration Kubernetes and LXD/LXC
On Tue, 02 Oct 2018 16:49:36 +0200
Free Ekanayaka wrote:

> I know that folks do run stateful services on k8s, PostgreSQL is one
> of those IIRC. I wouldn't expect MySQL to be fundamentally different.

Sorry, I have to repeat my point: if the container engine isn't made to run 24/7, which includes that a container must be mutable(!), you're out of business.

So the foundation of a stateful (and high quality) service is the container engine.

In this area, Docker and cri-o are broken by design for that purpose.

> Although LXE might be an approach that solves your immediate needs, it
> feels like a band aid. If you haven't already, I'd recommend
> approaching the k8s team/community describing the issues that you're
> seeing when using standard CRI implementations such as
> containerd/docker and cri-o.

We did, no answer. But we know the answer for a lot of topics we researched: the design goal was to deal without state.

All the stuff with state is just a gimmick for in fact restartable services.

We did a lot of kubernetes, read a lot of code, read a lot of comments on a lot of issues and the result is: 24/7 always online is *not* a design goal of docker or cri-o, nor of CRI, nor of Kubernetes.

But you can find your way around with LXC/LXE and some filtering.

Best Regards
Oli

--
Automatic-Server AG • Oliver Schad
Geschäftsführer
Turnerstrasse 2
9000 St. Gallen | Schweiz

www.automatic-server.com | oliver.sc...@automatic-server.com
Tel: +41 71 511 31 11 | Mobile: +41 76 330 03 47
Re: [lxc-devel] Integration Kubernetes and LXD/LXC
Oliver Schad writes:

> On Tue, 02 Oct 2018 11:29:30 +0200
> Free Ekanayaka wrote:
>
>> Oliver Schad writes:
>>> If the container layer is unstable, you can't build a stable
>>> service on top of it.
>>
>> How does LXE solve the issue of undesired restarts? I imagine that the
>> restarts are triggered by the k8s control plane by connecting to the
>> kubelet which in turn triggers some imperative CRI API which says
>> "Please restart this pod". If that's the case, does LXE somehow ignore
>> the restart request? I'm confused about this part.
>
> It's true that Kubelet sometimes does stuff it shouldn't do and we
> filter some things from Kubelet. The imperative nature of CRI is bad
> in fact.
>
> But: it really makes a difference whether Kubelet is sometimes wrong
> about some things or the container engine itself creates problems (in
> case of restart, update, too many logs, ...).
>
> If the container engine dies for whatever reason in the case of LXD,
> nothing happens. If Docker dies, all containers die. If Kubelet creates
> trouble, we're able to try to work around that problem with filtering.
> We saw and see a lot of restarts of Docker.
>
> In the area of platform services, it's hard to work with one-process
> containers. And in the area of platform services it's hard to kill your
> container just because of updating a file. Both requirements you have
> especially for stateful services. Avoid restarts as much as you can and
> if you have to restart something: do it in a planned/controlled way,
> with fine-grained options (i.e. notify other cluster members about a
> node restart).

I know that folks do run stateful services on k8s, PostgreSQL is one of those IIRC. I wouldn't expect MySQL to be fundamentally different.

Although LXE might be an approach that solves your immediate needs, it feels like a band aid. If you haven't already, I'd recommend approaching the k8s team/community describing the issues that you're seeing when using standard CRI implementations such as containerd/docker and cri-o.

If you have already reached out to them, I'd be interested to get pointers to their replies, since *in theory* stateful k8s services shouldn't need any particular CRI implementation (and if that's not the case in the real world, better to report that).

Free
[lxc-devel] [lxc/master] parse: prefault config file with MAP_POPULATE
The following pull request was submitted through Github.
It can be accessed and reviewed at: https://github.com/lxc/lxc/pull/2667

This e-mail was sent by the LXC bot, direct replies will not reach the author unless they happen to be subscribed to this list.

=== Description (from pull-request) ===
When we call lxc_file_for_each_line_mmap() we will always parse the whole config file. Prefault it in case it is really long to optimize performance.

Signed-off-by: Christian Brauner

From 2291ea4a1a40f67d76a108e96cee83cb65038f9a Mon Sep 17 00:00:00 2001
From: Christian Brauner
Date: Tue, 2 Oct 2018 16:40:13 +0200
Subject: [PATCH] parse: prefault config file with MAP_POPULATE

When we call lxc_file_for_each_line_mmap() we will always parse the
whole config file. Prefault it in case it is really long to optimize
performance.

Signed-off-by: Christian Brauner
---
 src/lxc/parse.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/lxc/parse.c b/src/lxc/parse.c
index fcc174a77..1c0cc9f49 100644
--- a/src/lxc/parse.c
+++ b/src/lxc/parse.c
@@ -88,7 +88,8 @@ int lxc_file_for_each_line_mmap(const char *file, lxc_file_cb callback,
 		return 0;
 	}

-	buf = lxc_strmmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
+	buf = lxc_strmmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
+			  MAP_PRIVATE | MAP_POPULATE, fd, 0);
 	if (buf == MAP_FAILED) {
 		close(fd);
 		return -1;
 	}
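Since the parser always walks the entire mapping, MAP_POPULATE makes the kernel prefault the file's pages at mmap() time instead of paying one page fault per page during the scan. A self-contained sketch of the same idea; the helper name is made up for illustration (the real code uses lxc_strmmap(), which additionally guarantees a NUL-terminated mapping):

```c
#define _GNU_SOURCE /* for MAP_POPULATE on some libcs */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Illustrative helper, not LXC's lxc_strmmap(): map a whole file
 * read-only and ask the kernel to prefault its pages up front with
 * MAP_POPULATE, which pays off when the caller will scan every byte. */
static char *map_file_populated(const char *path, size_t *len)
{
	struct stat st;
	char *buf;
	int fd;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return NULL;

	if (fstat(fd, &st) < 0 || st.st_size == 0) {
		close(fd);
		return NULL;
	}

	buf = mmap(NULL, st.st_size, PROT_READ,
		   MAP_PRIVATE | MAP_POPULATE, fd, 0);
	close(fd); /* the mapping stays valid after close */
	if (buf == MAP_FAILED)
		return NULL;

	*len = (size_t)st.st_size;
	return buf;
}
```

MAP_POPULATE is Linux-specific; on other systems a portable fallback would be madvise(MADV_WILLNEED) after the plain mmap().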
Re: [lxc-devel] Integration Kubernetes and LXD/LXC
On Tue, 02 Oct 2018 11:29:30 +0200
Free Ekanayaka wrote:

> Oliver Schad writes:
>> If the container layer is unstable, you can't build a stable
>> service on top of it.
>
> How does LXE solve the issue of undesired restarts? I imagine that the
> restarts are triggered by the k8s control plane by connecting to the
> kubelet which in turn triggers some imperative CRI API which says
> "Please restart this pod". If that's the case, does LXE somehow ignore
> the restart request? I'm confused about this part.

It's true that Kubelet sometimes does stuff it shouldn't do and we filter some things from Kubelet. The imperative nature of CRI is bad in fact.

But: it really makes a difference whether Kubelet is sometimes wrong about some things or the container engine itself creates problems (in case of restart, update, too many logs, ...).

If the container engine dies for whatever reason in the case of LXD, nothing happens. If Docker dies, all containers die. If Kubelet creates trouble, we're able to try to work around that problem with filtering. We saw and see a lot of restarts of Docker.

In the area of platform services, it's hard to work with one-process containers. And in the area of platform services it's hard to kill your container just because of updating a file. Both requirements you have especially for stateful services. Avoid restarts as much as you can and if you have to restart something: do it in a planned/controlled way, with fine-grained options (i.e. notify other cluster members about a node restart).

Best Regards
Oli

--
Automatic-Server AG • Oliver Schad
Geschäftsführer
Turnerstrasse 2
9000 St. Gallen | Schweiz

www.automatic-server.com | oliver.sc...@automatic-server.com
Tel: +41 71 511 31 11 | Mobile: +41 76 330 03 47
[lxc-devel] [lxc/lxc] 99bb3f: cgroups: remove unnecessary line
Branch: refs/heads/master
Home: https://github.com/lxc/lxc

Commit: 99bb3fa8e8d064145ff4ab8a05cc1389927c0125
    https://github.com/lxc/lxc/commit/99bb3fa8e8d064145ff4ab8a05cc1389927c0125
Author: 2xsec
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/lxc/cgroups/cgfsng.c

Log Message:
---
cgroups: remove unnecessary line

Signed-off-by: 2xsec

Commit: c3d9796f1fc6babcf2bf7782d470bf3d178ca814
    https://github.com/lxc/lxc/commit/c3d9796f1fc6babcf2bf7782d470bf3d178ca814
Author: 2xsec
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/include/netns_ifaddrs.c

Log Message:
---
netns_iaddrs: remove unused functions

Signed-off-by: 2xsec

Commit: 907e1332012e4e99ee0f4bde098b96e638f79a58
    https://github.com/lxc/lxc/commit/907e1332012e4e99ee0f4bde098b96e638f79a58
Author: Christian Brauner
Date: 2018-10-02 (Tue, 02 Oct 2018)
Changed paths:
  M src/include/netns_ifaddrs.c
  M src/lxc/cgroups/cgfsng.c

Log Message:
---
Merge pull request #2666 from 2xsec/bugfix

cgroups: remove unnecessary line

Compare: https://github.com/lxc/lxc/compare/74d968932956...907e1332012e
[lxc-devel] [lxc/master] cgroups: remove unnecessary line
The following pull request was submitted through Github.
It can be accessed and reviewed at: https://github.com/lxc/lxc/pull/2666

This e-mail was sent by the LXC bot, direct replies will not reach the author unless they happen to be subscribed to this list.

=== Description (from pull-request) ===
Signed-off-by: 2xsec

From 99bb3fa8e8d064145ff4ab8a05cc1389927c0125 Mon Sep 17 00:00:00 2001
From: 2xsec
Date: Tue, 2 Oct 2018 18:49:16 +0900
Subject: [PATCH] cgroups: remove unnecessary line

Signed-off-by: 2xsec
---
 src/lxc/cgroups/cgfsng.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/lxc/cgroups/cgfsng.c b/src/lxc/cgroups/cgfsng.c
index 629d371ec..e248db31a 100644
--- a/src/lxc/cgroups/cgfsng.c
+++ b/src/lxc/cgroups/cgfsng.c
@@ -136,10 +136,10 @@ static char *cg_legacy_must_prefix_named(char *entry)
 	len = strlen(entry);
 	prefixed = must_alloc(len + 6);
-
 	memcpy(prefixed, "name=", STRLITERALLEN("name="));
 	memcpy(prefixed + STRLITERALLEN("name="), entry, len);
 	prefixed[len + 5] = '\0';
+
 	return prefixed;
 }
Re: [lxc-devel] Integration Kubernetes and LXD/LXC
Oliver Schad writes:

> Hi Free,
>
> On Tue, 02 Oct 2018 09:36:16 +0200
> Free Ekanayaka wrote:
>
>> Oliver Schad writes:
>>
>> [...]
>>
>>> What is wrong with new pod names?
>>>
>>> Think about a production database, MySQL. You're so proud, it has run
>>> for 3 months, you tuned it, you have great monitoring, you have a
>>> great backup/restore procedure, you've tested it, you're the hero of
>>> this MySQL server and your Webshop earns a lot of money with that
>>> thing.
>>>
>>> But, oh bad, you've updated Kubelet and that creates a new pod
>>> name. So that means that your old pod will just die (die as in
>>> forever, deleted, erased, terminated including your data).
>>
>> I'm not a k8s expert (actually I never really used it), but I believe
>> what you'd want in this case are StatefulSets:
>>
>> https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
>
> The problem starts with the container engine: Docker restarts really
> often for a lot of reasons (bugs, too much load, updates). This will
> kill all your containers.
>
> So in short: Docker is really dangerous in case you want to be online
> 24/7.
>
> If the container layer is unstable, you can't build a stable service on
> top of it.

How does LXE solve the issue of undesired restarts? I imagine that the restarts are triggered by the k8s control plane by connecting to the kubelet which in turn triggers some imperative CRI API which says "Please restart this pod". If that's the case, does LXE somehow ignore the restart request? I'm confused about this part.
Re: [lxc-devel] Integration Kubernetes and LXD/LXC
Hi Free,

On Tue, 02 Oct 2018 09:36:16 +0200
Free Ekanayaka wrote:

> Oliver Schad writes:
>
> [...]
>
>> What is wrong with new pod names?
>>
>> Think about a production database, MySQL. You're so proud, it has run
>> for 3 months, you tuned it, you have great monitoring, you have a
>> great backup/restore procedure, you've tested it, you're the hero of
>> this MySQL server and your Webshop earns a lot of money with that
>> thing.
>>
>> But, oh bad, you've updated Kubelet and that creates a new pod
>> name. So that means that your old pod will just die (die as in
>> forever, deleted, erased, terminated including your data).
>
> I'm not a k8s expert (actually I never really used it), but I believe
> what you'd want in this case are StatefulSets:
>
> https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

The problem starts with the container engine: Docker restarts really often for a lot of reasons (bugs, too much load, updates). This will kill all your containers.

So in short: Docker is really dangerous in case you want to be online 24/7.

If the container layer is unstable, you can't build a stable service on top of it.

Best Regards
Oli
Re: [lxc-devel] Integration Kubernetes and LXD/LXC
Oliver Schad writes:

[...]

> What is wrong with new pod names?
>
> Think about a production database, MySQL. You're so proud, it has run
> for 3 months, you tuned it, you have great monitoring, you have a great
> backup/restore procedure, you've tested it, you're the hero of this
> MySQL server and your Webshop earns a lot of money with that thing.
>
> But, oh bad, you've updated Kubelet and that creates a new pod name. So
> that means that your old pod will just die (die as in forever, deleted,
> erased, terminated including your data).

I'm not a k8s expert (actually I never really used it), but I believe what you'd want in this case are StatefulSets:

https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

In general, there is a growing number of k8s features that have been introduced specifically to meet the requirements of stateful services, for example Persistent Volumes:

https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Cheers,
Free