[jira] [Assigned] (MESOS-9799) Adopt container file operations in secrets volumes.
     [ https://issues.apache.org/jira/browse/MESOS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Peach reassigned MESOS-9799:
----------------------------------

    Assignee: James Peach

> Adopt container file operations in secrets volumes.
> ---------------------------------------------------
>
>                 Key: MESOS-9799
>                 URL: https://issues.apache.org/jira/browse/MESOS-9799
>             Project: Mesos
>          Issue Type: Improvement
>          Components: containerization
>            Reporter: James Peach
>            Assignee: James Peach
>            Priority: Major
>
> Adopt containerized file operations in the secrets volume isolator so that it
> doesn't have to use pre-exec commands.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (MESOS-9799) Adopt container file operations in secrets volumes.
James Peach created MESOS-9799:
-----------------------------------

             Summary: Adopt container file operations in secrets volumes.
                 Key: MESOS-9799
                 URL: https://issues.apache.org/jira/browse/MESOS-9799
             Project: Mesos
          Issue Type: Improvement
          Components: containerization
            Reporter: James Peach

Adopt containerized file operations in the secrets volume isolator so that it doesn't have to use pre-exec commands.
[jira] [Comment Edited] (MESOS-9306) Mesos containerizer can get stuck during cgroup cleanup
    [ https://issues.apache.org/jira/browse/MESOS-9306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849042#comment-16849042 ]

Andrei Budnik edited comment on MESOS-9306 at 5/27/19 4:10 PM:
---------------------------------------------------------------

The patch `/r/70609/` was discarded. If `cgroups::destroy` hangs due to a blocking system call caused by a kernel bug, then there is no workaround available on the Mesos side to fix the issue. In this case, we can only help an operator detect the problem. This can be achieved by introducing a debug endpoint for the Mesos containerizer; see MESOS-9756.

was (Author: abudnik):
The patch `/r/70609/` was discarded. If `cgroups::destroy` hangs due to a blocking system call caused by a kernel bug, then there is no workaround available on the Mesos side to fix the issue. In this case, we can only help an operator detect the problem. This could be done by introducing a debug endpoint for the Mesos containerizer; see MESOS-9756.

> Mesos containerizer can get stuck during cgroup cleanup
> -------------------------------------------------------
>
>                 Key: MESOS-9306
>                 URL: https://issues.apache.org/jira/browse/MESOS-9306
>             Project: Mesos
>          Issue Type: Bug
>          Components: agent, containerization
>    Affects Versions: 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0
>            Reporter: Greg Mann
>            Assignee: Andrei Budnik
>            Priority: Critical
>              Labels: containerizer, mesosphere
>
> I observed a task group's executor container which failed to be completely
> destroyed after its associated tasks were killed. The following is an excerpt
> from the agent log, filtered to include only lines with the container ID
> {{d463b9fe-970d-4077-bab9-558464889a9e}}:
> {code}
> 2018-10-10 14:20:50: I1010 14:20:50.204756 6799 containerizer.cpp:2963] Container d463b9fe-970d-4077-bab9-558464889a9e has exited
> 2018-10-10 14:20:50: I1010 14:20:50.204839 6799 containerizer.cpp:2457] Destroying container d463b9fe-970d-4077-bab9-558464889a9e in RUNNING state
> 2018-10-10 14:20:50: I1010 14:20:50.204859 6799 containerizer.cpp:3124] Transitioning the state of container d463b9fe-970d-4077-bab9-558464889a9e from RUNNING to DESTROYING
> 2018-10-10 14:20:50: I1010 14:20:50.204960 6799 linux_launcher.cpp:580] Asked to destroy container d463b9fe-970d-4077-bab9-558464889a9e
> 2018-10-10 14:20:50: I1010 14:20:50.204993 6799 linux_launcher.cpp:622] Destroying cgroup '/sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e'
> 2018-10-10 14:20:50: I1010 14:20:50.205417 6806 cgroups.cpp:2838] Freezing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos
> 2018-10-10 14:20:50: I1010 14:20:50.205477 6810 cgroups.cpp:2838] Freezing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e
> 2018-10-10 14:20:50: I1010 14:20:50.205708 6808 cgroups.cpp:1229] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos after 203008ns
> 2018-10-10 14:20:50: I1010 14:20:50.205878 6800 cgroups.cpp:1229] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e after 339200ns
> 2018-10-10 14:20:50: I1010 14:20:50.206185 6799 cgroups.cpp:2856] Thawing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos
> 2018-10-10 14:20:50: I1010 14:20:50.206226 6808 cgroups.cpp:2856] Thawing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e
> 2018-10-10 14:20:50: I1010 14:20:50.206455 6808 cgroups.cpp:1258] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e after 83968ns
> 2018-10-10 14:20:50: I1010 14:20:50.306803 6810 cgroups.cpp:1258] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos after 100.50816ms
> 2018-10-10 14:20:50: I1010 14:20:50.307531 6805 linux_launcher.cpp:654] Destroying cgroup '/sys/fs/cgroup/systemd/mesos/d463b9fe-970d-4077-bab9-558464889a9e'
> 2018-10-10 14:21:40: W1010 14:21:40.032855 6809 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:22:40: W1010 14:22:40.031224 6800 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:23:40: W1010 14:23:40.031946 6799 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:24:40: W1010 14:24:40.032979 6804 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:25:40: W1010 14:25:40.030784 6808 container
[jira] [Commented] (MESOS-9306) Mesos containerizer can get stuck during cgroup cleanup
    [ https://issues.apache.org/jira/browse/MESOS-9306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849042#comment-16849042 ]

Andrei Budnik commented on MESOS-9306:
--------------------------------------

The patch `/r/70609/` was discarded. If `cgroups::destroy` hangs due to a blocking system call caused by a kernel bug, then there is no workaround available on the Mesos side to fix the issue. In this case, we can only help an operator detect the problem. This could be done by introducing a debug endpoint for the Mesos containerizer; see MESOS-9756.

> Mesos containerizer can get stuck during cgroup cleanup
> -------------------------------------------------------
>
>                 Key: MESOS-9306
>                 URL: https://issues.apache.org/jira/browse/MESOS-9306
>             Project: Mesos
>          Issue Type: Bug
>          Components: agent, containerization
>    Affects Versions: 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0
>            Reporter: Greg Mann
>            Assignee: Andrei Budnik
>            Priority: Critical
>              Labels: containerizer, mesosphere
>
> I observed a task group's executor container which failed to be completely
> destroyed after its associated tasks were killed. The following is an excerpt
> from the agent log, filtered to include only lines with the container ID
> {{d463b9fe-970d-4077-bab9-558464889a9e}}:
> {code}
> 2018-10-10 14:20:50: I1010 14:20:50.204756 6799 containerizer.cpp:2963] Container d463b9fe-970d-4077-bab9-558464889a9e has exited
> 2018-10-10 14:20:50: I1010 14:20:50.204839 6799 containerizer.cpp:2457] Destroying container d463b9fe-970d-4077-bab9-558464889a9e in RUNNING state
> 2018-10-10 14:20:50: I1010 14:20:50.204859 6799 containerizer.cpp:3124] Transitioning the state of container d463b9fe-970d-4077-bab9-558464889a9e from RUNNING to DESTROYING
> 2018-10-10 14:20:50: I1010 14:20:50.204960 6799 linux_launcher.cpp:580] Asked to destroy container d463b9fe-970d-4077-bab9-558464889a9e
> 2018-10-10 14:20:50: I1010 14:20:50.204993 6799 linux_launcher.cpp:622] Destroying cgroup '/sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e'
> 2018-10-10 14:20:50: I1010 14:20:50.205417 6806 cgroups.cpp:2838] Freezing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos
> 2018-10-10 14:20:50: I1010 14:20:50.205477 6810 cgroups.cpp:2838] Freezing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e
> 2018-10-10 14:20:50: I1010 14:20:50.205708 6808 cgroups.cpp:1229] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos after 203008ns
> 2018-10-10 14:20:50: I1010 14:20:50.205878 6800 cgroups.cpp:1229] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e after 339200ns
> 2018-10-10 14:20:50: I1010 14:20:50.206185 6799 cgroups.cpp:2856] Thawing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos
> 2018-10-10 14:20:50: I1010 14:20:50.206226 6808 cgroups.cpp:2856] Thawing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e
> 2018-10-10 14:20:50: I1010 14:20:50.206455 6808 cgroups.cpp:1258] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e after 83968ns
> 2018-10-10 14:20:50: I1010 14:20:50.306803 6810 cgroups.cpp:1258] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos after 100.50816ms
> 2018-10-10 14:20:50: I1010 14:20:50.307531 6805 linux_launcher.cpp:654] Destroying cgroup '/sys/fs/cgroup/systemd/mesos/d463b9fe-970d-4077-bab9-558464889a9e'
> 2018-10-10 14:21:40: W1010 14:21:40.032855 6809 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:22:40: W1010 14:22:40.031224 6800 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:23:40: W1010 14:23:40.031946 6799 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:24:40: W1010 14:24:40.032979 6804 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:25:40: W1010 14:25:40.030784 6808 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:26:40: W1010 14:26:40.032526 6810 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist
> 2018-10-10 14:27:40: W1010 14:27:40.029932 6801 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab
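Until the debug endpoint proposed in MESOS-9756 exists, an operator can make a first-pass diagnosis by hand. The following is a hedged sketch, not guidance from the thread; it assumes cgroups v1 with the freezer hierarchy mounted at the path shown in the log above, and reuses the container ID from the excerpt:

```shell
# Sketch: inspect a cgroup whose destruction appears stuck.
# Assumes cgroups v1; paths follow the agent log excerpt above.
CID=d463b9fe-970d-4077-bab9-558464889a9e
CG=/sys/fs/cgroup/freezer/mesos/$CID

# Current freezer state (THAWED / FREEZING / FROZEN), if the cgroup exists:
cat "$CG/freezer.state" 2>/dev/null

# Processes still attached to the cgroup:
cat "$CG/cgroup.procs" 2>/dev/null

# A process stuck in "D" (uninterruptible sleep) is consistent with the
# blocked-in-kernel scenario described in the comment above:
for pid in $(cat "$CG/cgroup.procs" 2>/dev/null); do
  ps -o pid,stat,comm -p "$pid"
done
```

If a listed process shows a `D` state that never clears, the hang is in the kernel, and per the comment there is nothing the Mesos agent itself can do beyond reporting it.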
[jira] [Created] (MESOS-9798) How to reduce compile time after changing/improving source code?
chatsiri created MESOS-9798:
--------------------------------

             Summary: How to reduce compile time after changing/improving source code?
                 Key: MESOS-9798
                 URL: https://issues.apache.org/jira/browse/MESOS-9798
             Project: Mesos
          Issue Type: Improvement
          Components: cmake
    Affects Versions: 1.8.0
         Environment: Linux firework-vm01 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64 GNU/Linux
            Reporter: chatsiri

Hello all,

I have finished changing some source files in the src/ directory, but the compiler takes a long time to finish the build. How can I reduce compile time to cover only the component or source directory I changed? These are the steps I follow:
# Add a new member function to the {{Docker}} class in docker.hpp. This class is declared in a file in the docker directory.
# Compile the source again from the build directory. This directory is created in the base source directory, alongside src/, bin/, and include/.
# From the build path:
## $ cd build
## $ ../configure --disable-python --disable-java --enable-debug --enable-fast-install
## $ make
## $ sudo make install

In step 3 the compiler takes a long time to compile the source code. How can we reduce compile time for just the source directories we changed?
[jira] [Created] (MESOS-9797) SSL Ciphersuite settings can break client TLS handshake
Benno Evers created MESOS-9797:
-----------------------------------

             Summary: SSL Ciphersuite settings can break client TLS handshake
                 Key: MESOS-9797
                 URL: https://issues.apache.org/jira/browse/MESOS-9797
             Project: Mesos
          Issue Type: Improvement
         Environment: Ubuntu 18.04 w/ OpenSSL 1.1.0g
            Reporter: Benno Evers

Starting a mesos-agent with the following environment variables:
{noformat}
env GLOG_v=2 LIBPROCESS_SSL_ENABLED=true LIBPROCESS_SSL_ENABLE_DOWNGRADE=false LIBPROCESS_SSL_VERIFY_CERT=false LIBPROCESS_SSL_CERT_FILE=/etc/ssl/certs/ssl-cert-snakeoil.pem LIBPROCESS_SSL_KEY_FILE=/etc/ssl/private/ssl-cert-snakeoil.key LIBPROCESS_SSL_CIPHERS=ECDHE-PSK-AES128-CBC-SHA mesos-agent --work_dir=/tmp/ --master=127.0.1.1:4447 --systemd_enable_support=false
{noformat}
caused the agent on my machine (using OpenSSL 1.1.0g) to fail to send a ClientHello message after establishing a TCP connection to the given master, causing the TLS handshake to fail.

Removing the `LIBPROCESS_SSL_CIPHERS=ECDHE-PSK-AES128-CBC-SHA` variable allowed the agent to connect normally.

The reason for this still needs to be investigated.
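One way an investigation might start (a sketch, not something from the report): ask the local OpenSSL how it expands the configured cipher string. As an unconfirmed hypothesis, ECDHE-PSK suites require a pre-shared key, which a certificate-based libprocess client does not supply, so the client may be left with no usable ciphersuite before a ClientHello is ever sent.

```shell
# Sketch: check how the local OpenSSL interprets the cipher string passed
# via LIBPROCESS_SSL_CIPHERS. If the string expands to no suites, or only
# to PSK suites (which need a pre-shared key), a plain certificate-based
# handshake cannot proceed.
CIPHER=ECDHE-PSK-AES128-CBC-SHA

# Expand the cipher string; this fails if OpenSSL doesn't recognize it:
openssl ciphers -v "$CIPHER"
```

Comparing this output between the OpenSSL version libprocess was built against and the one installed on the host could also rule out a version mismatch.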