[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16932178#comment-16932178 ] Adam Antal commented on YARN-9814: -- The conflicts are trivial, because YARN-6929 is absent from those branches. I'm fine with the resolution. Thank you very much [~sunilg], and thanks for the review [~Prabhu Joseph]!
> JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
> --
> Key: YARN-9814
> URL: https://issues.apache.org/jira/browse/YARN-9814
> Project: Hadoop YARN
> Issue Type: Bug
> Components: log-aggregation, yarn
> Affects Versions: 3.1.2
> Reporter: Adam Antal
> Assignee: Adam Antal
> Priority: Minor
> Fix For: 3.3.0
> Attachments: YARN-9814.001.patch, YARN-9814.002.patch, YARN-9814.003.patch, YARN-9814.004.patch, YARN-9814.005.patch
>
> If remote-app-log-dir is not created before the YARN processes are started, the NodeManager creates it during the init of the AppLogAggregator service. In a custom system the primary group of the yarn user (which starts the NM/RM daemons) is not hadoop, but is set to a more restricted group (say yarn). If the NodeManager creates the folder, it derives the folder's group from the primary group of the login user (yarn:yarn in this case), thus setting the root log folder and all its subfolders to the yarn group, ultimately making them inaccessible to other processes - e.g. the JobHistoryServer's AggregatedLogDeletionService.
> I suggest making this group configurable. If this new configuration is not set, we can still stick to the existing behaviour.
> Creating the root app-log-dir each time during the setup of such a system is a bit error-prone, and an end user can easily forget it. I think the best place for this step is the LogAggregationService, which was responsible for creating the folder already.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
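The fix discussed in this thread (a new yarn.nodemanager.remote-app-log-dir.group property, falling back to the old behaviour when unset) boils down to a simple resolution rule: use the configured group if present, otherwise the login user's primary group. The following is a minimal, dependency-free sketch of that rule; the class and method names are illustrative, not the ones in the actual patch.

```java
// Hypothetical sketch of the proposed group-resolution behaviour.
// Not the actual Hadoop patch: names are invented for illustration.
public class LogDirGroupResolver {

    /** Returns the group to set on the remote app-log root directory. */
    static String resolveGroup(String configuredGroup,
                               String loginUserPrimaryGroup) {
        // Unset (null) or empty override: keep the pre-patch behaviour,
        // i.e. the primary group of the login user.
        if (configuredGroup == null || configuredGroup.isEmpty()) {
            return loginUserPrimaryGroup;
        }
        return configuredGroup;
    }

    public static void main(String[] args) {
        // Override set: the configured group wins.
        System.out.println(resolveGroup("hadoop", "yarn")); // hadoop
        // Unset or empty: fall back to the login user's primary group.
        System.out.println(resolveGroup(null, "yarn"));     // yarn
        System.out.println(resolveGroup("", "yarn"));       // yarn
    }
}
```

The empty-string case matters: as noted later in the thread, treating "" differently from null would hand the directory to HDFS's supergroup instead of preserving the old behaviour.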
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16932026#comment-16932026 ] Sunil Govindan commented on YARN-9814: -- I have committed this to trunk, thanks [~adam.antal]. If this is needed for branch-3.2 or 3.1, you need to rebase the patch. For now, I am resolving the jira. Please re-open if a backport to other branches is needed. Thanks.
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16932025#comment-16932025 ] Hudson commented on YARN-9814: -- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17321 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17321/]) YARN-9814. JobHistoryServer can't delete aggregated files, if remote app (sunilg: rev 01d79244732c7f60dff3cd7181647c0460955491)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/LogAggregationFileController.java
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileController.java
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16932022#comment-16932022 ] Sunil Govindan commented on YARN-9814: -- +1 Committing shortly
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930429#comment-16930429 ] Hadoop QA commented on YARN-9814: - +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 43s | Maven dependency ordering for branch |
| +1 | mvninstall | 16m 45s | trunk passed |
| +1 | compile | 7m 25s | trunk passed |
| +1 | checkstyle | 1m 14s | trunk passed |
| +1 | mvnsite | 1m 43s | trunk passed |
| +1 | shadedclient | 14m 21s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 55s | trunk passed |
| +1 | javadoc | 1m 37s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 10s | the patch passed |
| +1 | compile | 6m 37s | the patch passed |
| +1 | javac | 6m 37s | the patch passed |
| +1 | checkstyle | 1m 13s | hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 226 unchanged - 1 fixed = 226 total (was 227) |
| +1 | mvnsite | 1m 34s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 11m 42s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 6s | the patch passed |
| +1 | javadoc | 1m 32s | the patch passed |
|| Other Tests ||
| +1 | unit | 0m 58s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 3m 56s | hadoop-yarn-common in the patch passed. |
| +1 | asflicense | 0m 46s | The patch does not generate ASF License warnings. |
| | | 79m 44s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | YARN-9814 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12980378/YARN-9814.005.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 98b970169c2a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 85b1c72 |
| maven | version:
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930361#comment-16930361 ] Adam Antal commented on YARN-9814: -- Thanks for the review [~sunilg].
- The extra debug logging seems a bit of an overkill, since there's no computation it would save, but I added it anyway.
- There was no test for the existing default log directory creation - there is now. I also mocked the loginUser of {{UserGroupInformation}} in both tests to make them more precise.
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930335#comment-16930335 ] Hadoop QA commented on YARN-9814: - +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 37s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 1m 31s | Maven dependency ordering for branch |
| +1 | mvninstall | 20m 34s | trunk passed |
| +1 | compile | 7m 56s | trunk passed |
| +1 | checkstyle | 1m 14s | trunk passed |
| +1 | mvnsite | 1m 30s | trunk passed |
| +1 | shadedclient | 14m 29s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 50s | trunk passed |
| +1 | javadoc | 1m 25s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 9s | the patch passed |
| +1 | compile | 7m 17s | the patch passed |
| +1 | javac | 7m 17s | the patch passed |
| +1 | checkstyle | 1m 10s | the patch passed |
| +1 | mvnsite | 1m 25s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 12m 25s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 4s | the patch passed |
| +1 | javadoc | 1m 20s | the patch passed |
|| Other Tests ||
| +1 | unit | 0m 51s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 3m 43s | hadoop-yarn-common in the patch passed. |
| +1 | asflicense | 0m 46s | The patch does not generate ASF License warnings. |
| | | 85m 8s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | YARN-9814 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12980368/YARN-9814.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux a8ed60a4989b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 85b1c72 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| Test Results |
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930281#comment-16930281 ] Sunil Govindan commented on YARN-9814: -- Thanks [~adam.antal]. This approach looks fine to me. A couple of minor comments:
# Please rename remote-app-log-dir.group => remote-app-log-dir.groupname or group-name. I want the name to state explicitly which group is meant; "group" alone carries a bit too little information.
# Please put the newly added LOG.debug under an if (LOG.isDebugEnabled()) flag.
# Is it possible to test that when a custom group is not set, the default one is taken? If such a test already exists, please point me to it. Thanks
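The isDebugEnabled guard requested in the review is a standard logging idiom: it avoids building the log message string when debug logging is off. A minimal, self-contained sketch of the pattern (plain Java with an illustrative stand-in for the logger, not the actual Hadoop code):

```java
// Sketch of the LOG.isDebugEnabled() guard pattern.
// The DebugGuard class and its fields are invented for illustration.
public class DebugGuard {
    static boolean debugEnabled = false;  // stand-in for LOG.isDebugEnabled()
    static int messagesBuilt = 0;         // counts how often the string is built

    static void logDebug(String dir, String group) {
        if (debugEnabled) {
            // The concatenation only happens when debug logging is on.
            String msg = "Set group of " + dir + " to " + group;
            messagesBuilt++;
            System.out.println(msg);
        }
    }

    public static void main(String[] args) {
        logDebug("/tmp/logs", "hadoop");  // debug off: nothing is built
        debugEnabled = true;
        logDebug("/tmp/logs", "hadoop");  // debug on: message built and printed
    }
}
```

With SLF4J-style parameterized logging ({}-placeholders) the guard is often unnecessary, but for plain string concatenation, as requested here, it saves the work entirely.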
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930269#comment-16930269 ] Adam Antal commented on YARN-9814: -- Thanks for the review [~Prabhu Joseph]. Indeed, you're right about this. I added the {{primaryGroup.isEmpty()}} part to the condition. [~sunilg], could you please take a look at this and commit it if you agree?
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929754#comment-16929754 ] Prabhu Joseph commented on YARN-9814: - [~adam.antal] Thanks for the patch. The patch looks good. I have found one issue:
1. With HDFS, when yarn.nodemanager.remote-app-log-dir.group is set to "" (the default value), the root log directory's group becomes the supergroup (dfs.permissions.superusergroup), whereas without the patch it was yarn's primaryGroupName. I think we have to handle "" the same way as null:
{code}
String primaryGroupName = conf.get(
    YarnConfiguration.NM_REMOTE_APP_LOG_DIR_GROUP);
if (primaryGroupName == null || primaryGroupName.equals("")) {
{code}
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929319#comment-16929319 ] Adam Antal commented on YARN-9814: -- [~Prabhu Joseph] could you review this if you have time?
[jira] [Commented] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925728#comment-16925728 ] Hadoop QA commented on YARN-9814:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 4s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 14s | Maven dependency ordering for branch |
| +1 | mvninstall | 24m 16s | trunk passed |
| +1 | compile | 11m 22s | trunk passed |
| +1 | checkstyle | 1m 37s | trunk passed |
| +1 | mvnsite | 2m 3s | trunk passed |
| +1 | shadedclient | 17m 50s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 4s | trunk passed |
| +1 | javadoc | 1m 51s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 19s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 29s | the patch passed |
| +1 | compile | 9m 48s | the patch passed |
| +1 | javac | 9m 48s | the patch passed |
| +1 | checkstyle | 1m 26s | the patch passed |
| +1 | mvnsite | 1m 51s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 3s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 13m 52s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 54s | the patch passed |
| +1 | javadoc | 1m 48s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 8s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 4m 43s | hadoop-yarn-common in the patch passed. |
| +1 | asflicense | 0m 58s | The patch does not generate ASF License warnings. |
| | | 106m 5s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9814 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12979850/YARN-9814.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 7b90213529c3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 60af879 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| Test Results |
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925626#comment-16925626 ] Adam Antal commented on YARN-9814: -- Valid Jenkins error: forgot to add license header. Fixed in [^YARN-9814.003.patch].
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925570#comment-16925570 ] Hadoop QA commented on YARN-9814:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 36s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 45s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 59s | trunk passed |
| +1 | compile | 8m 40s | trunk passed |
| +1 | checkstyle | 1m 14s | trunk passed |
| +1 | mvnsite | 1m 29s | trunk passed |
| +1 | shadedclient | 14m 49s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 1s | trunk passed |
| +1 | javadoc | 1m 30s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 8s | the patch passed |
| +1 | compile | 7m 30s | the patch passed |
| +1 | javac | 7m 30s | the patch passed |
| +1 | checkstyle | 1m 10s | the patch passed |
| +1 | mvnsite | 1m 23s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 12m 21s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 8s | the patch passed |
| +1 | javadoc | 1m 20s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 49s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 3m 50s | hadoop-yarn-common in the patch passed. |
| -1 | asflicense | 0m 39s | The patch generated 1 ASF License warnings. |
| | | 84m 11s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9814 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12979826/YARN-9814.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux e0dfb1660e93 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 60af879 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| Test Results |
[ https://issues.apache.org/jira/browse/YARN-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924389#comment-16924389 ] Hadoop QA commented on YARN-9814:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 36s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 43s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 57s | trunk passed |
| +1 | compile | 7m 58s | trunk passed |
| +1 | checkstyle | 1m 12s | trunk passed |
| +1 | mvnsite | 1m 28s | trunk passed |
| +1 | shadedclient | 14m 35s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 50s | trunk passed |
| +1 | javadoc | 1m 19s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 6s | the patch passed |
| +1 | compile | 7m 10s | the patch passed |
| +1 | javac | 7m 10s | the patch passed |
| -0 | checkstyle | 1m 9s | hadoop-yarn-project/hadoop-yarn: The patch generated 4 new + 227 unchanged - 0 fixed = 231 total (was 227) |
| +1 | mvnsite | 1m 21s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 45s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 7s | the patch passed |
| +1 | javadoc | 1m 30s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 0m 52s | hadoop-yarn-api in the patch failed. |
| +1 | unit | 3m 51s | hadoop-yarn-common in the patch passed. |
| -1 | asflicense | 0m 55s | The patch generated 1 ASF License warnings. |
| | | 83m 13s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9814 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12979682/YARN-9814.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 7e57a26ac8fe 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98c548 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
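On the failed junit test above: hadoop.yarn.conf.TestYarnConfigurationFields generally guards the consistency between configuration constants declared in YarnConfiguration and the entries shipped in yarn-default.xml, so adding a new key in code without a matching default entry trips it. A minimal sketch of that kind of cross-check, using plain JDK XML parsing and hypothetical key names (not the actual YARN test code):

```java
import java.io.StringReader;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class ConfigFieldsCheck {
    /** Collect every <name> element from a *-default.xml style document. */
    static Set<String> xmlKeys(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        NodeList names = doc.getElementsByTagName("name");
        Set<String> keys = new HashSet<>();
        for (int i = 0; i < names.getLength(); i++) {
            keys.add(names.item(i).getTextContent().trim());
        }
        return keys;
    }

    /** Return the declared keys that have no default entry in the XML. */
    static Set<String> missingDefaults(List<String> declared, String xml)
            throws Exception {
        Set<String> missing = new HashSet<>(declared);
        missing.removeAll(xmlKeys(xml));
        return missing;
    }

    public static void main(String[] args) throws Exception {
        String defaults =
            "<configuration>"
          + "<property><name>yarn.existing.key</name><value>1</value></property>"
          + "</configuration>";
        // "yarn.new.key" has no entry in the defaults file, so it is flagged,
        // which is how a missing yarn-default.xml entry would fail the test.
        System.out.println(missingDefaults(
            List.of("yarn.existing.key", "yarn.new.key"), defaults));
    }
}
```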