[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694335#comment-16694335 ] Eric Yang commented on YARN-8986: - [~Charo Zhang] yarn.nodemanager.runtime.linux.docker.default-container-network will result in the --net parameter being passed. My suggestion is specifically to make the unit tests pass, and also to allow an undefined net parameter to fall back to the bridge network behavior as the default. In the Jenkins environment, docker network ls will fail, hence it is easier to solve the unit test problem and the default behavior at the same time by gracefully handling the case where the net parameter is not passed, and allowing -p and -P to work. > publish all exposed ports to random ports when using bridge network > --- > > Key: YARN-8986 > URL: https://issues.apache.org/jira/browse/YARN-8986 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.1.1 >Reporter: Charo Zhang >Assignee: Charo Zhang >Priority: Minor > Labels: Docker > Attachments: YARN-8986.001.patch, YARN-8986.002.patch, > YARN-8986.003.patch, YARN-8986.004.patch, YARN-8986.005.patch, > YARN-8986.006.patch > > > It would be better to publish all exposed ports to random ports (-P) or support port mapping (-p) when using the bridge network for a Docker container. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
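The fallback being discussed can be sketched as a small Python helper. This is only an illustration with invented names (the real logic lives in YARN's container-executor and DockerRunCommand, not in this function): when no network is specified, Docker behaves as if --net=bridge were given, so -p/-P port publishing remains legal; under host networking, port mappings are meaningless.

```python
def build_docker_run_args(net=None, port_mappings=(), publish_all=False):
    """Hypothetical sketch: an unset --net falls back to Docker's default
    bridge network, so port publishing options stay valid."""
    effective_net = net or "bridge"  # graceful fallback when net is not passed
    args = ["docker", "run", f"--net={effective_net}"]
    if effective_net == "host":
        return args  # host networking: -p/-P do not apply
    if publish_all:
        args.append("-P")  # publish all EXPOSEd ports to random host ports
    for host_port, container_port in port_mappings:
        args += ["-p", f"{host_port}:{container_port}"]  # explicit mapping
    return args
```

For example, `build_docker_run_args(publish_all=True)` yields the -P form under the bridge default, while passing `net="host"` suppresses port options entirely.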
[jira] [Comment Edited] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694313#comment-16694313 ] Charo Zhang edited comment on YARN-8986 at 11/21/18 7:53 AM: - [~eyang] I know what you mean: docker run will use the bridge network as the default network type if the "--net" option is not given. But in my opinion, the default network for YARN Docker is controlled by yarn.nodemanager.runtime.linux.docker.default-container-network, so we shouldn't rely on Docker's default network type. That means when the network name is not specified (i.e. "net" is null in command_config), it's better to use the yarn.nodemanager.runtime.linux.docker.default-container-network value or add "--net=none".
[jira] [Updated] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Charo Zhang updated YARN-8986: -- Attachment: YARN-8986.007.patch
[jira] [Updated] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Charo Zhang updated YARN-8986: -- Attachment: (was: YARN-8986.007.patch)
[jira] [Commented] (YARN-8882) Phase 1 - Add a shared device mapping manager for device plugin to use
[ https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694353#comment-16694353 ] Hadoop QA commented on YARN-8882: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 52s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 70m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8882 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948994/YARN-8882-trunk.007.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9eca3a21fed0 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c8b3dfa | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22653/testReport/ | | Max. process+thread count | 414 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/22653/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Phase 1 - Add a shared device mapping manager for
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694340#comment-16694340 ] Charo Zhang commented on YARN-8986: --- [~eyang] I understand. I set network_name=bridge when it's not specified, as you said, in the 007 patch. Hope it's the last one.
[jira] [Comment Edited] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694313#comment-16694313 ] Charo Zhang edited comment on YARN-8986 at 11/21/18 7:42 AM: - [~eyang] I know what you mean: docker run will use the bridge network as the default network type if the "--net" option is not given. But in my opinion, the default network for YARN Docker is controlled by yarn.nodemanager.runtime.linux.docker.default-container-network, so we shouldn't rely on Docker's default network type. That means when the network name is not specified (i.e. "net" is null in command_config), it's better to use the yarn.nodemanager.runtime.linux.docker.default-container-network value or add "--net=none". At the same time, I set network_name=bridge when it's not specified, as you said, in the 007 patch. Hope it's the last one.
[jira] [Updated] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Charo Zhang updated YARN-8986: -- Attachment: YARN-8986.007.patch
[jira] [Commented] (YARN-9041) Optimize FSPreemptionThread#identifyContainersToPreempt method
[ https://issues.apache.org/jira/browse/YARN-9041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694329#comment-16694329 ] Steven Rand commented on YARN-9041: --- I'm not sure that this is correct. I think that it can lead to a failure to preempt in cases where we should be preempting. This will happen if the initial {{potentialNodes}} contain preemptible containers, but the remaining nodes don't. An example to illustrate what I'm thinking: * We have nodes A, B, and C * At first {{potentialNodes}} includes only node A because we're preempting for a node-local request for that node * We find that we can preempt a container on node A, but it's an ApplicationMaster * With this patch, we change the search space to be only nodes B and C (without the patch, the search space becomes A, B, and C) * There are no preemptible containers on nodes B and C The outcome in this example is that we don't preempt at all. However, what we want to do is preempt the AM container on node A. Hopefully that makes sense, but let me know if I'm misunderstanding. > Optimize FSPreemptionThread#identifyContainersToPreempt method > -- > > Key: YARN-9041 > URL: https://issues.apache.org/jira/browse/YARN-9041 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler preemption >Reporter: Wanqiang Ji >Assignee: Wanqiang Ji >Priority: Major > Attachments: YARN-9041.001.patch > > > In the FSPreemptionThread#identifyContainersToPreempt method, I suggest that for AM preemption, when locality relaxation is allowed, the search space should be expanded to the remaining nodes rather than all nodes. The remaining nodes are all nodes minus the potential nodes. > The judging conditions change to: > # rr.getRelaxLocality() > # !ResourceRequest.isAnyLocation(rr.getResourceName()) > # bestContainers != null > # bestContainers.numAMContainers > 0 > If I have misunderstood, please correct me. Thanks!
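Steven Rand's A/B/C counterexample can be checked with a toy simulation. This is a deliberately simplified Python model with invented names (YARN's actual FSPreemptionThread code is Java and far more involved): restricting the widened search space to the remaining nodes loses the only preemptible container, the AM on node A.

```python
def find_preemptible(nodes, search_space):
    """Return the preemptible containers found on the given node names."""
    return [c for n in search_space for c in nodes[n] if c["preemptible"]]

# Node A holds only an AM container; B and C hold nothing preemptible.
nodes = {
    "A": [{"id": "am_1", "preemptible": True, "is_am": True}],
    "B": [{"id": "c_2", "preemptible": False, "is_am": False}],
    "C": [],
}
potential = ["A"]                                     # node-local request targets node A
remaining = [n for n in nodes if n not in potential]  # patched search space: B and C only
widened = list(nodes)                                 # trunk behavior: all of A, B, C

assert find_preemptible(nodes, remaining) == []       # the patch finds nothing to preempt
assert [c["id"] for c in find_preemptible(nodes, widened)] == ["am_1"]  # widening still reaches the AM
```

The assertions make the outcome concrete: with the patched search space no preemption happens at all, while the trunk behavior can still fall back to preempting the AM container on node A.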
[jira] [Updated] (YARN-9016) DocumentStore as a backend for ATSv2
[ https://issues.apache.org/jira/browse/YARN-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushil Ks updated YARN-9016: Attachment: (was: YARN-9016.001.patch) > DocumentStore as a backend for ATSv2 > > > Key: YARN-9016 > URL: https://issues.apache.org/jira/browse/YARN-9016 > Project: Hadoop YARN > Issue Type: New Feature > Components: ATSv2 >Reporter: Sushil Ks >Assignee: Sushil Ks >Priority: Major > > h1. Document Store for ATSv2 > The Document Store for ATSv2 is a framework for plugging in any document store vendor as a backend for ATSv2, e.g. Azure CosmosDB, MongoDB, Elasticsearch, etc. > * Supports multiple document store vendors (CosmosDB, Elasticsearch, MongoDB, etc.) by just adding new configuration properties and writing document store reader and writer clients. > * Currently has support for CosmosDB. > * All writes are async and buffered; the latest documents are flushed to the store either when the document buffer gets full or periodically at every flush interval, in the background, without adding any additional latency to running jobs. > * All the REST APIs of the Timeline Reader Server are supported. > h4. *How to enable?* > Add the following properties under *yarn-site.xml*:
> {code:java}
> <property>
>   <name>yarn.timeline-service.writer.class</name>
>   <value>org.apache.hadoop.yarn.server.timelineservice.storage.documentstore.DocumentStoreTimelineWriterImpl</value>
> </property>
> <property>
>   <name>yarn.timeline-service.reader.class</name>
>   <value>org.apache.hadoop.yarn.server.timelineservice.storage.documentstore.DocumentStoreTimelineReaderImpl</value>
> </property>
> <property>
>   <name>yarn.timeline-service.documentstore.db-name</name>
>   <value>YOUR_DATABASE_NAME</value>
> </property>
> {code}
> h3. *Creating DB and collections for storing documents* > This is similar to the HBase *TimelineSchemaCreator*: the following command needs to be executed once to set up the database and collections for storing documents.
> {code:java}
> hadoop org.apache.hadoop.yarn.server.timelineservice.documentstore.DocumentStoreCollectionCreator
> {code}
> h3. *Azure CosmosDB* > To use Azure CosmosDB as a document store for ATSv2, the following additional properties under *yarn-site.xml* are required:
> {code:java}
> <property>
>   <name>yarn.timeline-service.store-type</name>
>   <value>COSMOS_DB</value>
> </property>
> <property>
>   <name>yarn.timeline-service.cosmos-db.endpoint</name>
>   <value>http://YOUR_AZURE_COSMOS_DB_URL:443/</value>
> </property>
> <property>
>   <name>yarn.timeline-service.cosmos-db.masterkey</name>
>   <value>YOUR_AZURE_COSMOS_DB_MASTER_KEY_CREDENTIAL</value>
> </property>
> {code}
> *Testing locally* > To test Azure CosmosDB as a document store locally, install the emulator from [here|https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator] and start it. Set the endpoint and master key under *yarn-site.xml* as mentioned above and run any example job such as DistributedShell. Afterwards you can check the Data Explorer UI of the local Azure CosmosDB to query the documents, or launch the *TimelineReader* locally and fetch/query the data via its REST APIs.
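The buffered, interval-based flush policy described above (flush when the buffer fills or when the flush interval elapses) can be sketched as follows. This is a minimal illustration with invented names; the actual DocumentStore writer implementation is more involved:

```python
import time

class BufferedDocWriter:
    """Hypothetical sketch of a buffered writer: documents accumulate in a
    buffer and are flushed to the store when the buffer is full or when the
    flush interval has elapsed."""
    def __init__(self, store, max_docs=100, flush_interval_secs=60.0,
                 clock=time.monotonic):
        self.store = store          # stand-in for a real document-store client
        self.buf = []
        self.max_docs = max_docs
        self.interval = flush_interval_secs
        self.clock = clock
        self.last_flush = clock()

    def write(self, doc):
        self.buf.append(doc)
        # Flush on a full buffer or an expired interval; otherwise the write
        # returns immediately, adding no latency to the caller.
        if len(self.buf) >= self.max_docs or \
                self.clock() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.buf:
            self.store.extend(self.buf)
            self.buf.clear()
        self.last_flush = self.clock()
```

For instance, with `max_docs=2` and a long interval, the first `write` only buffers, and the second one triggers the flush to the backing store.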
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694313#comment-16694313 ] Charo Zhang commented on YARN-8986: --- [~eyang] I know what you mean: docker run will use the bridge network as the default network type if the "--net" option is not given. But in my opinion, the default network for YARN Docker is controlled by yarn.nodemanager.runtime.linux.docker.default-container-network, so we shouldn't rely on Docker's default network type. That means when the network name is not specified (i.e. "net" is null in command_config), it's better to use the yarn.nodemanager.runtime.linux.docker.default-container-network value or add "--net=none".
[jira] [Commented] (YARN-9005) FairScheduler maybe preempt the AM container
[ https://issues.apache.org/jira/browse/YARN-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694310#comment-16694310 ] Wanqiang Ji commented on YARN-9005: --- Hi [~yufeigu], sorry I missed your latest reply. I created the new issue (YARN-9041) and clarified the performance concern in the description field. I see you are watching the new issue, so we can discuss it there. Thanks for your help and understanding. > FairScheduler maybe preempt the AM container > > > Key: YARN-9005 > URL: https://issues.apache.org/jira/browse/YARN-9005 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler, scheduler preemption >Reporter: Wanqiang Ji >Assignee: Wanqiang Ji >Priority: Major > Attachments: YARN-9005.001.patch, YARN-9005.002.patch > > > In the worst case, FS preempts the AM container, because the FSPreemptionThread#identifyContainersToPreempt return value can contain the AM container.
[jira] [Updated] (YARN-8882) Phase 1 - Add a shared device mapping manager for device plugin to use
[ https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhankun Tang updated YARN-8882: --- Attachment: YARN-8882-trunk.007.patch > Phase 1 - Add a shared device mapping manager for device plugin to use > -- > > Key: YARN-8882 > URL: https://issues.apache.org/jira/browse/YARN-8882 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Zhankun Tang >Assignee: Zhankun Tang >Priority: Major > Attachments: YARN-8882-trunk.001.patch, YARN-8882-trunk.002.patch, > YARN-8882-trunk.003.patch, YARN-8882-trunk.004.patch, > YARN-8882-trunk.005.patch, YARN-8882-trunk.006.patch, > YARN-8882-trunk.007.patch > > > Since several device types use a FIFO policy to assign devices to containers, we use a shared device manager to handle all types of devices.
[jira] [Commented] (YARN-9041) Optimize FSPreemptionThread#identifyContainersToPreempt method
[ https://issues.apache.org/jira/browse/YARN-9041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694275#comment-16694275 ] Wanqiang Ji commented on YARN-9041: --- The UT failure should be unrelated; I tested locally and it works correctly. Hi [~yufeigu] and [~tangzhankun], can you help to review?
[jira] [Commented] (YARN-9039) App ACLs are not validated when serving logs from Logs CLI/Yarn UI2
[ https://issues.apache.org/jira/browse/YARN-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694264#comment-16694264 ] Bibin A Chundatt commented on YARN-9039: Thank you [~suma.shivaprasad] for raising the issue. # Can we use LogAggregationFileController#getApplicationAcls directly in checkAcl? # The AdminAclManager and ApplicationACLsManager changes are not required. # Can we add a test case to cover this scenario too? > App ACLs are not validated when serving logs from Logs CLI/Yarn UI2 > --- > > Key: YARN-9039 > URL: https://issues.apache.org/jira/browse/YARN-9039 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Critical > Attachments: YARN-9039.1.patch, YARN-9039.2.patch > > > App ACLs are not being validated when serving logs through the YARN CLI. > This also applies while serving logs through the YARN UI v2 via the ATSv2 log web service.
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694263#comment-16694263 ] Eric Yang commented on YARN-8986: - [~Charo Zhang] If I am not mistaken, the [default network type|https://docs.docker.com/network/] is bridge network. Therefore, port mapping should be allowed.
[jira] [Commented] (YARN-9039) App ACLs are not validated when serving logs from Logs CLI/Yarn UI2
[ https://issues.apache.org/jira/browse/YARN-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694255#comment-16694255 ] Hadoop QA commented on YARN-9039: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 3 new + 37 unchanged - 2 fixed = 40 total (was 39) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 28s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 32s{color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9039 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948982/YARN-9039.2.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 15dabfdf6aeb 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a41b648 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22652/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22652/testReport/ | | Max. process+thread count | 468 (vs. ulimit of 1) | | modules | C:
[jira] [Commented] (YARN-9027) EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore
[ https://issues.apache.org/jira/browse/YARN-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694241#comment-16694241 ] Hadoop QA commented on YARN-9027: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 49s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 33s{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 25s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9027 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948978/0002-YARN-9027.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d5e1fbdf3c29 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a41b648 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22651/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22651/testReport/ | | Max. process+thread count | 307 (vs. ulimit of 1) | | modules | C:
[jira] [Commented] (YARN-8882) Phase 1 - Add a shared device mapping manager for device plugin to use
[ https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694244#comment-16694244 ] Hadoop QA commented on YARN-8882: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 4s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 69m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8882 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948977/YARN-8882-trunk.006.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 00da40e85289 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a41b648 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22650/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22650/testReport/ | | Max. process+thread count | 415 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U:
[jira] [Commented] (YARN-9039) App ACLs are not validated when serving logs from Logs CLI/Yarn UI2
[ https://issues.apache.org/jira/browse/YARN-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694220#comment-16694220 ] Suma Shivaprasad commented on YARN-9039: Fixed findbugs and checkstyle issues. > App ACLs are not validated when serving logs from Logs CLI/Yarn UI2 > --- > > Key: YARN-9039 > URL: https://issues.apache.org/jira/browse/YARN-9039 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Critical > Attachments: YARN-9039.1.patch, YARN-9039.2.patch > > > App ACLs are not being validated when serving logs through the YARN CLI. > This also applies when serving logs through YARN UIv2 via the ATSv2 log > web service. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
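The ACL gap described in YARN-9039 amounts to a missing authorization check before log bytes are returned to the caller. Below is a minimal, hypothetical sketch of such a check; the map-based ACL store, the method name `canViewLogs`, and the deny-by-default choice are illustrative assumptions, not the actual YARN API:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the kind of ACL check the patch adds before
// serving aggregated logs; names are illustrative, not YARN's real API.
public class LogAclCheck {
    // appAcls maps an application id to the set of users allowed to view it.
    static boolean canViewLogs(Map<String, Set<String>> appAcls,
                               String appId, String caller) {
        Set<String> allowed = appAcls.get(appId);
        // Deny by default when no ACL entry exists for the application.
        return allowed != null && allowed.contains(caller);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> acls =
            Map.of("application_1_0001", Set.of("alice", "admin"));
        System.out.println(canViewLogs(acls, "application_1_0001", "alice"));   // true
        System.out.println(canViewLogs(acls, "application_1_0001", "mallory")); // false
    }
}
```

The same guard would run on every path that serves logs (CLI, UIv2 web service), so the check lives with the data access rather than the front end.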
[jira] [Updated] (YARN-9039) App ACLs are not validated when serving logs from Logs CLI/Yarn UI2
[ https://issues.apache.org/jira/browse/YARN-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suma Shivaprasad updated YARN-9039: --- Attachment: YARN-9039.2.patch > App ACLs are not validated when serving logs from Logs CLI/Yarn UI2 > --- > > Key: YARN-9039 > URL: https://issues.apache.org/jira/browse/YARN-9039 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Critical > Attachments: YARN-9039.1.patch, YARN-9039.2.patch > > > App ACLs are not being validated when serving logs through the YARN CLI. > This also applies when serving logs through YARN UIv2 via the ATSv2 log > web service. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9027) EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore
[ https://issues.apache.org/jira/browse/YARN-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-9027: Attachment: 0002-YARN-9027.patch > EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore > --- > > Key: YARN-9027 > URL: https://issues.apache.org/jira/browse/YARN-9027 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineserver >Affects Versions: 2.7.3 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: 0001-YARN-9027.patch, 0002-YARN-9027.patch > > > EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore as the > expected default constructor is not present. > {code} > Caused by: java.lang.RuntimeException: java.lang.NoSuchMethodException: > org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.() > at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134) > at > org.apache.hadoop.yarn.server.timeline.EntityCacheItem.refreshCache(EntityCacheItem.java:100) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getCachedStore(EntityGroupFSTimelineStore.java:1026) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresFromCacheIds(EntityGroupFSTimelineStore.java:945) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresForRead(EntityGroupFSTimelineStore.java:998) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getEntities(EntityGroupFSTimelineStore.java:1040) > at > org.apache.hadoop.yarn.server.timeline.TimelineDataManager.doGetEntities(TimelineDataManager.java:168) > at > org.apache.hadoop.yarn.server.timeline.TimelineDataManager.getEntities(TimelineDataManager.java:138) > at > org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntities(TimelineWebServices.java:117) > ... 
59 more > Caused by: java.lang.NoSuchMethodException: > org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.() > at java.lang.Class.getConstructor0(Class.java:3082) > at java.lang.Class.getDeclaredConstructor(Class.java:2178) > at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128) > ... 67 more > {code} > Repro: > {code} > 1. Set Offline Caching with > yarn.timeline-service.entity-group-fs-store.cache-store-class=org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore > 2. Run a Tez query > 3. Check Tez View > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
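The `NoSuchMethodException` in the stack trace above is the standard failure mode of reflective instantiation: `Class.getDeclaredConstructor()` with no argument types demands a no-arg constructor, and a class that only defines parameterized constructors triggers the exception. A standalone sketch (the class names here are invented for the demo, not from the patch):

```java
import java.lang.reflect.Constructor;

// Demonstrates why ReflectionUtils.newInstance-style code fails on a
// class without a no-arg constructor, as in the YARN-9027 stack trace.
public class NoArgCtorDemo {
    // A class whose only constructor takes an argument, so
    // getDeclaredConstructor() with no types cannot find a match.
    static class OnlyNamedCtor {
        OnlyNamedCtor(String name) { }
    }

    static boolean hasNoArgCtor(Class<?> clazz) {
        try {
            Constructor<?> c = clazz.getDeclaredConstructor();
            return c != null;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasNoArgCtor(OnlyNamedCtor.class)); // false
        System.out.println(hasNoArgCtor(String.class));        // true
    }
}
```

This is why the fix is simply to add the expected default constructor to LevelDBCacheTimelineStore.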
[jira] [Updated] (YARN-8882) Phase 1 - Add a shared device mapping manager for device plugin to use
[ https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhankun Tang updated YARN-8882: --- Attachment: YARN-8882-trunk.006.patch > Phase 1 - Add a shared device mapping manager for device plugin to use > -- > > Key: YARN-8882 > URL: https://issues.apache.org/jira/browse/YARN-8882 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Zhankun Tang >Assignee: Zhankun Tang >Priority: Major > Attachments: YARN-8882-trunk.001.patch, YARN-8882-trunk.002.patch, > YARN-8882-trunk.003.patch, YARN-8882-trunk.004.patch, > YARN-8882-trunk.005.patch, YARN-8882-trunk.006.patch > > > Since a few device types use a FIFO policy to assign devices to containers, > we use a shared device manager to handle all types of devices. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
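The FIFO assignment policy mentioned in the YARN-8882 description can be sketched as a queue of free devices handed out head-first and returned to the tail on release. This is an illustration of the policy under simplified assumptions (string device names, a plain map of allocations), not the patch's actual code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of FIFO device-to-container assignment: the device that
// has been free the longest is handed out first. Illustrative only.
public class FifoDeviceManager {
    private final Deque<String> freeDevices = new ArrayDeque<>();
    private final Map<String, String> allocations = new HashMap<>();

    FifoDeviceManager(String... devices) {
        for (String d : devices) {
            freeDevices.addLast(d);
        }
    }

    // Assign the device at the head of the queue, or null if none is free.
    synchronized String assign(String containerId) {
        String device = freeDevices.pollFirst();
        if (device != null) {
            allocations.put(device, containerId);
        }
        return device;
    }

    // Release puts the device back at the tail, preserving FIFO order.
    synchronized void release(String device) {
        if (allocations.remove(device) != null) {
            freeDevices.addLast(device);
        }
    }
}
```

Because the policy is the same for every device type, one shared manager like this can serve all device plugins instead of each plugin reimplementing the queue.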
[jira] [Commented] (YARN-8174) Add containerId to ResourceLocalizationService fetch failure
[ https://issues.apache.org/jira/browse/YARN-8174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694187#comment-16694187 ] Hadoop QA commented on YARN-8174: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 0 new + 129 unchanged - 1 fixed = 129 total (was 130) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 3s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 74m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8174 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919797/YARN-8174.2.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2a886f0892d7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a41b648 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22649/testReport/ | | Max. process+thread count | 415 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U:
[jira] [Commented] (YARN-8148) Update decimal values for queue capacities shown on queue status cli
[ https://issues.apache.org/jira/browse/YARN-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694180#comment-16694180 ] Hadoop QA commented on YARN-8148: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 7s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 51s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8148 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919544/YARN-8148.1.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux adc25b29b488 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a41b648 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22648/testReport/ | | Max. process+thread count | 682 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/22648/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Update decimal values for queue capacities shown on queue status cli > >
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694176#comment-16694176 ] Hadoop QA commented on YARN-8986: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 39s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}102m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | TEST-cetest | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8986 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948960/YARN-8986.006.patch | | Optional Tests | dupname asflicense
[jira] [Commented] (YARN-9007) CS preemption monitor should only select GUARANTEED containers as candidates for queue and reserved container preemption
[ https://issues.apache.org/jira/browse/YARN-9007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694173#comment-16694173 ] Tao Yang commented on YARN-9007: Thanks [~wilfreds] for the comments. I agree that the current preemptions have no relation to OPPORTUNISTIC containers. I agree with the high-level thoughts from [~sunilg], because some other preemptions may need to consider OPPORTUNISTIC containers to release used resources on nodes; in our cluster we did add a custom preemption for hotspot nodes that chooses OPPORTUNISTIC containers as the first candidates. I think we should fix the bugs in the current preemptions first. [~sunilg], could you please share your thoughts? > CS preemption monitor should only select GUARANTEED containers as candidates > for queue and reserved container preemption > > > Key: YARN-9007 > URL: https://issues.apache.org/jira/browse/YARN-9007 > Project: Hadoop YARN > Issue Type: Bug > Components: capacityscheduler >Affects Versions: 3.2.1 >Reporter: Tao Yang >Assignee: Tao Yang >Priority: Major > Attachments: YARN-9007.001.patch > > > Currently the CS preemption monitor doesn't consider the execution type of > containers, so OPPORTUNISTIC containers may be selected and killed without > effect. > In some scenarios with OPPORTUNISTIC containers, not only can preemption > fail to balance resources properly, but some apps with OPPORTUNISTIC > containers may also be affected and unable to work. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
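The fix proposed in YARN-9007 amounts to filtering preemption candidates by execution type: killing an OPPORTUNISTIC container releases no guaranteed capacity, so only GUARANTEED containers are useful candidates. A hedged sketch with simplified stand-in types (these are not the scheduler's real classes):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the selection rule under discussion: keep only GUARANTEED
// containers as preemption candidates. Container here is a simplified
// stand-in for the capacity scheduler's actual container abstraction.
public class PreemptionCandidateFilter {
    enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

    static class Container {
        final String id;
        final ExecutionType type;
        Container(String id, ExecutionType type) {
            this.id = id;
            this.type = type;
        }
    }

    // Filter out OPPORTUNISTIC containers, since preempting them
    // frees no guaranteed capacity for the starved queue.
    static List<Container> selectCandidates(List<Container> running) {
        List<Container> candidates = new ArrayList<>();
        for (Container c : running) {
            if (c.type == ExecutionType.GUARANTEED) {
                candidates.add(c);
            }
        }
        return candidates;
    }
}
```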
[jira] [Commented] (YARN-8882) Phase 1 - Add a shared device mapping manager for device plugin to use
[ https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694168#comment-16694168 ] Hadoop QA commented on YARN-8882: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 45s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 19s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 57s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 71m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8882 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948965/YARN-8882-trunk.005.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 51456bf83b15 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a41b648 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22646/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22646/testReport/ | | Max. process+thread count | 446 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U:
[jira] [Commented] (YARN-9027) EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore
[ https://issues.apache.org/jira/browse/YARN-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694161#comment-16694161 ] Prabhu Joseph commented on YARN-9027: - Thanks [~tarunparimi] for the review. Will change it to AtomicInteger. > EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore > --- > > Key: YARN-9027 > URL: https://issues.apache.org/jira/browse/YARN-9027 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineserver >Affects Versions: 2.7.3 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: 0001-YARN-9027.patch > > > EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore as the > expected default constructor is not present. > {code} > Caused by: java.lang.RuntimeException: java.lang.NoSuchMethodException: > org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.<init>() > at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134) > at > org.apache.hadoop.yarn.server.timeline.EntityCacheItem.refreshCache(EntityCacheItem.java:100) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getCachedStore(EntityGroupFSTimelineStore.java:1026) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresFromCacheIds(EntityGroupFSTimelineStore.java:945) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresForRead(EntityGroupFSTimelineStore.java:998) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getEntities(EntityGroupFSTimelineStore.java:1040) > at > org.apache.hadoop.yarn.server.timeline.TimelineDataManager.doGetEntities(TimelineDataManager.java:168) > at > org.apache.hadoop.yarn.server.timeline.TimelineDataManager.getEntities(TimelineDataManager.java:138) > at > org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntities(TimelineWebServices.java:117) > ...
59 more > Caused by: java.lang.NoSuchMethodException: > org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.<init>() > at java.lang.Class.getConstructor0(Class.java:3082) > at java.lang.Class.getDeclaredConstructor(Class.java:2178) > at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128) > ... 67 more > {code} > Repro: > {code} > 1. Set Offline Caching with > yarn.timeline-service.entity-group-fs-store.cache-store-class=org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore > 2. Run a Tez query > 3. Check Tez View > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
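The stack trace comes down to a plain reflection rule: `Class.getDeclaredConstructor()` with no argument types only finds a no-arg constructor, so `ReflectionUtils.newInstance`-style instantiation fails without one. A small self-contained reproduction of that failure mode (the two nested classes are illustrative stand-ins, not the Hadoop classes):

```java
// Minimal reproduction of the failure mode above: reflective instantiation
// via getDeclaredConstructor() with no arguments requires a no-arg
// constructor on the target class.
public class ReflectionCtorSketch {
  public static class WithDefaultCtor {
    public WithDefaultCtor() { }
  }

  public static class WithoutDefaultCtor {
    public WithoutDefaultCtor(String name) { }
  }

  static boolean canInstantiate(Class<?> clazz) {
    try {
      clazz.getDeclaredConstructor().newInstance();
      return true;
    } catch (ReflectiveOperationException e) {
      // NoSuchMethodException lands here when no no-arg constructor exists,
      // mirroring the ReflectionUtils.newInstance failure in the trace.
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(canInstantiate(WithDefaultCtor.class));    // prints true
    System.out.println(canInstantiate(WithoutDefaultCtor.class)); // prints false
  }
}
```

This is why the patch adds a default constructor to LevelDBCacheTimelineStore.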
[jira] [Commented] (YARN-9040) LevelDBCacheTimelineStore in ATS 1.5 leaks native memory
[ https://issues.apache.org/jira/browse/YARN-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694159#comment-16694159 ] Hadoop QA commented on YARN-9040: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 22 unchanged - 0 fixed = 23 total (was 22) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 30s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 45s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 21s{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 71m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9040 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948962/YARN-9040.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8b1df4d6ae2e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git
[jira] [Commented] (YARN-9027) EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore
[ https://issues.apache.org/jira/browse/YARN-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694156#comment-16694156 ] Tarun Parimi commented on YARN-9027: Thanks [~Prabhu Joseph] for the patch. I have one comment on the patch. The increment of dbCtr in the default constructor is not thread safe. We have to synchronize or use AtomicInteger to make it thread safe. > EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore > --- > > Key: YARN-9027 > URL: https://issues.apache.org/jira/browse/YARN-9027 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineserver >Affects Versions: 2.7.3 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: 0001-YARN-9027.patch > > > EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore as the > expected default constructor is not present. > {code} > Caused by: java.lang.RuntimeException: java.lang.NoSuchMethodException: > org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.<init>() > at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134) > at > org.apache.hadoop.yarn.server.timeline.EntityCacheItem.refreshCache(EntityCacheItem.java:100) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getCachedStore(EntityGroupFSTimelineStore.java:1026) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresFromCacheIds(EntityGroupFSTimelineStore.java:945) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresForRead(EntityGroupFSTimelineStore.java:998) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getEntities(EntityGroupFSTimelineStore.java:1040) > at > org.apache.hadoop.yarn.server.timeline.TimelineDataManager.doGetEntities(TimelineDataManager.java:168) > at > org.apache.hadoop.yarn.server.timeline.TimelineDataManager.getEntities(TimelineDataManager.java:138) > at >
org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntities(TimelineWebServices.java:117) > ... 59 more > Caused by: java.lang.NoSuchMethodException: > org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.<init>() > at java.lang.Class.getConstructor0(Class.java:3082) > at java.lang.Class.getDeclaredConstructor(Class.java:2178) > at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128) > ... 67 more > {code} > Repro: > {code} > 1. Set Offline Caching with > yarn.timeline-service.entity-group-fs-store.cache-store-class=org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore > 2. Run a Tez query > 3. Check Tez View > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9027) EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore
[ https://issues.apache.org/jira/browse/YARN-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694155#comment-16694155 ] Hadoop QA commented on YARN-9027: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 34s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 23s{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage | | | Write to static field org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.dbCtr from instance method new org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore() At LevelDBCacheTimelineStore.java:from instance method new org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore() At LevelDBCacheTimelineStore.java:[line 83] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9027 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948968/0001-YARN-9027.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6c714d53107f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a41b648 | | maven | version: Apache Maven 3.3.9
[jira] [Commented] (YARN-9041) Optimize FSPreemptionThread#identifyContainersToPreempt method
[ https://issues.apache.org/jira/browse/YARN-9041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694139#comment-16694139 ] Hadoop QA commented on YARN-9041: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 18s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}153m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9041 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948932/YARN-9041.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2ea47ba1f428 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a41b648 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/22643/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22643/testReport/ | | Max. process+thread count | 945 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U:
[jira] [Commented] (YARN-8174) Add containerId to ResourceLocalizationService fetch failure
[ https://issues.apache.org/jira/browse/YARN-8174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694132#comment-16694132 ] Prabhu Joseph commented on YARN-8174: - [~bibinchundatt] Can you review this patch when you get time? Thanks. > Add containerId to ResourceLocalizationService fetch failure > > > Key: YARN-8174 > URL: https://issues.apache.org/jira/browse/YARN-8174 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.7.3 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Minor > Attachments: YARN-8174.1.patch, YARN-8174.2.patch > > > When localization of a resource failed due to a change in timestamp, there > is no containerId logged to correlate with. > {code} > 2018-04-18 07:31:46,033 WARN localizer.ResourceLocalizationService > (ResourceLocalizationService.java:processHeartbeat(1017)) - { > hdfs://tarunhdp-1.openstacklocal:8020/user/ambari-qa/.staging/job_1523550428406_0016/job.splitmetainfo, > 1524036694502, FILE, null } failed: Resource > hdfs://tarunhdp-1.openstacklocal:8020/user/ambari-qa/.staging/job_1523550428406_0016/job.splitmetainfo > changed on src filesystem (expected 1524036694502, was 1524036694502 > java.io.IOException: Resource > hdfs://tarunhdp-1.openstacklocal:8020/user/ambari-qa/.staging/job_1523550428406_0016/job.splitmetainfo > changed on src filesystem (expected 1524036694502, was 1524036694502 > at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:258) > at > org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63) > at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:362) > at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:360) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) > at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:360) > at 
org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
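The requested change is simply to carry the container ID into the fetch-failure message so a log line like the one above can be correlated with a specific container. A hypothetical sketch of such a message builder (the method name and message shape are illustrative assumptions, not the actual patch):

```java
// Sketch of the logging improvement proposed in this issue: include the
// container ID alongside the resource in the localization-failure message.
// fetchFailureMessage is a hypothetical helper, not NodeManager code.
public class LocalizationLogSketch {
  static String fetchFailureMessage(String containerId, String resource,
      String cause) {
    // Putting the container ID first makes grepping by container trivial.
    return "Localization failed for container " + containerId
        + ", resource " + resource + ": " + cause;
  }

  public static void main(String[] args) {
    System.out.println(fetchFailureMessage(
        "container_1523550428406_0016_01_000002",
        "hdfs://host:8020/user/ambari-qa/.staging/job.splitmetainfo",
        "changed on src filesystem"));
  }
}
```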
[jira] [Commented] (YARN-8148) Update decimal values for queue capacities shown on queue status cli
[ https://issues.apache.org/jira/browse/YARN-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694131#comment-16694131 ] Prabhu Joseph commented on YARN-8148: - [~sunilg] Can you review this patch when you get time. Thanks > Update decimal values for queue capacities shown on queue status cli > > > Key: YARN-8148 > URL: https://issues.apache.org/jira/browse/YARN-8148 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 3.0.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-8148.1.patch > > > Capacities are shown with two decimal values in RM UI as part of YARN-6182. > The queue status cli are still showing one decimal value. > {code} > [root@bigdata3 yarn]# yarn queue -status default > Queue Information : > Queue Name : default > State : RUNNING > Capacity : 69.9% > Current Capacity : .0% > Maximum Capacity : 70.0% > Default Node Label expression : > Accessible Node Labels : * > Preemption : enabled > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
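The formatting change is essentially a one-liner: render capacities with two decimal places, as the RM UI does since YARN-6182, instead of one. A minimal sketch, where `formatCapacity` is a hypothetical stand-in for the CLI's print code and `Locale.ROOT` is used so the decimal separator is stable:

```java
import java.util.Locale;

// Sketch of the fix discussed above: print queue capacities with two
// decimal places to match the RM UI. The method is illustrative, not
// the actual QueueCLI code.
public class CapacityFormatSketch {
  static String formatCapacity(float fraction) {
    // The scheduler stores capacity as a 0..1 fraction; the CLI prints
    // a percentage. "%.2f" yields the two-decimal form.
    return String.format(Locale.ROOT, "%.2f%%", fraction * 100);
  }

  public static void main(String[] args) {
    // Two-decimal percentage, as in the RM UI.
    System.out.println(formatCapacity(0.699f));
  }
}
```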
[jira] [Commented] (YARN-9027) EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore
[ https://issues.apache.org/jira/browse/YARN-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694125#comment-16694125 ] Prabhu Joseph commented on YARN-9027: - [~rohithsharma] Please review the patch when you get time. I have tested the fix against the repro and it works fine. 1. Using the same dbPath for every EntityCacheItem leads to a lock issue: {code} Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock /hadoop/yarn/timeline/db-timeline-cache.ldb/LOCK: already held by process at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200) at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218) at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168) at org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.serviceInit(LevelDBCacheTimelineStore.java:114) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) ... 67 more {code} 2. The patch sets a unique integer suffix on every EntityCacheItem's dbPath, since each EntityCacheItem is mapped to a unique TimelineEntityGroupId in cachedLogs: {code} private Map<TimelineEntityGroupId, EntityCacheItem> cachedLogs; {code} > EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore > --- > > Key: YARN-9027 > URL: https://issues.apache.org/jira/browse/YARN-9027 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineserver >Affects Versions: 2.7.3 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: 0001-YARN-9027.patch > > > EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore as the > expected default constructor is not present. 
> {code} > Caused by: java.lang.RuntimeException: java.lang.NoSuchMethodException: > org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.<init>() > at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134) > at > org.apache.hadoop.yarn.server.timeline.EntityCacheItem.refreshCache(EntityCacheItem.java:100) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getCachedStore(EntityGroupFSTimelineStore.java:1026) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresFromCacheIds(EntityGroupFSTimelineStore.java:945) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresForRead(EntityGroupFSTimelineStore.java:998) > at > org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getEntities(EntityGroupFSTimelineStore.java:1040) > at > org.apache.hadoop.yarn.server.timeline.TimelineDataManager.doGetEntities(TimelineDataManager.java:168) > at > org.apache.hadoop.yarn.server.timeline.TimelineDataManager.getEntities(TimelineDataManager.java:138) > at > org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntities(TimelineWebServices.java:117) > ... 59 more > Caused by: java.lang.NoSuchMethodException: > org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.<init>() > at java.lang.Class.getConstructor0(Class.java:3082) > at java.lang.Class.getDeclaredConstructor(Class.java:2178) > at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128) > ... 67 more > {code} > Repro: > {code} > 1. Set Offline Caching with > yarn.timeline-service.entity-group-fs-store.cache-store-class=org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore > 2. Run a Tez query > 3. Check Tez View > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
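The unique-suffix scheme described above, combined with the AtomicInteger suggested earlier in the thread, can be sketched as follows. The field name, class, and path layout are illustrative, not the actual patch:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the thread-safe counter discussed in this issue: each new
// cache-store instance takes a unique suffix for its LevelDB directory,
// so two instances never open the same path and hit LevelDB's process
// lock ("LOCK: already held by process").
public class UniqueDbPathSketch {
  private static final AtomicInteger DB_COUNTER = new AtomicInteger(0);

  private final String dbPath;

  public UniqueDbPathSketch(String baseDir) {
    // getAndIncrement() is atomic, so concurrent constructors cannot
    // observe the same value (unlike a plain "dbCtr++").
    this.dbPath = baseDir + "/db-timeline-cache-"
        + DB_COUNTER.getAndIncrement() + ".ldb";
  }

  public String getDbPath() {
    return dbPath;
  }

  public static void main(String[] args) {
    UniqueDbPathSketch a = new UniqueDbPathSketch("/tmp/timeline");
    UniqueDbPathSketch b = new UniqueDbPathSketch("/tmp/timeline");
    // Two instances always get distinct paths.
    System.out.println(a.getDbPath().equals(b.getDbPath())); // prints false
  }
}
```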
[jira] [Updated] (YARN-9027) EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore
[ https://issues.apache.org/jira/browse/YARN-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-9027: Attachment: 0001-YARN-9027.patch
[jira] [Updated] (YARN-8882) Phase 1 - Add a shared device mapping manager for device plugin to use
[ https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhankun Tang updated YARN-8882: --- Attachment: YARN-8882-trunk.005.patch > Phase 1 - Add a shared device mapping manager for device plugin to use > -- > > Key: YARN-8882 > URL: https://issues.apache.org/jira/browse/YARN-8882 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Zhankun Tang >Assignee: Zhankun Tang >Priority: Major > Attachments: YARN-8882-trunk.001.patch, YARN-8882-trunk.002.patch, > YARN-8882-trunk.003.patch, YARN-8882-trunk.004.patch, > YARN-8882-trunk.005.patch > > > Since a few device types use a FIFO policy to assign devices to containers, we > use a shared device manager to handle all types of devices.
[jira] [Updated] (YARN-9027) EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore
[ https://issues.apache.org/jira/browse/YARN-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-9027: Attachment: (was: 0001-YARN-9027.patch)
[jira] [Updated] (YARN-9027) EntityGroupFSTimelineStore fails to init LevelDBCacheTimelineStore
[ https://issues.apache.org/jira/browse/YARN-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-9027: Attachment: 0001-YARN-9027.patch
[jira] [Updated] (YARN-9040) LevelDBCacheTimelineStore in ATS 1.5 leaks native memory
[ https://issues.apache.org/jira/browse/YARN-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tarun Parimi updated YARN-9040: --- Attachment: YARN-9040.001.patch > LevelDBCacheTimelineStore in ATS 1.5 leaks native memory > > > Key: YARN-9040 > URL: https://issues.apache.org/jira/browse/YARN-9040 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineserver >Affects Versions: 2.8.0 >Reporter: Tarun Parimi >Assignee: Tarun Parimi >Priority: Major > Attachments: YARN-9040.001.patch > > > When LevelDBCacheTimelineStore from YARN-4219 is used as the ATS 1.5 entity > caching storage, we observe a native memory leak due to open leveldb files, even after > the fix of YARN-5368. > Top output shows 0.024TB (25GB) RES, even though the heap size is only 8GB. > > > {code:java} > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > 25519 yarn 20 0 33.024g 0.024t 41468 S 6.2 26.0 21:07.39 > /usr/java/default/bin/java -Dproc_timelineserver -Xmx8192m > {code} > > lsof shows a lot of open timeline-cache.ldb files still referenced by the ATS > process even though they are deleted (DEL); they are not present when listing > the directory. > > {code:java} > java 25519 yarn DEL REG 253,28 9438452 > /var/yarn/timeline/timelineEntityGroupId_1542280269959_55569_dag_1542280269959_55569_2-timeline-cache.ldb/07.sst > java 25519 yarn DEL REG 253,28 9438438 > /var/yarn/timeline/timelineEntityGroupId_1542280269959_55569_dag_1542280269959_55569_2-timeline-cache.ldb/07.sst > java 25519 yarn DEL REG 253,28 9438437 > /var/yarn/timeline/timelineEntityGroupId_1542280269959_55569_dag_1542280269959_55569_2-timeline-cache.ldb/05.sst > {code} > > It looks like LevelDBCacheTimelineStore is not closing these files because the > LevelDB DBIterator is not closed.
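The leak described above is the classic pattern of an iterator that owns native state and is never closed. A minimal sketch of the fix pattern, using a hypothetical stand-in class instead of leveldbjni's real DBIterator:

```java
// Sketch only: TrackedIterator is a hypothetical stand-in for leveldbjni's
// DBIterator. Any iterator that owns native resources must be closed, or the
// backing .sst files stay referenced (the DEL entries seen in lsof above).
public class IteratorClosing {

    static class TrackedIterator implements AutoCloseable {
        boolean closed = false;

        @Override
        public void close() {
            closed = true; // the real iterator releases its native handle here
        }
    }

    // try-with-resources guarantees close() runs even if iteration throws.
    public static boolean scanAndClose(TrackedIterator iter) {
        try (TrackedIterator it = iter) {
            // ... iterate over entries here ...
        }
        return iter.closed;
    }
}
```

The same try-with-resources (or try/finally) discipline applied to the store's DBIterator would let the deleted .ldb files actually be reclaimed.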
[jira] [Commented] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694101#comment-16694101 ] Hadoop QA commented on YARN-9036: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 31s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 35s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}156m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9036 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948926/YARN-9036.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c0142b9c362c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a41b648 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22642/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit |
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694099#comment-16694099 ] Charo Zhang commented on YARN-8986: --- [~eyang] 1. Freeing of network_names was added in the 006 patch. 2. When the network name is not specified, I think we need not add -P or any of the port mappings, because they are only valid for the bridge network type. > publish all exposed ports to random ports when using bridge network > --- > > Key: YARN-8986 > URL: https://issues.apache.org/jira/browse/YARN-8986 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.1.1 >Reporter: Charo Zhang >Assignee: Charo Zhang >Priority: Minor > Labels: Docker > Attachments: YARN-8986.001.patch, YARN-8986.002.patch, > YARN-8986.003.patch, YARN-8986.004.patch, YARN-8986.005.patch, > YARN-8986.006.patch > > > It's better to publish all exposed ports to random ports (-P) or support port > mapping (-p) when using the bridge network for a Docker container.
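The decision being debated above (only add -P/-p on the bridge network, and treat a missing --net as Docker's bridge default) can be sketched as a small predicate. The real logic lives in the C container-executor (add_ports_mapping_to_command); this Java sketch with a hypothetical method name only illustrates the intended behavior:

```java
// Sketch of the decision: port publishing (-P / -p) is only meaningful on the
// bridge network. When no --net is passed, Docker falls back to bridge, so a
// null/empty network name is treated as bridge too.
public class PortPublishing {
    public static boolean supportsPortPublishing(String network) {
        if (network == null || network.isEmpty()) {
            return true; // no --net given: Docker defaults to bridge
        }
        return network.equals("bridge");
    }
}
```

Under this rule, host and none networks would never get -P, matching the point that the flags are only valid for the bridge network type.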
[jira] [Updated] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Charo Zhang updated YARN-8986: -- Attachment: YARN-8986.006.patch
[jira] [Commented] (YARN-9026) DefaultOOMHandler should mark preempted containers as killed
[ https://issues.apache.org/jira/browse/YARN-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693976#comment-16693976 ] Zhankun Tang commented on YARN-9026: [~haibochen] Yeah. Thanks. :) > DefaultOOMHandler should mark preempted containers as killed > > > Key: YARN-9026 > URL: https://issues.apache.org/jira/browse/YARN-9026 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 3.2.1 >Reporter: Haibo Chen >Priority: Major > > DefaultOOMHandler today kills a selected container by sending a kill -9 signal > to all processes running within the container cgroup. > The container then exits with a non-zero code and is hence treated as a failure > by the ContainerLaunch threads. > We should instead mark these containers as killed.
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693937#comment-16693937 ] Eric Yang commented on YARN-8986: - [~Charo Zhang] Patch 005 seems to return from add_ports_mapping_to_command without freeing network_names, and it also does not add -P or any of the port mappings when the network name is not specified. This looks like a bug.
[jira] [Updated] (YARN-9041) Optimize FSPreemptionThread#identifyContainersToPreempt method
[ https://issues.apache.org/jira/browse/YARN-9041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wanqiang Ji updated YARN-9041: -- Component/s: scheduler preemption > Optimize FSPreemptionThread#identifyContainersToPreempt method > -- > > Key: YARN-9041 > URL: https://issues.apache.org/jira/browse/YARN-9041 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler preemption >Reporter: Wanqiang Ji >Assignee: Wanqiang Ji >Priority: Major > Attachments: YARN-9041.001.patch > > > In the FSPreemptionThread#identifyContainersToPreempt method, I suggest that for > AM preemption, when locality relaxation is allowed, the search space should be > changed from all nodes to the remaining nodes. The remaining nodes are > equal to all nodes minus the potential nodes. > The judging condition would change to: > # rr.getRelaxLocality() > # !ResourceRequest.isAnyLocation(rr.getResourceName()) > # bestContainers != null > # bestContainers.numAMContainers > 0 > If I have misunderstood, please correct me. thx~
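The core of the proposal above is shrinking the search space to "all nodes minus the potential nodes". As a plain set-difference sketch (node names and the helper class are illustrative; the real code operates on scheduler node objects, not strings):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: remaining = all \ potential, i.e. the nodes that were
// not already identified as potential preemption targets.
public class RemainingNodes {
    public static Set<String> remaining(Set<String> allNodes,
                                        Set<String> potentialNodes) {
        Set<String> remaining = new HashSet<>(allNodes);
        remaining.removeAll(potentialNodes);
        return remaining;
    }
}
```

Searching only this difference avoids re-examining nodes that the first pass already considered.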
[jira] [Created] (YARN-9041) Optimize FSPreemptionThread#identifyContainersToPreempt method
Wanqiang Ji created YARN-9041: - Summary: Optimize FSPreemptionThread#identifyContainersToPreempt method Key: YARN-9041 URL: https://issues.apache.org/jira/browse/YARN-9041 Project: Hadoop YARN Issue Type: Improvement Reporter: Wanqiang Ji Assignee: Wanqiang Ji
[jira] [Commented] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693883#comment-16693883 ] Keqiu Hu commented on YARN-9036: Apparently the IDE didn't pick up the right import; uploaded YARN-9036.004.patch. > Escape newlines in health report in YARN UI > --- > > Key: YARN-9036 > URL: https://issues.apache.org/jira/browse/YARN-9036 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Keqiu Hu >Priority: Major > Attachments: YARN-9036.001.patch, YARN-9036.002.patch, > YARN-9036.003.patch, YARN-9036.003.patch, YARN-9036.004.patch > > > NodesPage prints the health report in the UI inside a JavaScript string. If the > health report contains newlines, it garbles the generated code and the > list of nodes cannot be rendered.
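The fix direction described above, escaping newlines before embedding the health report in a JavaScript string literal, can be sketched as below. The helper name is hypothetical and the actual patch may use an existing escaping utility instead:

```java
// Sketch: escape characters that would terminate or garble a single-quoted
// JavaScript string literal. Backslash must be handled first so later
// replacements don't double-escape it.
public class JsStringEscaper {
    public static String escape(String s) {
        return s.replace("\\", "\\\\")
                .replace("'", "\\'")
                .replace("\r", "\\r")
                .replace("\n", "\\n");
    }
}
```

With this applied, a multi-line health report becomes a single-line literal and the generated page script stays parseable.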
[jira] [Updated] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keqiu Hu updated YARN-9036: --- Attachment: YARN-9036.004.patch
[jira] [Commented] (YARN-8992) Fair scheduler can delete a dynamic queue while an application attempt is being added to the queue
[ https://issues.apache.org/jira/browse/YARN-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693874#comment-16693874 ] Hudson commented on YARN-8992: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15478 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15478/]) YARN-8992. Fair scheduler can delete a dynamic queue while an (haibochen: rev a41b648e98b6a1c5a9cdb7393e73e576a97f56d4) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSParentQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestQueueManager.java > Fair scheduler can delete a dynamic queue while an application attempt is > being added to the queue > -- > > Key: YARN-8992 > URL: https://issues.apache.org/jira/browse/YARN-8992 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.1.1 >Reporter: Haibo Chen >Assignee: Wilfred Spiegelenburg >Priority: Major > Fix For: 3.2.1 > > Attachments: YARN-8992.001.patch, YARN-8992.002.patch > > > As 
discovered in YARN-8990, QueueManager can see a leaf queue being empty > while FSLeafQueue.addApp() is called in the middle of > {code:java} > return queue.getNumRunnableApps() == 0 && > leafQueue.getNumNonRunnableApps() == 0 && > leafQueue.getNumAssignedApps() == 0;{code}
[jira] [Commented] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693873#comment-16693873 ] Hadoop QA commented on YARN-9036: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 32s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 26s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 4m 18s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9036 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948921/YARN-9036.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c991a36fd23c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1734ace | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | mvninstall |
[jira] [Commented] (YARN-9025) Make TestFairScheduler#testChildMaxResources more reliable, as it is flaky now
[ https://issues.apache.org/jira/browse/YARN-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693863#comment-16693863 ] Haibo Chen commented on YARN-9025: -- [~snemeth] I noticed the existence of MockRM.drainEvents(). Does that help get rid of the non-determinism completely? > Make TestFairScheduler#testChildMaxResources more reliable, as it is flaky now > -- > > Key: YARN-9025 > URL: https://issues.apache.org/jira/browse/YARN-9025 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-9025.001.patch > > > While working on the patch for YARN-8059, I came across a flaky test; see > this link: > https://builds.apache.org/job/PreCommit-YARN-Build/22412/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt > This is the error message: > {code:java} > [ERROR] Tests run: 108, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: > 19.37 s <<< FAILURE! - in > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler > [ERROR] > testChildMaxResources(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler) > Time elapsed: 0.164 s <<< FAILURE! > java.lang.AssertionError: App 1 is not running with the correct number of > containers expected:<2> but was:<0> > at org.junit.Assert.fail(Assert.java:88){code} > The thing is, even if we had 8 node updates, due to the way the events are > handled it can happen that no container is allocated for the > application. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
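The flakiness described above is a race between an asynchronous event dispatcher and the test's assertion, and MockRM.drainEvents() is meant to close exactly that gap. A minimal, self-contained sketch of the pattern using plain JDK code (the AsyncDispatcher here is a stand-in, not Hadoop's actual class):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal stand-in for an async event dispatcher. MockRM.drainEvents()
// plays the same role in the real test: wait until all queued events
// have been handled before asserting on the outcome.
class AsyncDispatcher {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final AtomicInteger pending = new AtomicInteger();

    AsyncDispatcher() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run();
                    pending.decrementAndGet();
                }
            } catch (InterruptedException ignored) { }
        });
        worker.setDaemon(true);
        worker.start();
    }

    void dispatch(Runnable event) {
        pending.incrementAndGet();
        queue.add(event);
    }

    // Block until every dispatched event has been fully handled.
    void drainEvents() throws InterruptedException {
        while (pending.get() > 0) Thread.sleep(1);
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        AsyncDispatcher dispatcher = new AsyncDispatcher();
        AtomicInteger allocated = new AtomicInteger();
        // Eight "node update" events, handled asynchronously.
        for (int i = 0; i < 8; i++) dispatcher.dispatch(allocated::incrementAndGet);
        dispatcher.drainEvents(); // without this line, the check below is a race
        if (allocated.get() != 8)
            throw new AssertionError("expected 8, got " + allocated.get());
        System.out.println("allocated=" + allocated.get());
    }
}
```

Without the drainEvents() call the main thread can observe zero handled events, which mirrors the "expected:<2> but was:<0>" failure.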
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693857#comment-16693857 ] Hadoop QA commented on YARN-8986: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 57s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 90m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8986 | | JIRA Patch URL |
[jira] [Commented] (YARN-8992) Fair scheduler can delete a dynamic queue while an application attempt is being added to the queue
[ https://issues.apache.org/jira/browse/YARN-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693855#comment-16693855 ] Haibo Chen commented on YARN-8992: -- +1. Checking it in shortly > Fair scheduler can delete a dynamic queue while an application attempt is > being added to the queue > -- > > Key: YARN-8992 > URL: https://issues.apache.org/jira/browse/YARN-8992 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.1.1 >Reporter: Haibo Chen >Assignee: Wilfred Spiegelenburg >Priority: Major > Attachments: YARN-8992.001.patch, YARN-8992.002.patch > > > As discovered in YARN-8990, QueueManager can see a leaf queue being empty > while FSLeafQueue.addApp() is called in the middle of > {code:java} > return queue.getNumRunnableApps() == 0 && > leafQueue.getNumNonRunnableApps() == 0 && > leafQueue.getNumAssignedApps() == 0;{code}
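The bug above is a classic check-then-act race: the emptiness check and the queue deletion are not atomic with respect to addApp(). A self-contained sketch of the fix shape, with illustrative names rather than the actual FairScheduler API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Make "is the queue empty?" and "delete it" one atomic step with respect
// to addApp(), so an attempt can never be added to a queue that is being
// removed concurrently.
public class Main {
    static final Object writeLock = new Object();
    static final Map<String, List<String>> queues = new HashMap<>();

    static void addApp(String queue, String app) {
        synchronized (writeLock) {
            queues.computeIfAbsent(queue, q -> new ArrayList<>()).add(app);
        }
    }

    static void removeEmptyDynamicQueue(String queue) {
        synchronized (writeLock) {
            List<String> apps = queues.get(queue);
            // Emptiness check and removal happen under the same lock.
            if (apps != null && apps.isEmpty()) queues.remove(queue);
        }
    }

    public static void main(String[] args) throws Exception {
        queues.put("root.dynamic", new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> addApp("root.dynamic", "app_1"));
        pool.submit(() -> removeEmptyDynamicQueue("root.dynamic"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // Whichever thread wins, the app is never lost: either the deletion
        // saw a non-empty queue and kept it, or addApp recreated the queue.
        if (!queues.getOrDefault("root.dynamic", List.of()).contains("app_1"))
            throw new AssertionError("lost app during queue deletion");
        System.out.println("queue holds " + queues.get("root.dynamic"));
    }
}
```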
[jira] [Commented] (YARN-9030) Log aggregation changes to handle filesystems which do not support permissions
[ https://issues.apache.org/jira/browse/YARN-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693852#comment-16693852 ] Wangda Tan commented on YARN-9030: -- Thanks [~suma.shivaprasad], +1, will get it committed later today. > Log aggregation changes to handle filesystems which do not support permissions > -- > > Key: YARN-9030 > URL: https://issues.apache.org/jira/browse/YARN-9030 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Major > Attachments: YARN-9030.1.patch, YARN-9030.2.patch > > > Some cloud storages like ADLS do not support permissions in which case they > throw an UnsupportedOperationException. Log aggregation code should > log/ignore these exceptions and not set permissions henceforth for log > aggregation base dir/sub dirs > {noformat} > 2018-11-12 15:37:28,726 WARN logaggregation.LogAggregationService > (LogAggregationService.java:initApp(209)) - Application failed to init > aggregation > org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to check > permissions for dir [abfs://testc...@test.blob.core.windows.net/app-logs] > at > org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController.verifyAndCreateRemoteLogDir(LogAggregationFileController.java:277) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initAppAggregator(LogAggregationService.java:238) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initApp(LogAggregationService.java:204) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:347) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:69) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197) > at > 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126) > at java.lang.Thread.run(Thread.java:748) > {noformat}
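The behavior the patch proposes, catch UnsupportedOperationException from setPermission(), log it, and skip permission calls from then on, can be sketched as follows. The Fs interface is a minimal stand-in, not Hadoop's org.apache.hadoop.fs.FileSystem:

```java
// Treat UnsupportedOperationException from setPermission() as "this
// filesystem has no permission model" (e.g. ADLS/ABFS), warn once, and
// skip permission calls for the rest of log aggregation setup instead
// of failing application init.
public class Main {
    interface Fs {
        void setPermission(String path, String perm);
    }

    static boolean fsSupportsPermissions = true; // flipped on first UOE

    static void trySetPermission(Fs fs, String path, String perm) {
        if (!fsSupportsPermissions) return; // skip henceforth
        try {
            fs.setPermission(path, perm);
        } catch (UnsupportedOperationException e) {
            System.out.println("WARN: " + path
                + ": filesystem does not support permissions, skipping");
            fsSupportsPermissions = false;
        }
    }

    public static void main(String[] args) {
        // A filesystem that rejects permission operations, as ADLS does.
        Fs adlsLike = (path, perm) -> {
            throw new UnsupportedOperationException();
        };
        trySetPermission(adlsLike, "/app-logs", "1777");     // warns once
        trySetPermission(adlsLike, "/app-logs/user", "770"); // silently skipped
        System.out.println("init completed"); // init no longer fails
    }
}
```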
[jira] [Updated] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated YARN-9036: Attachment: YARN-9036.003.patch > Escape newlines in health report in YARN UI > --- > > Key: YARN-9036 > URL: https://issues.apache.org/jira/browse/YARN-9036 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Keqiu Hu >Priority: Major > Attachments: YARN-9036.001.patch, YARN-9036.002.patch, > YARN-9036.003.patch, YARN-9036.003.patch > > > NodesPage prints health report info in the UI in a javascript string. If > health report contains newlines it will garble the generated code and the > list of nodes cannot be rendered.
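The fix amounts to escaping any character that would terminate or garble a JavaScript string literal before embedding the health report in generated page code. A hedged sketch of such an escaper (method name hypothetical, not the actual patch's helper):

```java
public class Main {
    // Escape characters that would break a JavaScript string literal when
    // the node health report is embedded in generated page code: raw
    // newlines end the literal mid-string and garble everything after it.
    static String escapeForJs(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '\\': out.append("\\\\"); break;
                case '\n': out.append("\\n");  break;
                case '\r': out.append("\\r");  break;
                case '\'': out.append("\\'");  break;
                case '"':  out.append("\\\""); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A health report containing a raw newline, as described in the issue.
        String report = "disk ok\nmemory low";
        System.out.println(escapeForJs(report));
    }
}
```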
[jira] [Comment Edited] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693797#comment-16693797 ] Charo Zhang edited comment on YARN-8986 at 11/20/18 9:29 PM: - [~eyang] I think network_name is allowed to be absent, and I found that set_network returns 0 when it does not find the "net" key. So I handle it the same way in add_ports_mapping_to_command: if network_name is null, it returns 0 and does nothing, so there is no need to set the default network to bridge. I also added two more test cases to test_docker_util.cc for when network_name is absent. The latest 005 patch is uploaded. was (Author: charo zhang): [~eyang] I think network_name is allowed to not exist, and i find set_network return 0 when do not get "net" key. so i process it with the same way in add_ports_mapping_to_command, if network_name is null, it return 0 to do nothing, there is not need to set default network is bridge. At same time, i add two more test cases in test_docker_util.cc if network_name not exist. The latest 005 patch is uploaded. > publish all exposed ports to random ports when using bridge network > --- > > Key: YARN-8986 > URL: https://issues.apache.org/jira/browse/YARN-8986 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.1.1 >Reporter: Charo Zhang >Assignee: Charo Zhang >Priority: Minor > Labels: Docker > Attachments: YARN-8986.001.patch, YARN-8986.002.patch, > YARN-8986.003.patch, YARN-8986.004.patch, YARN-8986.005.patch > > > it's better to publish all exposed ports to random ports(-P) or support port > mapping(-p) for bridge network when using bridge network for docker container. >
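The behavior discussed in this thread, omit --net when no network name is configured (letting Docker fall back to its default bridge network) while still publishing exposed ports, can be sketched as argument-building logic. This is illustrative Java only; the real logic lives in container-executor's docker-util.c:

```java
import java.util.ArrayList;
import java.util.List;

// Build docker run arguments: if no network name is configured, pass no
// --net flag (Docker then defaults to bridge) but still publish all
// exposed ports with -P; if a network is configured, pass it through.
public class Main {
    static List<String> buildRunArgs(String networkName, boolean publishAll) {
        List<String> args = new ArrayList<>(List.of("docker", "run"));
        if (networkName != null && !networkName.isEmpty()) {
            args.add("--net=" + networkName);
        }
        // else: omit --net entirely and fall back to Docker's default network.
        if (publishAll) {
            args.add("-P"); // publish every exposed port to a random host port
        }
        return args;
    }

    public static void main(String[] args) {
        System.out.println(buildRunArgs(null, true));
        System.out.println(buildRunArgs("bridge", true));
    }
}
```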
[jira] [Updated] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Charo Zhang updated YARN-8986: -- Attachment: YARN-8986.005.patch > publish all exposed ports to random ports when using bridge network > --- > > Key: YARN-8986 > URL: https://issues.apache.org/jira/browse/YARN-8986 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.1.1 >Reporter: Charo Zhang >Assignee: Charo Zhang >Priority: Minor > Labels: Docker > Attachments: YARN-8986.001.patch, YARN-8986.002.patch, > YARN-8986.003.patch, YARN-8986.004.patch, YARN-8986.005.patch > > > it's better to publish all exposed ports to random ports(-P) or support port > mapping(-p) for bridge network when using bridge network for docker container. >
[jira] [Commented] (YARN-8738) FairScheduler configures maxResources or minResources as negative, the value parse to a positive number.
[ https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693780#comment-16693780 ] Hadoop QA commented on YARN-8738: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 33s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 3 new + 12 unchanged - 0 fixed = 15 total (was 12) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}173m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSchedulingRequestUpdate | | | hadoop.yarn.server.resourcemanager.TestCapacitySchedulerMetrics | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8738 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948899/YARN-8738.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9a82e2bbed9d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c747830 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Commented] (YARN-8962) Add ability to use interactive shell with normal yarn container
[ https://issues.apache.org/jira/browse/YARN-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693779#comment-16693779 ] Hadoop QA commented on YARN-8962: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 55s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 7 new + 7 unchanged - 0 fixed = 14 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 20s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 71m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8962 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946747/YARN-8962.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 021558b1c86b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 49824ed | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | cc |
[jira] [Created] (YARN-9040) LevelDBCacheTimelineStore in ATS 1.5 leaks native memory
Tarun Parimi created YARN-9040: -- Summary: LevelDBCacheTimelineStore in ATS 1.5 leaks native memory Key: YARN-9040 URL: https://issues.apache.org/jira/browse/YARN-9040 Project: Hadoop YARN Issue Type: Bug Components: timelineserver Affects Versions: 2.8.0 Reporter: Tarun Parimi Assignee: Tarun Parimi When LevelDBCacheTimelineStore from YARN-4219 is used as the ATS 1.5 entity caching storage, we observe a native memory leak caused by leveldb files, even after the fix of YARN-5368. Top output shows 0.024TB (25GB) RES, even though the heap size is only 8GB. {code:java} PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 25519 yarn 20 0 33.024g 0.024t 41468 S 6.2 26.0 21:07.39 /usr/java/default/bin/java -Dproc_timelineserver -Xmx8192m {code} lsof shows many open timeline-cache.ldb files still referenced by ATS even though they have been deleted (DEL): they are no longer present when listing the directory. {code:java} java 25519 yarn DEL REG 253,28 9438452 /var/yarn/timeline/timelineEntityGroupId_1542280269959_55569_dag_1542280269959_55569_2-timeline-cache.ldb/07.sst java 25519 yarn DEL REG 253,28 9438438 /var/yarn/timeline/timelineEntityGroupId_1542280269959_55569_dag_1542280269959_55569_2-timeline-cache.ldb/07.sst java 25519 yarn DEL REG 253,28 9438437 /var/yarn/timeline/timelineEntityGroupId_1542280269959_55569_dag_1542280269959_55569_2-timeline-cache.ldb/05.sst {code} It looks like LevelDBCacheTimelineStore is not closing these files because the LevelDB DBIterator is never closed.
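The leak described above is the classic "iterator holds native resources" pattern: a LevelDB DBIterator pins SST file handles until it is closed, so deleted files stay on disk. The fix shape is to iterate inside try-with-resources; the DbIterator class below is a stand-in that counts open handles, not the real leveldbjni API:

```java
import java.util.Iterator;
import java.util.List;

public class Main {
    static int openHandles = 0;

    // Stand-in for a DB iterator that holds native resources (file handles)
    // until close() is called, like leveldbjni's DBIterator.
    static class DbIterator implements AutoCloseable, Iterator<String> {
        private final Iterator<String> it;

        DbIterator(List<String> rows) {
            it = rows.iterator();
            openHandles++; // "opens" a native handle
        }

        public boolean hasNext() { return it.hasNext(); }
        public String next() { return it.next(); }
        public void close() { openHandles--; } // releases the handle
    }

    public static void main(String[] args) {
        List<String> rows = List.of("entity1", "entity2");
        // Leaky form: creating a DbIterator and abandoning it after the loop
        // would leave openHandles > 0 forever, pinning deleted SST files.
        // Fixed form: try-with-resources guarantees close() runs.
        try (DbIterator it = new DbIterator(rows)) {
            while (it.hasNext()) System.out.println(it.next());
        }
        if (openHandles != 0)
            throw new AssertionError("leaked " + openHandles + " handles");
        System.out.println("handles=" + openHandles);
    }
}
```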
[jira] [Updated] (YARN-8279) AggregationLogDeletionService does not honor yarn.log-aggregation.IndexedFormat.remote-app-log-dir-suffix
[ https://issues.apache.org/jira/browse/YARN-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tarun Parimi updated YARN-8279: --- Release Note: This issue occurs only when IndexedFormat is used for log aggregation, i.e. when the below properties are set: yarn.log-aggregation.file-formats=IndexedFormat yarn.log-aggregation.file-controller.IndexedFormat.class=org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController The fix is to set yarn.log-aggregation.IndexedFormat.remote-app-log-dir-suffix and yarn.nodemanager.remote-app-log-dir-suffix to the same value "logs-ifile" in yarn-site.xml, then restart the MapReduce HistoryServer (which serves the AggregationLogDeletionService) and the YARN services. was: This issue affects only when IndexedFormat is used for log-aggregation by setting below properties: yarn.log-aggregation.file-formats=IndexedFormat yarn.log-aggregation.file-controller.IndexedFormat.class=org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController The fix is to set yarn.log-aggregation.IndexedFormat.remote-app-log-dir-suffix and yarn.nodemanager.remote-app-log-dir-suffix to same value "logs-ifile" and then restart Mapreduce HistoryServer which serves AggregationLogDeletionService > AggregationLogDeletionService does not honor > yarn.log-aggregation.IndexedFormat.remote-app-log-dir-suffix > - > > Key: YARN-8279 > URL: https://issues.apache.org/jira/browse/YARN-8279 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation >Affects Versions: 2.9.1 >Reporter: Prabhu Joseph >Assignee: Tarun Parimi >Priority: Major > > AggregationLogDeletionService does not honor > yarn.log-aggregation.IndexedFormat.remote-app-log-dir-suffix. > AggregationLogService writes the logs into /app-logs//logs-ifile > whereas AggregationLogDeletion tries to delete from > /app-logs//logs. 
> Workaround is to set > yarn.log-aggregation.IndexedFormat.remote-app-log-dir-suffix and > yarn.nodemanager.remote-app-log-dir-suffix to the same value "logs-ifile" and > restart the HistoryServer which serves AggregationLogDeletionService. > AggregationLogDeletionService has to check the format and based upon that > choose the suffix. Currently it only checks the older suffix > yarn.nodemanager.remote-app-log-dir-suffix. > AggregatedLogDeletionService tries to delete the older suffix directory. > {code} > 2018-05-11 08:48:19,989 ERROR logaggregation.AggregatedLogDeletionService > (AggregatedLogDeletionService.java:logIOException(182)) - Could not read the > contents of hdfs://prabhucluster:8020/app-logs/hive/logs > java.io.FileNotFoundException: File > hdfs://prabhucluster:8020/app-logs/hive/logs does not exist. > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:923) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:985) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:981) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:992) > at > org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService$LogDeletionTask.deleteOldLogDirsFrom(AggregatedLogDeletionService.java:98) > at > org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService$LogDeletionTask.run(AggregatedLogDeletionService.java:85) > at java.util.TimerThread.mainLoop(Timer.java:555) > at java.util.TimerThread.run(Timer.java:505) > {code}
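The release note's workaround amounts to the following yarn-site.xml fragment (property names taken from the note; "logs-ifile" is the suffix value it prescribes):

```xml
<!-- yarn-site.xml: align both suffix properties so the writer and the
     deletion service look at the same remote log directory. -->
<property>
  <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
  <value>logs-ifile</value>
</property>
<property>
  <name>yarn.log-aggregation.IndexedFormat.remote-app-log-dir-suffix</name>
  <value>logs-ifile</value>
</property>
```

After changing these, the MapReduce HistoryServer and YARN services need a restart for the deletion service to pick up the new suffix.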
[jira] [Commented] (YARN-8838) Add security check for container user is same as websocket user
[ https://issues.apache.org/jira/browse/YARN-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693695#comment-16693695 ] Hudson commented on YARN-8838: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15476 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15476/]) YARN-8838. Check that container user is same as websocket user for (billie: rev 49824ed260d31350d9b836a4c31319e2b3501dd0) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMContainerWebSocket.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerShellWebSocket.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java > Add security check for container user is same as websocket user > --- > > Key: YARN-8838 > URL: https://issues.apache.org/jira/browse/YARN-8838 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Major > Labels: docker > Fix For: 3.3.0 > > Attachments: YARN-8838.001.patch, YARN-8838.002.patch, > YARN-8838.003.patch, YARN-8838.004.patch, YARN-8838.005.patch, > YARN-8838.006.patch, YARN-8838.007.patch > > > When a user is authenticated via the SPNEGO entry point, the node manager must verify > that the remote user is the same as the container user before starting the web socket > session. One possible solution is to verify that the web request user matches the > yarn container local directory owner during onWebSocketConnect.
[jira] [Commented] (YARN-8838) Add security check for container user is same as websocket user
[ https://issues.apache.org/jira/browse/YARN-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693673#comment-16693673 ] Billie Rinaldi commented on YARN-8838: -- +1 for patch 7. This fixes the whitespace and line length checkstyle issues from patch 6. The TestAMRMClient test failure appears to be due to a flaky test (see YARN-6272). Thanks, [~eyang]! > Add security check for container user is same as websocket user > --- > > Key: YARN-8838 > URL: https://issues.apache.org/jira/browse/YARN-8838 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Major > Labels: docker > Attachments: YARN-8838.001.patch, YARN-8838.002.patch, > YARN-8838.003.patch, YARN-8838.004.patch, YARN-8838.005.patch, > YARN-8838.006.patch, YARN-8838.007.patch > > > When a user is authenticated via the SPNEGO entry point, the node manager must verify > that the remote user is the same as the container user before starting the web socket > session. One possible solution is to verify that the web request user matches the > yarn container local directory owner during onWebSocketConnect. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
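The committed check boils down to comparing the authenticated websocket user against the container's owner before accepting the session. A minimal sketch with a hypothetical helper — this is not the actual ContainerShellWebSocket code, just the shape of the comparison:

```java
// Illustrative sketch only: allow a websocket session only when the
// authenticated remote user is the same as the container user.
public class WebSocketUserCheck {
    /**
     * Returns true when the remote user owns the container. A null remote
     * user (unauthenticated request) is always rejected.
     */
    public static boolean mayConnect(String remoteUser, String containerUser) {
        return remoteUser != null && remoteUser.equals(containerUser);
    }
}
```

In the real patch the container user would come from something like the container's local directory ownership, as the issue description suggests; the helper above only shows the equality gate.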
[jira] [Commented] (YARN-8838) Add security check for container user is same as websocket user
[ https://issues.apache.org/jira/browse/YARN-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693657#comment-16693657 ] Hadoop QA commented on YARN-8838: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 43s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 28s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 14s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}130m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8838 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948886/YARN-8838.007.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8a1fbacd3e3c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 10b5da8 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | unit |
[jira] [Commented] (YARN-9016) DocumentStore as a backend for ATSv2
[ https://issues.apache.org/jira/browse/YARN-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693633#comment-16693633 ] Hadoop QA commented on YARN-9016: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-assemblies hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 1s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-assemblies hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s{color} | {color:green} hadoop-assemblies in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}148m 38s{color} | {color:red} hadoop-yarn-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s{color} | {color:green} hadoop-yarn-server-timelineservice-documentstore in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 47s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}264m
[jira] [Resolved] (YARN-9026) DefaultOOMHandler should mark preempted containers as killed
[ https://issues.apache.org/jira/browse/YARN-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen resolved YARN-9026. -- Resolution: Invalid > DefaultOOMHandler should mark preempted containers as killed > > > Key: YARN-9026 > URL: https://issues.apache.org/jira/browse/YARN-9026 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 3.2.1 >Reporter: Haibo Chen >Priority: Major > > DefaultOOMHandler today kills a selected container by sending a kill -9 signal > to all processes running within the container cgroup. > The container would exit with a non-zero code, and hence be treated as a failure > by ContainerLaunch threads. > We should instead mark the containers as killed. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9026) DefaultOOMHandler should mark preempted containers as killed
[ https://issues.apache.org/jira/browse/YARN-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693638#comment-16693638 ] Haibo Chen commented on YARN-9026: -- Ah, I was not aware of this. Indeed, this is already taken care of today. Thanks for pointing this out, [~tangzhankun]! Closing this Jira as invalid. > DefaultOOMHandler should mark preempted containers as killed > > > Key: YARN-9026 > URL: https://issues.apache.org/jira/browse/YARN-9026 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 3.2.1 >Reporter: Haibo Chen >Priority: Major > > DefaultOOMHandler today kills a selected container by sending a kill -9 signal > to all processes running within the container cgroup. > The container would exit with a non-zero code, and hence be treated as a failure > by ContainerLaunch threads. > We should instead mark the containers as killed. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693615#comment-16693615 ] Keqiu Hu commented on YARN-9036: Uploaded [YARN-9036.002.patch|https://issues.apache.org/jira/secure/attachment/12948900/YARN-9036.003.patch] to rebase trunk > Escape newlines in health report in YARN UI > --- > > Key: YARN-9036 > URL: https://issues.apache.org/jira/browse/YARN-9036 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Keqiu Hu >Priority: Major > Attachments: YARN-9036.001.patch, YARN-9036.002.patch, > YARN-9036.003.patch > > > NodesPage prints health report info in the UI in a javascript string. If > health report contains newlines it will garble the generated code and the > list of nodes cannot be rendered. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693615#comment-16693615 ] Keqiu Hu edited comment on YARN-9036 at 11/20/18 6:16 PM: -- Uploaded [YARN-9036.003.patch|https://issues.apache.org/jira/secure/attachment/12948900/YARN-9036.003.patch] to rebase trunk was (Author: oliverhuh...@gmail.com): Uploaded [YARN-9036.002.patch|https://issues.apache.org/jira/secure/attachment/12948900/YARN-9036.003.patch] to rebase trunk > Escape newlines in health report in YARN UI > --- > > Key: YARN-9036 > URL: https://issues.apache.org/jira/browse/YARN-9036 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Keqiu Hu >Priority: Major > Attachments: YARN-9036.001.patch, YARN-9036.002.patch, > YARN-9036.003.patch > > > NodesPage prints health report info in the UI in a javascript string. If > health report contains newlines it will garble the generated code and the > list of nodes cannot be rendered. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keqiu Hu updated YARN-9036: --- Attachment: YARN-9036.003.patch > Escape newlines in health report in YARN UI > --- > > Key: YARN-9036 > URL: https://issues.apache.org/jira/browse/YARN-9036 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Keqiu Hu >Priority: Major > Attachments: YARN-9036.001.patch, YARN-9036.002.patch, > YARN-9036.003.patch > > > NodesPage prints health report info in the UI in a javascript string. If > health report contains newlines it will garble the generated code and the > list of nodes cannot be rendered. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693602#comment-16693602 ] Hadoop QA commented on YARN-9036: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} YARN-9036 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-9036 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948897/YARN-9036.002.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/22637/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Escape newlines in health report in YARN UI > --- > > Key: YARN-9036 > URL: https://issues.apache.org/jira/browse/YARN-9036 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Keqiu Hu >Priority: Major > Attachments: YARN-9036.001.patch, YARN-9036.002.patch > > > NodesPage prints health report info in the UI in a javascript string. If > health report contains newlines it will garble the generated code and the > list of nodes cannot be rendered. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
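The fix discussed in the YARN-9036 thread amounts to escaping newline characters before the health report is embedded in a JavaScript string literal. A minimal sketch with a hypothetical helper — this is not the actual NodesPage code, just the escaping it needs:

```java
// Illustrative sketch only: escape a node health report so it can be
// embedded safely inside a JavaScript string literal. Raw \n or \r would
// terminate the literal mid-string and garble the generated page code.
public class HealthReportEscaper {
    public static String escapeForJs(String report) {
        if (report == null) {
            return "";
        }
        return report.replace("\\", "\\\\")  // escape backslashes first
                     .replace("\"", "\\\"")  // then quotes
                     .replace("\r", "\\r")   // then carriage returns
                     .replace("\n", "\\n");  // then newlines
    }
}
```

Ordering matters: the backslash replacement must run first, otherwise the backslashes introduced by the later replacements would be doubled again.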
[jira] [Comment Edited] (YARN-8738) FairScheduler configures maxResources or minResources as negative, the value parse to a positive number.
[ https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693598#comment-16693598 ] Szilard Nemeth edited comment on YARN-8738 at 11/20/18 6:00 PM: Thanks [~wilfreds] for having an initial look at this! Yep, something went wrong with the build. Uploaded a new patch that fixes the checkstyle issues. Hopefully the next build will be fine. was (Author: snemeth): Thanks [~wilfreds] for the review! Yep, something went wrong with the build. Uploaded a new patch that fixes the checkstyle issues. Hopefully the next build will be fine. > FairScheduler configures maxResources or minResources as negative, the value > parse to a positive number. > > > Key: YARN-8738 > URL: https://issues.apache.org/jira/browse/YARN-8738 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.2.0 >Reporter: Sen Zhao >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-8738.001.patch, YARN-8738.002.patch > > > If maxResources or minResources is configured as a negative number, the value > will become positive after parsing. > If this is a problem, I will fix it. If not, > FairSchedulerConfiguration#parseNewStyleResource should parse negative numbers > the same way as parseOldStyleResource does. > cc:[~templedf], [~leftnoteasy] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-8738) FairScheduler configures maxResources or minResources as negative, the value parse to a positive number.
[ https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693598#comment-16693598 ] Szilard Nemeth commented on YARN-8738: -- Thanks [~wilfreds] for the review! Yep, something went wrong with the build. Uploaded a new patch that fixes the checkstyle issues. Hopefully the next build will be fine. > FairScheduler configures maxResources or minResources as negative, the value > parse to a positive number. > > > Key: YARN-8738 > URL: https://issues.apache.org/jira/browse/YARN-8738 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.2.0 >Reporter: Sen Zhao >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-8738.001.patch, YARN-8738.002.patch > > > If maxResources or minResources is configured as a negative number, the value > will become positive after parsing. > If this is a problem, I will fix it. If not, > FairSchedulerConfiguration#parseNewStyleResource should parse negative numbers > the same way as parseOldStyleResource does. > cc:[~templedf], [~leftnoteasy] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-8738) FairScheduler configures maxResources or minResources as negative, the value parse to a positive number.
[ https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-8738: - Attachment: YARN-8738.002.patch > FairScheduler configures maxResources or minResources as negative, the value > parse to a positive number. > > > Key: YARN-8738 > URL: https://issues.apache.org/jira/browse/YARN-8738 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.2.0 >Reporter: Sen Zhao >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-8738.001.patch, YARN-8738.002.patch > > > If maxResources or minResources is configured as a negative number, the value > will become positive after parsing. > If this is a problem, I will fix it. If not, > FairSchedulerConfiguration#parseNewStyleResource should parse negative numbers > the same way as parseOldStyleResource does. > cc:[~templedf], [~leftnoteasy] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
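The behavior change discussed in the YARN-8738 thread is that negative resource values should be rejected instead of silently becoming positive, so the new-style parser matches the old-style one. A minimal sketch under that assumption — the class and method below are hypothetical, not the actual FairSchedulerConfiguration code:

```java
// Illustrative sketch only: parse a single resource value string and
// reject negatives rather than coercing them to a positive number.
public class ResourceValueParser {
    /**
     * Parses a resource value, throwing on negative input so that a
     * misconfigured maxResources/minResources fails fast instead of
     * being silently interpreted as a positive value.
     */
    public static long parseValue(String raw) {
        long v = Long.parseLong(raw.trim());
        if (v < 0) {
            throw new IllegalArgumentException(
                "Negative resource value not allowed: " + raw);
        }
        return v;
    }
}
```

Failing fast here surfaces the configuration mistake at scheduler start-up, which is the symmetry with parseOldStyleResource that the issue description asks for.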
[jira] [Commented] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693587#comment-16693587 ] Keqiu Hu commented on YARN-9036: Uploaded YARN-9036.002.patch to make the fix more robust. > Escape newlines in health report in YARN UI > --- > > Key: YARN-9036 > URL: https://issues.apache.org/jira/browse/YARN-9036 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Keqiu Hu >Priority: Major > Attachments: YARN-9036.001.patch, YARN-9036.002.patch > > > NodesPage prints health report info in the UI in a javascript string. If > health report contains newlines it will garble the generated code and the > list of nodes cannot be rendered. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9036) Escape newlines in health report in YARN UI
[ https://issues.apache.org/jira/browse/YARN-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keqiu Hu updated YARN-9036: --- Attachment: YARN-9036.002.patch > Escape newlines in health report in YARN UI > --- > > Key: YARN-9036 > URL: https://issues.apache.org/jira/browse/YARN-9036 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Keqiu Hu >Priority: Major > Attachments: YARN-9036.001.patch, YARN-9036.002.patch > > > NodesPage prints health report info in the UI in a javascript string. If > health report contains newlines it will garble the generated code and the > list of nodes cannot be rendered. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693584#comment-16693584 ] Eric Yang commented on YARN-8986: - [~Charo Zhang] add_ports_mapping_to_command assumes that network_name is always supplied in the .cmd file. However, network_name might not exist, causing docker network inspect to fail. This is what happened in the failed [unit tests|https://builds.apache.org/job/PreCommit-YARN-Build/22634/testReport/]. When network_name is unspecified, Docker's default network is bridge. You may want to make that logic part of add_ports_mapping_to_command. Thanks > publish all exposed ports to random ports when using bridge network > --- > > Key: YARN-8986 > URL: https://issues.apache.org/jira/browse/YARN-8986 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.1.1 >Reporter: Charo Zhang >Assignee: Charo Zhang >Priority: Minor > Labels: Docker > Attachments: YARN-8986.001.patch, YARN-8986.002.patch, > YARN-8986.003.patch, YARN-8986.004.patch > > > it's better to publish all exposed ports to random ports (-P) or support port > mapping (-p) when using a bridge network for the docker container. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
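The fallback Eric describes can be sketched as follows. The real logic lives in the C container-executor's add_ports_mapping_to_command; the Java helper below is a hypothetical illustration of the decision only: when no network name is supplied, treat it as Docker's default bridge network and still emit `-P`.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: decide which port-publishing flags to add to
// a docker run command when the network name may be missing.
public class DockerRunArgs {
    public static List<String> portArgs(String networkName, boolean publishAll) {
        // Missing network name means Docker's default network: bridge.
        String effective = (networkName == null || networkName.isEmpty())
            ? "bridge" : networkName;
        List<String> args = new ArrayList<>();
        // -P only makes sense on a bridge-type network; host networking
        // exposes container ports directly and needs no publishing.
        if ("bridge".equals(effective) && publishAll) {
            args.add("-P"); // publish all exposed ports to random host ports
        }
        return args;
    }
}
```

This gracefully handles the unit-test environment where `docker network ls`/`docker network inspect` is unavailable, since no inspection is needed to fall back to the bridge default.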
[jira] [Commented] (YARN-8586) Extract log aggregation related fields and methods from RMAppImpl
[ https://issues.apache.org/jira/browse/YARN-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693565#comment-16693565 ] Szilard Nemeth commented on YARN-8586: -- The UT failure seems unrelated; this is ready for review. > Extract log aggregation related fields and methods from RMAppImpl > - > > Key: YARN-8586 > URL: https://issues.apache.org/jira/browse/YARN-8586 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-8586.001.patch, YARN-8586.002.patch, > YARN-8586.002.patch > > > Given that RMAppImpl is already above 2000 lines and very complex, as a > simple and straightforward step, all log aggregation related fields and methods > could be extracted to a new class. > The clients of RMAppImpl may access the same methods, and RMAppImpl would > delegate all those calls to the newly introduced class. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9024) ClusterNodeTracker maximum allocation does not respect resource units
[ https://issues.apache.org/jira/browse/YARN-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693571#comment-16693571 ] Szilard Nemeth commented on YARN-9024: -- I hadn't noticed YARN-7159; that jira is about moving resource unit conversions to the PB layer. So the failure and the unit testcase itself do not make much sense. Closing this jira. > ClusterNodeTracker maximum allocation does not respect resource units > - > > Key: YARN-9024 > URL: https://issues.apache.org/jira/browse/YARN-9024 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-9024.001.patch > > > If a custom resource is defined with a default unit value (base unit) and a > node reports its total capability in a different unit (e.g. M), then > {{ClusterNodeTracker.getMaxAllowedAllocation}} returns the max allocation > resource in the base unit, so the reported resource unit is not respected. > The issue is that when the {{updateMaxResources}} method is called (i.e. an NM node > is registered), the unit of the node's resources is not checked. In this > method, we need to convert the reported value to the unit defined by the RM for > the individual resource types. > I also wanted to add a testcase where memory has G as its unit, but that was > not easily possible without hacky code, so I only added a testcase that > verifies custom resource values. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
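The conversion the YARN-9024 description calls for can be sketched as follows: normalize a node-reported value to the RM's base unit before comparing or storing allocations. The helper and its decimal multiplier table are illustrative, not Hadoop's actual UnitsConversionUtil:

```java
// Illustrative sketch only: convert a node-reported resource value in a
// unit such as "M" or "G" to base units, so that updateMaxResources can
// compare values that were reported in different units.
public class UnitNormalizer {
    public static long toBaseUnits(long value, String unit) {
        switch (unit) {
            case "":  return value;                           // already base units
            case "k": return value * 1000L;
            case "M": return value * 1000L * 1000L;
            case "G": return value * 1000L * 1000L * 1000L;
            default:
                throw new IllegalArgumentException("Unknown unit: " + unit);
        }
    }
}
```

With this normalization applied at node registration, getMaxAllowedAllocation would report a value consistent with the unit the RM defines for each resource type, regardless of the unit each node happens to report in.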
[jira] [Commented] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests
[ https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693562#comment-16693562 ] Szilard Nemeth commented on YARN-5106: -- Hi [~tangzhankun]! Thanks for the review! The unit test issue is still there for the same class and testcase. I don't have any further ideas about why running it on Jenkins (Linux) would behave differently; for me, the test passes on Mac OS. [~wilfreds], [~tangzhankun]: I know it should be my responsibility to fix the test, but maybe you have some ideas to add. Thanks! > Provide a builder interface for FairScheduler allocations for use in tests > -- > > Key: YARN-5106 > URL: https://issues.apache.org/jira/browse/YARN-5106 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Szilard Nemeth >Priority: Major > Labels: newbie++ > Attachments: YARN-5106.001.patch, YARN-5106.002.patch, > YARN-5106.003.patch, YARN-5106.004.patch > > > Most, if not all, fair scheduler tests create an allocations XML file. Having > a helper class that potentially uses a builder would make the tests cleaner. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693507#comment-16693507 ] Hadoop QA commented on YARN-8986: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 32s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | TEST-cetest | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8986 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948880/YARN-8986.004.patch | | Optional Tests | dupname asflicense
[jira] [Updated] (YARN-8838) Add security check for container user is same as websocket user
[ https://issues.apache.org/jira/browse/YARN-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-8838: - Attachment: YARN-8838.007.patch > Add security check for container user is same as websocket user > --- > > Key: YARN-8838 > URL: https://issues.apache.org/jira/browse/YARN-8838 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Major > Labels: docker > Attachments: YARN-8838.001.patch, YARN-8838.002.patch, > YARN-8838.003.patch, YARN-8838.004.patch, YARN-8838.005.patch, > YARN-8838.006.patch, YARN-8838.007.patch > > > When a user is authenticated via the SPNEGO entry point, the node manager must verify > that the remote user is the same as the container user before starting the web socket > session. One possible solution is to verify that the web request user matches the > yarn container local directory owner during onWebSocketConnect.
[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693392#comment-16693392 ] Charo Zhang commented on YARN-8986: --- [~eyang] A “ports-mapping” test has been added to TestDockerRunCommand.java in patch 004. > publish all exposed ports to random ports when using bridge network > --- > > Key: YARN-8986 > URL: https://issues.apache.org/jira/browse/YARN-8986 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.1.1 >Reporter: Charo Zhang >Assignee: Charo Zhang >Priority: Minor > Labels: Docker > Attachments: YARN-8986.001.patch, YARN-8986.002.patch, > YARN-8986.003.patch, YARN-8986.004.patch > > > it's better to publish all exposed ports to random ports (-P) or support port > mapping (-p) when using a bridge network for a docker container.
[jira] [Updated] (YARN-8986) publish all exposed ports to random ports when using bridge network
[ https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Charo Zhang updated YARN-8986: -- Attachment: YARN-8986.004.patch > publish all exposed ports to random ports when using bridge network > --- > > Key: YARN-8986 > URL: https://issues.apache.org/jira/browse/YARN-8986 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.1.1 >Reporter: Charo Zhang >Assignee: Charo Zhang >Priority: Minor > Labels: Docker > Attachments: YARN-8986.001.patch, YARN-8986.002.patch, > YARN-8986.003.patch, YARN-8986.004.patch > > > it's better to publish all exposed ports to random ports (-P) or support port > mapping (-p) when using a bridge network for a docker container.
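[Editor's note] To make the -p/-P behavior under discussion in this thread concrete: when --net is not passed, Docker falls back to the bridge network, so port publishing should still apply. The following is an illustrative stand-alone sketch of that argument-building logic, not the actual DockerRunCommand code in the patches:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: builds a docker run argument list, publishing ports
// when the network is bridge or unspecified (Docker's default is bridge).
// This mirrors the -p/-P discussion above; it is not the real YARN API.
public class DockerRunArgs {
    public static List<String> build(String network, List<String> portMappings) {
        List<String> args = new ArrayList<>();
        args.add("run");
        if (network != null) {
            args.add("--net=" + network);
        }
        // No --net means Docker uses the bridge network, so -p/-P still apply.
        boolean bridge = network == null || network.equals("bridge");
        if (bridge) {
            if (portMappings == null || portMappings.isEmpty()) {
                args.add("-P");          // publish all exposed ports to random host ports
            } else {
                for (String m : portMappings) {
                    args.add("-p");      // explicit host:container mapping
                    args.add(m);
                }
            }
        }
        return args;
    }

    public static void main(String[] args) {
        System.out.println(build(null, null));
    }
}
```

Handling the null-network case this way is what allows the unit tests to pass on Jenkins hosts where `docker network ls` fails, while keeping Docker's default bridge behavior.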
[jira] [Commented] (YARN-8586) Extract log aggregation related fields and methods from RMAppImpl
[ https://issues.apache.org/jira/browse/YARN-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693373#comment-16693373 ] Hadoop QA commented on YARN-8586: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 29s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 102 unchanged - 10 fixed = 104 total (was 112) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 5s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}159m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-8586 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948865/YARN-8586.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 285a0f6db41b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c946f1b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22630/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit |
[jira] [Commented] (YARN-9039) App ACLs are not validated when serving logs from Logs CLI/Yarn UI2
[ https://issues.apache.org/jira/browse/YARN-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693368#comment-16693368 ] Hadoop QA commented on YARN-9039: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 7 new + 32 unchanged - 0 fixed = 39 total (was 32) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 1s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 33s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 21s{color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | | Nullcheck of indexedLogsMeta at line 527 of value previously dereferenced in org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.readAggregatedLogs(ContainerLogsRequest, OutputStream) At LogAggregationIndexedFileController.java:527 of value previously dereferenced in org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.readAggregatedLogs(ContainerLogsRequest, OutputStream) At LogAggregationIndexedFileController.java:[line 527] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9039 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948872/YARN-9039.1.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname
[jira] [Updated] (YARN-9030) Log aggregation changes to handle filesystems which do not support permissions
[ https://issues.apache.org/jira/browse/YARN-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suma Shivaprasad updated YARN-9030: --- Description: Some cloud storages like ADLS do not support permissions in which case they throw an UnsupportedOperationException. Log aggregation code should log/ignore these exceptions and not set permissions henceforth for log aggregation base dir/sub dirs {noformat} 2018-11-12 15:37:28,726 WARN logaggregation.LogAggregationService (LogAggregationService.java:initApp(209)) - Application failed to init aggregation org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to check permissions for dir [abfs://testc...@test.blob.core.windows.net/app-logs] at org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController.verifyAndCreateRemoteLogDir(LogAggregationFileController.java:277) at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initAppAggregator(LogAggregationService.java:238) at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initApp(LogAggregationService.java:204) at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:347) at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:69) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126) at java.lang.Thread.run(Thread.java:748) {noformat} was: Some cloud storages like ADLS do not support permissions in which case they throw an UnsupportedOperationException. 
Log aggregation should hanlde these case and not set permissions for log aggregation base dir/ sub dirs {noformat} 2018-11-12 15:37:28,726 WARN logaggregation.LogAggregationService (LogAggregationService.java:initApp(209)) - Application failed to init aggregation org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to check permissions for dir [abfs://testc...@test.blob.core.windows.net/app-logs] at org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController.verifyAndCreateRemoteLogDir(LogAggregationFileController.java:277) at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initAppAggregator(LogAggregationService.java:238) at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initApp(LogAggregationService.java:204) at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:347) at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:69) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126) at java.lang.Thread.run(Thread.java:748) {noformat} > Log aggregation changes to handle filesystems which do not support permissions > -- > > Key: YARN-9030 > URL: https://issues.apache.org/jira/browse/YARN-9030 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Major > Attachments: YARN-9030.1.patch, YARN-9030.2.patch > > > Some cloud storages like ADLS do not support permissions in which case they > throw an UnsupportedOperationException. 
Log aggregation code should > log/ignore these exceptions and not set permissions henceforth for log > aggregation base dir/sub dirs > {noformat} > 2018-11-12 15:37:28,726 WARN logaggregation.LogAggregationService > (LogAggregationService.java:initApp(209)) - Application failed to init > aggregation > org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to check > permissions for dir [abfs://testc...@test.blob.core.windows.net/app-logs] > at > org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController.verifyAndCreateRemoteLogDir(LogAggregationFileController.java:277) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initAppAggregator(LogAggregationService.java:238) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initApp(LogAggregationService.java:204) > at >
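[Editor's note] The graceful handling proposed in YARN-9030 can be sketched as follows. This is an assumed shape, not the actual LogAggregationFileController change: catch UnsupportedOperationException on the first permission call and skip permission handling for that filesystem from then on.

```java
// Sketch only: tolerate filesystems such as ADLS/ABFS that throw
// UnsupportedOperationException for permission calls, and remember the
// failure so later permission operations are skipped rather than retried.
public class PermissionAwareDirSetup {
    private boolean fsSupportsPermissions = true;

    interface PermissionOp { void run(); }

    public void setPermissionIfSupported(PermissionOp op) {
        if (!fsSupportsPermissions) {
            return; // a previous call failed; skip permissions on this filesystem
        }
        try {
            op.run();
        } catch (UnsupportedOperationException e) {
            // Log/ignore once, then stop attempting permission calls.
            fsSupportsPermissions = false;
        }
    }

    public boolean supportsPermissions() { return fsSupportsPermissions; }

    public static void main(String[] args) {
        PermissionAwareDirSetup setup = new PermissionAwareDirSetup();
        setup.setPermissionIfSupported(() -> {
            throw new UnsupportedOperationException("setPermission not supported");
        });
        System.out.println("supportsPermissions=" + setup.supportsPermissions());
    }
}
```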
[jira] [Commented] (YARN-9034) ApplicationCLI should have option to take clusterId
[ https://issues.apache.org/jira/browse/YARN-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693327#comment-16693327 ] Hadoop QA commented on YARN-9034: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 48s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 20s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 7 new + 283 unchanged - 0 fixed = 290 total (was 283) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 1 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 20s{color} | {color:green} hadoop-yarn-client in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 82m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9034 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948868/YARN-9034.02.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c233743e640e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c946f1b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22631/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/22631/artifact/out/whitespace-tabs.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22631/testReport/ | | Max. process+thread count | 683 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U:
[jira] [Commented] (YARN-8738) FairScheduler configures maxResources or minResources as negative, the value parse to a positive number.
[ https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693316#comment-16693316 ] Wilfred Spiegelenburg commented on YARN-8738: - Thank you for the patch [~snemeth]. The build has had some issues; the findbugs failure seems to be a resource issue, but it would be good to get a full build run on it again. I cannot find the failure for the shaded client either. Since there are some checkstyle issues, you could get two things in one go: fix the new checkstyle issues and get the build re-run with a new patch. > FairScheduler configures maxResources or minResources as negative, the value > parse to a positive number. > > > Key: YARN-8738 > URL: https://issues.apache.org/jira/browse/YARN-8738 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.2.0 >Reporter: Sen Zhao >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-8738.001.patch > > > If maxResources or minResources is configured as a negative number, the value > will be positive after parsing. > If this is a problem, I will fix it. If not, > FairSchedulerConfiguration#parseNewStyleResource should parse negative numbers > the same way as parseOldStyleResource does. > cc:[~templedf], [~leftnoteasy]
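[Editor's note] A minimal sketch of the validation YARN-8738 asks about, assuming a simplified "mb"-suffixed value format (parseMemoryMb is a hypothetical helper, not FairSchedulerConfiguration's API): reject negative values instead of silently turning them positive.

```java
// Sketch: parse a memory value like "1024 mb" and refuse negative numbers,
// rather than letting them be flipped to a positive value during parsing.
public class ResourceParse {
    public static int parseMemoryMb(String value) {
        int mb = Integer.parseInt(value.trim().replace("mb", "").trim());
        if (mb < 0) {
            throw new IllegalArgumentException(
                "maxResources/minResources must be non-negative: " + value);
        }
        return mb;
    }

    public static void main(String[] args) {
        System.out.println(parseMemoryMb("1024 mb"));
    }
}
```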
[jira] [Created] (YARN-9039) App ACLs are not validated when serving logs from Logs CLI/Yarn UI2
Suma Shivaprasad created YARN-9039: -- Summary: App ACLs are not validated when serving logs from Logs CLI/Yarn UI2 Key: YARN-9039 URL: https://issues.apache.org/jira/browse/YARN-9039 Project: Hadoop YARN Issue Type: Bug Components: log-aggregation Reporter: Suma Shivaprasad Assignee: Suma Shivaprasad Attachments: YARN-9039.1.patch App Acls are not being validated when serving logs through YARN CLI. This also applies while serving logs through YARN UIV2 through ATSV2 Log Webservice
[jira] [Commented] (YARN-9025) Make TestFairScheduler#testChildMaxResources more reliable, as it is flaky now
[ https://issues.apache.org/jira/browse/YARN-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693298#comment-16693298 ] Wilfred Spiegelenburg commented on YARN-9025: - Change looks good; we discussed the test off-line and the best approach to make it more stable. The test failure is due to the known ZK issue, which is already logged. +1 (non-binding) > Make TestFairScheduler#testChildMaxResources more reliable, as it is flaky now > -- > > Key: YARN-9025 > URL: https://issues.apache.org/jira/browse/YARN-9025 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-9025.001.patch > > > While making the code patch for YARN-8059, I came across a flaky test, see > this link: > https://builds.apache.org/job/PreCommit-YARN-Build/22412/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt > This is the error message: > {code:java} > [ERROR] Tests run: 108, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: > 19.37 s <<< FAILURE! - in > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler > [ERROR] > testChildMaxResources(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler) > Time elapsed: 0.164 s <<< FAILURE! > java.lang.AssertionError: App 1 is not running with the correct number of > containers expected:<2> but was:<0> > at org.junit.Assert.fail(Assert.java:88){code} > So the thing is, even if we had 8 node updates, due to the nature of how we > handle the events, it can happen that no container is allocated for the > application.
[jira] [Updated] (YARN-9039) App ACLs are not validated when serving logs from Logs CLI/Yarn UI2
[ https://issues.apache.org/jira/browse/YARN-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suma Shivaprasad updated YARN-9039: --- Attachment: YARN-9039.1.patch > App ACLs are not validated when serving logs from Logs CLI/Yarn UI2 > --- > > Key: YARN-9039 > URL: https://issues.apache.org/jira/browse/YARN-9039 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Critical > Attachments: YARN-9039.1.patch > > > App Acls are not being validated when serving logs through YARN CLI. > This also applies while serving logs through YARN UIV2 through ATSV2 Log > Webservice
[jira] [Commented] (YARN-9019) Ratio calculation of ResourceCalculator implementations could return NaN
[ https://issues.apache.org/jira/browse/YARN-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693290#comment-16693290 ] Wilfred Spiegelenburg commented on YARN-9019: - Change looks good. +1 (non-binding) > Ratio calculation of ResourceCalculator implementations could return NaN > > > Key: YARN-9019 > URL: https://issues.apache.org/jira/browse/YARN-9019 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-9019.001.patch > > > Found out that ResourceCalculator.ratio (with implementors > DefaultResourceCalculator and DominantResourceCalculator) can produce NaN > (Not-A-Number) as a result. > This is because [IEEE 754|http://grouper.ieee.org/groups/754/] defines {{1.0 > / 0.0}} as Infinity and {{-1.0 / 0.0}} as -Infinity and {{0.0 / 0.0}} as NaN, > see here: [https://stackoverflow.com/a/14138032/1106893] > I think it's very dangerous to rely on NaN being returned from ratio > calculations, and this could have side-effects. > When ratio calculates the result and both the numerator and the > denominator are zero, we should use 0 as the result, I think.
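[Editor's note] The IEEE 754 behavior described in this report, and the proposed guard, fit in a few lines. safeRatio is an illustrative name, not the actual ResourceCalculator method:

```java
// Demonstrates the IEEE 754 double-division cases from the report and the
// proposed fix: treat a 0/0 ratio as 0 instead of letting NaN escape.
public class RatioDemo {
    public static double safeRatio(double a, double b) {
        if (a == 0.0 && b == 0.0) {
            return 0.0;   // avoid 0.0 / 0.0 == NaN
        }
        return a / b;     // may still be +/-Infinity when only b is 0
    }

    public static void main(String[] args) {
        System.out.println(0.0 / 0.0);          // NaN per IEEE 754
        System.out.println(safeRatio(0.0, 0.0)); // guarded result
    }
}
```

The guard matters because NaN is contagious: any comparison with NaN is false and any arithmetic on it stays NaN, so a single unguarded 0/0 can silently corrupt downstream scheduling decisions.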
[jira] [Commented] (YARN-8951) Defining default queue placement rule in allocations file with create="false" throws an NPE
[ https://issues.apache.org/jira/browse/YARN-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693266#comment-16693266 ] Szilard Nemeth commented on YARN-8951: -- Thanks [~wilfreds] for your comment! I debugged the test code a bit more, and it turned out that the call to scheduler.init() eventually calls FS.initScheduler(), so in that respect the scheduler is properly initialized. With further offline debugging together, we realized the following: 1. When {{AllocationFileLoaderService#reloadAllocations}} gets called, it creates the queue placement policy by calling {{getQueuePlacementPolicy(allocationFileParser, queueProperties, conf)}}, which calls {{QueuePlacementPolicy.fromXml()}} and eventually creates the QueuePlacementPolicy object. In {{AllocationFileLoaderService#getQueuePlacementPolicy}}, the configured queues are passed in with {{queueProperties.getConfiguredQueues()}}, which means the queues come solely from the config file, regardless of what the {{QueueManager}} has. In other words, {{QueuePlacementPolicy}} has a separate (and different) set of queues than the {{QueueManager}} has. This could cause several issues. As [~wilfreds] said, the fix likely overlaps substantially with YARN-7769, so this is on hold until that issue is resolved.
> Defining default queue placement rule in allocations file with create="false" > throws an NPE > --- > > Key: YARN-8951 > URL: https://issues.apache.org/jira/browse/YARN-8951 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: default-placement-rule-with-create-false.patch > > > If the default queue placement rule is defined with {{create="false"}} and a > scheduling request is created for queue {{"root.default"}}, then > {{FairScheduler#assignToQueue}} throws an NPE while trying to construct an > error message in the catch block of {{IllegalStateException}}, relying on the > fact that {{rmApp}} is not null when in fact it is. > Example of such a config file:
> {code:xml}
> <allocations>
>   <queue name="default">
>     <minResources>1024mb,0vcores</minResources>
>   </queue>
>   <queuePlacementPolicy>
>     <rule name="default" create="false"/>
>   </queuePlacementPolicy>
> </allocations>
> {code}
> This is suspicious, as there are some null checks for {{rmApp}} in the same > method. > Not sure if this is a special case for the tests or if it is reproducible in a > cluster; this needs further investigation. > In any case, it's not good that we try to dereference an {{rmApp}} that is > null. > On the other hand, I'm not sure if the default queue placement rule with > {{create="false"}} makes sense at all. Looking at the documentation > ([https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/FairScheduler.html]): > {quote}default: the app is placed into the queue specified in the ‘queue’ > attribute of the default rule. *If ‘queue’ attribute is not specified, the > app is placed into ‘root.default’ queue.* > A queuePlacementPolicy element: which contains a list of rule elements that > tell the scheduler how to place incoming apps into queues. Rules are applied > in the order that they are listed. Rules may take arguments. *All rules > accept the “create” argument, which indicates whether the rule can create a > new queue.
“Create” defaults to true; if set to false and the rule would > place the app in a queue that is not configured in the allocations file, we > continue on to the next rule.* The last rule must be one that can never issue > a continue > {quote} > In this case, the rule has the queue property suppressed so the apps should > be placed to the {{root.default}} queue (which is an undefined queue > according to the config file), and create is false, meaning that the queue > {{root.default}} cannot be created at all. > *This seems to be a case of an invalid queue configuration file for me.* > [~jlowe], [~leftnoteasy]: What is your take on this?
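The failure mode described in YARN-8951 — dereferencing a possibly-null application reference while formatting an error message — can be sketched as follows. The class and method names here are hypothetical illustrations, not the actual FairScheduler code:

```java
public class QueueErrorMessageDemo {
    // Hypothetical stand-in for the null-safe handling the report implies:
    // the real FairScheduler#assignToQueue dereferences an rmApp reference
    // that can be null, which is what triggers the reported NPE.
    static String describeApp(Object rmApp) {
        // Guarding against null avoids the NPE while still producing
        // a usable diagnostic message.
        return (rmApp == null) ? "<unknown application>" : rmApp.toString();
    }

    static String queueErrorMessage(Object rmApp, String queueName) {
        return "Cannot place application " + describeApp(rmApp)
                + " in queue " + queueName;
    }

    public static void main(String[] args) {
        // No NPE even when the application reference is null.
        System.out.println(queueErrorMessage(null, "root.default"));
    }
}
```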
[jira] [Updated] (YARN-9016) DocumentStore as a backend for ATSv2
[ https://issues.apache.org/jira/browse/YARN-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushil Ks updated YARN-9016: Attachment: (was: YARN-9016.001.patch) > DocumentStore as a backend for ATSv2 > > > Key: YARN-9016 > URL: https://issues.apache.org/jira/browse/YARN-9016 > Project: Hadoop YARN > Issue Type: New Feature > Components: ATSv2 >Reporter: Sushil Ks >Assignee: Sushil Ks >Priority: Major > > h1. Document Store for ATSv2 > The Document Store for ATSv2 is a framework for plugging in > any document store vendor as a backend for ATSv2, i.e. Azure CosmosDB, > MongoDB, ElasticSearch, etc. > * Supports multiple document store vendors like CosmosDB, ElasticSearch, > MongoDB, etc. by just adding new configuration properties and writing document > store reader and writer clients. > * Currently has support for CosmosDB. > * All writes are async and buffered; buffered documents are flushed to the > store either when the document buffer gets full or periodically at every flush > interval, in the background, without adding any additional latency to running > jobs. > * All the REST APIs of the Timeline Reader Server are supported. > h4. > *How to enable?* > Add the following properties under *yarn-site.xml*:
> {code:xml}
> <property>
>   <name>yarn.timeline-service.writer.class</name>
>   <value>org.apache.hadoop.yarn.server.timelineservice.storage.documentstore.DocumentStoreTimelineWriterImpl</value>
> </property>
> <property>
>   <name>yarn.timeline-service.reader.class</name>
>   <value>org.apache.hadoop.yarn.server.timelineservice.storage.documentstore.DocumentStoreTimelineReaderImpl</value>
> </property>
> <property>
>   <name>yarn.timeline-service.documentstore.db-name</name>
>   <value>YOUR_DATABASE_NAME</value>
> </property>
> {code}
> h3. *Creating DB and Collections for storing documents* > Similar to the HBase *TimelineSchemaCreator*, the > following command needs to be executed once to set up the database and > collections for storing documents:
> {code:java}
> hadoop org.apache.hadoop.yarn.server.timelineservice.documentstore.DocumentStoreCollectionCreator
> {code}
> h3.
*Azure CosmosDB* > To use Azure CosmosDB as a DocumentStore for ATSv2, the following additional > properties are required under *yarn-site.xml*:
> {code:xml}
> <property>
>   <name>yarn.timeline-service.store-type</name>
>   <value>COSMOS_DB</value>
> </property>
> <property>
>   <name>yarn.timeline-service.cosmos-db.endpoint</name>
>   <value>http://YOUR_AZURE_COSMOS_DB_URL:443/</value>
> </property>
> <property>
>   <name>yarn.timeline-service.cosmos-db.masterkey</name>
>   <value>YOUR_AZURE_COSMOS_DB_MASTER_KEY_CREDENTIAL</value>
> </property>
> {code}
> *Testing locally* > In order to test Azure CosmosDB as a DocumentStore > locally, install the emulator from > [here|https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator] and > start it locally. Set the endpoint and master key under *yarn-site.xml* as > mentioned above and run any example job like DistributedShell. You can then > check the data explorer UI of Azure CosmosDB locally to query the > documents, or even launch the *TimelineReader* locally and fetch/query the > data from its REST APIs.
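The buffered, flush-on-full write pattern the YARN-9016 description outlines can be sketched as below. This is a simplified illustration with assumed names and sizes, not the actual DocumentStore writer (which additionally flushes on a background timer):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a write buffer that flushes to the backing store
// once it fills up; the real writer also flushes periodically on a
// background thread so idle buffers do not hold documents forever.
public class DocumentBufferDemo {
    private final int capacity;
    private final List<String> buffer = new ArrayList<>();
    final List<String> flushed = new ArrayList<>(); // stand-in for the store

    DocumentBufferDemo(int capacity) {
        this.capacity = capacity;
    }

    synchronized void write(String doc) {
        buffer.add(doc);
        if (buffer.size() >= capacity) {
            flush(); // buffer full: push everything to the store
        }
    }

    synchronized void flush() {
        flushed.addAll(buffer);
        buffer.clear();
    }

    public static void main(String[] args) {
        DocumentBufferDemo b = new DocumentBufferDemo(2);
        b.write("entity-1");
        b.write("entity-2"); // second write fills the buffer and flushes
        System.out.println(b.flushed.size()); // prints 2
    }
}
```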
[jira] [Commented] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests
[ https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693198#comment-16693198 ] Hadoop QA commented on YARN-5106: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 27 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 3s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 26s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 65 new + 318 unchanged - 37 fixed = 383 total (was 355) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 9s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 25s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}206m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-5106 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948828/YARN-5106.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f92fa3546aae 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d0cc679 | | maven | version:
[jira] [Updated] (YARN-9034) ApplicationCLI should have option to take clusterId
[ https://issues.apache.org/jira/browse/YARN-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-9034: Attachment: YARN-9034.02.patch > ApplicationCLI should have option to take clusterId > --- > > Key: YARN-9034 > URL: https://issues.apache.org/jira/browse/YARN-9034 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S >Priority: Major > Attachments: YARN-9034.01.patch, YARN-9034.02.patch > > > Post YARN-8303, LogsCLI provides an option to input a clusterId which can be > used for fetching data from ATSv2. ApplicationCLI should also have this > option.