[jira] [Commented] (MAPREDUCE-7319) Log list of mappers at trace level in ShuffleHandler audit log

2021-02-09 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17282035#comment-17282035
 ] 

Jim Brennan commented on MAPREDUCE-7319:


Thanks [~ebadger]!

> Log list of mappers at trace level in ShuffleHandler audit log
> --
>
> Key: MAPREDUCE-7319
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7319
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.4.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Minor
> Fix For: 3.4.0, 3.1.5, 3.3.1, 2.10.2, 3.2.3
>
> Attachments: MAPREDUCE-7319.001.patch
>
>
> [MAPREDUCE-6958] added the content length to ShuffleHandler audit log, which 
> is logged at DEBUG level.  After enabling it, we found that the list of 
> mappers for large jobs was filling up our audit logs.  It would be good to 
> move the list of mappers to TRACE level to reduce the logging impact without 
> disabling the log message entirely.
> For example, a log message like this:
> {noformat}
> 2018-01-25 23:43:02,669 [New I/O worker #1] DEBUG ShuffleHandler.audit: 
> shuffle for job_1512479762132_1318600 reducer 241 length 482072 mappers: 
> [attempt_1512479762132_1318600_1_00_004852_0_10003,
> attempt_1512479762132_1318600_1_00_004190_0_10003, 
> attempt_1512479762132_1318600_1_00_004393_0_10003, 
> attempt_1512479762132_1318600_1_00_005057_0_10003, 
> attempt_1512479762132_1318600_1_00_004855_0_10002,
> attempt_1512479762132_1318600_1_00_003976_0_10003, 
> attempt_1512479762132_1318600_1_00_004058_0_10003, 
> attempt_1512479762132_1318600_1_00_004355_0_10003, 
> attempt_1512479762132_1318600_1_00_004436_0_10002,
> attempt_1512479762132_1318600_1_00_004854_0_10003, 
> attempt_1512479762132_1318600_1_00_005174_0_10004, 
> attempt_1512479762132_1318600_1_00_003972_0_10002, 
> attempt_1512479762132_1318600_1_00_004853_0_10002,
> attempt_1512479762132_1318600_1_00_004856_0_10002]
> {noformat}
> Would become this with 
> {{log4j.logger.org.apache.hadoop.mapred.ShuffleHandler.audit=DEBUG}}:
> {noformat}
> 2018-01-25 23:43:02,669 [New I/O worker #1] DEBUG ShuffleHandler.audit: 
> shuffle for job_1512479762132_1318600 reducer 241 length 482072
> {noformat}
> And this with 
> {{log4j.logger.org.apache.hadoop.mapred.ShuffleHandler.audit=TRACE}}:
> {noformat}
> 2018-01-25 23:43:02,669 [New I/O worker #1] DEBUG ShuffleHandler.audit: 
> shuffle for job_1512479762132_1318600 reducer 241 length 482072
> 2018-01-25 23:43:02,669 [New I/O worker #1] TRACE ShuffleHandler.audit: 
> shuffle for job_1512479762132_1318600 mappers: 
> [attempt_1512479762132_1318600_1_00_004852_0_10003,
> attempt_1512479762132_1318600_1_00_004190_0_10003, 
> attempt_1512479762132_1318600_1_00_004393_0_10003, 
> attempt_1512479762132_1318600_1_00_005057_0_10003, 
> attempt_1512479762132_1318600_1_00_004855_0_10002,
> attempt_1512479762132_1318600_1_00_003976_0_10003, 
> attempt_1512479762132_1318600_1_00_004058_0_10003, 
> attempt_1512479762132_1318600_1_00_004355_0_10003, 
> attempt_1512479762132_1318600_1_00_004436_0_10002,
> attempt_1512479762132_1318600_1_00_004854_0_10003, 
> attempt_1512479762132_1318600_1_00_005174_0_10004, 
> attempt_1512479762132_1318600_1_00_003972_0_10002, 
> attempt_1512479762132_1318600_1_00_004853_0_10002,
> attempt_1512479762132_1318600_1_00_004856_0_10002]
> {noformat}
> One open question is whether any downstream consumers of this audit log 
> might have a problem with this change.
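The change described above amounts to splitting one DEBUG message into a cheap summary plus a trace-guarded mapper list. A minimal sketch of that pattern follows; the class and the injected log sinks are illustrative stand-ins, not the actual ShuffleHandler code or its SLF4J wiring:

```java
import java.util.List;
import java.util.function.Consumer;

// Sketch of the DEBUG/TRACE split: the summary is always logged at
// DEBUG, while the potentially huge mapper list is only emitted when
// TRACE is enabled. Log sinks are injected so the behavior can be
// shown without a logging backend (an assumption for this sketch).
public class AuditLogSketch {
    private final boolean traceEnabled;
    private final Consumer<String> debugSink;
    private final Consumer<String> traceSink;

    public AuditLogSketch(boolean traceEnabled,
                          Consumer<String> debugSink,
                          Consumer<String> traceSink) {
        this.traceEnabled = traceEnabled;
        this.debugSink = debugSink;
        this.traceSink = traceSink;
    }

    public void logShuffle(String jobId, int reducer, long length,
                           List<String> mapIds) {
        // Cheap one-line summary, logged at DEBUG as before.
        debugSink.accept("shuffle for " + jobId + " reducer " + reducer
            + " length " + length);
        // Mapper list moved behind a TRACE guard, so large jobs no
        // longer flood the audit log at DEBUG.
        if (traceEnabled) {
            traceSink.accept("shuffle for " + jobId + " mappers: " + mapIds);
        }
    }
}
```

With a real SLF4J logger, the guard would be `LOG.isTraceEnabled()` and the sinks would be `LOG.debug`/`LOG.trace`.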



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Work logged] (MAPREDUCE-7141) Allow KMS generated spill encryption keys

2021-02-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7141?focusedWorklogId=550398&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-550398
 ]

ASF GitHub Bot logged work on MAPREDUCE-7141:
-

Author: ASF GitHub Bot
Created on: 09/Feb/21 20:09
Start Date: 09/Feb/21 20:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2695:
URL: https://github.com/apache/hadoop/pull/2695#issuecomment-776211060


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   2m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 47s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 52s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   2m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  
hadoop-mapreduce-project/hadoop-mapreduce-client: The patch generated 0 new + 
811 unchanged - 1 fixed = 811 total (was 812)  |
   | +1 :green_heart: |  mvnsite  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  12m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m  7s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m 55s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m  2s |  |  hadoop-mapreduce-client-common 
in the patch passed.  |
   | +1 :green_heart: |  unit  |   8m 24s |  |  hadoop-mapreduce-client-app in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 132m 55s |  |  
hadoop-mapreduce-client-jobclient in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 247m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2695/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2695 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 5a090a713d5b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9434c1eccc2 |
   | Default Java | Private 

[jira] [Updated] (MAPREDUCE-7319) Log list of mappers at trace level in ShuffleHandler audit log

2021-02-09 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated MAPREDUCE-7319:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)




[jira] [Updated] (MAPREDUCE-7319) Log list of mappers at trace level in ShuffleHandler audit log

2021-02-09 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated MAPREDUCE-7319:
---
Fix Version/s: 3.2.3
   2.10.2
   3.3.1
   3.1.5
   3.4.0

+1

Thanks for the patch, [~Jim_Brennan]! I've committed this to trunk (3.4), 
branch-3.3, branch-3.2, branch-3.1, and branch-2.10.




[jira] [Commented] (MAPREDUCE-7319) Log list of mappers at trace level in ShuffleHandler audit log

2021-02-09 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17281905#comment-17281905
 ] 

Jim Brennan commented on MAPREDUCE-7319:


I don't think a unit test is needed for this log message change.
[~ebadger] can you please review?





[jira] [Commented] (MAPREDUCE-7319) Log list of mappers at trace level in ShuffleHandler audit log

2021-02-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17281901#comment-17281901
 ] 

Hadoop QA commented on MAPREDUCE-7319:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
59s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 12s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
43s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 

[jira] [Updated] (MAPREDUCE-7319) Log list of mappers at trace level in ShuffleHandler audit log

2021-02-09 Thread Jim Brennan (Jira)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated MAPREDUCE-7319:
---
Status: Patch Available  (was: Open)




[jira] [Updated] (MAPREDUCE-7141) Allow KMS generated spill encryption keys

2021-02-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated MAPREDUCE-7141:
--
Labels: pull-request-available  (was: )

> Allow KMS generated spill encryption keys
> -
>
> Key: MAPREDUCE-7141
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7141
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>Reporter: Kuhu Shukla
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: MAPREDUCE-7141-001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the encryption key for task spills can only be generated by the 
> AM's key generator. This JIRA tracks the work required to add KMS support 
> for generating this key, providing fault tolerance against AM failures/re-runs 
> and giving the client another option for how the keys are created.
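The two key sources contrasted above can be sketched as follows. The `Supplier`-based KMS hook is an illustrative stand-in: the real patch would go through Hadoop's KeyProvider API, whose exact usage is not shown in this thread. The local path uses the standard `javax.crypto` generator:

```java
import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;
import java.util.function.Supplier;

// Sketch of the choice the issue describes: the spill encryption key
// comes either from the AM's local generator (the status quo) or from
// an external source such as KMS. The external source is modeled as a
// plain Supplier here -- an assumption for this sketch.
public class SpillKeySource {
    // Status quo: AM generates a fresh AES key locally. If the AM is
    // restarted, this key is lost along with it.
    public static byte[] localKey(int bits) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(bits);
            return kg.generateKey().getEncoded();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("AES should always exist", e);
        }
    }

    // Select the key source. A KMS-backed supplier survives AM
    // failures/re-runs because the key material lives outside the AM.
    public static byte[] spillKey(boolean useKms, Supplier<byte[]> kmsSource,
                                  int bits) {
        return useKms ? kmsSource.get() : localKey(bits);
    }
}
```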






[jira] [Work logged] (MAPREDUCE-7141) Allow KMS generated spill encryption keys

2021-02-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7141?focusedWorklogId=550300&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-550300
 ]

ASF GitHub Bot logged work on MAPREDUCE-7141:
-

Author: ASF GitHub Bot
Created on: 09/Feb/21 16:00
Start Date: 09/Feb/21 16:00
Worklog Time Spent: 10m 
  Work Description: amahussein opened a new pull request #2695:
URL: https://github.com/apache/hadoop/pull/2695


   [MAPREDUCE-7141: Allow KMS generated spill encryption 
keys](https://issues.apache.org/jira/browse/MAPREDUCE-7141)
   Add KMS support for generating the key used to encrypt spilled data on 
disk. The feature improves fault tolerance against AM failures/re-runs and 
also gives the client another option for how the keys are created.
   The current implementation assumes that the KMS key can be retrieved from 
the DFS.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 550300)
Remaining Estimate: 0h
Time Spent: 10m




[jira] [Created] (MAPREDUCE-7321) TestMRIntermediateDataEncryption does not cleanup data folders

2021-02-09 Thread Ahmed Hussein (Jira)
Ahmed Hussein created MAPREDUCE-7321:


 Summary: TestMRIntermediateDataEncryption does not cleanup data 
folders
 Key: MAPREDUCE-7321
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7321
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1, test
Reporter: Ahmed Hussein


The data generated by {{TestMRIntermediateDataEncryption}} does not get 
deleted after the tests complete. This contributes to Hadoop consuming a 
large amount of disk space to build and run tests.
 The following folders need to be removed:
 * folders of the DFSCluster and the YarnCluster
 * Files used to submit jobs in the test-dir folder
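The cleanup described above boils down to recursively deleting those directories once the test class finishes. A minimal sketch follows; in the actual fix this would run from a JUnit teardown method, and the directory paths are generic placeholders, not the real mini-cluster or test-dir locations:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Sketch of the missing cleanup: remove a test data tree after the
// tests complete. TestMRIntermediateDataEncryption's actual folders
// (DFS/Yarn mini-cluster storage, job files under test-dir) are not
// named here -- this only shows the recursive-delete mechanics.
public class TestDataCleanup {
    public static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) {
            return; // nothing to clean up
        }
        // Walk depth-first in reverse order so files are deleted
        // before the directories that contain them.
        try (Stream<Path> walk = Files.walk(root)) {
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}
```

In JUnit 4 (which the `mrv1` tests use) this would be invoked from an `@AfterClass` method so it runs once per test class.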


