[jira] [Updated] (MAPREDUCE-4883) Reducer's Maximum Shuffle Buffer Size should be enlarged for 64bit JVM

2013-01-26 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated MAPREDUCE-4883:
--

Attachment: MAPREDUCE-4883.patch

Attach the patch for trunk

 Reducer's Maximum Shuffle Buffer Size should be enlarged for 64bit JVM
 --

 Key: MAPREDUCE-4883
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4883
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 0.20.2, 1.0.3
 Environment: Especially for 64bit JVM
Reporter: Lijie Xu
Assignee: Jerry Chen
  Labels: patch
 Attachments: MAPREDUCE-4883.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 In hadoop-0.20.2, hadoop-1.0.3 or other versions, reducer's shuffle buffer 
 size cannot exceed 2048MB (i.e., Integer.MAX_VALUE). This is reasonable for 
 32bit JVM.
 But for 64bit JVM, although reducer's JVM size can be set to more than 2048MB 
 (e.g., mapred.child.java.opts=-Xmx4000m), the heap size used for shuffle 
 buffer is at most 2048MB * maxInMemCopyUse (default 0.7) not 4000MB * 
 maxInMemCopyUse. 
 So the pointed piece of code in ReduceTask.java needs modification for 64bit 
 JVM.
 ---
   private final long maxSize;
   private final long maxSingleShuffleLimit;
  
   private long size = 0;
  
   private Object dataAvailable = new Object();
   private long fullSize = 0;
   private int numPendingRequests = 0;
   private int numRequiredMapOutputs = 0;
   private int numClosed = 0;
   private boolean closed = false;
  
   public ShuffleRamManager(Configuration conf) throws IOException {
  final float maxInMemCopyUse =
    conf.getFloat("mapred.job.shuffle.input.buffer.percent", 0.70f);
  if (maxInMemCopyUse > 1.0 || maxInMemCopyUse < 0.0) {
    throw new IOException("mapred.job.shuffle.input.buffer.percent" +
  maxInMemCopyUse);
  }
  // Allow unit tests to fix Runtime memory
  --   maxSize = (int)(conf.getInt("mapred.job.reduce.total.mem.bytes",
  --     (int)Math.min(Runtime.getRuntime().maxMemory(), Integer.MAX_VALUE))
  --   * maxInMemCopyUse);
  maxSingleShuffleLimit = (long)(maxSize *
    MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION);
  LOG.info("ShuffleRamManager: MemoryLimit=" + maxSize +
    ", MaxSingleShuffleLimit=" + maxSingleShuffleLimit);
   }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4883) Reducer's Maximum Shuffle Buffer Size should be enlarged for 64bit JVM

2013-01-26 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated MAPREDUCE-4883:
--

Target Version/s: trunk  (was: 0.20.2, 1.0.3)
  Status: Patch Available  (was: Open)

Patch for trunk submitted.

Changes are:
1. Enable memoryLimit to exceed Integer.MAX_VALUE in trunk (which may be needed 
on 64bit machines).
2. Because a single map output buffer still cannot exceed Integer.MAX_VALUE, 
maxSingleShuffleLimit will still be limited by Integer.MAX_VALUE.
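
A minimal, self-contained illustration of the two points above (hypothetical demo code, not the attached patch): with the old (int) cast a 4000 MB heap is clamped to Integer.MAX_VALUE before the fraction is applied, while computing in long removes that cap and only the single-segment limit stays within int range.

{code}
public class ShuffleLimitDemo {
  // Assumed value for illustration; the real constant lives in the shuffle code.
  private static final float MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION = 0.25f;

  public static void main(String[] args) {
    long heapBytes = 4000L * 1024 * 1024;   // e.g. mapred.child.java.opts=-Xmx4000m on a 64bit JVM
    float maxInMemCopyUse = 0.70f;

    // Old behaviour: heap clamped to Integer.MAX_VALUE before applying the fraction.
    long oldLimit = (long) ((int) Math.min(heapBytes, Integer.MAX_VALUE) * maxInMemCopyUse);

    // Sketched new behaviour: keep the arithmetic in long so the limit can exceed 2 GB.
    long newLimit = (long) (heapBytes * maxInMemCopyUse);
    long singleLimit = Math.min(
        (long) (newLimit * MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION),
        Integer.MAX_VALUE);                 // a single map output buffer is still a byte[]

    System.out.println("old memoryLimit       = " + oldLimit);    // roughly 1433 MB
    System.out.println("new memoryLimit       = " + newLimit);    // roughly 2800 MB
    System.out.println("maxSingleShuffleLimit = " + singleLimit);
  }
}
{code}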

 Reducer's Maximum Shuffle Buffer Size should be enlarged for 64bit JVM
 --

 Key: MAPREDUCE-4883
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4883
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 1.0.3, 0.20.2
 Environment: Especially for 64bit JVM
Reporter: Lijie Xu
Assignee: Jerry Chen
  Labels: patch
 Attachments: MAPREDUCE-4883.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 In hadoop-0.20.2, hadoop-1.0.3 or other versions, reducer's shuffle buffer 
 size cannot exceed 2048MB (i.e., Integer.MAX_VALUE). This is reasonable for 
 32bit JVM.
 But for 64bit JVM, although reducer's JVM size can be set to more than 2048MB 
 (e.g., mapred.child.java.opts=-Xmx4000m), the heap size used for shuffle 
 buffer is at most 2048MB * maxInMemCopyUse (default 0.7) not 4000MB * 
 maxInMemCopyUse. 
 So the pointed piece of code in ReduceTask.java needs modification for 64bit 
 JVM.
 ---
   private final long maxSize;
   private final long maxSingleShuffleLimit;
  
   private long size = 0;
  
   private Object dataAvailable = new Object();
   private long fullSize = 0;
   private int numPendingRequests = 0;
   private int numRequiredMapOutputs = 0;
   private int numClosed = 0;
   private boolean closed = false;
  
   public ShuffleRamManager(Configuration conf) throws IOException {
  final float maxInMemCopyUse =
    conf.getFloat("mapred.job.shuffle.input.buffer.percent", 0.70f);
  if (maxInMemCopyUse > 1.0 || maxInMemCopyUse < 0.0) {
    throw new IOException("mapred.job.shuffle.input.buffer.percent" +
  maxInMemCopyUse);
  }
  // Allow unit tests to fix Runtime memory
  --   maxSize = (int)(conf.getInt("mapred.job.reduce.total.mem.bytes",
  --     (int)Math.min(Runtime.getRuntime().maxMemory(), Integer.MAX_VALUE))
  --   * maxInMemCopyUse);
  maxSingleShuffleLimit = (long)(maxSize *
    MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION);
  LOG.info("ShuffleRamManager: MemoryLimit=" + maxSize +
    ", MaxSingleShuffleLimit=" + maxSingleShuffleLimit);
   }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4882) Error in estimating the length of the output file in Spill Phase

2013-01-26 Thread Jerry Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563424#comment-13563424
 ] 

Jerry Chen commented on MAPREDUCE-4882:
---

[~gelesh]
The map task chooses the spill file dir on the local disks according to the 
estimated size if there are multiple local dirs configured. A wrong estimated 
size may cause a wrong decision, such as choosing a dir with less free space 
based on the given (wrong) size while the actual spill is larger, and thus 
cause a disk-full error even though another local dir with enough space is 
available.
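
For reference, a hedged sketch of the mechanism described above (illustrative only, not the actual MapTask code path): the estimated spill size is handed to LocalDirAllocator, which only considers local dirs with at least that much free space, so a wrong estimate can steer the spill to a dir that ends up full.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;
import org.apache.hadoop.fs.Path;

public class SpillDirChoiceDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Example local dirs; in a real TaskTracker these come from the site configuration.
    conf.set("mapred.local.dir", "/data1/mapred/local,/data2/mapred/local");
    LocalDirAllocator alloc = new LocalDirAllocator("mapred.local.dir");

    // The allocator only picks a dir whose free space can hold this many bytes,
    // so an underestimated size can select a dir the actual spill will overflow.
    long estimatedSpillBytes = 52428700L;
    Path spill = alloc.getLocalPathForWrite("spill0.out", estimatedSpillBytes, conf);
    System.out.println("spill would be written under " + spill);
  }
}
{code}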


 Error in estimating the length of the output file in Spill Phase
 

 Key: MAPREDUCE-4882
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4882
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.2, 1.0.3
 Environment: Any Environment
Reporter: Lijie Xu
  Labels: patch
   Original Estimate: 1h
  Remaining Estimate: 1h

 The sortAndSpill() method in MapTask.java has an error in estimating the 
 length of the output file. 
 The "long size" should be "(bufvoid - bufstart) + bufend", not "(bufvoid - 
 bufend) + bufstart", when bufend < bufstart.
 Here is the original code in MapTask.java.
  private void sortAndSpill() throws IOException, ClassNotFoundException,
                                     InterruptedException {
    //approximate the length of the output file to be the length of the
    //buffer + header lengths for the partitions
    long size = (bufend >= bufstart
        ? bufend - bufstart
        : (bufvoid - bufend) + bufstart) +
        partitions * APPROX_HEADER_LENGTH;
    FSDataOutputStream out = null;
 --
 I had a test on TeraSort. A snippet from mapper's log is as follows:
 MapTask: Spilling map output: record full = true
 MapTask: bufstart = 157286200; bufend = 10485460; bufvoid = 199229440
 MapTask: kvstart = 262142; kvend = 131069; length = 655360
 MapTask: Finished spill 3
 In this occasion, Spill Bytes should be (199229440 - 157286200) + 10485460 = 
 52428700 (52 MB) because the number of spilled records is 524287 and each 
 record costs 100B.
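
 To make the arithmetic above easy to re-check, here is a small stand-alone demo 
 (hypothetical helper, not code from MapTask.java) that plugs the logged values 
 into both formulas; with bufend < bufstart the wrong formula even exceeds bufvoid.

 {code}
 public class SpillEstimateDemo {
   public static void main(String[] args) {
     // Values from the mapper log snippet above.
     long bufstart = 157286200L, bufend = 10485460L, bufvoid = 199229440L;

     long wrong   = (bufvoid - bufend) + bufstart;   // 346030180, larger than the buffer itself
     long correct = (bufvoid - bufstart) + bufend;   // 52428700  = 524287 records * 100 bytes

     System.out.println("wrong estimate   = " + wrong);
     System.out.println("correct estimate = " + correct);
   }
 }
 {code}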

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (MAPREDUCE-4882) Error in estimating the length of the output file in Spill Phase

2013-01-26 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen reassigned MAPREDUCE-4882:
-

Assignee: Jerry Chen

 Error in estimating the length of the output file in Spill Phase
 

 Key: MAPREDUCE-4882
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4882
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.2, 1.0.3
 Environment: Any Environment
Reporter: Lijie Xu
Assignee: Jerry Chen
  Labels: patch
   Original Estimate: 1h
  Remaining Estimate: 1h

 The sortAndSpill() method in MapTask.java has an error in estimating the 
 length of the output file. 
 The "long size" should be "(bufvoid - bufstart) + bufend", not "(bufvoid - 
 bufend) + bufstart", when bufend < bufstart.
 Here is the original code in MapTask.java.
  private void sortAndSpill() throws IOException, ClassNotFoundException,
                                     InterruptedException {
    //approximate the length of the output file to be the length of the
    //buffer + header lengths for the partitions
    long size = (bufend >= bufstart
        ? bufend - bufstart
        : (bufvoid - bufend) + bufstart) +
        partitions * APPROX_HEADER_LENGTH;
    FSDataOutputStream out = null;
 --
 I had a test on TeraSort. A snippet from mapper's log is as follows:
 MapTask: Spilling map output: record full = true
 MapTask: bufstart = 157286200; bufend = 10485460; bufvoid = 199229440
 MapTask: kvstart = 262142; kvend = 131069; length = 655360
 MapTask: Finished spill 3
 In this occasion, Spill Bytes should be (199229440 - 157286200) + 10485460 = 
 52428700 (52 MB) because the number of spilled records is 524287 and each 
 record costs 100B.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4883) Reducer's Maximum Shuffle Buffer Size should be enlarged for 64bit JVM

2013-01-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563425#comment-13563425
 ] 

Hadoop QA commented on MAPREDUCE-4883:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566621/MAPREDUCE-4883.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3280//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3280//console

This message is automatically generated.

 Reducer's Maximum Shuffle Buffer Size should be enlarged for 64bit JVM
 --

 Key: MAPREDUCE-4883
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4883
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 0.20.2, 1.0.3
 Environment: Especially for 64bit JVM
Reporter: Lijie Xu
Assignee: Jerry Chen
  Labels: patch
 Attachments: MAPREDUCE-4883.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 In hadoop-0.20.2, hadoop-1.0.3 or other versions, reducer's shuffle buffer 
 size cannot exceed 2048MB (i.e., Integer.MAX_VALUE). This is reasonable for 
 32bit JVM.
 But for 64bit JVM, although reducer's JVM size can be set to more than 2048MB 
 (e.g., mapred.child.java.opts=-Xmx4000m), the heap size used for shuffle 
 buffer is at most 2048MB * maxInMemCopyUse (default 0.7) not 4000MB * 
 maxInMemCopyUse. 
 So the pointed piece of code in ReduceTask.java needs modification for 64bit 
 JVM.
 ---
   private final long maxSize;
   private final long maxSingleShuffleLimit;
  
   private long size = 0;
  
   private Object dataAvailable = new Object();
   private long fullSize = 0;
   private int numPendingRequests = 0;
   private int numRequiredMapOutputs = 0;
   private int numClosed = 0;
   private boolean closed = false;
  
   public ShuffleRamManager(Configuration conf) throws IOException {
  final float maxInMemCopyUse =
    conf.getFloat("mapred.job.shuffle.input.buffer.percent", 0.70f);
  if (maxInMemCopyUse > 1.0 || maxInMemCopyUse < 0.0) {
    throw new IOException("mapred.job.shuffle.input.buffer.percent" +
  maxInMemCopyUse);
  }
  // Allow unit tests to fix Runtime memory
  --   maxSize = (int)(conf.getInt("mapred.job.reduce.total.mem.bytes",
  --     (int)Math.min(Runtime.getRuntime().maxMemory(), Integer.MAX_VALUE))
  --   * maxInMemCopyUse);
  maxSingleShuffleLimit = (long)(maxSize *
    MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION);
  LOG.info("ShuffleRamManager: MemoryLimit=" + maxSize +
    ", MaxSingleShuffleLimit=" + maxSingleShuffleLimit);
   }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4049) plugin for generic shuffle service

2013-01-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563435#comment-13563435
 ] 

Hudson commented on MAPREDUCE-4049:
---

Integrated in Hadoop-Yarn-trunk #108 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/108/])
Amending MR CHANGES.txt to reflect that MAPREDUCE-4049/4809/4807/4808 are 
in branch-2 (Revision 1438799)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1438799
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt


 plugin for generic shuffle service
 --

 Key: MAPREDUCE-4049
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4049
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: performance, task, tasktracker
Affects Versions: 1.0.3, 1.1.0, 2.0.0-alpha, 3.0.0
Reporter: Avner BenHanoch
Assignee: Avner BenHanoch
  Labels: merge, plugin, rdma, shuffle
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-1.x.y.patch, Hadoop Shuffle Plugin Design.rtf, 
 MAPREDUCE-4049--branch-1.patch, mapreduce-4049.patch


 Support a generic shuffle service as a set of two plugins: ShuffleProvider & 
 ShuffleConsumer.
 This will satisfy the following needs:
 # Better shuffle and merge performance. For example: we are working on a 
 shuffle plugin that performs shuffle over RDMA in fast networks (10GbE, 40GbE, 
 or InfiniBand) instead of using the current HTTP shuffle. Based on the fast 
 RDMA shuffle, the plugin can also utilize a suitable merge approach during 
 the intermediate merges, hence getting much better performance.
 # Satisfy MAPREDUCE-3060 - generic shuffle service for avoiding hidden 
 dependency of NodeManager with a specific version of mapreduce shuffle 
 (currently targeted to 0.24.0).
 References:
 # Hadoop Acceleration through Network Levitated Merging, by Prof. Weikuan Yu 
 from Auburn University with others, 
 [http://pasl.eng.auburn.edu/pubs/sc11-netlev.pdf]
 # I am attaching 2 documents with suggested Top Level Design for both plugins 
 (currently, based on 1.0 branch)
 # I am providing link for downloading UDA - Mellanox's open source plugin 
 that implements generic shuffle service using RDMA and levitated merge.  
 Note: At this phase, the code is in C++ through JNI and you should consider 
 it as beta only.  Still, it can serve anyone that wants to implement or 
 contribute to levitated merge. (Please be advised that levitated merge is 
 mostly suited to very fast networks) - 
 [http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=144&menu_section=69]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4882) Error in estimating the length of the output file in Spill Phase

2013-01-26 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated MAPREDUCE-4882:
--

Attachment: MAPREDUCE-4882.patch

 Error in estimating the length of the output file in Spill Phase
 

 Key: MAPREDUCE-4882
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4882
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.2, 1.0.3
 Environment: Any Environment
Reporter: Lijie Xu
Assignee: Jerry Chen
  Labels: patch
 Attachments: MAPREDUCE-4882.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 The sortAndSpill() method in MapTask.java has an error in estimating the 
 length of the output file. 
 The "long size" should be "(bufvoid - bufstart) + bufend", not "(bufvoid - 
 bufend) + bufstart", when bufend < bufstart.
 Here is the original code in MapTask.java.
  private void sortAndSpill() throws IOException, ClassNotFoundException,
                                     InterruptedException {
    //approximate the length of the output file to be the length of the
    //buffer + header lengths for the partitions
    long size = (bufend >= bufstart
        ? bufend - bufstart
        : (bufvoid - bufend) + bufstart) +
        partitions * APPROX_HEADER_LENGTH;
    FSDataOutputStream out = null;
 --
 I had a test on TeraSort. A snippet from mapper's log is as follows:
 MapTask: Spilling map output: record full = true
 MapTask: bufstart = 157286200; bufend = 10485460; bufvoid = 199229440
 MapTask: kvstart = 262142; kvend = 131069; length = 655360
 MapTask: Finished spill 3
 In this occasion, Spill Bytes should be (199229440 - 157286200) + 10485460 = 
 52428700 (52 MB) because the number of spilled records is 524287 and each 
 record costs 100B.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4882) Error in estimating the length of the output file in Spill Phase

2013-01-26 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated MAPREDUCE-4882:
--

Target Version/s: trunk  (was: 0.20.2, 1.0.3)
  Status: Patch Available  (was: Open)

Patch for fixing the problem attached.

Changed "(bufvoid - bufend) + bufstart" to "(bufvoid - bufstart) + bufend" 
and added a test case for detecting an invalid estimated size: in the case of 
bufend < bufstart, "(bufvoid - bufend) + bufstart" will be greater than bufvoid.

Please kindly help review the patch.

 Error in estimating the length of the output file in Spill Phase
 

 Key: MAPREDUCE-4882
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4882
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 1.0.3, 0.20.2
 Environment: Any Environment
Reporter: Lijie Xu
Assignee: Jerry Chen
  Labels: patch
 Attachments: MAPREDUCE-4882.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 The sortAndSpill() method in MapTask.java has an error in estimating the 
 length of the output file. 
 The "long size" should be "(bufvoid - bufstart) + bufend", not "(bufvoid - 
 bufend) + bufstart", when bufend < bufstart.
 Here is the original code in MapTask.java.
  private void sortAndSpill() throws IOException, ClassNotFoundException,
                                     InterruptedException {
    //approximate the length of the output file to be the length of the
    //buffer + header lengths for the partitions
    long size = (bufend >= bufstart
        ? bufend - bufstart
        : (bufvoid - bufend) + bufstart) +
        partitions * APPROX_HEADER_LENGTH;
    FSDataOutputStream out = null;
 --
 I had a test on TeraSort. A snippet from mapper's log is as follows:
 MapTask: Spilling map output: record full = true
 MapTask: bufstart = 157286200; bufend = 10485460; bufvoid = 199229440
 MapTask: kvstart = 262142; kvend = 131069; length = 655360
 MapTask: Finished spill 3
 In this occasion, Spill Bytes should be (199229440 - 157286200) + 10485460 = 
 52428700 (52 MB) because the number of spilled records is 524287 and each 
 record costs 100B.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4883) Reducer's Maximum Shuffle Buffer Size should be enlarged for 64bit JVM

2013-01-26 Thread Lijie Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563455#comment-13563455
 ] 

Lijie Xu commented on MAPREDUCE-4883:
-

[~jerrychenhf] Good job! Yes, maxSingleShuffleLimit should not be changed; in 
practice, the maxSingleShuffleLimit boundary is rarely reached. I have another 
question here: I think mapred.job.reduce.input.buffer.percent is bizarre and 
hardly used. Is it possible to remove this parameter, given that it is almost 
always set to 0.0f?

 Reducer's Maximum Shuffle Buffer Size should be enlarged for 64bit JVM
 --

 Key: MAPREDUCE-4883
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4883
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 0.20.2, 1.0.3
 Environment: Especially for 64bit JVM
Reporter: Lijie Xu
Assignee: Jerry Chen
  Labels: patch
 Attachments: MAPREDUCE-4883.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 In hadoop-0.20.2, hadoop-1.0.3 or other versions, reducer's shuffle buffer 
 size cannot exceed 2048MB (i.e., Integer.MAX_VALUE). This is reasonable for 
 32bit JVM.
 But for 64bit JVM, although reducer's JVM size can be set to more than 2048MB 
 (e.g., mapred.child.java.opts=-Xmx4000m), the heap size used for shuffle 
 buffer is at most 2048MB * maxInMemCopyUse (default 0.7) not 4000MB * 
 maxInMemCopyUse. 
 So the pointed piece of code in ReduceTask.java needs modification for 64bit 
 JVM.
 ---
   private final long maxSize;
   private final long maxSingleShuffleLimit;
  
   private long size = 0;
  
   private Object dataAvailable = new Object();
   private long fullSize = 0;
   private int numPendingRequests = 0;
   private int numRequiredMapOutputs = 0;
   private int numClosed = 0;
   private boolean closed = false;
  
   public ShuffleRamManager(Configuration conf) throws IOException {
  final float maxInMemCopyUse =
    conf.getFloat("mapred.job.shuffle.input.buffer.percent", 0.70f);
  if (maxInMemCopyUse > 1.0 || maxInMemCopyUse < 0.0) {
    throw new IOException("mapred.job.shuffle.input.buffer.percent" +
  maxInMemCopyUse);
  }
  // Allow unit tests to fix Runtime memory
  --   maxSize = (int)(conf.getInt("mapred.job.reduce.total.mem.bytes",
  --     (int)Math.min(Runtime.getRuntime().maxMemory(), Integer.MAX_VALUE))
  --   * maxInMemCopyUse);
  maxSingleShuffleLimit = (long)(maxSize *
    MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION);
  LOG.info("ShuffleRamManager: MemoryLimit=" + maxSize +
    ", MaxSingleShuffleLimit=" + maxSingleShuffleLimit);
   }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4882) Error in estimating the length of the output file in Spill Phase

2013-01-26 Thread Lijie Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563459#comment-13563459
 ] 

Lijie Xu commented on MAPREDUCE-4882:
-

[~jerrychenhf]
Thanks, I checked this patch and think it is correct. In fact, I have run many 
jobs with this change and found nothing abnormal. If I find any more problems 
with this change, I will report them.

 Error in estimating the length of the output file in Spill Phase
 

 Key: MAPREDUCE-4882
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4882
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.2, 1.0.3
 Environment: Any Environment
Reporter: Lijie Xu
Assignee: Jerry Chen
  Labels: patch
 Attachments: MAPREDUCE-4882.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 The sortAndSpill() method in MapTask.java has an error in estimating the 
 length of the output file. 
 The "long size" should be "(bufvoid - bufstart) + bufend", not "(bufvoid - 
 bufend) + bufstart", when bufend < bufstart.
 Here is the original code in MapTask.java.
  private void sortAndSpill() throws IOException, ClassNotFoundException,
                                     InterruptedException {
    //approximate the length of the output file to be the length of the
    //buffer + header lengths for the partitions
    long size = (bufend >= bufstart
        ? bufend - bufstart
        : (bufvoid - bufend) + bufstart) +
        partitions * APPROX_HEADER_LENGTH;
    FSDataOutputStream out = null;
 --
 I had a test on TeraSort. A snippet from mapper's log is as follows:
 MapTask: Spilling map output: record full = true
 MapTask: bufstart = 157286200; bufend = 10485460; bufvoid = 199229440
 MapTask: kvstart = 262142; kvend = 131069; length = 655360
 MapTask: Finished spill 3
 In this occasion, Spill Bytes should be (199229440 - 157286200) + 10485460 = 
 52428700 (52 MB) because the number of spilled records is 524287 and each 
 record costs 100B.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4882) Error in estimating the length of the output file in Spill Phase

2013-01-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563472#comment-13563472
 ] 

Hadoop QA commented on MAPREDUCE-4882:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566626/MAPREDUCE-4882.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3281//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3281//console

This message is automatically generated.

 Error in estimating the length of the output file in Spill Phase
 

 Key: MAPREDUCE-4882
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4882
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.2, 1.0.3
 Environment: Any Environment
Reporter: Lijie Xu
Assignee: Jerry Chen
  Labels: patch
 Attachments: MAPREDUCE-4882.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 The sortAndSpill() method in MapTask.java has an error in estimating the 
 length of the output file. 
 The "long size" should be "(bufvoid - bufstart) + bufend", not "(bufvoid - 
 bufend) + bufstart", when bufend < bufstart.
 Here is the original code in MapTask.java.
  private void sortAndSpill() throws IOException, ClassNotFoundException,
                                     InterruptedException {
    //approximate the length of the output file to be the length of the
    //buffer + header lengths for the partitions
    long size = (bufend >= bufstart
        ? bufend - bufstart
        : (bufvoid - bufend) + bufstart) +
        partitions * APPROX_HEADER_LENGTH;
    FSDataOutputStream out = null;
 --
 I had a test on TeraSort. A snippet from mapper's log is as follows:
 MapTask: Spilling map output: record full = true
 MapTask: bufstart = 157286200; bufend = 10485460; bufvoid = 199229440
 MapTask: kvstart = 262142; kvend = 131069; length = 655360
 MapTask: Finished spill 3
 In this occasion, Spill Bytes should be (199229440 - 157286200) + 10485460 = 
 52428700 (52 MB) because the number of spilled records is 524287 and each 
 record costs 100B.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4049) plugin for generic shuffle service

2013-01-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563479#comment-13563479
 ] 

Hudson commented on MAPREDUCE-4049:
---

Integrated in Hadoop-Hdfs-trunk #1297 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1297/])
Amending MR CHANGES.txt to reflect that MAPREDUCE-4049/4809/4807/4808 are 
in branch-2 (Revision 1438799)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1438799
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt


 plugin for generic shuffle service
 --

 Key: MAPREDUCE-4049
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4049
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: performance, task, tasktracker
Affects Versions: 1.0.3, 1.1.0, 2.0.0-alpha, 3.0.0
Reporter: Avner BenHanoch
Assignee: Avner BenHanoch
  Labels: merge, plugin, rdma, shuffle
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-1.x.y.patch, Hadoop Shuffle Plugin Design.rtf, 
 MAPREDUCE-4049--branch-1.patch, mapreduce-4049.patch


 Support a generic shuffle service as a set of two plugins: ShuffleProvider & 
 ShuffleConsumer.
 This will satisfy the following needs:
 # Better shuffle and merge performance. For example: we are working on a 
 shuffle plugin that performs shuffle over RDMA in fast networks (10GbE, 40GbE, 
 or InfiniBand) instead of using the current HTTP shuffle. Based on the fast 
 RDMA shuffle, the plugin can also utilize a suitable merge approach during 
 the intermediate merges, hence getting much better performance.
 # Satisfy MAPREDUCE-3060 - generic shuffle service for avoiding hidden 
 dependency of NodeManager with a specific version of mapreduce shuffle 
 (currently targeted to 0.24.0).
 References:
 # Hadoop Acceleration through Network Levitated Merging, by Prof. Weikuan Yu 
 from Auburn University with others, 
 [http://pasl.eng.auburn.edu/pubs/sc11-netlev.pdf]
 # I am attaching 2 documents with suggested Top Level Design for both plugins 
 (currently, based on 1.0 branch)
 # I am providing link for downloading UDA - Mellanox's open source plugin 
 that implements generic shuffle service using RDMA and levitated merge.  
 Note: At this phase, the code is in C++ through JNI and you should consider 
 it as beta only.  Still, it can serve anyone that wants to implement or 
 contribute to levitated merge. (Please be advised that levitated merge is 
 mostly suited to very fast networks) - 
 [http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=144&menu_section=69]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4049) plugin for generic shuffle service

2013-01-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563485#comment-13563485
 ] 

Hudson commented on MAPREDUCE-4049:
---

Integrated in Hadoop-Mapreduce-trunk #1325 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1325/])
Amending MR CHANGES.txt to reflect that MAPREDUCE-4049/4809/4807/4808 are 
in branch-2 (Revision 1438799)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1438799
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt


 plugin for generic shuffle service
 --

 Key: MAPREDUCE-4049
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4049
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: performance, task, tasktracker
Affects Versions: 1.0.3, 1.1.0, 2.0.0-alpha, 3.0.0
Reporter: Avner BenHanoch
Assignee: Avner BenHanoch
  Labels: merge, plugin, rdma, shuffle
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-1.x.y.patch, Hadoop Shuffle Plugin Design.rtf, 
 MAPREDUCE-4049--branch-1.patch, mapreduce-4049.patch


 Support a generic shuffle service as a set of two plugins: ShuffleProvider & 
 ShuffleConsumer.
 This will satisfy the following needs:
 # Better shuffle and merge performance. For example: we are working on a 
 shuffle plugin that performs shuffle over RDMA in fast networks (10GbE, 40GbE, 
 or InfiniBand) instead of using the current HTTP shuffle. Based on the fast 
 RDMA shuffle, the plugin can also utilize a suitable merge approach during 
 the intermediate merges, hence getting much better performance.
 # Satisfy MAPREDUCE-3060 - generic shuffle service for avoiding hidden 
 dependency of NodeManager with a specific version of mapreduce shuffle 
 (currently targeted to 0.24.0).
 References:
 # Hadoop Acceleration through Network Levitated Merging, by Prof. Weikuan Yu 
 from Auburn University with others, 
 [http://pasl.eng.auburn.edu/pubs/sc11-netlev.pdf]
 # I am attaching 2 documents with suggested Top Level Design for both plugins 
 (currently, based on 1.0 branch)
 # I am providing link for downloading UDA - Mellanox's open source plugin 
 that implements generic shuffle service using RDMA and levitated merge.  
 Note: At this phase, the code is in C++ through JNI and you should consider 
 it as beta only.  Still, it can serve anyone that wants to implement or 
 contribute to levitated merge. (Please be advised that levitated merge is 
 mostly suited to very fast networks) - 
 [http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=144&menu_section=69]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4039) Sort Avoidance

2013-01-26 Thread Mariappan Asokan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563500#comment-13563500
 ] 

Mariappan Asokan commented on MAPREDUCE-4039:
-

Hi Anty,
  Now that MAPREDUCE-4809, MAPREDUCE-4807, MAPREDUCE-4808, and MAPREDUCE-4049 
are all committed, it is possible to implement sort avoidance as plugins for 
{{MapOutputCollector}} and {{ShuffleConsumerPlugin}} with a special 
implementation of {{MergeManager.}}

If you don't mind, I can assign this Jira to me and work on it.  Please let me 
know.

Thanks.

-- Asokan


 Sort Avoidance
 --

 Key: MAPREDUCE-4039
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4039
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
  Components: mrv2
Affects Versions: 0.23.2
Reporter: anty.rao
Assignee: anty
Priority: Minor
 Fix For: 0.23.2

 Attachments: IndexedCountingSortable.java, 
 MAPREDUCE-4039-branch-0.23.2.patch, MAPREDUCE-4039-branch-0.23.2.patch, 
 MAPREDUCE-4039-branch-0.23.2.patch


 Inspired by 
 [Tenzing|http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en//pubs/archive/37200.pdf],
  in 5.1 MapReduce Enhancements:
 {quote}*Sort Avoidance*. Certain operators such as hash join
 and hash aggregation require shuffling, but not sorting. The
 MapReduce API was enhanced to automatically turn off
 sorting for these operations. When sorting is turned off, the
 mapper feeds data to the reducer which directly passes the
 data to the Reduce() function bypassing the intermediate
 sorting step. This makes many SQL operators significantly
 more efficient.{quote}
 There are a lot of applications which need aggregation only, not sorting. 
 Using sorting to achieve aggregation is costly and inefficient. Without 
 sorting, an application can use a hash table or hash map to do aggregation 
 efficiently. But the application should bear in mind that reduce memory is 
 limited; it has to manage the reducer's memory itself and guard against 
 running out of memory. A map-side combiner is not supported; you can do hash 
 aggregation on the map side as a workaround.
 The following are the main points of the sort avoidance implementation:
 # add a configuration parameter ??mapreduce.sort.avoidance??, boolean type, 
 to turn the sort avoidance workflow on/off. The two types of workflow coexist.
 # key/value pairs emitted by the map function are sorted by partition only, 
 using a more efficient sorting algorithm: counting sort (a sketch follows 
 this description).
 # map-side merge uses a kind of byte merge, which just concatenates bytes 
 from the generated spills (read bytes in, write bytes out), without the 
 overhead of key/value serialization/deserialization and comparison that the 
 current version incurs.
 # reduce can start up as soon as any map output is available, in contrast to 
 the sort workflow, which must wait until all map outputs are fetched and 
 merged.
 # map output in memory can be directly consumed by reduce. When reduce can't 
 keep up with the speed of incoming map outputs, an in-memory merge thread 
 kicks in, merging in-memory map outputs onto disk.
 # on-disk files are read sequentially to feed reduce, in contrast to the 
 current implementation, which reads multiple files concurrently and results 
 in many disk seeks. Map outputs in memory take precedence over on-disk files 
 in feeding the reduce function.
 I have already implemented this feature based on Hadoop CDH3u3 and done some 
 performance evaluation; see [https://github.com/hanborq/hadoop] for details. 
 Now I'm willing to port it to YARN. Comments are welcome.
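
 Below is a minimal sketch of the "sort by partition only" step mentioned in 
 point 2 above (hypothetical types and names, not code from the attached 
 patch): a stable counting sort over partition ids needs no key comparisons 
 and runs in O(n + p) for n records and p partitions.

 {code}
 import java.util.List;

 public class PartitionCountingSort {
   /** Hypothetical record: a partition id plus the serialized key/value bytes. */
   static final class Record {
     final int partition;
     final byte[] keyValue;
     Record(int partition, byte[] keyValue) { this.partition = partition; this.keyValue = keyValue; }
   }

   /** Stable counting sort by partition id: O(n + p), no key comparisons. */
   static Record[] sortByPartition(List<Record> records, int numPartitions) {
     int[] offsets = new int[numPartitions + 1];
     for (Record r : records) {
       offsets[r.partition + 1]++;                 // histogram of records per partition
     }
     for (int p = 0; p < numPartitions; p++) {
       offsets[p + 1] += offsets[p];               // prefix sums: start offset of each partition run
     }
     Record[] out = new Record[records.size()];
     for (Record r : records) {
       out[offsets[r.partition]++] = r;            // place record; order kept within a partition
     }
     return out;
   }
 }
 {code}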

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4961) Map reduce running local should also go through ShuffleConsumerPlugin for enabling different MergeManager implementations

2013-01-26 Thread Mariappan Asokan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563539#comment-13563539
 ] 

Mariappan Asokan commented on MAPREDUCE-4961:
-

Hi Jerry,
  I agree with you that local jobs should also be supported by 
{{ShuffleConsumerPlugin.}}  However, we can simplify the patch if the default 
implementation {{Shuffle}} itself calls the static method {{merge()}} in 
{{Merger}} for the implementation of the {{runLocal()}} method.  There is no need to 
change {{MergeManager.}}  What do you think?

-- Asokan 

 Map reduce running local should also go through ShuffleConsumerPlugin for 
 enabling different MergeManager implementations
 -

 Key: MAPREDUCE-4961
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4961
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: trunk
Reporter: Jerry Chen
Assignee: Jerry Chen
 Attachments: MAPREDUCE-4961.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 MAPREDUCE-4049 provides the ability for a pluggable Shuffle, and MAPREDUCE-4080 
 extends Shuffle to be able to provide different MergeManager implementations. 
 While using these pluggable features, I find that when a map reduce job is 
 running locally, a RawKeyValueIterator is returned directly from a static 
 call to Merger.merge, which breaks the assumption that the Shuffle may provide 
 different merge methods, even though there is no copy phase in this situation.
 The use case: I am implementing a hash-based MergeManager that does not need a 
 sort on the map side, but when the map reduce job runs locally, the hash-based 
 MergeManager has no chance to be used, as the code goes directly to 
 Merger.merge. This makes the pluggable Shuffle and MergeManager incomplete.
 So we need to move the code calling Merger.merge from ReduceTask to the 
 ShuffleConsumerPlugin implementation, so that the Shuffle implementation can 
 decide how to do the merge and return the corresponding iterator.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4962) jobdetails.jsp uses display name instead of real name to get counters

2013-01-26 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563622#comment-13563622
 ] 

Alejandro Abdelnur commented on MAPREDUCE-4962:
---

+1

 jobdetails.jsp uses display name instead of real name to get counters
 -

 Key: MAPREDUCE-4962
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4962
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker, mrv1
Affects Versions: 1.1.1
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: MAPREDUCE-4962.patch


 jobdetails.jsp displays details for a job including its counters.  Counters 
 may have different real names and display names, but the display names are 
 used to look the counter values up, so counter values can incorrectly show up 
 as 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4962) jobdetails.jsp uses display name instead of real name to get counters

2013-01-26 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated MAPREDUCE-4962:
--

   Resolution: Fixed
Fix Version/s: 1.2.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Sandy. Committed to branch-1.

 jobdetails.jsp uses display name instead of real name to get counters
 -

 Key: MAPREDUCE-4962
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4962
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker, mrv1
Affects Versions: 1.1.1
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 1.2.0

 Attachments: MAPREDUCE-4962.patch


 jobdetails.jsp displays details for a job including its counters.  Counters 
 may have different real names and display names, but the display names are 
 used to look the counter values up, so counter values can incorrectly show up 
 as 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4049) plugin for generic shuffle service

2013-01-26 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563638#comment-13563638
 ] 

Alejandro Abdelnur commented on MAPREDUCE-4049:
---

Avner, as this is a new feature we should have the corresponding release notes. 
Please let me know if you want me to take care of it. Thx

 plugin for generic shuffle service
 --

 Key: MAPREDUCE-4049
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4049
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: performance, task, tasktracker
Affects Versions: 1.0.3, 1.1.0, 2.0.0-alpha, 3.0.0
Reporter: Avner BenHanoch
Assignee: Avner BenHanoch
  Labels: merge, plugin, rdma, shuffle
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-1.x.y.patch, Hadoop Shuffle Plugin Design.rtf, 
 MAPREDUCE-4049--branch-1.patch, mapreduce-4049.patch


 Support a generic shuffle service as a set of two plugins: ShuffleProvider & 
 ShuffleConsumer.
 This will satisfy the following needs:
 # Better shuffle and merge performance. For example: we are working on a 
 shuffle plugin that performs shuffle over RDMA in fast networks (10GbE, 40GbE, 
 or InfiniBand) instead of using the current HTTP shuffle. Based on the fast 
 RDMA shuffle, the plugin can also utilize a suitable merge approach during 
 the intermediate merges, hence getting much better performance.
 # Satisfy MAPREDUCE-3060 - generic shuffle service for avoiding hidden 
 dependency of NodeManager with a specific version of mapreduce shuffle 
 (currently targeted to 0.24.0).
 References:
 # Hadoop Acceleration through Network Levitated Merging, by Prof. Weikuan Yu 
 from Auburn University with others, 
 [http://pasl.eng.auburn.edu/pubs/sc11-netlev.pdf]
 # I am attaching 2 documents with suggested Top Level Design for both plugins 
 (currently, based on 1.0 branch)
 # I am providing link for downloading UDA - Mellanox's open source plugin 
 that implements generic shuffle service using RDMA and levitated merge.  
 Note: At this phase, the code is in C++ through JNI and you should consider 
 it as beta only.  Still, it can serve anyone that wants to implement or 
 contribute to levitated merge. (Please be advised that levitated merge is 
 mostly suited to very fast networks) - 
 [http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=144&menu_section=69]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4807) Allow MapOutputBuffer to be pluggable

2013-01-26 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563640#comment-13563640
 ] 

Alejandro Abdelnur commented on MAPREDUCE-4807:
---

Asokan, as this is a new feature we should have the corresponding release 
notes. Please let me know if you want me to take care of it. Thx

 Allow MapOutputBuffer to be pluggable
 -

 Key: MAPREDUCE-4807
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4807
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Affects Versions: 2.0.2-alpha
Reporter: Arun C Murthy
Assignee: Mariappan Asokan
 Fix For: 2.0.3-alpha

 Attachments: COMBO-mapreduce-4809-4807.patch, 
 COMBO-mapreduce-4809-4807.patch, COMBO-mapreduce-4809-4807.patch, 
 mapreduce-4807.patch, mapreduce-4807.patch, mapreduce-4807.patch, 
 mapreduce-4807.patch, mapreduce-4807.patch, mapreduce-4807.patch


 Allow MapOutputBuffer to be pluggable

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4807) Allow MapOutputBuffer to be pluggable

2013-01-26 Thread Mariappan Asokan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563680#comment-13563680
 ] 

Mariappan Asokan commented on MAPREDUCE-4807:
-

Hi Alejandro,
  How does the following sound?

*NEW FEATURE*
  *Allow the sort step in a map task to be pluggable.*

Please let me know.

Thanks.

-- Asokan


 Allow MapOutputBuffer to be pluggable
 -

 Key: MAPREDUCE-4807
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4807
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Affects Versions: 2.0.2-alpha
Reporter: Arun C Murthy
Assignee: Mariappan Asokan
 Fix For: 2.0.3-alpha

 Attachments: COMBO-mapreduce-4809-4807.patch, 
 COMBO-mapreduce-4809-4807.patch, COMBO-mapreduce-4809-4807.patch, 
 mapreduce-4807.patch, mapreduce-4807.patch, mapreduce-4807.patch, 
 mapreduce-4807.patch, mapreduce-4807.patch, mapreduce-4807.patch


 Allow MapOutputBuffer to be pluggable

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4964) JobLocalizer#localizeJobFiles can potentially write job.xml to the wrong user's directory

2013-01-26 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created MAPREDUCE-4964:
---

 Summary: JobLocalizer#localizeJobFiles can potentially write 
job.xml to the wrong user's directory
 Key: MAPREDUCE-4964
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4964
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1
Affects Versions: 1.1.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


In the following code, if jobs corresponding to different users (X and Y) are 
localized simultaneously, it is possible that jobconf can be written to the 
wrong user's directory. (X's job.xml can be written to Y's directory)

{code}
  public void localizeJobFiles(JobID jobid, JobConf jConf,
  Path localJobTokenFile, TaskUmbilicalProtocol taskTracker)
  throws IOException, InterruptedException {
localizeJobFiles(jobid, jConf,
lDirAlloc.getLocalPathForWrite(JOBCONF, ttConf), localJobTokenFile,
taskTracker);
  }
{code}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4964) JobLocalizer#localizeJobFiles can potentially write job.xml to the wrong user's directory

2013-01-26 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated MAPREDUCE-4964:


Attachment: MR-4964.patch

The patch sets the username on the conf before calling getLocalPathForWrite() to 
make sure the correct username is used.

 JobLocalizer#localizeJobFiles can potentially write job.xml to the wrong 
 user's directory
 -

 Key: MAPREDUCE-4964
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4964
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1
Affects Versions: 1.1.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: MR-4964.patch


 In the following code, if jobs corresponding to different users (X and Y) are 
 localized simultaneously, it is possible that jobconf can be written to the 
 wrong user's directory. (X's job.xml can be written to Y's directory)
 {code}
   public void localizeJobFiles(JobID jobid, JobConf jConf,
   Path localJobTokenFile, TaskUmbilicalProtocol taskTracker)
   throws IOException, InterruptedException {
 localizeJobFiles(jobid, jConf,
 lDirAlloc.getLocalPathForWrite(JOBCONF, ttConf), localJobTokenFile,
 taskTracker);
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4803) Duplicate copies of TestIndexCache.java

2013-01-26 Thread Mariappan Asokan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariappan Asokan updated MAPREDUCE-4803:


Status: Open  (was: Patch Available)

 Duplicate copies of TestIndexCache.java
 ---

 Key: MAPREDUCE-4803
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4803
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Mariappan Asokan
Assignee: Mariappan Asokan
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: mapreduce-4803.patch, mapreduce-4803.patch


 I am not sure whether it was intentional, but I found two identical copies of 
 TestIndexCache.java one in hadoop-mapreduce-client-core and the other in 
 hadoop-mapreduce-client-jobclient.
 If someone confirms me it was not intentional, I can submit a small patch on 
 this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4803) Duplicate copies of TestIndexCache.java

2013-01-26 Thread Mariappan Asokan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariappan Asokan updated MAPREDUCE-4803:


Attachment: mapreduce-4803.patch

 Duplicate copies of TestIndexCache.java
 ---

 Key: MAPREDUCE-4803
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4803
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Mariappan Asokan
Assignee: Mariappan Asokan
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: mapreduce-4803.patch, mapreduce-4803.patch


 I am not sure whether it was intentional, but I found two identical copies of
 TestIndexCache.java, one in hadoop-mapreduce-client-core and the other in
 hadoop-mapreduce-client-jobclient.
 If someone confirms that it was not intentional, I can submit a small patch for
 this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4803) Duplicate copies of TestIndexCache.java

2013-01-26 Thread Mariappan Asokan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariappan Asokan updated MAPREDUCE-4803:


Attachment: (was: mapreduce-4803.patch)

 Duplicate copies of TestIndexCache.java
 ---

 Key: MAPREDUCE-4803
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4803
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Mariappan Asokan
Assignee: Mariappan Asokan
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: mapreduce-4803.patch, mapreduce-4803.patch


 I am not sure whether it was intentional, but I found two identical copies of
 TestIndexCache.java, one in hadoop-mapreduce-client-core and the other in
 hadoop-mapreduce-client-jobclient.
 If someone confirms that it was not intentional, I can submit a small patch for
 this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4803) Duplicate copies of TestIndexCache.java

2013-01-26 Thread Mariappan Asokan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariappan Asokan updated MAPREDUCE-4803:


Status: Patch Available  (was: Open)

 Duplicate copies of TestIndexCache.java
 ---

 Key: MAPREDUCE-4803
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4803
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Mariappan Asokan
Assignee: Mariappan Asokan
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: mapreduce-4803.patch, mapreduce-4803.patch


 I am not sure whether it was intentional, but I found two identical copies of
 TestIndexCache.java, one in hadoop-mapreduce-client-core and the other in
 hadoop-mapreduce-client-jobclient.
 If someone confirms that it was not intentional, I can submit a small patch for
 this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4964) JobLocalizer#localizeJobFiles can potentially write job.xml to the wrong user's directory

2013-01-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563713#comment-13563713
 ] 

Karthik Kambatla commented on MAPREDUCE-4964:
-

Even though the race is hard to reproduce, I'll try to confirm whether or not
the patch works.

 JobLocalizer#localizeJobFiles can potentially write job.xml to the wrong 
 user's directory
 -

 Key: MAPREDUCE-4964
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4964
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1
Affects Versions: 1.1.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: MR-4964.patch


 In the following code, if jobs corresponding to different users (X and Y) are
 localized simultaneously, it is possible for the job conf to be written to the
 wrong user's directory (X's job.xml can be written to Y's directory).
 {code}
   public void localizeJobFiles(JobID jobid, JobConf jConf,
   Path localJobTokenFile, TaskUmbilicalProtocol taskTracker)
   throws IOException, InterruptedException {
 localizeJobFiles(jobid, jConf,
 lDirAlloc.getLocalPathForWrite(JOBCONF, ttConf), localJobTokenFile,
 taskTracker);
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4803) Duplicate copies of TestIndexCache.java

2013-01-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563723#comment-13563723
 ] 

Hadoop QA commented on MAPREDUCE-4803:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566655/mapreduce-4803.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3282//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3282//console

This message is automatically generated.

 Duplicate copies of TestIndexCache.java
 ---

 Key: MAPREDUCE-4803
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4803
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Mariappan Asokan
Assignee: Mariappan Asokan
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: mapreduce-4803.patch, mapreduce-4803.patch


 I am not sure whether it was intentional, but I found two identical copies of
 TestIndexCache.java, one in hadoop-mapreduce-client-core and the other in
 hadoop-mapreduce-client-jobclient.
 If someone confirms that it was not intentional, I can submit a small patch for
 this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4964) JobLocalizer#localizeJobFiles can potentially write job.xml to the wrong user's directory

2013-01-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563725#comment-13563725
 ] 

Karthik Kambatla commented on MAPREDUCE-4964:
-

The following stack trace shows that the conf is used to set the local dirs in
LocalDirAllocator, hence the need to set the user in the conf properly:

{noformat}
INFO org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext: 
New local dirs are: /tmp/hdfs/mrlocal/taskTracker/kasha 
Saved local dirs are: java.lang.Thread.getStackTrace(Thread.java:1479) 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:254)
 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:335)
 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:111)
 org.apache.hadoop.mapred.JobLocalizer.createWorkDir(JobLocalizer.java:464) 
org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:196)
 org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1336)
{noformat}
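
As a rough illustration of why the shared allocator can end up writing under the
wrong user's directory (a simplified sketch of the shared per-context state, not
the actual LocalDirAllocator code):

{code}
// Simplified sketch, not the real LocalDirAllocator$AllocatorPerContext.
// Two jobs localizing concurrently share this state, so whichever conf is
// read last decides which user's local dirs later writes go to.
import java.util.Arrays;

class AllocatorPerContextSketch {
  private String[] savedLocalDirs = new String[0];

  // Re-read the dirs whenever the conf handed in differs from what was saved,
  // e.g. .../taskTracker/<userX>/... replaced by .../taskTracker/<userY>/...
  synchronized void confChanged(String[] localDirsFromConf) {
    if (!Arrays.equals(savedLocalDirs, localDirsFromConf)) {
      savedLocalDirs = localDirsFromConf.clone();
    }
  }

  // A later getLocalPathForWrite()-style call writes under the saved dirs,
  // which may already belong to the other user.
  synchronized String pickWriteDir() {
    return savedLocalDirs.length == 0 ? null : savedLocalDirs[0];
  }
}
{code}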

 JobLocalizer#localizeJobFiles can potentially write job.xml to the wrong 
 user's directory
 -

 Key: MAPREDUCE-4964
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4964
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1
Affects Versions: 1.1.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: MR-4964.patch


 In the following code, if jobs corresponding to different users (X and Y) are
 localized simultaneously, it is possible for the job conf to be written to the
 wrong user's directory (X's job.xml can be written to Y's directory).
 {code}
   public void localizeJobFiles(JobID jobid, JobConf jConf,
   Path localJobTokenFile, TaskUmbilicalProtocol taskTracker)
   throws IOException, InterruptedException {
 localizeJobFiles(jobid, jConf,
 lDirAlloc.getLocalPathForWrite(JOBCONF, ttConf), localJobTokenFile,
 taskTracker);
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4964) JobLocalizer#localizeJobFiles can potentially write job.xml to the wrong user's directory

2013-01-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563726#comment-13563726
 ] 

Karthik Kambatla commented on MAPREDUCE-4964:
-

{noformat}
INFO org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext: 
New local dirs are: /tmp/hdfs/mrlocal/taskTracker/kasha 
Saved local dirs are: java.lang.Thread.getStackTrace(Thread.java:1479)
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:254)
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:335)
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:111)
org.apache.hadoop.mapred.JobLocalizer.createWorkDir(JobLocalizer.java:464)
org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:196)
org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1336)
{noformat}

 JobLocalizer#localizeJobFiles can potentially write job.xml to the wrong 
 user's directory
 -

 Key: MAPREDUCE-4964
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4964
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1
Affects Versions: 1.1.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: MR-4964.patch


 In the following code, if jobs corresponding to different users (X and Y) are
 localized simultaneously, it is possible for the job conf to be written to the
 wrong user's directory (X's job.xml can be written to Y's directory).
 {code}
   public void localizeJobFiles(JobID jobid, JobConf jConf,
   Path localJobTokenFile, TaskUmbilicalProtocol taskTracker)
   throws IOException, InterruptedException {
 localizeJobFiles(jobid, jConf,
 lDirAlloc.getLocalPathForWrite(JOBCONF, ttConf), localJobTokenFile,
 taskTracker);
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira