[jira] Resolved: (MAPREDUCE-623) Resolve javac warnings in mapred

2010-02-18 Thread Chris Douglas (JIRA)

 [ https://issues.apache.org/jira/browse/MAPREDUCE-623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas resolved MAPREDUCE-623.
-------------------------------------

   Resolution: Fixed
Fix Version/s: 0.20.2
               (was: 0.21.0)

I committed this to the 0.20 branch.

 Resolve javac warnings in mapred
 --------------------------------

 Key: MAPREDUCE-623
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-623
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: build
Reporter: Jothi Padmanabhan
Assignee: Jothi Padmanabhan
 Fix For: 0.20.2

 Attachments: M623-0v20.patch, mapreduce-623.patch


 Towards a solution for HADOOP-5628, we need to resolve all javac warnings. 
 This jira will try to resolve javac warnings wherever possible and suppress 
 them where resolution is not possible.
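
 As an illustration of the suppression side (hypothetical class and field 
 names, not taken from the actual patch), an unavoidable unchecked warning can 
 be silenced at the narrowest possible scope:
 {code}
 // Hypothetical example: suppress an unavoidable unchecked cast on the one
 // method that needs it, rather than on the whole class.
 public class LegacyConfigReader {
   @SuppressWarnings("unchecked")
   private java.util.List<String> readNames(java.util.Map rawConf) {
     // The raw Map comes from a pre-generics API, so javac cannot prove the
     // cast safe and the warning is suppressed here.
     return (java.util.List<String>) rawConf.get("names");
   }
 }
 {code}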

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1503) Push HADOOP-6551 into MapReduce

2010-02-18 Thread Owen O'Malley (JIRA)
Push HADOOP-6551 into MapReduce
-------------------------------

 Key: MAPREDUCE-1503
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1503
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Owen O'Malley


We need to throw readable exceptions instead of returning false.
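
A minimal sketch of the pattern being asked for (the method and message below 
are hypothetical, not the actual MapReduce code):

{code}
// Hypothetical sketch: instead of returning false and losing the reason,
// fail with an exception whose message says exactly what went wrong.
import java.io.File;
import java.io.IOException;

public class OutputDirSetup {
  public static void ensureOutputDir(File dir) throws IOException {
    if (!dir.mkdirs() && !dir.isDirectory()) {
      throw new IOException("Failed to create output directory "
          + dir.getAbsolutePath());
    }
  }
}
{code}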

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1504) SequenceFile.Reader constructor leaking resources

2010-02-18 Thread Zheng Shao (JIRA)
SequenceFile.Reader constructor leaking resources
--------------------------------------------------

 Key: MAPREDUCE-1504
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1504
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
Reporter: Zheng Shao


When the {{SequenceFile.Reader}} constructor throws an {{IOException}} (because 
the file does not conform to the {{SequenceFile}} format), the underlying stream 
is leaked: the caller never gets a reference to the reader, so it has no way to 
close it.

We should call {{in.close()}} inside the constructor before rethrowing, to make 
sure we don't leak resources (the file descriptor, the connection to the 
datanode, etc.).
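
A rough sketch of the intended fix (simplified and with illustrative names, not 
the actual Hadoop source):

{code}
// Illustrative sketch: if initialization fails, close the stream before
// rethrowing, so a half-constructed reader does not leak the descriptor.
import java.io.IOException;
import java.io.InputStream;

public class LeakSafeReader {
  private final InputStream in;

  public LeakSafeReader(InputStream in) throws IOException {
    this.in = in;
    try {
      readHeader();              // may throw if the format check fails
    } catch (IOException e) {
      try {
        in.close();              // release the descriptor / datanode connection
      } catch (IOException ignored) {
        // keep the original failure as the primary exception
      }
      throw e;
    }
  }

  private void readHeader() throws IOException {
    // header parsing and format validation would go here
  }
}
{code}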


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1505) Job class should create the rpc client only when needed

2010-02-18 Thread Devaraj Das (JIRA)
Job class should create the rpc client only when needed
--------------------------------------------------------

 Key: MAPREDUCE-1505
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1505
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client
Affects Versions: 0.20.2
Reporter: Devaraj Das
 Fix For: 0.22.0


It would be good to have org.apache.hadoop.mapreduce.Cluster create the RPC 
client object only when it is actually needed (i.e., when a call to the 
JobTracker is actually required). org.apache.hadoop.mapreduce.Job constructs the 
Cluster object internally, and in many cases the application that created the 
Job object only wants to look at the configuration. It would help to avoid 
opening these connections to the JobTracker, especially when Job is used inside 
tasks (e.g., Pig calls mapreduce.FileInputFormat.setInputPath in the tasks, and 
that requires a Job object to be passed).

In Hadoop 0.20, the Job object internally creates the JobClient object, and the 
same argument applies there too.
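
A minimal sketch of the lazy-initialization pattern being proposed (class and 
field names here are hypothetical, not the actual Cluster code):

{code}
// Hypothetical sketch: defer creating the RPC proxy until the first call
// that actually needs to talk to the JobTracker.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class LazyCluster {
  private final Configuration conf;
  private Object rpcClient;            // stands in for the real proxy type

  public LazyCluster(Configuration conf) {
    this.conf = conf;                  // cheap: no connection is opened here
  }

  // Configuration-only callers never trigger a JobTracker connection.
  public Configuration getConf() {
    return conf;
  }

  synchronized Object getClient() throws IOException {
    if (rpcClient == null) {
      rpcClient = connectToJobTracker();   // opened on first real use
    }
    return rpcClient;
  }

  private Object connectToJobTracker() throws IOException {
    // real code would create the RPC proxy to the JobTracker here
    return new Object();
  }
}
{code}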


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1506) Assertion failure in TestTaskTrackerMemoryManager

2010-02-18 Thread Aaron Kimball (JIRA)
Assertion failure in TestTaskTrackerMemoryManager
--------------------------------------------------

 Key: MAPREDUCE-1506
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1506
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Aaron Kimball
 Attachments: TEST-org.apache.hadoop.mapred.TestTaskTrackerMemoryManager.txt

With asserts enabled, TestTaskTrackerMemoryManager sometimes fails. As far as I 
can tell, this is because some tasks are marked as FAILED/TIPFAILED while 
others are marked SUCCEEDED.

This can be reproduced by applying MAPREDUCE-1092 and then running {{ant clean 
test -Dtestcase=TestTaskTrackerMemoryManager}}.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (MAPREDUCE-1093) Java assertion failures triggered by tests

2010-02-18 Thread Aaron Kimball (JIRA)

 [ https://issues.apache.org/jira/browse/MAPREDUCE-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron Kimball resolved MAPREDUCE-1093.
--------------------------------------

Resolution: Invalid

 Java assertion failures triggered by tests
 ------------------------------------------

 Key: MAPREDUCE-1093
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1093
 Project: Hadoop Map/Reduce
  Issue Type: Test
Reporter: Eli Collins
Assignee: Aaron Kimball
 Attachments: MAPREDUCE-1093.patch


 While running the tests with java asserts enabled, the following two asserts 
 fired:
 {code}
 testStateRefresh in TestQueueManager:
 try {
   Job job = submitSleepJob(10, 2, 10, 10, true, null, "default");
   assert(job.isSuccessful());   // <== this assert fired
 } catch (Exception e) {
 {code}
 {code}
 runJobExceedingMemoryLimit in TestTaskTrackerMemoryManager:
 for (TaskCompletionEvent tce : taskComplEvents) {
   // Every task HAS to fail
   assert (tce.getTaskStatus() == TaskCompletionEvent.Status.TIPFAILED ||
           tce.getTaskStatus() == TaskCompletionEvent.Status.FAILED);   // <== this assert fired
 }
 {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1507) The old MapReduce API is only partially deprecated

2010-02-18 Thread Tom White (JIRA)
The old MapReduce API is only partially deprecated
--------------------------------------------------

 Key: MAPREDUCE-1507
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1507
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Tom White
Assignee: Tom White


Not all of the old API is currently marked as deprecated. For example, 
org.apache.hadoop.mapred.OutputFormat is deprecated, but 
org.apache.hadoop.mapred.FileOutputFormat isn't.
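
For illustration, the missing annotation would look something like this (a 
sketch only; the existing class body is unchanged and omitted here):

{code}
// Sketch: give the old-API class the same treatment as the classes that are
// already deprecated, pointing users at the new-API replacement.
package org.apache.hadoop.mapred;

/**
 * @deprecated Use
 * {@link org.apache.hadoop.mapreduce.lib.output.FileOutputFormat} instead.
 */
@Deprecated
public abstract class FileOutputFormat<K, V> implements OutputFormat<K, V> {
  // existing implementation unchanged
}
{code}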

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1508) NPE in TestMultipleLevelCaching on error cleanup path

2010-02-18 Thread Aaron Kimball (JIRA)
NPE in TestMultipleLevelCaching on error cleanup path
-----------------------------------------------------

 Key: MAPREDUCE-1508
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1508
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Reporter: Aaron Kimball


In its finally block, TestMultipleLevelCaching dereferences objects that may 
not have been initialized, which causes an NPE on the error cleanup path.
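
The usual fix is to initialize the handles to null and guard the cleanup; a 
hypothetical sketch (names are illustrative, not the actual test code):

{code}
// Illustrative sketch: only tear down what was actually brought up, so a
// failure early in setup does not turn into an NPE in the finally block.
import java.io.Closeable;
import java.io.IOException;
import java.io.StringReader;

public class GuardedCleanupExample {
  static void runTest() throws IOException {
    Closeable fileSys = null;
    Closeable cluster = null;
    try {
      fileSys = openFileSystem();     // may throw before cluster is assigned
      cluster = startCluster();
      // ... test body ...
    } finally {
      if (cluster != null) {
        cluster.close();
      }
      if (fileSys != null) {
        fileSys.close();
      }
    }
  }

  static Closeable openFileSystem() throws IOException {
    return new StringReader("");      // placeholder resource
  }

  static Closeable startCluster() throws IOException {
    return new StringReader("");      // placeholder resource
  }
}
{code}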

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1509) Partition size of Hadoop Archives should be configurable

2010-02-18 Thread Rodrigo Schmidt (JIRA)
Partition size of Hadoop Archives should be configurable


 Key: MAPREDUCE-1509
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1509
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: harchive
Reporter: Rodrigo Schmidt
Assignee: Rodrigo Schmidt


The partition size of Hadoop Archives is fixed at 2G:

  static final long partSize = 2 * 1024 * 1024 * 1024l;

We should make it a configurable parameter so that users can define it on the 
command line, e.g.:

$ hadoop archive -partsize 4 
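
A rough sketch of the configurable version (the property name and helper below 
are made up for illustration, not the committed change):

{code}
// Hypothetical sketch: read the part size from the job configuration instead
// of a hard-coded constant, falling back to the current 2G default.
import org.apache.hadoop.conf.Configuration;

public class ArchivePartSize {
  static final String PART_SIZE_KEY = "har.partfile.size";        // illustrative name
  static final long DEFAULT_PART_SIZE = 2L * 1024 * 1024 * 1024;  // 2G

  static long getPartSize(Configuration conf) {
    return conf.getLong(PART_SIZE_KEY, DEFAULT_PART_SIZE);
  }
}
{code}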




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-02-18 Thread Rodrigo Schmidt (JIRA)
RAID should regenerate parity files if they get deleted
-------------------------------------------------------

 Key: MAPREDUCE-1510
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1510
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/raid
Reporter: Rodrigo Schmidt
Assignee: Rodrigo Schmidt


Currently, if a source file has a replication factor lower than or equal to the 
one expected by RAID, the file is skipped and no parity file is generated. I 
don't think this is good behavior, since parity files can be wrongly deleted, 
leaving the source file with a low replication factor. In that case, RAID 
should be able to recreate the parity file.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1511) Examples should not use deprecated APIs

2010-02-18 Thread Tom White (JIRA)
Examples should not use deprecated APIs
---------------------------------------

 Key: MAPREDUCE-1511
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1511
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: job submission
Reporter: Tom White
Assignee: Tom White


MAPREDUCE-777 deprecated some APIs which are still being used by the examples. 
This issue is to fix the examples so they use the replacements.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.