[ https://issues.apache.org/jira/browse/MAPREDUCE-64?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12799024#action_12799024 ]
Chris Douglas commented on MAPREDUCE-64:
----------------------------------------

The failure in {{TestRecoveryManager}} is not related:

{noformat}
2010-01-12 00:23:10,054 ERROR mapred.MiniMRCluster (MiniMRCluster.java:run(121)) - Job tracker crashed
java.lang.IllegalArgumentException: port out of range:-1
	at java.net.InetSocketAddress.<init>(InetSocketAddress.java:118)
	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:166)
	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:124)
	at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1409)
	at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:243)
	at org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:118)
	at java.lang.Thread.run(Thread.java:619)
{noformat}

This looks like HADOOP-4744 in the JobTracker, which fails to start, so {{MiniMRCluster.JobTrackerRunner.tracker}} is null. Instead of logging the error, MiniMRCluster should fail the test (MAPREDUCE-1366). The test also passes on my machine.

> Map-side sort is hampered by io.sort.record.percent
> ---------------------------------------------------
>
>                 Key: MAPREDUCE-64
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-64
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Arun C Murthy
>            Assignee: Chris Douglas
>         Attachments: M64-0.patch, M64-0i.png, M64-1.patch, M64-1i.png, M64-2.patch, M64-2i.png, M64-3.patch, M64-4.patch, M64-5.patch, M64-6.patch, M64-7.patch
>
>
> Currently io.sort.record.percent is a fairly obscure, per-job configurable, expert-level parameter which controls how much accounting space is available for records in the map-side sort buffer (io.sort.mb). Typical values for io.sort.mb (100) and io.sort.record.percent (0.05) imply that we can store ~350,000 records in the buffer before necessitating a sort/combine/spill. However, for many applications which deal with small records, e.g.
> the world-famous wordcount and its family, this implies we can only use 5-10% of io.sort.mb, i.e. 5-10M, before we spill, in spite of having _much_ more memory available in the sort buffer. Wordcount, for example, results in ~12 spills (given an HDFS block size of 64M). The presence of a combiner exacerbates the problem by piling on serialization/deserialization of records too...
> Sure, jobs can configure io.sort.record.percent, but it's tedious and obscure; we really can do better by getting the framework to automagically pick it, using all available memory (up to io.sort.mb) for either the data or accounting.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
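The arithmetic behind the ~350,000-record figure in the description can be sketched as follows. This is a rough illustration, not the MapTask code itself: the 16-byte per-record accounting cost and the ~20-byte serialized wordcount record size are assumptions for the sake of the example.

```java
public class SortBufferMath {
    public static void main(String[] args) {
        final long ioSortBytes = 100L << 20;   // io.sort.mb = 100, in bytes
        final double recordPercent = 0.05;     // io.sort.record.percent default
        final int acctBytesPerRecord = 16;     // assumed per-record metadata cost

        // Split the buffer into an accounting region and a data region.
        long acctBytes = (long) (ioSortBytes * recordPercent);
        long maxRecords = acctBytes / acctBytesPerRecord;
        long dataBytes = ioSortBytes - acctBytes;

        System.out.println("accounting bytes : " + acctBytes);   // 5242880 (5M)
        System.out.println("max records      : " + maxRecords);  // 327680
        System.out.println("data bytes       : " + dataBytes);   // 99614720 (95M)

        // With ~20-byte records, the accounting region fills after only
        // maxRecords * 20 bytes of data -- a small fraction of the 95M
        // data region, which is why small-record jobs spill so early.
        long dataUsedAtSpill = maxRecords * 20;
        System.out.println("data at spill    : " + dataUsedAtSpill); // 6553600
    }
}
```

Under these assumptions the accounting region is exhausted after ~6.5M of the 95M data region is used, which matches the 5-10% utilization and ~12 spills per 64M block described above.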