[ https://issues.apache.org/jira/browse/HADOOP-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12567461#action_12567461 ]

Arun C Murthy commented on HADOOP-2119:
---------------------------------------

bq. I guess you meant 'sparse matrix' conceptually. AFAIK java doesn't have any inbuilt implementation for sparse matrix.

No, I believe it _is_ a concrete _sparse matrix_. My original proposal (alluded to 
here: 
http://issues.apache.org/jira/browse/HADOOP-2119?focusedCommentId=12566309#action_12566309)
had something very similar; I haven't yet had a chance to sync up with Owen on his 
proposal...

Originally:

{code}
class TaskInProgress {
  // ...
  // Per-Location forward/backward links: each TIP sits on one doubly-linked
  // list per host and per rack it is local to.
  Map<Location, TaskInProgress> nextHostMap;
  Map<Location, TaskInProgress> prevHostMap;
  Map<Location, TaskInProgress> nextRackMap;
  Map<Location, TaskInProgress> prevRackMap;
  // ...
}

// Heads of the per-host and per-rack lists of runnable TIPs.
Map<Location, TaskInProgress> hostToTaskCache;
Map<Location, TaskInProgress> rackToTaskCache;
{code}

1. {{Location}} is either a node or a rack (see the sketch after this list).
2. The algorithm to find a new task to run is exactly the same as above.
3. The next/prev links are Maps because every TIP is node-local/rack-local to 
multiple Locations.
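
For concreteness, here is a minimal sketch of what such a {{Location}} key could look like; this class and its fields are purely illustrative assumptions, not part of the proposal or any patch:

{code}
// Purely illustrative sketch of a Location key: either a host or a rack.
// A real version would also need equals()/hashCode() on (type, name) so it
// can be used as a Map key.
class Location {
  enum Type { NODE, RACK }

  final Type type;
  final String name;   // host name or rack id

  Location(Type type, String name) {
    this.type = type;
    this.name = name;
  }
}
{code}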

The central idea is to keep all related scheduling information together and 
design an optimal way to delete a TIP from multiple Locations given only one 
Location (remember that, with the usual replication factor of 3, a TIP can be 
scheduled on up to 6 locations, i.e. 3 nodes and 3 racks, in the worst case). 
This is precisely the reason for having the backlinks (prev) ...
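
To make the constant-time claim concrete, here is a minimal sketch of the unlink for the host lists (the rack lists are symmetric via the rack maps); the method name and the direct field access are assumptions for illustration, not the actual implementation:

{code}
// Illustrative sketch only: unlink 'tip' from the doubly-linked list of one
// host Location, using just the prev/next back-links - a handful of pointer
// updates, i.e. O(1). Repeating this for each Location the TIP is keyed on
// removes it from all its lists.
void unlinkFromHostList(TaskInProgress tip, Location loc,
                        Map<Location, TaskInProgress> hostToTaskCache) {
  TaskInProgress prev = tip.prevHostMap.get(loc);
  TaskInProgress next = tip.nextHostMap.get(loc);
  if (prev != null) {
    prev.nextHostMap.put(loc, next);
  } else {
    hostToTaskCache.put(loc, next);   // tip was the head of this host's list
  }
  if (next != null) {
    next.prevHostMap.put(loc, prev);
  }
  tip.prevHostMap.remove(loc);
  tip.nextHostMap.remove(loc);
}
{code}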

bq. 1) I am not sure how easy it would be to insert into this structure.

Insert is always _insert-front_ or _insert-back_, which is O(1).
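
As a purely illustrative sketch (same assumed field names as above, not the actual implementation), the insert-front for a host list is just a few pointer updates:

{code}
// Illustrative sketch only: push 'tip' to the front of the list for host
// Location 'loc'; constant number of pointer updates, hence O(1).
// The rack lists would be handled identically via the rack maps.
void insertFront(TaskInProgress tip, Location loc,
                 Map<Location, TaskInProgress> hostToTaskCache) {
  TaskInProgress oldHead = hostToTaskCache.get(loc);
  tip.nextHostMap.put(loc, oldHead);
  tip.prevHostMap.remove(loc);          // no predecessor at the front
  if (oldHead != null) {
    oldHead.prevHostMap.put(loc, tip);
  }
  hostToTaskCache.put(loc, tip);
}
{code}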

bq. 2) Also how easily we can maintain the sortedness. This would be required 
for failed/running tasks.

This approach is _designed_ to eliminate any sorting at all. When a task fails 
we just insert-front the TIP on all its Location lists, which is again O(1).
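
For illustration, re-queuing a failed TIP could then look like the following sketch; {{splitHosts()}} and {{splitRacks()}} are hypothetical accessors invented here for the split's locations, not real methods:

{code}
// Illustrative sketch only: a failed TIP goes back to the front of every host
// list it belongs to (the rack lists are handled the same way with the rack
// maps). With the default replication factor that is at most 3 hosts + 3
// racks, so a fixed number of O(1) insert-fronts.
void reinsertFailed(TaskInProgress tip) {
  for (Location host : tip.splitHosts()) {   // hypothetical accessor
    insertFront(tip, host, hostToTaskCache);
  }
  // ... symmetric loop over tip.splitRacks() using the rack maps ...
}
{code}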

bq. 3) Also the implementation and space complexity will also be HIGH.

Overall, this is _way_ more efficient time-wise (every operation is constant 
time) at the cost of increased memory requirements. However, I believe it isn't 
prohibitively more expensive; it's the classic memory v/s performance tradeoff, 
and it is well spent here:
1. It is roughly 2x the current memory requirement; the _2x_ stems from the fact 
that each TIP is already on multiple lists today (in {{cachedTasks}}) and the 
only addition is the _prev_ links.
2. All of these are per-job data structures and are cleaned up when the job 
completes, and we aren't running into any memory-related issues at the 
JobTracker currently. Of course, with HoD this is moot.

Note on the implementation complexity: IMO it isn't insanely complicated, in 
fact it's fairly simple/obvious - but one man's meat is another man's poison... 
there, I managed to sound sage and philosophical. *smile*

> JobTracker becomes non-responsive if the task trackers finish task too fast
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-2119
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2119
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.16.0
>            Reporter: Runping Qi
>            Assignee: Amar Kamat
>            Priority: Critical
>             Fix For: 0.17.0
>
>         Attachments: hadoop-2119.patch, hadoop-jobtracker-thread-dump.txt
>
>
> I ran a job with 0 reducers on a cluster with 390 nodes.
> The mappers ran very fast.
> The jobtracker lagged behind in committing completed mapper tasks.
> The number of running mappers displayed on the web UI kept getting bigger and bigger.
> The job tracker eventually stopped responding to the web UI.
> No progress was reported afterwards.
> The job tracker is running on a separate node.
> The job tracker process consumed 100% CPU, with a VM size of 1.01g (reaching 
> the heap space limit).
