[ 
https://issues.apache.org/jira/browse/HADOOP-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12717210#action_12717210
 ] 

rahul k singh edited comment on HADOOP-5919 at 6/8/09 2:46 AM:
---------------------------------------------------------------

All physical-memory-related configuration has been removed.

mapred.task.default.maxvmem ==> removed completely.
mapred.tasktracker.vmem.reserved ==> this key is removed. Admins can attain 
similar behaviour using the keys mapred.cluster.map.memory.mb and 
mapred.cluster.reduce.memory.mb. 

mapred.task.limit.maxvmem ==> split into mapred.cluster.max.map.memory.mb and 
mapred.cluster.max.reduce.memory.mb. 
For example, if you set "mapred.task.limit.maxvmem" = 10, then 
"mapred.cluster.max.map.memory.mb" = 10/1024; the same applies to the reduce key.

mapred.task.maxvmem ==> split into mapred.job.map.memory.mb and 
mapred.job.reduce.memory.mb. The set behaviour is the same as for 
"mapred.task.limit.maxvmem".

Design.
Assumption:

1. In case there is a 1-1 mapping for a key, the value of the deprecated key is 
applied to the new key.

Data Structures.
{code}
// map from a deprecated key to its mapping
Map<String, DeprecatedKeyMapping> deprecatedKeys;

// Skeleton of DeprecatedKeyMapping
static class DeprecatedKeyMapping {
  String[] keyMappings;
  String customMessage;
}
{code}

For keys that are removed, keyMappings would be null and the entry would carry 
only a custom message.
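
A hedged sketch of how the map could be populated (the constructor shown here is 
assumed for illustration and is not part of the skeleton above):
{code}
// Illustrative sketch only: registering deprecated keys.
Map<String, DeprecatedKeyMapping> deprecatedKeys =
    new HashMap<String, DeprecatedKeyMapping>();

// deprecated key that maps to new keys
deprecatedKeys.put("mapred.task.maxvmem",
    new DeprecatedKeyMapping(
        new String[] {"mapred.job.map.memory.mb", "mapred.job.reduce.memory.mb"},
        "mapred.task.maxvmem is deprecated; use mapred.job.map.memory.mb and "
            + "mapred.job.reduce.memory.mb"));

// removed key: keyMappings is null, only a custom message is kept
deprecatedKeys.put("mapred.task.default.maxvmem",
    new DeprecatedKeyMapping(null,
        "mapred.task.default.maxvmem has been removed"));
{code}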


While setting the conf, if there is a 1-1 mapping, Configuration takes care of 
it; in case a deprecated key maps to multiple new keys, the respective code 
paths set the values.
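
A rough sketch of that behaviour, written as a standalone helper rather than 
the actual Configuration change:
{code}
// Illustrative sketch only, not the actual Configuration patch.
void setWithDeprecation(Configuration conf, String name, String value) {
  DeprecatedKeyMapping mapping = deprecatedKeys.get(name);
  if (mapping == null) {
    conf.set(name, value);                    // not a deprecated key
  } else if (mapping.keyMappings != null && mapping.keyMappings.length == 1) {
    conf.set(mapping.keyMappings[0], value);  // 1-1 mapping: forward to the new key
  } else {
    // removed key, or one deprecated key mapping to several new keys:
    // warn and leave the setting to the respective code paths
    System.err.println(mapping.customMessage);
  }
}
{code}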


> Memory management variables need a backwards compatibility option after 
> HADOOP-5881
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-5919
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5919
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: rahul k singh
>            Priority: Blocker
>
> HADOOP-5881 modified variables related to memory management without looking 
> at the backwards compatibility angle. This JIRA is to address the gap. Marking 
> it a blocker for 0.20.1.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
