[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318862#comment-16318862
 ] 

Eric Yang edited comment on YARN-7605 at 1/9/18 6:24 PM:
---------------------------------------------------------

[~jianhe] If we load the spec from HDFS, then the information shown would be:

When the application is running:
{code}
{  
   "name":"qqq",
   "id":"application_1515463608990_0002",
   "lifetime":-1,
   "components":[  
      {  
         "name":"sleeper",
         "dependencies":[  
         ],
         "resource":{  
            "cpus":1,
            "memory":"256"
         },
         "state":"FLEXING",
         "configuration":{  
            "properties":{  
            },
            "env":{  
            },
            "files":[  
            ]
         },
         "quicklinks":[  
         ],
         "containers":[  
         ],
         "launch_command":"sleep 900000",
         "number_of_containers":2,
         "run_privileged_container":false
      }
   ],
   "configuration":{  
      "properties":{  
      },
      "env":{  
      },
      "files":[  
      ]
   },
   "state":"FAILED",
   "quicklinks":{  
   },
   "kerberos_principal":{  
      "principal_name":"hbase/[email protected]",
      "keytab":"file:///etc/security/keytabs/hbase.service.keytab"
   }
}
{code}

When the application is not running, the JSON shown is:
{code}
{  
   "name":"qqq",
   "id":"application_1515463608990_0002",
   "lifetime":-1,
   "components":[  
      {  
         "name":"sleeper",
         "dependencies":[  
         ],
         "resource":{  
            "cpus":1,
            "memory":"256"
         },
         "state":"STOPPED",
         "configuration":{  
            "properties":{  
            },
            "env":{  
            },
            "files":[  
            ]
         },
         "quicklinks":[  
         ],
         "containers":[  
         ],
         "launch_command":"sleep 900000",
         "number_of_containers":2,
         "run_privileged_container":false
      }
   ],
   "configuration":{  
      "properties":{  
      },
      "env":{  
      },
      "files":[  
      ]
   },
   "state":"FAILED",
   "quicklinks":{  
   },
   "kerberos_principal":{  
      "principal_name":"hbase/[email protected]",
      "keytab":"file:///etc/security/keytabs/hbase.service.keytab"
   }
}
{code}

Without loading the spec from HDFS:

When the application is running, the JSON shown is:

{code}
{  
   "name":"qqq",
   "id":"application_1515463608990_0002",
   "lifetime":-1,
   "components":[  
      {  
         "name":"sleeper",
         "dependencies":[  
         ],
         "resource":{  
            "cpus":1,
            "memory":"256"
         },
         "state":"FLEXING",
         "configuration":{  
            "properties":{  
            },
            "env":{  
            },
            "files":[  
            ]
         },
         "quicklinks":[  
         ],
         "containers":[  
         ],
         "launch_command":"sleep 900000",
         "number_of_containers":2,
         "run_privileged_container":false
      }
   ],
   "configuration":{  
      "properties":{  
      },
      "env":{  
      },
      "files":[  
      ]
   },
   "state":"FAILED",
   "quicklinks":{  
   },
   "kerberos_principal":{  
      "principal_name":"hbase/[email protected]",
      "keytab":"file:///etc/security/keytabs/hbase.service.keytab"
   }
}
{code}

When the application is not running, the JSON shown is:
{code}
{  
   "name":"q1",
   "components":[  
   ],
   "configuration":{  
      "properties":{  
      },
      "env":{  
      },
      "files":[  
      ]
   },
   "state":"ACCEPTED",
   "quicklinks":{  
   },
   "kerberos_principal":{  
   }
}
{code}
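Based only on the samples above, a UI could tell whether a response carries the full spec by checking for fields that the partial response omits, such as {{id}} and a populated {{components}} list. A minimal, hypothetical Python sketch (the field heuristic is an assumption drawn from these samples, not a documented contract):

```python
import json

def is_full_spec(status_json: str) -> bool:
    """Heuristic from the samples above: the partial response (spec not
    loaded from HDFS, application not running) omits "id" and has an
    empty "components" list."""
    status = json.loads(status_json)
    return "id" in status and bool(status.get("components"))

# Trimmed versions of the two "not running" responses above.
full = ('{"name":"qqq","id":"application_1515463608990_0002",'
        '"components":[{"name":"sleeper","state":"STOPPED"}],"state":"FAILED"}')
partial = '{"name":"q1","components":[],"state":"ACCEPTED"}'
```

With such a check the UI could fall back to another information source only when the partial form is returned.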

HDFS load only increases when the application is not running.  If the code 
logic is written properly, the AJAX client will only need to retrieve a stopped 
application's spec once.  Therefore, the concern about overloading HDFS is not 
a real issue.  Without the ability to retrieve the full spec, the UI would have 
to get the information from another location, such as the history timeline 
server.  I think it is useful to return the spec information as part of the 
status response, because we do not have another API to retrieve the spec at 
this time.  If the community thinks that returning partial information is 
better, then I will revert the getStatus call changes.
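The caching argument above can be sketched as follows. This is a hypothetical illustration, not the actual UI or ApiServer code: a stopped application's spec, once fetched (e.g. from HDFS via the REST API), is served from a cache, so repeated polling adds no HDFS load.

```python
# Cache of specs for stopped applications, keyed by application id.
_spec_cache = {}

def get_stopped_spec(app_id, load_from_hdfs):
    """Return the spec for a stopped application, loading it at most once."""
    if app_id not in _spec_cache:
        _spec_cache[app_id] = load_from_hdfs(app_id)
    return _spec_cache[app_id]

# Stand-in for the real HDFS-backed retrieval.
load_count = 0
def fake_loader(app_id):
    global load_count
    load_count += 1
    return {"id": app_id, "state": "STOPPED"}

# Two polls, but the loader runs only on the first.
get_stopped_spec("application_1515463608990_0002", fake_loader)
spec = get_stopped_spec("application_1515463608990_0002", fake_loader)
```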



> Implement doAs for Api Service REST API
> ---------------------------------------
>
>                 Key: YARN-7605
>                 URL: https://issues.apache.org/jira/browse/YARN-7605
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>             Fix For: yarn-native-services
>
>         Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager 
> RPC calls.  This change helped centralize YARN metadata under the yarn user 
> instead of crawling through every user's home directory to find metadata.  
> The next step is to make sure "doAs" calls work properly for the API Service. 
>  The metadata is stored by the YARN user, but the actual workload still needs 
> to be performed as the end user, hence the API service must authenticate the 
> end user's Kerberos credential and perform a doAs call when requesting 
> containers via ServiceClient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
