[ 
https://issues.apache.org/jira/browse/YARN-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14125990#comment-14125990
 ] 

Zhijie Shen commented on YARN-1530:
-----------------------------------

[~sjlee0], thanks for your feedback. Here are some additional thoughts and 
clarifications on your comments.

bq. This option would make sense only if the imports are less frequent.

To be more specific, I mean that sending the same amount of entities via HTTP 
REST or via HDFS should perform similarly (assuming the payload isn't too big; 
if it is, the HTTP REST request has to be chunked into a sequence of requests 
of reasonable size). HTTP REST may even be better because it involves less 
secondary-storage I/O (the network should be faster than disk). HTTP REST also 
doesn't prevent the user from batching entities and putting them in one call, 
and the current API supports that. It's up to the user to put an entity 
immediately for realtime/near-realtime queries, or to batch entities if some 
delay can be tolerated. A sketch of both styles against the existing client API 
follows below.
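
For concreteness, here is a minimal sketch against the current client API 
(TimelineClient / TimelineEntity); the entity and event types below are just 
made-up placeholders, not anything defined by YARN:

{code:java}
import java.io.IOException;

import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
import org.apache.hadoop.yarn.client.api.TimelineClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class TimelinePutExample {

  public static void main(String[] args) throws IOException, YarnException {
    TimelineClient client = TimelineClient.createTimelineClient();
    client.init(new YarnConfiguration());
    client.start();
    try {
      TimelineEntity entity = new TimelineEntity();
      entity.setEntityType("MY_FRAMEWORK_ENTITY"); // placeholder type
      entity.setEntityId("entity_0001");           // placeholder id
      entity.setStartTime(System.currentTimeMillis());

      TimelineEvent event = new TimelineEvent();
      event.setEventType("TASK_STARTED");          // placeholder event type
      event.setTimestamp(System.currentTimeMillis());
      entity.addEvent(event);

      // Put the entity immediately for realtime/near-realtime queries...
      TimelinePutResponse response = client.putEntities(entity);

      // ...or, since putEntities is varargs, accumulate several entities and
      // put them in a single call when some delay is tolerable:
      // client.putEntities(entity1, entity2, entity3);
    } finally {
      client.stop();
    }
  }
}
{code}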

However, I agree that HDFS or some other single-node storage technique is an 
interesting option for preventing the loss of entities that haven't been 
published to the timeline server yet, in particular when we batch them. A 
hypothetical sketch of such a spooling step is below.
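
Just to illustrate the idea (this is not part of any current design; the spool 
path and the JSON serialization are arbitrary assumptions), a client-side 
spooling step could look roughly like:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
import org.codehaus.jackson.map.ObjectMapper;

/**
 * Hypothetical client-side spool: write a batch of entities to HDFS first,
 * publish them to the timeline server afterwards, and delete the file only
 * once the put has succeeded. None of these names come from existing code.
 */
public class TimelineEntitySpool {

  private final FileSystem fs;
  private final Path spoolDir; // e.g. a per-app spool dir -- an assumption
  private final ObjectMapper mapper = new ObjectMapper();

  public TimelineEntitySpool(Configuration conf, Path spoolDir) throws IOException {
    this.fs = FileSystem.get(conf);
    this.spoolDir = spoolDir;
  }

  /** Persist the batch as JSON; delete the returned file after a successful put. */
  public Path spool(TimelineEntities batch) throws IOException {
    Path file = new Path(spoolDir, "batch-" + System.currentTimeMillis() + ".json");
    FSDataOutputStream out = fs.create(file);
    try {
      mapper.writeValue(out, batch);
    } finally {
      out.close();
    }
    return file;
  }
}
{code}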

bq. Regarding option (2), I think your point is valid that it would be a 
transition from a thin client to a fat client.
bq. However, I'm not too sure if it would make changing the data store much 
more complicated than other scenarios.

I'm also not very sure about the necessary changes. As I mentioned before, the 
timeline server doesn't simply put the entities into the data store. One 
immediate problem I can think of is authorization. I'm not sure it's logically 
correct to check the user's access in a client running on the user's side. If 
we move authorization into the data store, HBase supports access control, but 
LevelDB doesn't seem to. And I'm not sure HBase's access control is enough for 
the timeline server's specific logic. I still need to think more about it. (A 
purely illustrative sketch of the kind of server-side check I mean is below.)
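
To make the concern concrete, here is a purely illustrative sketch of the kind 
of server-side check that is awkward to push into an untrusted client; the 
class and interface names are hypothetical and do not correspond to the 
timeline server's actual code:

{code:java}
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;

/**
 * Illustrative only: the server consults the store for the owner it recorded
 * when the entity was first put, and rejects updates from other callers.
 * A client-side check could not be trusted in the same way, because the
 * caller controls the client code. All names here are hypothetical.
 */
public class SimpleTimelineAuthorizer {

  /** Hypothetical stand-in for the data store's owner lookup. */
  public interface EntityOwnerStore {
    String getOwner(String entityType, String entityId);
  }

  private final EntityOwnerStore ownerStore;

  public SimpleTimelineAuthorizer(EntityOwnerStore ownerStore) {
    this.ownerStore = ownerStore;
  }

  public void checkPutAccess(String callerUser, TimelineEntity entity)
      throws AccessControlException {
    String owner = ownerStore.getOwner(entity.getEntityType(), entity.getEntityId());
    if (owner != null && !owner.equals(callerUser)) {
      throw new AccessControlException(callerUser
          + " is not allowed to modify existing entity " + entity.getEntityId());
    }
  }
}
{code}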

As the client grows fatter, it becomes difficult to maintain different versions 
of it. For example, if we make an incompatible optimization to the storage 
schema, only the new client can write to it, while the old client will no 
longer work. Moreover, since most of the write logic would run in user land, 
which is not predictable, it is more likely to hit unexpected failures than a 
well-configured server. In general, I prefer to keep the client simple, so that 
future client distribution and maintenance take less effort.

bq. But then again, if we consider a scenario such as a cluster of ATS 
instances, the same problem exists there.

Right, the same problem will exist on the server side, but the web front end 
has isolated it from the users. Compared to the clients embedded in 
applications, the ATS instances are a relatively small, controllable set that 
we can pause and upgrade through a proper process. What do you think?

> [Umbrella] Store, manage and serve per-framework application-timeline data
> --------------------------------------------------------------------------
>
>                 Key: YARN-1530
>                 URL: https://issues.apache.org/jira/browse/YARN-1530
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Vinod Kumar Vavilapalli
>         Attachments: ATS-Write-Pipeline-Design-Proposal.pdf, 
> ATS-meet-up-8-28-2014-notes.pdf, application timeline design-20140108.pdf, 
> application timeline design-20140116.pdf, application timeline 
> design-20140130.pdf, application timeline design-20140210.pdf
>
>
> This is a sibling JIRA for YARN-321.
> Today, each application/framework has to store and serve per-framework 
> data all by itself as YARN doesn't have a common solution. This JIRA attempts 
> to solve the storage, management and serving of per-framework data from 
> various applications, both running and finished. The aim is to change YARN to 
> collect and store data in a generic manner with plugin points for frameworks 
> to do their own thing w.r.t interpretation and serving.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
