Zhijie Shen commented on YARN-3448:

Jonathan, the patch looks good to me overall. I tried it locally and 
everything seems to work fine so far. Just some minor comments:

1. Why do we need to change {{hadoop-hdfs-client/pom.xml}}?

2. Why do we need {{fstConf}}?
184           <groupId>de.ruedigermoeller</groupId>
185           <artifactId>fst</artifactId>

3. We import {{*}} in {{RollingLevelDBTimelineStore}}; can we use explicit imports instead?

4. Should this be moved into the serviceInit phase?
309         if (conf.getBoolean(TIMELINE_SERVICE_TTL_ENABLE, true)) {
310           deletionThread = new EntityDeletionThread(conf);
311           deletionThread.start();
312         }

5. I'm wondering whether this will cause a compatibility problem, as existing 
users don't know they need to handle this new error code. On the other hand, 
we will declare ATSv1 stable in YARN-3539. Thoughts?
136         public static final int EXPIRED_ENTITY = 7;

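Regarding comment 4: a minimal, self-contained sketch (plain Java, only mimicking Hadoop's service lifecycle; class and config-key names here are illustrative, not the patch's actual code) of creating the deletion thread during init and starting it in the start phase:

```java
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;

public class LifecycleSketch {
    // Hypothetical config key, standing in for TIMELINE_SERVICE_TTL_ENABLE.
    static final String TIMELINE_SERVICE_TTL_ENABLE = "timeline.ttl.enable";

    Thread deletionThread;
    final AtomicBoolean running = new AtomicBoolean(false);

    // Analogue of serviceInit(Configuration): read config and build objects here,
    // after configuration is fully applied.
    void serviceInit(Properties conf) {
        if (Boolean.parseBoolean(conf.getProperty(TIMELINE_SERVICE_TTL_ENABLE, "true"))) {
            deletionThread = new Thread(() -> running.set(true), "EntityDeletionThread");
        }
    }

    // Analogue of serviceStart(): only start what init prepared.
    void serviceStart() {
        if (deletionThread != null) {
            deletionThread.start();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LifecycleSketch s = new LifecycleSketch();
        s.serviceInit(new Properties());
        s.serviceStart();
        s.deletionThread.join();
        System.out.println(s.running.get()); // true
    }
}
```

The point of the split is that init decides and constructs, while start only activates; a service that is initialized but never started then has no stray thread to clean up.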
> Add Rolling Time To Lives Level DB Plugin Capabilities
> ------------------------------------------------------
>                 Key: YARN-3448
>                 URL: https://issues.apache.org/jira/browse/YARN-3448
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Jonathan Eagles
>            Assignee: Jonathan Eagles
>              Labels: BB2015-05-TBR
>         Attachments: YARN-3448.1.patch, YARN-3448.10.patch, 
> YARN-3448.12.patch, YARN-3448.13.patch, YARN-3448.14.patch, 
> YARN-3448.15.patch, YARN-3448.16.patch, YARN-3448.2.patch, YARN-3448.3.patch, 
> YARN-3448.4.patch, YARN-3448.5.patch, YARN-3448.7.patch, YARN-3448.8.patch, 
> YARN-3448.9.patch
> For large applications, the majority of the time in LeveldbTimelineStore is 
> spent deleting old entities one record at a time. An exclusive write lock is 
> held during the entire deletion phase, which in practice can take hours. If 
> we relax some of the consistency constraints, other performance-enhancing 
> techniques can be employed to maximize throughput and minimize locking time.
> Split the 5 sections of the leveldb database (domain, owner, start time, 
> entity, index) into 5 separate databases. This allows each database to 
> maximize read cache effectiveness based on its own unique usage patterns, 
> making each lookup much faster. It also helps I/O to place the entity and 
> index databases on separate disks.
> Rolling DBs for the entity and index DBs. 99.9% of the data is in these two 
> sections, with at least a 4:1 ratio (index to entity) for Tez. If we create 
> a rolling set of databases that age out, we can replace record-at-a-time DB 
> removal with efficient file system removal. To do this we must place a 
> constraint that an entity's events always go into the rolling DB instance 
> matching the entity's start time. This lets us stitch the data back together 
> while reading and do artificial paging.
> Relax the synchronous write constraints. If we are willing to accept losing 
> some records that were not yet flushed by the operating system during a 
> crash, we can use async writes, which can be much faster.
> Prefer sequential writes. Sequential writes can be several times faster than 
> random writes. Spend some small effort arranging the writes so that they 
> trend toward sequential rather than random write performance.
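The rolling-DB constraint described above can be sketched as follows; the period length and names are illustrative assumptions, not the patch's actual code:

```java
public class RollingPeriod {
    // Hypothetical rolling period of one hour; the real period would be configurable.
    static final long ROLLING_PERIOD_MS = 60L * 60L * 1000L;

    // An entity's events always go to the DB instance for the entity's START
    // time, so whole instances age out together and can be deleted at the
    // filesystem level instead of record by record.
    static long dbInstanceFor(long startTimeMs) {
        return startTimeMs / ROLLING_PERIOD_MS;
    }

    public static void main(String[] args) {
        System.out.println(dbInstanceFor(0L));          // 0
        System.out.println(dbInstanceFor(3_600_000L));  // 1
    }
}
```

Because the mapping depends only on start time, a reader can compute exactly which rolling instances cover a query's time range and stitch their results back together.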
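A toy illustration of the write-arrangement idea, assuming a simple sorted staging buffer (class and method names are hypothetical): puts accumulate in a sorted map and are drained in ascending key order, which trends toward sequential write performance in a leveldb-style store. The same flush point is where one would also opt into non-synchronous writes per the async paragraph above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class SortedWriteBuffer {
    // Staging buffer kept sorted by key.
    private final TreeMap<String, String> buffer = new TreeMap<>();

    void put(String key, String value) {
        buffer.put(key, value);
    }

    // Drain in ascending key order, approximating sequential writes
    // regardless of the order the puts arrived in.
    List<String> drainKeysInOrder() {
        List<String> ordered = new ArrayList<>(buffer.keySet());
        buffer.clear();
        return ordered;
    }

    public static void main(String[] args) {
        SortedWriteBuffer b = new SortedWriteBuffer();
        b.put("entity!2", "v2");
        b.put("entity!1", "v1");
        b.put("entity!3", "v3");
        System.out.println(b.drainKeysInOrder()); // [entity!1, entity!2, entity!3]
    }
}
```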

This message was sent by Atlassian JIRA
