[ https://issues.apache.org/jira/browse/YARN-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Eagles updated YARN-3448:
----------------------------------
    Description: 
For large applications, the majority of the time in LeveldbTimelineStore is 
spent deleting old entities one record at a time. An exclusive write lock is 
held during the entire deletion phase, which in practice can last hours. If we 
relax some of the consistency constraints, other performance-enhancing 
techniques can be employed to maximize throughput and minimize locking time.

Split the 5 sections of the leveldb database (domain, owner, start time, 
entity, index) into 5 separate databases. This allows each database to maximize 
read cache effectiveness based on its own unique usage pattern. With 5 separate 
databases, each lookup is much faster. It can also help I/O to place the entity 
and index databases on separate disks.
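As a sketch of what the split could look like, using the 
org.iq80.leveldb/leveldbjni API that LeveldbTimelineStore already builds on 
(the paths and cache sizes below are illustrative assumptions, not final 
values):

{code:java}
import java.io.File;
import java.io.IOException;
import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

public class SplitTimelineDBs {
  // One DB per section, so each block cache is sized for that section's
  // access pattern instead of all five sections competing for one cache.
  static DB open(File dir, long cacheBytes) throws IOException {
    Options opts = new Options().createIfMissing(true).cacheSize(cacheBytes);
    return JniDBFactory.factory.open(dir, opts);
  }

  public static void main(String[] args) throws IOException {
    File base = new File(args[0]);
    DB domain    = open(new File(base, "domain"),     8L << 20);
    DB owner     = open(new File(base, "owner"),      8L << 20);
    DB starttime = open(new File(base, "starttime"), 16L << 20);
    DB entity    = open(new File(base, "entity"),    64L << 20);  // bulk of the data
    DB index     = open(new File(base, "index"),    256L << 20);  // largest section
  }
}
{code}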

Use rolling DBs for the entity and index DBs. 99.9% of the data is in these 
two sections, at roughly a 4:1 ratio (index to entity), at least for Tez. We 
can replace record-at-a-time DB removal with file system removal if we create 
a rolling set of databases that age out and can be removed efficiently. To do 
this we must add a constraint that an entity's events are always placed into 
the correct rolling DB instance based on the entity's start time. This lets us 
stitch the data back together while reading and gives us artificial paging.
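A minimal sketch of the routing and aging rules, assuming a hypothetical 
one-hour roll period (the real period would be configurable):

{code:java}
import java.util.concurrent.TimeUnit;

public class RollingPeriod {
  // Hypothetical one-hour roll period; illustrative only.
  static final long ROLL_PERIOD_MS = TimeUnit.HOURS.toMillis(1);

  // Route every write for an entity by the entity's START time, never the
  // event time, so all of an entity's data lands in one rolling DB and can
  // be stitched back together on read.
  static long rollingDbFor(long entityStartTimeMs) {
    return (entityStartTimeMs / ROLL_PERIOD_MS) * ROLL_PERIOD_MS;
  }

  // Aging out becomes a cheap whole-directory delete of any rolling DB
  // whose newest possible record is past the TTL, replacing the
  // record-at-a-time deletes done under the exclusive write lock.
  static boolean isExpired(long rollingDbMs, long nowMs, long ttlMs) {
    return rollingDbMs + ROLL_PERIOD_MS + ttlMs < nowMs;
  }
}
{code}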

Relax the synchronous write constraint. If we are willing to accept losing 
some records that were not yet flushed by the operating system during a crash, 
we can use async writes, which can be much faster.
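With the org.iq80.leveldb API this is just a per-write flag; a sketch:

{code:java}
import org.iq80.leveldb.DB;
import org.iq80.leveldb.WriteOptions;

public class AsyncWrites {
  // sync(false) skips the per-write fsync. A machine crash can lose records
  // still buffered in the OS page cache, but throughput is much higher.
  private static final WriteOptions ASYNC = new WriteOptions().sync(false);

  static void put(DB db, byte[] key, byte[] value) {
    db.put(key, value, ASYNC);
  }
}
{code}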

Prefer sequential writes. Sequential writes can be several times faster than 
random writes. Spend some small effort arranging the writes so that they trend 
towards sequential rather than random write performance.
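One way to arrange that, sketched below with illustrative names: buffer puts 
in a key-ordered map and flush them as a single batch, so leveldb receives 
keys in ascending order and the write path trends sequential:

{code:java}
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.WriteBatch;
import org.iq80.leveldb.WriteOptions;

public class SortedBatchWriter {
  // Unsigned lexicographic byte order, matching leveldb's default comparator.
  static final Comparator<byte[]> KEY_ORDER = (a, b) -> {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int c = (a[i] & 0xff) - (b[i] & 0xff);
      if (c != 0) return c;
    }
    return a.length - b.length;
  };

  private final TreeMap<byte[], byte[]> pending = new TreeMap<>(KEY_ORDER);

  void put(byte[] key, byte[] value) {
    pending.put(key, value);
  }

  // Flush all buffered puts in ascending key order as one batch.
  void flush(DB db) throws java.io.IOException {
    try (WriteBatch batch = db.createWriteBatch()) {
      for (Map.Entry<byte[], byte[]> e : pending.entrySet()) {
        batch.put(e.getKey(), e.getValue());
      }
      db.write(batch, new WriteOptions().sync(false));
    }
    pending.clear();
  }
}
{code}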

  was:For large applications, the majority of the time in LeveldbTimelineStore 
is spent deleting old entities one record at a time. A write lock is held 
during the entire deletion phase, which in practice can last hours. An 
alternative is to create a rolling set of databases that age out and can be 
efficiently removed via a recursive directory delete. This removes the lock in 
the deletion thread, and clients and servers can share access to the underlying 
database, which already implements its own internal locking mechanism.


> Add Rolling Time To Lives Level DB Plugin Capabilities
> ------------------------------------------------------
>
>                 Key: YARN-3448
>                 URL: https://issues.apache.org/jira/browse/YARN-3448
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: Jonathan Eagles
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
