[ https://issues.apache.org/jira/browse/YARN-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958283#comment-14958283 ]

Shiwei Guo commented on YARN-3448:
----------------------------------

[~jeagles], I noticed that patch.12 has the following changes compared to 
patch.10:
1. 
{code:title=RollingLevelDBTimelineStore#putEntities}
  // write entity marker
  // ... ignore code to prepare markerKey
  byte[] markerValue = writeReverseOrderedLong(startAndInsertTime
      .insertTime);
  writeBatch.put(markerKey, markerValue);
{code}
is changed to 
{code}
  // write entity marker
  // ... ignore code to prepare markerKey
  writeBatch.put(markerKey, EMPTY_BYTES);
{code}
and 2.
{code:title=TestRollingLevelDBTimelineStore.java}
  @Test
  public void testGetEntitiesWithFromTs() throws IOException {
    super.testGetEntitiesWithFromTs();
  }
{code}
is changed to 
{code}
  @Test
  public void testGetEntitiesWithFromTs() throws IOException {
    // feature not supported
  }
{code}

What's the reason for this change? I found that if we keep the patch.10 
version of this code, plus a small bug fix (shown below), the 
testGetEntitiesWithFromTs test case passes, so we can support the 
getEntitiesWithFromTs feature. Maybe I'm missing something.
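
For reference, my understanding of why the marker value matters to this 
feature (an illustration only, not code from the patch):
{code:title=Sketch: marker value round trip}
// writeReverseOrderedLong encodes a long so that byte-wise ordering is
// descending; readReverseOrderedLong decodes it back. In patch.10 the
// marker value carried the entity's insert time this way, which is the
// value the fromTs filter reads in getEntityByTime below.
long insertTime = System.currentTimeMillis();
byte[] markerValue = writeReverseOrderedLong(insertTime);
long decoded = readReverseOrderedLong(markerValue, 0); // == insertTime
// With patch.12's EMPTY_BYTES marker there is nothing to decode.
{code}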

The bug fix in getEntityByTime:
{code:title=RollingLevelDBTimelineStore#getEntityByTime}
if (fromTs != null) {
  long insertTime = readReverseOrderedLong(iterator.peekNext()
      .getValue(), 0);
  if (insertTime > fromTs) {
    byte[] firstKey = key;
    while (iterator.hasNext()) {
      key = iterator.peekNext().getKey();

      // BUG: the iterator advances before the prefix check, so a
      // non-matching key (the start of the next entity) is consumed
      // and the outer loop never sees it.
      iterator.next();
      if (!prefixMatches(firstKey, kp.getOffset(), key)) {
        break;
      }
    }
    continue;
  }
}
{code}
change to
{code}
if (fromTs != null) {
  long insertTime = readReverseOrderedLong(iterator.peekNext()
      .getValue(), 0);
  if (insertTime > fromTs) {
    byte[] firstKey = key;
    while (iterator.hasNext()) {
      key = iterator.peekNext().getKey();

      // FIX: check the prefix before advancing, so a non-matching key
      // is left in place for the outer loop to process.
      if (!prefixMatches(firstKey, kp.getOffset(), key)) {
        break;
      }
      iterator.next();
    }
    continue;
  }
}
{code}
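
In the patch.10 order, iterator.next() consumes the non-matching key before 
the prefix check runs, so the outer loop skips the first key of the next 
entity entirely; checking the prefix before advancing leaves that key for 
the outer loop, which as far as I can tell is why the test passes with this 
change.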

> Add Rolling Time To Lives Level DB Plugin Capabilities
> ------------------------------------------------------
>
>                 Key: YARN-3448
>                 URL: https://issues.apache.org/jira/browse/YARN-3448
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Jonathan Eagles
>            Assignee: Jonathan Eagles
>             Fix For: 2.8.0
>
>         Attachments: YARN-3448.1.patch, YARN-3448.10.patch, 
> YARN-3448.12.patch, YARN-3448.13.patch, YARN-3448.14.patch, 
> YARN-3448.15.patch, YARN-3448.16.patch, YARN-3448.17.patch, 
> YARN-3448.2.patch, YARN-3448.3.patch, YARN-3448.4.patch, YARN-3448.5.patch, 
> YARN-3448.7.patch, YARN-3448.8.patch, YARN-3448.9.patch
>
>
> For large applications, the majority of the time in LeveldbTimelineStore is 
> spent deleting old entities one record at a time. An exclusive write lock is 
> held during the entire deletion phase which in practice can be hours. If we 
> are to relax some of the consistency constraints, other performance enhancing 
> during the entire deletion phase which in practice can be hours. If we are to 
> relax some of the consistency constraints, other performance enhancing 
> techniques can be employed to maximize the throughput and minimize locking 
> time.
> Split the 5 sections of the leveldb database (domain, owner, start time, 
> entity, index) into 5 separate databases. This allows each database to 
> maximize the read cache effectiveness based on the unique usage patterns of 
> each database. With 5 separate databases each lookup is much faster. This can 
> also help with I/O to have the entity and index databases on separate disks.
> Rolling DBs for entity and index DBs. 99.9% of the data is in these two 
> sections, at a 4:1 ratio (index to entity), at least for Tez. We replace DB 
> record removal with file system removal if we create a rolling set of 
> databases that age out and can be efficiently removed. To do this we must 
> place a constraint to always put an entity's events into its correct rolling 
> db instance based on start time. This allows us to stitch the data back 
> together at read time, with artificial paging.
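> A minimal sketch of the routing idea (names and the period value are 
> illustrative, not from the patch):
> {code}
> // Route each entity's writes to the rolling instance covering its start
> // time; expiring a period then means deleting one database directory
> // instead of deleting records one at a time.
> long rollingPeriodMs = 60 * 60 * 1000L;      // assumed period: 1 hour
> long instanceStartMs = startTime - (startTime % rollingPeriodMs);
> String dbName = "entity-" + instanceStartMs; // hypothetical naming
> {code}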
> Relax the synchronous writes constraints. If we are willing to accept losing 
> some records that were not flushed by the operating system before a crash, we 
> can use async writes that can be much faster.
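> A sketch with the leveldb Java API (illustrative; the actual patch may 
> differ):
> {code}
> // sync(false) lets the write return once it reaches the OS buffer cache;
> // a crash can lose recently written records, trading some durability for
> // throughput.
> db.write(writeBatch, new WriteOptions().sync(false));
> {code}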
> Prefer sequential writes. Sequential writes can be several times faster than 
> random writes. Spend some small effort arranging the writes in a way that 
> trends towards sequential write performance over random write performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
