Github user VladRodionov commented on a diff in the pull request:

    https://github.com/apache/incubator-ratis/pull/4#discussion_r214147512
  
    --- Diff: ratis-logservice/src/main/java/org/apache/ratis/logservice/api/LogReader.java ---
    @@ -0,0 +1,53 @@
    +/**
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.ratis.logservice.api;
    +
    +import java.io.IOException;
    +import java.nio.ByteBuffer;
    +import java.util.List;
    +
    +/**
    + * Synchronous client interface to read from a LogStream.
    + */
    +public interface LogReader extends AutoCloseable {
    +
    +  /**
    +   * Seeks to the position before the record at the provided {@code recordId} in the LogStream.
    +   *
    +   * @param recordId A non-negative record id in the LogStream
    +   * @throws IOException if the seek fails
    +   */
    +  void seek(long recordId) throws IOException;
    --- End diff --
    
    >> Moving to a specific point in a LogStream is absolutely a necessary API call
    
    I think everyone here agrees with you. The issue is keeping a separate index for records. I do not know how this was implemented in Ratis, but in any case, maintaining an index costs CPU and I/O and slows everything down. If it already exists - fine, we can reuse it. To resume HBase replication, an offset in the log stream (not a record id) would suffice (in our use case, of course).
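
    For concreteness, here is a rough sketch (not part of this pull request; the interface name and method names below are hypothetical) of what an offset-based reader could look like, so that a consumer such as HBase replication checkpoints and resumes by byte offset and the log service does not have to maintain a per-record index:

        import java.io.IOException;

        // Hypothetical sketch only; names are illustrative, not an API in this PR.
        public interface OffsetBasedLogReader extends AutoCloseable {

          // Position the reader at the given byte offset in the log stream;
          // no per-record index has to be maintained by the log service.
          void seekToOffset(long offset) throws IOException;

          // Byte offset of the next record to be read, suitable for
          // checkpointing so the consumer can resume after a restart.
          long currentOffset() throws IOException;
        }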
     

