Apache9 commented on code in PR #7617:
URL: https://github.com/apache/hbase/pull/7617#discussion_r2721481232
##########
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ReplicationEndpoint.java:
##########
@@ -283,4 +283,35 @@ public int getTimeout() {
* @throws IllegalStateException if this service's state isn't FAILED.
*/
Throwable failureCause();
+
+ /**
+ * @return true if this endpoint buffers WAL entries and requires explicit flush control
+ * before persisting replication offsets.
+ */
+ default boolean isBufferedReplicationEndpoint() {
+ return false;
+ }
+
+ /**
+ * Maximum WAL size (bytes) to buffer before forcing a flush. Only meaningful when
+ * isBufferedReplicationEndpoint() == true.
+ */
+ default long getMaxBufferSize() {
+ return -1L;
+ }
+
+ /**
+ * Maximum time (ms) to wait before forcing a flush. Only meaningful when
+ * isBufferedReplicationEndpoint() == true.
+ */
+ default long maxFlushInterval() {
+ return Long.MAX_VALUE;
+ }
+
+ /**
+ * Hook invoked before persisting replication offsets. Buffered endpoints should
+ * flush/close WALs here.
+ */
+ default void beforePersistingReplicationOffset() {
Review Comment:
Sorry, this is exactly what we want to avoid here. This method just tells
the endpoint what the shipper is going to do, and the endpoint can do anything it
wants. For our normal replication framework, there are no flush or close
operations, as we always send everything out and do not return until we get acks,
so basically we do not need to implement this method. But for an S3 based
replication endpoint, we need to close the file to persist it on S3. Maybe in
the future we will have other types of replication endpoints which do other
work, so we do not want to name it `flushAndCloseWAL`.
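To make the naming point concrete, here is a minimal standalone sketch of the generic hook contract the comment describes. The interface mirrors the default methods in the PR, but `BufferedFileEndpoint` and its members are hypothetical illustrations, not part of HBase:

```java
// Generic hook contract: the shipper announces it is about to persist
// replication offsets; each endpoint decides what, if anything, to do.
interface ReplicationEndpoint {
  /** Normal push-based endpoints need nothing here: acks were already received. */
  default void beforePersistingReplicationOffset() {
  }

  default boolean isBufferedReplicationEndpoint() {
    return false;
  }
}

/** Hypothetical buffered endpoint that persists its buffer when the hook fires. */
class BufferedFileEndpoint implements ReplicationEndpoint {
  private final StringBuilder buffer = new StringBuilder();
  private String lastPersisted = "";

  void append(String walEntry) {
    buffer.append(walEntry).append('\n');
  }

  @Override
  public boolean isBufferedReplicationEndpoint() {
    return true;
  }

  @Override
  public void beforePersistingReplicationOffset() {
    // An S3-style endpoint would close/upload the current file here;
    // this sketch just marks the buffered entries as durably written.
    lastPersisted = buffer.toString();
    buffer.setLength(0);
  }

  String persisted() {
    return lastPersisted;
  }
}

public class HookSketch {
  public static void main(String[] args) {
    BufferedFileEndpoint ep = new BufferedFileEndpoint();
    ep.append("entry-1");
    ep.append("entry-2");
    // The shipper calls the hook before saving the replication offset,
    // without knowing (or naming) what the endpoint does inside it.
    ep.beforePersistingReplicationOffset();
    System.out.println(ep.persisted());
  }
}
```

Because the shipper only signals intent, the same hook covers the no-op default, file-closing endpoints, and any future endpoint behavior, which is why a name like `flushAndCloseWAL` would be too narrow.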
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]