[
https://issues.apache.org/jira/browse/HBASE-21503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694133#comment-16694133
]
Hadoop QA commented on HBASE-21503:
-----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 57s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 12s{color} | {color:red} hbase-server: The patch generated 3 new + 9 unchanged - 0 fixed = 12 total (was 9) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red} 3m 8s{color} | {color:red} patch has 10 errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 132m 0s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 29s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 53s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21503 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948929/HBASE-21503-master.001.patch |
| Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 24c332b09732 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 405bf5e638 |
| maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/15080/artifact/patchprocess/diff-checkstyle-hbase-server.txt |
| shadedjars | https://builds.apache.org/job/PreCommit-HBASE-Build/15080/artifact/patchprocess/patch-shadedjars.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/15080/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HBASE-Build/15080/artifact/patchprocess/patch-asflicense-problems.txt |
| Max. process+thread count | 5101 (vs. ulimit of 10000) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/15080/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
This message was automatically generated.
> Replication normal source can get stuck due to potential race conditions between
> source wal reader and wal provider initialization threads.
> ----------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HBASE-21503
> URL: https://issues.apache.org/jira/browse/HBASE-21503
> Project: HBase
> Issue Type: Bug
> Components: Replication
> Reporter: Wellington Chevreuil
> Assignee: Wellington Chevreuil
> Priority: Blocker
> Fix For: 3.0.0, 2.2.0, 2.0.3, 2.1.2
>
> Attachments: HBASE-21503-master.001.patch, HBASE-21503.patch
>
>
> Noticed replication sources could get stuck while doing some tests that
> involved RS restart. In these cases, upon RS restart, the newly created
> normal source was reaching the wal end without recognising that the wal was
> still open for write, which leads to the wal being removed from the source
> queue. Thus, no new OPs get replicated unless this log goes to a recovery
> queue.
> Checking this further, my understanding is that, during restart, the RS will
> start replication services, which initialize ReplicationSourceManager and a
> ReplicationSource for each wal group id, in the sequence below:
> {noformat}
> HRegionServer -> Replication.startReplicationService() -> ReplicationSourceManager.init() -> add ReplicationSource
> {noformat}
> At this point, ReplicationSources have no paths yet, so the WAL reader thread
> is not running. ReplicationSourceManager is registered as a WAL listener, in
> order to get notified whenever a new wal file is available. During creation of
> the ReplicationSourceManager and ReplicationSource instances, a
> WALFileLengthProvider instance is obtained from WALProvider and cached by
> both ReplicationSourceManager and ReplicationSource. The default
> implementation for this WALFileLengthProvider is shown below, on the
> WALProvider interface:
> {noformat}
>   default WALFileLengthProvider getWALFileLengthProvider() {
>     return path -> getWALs().stream().map(w -> w.getLogFileSizeIfBeingWritten(path))
>         .filter(o -> o.isPresent()).findAny().orElse(OptionalLong.empty());
>   }
> {noformat}
> Notice that if WALProvider.getWALs returns an empty list, this
> WALFileLengthProvider instance is always going to return an empty result. This
> is relevant because when ReplicationSource finally starts the
> ReplicationSourceWALReader thread, it passes this WALFileLengthProvider, which
> is used by WALEntryStream (inside the wal reader) to determine if the wal is
> being written (and should be kept in the queue) here:
> {noformat}
>   private void tryAdvanceEntry() throws IOException {
>     if (checkReader()) {
>       boolean beingWritten = readNextEntryAndRecordReaderPosition();
>       LOG.trace("reading wal file {}. Current open for write: {}", this.currentPath, beingWritten);
>       if (currentEntry == null && !beingWritten) {
>         // no more entries in this log file, and the file is already closed, i.e, rolled
>         // Before dequeueing, we should always get one more attempt at reading.
>         // This is in case more entries came in after we opened the reader, and the log is rolled
>         // while we were reading. See HBASE-6758
>         resetReader();
>         readNextEntryAndRecordReaderPosition();
>         if (currentEntry == null) {
>           if (checkAllBytesParsed()) { // now we're certain we're done with this log file
>             dequeueCurrentLog();
>             if (openNextLog()) {
>               readNextEntryAndRecordReaderPosition();
>             }
>           }
>         }
>       }
>       ...
> {noformat}
> Here is the code snippet for the WALEntryStream.readNextEntryAndRecordReaderPosition()
> method that relies on the WALFileLengthProvider:
> {noformat}
>     ...
> #1  OptionalLong fileLength = walFileLengthProvider.getLogFileSizeIfBeingWritten(currentPath);
>     if (fileLength.isPresent() && readerPos > fileLength.getAsLong()) {
>       // see HBASE-14004, for AsyncFSWAL which uses fan-out, it is possible that we read uncommitted
>       // data, so we need to make sure that we do not read beyond the committed file length.
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("The provider tells us the valid length for " + currentPath + " is " +
>             fileLength.getAsLong() + ", but we have advanced to " + readerPos);
>       }
>       resetReader();
>       return true;
>     }
>     if (readEntry != null) {
>       metrics.incrLogEditsRead();
>       metrics.incrLogReadInBytes(readerPos - currentPositionOfEntry);
>     }
>     currentEntry = readEntry; // could be null
>     this.currentPositionOfReader = readerPos;
>     return fileLength.isPresent();
>     ...
> {noformat}
> The problem can occur because, when the wal file is actually created in the
> AbstractFSWALProvider.getWAL() method (snippet shown below), the line marked
> #2 triggers notification of the registered WALListeners, including
> ReplicationSourceManager, which will start the ReplicationSourceWALReader
> thread. If the ReplicationSourceWALReader thread reaches point #1 from the
> snippet above before the thread running AbstractFSWALProvider.getWAL() gets to
> point #3 below, then the WALFileLengthProvider will return empty and the wal
> will not be considered open, causing it to be dequeued:
> {noformat}
>   @Override
>   public T getWAL(RegionInfo region) throws IOException {
>     T walCopy = wal;
>     if (walCopy == null) {
>       // only lock when need to create wal, and need to lock since
>       // creating hlog on fs is time consuming
>       synchronized (walCreateLock) {
>         walCopy = wal;
>         if (walCopy == null) {
>           walCopy = createWAL();
>           boolean succ = false;
>           try {
> #2          walCopy.init();
>             succ = true;
>           } finally {
>             if (!succ) {
>               walCopy.close();
>             }
>           }
> #3        wal = walCopy;
>         }
>       }
>     }
>     return walCopy;
>   }
> {noformat}
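> To make the race window concrete, here is a minimal, self-contained sketch
> (hypothetical names, plain Java, not HBase code) that forces the bad
> interleaving deterministically: a reader started from "#2" consults an empty
> wal view before "#3" publishes the wal field:
> {noformat}
> import java.util.OptionalLong;
> import java.util.concurrent.CountDownLatch;
>
> public class RaceWindowDemo {
>   private static volatile Object wal; // mimics AbstractFSWALProvider.wal
>
>   // Mimics the default WALFileLengthProvider: no wal published yet => empty.
>   private static OptionalLong lengthIfBeingWritten() {
>     Object walCopy = wal;
>     return walCopy == null ? OptionalLong.empty() : OptionalLong.of(42L);
>   }
>
>   public static void main(String[] args) throws InterruptedException {
>     CountDownLatch readerDone = new CountDownLatch(1);
>     // "#2": init() notifies listeners, which start the reader thread.
>     Thread reader = new Thread(() -> {
>       // "#1": the reader asks for the length before "#3" has run.
>       System.out.println("open for write? " + lengthIfBeingWritten().isPresent()); // false
>       readerDone.countDown();
>     });
>     reader.start();
>     readerDone.await(); // force the interleaving described in the report
>     wal = new Object(); // "#3": the wal field is published too late
>     System.out.println("after #3: " + lengthIfBeingWritten().isPresent()); // true
>   }
> }
> {noformat}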
> This can be fixed by making AbstractFSWALProvider.getWALs reuse the
> AbstractFSWALProvider.getWAL method to obtain the WAL instance. Do we really
> have scenarios where we want to return no WAL instance? Another possibility
> would be to synchronize getWALs on the same lock currently used by getWAL.
> I am proposing an initial patch with the first solution (a rough sketch of the
> idea is below); after some tests, it does seem to be enough to fix the problem.
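> For illustration only, a rough sketch of the first option (this is not the
> attached patch; it assumes getWAL tolerates a null region for this
> single-wal provider):
> {noformat}
>   @Override
>   public List<WAL> getWALs() {
>     if (wal == null) {
>       try {
>         // Reuse the creation path, so a reader can never observe an empty
>         // list while the wal is being initialized.
>         getWAL(null);
>       } catch (IOException e) {
>         throw new UncheckedIOException(e);
>       }
>     }
>     return Collections.<WAL> singletonList(wal);
>   }
> {noformat}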
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)