Hi Nick,
Apologies for the delayed reply.
If you look at the current patch, we are just making sure there is no
behaviour change when moving the archival to an async thread, so I don't
expect any behaviour change there. The existing behaviour was to abort the
server on an archival failure, and now we
[ https://issues.apache.org/jira/browse/HBASE-25128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Guanghao Zhang resolved HBASE-25128.
Fix Version/s: 2.4.0, 3.0.0-alpha-1
Resolution: Fixed
Pushed to
[ https://issues.apache.org/jira/browse/HBASE-25214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Busbey resolved HBASE-25214.
-
Resolution: Duplicate
This is a duplicate of HBASE-24802. Please follow our work over there.
Why are we not just leaving things IA.Private? What are we trying to
enable? Don't downstream clients just need DoNotRetryIOException and
IOException?
If we're going to go with LimitedPrivate, is it worth defining an LP
audience that means "you can refer safely to this class name but you
should not
[ https://issues.apache.org/jira/browse/HBASE-25207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Duo Zhang resolved HBASE-25207.
---
Hadoop Flags: Reviewed
Resolution: Fixed
Pushed to branch-2.2+.
Thanks [~brfrn169] for
openlookeng created HBASE-25214:
---
Summary: HTrace is not maintained. Please fix these
vulnerabilities?
Key: HBASE-25214
URL: https://issues.apache.org/jira/browse/HBASE-25214
Project: HBase
niuyulin created HBASE-25213:
Summary: Should request Compaction after bulkLoadHFiles is done
Key: HBASE-25213
URL: https://issues.apache.org/jira/browse/HBASE-25213
Project: HBase
Issue Type:
Let's wait for a while to see if there is other feedback.
Yulin Niu wrote on Wed, Oct 21, 2020, at 11:58 AM:
> So, we introduce a new annotation, IA.LimitedPrivate(Exception), to decorate
> the exceptions, which are free to be caught and propagated by users but
> should not be created by users themselves.
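The idea in the quoted proposal can be sketched in plain Java. This is a hypothetical illustration, not HBase's actual annotation machinery: the annotation type, the exception class name, and the class `AudienceSketch` are all made up for this example; HBase's real marker lives in the Yetus audience-annotations artifact.

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AudienceSketch {

    // Hypothetical stand-in for the proposed IA.LimitedPrivate("Exception")
    // marker: classes carrying it may be caught and propagated by users,
    // but should not be instantiated outside the project's internals.
    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface LimitedPrivate {
        String value();
    }

    // Hypothetical internal exception decorated with the marker.
    @LimitedPrivate("Exception")
    static class ExampleInternalException extends java.io.IOException {
        ExampleInternalException(String msg) {
            super(msg);
        }
    }

    public static void main(String[] args) {
        // A downstream client can legitimately catch it via the public
        // IOException supertype, without ever naming or constructing it.
        try {
            throw new ExampleInternalException("queue full");
        } catch (java.io.IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The point of the audience split is visible here: user code only depends on `IOException`, so the internal class can evolve freely.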
More procedure workers (hbase.master.procedure.threads 5 => 16) can reduce
the time from 8 mins to 6 mins. But branch-2.3 used the same 5 procedure
workers.
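For reference, the worker-count change described above would be made in hbase-site.xml; `hbase.master.procedure.threads` is the setting named in this thread, and the values below are the ones quoted (5 workers by default, 16 in the experiment):

```xml
<!-- hbase-site.xml: raise the number of master procedure workers.
     The thread reports 5 workers taking ~8 min and 16 workers ~6 min
     for the same UT run. -->
<property>
  <name>hbase.master.procedure.threads</name>
  <value>16</value>
</property>
```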
Guanghao Zhang wrote on Thu, Oct 22, 2020, at 2:22 PM:
> And I am sure that it is not the lock problem, because there is no
> "Waiting on xlock for"
And I am sure that it is not the lock problem, because there is no "Waiting
on xlock for" log.
LOG.info("Waiting on xlock for {} held by pid={}", procedure,
regionLocks[i].getExclusiveLockProcIdOwner());
Guanghao Zhang wrote on Thu, Oct 22, 2020, at 2:12 PM:
> Run the same UT
Run the same UT TestRegionReplicaFailover on my local PC, mvn clean test
-Dtest=TestRegionReplicaFailover, branch-2.2 takes 8 mins but branch-2.3
only needs 2 mins.
I found the problem is related to procedure schedule. See the below log:
2020-10-21 13:52:28,097 INFO [PEWorker-1]