[jira] [Commented] (HBASE-21688) Address WAL filesystem issues
[ https://issues.apache.org/jira/browse/HBASE-21688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756972#comment-16756972 ]

Hadoop QA commented on HBASE-21688:
-----------------------------------

(x) *{color:red}-1 overall{color}*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || branch-2.1 Compile Tests ||
| 0 | mvndep | 0m 13s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 38s | branch-2.1 passed |
| +1 | compile | 2m 35s | branch-2.1 passed |
| +1 | checkstyle | 1m 32s | branch-2.1 passed |
| +1 | shadedjars | 4m 0s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 2m 5s | branch-2.1 passed |
| +1 | javadoc | 0m 40s | branch-2.1 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 26s | the patch passed |
| +1 | compile | 2m 24s | the patch passed |
| +1 | javac | 2m 24s | the patch passed |
| -1 | checkstyle | 1m 16s | hbase-server: The patch generated 5 new + 119 unchanged - 0 fixed = 124 total (was 119) |
| -1 | checkstyle | 0m 16s | hbase-it: The patch generated 3 new + 111 unchanged - 0 fixed = 114 total (was 111) |
| -1 | whitespace | 0m 0s | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) with tabs. |
| +1 | shadedjars | 4m 8s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 9m 17s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| +1 | findbugs | 2m 31s | the patch passed |
| +1 | javadoc | 0m 40s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 23m 57s | hbase-server in the patch failed. |
| +1 | unit | 0m 45s | hbase-it in the patch passed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 66m 54s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestMasterWALManager |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-21688 |
| JIRA Patch URL |
[jira] [Commented] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756956#comment-16756956 ]

Hadoop QA commented on HBASE-21773:
-----------------------------------

(x) *{color:red}-1 overall{color}*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || master Compile Tests ||
| 0 | mvndep | 0m 26s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 40s | master passed |
| +1 | compile | 1m 8s | master passed |
| +1 | checkstyle | 0m 42s | master passed |
| +1 | shadedjars | 4m 37s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 1m 21s | master passed |
| +1 | javadoc | 0m 36s | master passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 46s | the patch passed |
| +1 | compile | 1m 5s | the patch passed |
| -1 | javac | 0m 34s | hbase-mapreduce generated 2 new + 155 unchanged - 3 fixed = 157 total (was 158) |
| -1 | checkstyle | 0m 19s | hbase-mapreduce: The patch generated 1 new + 43 unchanged - 1 fixed = 44 total (was 44) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 36s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 9m 43s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| +1 | findbugs | 1m 38s | the patch passed |
| +1 | javadoc | 0m 32s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 2m 35s | hbase-common in the patch passed. |
| +1 | unit | 15m 9s | hbase-mapreduce in the patch passed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 55m 51s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21773 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12957012/HBASE-21773.master.004.patch |
| Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 2d3f5a6585f5 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / c90e9ff5ef |
| maven |
[jira] [Commented] (HBASE-21809) Add retry thrift client for ThriftTable/Admin
[ https://issues.apache.org/jira/browse/HBASE-21809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756948#comment-16756948 ]

Allan Yang commented on HBASE-21809:
------------------------------------

v2 addresses the misuse of operationtimeout.

> Add retry thrift client for ThriftTable/Admin
> ---------------------------------------------
>
>                 Key: HBASE-21809
>                 URL: https://issues.apache.org/jira/browse/HBASE-21809
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Allan Yang
>            Assignee: Allan Yang
>            Priority: Major
>             Fix For: 3.0.0, 2.2.0
>
>         Attachments: HBASE-21809.patch, HBASE-21809v2.patch
>
>
> This is for ThriftTable/Admin to handle exceptions like connection loss.
> It is only available for the http thrift client. For clients using TSocket, it
> is not so easy to implement a retry client; maybe later.


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
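Neither attachment is reproduced in this digest, but the idea of a retry client — wrap each remote call and re-issue it when a connection-type failure surfaces — can be sketched in plain Java. This is a minimal illustration under assumed names (`RetryingCaller` and its parameters are invented), not the actual ThriftTable/Admin code from the patch:

```java
import java.util.concurrent.Callable;

/**
 * Minimal retry wrapper: re-runs an operation when it fails with an
 * IOException (e.g. connection loss), up to a bounded number of retries.
 */
class RetryingCaller {
  private final int maxRetries;
  private final long pauseMillis;

  RetryingCaller(int maxRetries, long pauseMillis) {
    this.maxRetries = maxRetries;
    this.pauseMillis = pauseMillis;
  }

  <T> T call(Callable<T> op) throws Exception {
    Exception last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return op.call();
      } catch (java.io.IOException e) {
        // Connection-type failure: remember it, pause, and try again.
        last = e;
        Thread.sleep(pauseMillis);
      }
    }
    throw last; // retries exhausted
  }
}
```

A real client would additionally rebuild the underlying transport before retrying and back off exponentially rather than pausing a fixed interval.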
[jira] [Updated] (HBASE-21809) Add retry thrift client for ThriftTable/Admin
[ https://issues.apache.org/jira/browse/HBASE-21809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allan Yang updated HBASE-21809:
-------------------------------
    Attachment: HBASE-21809v2.patch

> Add retry thrift client for ThriftTable/Admin
> ---------------------------------------------
>
>                 Key: HBASE-21809
>                 URL: https://issues.apache.org/jira/browse/HBASE-21809
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Allan Yang
>            Assignee: Allan Yang
>            Priority: Major
>             Fix For: 3.0.0, 2.2.0
>
>         Attachments: HBASE-21809.patch, HBASE-21809v2.patch
>
>
> This is for ThriftTable/Admin to handle exceptions like connection loss.
> And only available for http thrift client. For client using TSocket, it is
> not so easy to implement a retry client, maybe later.
[jira] [Updated] (HBASE-21764) Size of in-memory compaction thread pool should be configurable
[ https://issues.apache.org/jira/browse/HBASE-21764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zheng Hu updated HBASE-21764:
-----------------------------
    Attachment: HBASE-21764.v9.patch

> Size of in-memory compaction thread pool should be configurable
> ---------------------------------------------------------------
>
>                 Key: HBASE-21764
>                 URL: https://issues.apache.org/jira/browse/HBASE-21764
>             Project: HBase
>          Issue Type: Sub-task
>          Components: in-memory-compaction
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>             Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5
>
>         Attachments: HBASE-21764.v1.patch, HBASE-21764.v2.patch,
> HBASE-21764.v3.patch, HBASE-21764.v4.patch, HBASE-21764.v5.patch,
> HBASE-21764.v6.patch, HBASE-21764.v7.patch, HBASE-21764.v8.patch,
> HBASE-21764.v9.patch
>
>
> In RegionServicesForStores, we can see:
> {code}
> private static final int POOL_SIZE = 10;
> private static final ThreadPoolExecutor INMEMORY_COMPACTION_POOL =
>     new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 60, TimeUnit.SECONDS,
>         new LinkedBlockingQueue<>(),
>         new ThreadFactory() {
>           @Override
>           public Thread newThread(Runnable r) {
>             String name = Thread.currentThread().getName() + "-inmemoryCompactions-" +
>                 System.currentTimeMillis();
>             return new Thread(r, name);
>           }
>         });
> {code}
> The pool size should be configurable, because if there are many regions on a
> regionserver, the default 10 threads will not be enough.
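The direction of the fix is simply to read the pool size from configuration instead of the hard-coded constant. A minimal sketch of that idea, using `java.util.Properties` as a stand-in for HBase's `Configuration` (the property key below is illustrative, not necessarily the key the patch introduces):

```java
import java.util.Properties;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/** Builds the in-memory compaction pool with a configurable, defaulted size. */
class CompactionPoolFactory {
  static final int DEFAULT_POOL_SIZE = 10;
  // Hypothetical key name, for illustration only.
  static final String POOL_SIZE_KEY = "hbase.regionserver.inmemory.compaction.pool.size";

  static ThreadPoolExecutor create(Properties conf) {
    // Fall back to the old constant when the key is unset.
    int poolSize = Integer.parseInt(
        conf.getProperty(POOL_SIZE_KEY, String.valueOf(DEFAULT_POOL_SIZE)));
    return new ThreadPoolExecutor(poolSize, poolSize, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(),
        r -> new Thread(r, "inmemoryCompactions-" + System.currentTimeMillis()));
  }
}
```

Keeping the old value as the default preserves behavior for existing deployments while letting an operator with many regions per regionserver raise the ceiling.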
[jira] [Commented] (HBASE-21775) The BufferedMutator doesn't ever refresh region location cache
[ https://issues.apache.org/jira/browse/HBASE-21775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756918#comment-16756918 ]

Duo Zhang commented on HBASE-21775:
-----------------------------------

Thanks lads.

> The BufferedMutator doesn't ever refresh region location cache
> --------------------------------------------------------------
>
>                 Key: HBASE-21775
>                 URL: https://issues.apache.org/jira/browse/HBASE-21775
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>            Reporter: Tommy Li
>            Assignee: Tommy Li
>            Priority: Major
>             Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.1.3, 2.0.5, 1.3.4
>
>         Attachments: HBASE-21775-ADDENDUM.master.001.patch,
> HBASE-21775.master.001.patch,
> org.apache.hadoop.hbase.client.TestAsyncProcess-with-HBASE-21775.txt,
> org.apache.hadoop.hbase.client.TestAsyncProcess-without-HBASE-21775.txt
>
>
> I noticed in some of my writing jobs that the BufferedMutator
> would get stuck retrying writes against a dead server.
> {code:java}
> 19/01/18 15:15:47 INFO [Executor task launch worker for task 0] client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: dummy_table
> 19/01/18 15:15:54 WARN [htable-pool3-t56] client.AsyncRequestFutureImpl: id=2, table=dummy_table, attempt=15/21, failureCount=1ops, last exception=org.apache.hadoop.hbase.DoNotRetryIOException: Operation rpcTimeout on ,17020,1547848193782, tracking started Fri Jan 18 14:55:37 PST 2019; NOT retrying, failed=1 -- final attempt!
> 19/01/18 15:15:54 ERROR [Executor task launch worker for task 0] IngestRawData.map(): [B@258bc2c7: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Operation rpcTimeout: 1 time, servers with issues: ,17020,1547848193782
> {code}
>
> After the single remaining action permanently failed, it would resume
> progress only to get stuck again retrying against the same dead server:
> {code:java}
> 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: dummy_table
> 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: dummy_table
> 19/01/18 15:21:20 INFO [htable-pool3-t55] client.AsyncRequestFutureImpl: id=2, table=dummy_table, attempt=6/21, failureCount=1ops, last exception=java.net.ConnectException: Call to failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.ConnectTimeoutException: connection timed out: on ,17020,1547848193782, tracking started null, retrying after=20089ms, operationsToReplay=1
> {code}
>
> Only restarting the client process to generate a new BufferedMutator instance
> would fix the issue, at least until the next regionserver crash.
> The logs I've pasted show the issue happening with a
> ConnectionTimeoutException, but we've also seen it with
> NotServingRegionException and some others.


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
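The eventual fix lives in the attached patches rather than in this thread, but the core idea — drop the cached region location when a call against the cached server fails, so the next attempt re-resolves it — can be illustrated with a toy cache. Everything below is schematic and the names are invented; the real client's meta cache and retry machinery are far more involved:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy region-location cache illustrating eviction on failure. */
class LocationCache {
  private final Map<String, String> regionToServer = new ConcurrentHashMap<>();

  /** Returns the cached server for a region, resolving it on a cache miss. */
  String locate(String region, String resolvedServer) {
    // A real client would consult hbase:meta here instead of taking a parameter.
    return regionToServer.computeIfAbsent(region, r -> resolvedServer);
  }

  /** Called when an RPC against the cached server fails: evict the stale entry. */
  void onFailure(String region) {
    regionToServer.remove(region);
  }
}
```

Without the `onFailure` eviction, every retry keeps targeting the dead server, which is exactly the stuck-retry pattern in the logs above.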
[jira] [Updated] (HBASE-21598) HBASE_WAL_DIR if not configured, recovered.edits directories are sidelined from the table dir path.
[ https://issues.apache.org/jira/browse/HBASE-21598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-21598:
------------------------------
    Fix Version/s: (was: 2.1.3)

> HBASE_WAL_DIR if not configured, recovered.edits directories are sidelined
> from the table dir path.
> --------------------------------------------------------------------------
>
>                 Key: HBASE-21598
>                 URL: https://issues.apache.org/jira/browse/HBASE-21598
>             Project: HBase
>          Issue Type: Bug
>          Components: Recovery, wal
>    Affects Versions: 2.1.1
>            Reporter: Y. SREENIVASULU REDDY
>            Priority: Major
>
> If HBASE_WAL_DIR is not configured, then the
> recovered.edits dir path should be the old method only.
> If a user is creating x no. of tables, in different namespaces, then all are
> created in the "hbase.rootdir" path only.
> {code}
> //datarecovered.edits
> eg:
> /hbase/data/default/testTable/eaf343d35d3e66e6e5fd38106ba61c62/recovered.edits
> {code}
> But the format currently is:
> {code}
> /recovered.edits
> eg:
> /hbase/default/testTable/eaf343d35d3e66e6e5fd38106ba61c62/recovered.edits
> {code}
[jira] [Updated] (HBASE-21574) createConnection / getTable should not return if there's no cluster available
[ https://issues.apache.org/jira/browse/HBASE-21574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-21574:
------------------------------
    Fix Version/s: (was: 2.1.3)

> createConnection / getTable should not return if there's no cluster available
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-21574
>                 URL: https://issues.apache.org/jira/browse/HBASE-21574
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>    Affects Versions: 2.1.1
>            Reporter: Cosmin Lehene
>            Priority: Major
>
> You can get a connection / table successfully with no cluster (no zk, hms,
> hrs) and it also says it's open (closed = false)
> {code}
> Connection con = ConnectionFactory.createConnection(getConfiguration());
> con.getTable(TableName.valueOf(customersTable));
> {code}
> {code}
> con = {ConnectionImplementation@5192} "hconnection-0x32093c94"
>  hostnamesCanChange = true
>  pause = 100
>  pauseForCQTBE = 100
>  useMetaReplicas = false
>  metaReplicaCallTimeoutScanInMicroSecond = 100
>  numTries = 16
>  rpcTimeout = 6
>  asyncProcess = {AsyncProcess@5242}
>  stats = null
>  closed = false
>  aborted = false
>  clusterStatusListener = null
>  metaRegionLock = {Object@5249}
>  masterLock = {Object@5250}
>  batchPool = {ThreadPoolExecutor@5240} "java.util.concurrent.ThreadPoolExecutor@5bb40116[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]"
>  metaLookupPool = null
>  cleanupPool = true
>  conf = {Configuration@5238} "Configuration: core-default.xml, core-site.xml, hbase-default.xml, hbase-site.xml"
>  connectionConfig = {ConnectionConfiguration@5239}
>  rpcClient = {NettyRpcClient@5251}
>  metaCache = {MetaCache@5252}
>  metrics = null
>  user = {User$SecureHadoopUser@5253} "clehene (auth:SIMPLE)"
>  rpcCallerFactory = {RpcRetryingCallerFactory@5243}
>  rpcControllerFactory = {RpcControllerFactory@5244}
>  interceptor = {NoOpRetryableCallerInterceptor@5254} "NoOpRetryableCallerInterceptor"
>  registry = {ZKAsyncRegistry@5255}
>  backoffPolicy = {ClientBackoffPolicyFactory$NoBackoffPolicy@5256}
>  alternateBufferedMutatorClassName = null
>  userRegionLock = {ReentrantLock@5257} "java.util.concurrent.locks.ReentrantLock@4d368ebc[Unlocked]"
>  clusterId = "default-cluster"
>  stubs = {ConcurrentHashMap@5259} size = 0
>  masterServiceState = {ConnectionImplementation$MasterServiceState@5260} "MasterService"
> table = {HTable@5193} "customers;hconnection-0x32093c94"
>  connection = {ConnectionImplementation@5192} "hconnection-0x32093c94"
>  tableName = {TableName@5237} "customers"
>  configuration = {Configuration@5238} "Configuration: core-default.xml, core-site.xml, hbase-default.xml, hbase-site.xml"
>  connConfiguration = {ConnectionConfiguration@5239}
>  closed = false
>  scannerCaching = 2147483647
>  scannerMaxResultSize = 2097152
>  pool = {ThreadPoolExecutor@5240} "java.util.concurrent.ThreadPoolExecutor@5bb40116[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]"
>  operationTimeoutMs = 120
>  rpcTimeoutMs = 6
>  readRpcTimeoutMs = 6
>  writeRpcTimeoutMs = 6
>  cleanupPoolOnClose = false
>  locator = {HRegionLocator@5241}
>  multiAp = {AsyncProcess@5242}
>  rpcCallerFactory = {RpcRetryingCallerFactory@5243}
>  rpcControllerFactory = {RpcControllerFactory@5244}
> {code}
[jira] [Commented] (HBASE-21462) Add note for CopyTable section explaining it does not perform a diff, but a full write from source to target
[ https://issues.apache.org/jira/browse/HBASE-21462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756919#comment-16756919 ]

Subrat Mishra commented on HBASE-21462:
---------------------------------------

[~wchevreuil], There is a small typo in the patch. Instead of *starttow*/stoprow it should be *startrow*/stoprow.

> Add note for CopyTable section explaining it does not perform a diff, but a
> full write from source to target
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-21462
>                 URL: https://issues.apache.org/jira/browse/HBASE-21462
>             Project: HBase
>          Issue Type: Improvement
>          Components: documentation
>            Reporter: Wellington Chevreuil
>            Assignee: Wellington Chevreuil
>            Priority: Minor
>         Attachments: HBASE-21462.master.001.patch
>
>
> A common performance issue with CopyTable arises when the key/time range for
> the data to be copied is large, because it basically scans all rows in the
> specified range in the source and performs the related puts on the target. We
> should add an extra note explaining that in the reference guide, to help
> users/admins understand when to choose between the different tools/approaches
> for syncing clusters.
[jira] [Updated] (HBASE-21644) Modify table procedure runs infinitely for a table having region replication > 1
[ https://issues.apache.org/jira/browse/HBASE-21644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-21644:
------------------------------
    Fix Version/s: 2.3.0
                   2.0.5
                   2.1.3
                   2.2.0
                   3.0.0

> Modify table procedure runs infinitely for a table having region replication
> > 1
> ----------------------------------------------------------------------------
>
>                 Key: HBASE-21644
>                 URL: https://issues.apache.org/jira/browse/HBASE-21644
>             Project: HBase
>          Issue Type: Bug
>          Components: Admin
>    Affects Versions: 3.0.0, 2.1.1, 2.1.2
>            Reporter: Nihal Jain
>            Assignee: Nihal Jain
>            Priority: Critical
>             Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5, 2.3.0
>
>         Attachments: HBASE-21644.master.001.patch,
> HBASE-21644.master.002.patch, HBASE-21644.master.003.patch,
> HBASE-21644.master.003.patch, HBASE-21644.master.UT.patch
>
>
> *Steps to reproduce*
> # Create a table with region replication set to a value greater than 1
> # Modify any of the table properties, say max file size
>
> *Expected Result*
> The modify table should succeed and run to completion.
>
> *Actual Result*
> The modify table keeps running infinitely.
>
> *Analysis/Issue*
> The problem occurs due to infinitely looping between states
> {{REOPEN_TABLE_REGIONS_REOPEN_REGIONS}} and
> {{REOPEN_TABLE_REGIONS_CONFIRM_REOPENED}} of {{ReopenTableRegionsProcedure}},
> called as part of {{ModifyTableProcedure}}.
>
> *Consequences*
> For a table having region replicas:
> - Any modify table operation fails to complete
> - Also, enable table replication fails to complete as it is unable to change
> the replication scope of the table in the source cluster
[jira] [Commented] (HBASE-18822) Create table for peer cluster automatically when creating table in source cluster of using namespace replication.
[ https://issues.apache.org/jira/browse/HBASE-18822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756912#comment-16756912 ]

Nihal Jain commented on HBASE-18822:
------------------------------------

Thank [~openinx] :)

> Create table for peer cluster automatically when creating table in source
> cluster of using namespace replication.
> --------------------------------------------------------------------------
>
>                 Key: HBASE-18822
>                 URL: https://issues.apache.org/jira/browse/HBASE-18822
>             Project: HBase
>          Issue Type: Improvement
>          Components: Replication
>    Affects Versions: 2.0.0-alpha-2
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>             Fix For: 3.0.0, 2.2.0
>
>         Attachments: HBASE-18822.v1.patch, HBASE-18822.v1.patch
>
>
> In our cluster using namespace replication, we always forget to create
> the table in the peer cluster, which leads to replication getting stuck.
> We have implemented the feature in our cluster: create the table for the
> peer cluster automatically when creating a table in the source cluster
> when using namespace replication.
>
> I'm not sure if someone else needs this feature, so I am creating an issue
> here for discussion.
[jira] [Comment Edited] (HBASE-18822) Create table for peer cluster automatically when creating table in source cluster of using namespace replication.
[ https://issues.apache.org/jira/browse/HBASE-18822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756912#comment-16756912 ]

Nihal Jain edited comment on HBASE-18822 at 1/31/19 6:23 AM:
-------------------------------------------------------------

Thanks [~openinx] :)

was (Author: nihaljain.cs):
Thank [~openinx] :)

> Create table for peer cluster automatically when creating table in source
> cluster of using namespace replication.
> --------------------------------------------------------------------------
>
>                 Key: HBASE-18822
>                 URL: https://issues.apache.org/jira/browse/HBASE-18822
>             Project: HBase
>          Issue Type: Improvement
>          Components: Replication
>    Affects Versions: 2.0.0-alpha-2
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>             Fix For: 3.0.0, 2.2.0
>
>         Attachments: HBASE-18822.v1.patch, HBASE-18822.v1.patch
>
>
> In our cluster using namespace replication, we always forget to create
> the table in the peer cluster, which leads to replication getting stuck.
> We have implemented the feature in our cluster: create the table for the
> peer cluster automatically when creating a table in the source cluster
> when using namespace replication.
>
> I'm not sure if someone else needs this feature, so I am creating an issue
> here for discussion.
[jira] [Commented] (HBASE-21688) Address WAL filesystem issues
[ https://issues.apache.org/jira/browse/HBASE-21688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756911#comment-16756911 ]

Nihal Jain commented on HBASE-21688:
------------------------------------

Triggered QA

> Address WAL filesystem issues
> -----------------------------
>
>                 Key: HBASE-21688
>                 URL: https://issues.apache.org/jira/browse/HBASE-21688
>             Project: HBase
>          Issue Type: Bug
>          Components: Filesystem Integration, wal
>            Reporter: Vladimir Rodionov
>            Assignee: Vladimir Rodionov
>            Priority: Major
>              Labels: s3
>             Fix For: 3.0.0
>
>         Attachments: HBASE-21688-amend.2.patch, HBASE-21688-amend.patch,
> HBASE-21688-branch-2.1-v1.patch, HBASE-21688-v1.patch
>
>
> Scan and fix code base to use new way of instantiating WAL File System.
> https://issues.apache.org/jira/browse/HBASE-21457?focusedCommentId=16734688&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16734688
[jira] [Updated] (HBASE-21688) Address WAL filesystem issues
[ https://issues.apache.org/jira/browse/HBASE-21688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nihal Jain updated HBASE-21688:
-------------------------------
    Status: Patch Available  (was: Reopened)

> Address WAL filesystem issues
> -----------------------------
>
>                 Key: HBASE-21688
>                 URL: https://issues.apache.org/jira/browse/HBASE-21688
>             Project: HBase
>          Issue Type: Bug
>          Components: Filesystem Integration, wal
>            Reporter: Vladimir Rodionov
>            Assignee: Vladimir Rodionov
>            Priority: Major
>              Labels: s3
>             Fix For: 3.0.0
>
>         Attachments: HBASE-21688-amend.2.patch, HBASE-21688-amend.patch,
> HBASE-21688-branch-2.1-v1.patch, HBASE-21688-v1.patch
>
>
> Scan and fix code base to use new way of instantiating WAL File System.
> https://issues.apache.org/jira/browse/HBASE-21457?focusedCommentId=16734688&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16734688
[jira] [Commented] (HBASE-21688) Address WAL filesystem issues
[ https://issues.apache.org/jira/browse/HBASE-21688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756909#comment-16756909 ]

Nihal Jain commented on HBASE-21688:
------------------------------------

[~vrodionov] Thanks for the patches. I think the {{TestMasterWALManager}} test class would fail with [^HBASE-21688-branch-2.1-v1.patch]. (I have already ported this to our internal branch and it failed there.) The changes I had made in {{TestMasterWALManager.before()}} were as follows; you may consider using them if they look fine:
{code:java}
-this.mwm = new MasterWalManager(this.masterServices);
+this.mwm = new MasterWalManager(this.masterServices) {
+
+  @Override
+  Path getWALDirPath() throws IOException {
+    return walRootDir;
+  }
+
+  @Override
+  Path getWALDirectoryName(ServerName serverName) throws IOException {
+    return new Path(walRootDir,
+        AbstractFSWALProvider.getWALDirectoryName(serverName.toString()));
+  }
+};
{code}
IMO the following change is redundant in {{MasterWalManager}}:
{code:java}
-FileStatus[] walDirForServerNames = FSUtils.listStatus(fs, walDirPath, filter);
+FileStatus[] walDirForServerNames = FSUtils.listStatus(CommonFSUtils.getWALFileSystem(conf),
+    walDirPath, filter);
{code}
You should also remove the following javadoc line from {{WALEntryStream()}}, since we have dropped the fs param in the patch:
{code:java}
 * @param fs {@link FileSystem} to use to create {@link Reader} for this stream
{code}

> Address WAL filesystem issues
> -----------------------------
>
>                 Key: HBASE-21688
>                 URL: https://issues.apache.org/jira/browse/HBASE-21688
>             Project: HBase
>          Issue Type: Bug
>          Components: Filesystem Integration, wal
>            Reporter: Vladimir Rodionov
>            Assignee: Vladimir Rodionov
>            Priority: Major
>              Labels: s3
>             Fix For: 3.0.0
>
>         Attachments: HBASE-21688-amend.2.patch, HBASE-21688-amend.patch,
> HBASE-21688-branch-2.1-v1.patch, HBASE-21688-v1.patch
>
>
> Scan and fix code base to use new way of instantiating WAL File System.
> https://issues.apache.org/jira/browse/HBASE-21457?focusedCommentId=16734688&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16734688
[jira] [Updated] (HBASE-21644) Modify table procedure runs infinitely for a table having region replication > 1
[ https://issues.apache.org/jira/browse/HBASE-21644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-21644:
------------------------------
    Attachment: HBASE-21644.master.003.patch

> Modify table procedure runs infinitely for a table having region replication
> > 1
> ----------------------------------------------------------------------------
>
>                 Key: HBASE-21644
>                 URL: https://issues.apache.org/jira/browse/HBASE-21644
>             Project: HBase
>          Issue Type: Bug
>          Components: Admin
>    Affects Versions: 3.0.0, 2.1.1, 2.1.2
>            Reporter: Nihal Jain
>            Assignee: Nihal Jain
>            Priority: Critical
>         Attachments: HBASE-21644.master.001.patch,
> HBASE-21644.master.002.patch, HBASE-21644.master.003.patch,
> HBASE-21644.master.003.patch, HBASE-21644.master.UT.patch
>
>
> *Steps to reproduce*
> # Create a table with region replication set to a value greater than 1
> # Modify any of the table properties, say max file size
>
> *Expected Result*
> The modify table should succeed and run to completion.
>
> *Actual Result*
> The modify table keeps running infinitely.
>
> *Analysis/Issue*
> The problem occurs due to infinitely looping between states
> {{REOPEN_TABLE_REGIONS_REOPEN_REGIONS}} and
> {{REOPEN_TABLE_REGIONS_CONFIRM_REOPENED}} of {{ReopenTableRegionsProcedure}},
> called as part of {{ModifyTableProcedure}}.
>
> *Consequences*
> For a table having region replicas:
> - Any modify table operation fails to complete
> - Also, enable table replication fails to complete as it is unable to change
> the replication scope of the table in the source cluster
[jira] [Commented] (HBASE-21644) Modify table procedure runs infinitely for a table having region replication > 1
[ https://issues.apache.org/jira/browse/HBASE-21644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756907#comment-16756907 ] Duo Zhang commented on HBASE-21644:

Retry.
[jira] [Commented] (HBASE-21644) Modify table procedure runs infinitely for a table having region replication > 1
[ https://issues.apache.org/jira/browse/HBASE-21644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756906#comment-16756906 ] Duo Zhang commented on HBASE-21644:

The patch is fine and solves part of the problem, but I think scheduling a reopen for the primary region is still necessary. For example, in {{ReopenTableRegionsProcedure}}, when we get all the replicas for region A, A_0 (the default replica) is OPENED and A_1 (the secondary replica) is OPENING, so we will reopen A_0 and wait for A_1 to be OPENED. And before A_1 is OPENED, A_0 is OPENED, so the openSeqNum for A_1 will be the same as that of the opened A_0; no matter how many times we reopen A_1, the openSeqNum will not increase unless A_0 is reopened again...

Anyway, the patch here is simple and solves part of the problem. We can open a new issue for the problem above.
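The scenario Duo Zhang describes can be modeled with a small stdlib-only sketch (all names here are illustrative, not HBase API): the confirm step waits for the secondary's openSeqNum to move past the recorded value, but a secondary reopening against an unchanged primary keeps coming back with the same sequence number, so the confirm loop never terminates.

```java
public class ReopenLoopSketch {
    // Illustrative: a secondary replica's openSeqNum tracks the primary's
    // last open sequence, so reopening the secondary alone never advances it.
    static long reopenSecondary(long primaryOpenSeqNum) {
        return primaryOpenSeqNum;
    }

    public static void main(String[] args) {
        long primaryOpenSeqNum = 10; // A_0 already reopened with seq 10
        long recordedSeqNum = 10;    // value the confirm step compares against

        boolean confirmed = false;
        for (int attempt = 1; attempt <= 5 && !confirmed; attempt++) {
            long newSeqNum = reopenSecondary(primaryOpenSeqNum);
            confirmed = newSeqNum > recordedSeqNum; // never true: 10 > 10 is false
        }
        System.out.println(confirmed ? "done" : "stuck"); // prints "stuck"

        // Only reopening the primary advances the sequence and breaks the cycle.
        primaryOpenSeqNum = 11;
        System.out.println(reopenSecondary(primaryOpenSeqNum) > recordedSeqNum); // prints "true"
    }
}
```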
[jira] [Commented] (HBASE-21764) Size of in-memory compaction thread pool should be configurable
[ https://issues.apache.org/jira/browse/HBASE-21764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756903#comment-16756903 ] Hadoop QA commented on HBASE-21764: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 13s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 9s{color} | {color:red} hbase-server: The patch generated 1 new + 299 unchanged - 6 fixed = 300 total (was 305) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 8s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 43s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 32s{color} | {color:red} hbase-server in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestRecoveredEditsReplayAndAbort | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21764 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12957003/HBASE-21764.v8.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 1e493490ead2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / c90e9ff5ef | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/15805/artifact/patchprocess/diff-checkstyle-hbase-server.txt | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/15805/artifact/patchprocess/patch-unit-hbase-server.txt | |
[jira] [Updated] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-21773:

Status: Patch Available (was: In Progress)

One more time... BTW:
{noformat}
[WARNING] /testptch/hbase/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java:[140,38] [StringSplitter] Prefer Splitter to String.split
{noformat}
Is it worth it, then, to start replacing String.split usages?

> rowcounter utility should respond to pleas for help
> ---------------------------------------------------
>
> Key: HBASE-21773
> URL: https://issues.apache.org/jira/browse/HBASE-21773
> Project: HBase
> Issue Type: Bug
> Components: tooling
> Affects Versions: 2.1.0
> Reporter: Sean Busbey
> Assignee: Wellington Chevreuil
> Priority: Critical
> Attachments: HBASE-21773.master.001.patch, HBASE-21773.master.002.patch, HBASE-21773.master.003.patch, HBASE-21773.master.004.patch
>
> {{hbase rowcounter}} does not respond to reasonable requests for help, i.e. {{--help}}, {{-h}}, or {{-?}}
> {code}
> [systest@busbey-training-1 root]$ hbase rowcounter -?
> OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and > will likely be removed in a future release > 19/01/24 12:30:00 INFO client.RMProxy: Connecting to ResourceManager at > busbey-training-1.gce.cloudera.com/172.31.116.31:8032 > 19/01/24 12:30:01 INFO hdfs.DFSClient: Created token for systest: > HDFS_DELEGATION_TOKEN owner=syst...@gce.cloudera.com, renewer=yarn, > realUser=, issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8 on 172.31.116.31:8020 > 19/01/24 12:30:01 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-3.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.52:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361801965, maxDate=1548966601965, > sequenceNumber=5, masterKeyId=17)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: HDFS_DELEGATION_TOKEN, > Service: 172.31.116.31:8020, Ident: (token for systest: HDFS_DELEGATION_TOKEN > owner=syst...@gce.cloudera.com, renewer=yarn, realUser=, > issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.52:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361801965, maxDate=1548966601965, sequenceNumber=5, > masterKeyId=17) > 19/01/24 12:30:02 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-4.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.50:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, 
issueDate=1548361802363, maxDate=1548966602363, > sequenceNumber=6, masterKeyId=18)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.50:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361802363, maxDate=1548966602363, sequenceNumber=6, > masterKeyId=18) > 19/01/24 12:30:02 INFO mapreduce.JobResourceUploader: Disabling Erasure > Coding for path: /user/systest/.staging/job_1548349234632_0003 > 19/01/24 12:30:03 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /user/systest/.staging/job_1548349234632_0003 > Exception in thread "main" java.lang.IllegalArgumentException: Illegal first > character <45> at 0. User-space table qualifiers can only start with > 'alphanumeric characters' from any language: -? > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:193) > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156) > at org.apache.hadoop.hbase.TableName.(TableName.java:346) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:469) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:198) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:243) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254) > at >
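As an aside on the {{StringSplitter}} warning quoted above: one reason Error Prone prefers Guava's {{Splitter}} is that {{String.split}} silently drops trailing empty strings unless an explicit negative limit is passed, which a stdlib-only sketch demonstrates:

```java
public class SplitSketch {
    public static void main(String[] args) {
        // Default String.split removes trailing empty strings...
        String[] dropped = "a,b,,".split(",");
        System.out.println(dropped.length); // prints 2: ["a", "b"]

        // ...unless a negative limit is passed explicitly.
        String[] kept = "a,b,,".split(",", -1);
        System.out.println(kept.length); // prints 4: ["a", "b", "", ""]
    }
}
```

Guava's {{Splitter}} makes this behavior explicit via {{omitEmptyStrings()}}, which is why migrating away from bare {{String.split}} can prevent subtle parsing bugs.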
[jira] [Updated] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-21773:

Status: In Progress (was: Patch Available)
[jira] [Updated] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-21773:

Attachment: HBASE-21773.master.004.patch
[jira] [Commented] (HBASE-21810) bulkload should support setting hfile compression on the client
[ https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756889#comment-16756889 ] Hadoop QA commented on HBASE-21810: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} branch-1.2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 17s{color} | {color:green} branch-1.2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_201 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_201 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 33s{color} | {color:green} branch-1.2 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 56s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_201 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_201 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed with JDK v1.8.0_201 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed with JDK v1.7.0_201 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 29s{color} | {color:green} the patch passed 
{color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 45s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 30s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.5 2.7.4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} the patch passed with JDK v1.8.0_201 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed with JDK v1.7.0_201 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 19s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestRegionReplicaFailover | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:34a9b27 | | JIRA Issue | HBASE-21810 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956987/HBASE-21810.branch-1.2.001.patch | | Optional Tests | dupname
[jira] [Commented] (HBASE-21225) Having RPC & Space quota on a table/Namespace doesn't allow space quota to be removed using 'NONE'
[ https://issues.apache.org/jira/browse/HBASE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756883#comment-16756883 ] Hudson commented on HBASE-21225:

Results for branch branch-2.1 [build #817 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/817/]: (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/817//General_Nightly_Build_Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/817//JDK8_Nightly_Build_Report_(Hadoop2)/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/817//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Having RPC & Space quota on a table/Namespace doesn't allow space quota to be removed using 'NONE'
> --------------------------------------------------------------------------------------------------
>
> Key: HBASE-21225
> URL: https://issues.apache.org/jira/browse/HBASE-21225
> Project: HBase
> Issue Type: Bug
> Reporter: Sakthi
> Assignee: Sakthi
> Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: hbase-21225.master.001.patch, hbase-21225.master.002.patch, hbase-21225.master.003.patch, hbase-21225.master.004.patch, hbase-21225.master.005.patch
>
> A part of HBASE-20705 is still unresolved. In that Jira it was assumed that the problem is: when a table having both rpc & space quotas is dropped (with hbase.quota.remove.on.table.delete set as true), the rpc quota is not set to be dropped along with table drops, and the space quota could not be removed completely because of the "EMPTY" row that the rpc quota left even after removal.
> The proposed solution for that was to make sure that the rpc quota didn't leave empty rows after quota removal, and to set up automatic removal of the rpc quota with table drops. That made sure that space quotas could be recreated/removed. But all this was under the assumption that hbase.quota.remove.on.table.delete is set as true. When it is set as false, the same issue can be reproduced. Also, the steps shown below can be used to reproduce the issue without table drops.
> {noformat}
> hbase(main):005:0> create 't2','cf'
> Created table t2
> Took 0.7619 seconds
> => Hbase::Table - t2
> hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => '10M/sec'
> Took 0.0514 seconds
> hbase(main):007:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', POLICY => NO_WRITES
> Took 0.0162 seconds
> hbase(main):008:0> list_quotas
> OWNER QUOTAS
> TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE
> TABLE => t2 TYPE => SPACE, TABLE => t2, LIMIT => 1073741824, VIOLATION_POLICY => NO_WRITES
> 2 row(s)
> Took 0.0716 seconds
> hbase(main):009:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => NONE
> Took 0.0082 seconds
> hbase(main):010:0> list_quotas
> OWNER QUOTAS
> TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE
> TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true
> 2 row(s)
> Took 0.0254 seconds
> hbase(main):011:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', POLICY => NO_WRITES
> Took 0.0082 seconds
> hbase(main):012:0> list_quotas
> OWNER QUOTAS
> TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE
> TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true
> 2 row(s)
> Took 0.0411 seconds
> {noformat}
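The failure mode in the repro above can be modeled with a stdlib-only sketch (the map-based layout is only illustrative; HBase actually stores quotas in the hbase:quota table): because the throttle and space settings for a table share one row, removing the space quota while a throttle remains can only plant a removal marker rather than delete the row, and in the buggy flow a later set_quota observes the stale marker instead of the new limit.

```java
import java.util.HashMap;
import java.util.Map;

public class QuotaRowSketch {
    public static void main(String[] args) {
        // One "row" per table holding both quota kinds (illustrative layout).
        Map<String, String> t2Row = new HashMap<>();
        t2Row.put("throttle", "10M/sec");
        t2Row.put("space", "1G/NO_WRITES");

        // Removing the space quota cannot delete the whole row (the throttle
        // entry still lives there), so a removal marker is written instead...
        t2Row.put("space", "REMOVE=true");

        // ...and list_quotas keeps reporting the marker, matching the
        // "TYPE => SPACE, ..., REMOVE => true" output in the repro above.
        System.out.println(t2Row.get("space")); // prints "REMOVE=true"
    }
}
```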
[jira] [Commented] (HBASE-21775) The BufferedMutator doesn't ever refresh region location cache
[ https://issues.apache.org/jira/browse/HBASE-21775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756882#comment-16756882 ] Hudson commented on HBASE-21775: Results for branch branch-2.1 [build #817 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/817/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/817//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/817//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/817//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > The BufferedMutator doesn't ever refresh region location cache > -- > > Key: HBASE-21775 > URL: https://issues.apache.org/jira/browse/HBASE-21775 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Tommy Li >Assignee: Tommy Li >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.1.3, 2.0.5, 1.3.4 > > Attachments: HBASE-21775-ADDENDUM.master.001.patch, > HBASE-21775.master.001.patch, > org.apache.hadoop.hbase.client.TestAsyncProcess-with-HBASE-21775.txt, > org.apache.hadoop.hbase.client.TestAsyncProcess-without-HBASE-21775.txt > > > {color:#22}I noticed in some of my writing jobs that the BufferedMutator > would get stuck retrying writes against a dead server.{color} > {code:java} > 19/01/18 15:15:47 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:15:54 WARN [htable-pool3-t56] client.AsyncRequestFutureImpl: > id=2, table=dummy_table, attempt=15/21, failureCount=1ops, last > exception=org.apache.hadoop.hbase.DoNotRetryIOException: Operation rpcTimeout > on ,17020,1547848193782, tracking started Fri Jan 18 14:55:37 PST > 2019; NOT retrying, failed=1 -- final attempt! 
> 19/01/18 15:15:54 ERROR [Executor task launch worker for task 0] > IngestRawData.map(): [B@258bc2c7: > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: Operation rpcTimeout: 1 time, servers with issues: > ,17020,1547848193782 > {code} > > After the single remaining action permanently failed, it would resume > progress only to get stuck again retrying against the same dead server: > {code:java} > 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:21:20 INFO [htable-pool3-t55] client.AsyncRequestFutureImpl: > id=2, table=dummy_table, attempt=6/21, failureCount=1ops, last > exception=java.net.ConnectException: Call to failed on connection > exception: > org.apache.hbase.thirdparty.io.netty.channel.ConnectTimeoutException: > connection timed out: on ,17020,1547848193782, tracking > started null, retrying after=20089ms, operationsToReplay=1 > {code} > > Only restarting the client process to generate a new BufferedMutator instance > would fix the issue, at least until the next regionserver crash > The logs I've pasted show the issue happening with a > ConnectionTimeoutException, but we've also seen it with > NotServingRegionException and some others -- This message was sent by Atlassian JIRA (v7.6.3#76005)
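The behaviour described above can be sketched in isolation: the fix amounts to evicting a region's cached server location after a failed operation so the next retry re-resolves it, instead of retrying the dead server forever. The actual patch changes the HBase client internals (AsyncRequestFutureImpl and the connection's location cache); the tiny cache below is only a conceptual illustration with hypothetical names, not the real API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class RegionLocationCacheSketch {
  private final Map<String, String> regionToServer = new ConcurrentHashMap<>();
  private final Function<String, String> locator; // stands in for a meta lookup

  public RegionLocationCacheSketch(Function<String, String> locator) {
    this.locator = locator;
  }

  /** Returns the cached server for a region, resolving it on first use. */
  public String locate(String region) {
    return regionToServer.computeIfAbsent(region, locator);
  }

  /** On a failed write, drop the cached entry so the next retry re-resolves. */
  public void onOperationFailure(String region) {
    regionToServer.remove(region);
  }
}
```

Without the `onOperationFailure` step, `locate` keeps returning the stale server indefinitely, which matches the stuck-retry loop in the pasted logs.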
[jira] [Commented] (HBASE-21775) The BufferedMutator doesn't ever refresh region location cache
[ https://issues.apache.org/jira/browse/HBASE-21775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756870#comment-16756870 ] Hudson commented on HBASE-21775: Results for branch branch-2.0 [build #1301 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1301/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1301//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1301//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1301//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
> The BufferedMutator doesn't ever refresh region location cache > -- > > Key: HBASE-21775 > URL: https://issues.apache.org/jira/browse/HBASE-21775 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Tommy Li >Assignee: Tommy Li >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.1.3, 2.0.5, 1.3.4 > > Attachments: HBASE-21775-ADDENDUM.master.001.patch, > HBASE-21775.master.001.patch, > org.apache.hadoop.hbase.client.TestAsyncProcess-with-HBASE-21775.txt, > org.apache.hadoop.hbase.client.TestAsyncProcess-without-HBASE-21775.txt > > > {color:#22}I noticed in some of my writing jobs that the BufferedMutator > would get stuck retrying writes against a dead server.{color} > {code:java} > 19/01/18 15:15:47 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:15:54 WARN [htable-pool3-t56] client.AsyncRequestFutureImpl: > id=2, table=dummy_table, attempt=15/21, failureCount=1ops, last > exception=org.apache.hadoop.hbase.DoNotRetryIOException: Operation rpcTimeout > on ,17020,1547848193782, tracking started Fri Jan 18 14:55:37 PST > 2019; NOT retrying, failed=1 -- final attempt! 
> 19/01/18 15:15:54 ERROR [Executor task launch worker for task 0] > IngestRawData.map(): [B@258bc2c7: > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: Operation rpcTimeout: 1 time, servers with issues: > ,17020,1547848193782 > {code} > > After the single remaining action permanently failed, it would resume > progress only to get stuck again retrying against the same dead server: > {code:java} > 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:21:20 INFO [htable-pool3-t55] client.AsyncRequestFutureImpl: > id=2, table=dummy_table, attempt=6/21, failureCount=1ops, last > exception=java.net.ConnectException: Call to failed on connection > exception: > org.apache.hbase.thirdparty.io.netty.channel.ConnectTimeoutException: > connection timed out: on ,17020,1547848193782, tracking > started null, retrying after=20089ms, operationsToReplay=1 > {code} > > Only restarting the client process to generate a new BufferedMutator instance > would fix the issue, at least until the next regionserver crash > The logs I've pasted show the issue happening with a > ConnectionTimeoutException, but we've also seen it with > NotServingRegionException and some others -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21225) Having RPC & Space quota on a table/Namespace doesn't allow space quota to be removed using 'NONE'
[ https://issues.apache.org/jira/browse/HBASE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756871#comment-16756871 ] Hudson commented on HBASE-21225: Results for branch branch-2.0 [build #1301 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1301/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1301//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1301//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1301//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Having RPC & Space quota on a table/Namespace doesn't allow space quota to be > removed using 'NONE' > -- > > Key: HBASE-21225 > URL: https://issues.apache.org/jira/browse/HBASE-21225 > Project: HBase > Issue Type: Bug >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: hbase-21225.master.001.patch, > hbase-21225.master.002.patch, hbase-21225.master.003.patch, > hbase-21225.master.004.patch, hbase-21225.master.005.patch > > > A part of HBASE-20705 is still unresolved. In that Jira it was assumed that > the problem was: when a table having both rpc & space quotas is dropped (with > hbase.quota.remove.on.table.delete set to true), the rpc quota is not set to > be dropped along with table drops, and the space quota could not be > removed completely because of the "EMPTY" row that the rpc quota left behind even after > removal. 
> The proposed solution for that was to make sure that the rpc quota didn't leave > empty rows after removal of the quota, and to set up automatic removal of the rpc quota > with table drops. That made sure that space quotas could be recreated/removed. > But all this was under the assumption that hbase.quota.remove.on.table.delete > is set to true. When it is set to false, the same issue can be reproduced. Also, > the steps shown below can be used to reproduce the issue without table drops. > {noformat} > hbase(main):005:0> create 't2','cf' > Created table t2 > Took 0.7619 seconds > => Hbase::Table - t2 > hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.0514 seconds > hbase(main):007:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', > POLICY => NO_WRITES > Took 0.0162 seconds > hbase(main):008:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, > LIMIT => 10M/sec, SCOPE => MACHINE > TABLE => t2 TYPE => SPACE, TABLE => t2, LIMIT => 1073741824, > VIOLATION_POLICY => NO_WRITES > 2 row(s) > Took 0.0716 seconds > hbase(main):009:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => NONE > Took 0.0082 seconds > hbase(main):010:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => > REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE > TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true > 2 row(s) > Took 0.0254 seconds > hbase(main):011:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', > POLICY => NO_WRITES > Took 0.0082 seconds > hbase(main):012:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => > REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE > TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true > 2 row(s) > Took 0.0411 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21764) Size of in-memory compaction thread pool should be configurable
[ https://issues.apache.org/jira/browse/HBASE-21764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-21764: - Attachment: HBASE-21764.v8.patch > Size of in-memory compaction thread pool should be configurable > -- > > Key: HBASE-21764 > URL: https://issues.apache.org/jira/browse/HBASE-21764 > Project: HBase > Issue Type: Sub-task > Components: in-memory-compaction >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5 > > Attachments: HBASE-21764.v1.patch, HBASE-21764.v2.patch, > HBASE-21764.v3.patch, HBASE-21764.v4.patch, HBASE-21764.v5.patch, > HBASE-21764.v6.patch, HBASE-21764.v7.patch, HBASE-21764.v8.patch > > > In RegionServicesForStores, we can see: > {code} > private static final int POOL_SIZE = 10; > private static final ThreadPoolExecutor INMEMORY_COMPACTION_POOL = > new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 60, TimeUnit.SECONDS, > new LinkedBlockingQueue<>(), > new ThreadFactory() { > @Override > public Thread newThread(Runnable r) { > String name = Thread.currentThread().getName() + > "-inmemoryCompactions-" + > System.currentTimeMillis(); > return new Thread(r, name); > } > }); > {code} > The pool size should be configurable, because if there are many regions on a regionserver, the > default 10 threads will not be enough. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
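Making the hard-coded pool size configurable follows the usual read-with-default pattern. A minimal self-contained sketch, using plain Java `Properties` in place of HBase's `Configuration`; the configuration key name below is illustrative, not necessarily the one the attached patches introduce:

```java
import java.util.Properties;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class InMemoryCompactionPoolSketch {
  // Default mirrors the hard-coded POOL_SIZE = 10 in RegionServicesForStores.
  static final int DEFAULT_POOL_SIZE = 10;
  // Hypothetical key name for illustration only.
  static final String POOL_SIZE_KEY = "hbase.regionserver.inmemory.compaction.pool.size";

  static ThreadPoolExecutor createPool(Properties conf) {
    // Read the configured size, falling back to the old hard-coded default.
    int poolSize = Integer.parseInt(
        conf.getProperty(POOL_SIZE_KEY, String.valueOf(DEFAULT_POOL_SIZE)));
    return new ThreadPoolExecutor(poolSize, poolSize, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(),
        r -> new Thread(r, Thread.currentThread().getName()
            + "-inmemoryCompactions-" + System.currentTimeMillis()));
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty(POOL_SIZE_KEY, "32"); // size it for a region-dense RS
    ThreadPoolExecutor pool = createPool(conf);
    System.out.println(pool.getCorePoolSize());
    pool.shutdown();
  }
}
```

A regionserver hosting many in-memory-compaction-enabled regions can then raise the value in its site configuration instead of being capped at 10 threads.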
[jira] [Commented] (HBASE-21811) region can be opened on two servers due to race condition with procedures and server reports
[ https://issues.apache.org/jira/browse/HBASE-21811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756857#comment-16756857 ] Hadoop QA commented on HBASE-21811: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 37s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 14s{color} | {color:red} hbase-server: The patch generated 4 new + 64 unchanged - 0 fixed = 68 total (was 64) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 32s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 57s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}135m 23s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}178m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21811 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956979/HBASE-21811.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux a09c5f4b0491 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / c90e9ff5ef | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/15801/artifact/patchprocess/diff-checkstyle-hbase-server.txt | | Test Results |
[jira] [Updated] (HBASE-21764) Size of in-memory compaction thread pool should be configurable
[ https://issues.apache.org/jira/browse/HBASE-21764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-21764: - Attachment: HBASE-21764.v7.patch > Size of in-memory compaction thread pool should be configurable > -- > > Key: HBASE-21764 > URL: https://issues.apache.org/jira/browse/HBASE-21764 > Project: HBase > Issue Type: Sub-task > Components: in-memory-compaction >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5 > > Attachments: HBASE-21764.v1.patch, HBASE-21764.v2.patch, > HBASE-21764.v3.patch, HBASE-21764.v4.patch, HBASE-21764.v5.patch, > HBASE-21764.v6.patch, HBASE-21764.v7.patch > > > In RegionServicesForStores, we can see: > {code} > private static final int POOL_SIZE = 10; > private static final ThreadPoolExecutor INMEMORY_COMPACTION_POOL = > new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 60, TimeUnit.SECONDS, > new LinkedBlockingQueue<>(), > new ThreadFactory() { > @Override > public Thread newThread(Runnable r) { > String name = Thread.currentThread().getName() + > "-inmemoryCompactions-" + > System.currentTimeMillis(); > return new Thread(r, name); > } > }); > {code} > The pool size should be configurable, because if there are many regions on a regionserver, the > default 10 threads will not be enough. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21634) Print error message when user uses unacceptable values for LIMIT while setting quotas.
[ https://issues.apache.org/jira/browse/HBASE-21634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-21634: --- Resolution: Fixed Fix Version/s: 2.3.0 2.0.5 2.1.3 Status: Resolved (was: Patch Available) Pushed to branch-2.0 and branch-2.1. > Print error message when user uses unacceptable values for LIMIT while > setting quotas. > -- > > Key: HBASE-21634 > URL: https://issues.apache.org/jira/browse/HBASE-21634 > Project: HBase > Issue Type: Improvement >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5, 2.3.0 > > Attachments: hbase-21634.branch-2.0.001.patch, > hbase-21634.branch-2.0.002.patch, hbase-21634.master.001.patch, > hbase-21634.master.002.patch, hbase-21634.master.003.patch, > hbase-21634.master.004.patch > > > When unacceptable value(like 2.3G or 70H) to LIMIT are passed while setting > quotas, we currently do not print any error message (informing the user about > the erroneous input). Like below: > {noformat} > hbase(main):002:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '2.3G', > POLICY => NO_WRITES > Took 0.0792 seconds > hbase(main):003:0> list_quotas > OWNERQUOTAS > TABLE => t2 TYPE => SPACE, > TABLE => t2, LIMIT => 2B, VIOLATION_POLICY => NO_WRITES > 1 row(s) > Took 0.0512 seconds > hbase(main):006:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '70H', > POLICY => NO_WRITES > Took 0.0088 seconds > hbase(main):007:0> list_quotas > OWNERQUOTAS > TABLE => t2 TYPE => SPACE, > TABLE => t2, LIMIT => 70B, VIOLATION_POLICY => NO_WRITES > 1 row(s) > Took 0.0448 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
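The shell transcript above shows `'2.3G'` being silently truncated to `2B` and `'70H'` to `70B`; the fix is to reject such inputs with an error message instead. A sketch of the validation in Java (the real change is in the Ruby shell and quota parsing code; the accepted unit letters here are an assumption based on the sizes shown in these quota examples):

```java
import java.util.regex.Pattern;

public class QuotaLimitSketch {
  // A whole number followed by an optional size unit. Fractional values
  // ("2.3G") and unknown units ("70H") fail to match and are rejected
  // rather than being silently truncated.
  private static final Pattern SIZE_LIMIT = Pattern.compile("(?i)^\\d+[BKMGTP]?$");

  static void validateLimit(String limit) {
    if (!SIZE_LIMIT.matcher(limit).matches()) {
      throw new IllegalArgumentException(
          "Invalid size limit, expected a whole number with an optional unit: " + limit);
    }
  }
}
```

With this check in place, `set_quota ... LIMIT => '2.3G'` would print an error to the user instead of quietly storing a 2-byte quota.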
[jira] [Commented] (HBASE-18822) Create table for peer cluster automatically when creating table in source cluster of using namespace replication.
[ https://issues.apache.org/jira/browse/HBASE-18822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756850#comment-16756850 ] Zheng Hu commented on HBASE-18822: -- Of course, [~nihaljain.cs] please go ahead. > Create table for peer cluster automatically when creating table in source > cluster of using namespace replication. > - > > Key: HBASE-18822 > URL: https://issues.apache.org/jira/browse/HBASE-18822 > Project: HBase > Issue Type: Improvement > Components: Replication >Affects Versions: 2.0.0-alpha-2 >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-18822.v1.patch, HBASE-18822.v1.patch > > > In our clusters that use namespace replication, we always forget to create the > table in the peer cluster, which leads to replication getting stuck. > We have implemented the feature in our clusters: create the table in the peer > cluster automatically when creating a table in a source cluster that uses > namespace replication. > > I'm not sure if anyone else needs this feature, so I am creating an issue here for > discussion. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21806) add an option to roll WAL on very slow syncs
[ https://issues.apache.org/jira/browse/HBASE-21806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-21806: --- Issue Type: Improvement (was: Bug) > add an option to roll WAL on very slow syncs > > > Key: HBASE-21806 > URL: https://issues.apache.org/jira/browse/HBASE-21806 > Project: HBase > Issue Type: Improvement >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21806.patch > > > In large heterogeneous clusters sometimes a slow datanode can cause WAL syncs > to be very slow. In this case, before the bad datanode recovers, or is > discovered and repaired, it would be helpful to roll WAL on a very slow sync > to get a new pipeline. > Otherwise the slow WAL will impact write latency for a long time (slow writes > result in less writes result in the WAL not being rolled for longer) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
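The idea in the description — roll the WAL when a sync is very slow so the writer gets a new datanode pipeline — reduces to a threshold check on observed sync latency. The attached patch presumably wires this into the WAL implementation with new configuration keys not quoted in this thread; the class and threshold below are an illustrative sketch only:

```java
public class SlowSyncRollPolicy {
  private final long rollThresholdMs;

  /** @param rollThresholdMs sync duration above which a roll is requested */
  public SlowSyncRollPolicy(long rollThresholdMs) {
    this.rollThresholdMs = rollThresholdMs;
  }

  /**
   * Called after each WAL sync with its observed duration. A single very
   * slow sync is enough to request a roll: waiting for an average to drift
   * would keep write latency high until the bad datanode is replaced.
   * (The real patch may apply more nuanced criteria.)
   */
  public boolean shouldRollOnSync(long syncDurationMs) {
    return syncDurationMs > rollThresholdMs;
  }
}
```

When `shouldRollOnSync` returns true, the caller would request a log roll, which builds a new pipeline and sidesteps the slow datanode until it recovers or is repaired.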
[jira] [Commented] (HBASE-21806) add an option to roll WAL on very slow syncs
[ https://issues.apache.org/jira/browse/HBASE-21806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756832#comment-16756832 ] Anoop Sam John commented on HBASE-21806: Pls add RN describing all new configs and how to achieve this feature. Tks. > add an option to roll WAL on very slow syncs > > > Key: HBASE-21806 > URL: https://issues.apache.org/jira/browse/HBASE-21806 > Project: HBase > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21806.patch > > > In large heterogeneous clusters sometimes a slow datanode can cause WAL syncs > to be very slow. In this case, before the bad datanode recovers, or is > discovered and repaired, it would be helpful to roll WAL on a very slow sync > to get a new pipeline. > Otherwise the slow WAL will impact write latency for a long time (slow writes > result in less writes result in the WAL not being rolled for longer) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21804) Remove 0.94 check from the Linkchecker job
[ https://issues.apache.org/jira/browse/HBASE-21804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756828#comment-16756828 ] Hadoop QA commented on HBASE-21804: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 0m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21804 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956985/hbase-21804.master.001.patch | | Optional Tests | dupname asflicense shellcheck shelldocs | | uname | Linux 45a49a343ac2 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / c90e9ff5ef | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | shellcheck | v0.4.4 | | Max. process+thread count | 42 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/15803/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Remove 0.94 check from the Linkchecker job > -- > > Key: HBASE-21804 > URL: https://issues.apache.org/jira/browse/HBASE-21804 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > Attachments: hbase-21804.master.001.patch > > > This is a pretty old release. Even though we don't have the link to the doc > from our main page, the linkchecker job lands directly at > [https://hbase.apache.org/0.94/] which has around 90 odd missing file issues. > I haven't yet looked at the missing anchors stuff yet. > We can set linkchecker to not check 0.94. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21535) Zombie Master detector is not working
[ https://issues.apache.org/jira/browse/HBASE-21535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756814#comment-16756814 ] Pankaj Kumar commented on HBASE-21535: -- Thanks [~stack] and [~zghaobac] for reviewing and committing the patch. :) > Zombie Master detector is not working > - > > Key: HBASE-21535 > URL: https://issues.apache.org/jira/browse/HBASE-21535 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 3.0.0, 2.2.0, 2.1.1, 2.0.3 >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Critical > Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5, 2.3.0 > > Attachments: HBASE-21535.branch-2.patch, HBASE-21535.branch-2.patch, > HBASE-21535.patch, HBASE-21535.v2.patch > > > We have an InitializationMonitor thread in HMaster which detects a zombie HMaster > based on _hbase.master.initializationmonitor.timeout_ and halts if > _hbase.master.initializationmonitor.haltontimeout_ is set to _true_. > After HBASE-19694, the HMaster initialization order was corrected. HMaster is set > active after initializing ZK system trackers, as follows: > {noformat} > status.setStatus("Initializing ZK system trackers"); > initializeZKBasedSystemTrackers(); > status.setStatus("Loading last flushed sequence id of regions"); > try { > this.serverManager.loadLastFlushedSequenceIds(); > } catch (IOException e) { > LOG.debug("Failed to load last flushed sequence id of regions" > + " from file system", e); > } > // Set ourselves as active Master now our claim has succeeded up in zk. 
> this.activeMaster = true; > {noformat} > But Zombie detector thread is started at the begining phase of > finishActiveMasterInitialization(), > {noformat} > private void finishActiveMasterInitialization(MonitoredTask status) throws > IOException, > InterruptedException, KeeperException, ReplicationException { > Thread zombieDetector = new Thread(new InitializationMonitor(this), > "ActiveMasterInitializationMonitor-" + System.currentTimeMillis()); > zombieDetector.setDaemon(true); > zombieDetector.start(); > {noformat} > During zombieDetector execution "master.isActiveMaster()" will be false, so > it won't wait and cant detect zombie master. > {noformat} > @Override > public void run() { > try { > while (!master.isStopped() && master.isActiveMaster()) { > Thread.sleep(timeout); > if (master.isInitialized()) { > LOG.debug("Initialization completed within allotted tolerance. Monitor > exiting."); > } else { > LOG.error("Master failed to complete initialization after " + timeout + "ms. > Please" > + " consider submitting a bug report including a thread dump of this > process."); > if (haltOnTimeout) { > LOG.error("Zombie Master exiting. Thread dump to stdout"); > Threads.printThreadInfo(System.out, "Zombie HMaster"); > System.exit(-1); > } > } > } > } catch (InterruptedException ie) { > LOG.trace("InitMonitor thread interrupted. Existing."); > } > } > } > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
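The bug above is an ordering race: the monitor thread starts before `this.activeMaster = true`, so its `while (... master.isActiveMaster())` loop condition is false on the first check and the thread exits without ever monitoring. One way to fix it (the committed patch may differ) is to have the monitor first wait for the master to become active before applying the timeout. A runnable sketch against a minimal `Master` interface:

```java
public class InitializationMonitorSketch implements Runnable {
  interface Master {
    boolean isStopped();
    boolean isActiveMaster();
    boolean isInitialized();
  }

  private final Master master;
  private final long timeoutMs;
  volatile boolean zombieDetected;

  InitializationMonitorSketch(Master master, long timeoutMs) {
    this.master = master;
    this.timeoutMs = timeoutMs;
  }

  @Override
  public void run() {
    try {
      // Fix: wait until the master actually becomes active instead of
      // exiting immediately while isActiveMaster() is still false.
      while (!master.isStopped() && !master.isActiveMaster()) {
        Thread.sleep(10);
      }
      if (master.isStopped()) {
        return;
      }
      Thread.sleep(timeoutMs);
      if (!master.isInitialized()) {
        // The real monitor logs and optionally halts the process here.
        zombieDetected = true;
      }
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
    }
  }
}
```

With the wait in place, a master that claims active status but never finishes initialization is flagged after the configured timeout, restoring the zombie detection that the reordering silently disabled.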
[jira] [Updated] (HBASE-21814) Remove the TODO in AccessControlLists#addUserPermission
[ https://issues.apache.org/jira/browse/HBASE-21814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-21814: --- Issue Type: Improvement (was: Bug) > Remove the TODO in AccessControlLists#addUserPermission > --- > > Key: HBASE-21814 > URL: https://issues.apache.org/jira/browse/HBASE-21814 > Project: HBase > Issue Type: Improvement >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0 > > Attachments: HBASE-21814.master.001.patch > > > The TODO was added by me. Because this method happens within the RS. The old > impl use a login user(User.runAsLoginUser where the login user is the user > who started RS process) to call Table.put(). And it will check the permission > when put record to ACL table. At RpcServer we have a ThreadLocal where we > keep the CallContext and inside that the current RPC called user info is set. > We need Table.put(List) to change to a new thread and and so old > ThreadLocal variable is not accessible and so it looks as if no Rpc context > and we were relying on the super user who starts the RS process. > > {code:java} > User.runAsLoginUser(new PrivilegedExceptionAction() { > @Override > public Void run() throws Exception { > > AccessControlLists.addUserPermission(regionEnv.getConfiguration(), perm, > regionEnv.getTable(AccessControlLists.ACL_TABLE_NAME), > request.getMergeExistingPermissions()); > return null; > } > }); > {code} > > But after HBASE-21739, no need to User.runAsLoginUser. Because we will call > Admin method to grant/revoke. And this will be execute in master and use the > master user(the user who started master process) to call Table.put. So this > is not a problem now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
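The ThreadLocal limitation the description relies on — a value set on the RPC handler thread is invisible once the work hops to another thread — is easy to demonstrate in isolation. The name `CALL_USER` below is a stand-in for RpcServer's per-call context, not the actual field:

```java
public class ThreadLocalContextDemo {
  // Stands in for RpcServer's per-call context ThreadLocal.
  static final ThreadLocal<String> CALL_USER = new ThreadLocal<>();

  /** Reads the ThreadLocal from a freshly spawned thread. */
  public static String readFromOtherThread() throws InterruptedException {
    final String[] seen = new String[1];
    Thread worker = new Thread(() -> seen[0] = CALL_USER.get());
    worker.start();
    worker.join();
    return seen[0];
  }

  public static void main(String[] args) throws InterruptedException {
    CALL_USER.set("rpc-caller");
    // Visible on the thread that set it...
    System.out.println(CALL_USER.get());        // rpc-caller
    // ...but a new thread sees no value, which is why the RS-side ACL
    // write had to fall back to User.runAsLoginUser.
    System.out.println(readFromOtherThread());  // null
  }
}
```

This is exactly why moving the grant/revoke to the master (HBASE-21739) removes the need for `runAsLoginUser`: the master performs the `Table.put` itself, so no RPC-call context has to cross a thread boundary.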
[jira] [Updated] (HBASE-21814) Remove the TODO in AccessControlLists#addUserPermission
[ https://issues.apache.org/jira/browse/HBASE-21814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-21814: --- Description: The TODO was added by me. Because this method happens within the RS. The old impl use a login user(User.runAsLoginUser where the login user is the user who started RS process) to call Table.put(). And it will check the permission when put record to ACL table. At RpcServer we have a ThreadLocal where we keep the CallContext and inside that the current RPC called user info is set. We need Table.put(List) to change to a new thread and and so old ThreadLocal variable is not accessible and so it looks as if no Rpc context and we were relying on the super user who starts the RS process. {code:java} User.runAsLoginUser(new PrivilegedExceptionAction() { @Override public Void run() throws Exception { AccessControlLists.addUserPermission(regionEnv.getConfiguration(), perm, regionEnv.getTable(AccessControlLists.ACL_TABLE_NAME), request.getMergeExistingPermissions()); return null; } }); {code} But after HBASE-21739, no need to User.runAsLoginUser. Because we will call Admin method to grant/revoke. And this will be execute in master and use the master user(the user who started master process) to call Table.put. So this is not a problem now. was: The TODO was added by me. Because this method happens within the RS. The old impl use a login user(User.runAsLoginUser where the login user is the user who started RS process) to call Table.put(). And it will check the permission when put record to ACL table. {code:java} User.runAsLoginUser(new PrivilegedExceptionAction() { @Override public Void run() throws Exception { AccessControlLists.addUserPermission(regionEnv.getConfiguration(), perm, regionEnv.getTable(AccessControlLists.ACL_TABLE_NAME), request.getMergeExistingPermissions()); return null; } }); {code} But after HBASE-21739, no need to User.runAsLoginUser. Because we will call Admin method to grant/revoke. 
And this will be execute in master and use the master user(the user who started master process) to call Table.put. So this is not a problem now. > Remove the TODO in AccessControlLists#addUserPermission > --- > > Key: HBASE-21814 > URL: https://issues.apache.org/jira/browse/HBASE-21814 > Project: HBase > Issue Type: Bug >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0 > > Attachments: HBASE-21814.master.001.patch > > > The TODO was added by me. Because this method happens within the RS. The old > impl use a login user(User.runAsLoginUser where the login user is the user > who started RS process) to call Table.put(). And it will check the permission > when put record to ACL table. At RpcServer we have a ThreadLocal where we > keep the CallContext and inside that the current RPC called user info is set. > We need Table.put(List) to change to a new thread and and so old > ThreadLocal variable is not accessible and so it looks as if no Rpc context > and we were relying on the super user who starts the RS process. > > {code:java} > User.runAsLoginUser(new PrivilegedExceptionAction() { > @Override > public Void run() throws Exception { > > AccessControlLists.addUserPermission(regionEnv.getConfiguration(), perm, > regionEnv.getTable(AccessControlLists.ACL_TABLE_NAME), > request.getMergeExistingPermissions()); > return null; > } > }); > {code} > > But after HBASE-21739, no need to User.runAsLoginUser. Because we will call > Admin method to grant/revoke. And this will be execute in master and use the > master user(the user who started master process) to call Table.put. So this > is not a problem now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21814) Remove the TODO in AccessControlLists#addUserPermission
[ https://issues.apache.org/jira/browse/HBASE-21814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-21814: --- Description: The TODO was added by me. Because this method happens within the RS. The old impl use a login user(User.runAsLoginUser where the login user is the user who started RS process) to call Table.put(). And it will check the permission when put record to ACL table. {code:java} User.runAsLoginUser(new PrivilegedExceptionAction() { @Override public Void run() throws Exception { AccessControlLists.addUserPermission(regionEnv.getConfiguration(), perm, regionEnv.getTable(AccessControlLists.ACL_TABLE_NAME), request.getMergeExistingPermissions()); return null; } }); {code} But after HBASE-21739, no need to User.runAsLoginUser. Because we will call Admin method to grant/revoke. And this will be execute in master and use the master user(the user who started master process) to call Table.put. So this is not a problem now. was:The TODO was added by me. Because this method happens within the RS. But after HBASE-21739, grant/revoke will execute by master. So this is not a problem now. > Remove the TODO in AccessControlLists#addUserPermission > --- > > Key: HBASE-21814 > URL: https://issues.apache.org/jira/browse/HBASE-21814 > Project: HBase > Issue Type: Bug >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0 > > Attachments: HBASE-21814.master.001.patch > > > The TODO was added by me. Because this method happens within the RS. The old > impl use a login user(User.runAsLoginUser where the login user is the user > who started RS process) to call Table.put(). And it will check the permission > when put record to ACL table. 
> > {code:java} > User.runAsLoginUser(new PrivilegedExceptionAction() { > @Override > public Void run() throws Exception { > > AccessControlLists.addUserPermission(regionEnv.getConfiguration(), perm, > regionEnv.getTable(AccessControlLists.ACL_TABLE_NAME), > request.getMergeExistingPermissions()); > return null; > } > }); > {code} > > But after HBASE-21739, no need to User.runAsLoginUser. Because we will call > Admin method to grant/revoke. And this will be execute in master and use the > master user(the user who started master process) to call Table.put. So this > is not a problem now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756794#comment-16756794 ] Hadoop QA commented on HBASE-21773: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 35s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 34s{color} | {color:red} hbase-mapreduce generated 3 new + 155 unchanged - 3 fixed = 158 total (was 158) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s{color} | {color:red} hbase-mapreduce: The patch generated 1 new + 43 unchanged - 1 fixed = 44 total (was 44) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 35s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 32s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 38s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21773 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956983/HBASE-21773.master.003.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux c896627d1e19 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / c90e9ff5ef | | maven |
[jira] [Commented] (HBASE-21634) Print error message when user uses unacceptable values for LIMIT while setting quotas.
[ https://issues.apache.org/jira/browse/HBASE-21634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756787#comment-16756787 ] Wellington Chevreuil commented on HBASE-21634: -- lgtm, +1. > Print error message when user uses unacceptable values for LIMIT while > setting quotas. > -- > > Key: HBASE-21634 > URL: https://issues.apache.org/jira/browse/HBASE-21634 > Project: HBase > Issue Type: Improvement >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 3.0.0, 2.2.0 > > Attachments: hbase-21634.branch-2.0.001.patch, > hbase-21634.branch-2.0.002.patch, hbase-21634.master.001.patch, > hbase-21634.master.002.patch, hbase-21634.master.003.patch, > hbase-21634.master.004.patch > > > When unacceptable value(like 2.3G or 70H) to LIMIT are passed while setting > quotas, we currently do not print any error message (informing the user about > the erroneous input). Like below: > {noformat} > hbase(main):002:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '2.3G', > POLICY => NO_WRITES > Took 0.0792 seconds > hbase(main):003:0> list_quotas > OWNERQUOTAS > TABLE => t2 TYPE => SPACE, > TABLE => t2, LIMIT => 2B, VIOLATION_POLICY => NO_WRITES > 1 row(s) > Took 0.0512 seconds > hbase(main):006:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '70H', > POLICY => NO_WRITES > Took 0.0088 seconds > hbase(main):007:0> list_quotas > OWNERQUOTAS > TABLE => t2 TYPE => SPACE, > TABLE => t2, LIMIT => 70B, VIOLATION_POLICY => NO_WRITES > 1 row(s) > Took 0.0448 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
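A sketch of the validation being requested (a hypothetical helper, not the shell's actual Ruby code; the accepted unit letters are assumed): reject LIMIT values that are fractional or carry an unknown unit instead of silently truncating them to values like 2B or 70B.

```java
import java.util.regex.Pattern;

public class QuotaLimitCheck {
    // assumed rule: a whole number, optionally followed by a single known size unit
    private static final Pattern LIMIT = Pattern.compile("\\d+[BKMGTP]?");

    static boolean isValidLimit(String limit) {
        return LIMIT.matcher(limit).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidLimit("2G"));   // true
        System.out.println(isValidLimit("2.3G")); // false: fractional sizes rejected
        System.out.println(isValidLimit("70H"));  // false: H is not a size unit
    }
}
```

On a failed match, set_quota would print an error naming the bad value rather than storing the truncated limit.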
[jira] [Updated] (HBASE-21814) Remove the TODO in AccessControlLists#addUserPermission
[ https://issues.apache.org/jira/browse/HBASE-21814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-21814: --- Attachment: HBASE-21814.master.001.patch > Remove the TODO in AccessControlLists#addUserPermission > --- > > Key: HBASE-21814 > URL: https://issues.apache.org/jira/browse/HBASE-21814 > Project: HBase > Issue Type: Bug >Reporter: Guanghao Zhang >Priority: Major > Attachments: HBASE-21814.master.001.patch > > > The TODO was added by me. Because this method happens within the RS. But > after HBASE-21739, grant/revoke will execute by master. So this is not a > problem now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21688) Address WAL filesystem issues
[ https://issues.apache.org/jira/browse/HBASE-21688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-21688: -- Attachment: HBASE-21688-branch-2.1-v1.patch > Address WAL filesystem issues > - > > Key: HBASE-21688 > URL: https://issues.apache.org/jira/browse/HBASE-21688 > Project: HBase > Issue Type: Bug > Components: Filesystem Integration, wal >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Major > Labels: s3 > Fix For: 3.0.0 > > Attachments: HBASE-21688-amend.2.patch, HBASE-21688-amend.patch, > HBASE-21688-branch-2.1-v1.patch, HBASE-21688-v1.patch > > > Scan and fix code base to use new way of instantiating WAL File System. > https://issues.apache.org/jira/browse/HBASE-21457?focusedCommentId=16734688=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16734688 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21814) Remove the TODO in AccessControlLists#addUserPermission
[ https://issues.apache.org/jira/browse/HBASE-21814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-21814: --- Fix Version/s: 2.3.0 2.2.0 3.0.0 > Remove the TODO in AccessControlLists#addUserPermission > --- > > Key: HBASE-21814 > URL: https://issues.apache.org/jira/browse/HBASE-21814 > Project: HBase > Issue Type: Bug >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0 > > Attachments: HBASE-21814.master.001.patch > > > The TODO was added by me. Because this method happens within the RS. But > after HBASE-21739, grant/revoke will execute by master. So this is not a > problem now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21814) Remove the TODO in AccessControlLists#addUserPermission
Guanghao Zhang created HBASE-21814: -- Summary: Remove the TODO in AccessControlLists#addUserPermission Key: HBASE-21814 URL: https://issues.apache.org/jira/browse/HBASE-21814 Project: HBase Issue Type: Bug Reporter: Guanghao Zhang The TODO was added by me. Because this method happens within the RS. But after HBASE-21739, grant/revoke will execute by master. So this is not a problem now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21814) Remove the TODO in AccessControlLists#addUserPermission
[ https://issues.apache.org/jira/browse/HBASE-21814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-21814: --- Assignee: Guanghao Zhang Status: Patch Available (was: Open) > Remove the TODO in AccessControlLists#addUserPermission > --- > > Key: HBASE-21814 > URL: https://issues.apache.org/jira/browse/HBASE-21814 > Project: HBase > Issue Type: Bug >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > Attachments: HBASE-21814.master.001.patch > > > The TODO was added by me. Because this method happens within the RS. But > after HBASE-21739, grant/revoke will execute by master. So this is not a > problem now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21810) bulkload support set hfile compression on client
[ https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756778#comment-16756778 ] Yechao Chen commented on HBASE-21810: - The branch-1 patch does not apply to branch-1.2; added a branch-1.2 patch. > bulkload support set hfile compression on client > -- > > Key: HBASE-21810 > URL: https://issues.apache.org/jira/browse/HBASE-21810 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 1.3.3, 1.4.9, 2.1.2, 1.2.10, 2.0.4 >Reporter: Yechao Chen >Assignee: Yechao Chen >Priority: Major > Attachments: HBASE-21810.branch-1.001.patch, > HBASE-21810.branch-1.2.001.patch, HBASE-21810.branch-2.001.patch, > HBASE-21810.master.001.patch > > > hbase bulkload (HFileOutputFormat2) generates hfiles; the compression comes from the > table (cf) compression. > If the compression can be set on the client, it is sometimes useful. > Some cases in our production: > 1、hfile bulkload replication between data centers with a bandwidth limit: we > can set the compression of the bulkload hfiles without changing the table > compression > 2、bulkload hfiles without compression while the table compression is > gz/zstd/snappy...: this reduces hfile creation time, and compaction will > compress the hfiles eventually > 3、sometimes the yarn nodes (hfiles created by reduce) / the dobulkload client have > no compression lib but the hbase cluster has; it's useful for this case -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21810) bulkload support set hfile compression on client
[ https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yechao Chen updated HBASE-21810: Attachment: HBASE-21810.branch-1.2.001.patch > bulkload support set hfile compression on client > -- > > Key: HBASE-21810 > URL: https://issues.apache.org/jira/browse/HBASE-21810 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 1.3.3, 1.4.9, 2.1.2, 1.2.10, 2.0.4 >Reporter: Yechao Chen >Assignee: Yechao Chen >Priority: Major > Attachments: HBASE-21810.branch-1.001.patch, > HBASE-21810.branch-1.2.001.patch, HBASE-21810.branch-2.001.patch, > HBASE-21810.master.001.patch > > > hbase bulkload (HFileOutputFormat2) generates hfiles; the compression comes from the > table (cf) compression. > If the compression can be set on the client, it is sometimes useful. > Some cases in our production: > 1、hfile bulkload replication between data centers with a bandwidth limit: we > can set the compression of the bulkload hfiles without changing the table > compression > 2、bulkload hfiles without compression while the table compression is > gz/zstd/snappy...: this reduces hfile creation time, and compaction will > compress the hfiles eventually > 3、sometimes the yarn nodes (hfiles created by reduce) / the dobulkload client have > no compression lib but the hbase cluster has; it's useful for this case -- This message was sent by Atlassian JIRA (v7.6.3#76005)
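A minimal sketch of the client-side override idea described in this issue (the config key name here is hypothetical, not the key the patch actually introduces): a compression codec set in the job configuration, when present, takes precedence over the table's column-family compression when the HFiles are written.

```java
import java.util.HashMap;
import java.util.Map;

public class BulkloadCompression {
    // hypothetical key name; the real HFileOutputFormat2 config keys may differ
    static final String OVERRIDE_KEY = "hfile.compression.override";

    // pick the codec for generated HFiles: client override wins over the CF setting
    static String effectiveCompression(Map<String, String> jobConf, String cfCompression) {
        return jobConf.getOrDefault(OVERRIDE_KEY, cfCompression);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(effectiveCompression(conf, "GZ"));  // GZ: falls back to the table setting
        conf.put(OVERRIDE_KEY, "NONE");
        System.out.println(effectiveCompression(conf, "GZ"));  // NONE: client override wins
    }
}
```

This matches use case 2 above: write uncompressed HFiles quickly on the client side and let compaction re-compress them according to the table setting later.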
[jira] [Updated] (HBASE-21804) Remove 0.94 check from the Linkchecker job
[ https://issues.apache.org/jira/browse/HBASE-21804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-21804: --- Attachment: hbase-21804.master.001.patch > Remove 0.94 check from the Linkchecker job > -- > > Key: HBASE-21804 > URL: https://issues.apache.org/jira/browse/HBASE-21804 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > Attachments: hbase-21804.master.001.patch > > > This is a pretty old release. Even though we don't have the link to the doc > from our main page, the linkchecker job lands directly at > [https://hbase.apache.org/0.94/] which has around 90 odd missing file issues. > I haven't yet looked at the missing anchors stuff yet. > We can set linkchecker to not check 0.94. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21803) HBase website cleanup
[ https://issues.apache.org/jira/browse/HBASE-21803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-21803: --- Description: This umbrella is to track the HBase website cleanup. The last Linkchecker job(which was run on 25th jan (takes approximately 20+ hours to finish)) has reported "1458 missing files" and "7143 missing named anchors". Have filed this Jira to track several sub-jiras that might follow. Also, any other issues, other than the LinkChecker job ones, can be tracked here. Please feel free to create sub-tasks here for the found website related issues. Here's a copy of the most recent Linkchecker job: {code:java} Index of Linklint results Sat, 26-Jan-2019 05:04:24 (local) Linklint version: 2.3.5_ingo_020 summary.txt: summary of results log.txt: log of progress file.html: found 51187 files fileX.html: found 51187 files (cross referenced) fileF.html: found 51051 files with forward links remote.html: found 3431 other links remoteX.html: found 3431 other links (cross referenced) anchor.html: found 21525313 named anchors anchorX.html: found 21525313 named anchors (cross referenced) action.html: - 1 action skipped actionX.html: - 1 action skipped (cross referenced) skipped.html: - 2 files skipped skipX.html: - 2 files skipped (cross referenced) warn.html: warn 696 warnings warnX.html: warn 696 warnings (cross referenced) warnF.html: warn 253 files with warnings error.html: ERROR 1458 missing files errorX.html: ERROR 1458 missing files (cross referenced) errorF.html: ERROR 1417 files had broken links errorA.html: ERROR 7145 missing named anchors errorAX.html: ERROR 7145 missing named anchors (cross referenced) httpfail.html: - 1458 links: failed via http httpok.html: - 51187 links: ok via http mapped.html: - 4 links were mapped urlindex.html: results for remote urls {code} Of the reported 1458 missing files, almost all of them are javadoc issues(1200 of them belonging to 1.2 itself, around 90 of them from 0.94, 40 from 
2.0, 30 from 2.1 & rest master). was: This umbrella is to track the HBase website cleanup. The last Linkchecker job(which was run on 25th jan (takes approximately 20+ hours to finish)) has reported "1458 missing files" and "7143 missing named anchors". Have filed this Jira to track several sub-jiras that might follow. Also, any other issues, other than the LinkChecker job ones, can be tracked here. Please feel free to create sub-tasks here for the found website related issues. Here's a copy of the most recent Linkchecker job: {code:java} Index of Linklint results Sat, 26-Jan-2019 05:04:24 (local) Linklint version: 2.3.5_ingo_020 summary.txt: summary of results log.txt: log of progress file.html: found 51187 files fileX.html: found 51187 files (cross referenced) fileF.html: found 51051 files with forward links remote.html: found 3431 other links remoteX.html: found 3431 other links (cross referenced) anchor.html: found 21525313 named anchors anchorX.html: found 21525313 named anchors (cross referenced) action.html: - 1 action skipped actionX.html: - 1 action skipped (cross referenced) skipped.html: - 2 files skipped skipX.html: - 2 files skipped (cross referenced) warn.html: warn 696 warnings warnX.html: warn 696 warnings (cross referenced) warnF.html: warn 253 files with warnings error.html: ERROR 1458 missing files errorX.html: ERROR 1458 missing files (cross referenced) errorF.html: ERROR 1417 files had broken links errorA.html: ERROR 7145 missing named anchors errorAX.html: ERROR 7145 missing named anchors (cross referenced) httpfail.html: - 1458 links: failed via http httpok.html: - 51187 links: ok via http mapped.html: - 4 links were mapped urlindex.html: results for remote urls {code} > HBase website cleanup > - > > Key: HBASE-21803 > URL: https://issues.apache.org/jira/browse/HBASE-21803 > Project: HBase > Issue Type: Umbrella > Components: website >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > > This umbrella is to track the HBase website cleanup. 
The last Linkchecker > job(which was run on 25th jan (takes approximately 20+ hours to finish)) has > reported "1458 missing files" and "7143 missing named anchors". Have filed > this Jira to track several sub-jiras that might follow. > Also, any other issues, other than the LinkChecker job ones, can be tracked > here. Please feel free to create sub-tasks here for the found website related > issues. > Here's a copy of the most recent Linkchecker job: > {code:java} > Index of Linklint results > Sat, 26-Jan-2019 05:04:24 (local) > Linklint version: 2.3.5_ingo_020 > summary.txt: summary of results > log.txt:
[jira] [Comment Edited] (HBASE-21804) Remove 0.94 check from the Linkchecker job
[ https://issues.apache.org/jira/browse/HBASE-21804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756774#comment-16756774 ] Sakthi edited comment on HBASE-21804 at 1/31/19 1:42 AM: - The updated numbers in the report, after exclusion of 0.94: {code:none} summary.txt: summary of results ... error.html: ERROR 1350 missing files errorX.html: ERROR 1350 missing files (cross referenced) errorF.html: ERROR 1295 files had broken links errorA.html: ERROR 1753 missing named anchors errorAX.html: ERROR 1753 missing named anchors (cross referenced) ... {code} FYI: The umbrella jira contains the original numbers was (Author: jatsakthi): The updated numbers in the report, after exclusion of 0.94: {code:none} summary.txt: summary of results ... error.html: ERROR 1350 missing files errorX.html: ERROR 1350 missing files (cross referenced) errorF.html: ERROR 1295 files had broken links errorA.html: ERROR 1753 missing named anchors errorAX.html: ERROR 1753 missing named anchors (cross referenced) ... {code} > Remove 0.94 check from the Linkchecker job > -- > > Key: HBASE-21804 > URL: https://issues.apache.org/jira/browse/HBASE-21804 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > Attachments: hbase-21804.master.001.patch > > > This is a pretty old release. Even though we don't have the link to the doc > from our main page, the linkchecker job lands directly at > [https://hbase.apache.org/0.94/] which has around 90 odd missing file issues. > I haven't yet looked at the missing anchors stuff yet. > We can set linkchecker to not check 0.94. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21804) Remove 0.94 check from the Linkchecker job
[ https://issues.apache.org/jira/browse/HBASE-21804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-21804: --- Status: Patch Available (was: In Progress) > Remove 0.94 check from the Linkchecker job > -- > > Key: HBASE-21804 > URL: https://issues.apache.org/jira/browse/HBASE-21804 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > Attachments: hbase-21804.master.001.patch > > > This is a pretty old release. Even though we don't have the link to the doc > from our main page, the linkchecker job lands directly at > [https://hbase.apache.org/0.94/] which has around 90 odd missing file issues. > I haven't yet looked at the missing anchors stuff yet. > We can set linkchecker to not check 0.94. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21804) Remove 0.94 check from the Linkchecker job
[ https://issues.apache.org/jira/browse/HBASE-21804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756774#comment-16756774 ] Sakthi commented on HBASE-21804: The updated numbers in the report, after exclusion of 0.94: {code:none} summary.txt: summary of results ... error.html: ERROR 1350 missing files errorX.html: ERROR 1350 missing files (cross referenced) errorF.html: ERROR 1295 files had broken links errorA.html: ERROR 1753 missing named anchors errorAX.html: ERROR 1753 missing named anchors (cross referenced) ... {code} > Remove 0.94 check from the Linkchecker job > -- > > Key: HBASE-21804 > URL: https://issues.apache.org/jira/browse/HBASE-21804 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > Attachments: hbase-21804.master.001.patch > > > This is a pretty old release. Even though we don't have the link to the doc > from our main page, the linkchecker job lands directly at > [https://hbase.apache.org/0.94/] which has around 90 odd missing file issues. > I haven't yet looked at the missing anchors stuff yet. > We can set linkchecker to not check 0.94. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21810) bulkload support set hfile compression on client
[ https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756768#comment-16756768 ] Yechao Chen commented on HBASE-21810: - The unit test failure is not related. > bulkload support set hfile compression on client > -- > > Key: HBASE-21810 > URL: https://issues.apache.org/jira/browse/HBASE-21810 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 1.3.3, 1.4.9, 2.1.2, 1.2.10, 2.0.4 >Reporter: Yechao Chen >Assignee: Yechao Chen >Priority: Major > Attachments: HBASE-21810.branch-1.001.patch, > HBASE-21810.branch-2.001.patch, HBASE-21810.master.001.patch > > > hbase bulkload (HFileOutputFormat2) generates hfiles; the compression comes from the > table (cf) compression. > If the compression can be set on the client, it is sometimes useful. > Some cases in our production: > 1、hfile bulkload replication between data centers with a bandwidth limit: we > can set the compression of the bulkload hfiles without changing the table > compression > 2、bulkload hfiles without compression while the table compression is > gz/zstd/snappy...: this reduces hfile creation time, and compaction will > compress the hfiles eventually > 3、sometimes the yarn nodes (hfiles created by reduce) / the dobulkload client have > no compression lib but the hbase cluster has; it's useful for this case -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HBASE-21804) Remove 0.94 check from the Linkchecker job
[ https://issues.apache.org/jira/browse/HBASE-21804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-21804 started by Sakthi. -- > Remove 0.94 check from the Linkchecker job > -- > > Key: HBASE-21804 > URL: https://issues.apache.org/jira/browse/HBASE-21804 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > > This is a pretty old release. Even though we don't have the link to the doc > from our main page, the linkchecker job lands directly at > [https://hbase.apache.org/0.94/] which has around 90 odd missing file issues. > I haven't yet looked at the missing anchors stuff yet. > We can set linkchecker to not check 0.94. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-21773: - Attachment: HBASE-21773.master.003.patch > rowcounter utility should respond to pleas for help > --- > > Key: HBASE-21773 > URL: https://issues.apache.org/jira/browse/HBASE-21773 > Project: HBase > Issue Type: Bug > Components: tooling >Affects Versions: 2.1.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-21773.master.001.patch, > HBASE-21773.master.002.patch, HBASE-21773.master.003.patch > > > {{hbase rowcounter}} does not respond to reasonable requests for help, i.e. > {{--help}}, {{-h}}, or {{-?}} > {code} > [systest@busbey-training-1 root]$ hbase rowcounter -? > OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and > will likely be removed in a future release > 19/01/24 12:30:00 INFO client.RMProxy: Connecting to ResourceManager at > busbey-training-1.gce.cloudera.com/172.31.116.31:8032 > 19/01/24 12:30:01 INFO hdfs.DFSClient: Created token for systest: > HDFS_DELEGATION_TOKEN owner=syst...@gce.cloudera.com, renewer=yarn, > realUser=, issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8 on 172.31.116.31:8020 > 19/01/24 12:30:01 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-3.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.52:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361801965, maxDate=1548966601965, > sequenceNumber=5, masterKeyId=17)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: HDFS_DELEGATION_TOKEN, > Service: 172.31.116.31:8020, Ident: (token for systest: HDFS_DELEGATION_TOKEN > owner=syst...@gce.cloudera.com, 
renewer=yarn, realUser=, > issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.52:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361801965, maxDate=1548966601965, sequenceNumber=5, > masterKeyId=17) > 19/01/24 12:30:02 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-4.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.50:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361802363, maxDate=1548966602363, > sequenceNumber=6, masterKeyId=18)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.50:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361802363, maxDate=1548966602363, sequenceNumber=6, > masterKeyId=18) > 19/01/24 12:30:02 INFO mapreduce.JobResourceUploader: Disabling Erasure > Coding for path: /user/systest/.staging/job_1548349234632_0003 > 19/01/24 12:30:03 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /user/systest/.staging/job_1548349234632_0003 > Exception in thread "main" java.lang.IllegalArgumentException: Illegal first > character <45> at 0. User-space table qualifiers can only start with > 'alphanumeric characters' from any language: -? 
> at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:193) > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156) > at org.apache.hadoop.hbase.TableName.(TableName.java:346) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:469) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:198) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:243) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:310) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:327) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) > at
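[editor's note] The failure above happens because {{-?}} reaches table-name parsing (TableName.valueOf) before any help handling. A minimal sketch of the fix direction — check for help flags before interpreting positional arguments — with hypothetical names; this is not the actual HBASE-21773 patch:

```java
import java.util.Arrays;
import java.util.List;

public class HelpCheck {
    // Flags the tool should treat as a request for usage text.
    private static final List<String> HELP_FLAGS = Arrays.asList("--help", "-h", "-?");

    // Returns true if the caller should print usage and exit 0,
    // instead of letting "-?" fall through to table-name parsing.
    static boolean wantsHelp(String[] args) {
        if (args.length == 0) {
            return true; // no args: show usage rather than fail later
        }
        for (String arg : args) {
            if (HELP_FLAGS.contains(arg)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(wantsHelp(new String[] {"-?"}));      // true
        System.out.println(wantsHelp(new String[] {"mytable"})); // false
    }
}
```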
[jira] [Commented] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756769#comment-16756769 ] Wellington Chevreuil commented on HBASE-21773: -- Addressing checkstyle/findbugs issues. BTW, one of the checkstyle issues does not seem valid; the import order should be correct. > rowcounter utility should respond to pleas for help > --- > > Key: HBASE-21773 > URL: https://issues.apache.org/jira/browse/HBASE-21773 > Project: HBase > Issue Type: Bug > Components: tooling >Affects Versions: 2.1.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-21773.master.001.patch, > HBASE-21773.master.002.patch, HBASE-21773.master.003.patch > > > {{hbase rowcounter}} does not respond to reasonable requests for help, i.e. > {{--help}}, {{-h}}, or {{-?}}
[jira] [Commented] (HBASE-21772) hbase cli help does not mention 'zkcli'
[ https://issues.apache.org/jira/browse/HBASE-21772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756753#comment-16756753 ] Sakthi commented on HBASE-21772: [~busbey], looks like we filter out the display of a few of the help options (including zkcli) based on whether or not we are in the omnibus tarball.
{code}
# Detect if we are in the omnibus tarball
in_omnibus_tarball="false"
if [ -f "${HBASE_HOME}/bin/hbase-daemons.sh" ]; then
  in_omnibus_tarball="true"
fi

echo "Usage: hbase [<options>] <command> [<args>]"
...
echo "Commands:"
echo "Some commands take arguments. Pass no args or -h for usage."
echo "  shell           Run the HBase shell"
echo "  hbck            Run the HBase 'fsck' tool. Defaults read-only hbck1."
...
if [ "${in_omnibus_tarball}" = "true" ]; then
  echo "  wal             Write-ahead-log analyzer"
  ...
  echo "  zkcli           Run the ZooKeeper shell"
  ...
  echo "  clean           Run the HBase clean up script"
fi
echo "  classpath       Dump hbase CLASSPATH"
...
echo "  CLASSNAME       Run the class named CLASSNAME"
{code}
> hbase cli help does not mention 'zkcli'
> ---
>
> Key: HBASE-21772
> URL: https://issues.apache.org/jira/browse/HBASE-21772
> Project: HBase
> Issue Type: Bug
> Components: Operability, Zookeeper
> Affects Versions: 2.1.0
> Reporter: Sean Busbey
> Assignee: Sakthi
> Priority: Minor
>
> the hbase command's help doesn't mention zkcli
> {code}
> hbase
> Usage: hbase [<options>] <command> [<args>]
> Options:
>   --config DIR          Configuration direction to use. Default: ./conf
>   --hosts HOSTS         Override the list in 'regionservers' file
>   --auth-as-server      Authenticate to ZooKeeper using servers configuration
>   --internal-classpath  Skip attempting to use client facing jars (WARNING: unstable results between versions)
> Commands:
> Some commands take arguments. Pass no args or -h for usage.
>   shell           Run the HBase shell
>   hbck            Run the HBase 'fsck' tool. Defaults read-only hbck1.
>                   Pass '-j /path/to/HBCK2.jar' to run hbase-2.x HBCK2.
>   snapshot        Tool for managing snapshots
>   classpath       Dump hbase CLASSPATH
>   mapredcp        Dump CLASSPATH entries required by mapreduce
>   pe              Run PerformanceEvaluation
>   ltt             Run LoadTestTool
>   canary          Run the Canary tool
>   version         Print the version
>   regionsplitter  Run RegionSplitter tool
>   rowcounter      Run RowCounter tool
>   cellcounter     Run CellCounter tool
>   pre-upgrade     Run Pre-Upgrade validator tool
>   CLASSNAME       Run the class named CLASSNAME
> {code}
> the command itself still appears to work.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21806) add an option to roll WAL on very slow syncs
[ https://issues.apache.org/jira/browse/HBASE-21806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-21806: - Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Tests look flaky in other JIRAs and pass locally. Committed to master after fixing the log line. Thanks for the review! > add an option to roll WAL on very slow syncs > > > Key: HBASE-21806 > URL: https://issues.apache.org/jira/browse/HBASE-21806 > Project: HBase > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21806.patch > > > In large heterogeneous clusters sometimes a slow datanode can cause WAL syncs > to be very slow. In this case, before the bad datanode recovers, or is > discovered and repaired, it would be helpful to roll WAL on a very slow sync > to get a new pipeline. > Otherwise the slow WAL will impact write latency for a long time (slow writes > result in less writes result in the WAL not being rolled for longer) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
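[editor's note] A rough sketch of the mechanism being added: after each WAL sync, compare the measured duration against a configured threshold and, if exceeded, request a log roll to get a fresh pipeline. Class and method names are illustrative assumptions, not the committed HBASE-21806 code; the 10s default follows the comment thread below.

```java
// Hypothetical policy object: decides whether a single slow sync
// should trigger a WAL roll to replace a bad datanode pipeline.
public class SlowSyncRollPolicy {
    private final long rollOnSyncNs;

    SlowSyncRollPolicy(long rollOnSyncMs) {
        this.rollOnSyncNs = rollOnSyncMs * 1_000_000L;
    }

    // Called with the measured duration of one sync, in nanoseconds.
    boolean shouldRequestRoll(long syncDurationNs) {
        return syncDurationNs >= rollOnSyncNs;
    }

    public static void main(String[] args) {
        // 10s threshold, the conservative value discussed in the comments.
        SlowSyncRollPolicy policy = new SlowSyncRollPolicy(10_000);
        System.out.println(policy.shouldRequestRoll(12_000_000_000L)); // true: 12s sync
        System.out.println(policy.shouldRequestRoll(50_000_000L));     // false: 50ms sync
    }
}
```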
[jira] [Commented] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756746#comment-16756746 ] Hadoop QA commented on HBASE-21773: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 30s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 32s{color} | {color:red} hbase-mapreduce generated 4 new + 155 unchanged - 3 fixed = 159 total (was 158) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} hbase-mapreduce: The patch generated 2 new + 43 unchanged - 1 fixed = 45 total (was 44) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 33s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s{color} | {color:red} hbase-mapreduce generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 29s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 17s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 53m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-mapreduce | | | Should org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterCommandLineParser be a _static_ inner class? At RowCounter.java:inner class? At RowCounter.java:[lines 285-292] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21773 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956976/HBASE-21773.master.002.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname |
[jira] [Comment Edited] (HBASE-21806) add an option to roll WAL on very slow syncs
[ https://issues.apache.org/jira/browse/HBASE-21806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756740#comment-16756740 ] Sergey Shelukhin edited comment on HBASE-21806 at 1/31/19 12:46 AM: Yeah based on a test run in our cluster I think the value for the new parameter could be around 3-5sec. The timeout could also be shorter although I see it will cause RS to crash when the timeout is hit during memstore flush WAL write, so I'm not sure if making it very short is a good idea. I will keep the new setting at the conservative 10sec for now. was (Author: sershe): Yeah based on a test run in our cluster I think the value for the new parameter could be around 3-5sec. The timeout could also be shorter although I see it will cause RS to crash when the timeout it hit during memstore flush WAL write, so I'm not sure if making it very short is a good idea. I will keep the new setting it at the conservative 10sec for now. > add an option to roll WAL on very slow syncs > > > Key: HBASE-21806 > URL: https://issues.apache.org/jira/browse/HBASE-21806 > Project: HBase > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HBASE-21806.patch > > > In large heterogeneous clusters sometimes a slow datanode can cause WAL syncs > to be very slow. In this case, before the bad datanode recovers, or is > discovered and repaired, it would be helpful to roll WAL on a very slow sync > to get a new pipeline. > Otherwise the slow WAL will impact write latency for a long time (slow writes > result in less writes result in the WAL not being rolled for longer) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21806) add an option to roll WAL on very slow syncs
[ https://issues.apache.org/jira/browse/HBASE-21806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756740#comment-16756740 ] Sergey Shelukhin commented on HBASE-21806: -- Yeah based on a test run in our cluster I think the value for the new parameter could be around 3-5sec. The timeout could also be shorter although I see it will cause RS to crash when the timeout is hit during memstore flush WAL write, so I'm not sure if making it very short is a good idea. I will keep the new setting at the conservative 10sec for now. > add an option to roll WAL on very slow syncs > > > Key: HBASE-21806 > URL: https://issues.apache.org/jira/browse/HBASE-21806 > Project: HBase > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HBASE-21806.patch > > > In large heterogeneous clusters sometimes a slow datanode can cause WAL syncs > to be very slow. In this case, before the bad datanode recovers, or is > discovered and repaired, it would be helpful to roll WAL on a very slow sync > to get a new pipeline. > Otherwise the slow WAL will impact write latency for a long time (slow writes > result in less writes result in the WAL not being rolled for longer) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21812) Address ruby static analysis for shell/bin modules [2nd pass]
[ https://issues.apache.org/jira/browse/HBASE-21812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756738#comment-16756738 ] Sakthi commented on HBASE-21812: Also, [~elserj], what's your opinion on this one? :) > Address ruby static analysis for shell/bin modules [2nd pass] > - > > Key: HBASE-21812 > URL: https://issues.apache.org/jira/browse/HBASE-21812 > Project: HBase > Issue Type: Improvement >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > > -HBASE-18237- did a pass in the shell and bin directories. I think we can go > for another round. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21807) Backport HBASE-21225 to branch-2.0 and branch-2.1
[ https://issues.apache.org/jira/browse/HBASE-21807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756736#comment-16756736 ] Sakthi commented on HBASE-21807: Thanks [~elserj] ! > Backport HBASE-21225 to branch-2.0 and branch-2.1 > - > > Key: HBASE-21807 > URL: https://issues.apache.org/jira/browse/HBASE-21807 > Project: HBase > Issue Type: Task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 2.1.3, 2.0.5 > > Attachments: hbase-21225.branch-2.0.001.patch > > > Backport HBASE-21225 to branch-2.0 and branch-2.1. The specified Jira is > closed, hence filing this one to track the backport. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21811) region can be opened on two servers due to race condition with procedures and server reports
[ https://issues.apache.org/jira/browse/HBASE-21811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-21811: - Status: Patch Available (was: Open) A patch to fix both issues. I'll file a separate JIRA for SNRYE optimization cause it might require API changes. > region can be opened on two servers due to race condition with procedures and > server reports > > > Key: HBASE-21811 > URL: https://issues.apache.org/jira/browse/HBASE-21811 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 3.0.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HBASE-21811.patch > > > Looks like the region server responses are being processed incorrectly in > places allowing the region to be opened on two servers. > * The region server report handling in procedures should check which server > is reporting. > * Also although I didn't check (and it isn't implicated in this bug), RS must > check in OPEN that it's actually the correct RS the master sent the open to (w.r.t. > start timestamp) > This was previously "mitigated" by master killing the RS with incorrect > reports, but due to race conditions with reports and assignment the kill was > replaced with a warning, so now this condition persists. > Regardless, the kill approach is not a good fix because there's still a > window when a region can be opened on two servers. > A region is being opened by server_48c. The server dies, and we process the > retry correctly (retry=3 because 2 previous similar open failures were > processed correctly). > We start opening it on server_1aa now. 
> {noformat} > 2019-01-28 18:12:09,862 INFO [KeepAlivePEWorker-104] > assignment.RegionStateStore: pid=4915 updating hbase:meta > row=8be2a423b16471b9417f0f7de04281c6, regionState=ABNORMALLY_CLOSED > 2019-01-28 18:12:09,862 INFO [KeepAlivePEWorker-104] > procedure.ServerCrashProcedure: pid=11944, > state=RUNNABLE:SERVER_CRASH_ASSIGN, hasLock=true; ServerCrashProcedure > server=server_48c,17020,1548726406632, splitWal=true, meta=false found RIT > pid=4915, ppid=7, state=WAITING:REGION_STATE_TRANSITION_CONFIRM_OPENED, > hasLock=true; TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=OPENING, > location=server_48c,17020,1548726406632, table=table, > region=8be2a423b16471b9417f0f7de04281c6 > 2019-01-28 18:12:10,778 INFO [KeepAlivePEWorker-80] > assignment.TransitRegionStateProcedure: Retry=3 of max=2147483647; pid=4915, > ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; > TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=ABNORMALLY_CLOSED, > location=null > ... > 2019-01-28 18:12:10,902 INFO [KeepAlivePEWorker-80] > assignment.TransitRegionStateProcedure: Starting pid=4915, ppid=7, > state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; > TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=ABNORMALLY_CLOSED, > location=null; forceNewPlan=true, retain=false > 2019-01-28 18:12:11,114 INFO [PEWorker-7] assignment.RegionStateStore: > pid=4915 updating hbase:meta row=8be2a423b16471b9417f0f7de04281c6, > regionState=OPENING, regionLocation=server_1aa,17020,1548727658713 > {noformat} > However, we get the remote procedure failure from 48c after we've already > started that. > It actually tried to open on the restarted RS, which makes me wonder if this > is safe also w.r.t. other races - what if RS already initialized and didn't > error out? 
> Need to check if we verify the start code expected by master on RS when > opening. > {noformat} > 2019-01-28 18:12:12,179 WARN [RSProcedureDispatcher-pool4-t362] > assignment.RegionRemoteProcedureBase: The remote operation pid=11050, > ppid=4915, state=SUCCESS, hasLock=false; > org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region > {ENCODED => 8be2a423b16471b9417f0f7de04281c6 ... to server > server_48c,17020,1548726406632 failed > org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: > org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server > server_48c,17020,1548727752747 is not running yet > 2019-01-28 18:12:12,179 WARN [RSProcedureDispatcher-pool4-t362] > procedure.RSProcedureDispatcher: server server_48c,17020,1548726406632 is not > up for a while; try a new one > {noformat} > Without any other reason (at least logged), the RIT immediately retries again > and chooses a new candidate. It then retries again and goes to the new 48c, > but that's unrelated. > {noformat} > 2019-01-28 18:12:12,289 INFO [KeepAlivePEWorker-100] >
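[editor's note] The check proposed above can be sketched as comparing the full server identity — host, port, and start code — of the reporting server against the server instance the procedure actually dispatched to. The class below is a hypothetical illustration (HBase's real ServerName type plays this role); the start codes are taken from the log above, where the restarted 48c has a newer start code than the dispatch target:

```java
// Hypothetical identity triple for a region server process.
public class ServerIdentity {
    final String host;
    final int port;
    final long startCode; // process start timestamp; changes on restart

    ServerIdentity(String host, int port, long startCode) {
        this.host = host;
        this.port = port;
        this.startCode = startCode;
    }

    // A report should only be acted on when it comes from the exact
    // process instance the procedure targeted, start code included.
    boolean sameInstance(ServerIdentity other) {
        return host.equals(other.host) && port == other.port && startCode == other.startCode;
    }

    public static void main(String[] args) {
        ServerIdentity target = new ServerIdentity("server_48c", 17020, 1548726406632L);
        ServerIdentity restarted = new ServerIdentity("server_48c", 17020, 1548727752747L);
        // Same host:port but a different start code: the response comes from
        // a restarted process and must not be attributed to the old server.
        System.out.println(target.sameInstance(restarted)); // false
    }
}
```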
[jira] [Updated] (HBASE-21807) Backport HBASE-21225 to branch-2.0 and branch-2.1
[ https://issues.apache.org/jira/browse/HBASE-21807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-21807: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.5 2.1.3 Status: Resolved (was: Patch Available) Thanks for the clean backports, Sakthi. Quota unit tests on branch-2.0 and branch-2.1 ran cleanly. > Backport HBASE-21225 to branch-2.0 and branch-2.1 > - > > Key: HBASE-21807 > URL: https://issues.apache.org/jira/browse/HBASE-21807 > Project: HBase > Issue Type: Task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 2.1.3, 2.0.5 > > Attachments: hbase-21225.branch-2.0.001.patch > > > Backport HBASE-21225 to branch-2.0 and branch-2.1. The specified Jira is > closed, hence filing this one to track the backport. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21813) ServerNotRunningYet exception should include machine-readable server name
[ https://issues.apache.org/jira/browse/HBASE-21813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-21813: - Description: As far as I see, this exception is thrown before the start code of the destination can be verified from the request on RS side; the code that handles it (e.g. retries from open region procedure) uses it to retry later on the same server. However, if the start code of the server that is not running is different from the intended start code of the operation those retries are a waste of time. The exception should include a server name with a start code (which it already includes as part of the message in some cases), so that the caller could check that. was: As far as I see, this exception is thrown before the start code of the destination can be verified from the request on RS side; the code that handles it (e.g. retries from open region procedure) use it to retry later on the same server. However, if the start code of the server that is not running is different from the intended start code of the operations those retries are a waste of time. The exception should include a server name with a start code (which it already includes as part of the message in some cases), so that the caller could check that. > ServerNotRunningYet exception should include machine-readable server name > - > > Key: HBASE-21813 > URL: https://issues.apache.org/jira/browse/HBASE-21813 > Project: HBase > Issue Type: Improvement >Reporter: Sergey Shelukhin >Priority: Major > > As far as I see, this exception is thrown before the start code of the > destination can be verified from the request on RS side; the code that > handles it (e.g. retries from open region procedure) uses it to retry later on > the same server. > However, if the start code of the server that is not running is different > from the intended start code of the operation those retries are a waste of > time. 
> The exception should include a server name with a start code (which it > already includes as part of the message in some cases), so that the caller > could check that. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
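[editor's note] One way the improvement could look — an exception that carries the not-yet-running server's name and start code as a structured field rather than only message text, so callers can compare start codes without parsing. This is a hypothetical sketch, not the actual ServerNotRunningYetException API:

```java
// Hypothetical exception carrying a machine-readable server name.
public class NotRunningYetException extends Exception {
    // e.g. "server_48c,17020,1548727752747" (host,port,startcode)
    private final String serverName;

    NotRunningYetException(String serverName) {
        super("Server " + serverName + " is not running yet");
        this.serverName = serverName;
    }

    // Callers (e.g. retry logic in open region procedure) can compare
    // this against the intended target instead of retrying blindly.
    String getServerName() {
        return serverName;
    }

    public static void main(String[] args) {
        NotRunningYetException e = new NotRunningYetException("server_48c,17020,1548727752747");
        System.out.println(e.getServerName()); // server_48c,17020,1548727752747
    }
}
```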
[jira] [Updated] (HBASE-21811) region can be opened on two servers due to race condition with procedures and server reports
[ https://issues.apache.org/jira/browse/HBASE-21811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-21811: - Attachment: HBASE-21811.patch > region can be opened on two servers due to race condition with procedures and > server reports > > > Key: HBASE-21811 > URL: https://issues.apache.org/jira/browse/HBASE-21811 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 3.0.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HBASE-21811.patch > > > Looks like the region server responses are being processed incorrectly in > places allowing te region to be opened on two servers. > * The region server report handling in procedures should check which server > is reporting. > * Also although I didn't check (and it isn't implicated in this bug), RS must > check in OPEN that it's actually the correct RS master sent open to (w.r.t. > start timestamp) > This was previosuly "mitigated" by master killing the RS with incorrect > reports, but due to race conditions with reports and assignment the kill was > replaced with a warning, so now this condition persists. > Regardless, the kill approach is not a good fix because there's still a > window when a region can be opened on two servers. > A region is being opened by server_48c. The server dies, and we process the > retry correctly (retry=3 because 2 previous similar open failures were > processed correctly). > We start opening it on server_1aa now. 
> {noformat} > 2019-01-28 18:12:09,862 INFO [KeepAlivePEWorker-104] > assignment.RegionStateStore: pid=4915 updating hbase:meta > row=8be2a423b16471b9417f0f7de04281c6, regionState=ABNORMALLY_CLOSED > 2019-01-28 18:12:09,862 INFO [KeepAlivePEWorker-104] > procedure.ServerCrashProcedure: pid=11944, > state=RUNNABLE:SERVER_CRASH_ASSIGN, hasLock=true; ServerCrashProcedure > server=server_48c,17020,1548726406632, splitWal=true, meta=false found RIT > pid=4915, ppid=7, state=WAITING:REGION_STATE_TRANSITION_CONFIRM_OPENED, > hasLock=true; TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=OPENING, > location=server_48c,17020,1548726406632, table=table, > region=8be2a423b16471b9417f0f7de04281c6 > 2019-01-28 18:12:10,778 INFO [KeepAlivePEWorker-80] > assignment.TransitRegionStateProcedure: Retry=3 of max=2147483647; pid=4915, > ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; > TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=ABNORMALLY_CLOSED, > location=null > ... > 2019-01-28 18:12:10,902 INFO [KeepAlivePEWorker-80] > assignment.TransitRegionStateProcedure: Starting pid=4915, ppid=7, > state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; > TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=ABNORMALLY_CLOSED, > location=null; forceNewPlan=true, retain=false > 2019-01-28 18:12:11,114 INFO [PEWorker-7] assignment.RegionStateStore: > pid=4915 updating hbase:meta row=8be2a423b16471b9417f0f7de04281c6, > regionState=OPENING, regionLocation=server_1aa,17020,1548727658713 > {noformat} > However, we get the remote procedure failure from 48c after we've already > started that. > It actually tried to open on the restarted RS, which makes me wonder if this > is safe also w.r.t. other races - what if RS already initialized and didn't > error out? 
> Need to check if we verify the start code expected by master on RS when > opening. > {noformat} > 2019-01-28 18:12:12,179 WARN [RSProcedureDispatcher-pool4-t362] > assignment.RegionRemoteProcedureBase: The remote operation pid=11050, > ppid=4915, state=SUCCESS, hasLock=false; > org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region > {ENCODED => 8be2a423b16471b9417f0f7de04281c6 ... to server > server_48c,17020,1548726406632 failed > org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: > org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server > server_48c,17020,1548727752747 is not running yet > 2019-01-28 18:12:12,179 WARN [RSProcedureDispatcher-pool4-t362] > procedure.RSProcedureDispatcher: server server_48c,17020,1548726406632 is not > up for a while; try a new one > {noformat} > Without any other reason (at least logged), the RIT immediately retries again > and chooses a new candidate. It then retries again and goes to the new 48c, > but that's unrelated. > {noformat} > 2019-01-28 18:12:12,289 INFO [KeepAlivePEWorker-100] > assignment.TransitRegionStateProcedure: Retry=4 of max=2147483647; pid=4915, > ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED,
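The first suggested guard above — report handling should check which server *instance* is reporting, including the start code, so a report (or failure) from a restarted RS on the same host:port is not attributed to the procedure's expected server — can be sketched as follows. `ServerReportCheck` and its nested `ServerName` are illustrative stand-ins, not actual HBase classes:

```java
// Illustrative sketch: a region-transition report should only be accepted
// when the reporting server matches the procedure's expected server,
// including the start code. A restarted instance of the same host:port has
// a different start code and must be treated as a different server.
public class ServerReportCheck {
    // host, port and startCode together identify one server *instance*
    public static final class ServerName {
        final String host; final int port; final long startCode;
        public ServerName(String host, int port, long startCode) {
            this.host = host; this.port = port; this.startCode = startCode;
        }
        public boolean sameInstance(ServerName other) {
            return host.equals(other.host) && port == other.port
                && startCode == other.startCode;
        }
    }

    /** Accept the report only if it comes from the exact expected instance. */
    public static boolean acceptReport(ServerName expected, ServerName reporting) {
        return expected != null && expected.sameInstance(reporting);
    }

    public static void main(String[] args) {
        // Start codes taken from the log excerpt: the RS restarted, so the
        // old procedure's server and the reporting server differ.
        ServerName expected  = new ServerName("server_48c", 17020, 1548726406632L);
        ServerName restarted = new ServerName("server_48c", 17020, 1548727752747L);
        System.out.println(acceptReport(expected, restarted)); // false: reject
        System.out.println(acceptReport(expected,
            new ServerName("server_48c", 17020, 1548726406632L))); // true
    }
}
```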
[jira] [Updated] (HBASE-21813) ServerNotRunningYet exception should include machine-readable server name
[ https://issues.apache.org/jira/browse/HBASE-21813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-21813: - Description: As far as I see, this exception is thrown before the start code of the destination can be verified from the request on RS side; the code that handles it (e.g. retries from the open region procedure) uses it to retry later on the same server. However, if the start code of the server that is not running is different from the intended start code of the operation, those retries are a waste of time. The exception should include the not-yet-running server name with its start code (in fact, it is already included as part of the message in some cases), so that the caller could check that. was: As far as I see, this exception is thrown before the start code of the destination can be verified from the request on RS side; the code that handles it (e.g. retries from open region procedure) use it to retry later on the same server. However, if the start code of the server that is not running is different from the intended start code of the operation those retries are a waste of time. The exception should include a server name with a start code (which it already includes as part of the message in some cases), so that the caller could check that. > ServerNotRunningYet exception should include machine-readable server name > - > > Key: HBASE-21813 > URL: https://issues.apache.org/jira/browse/HBASE-21813 > Project: HBase > Issue Type: Improvement >Reporter: Sergey Shelukhin >Priority: Major > > As far as I see, this exception is thrown before the start code of the > destination can be verified from the request on RS side; the code that > handles it (e.g. retries from the open region procedure) uses it to retry later on > the same server. > However, if the start code of the server that is not running is different > from the intended start code of the operation, those retries are a waste of > time. 
> The exception should include the not-yet-running server name with its start > code (in fact, it is already included as part of the message in some cases), > so that the caller could check that. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21813) ServerNotRunningYet exception should include machine-readable server name
Sergey Shelukhin created HBASE-21813: Summary: ServerNotRunningYet exception should include machine-readable server name Key: HBASE-21813 URL: https://issues.apache.org/jira/browse/HBASE-21813 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin As far as I see, this exception is thrown before the start code of the destination can be verified from the request on RS side; the code that handles it (e.g. retries from the open region procedure) uses it to retry later on the same server. However, if the start code of the server that is not running is different from the intended start code of the operation, those retries are a waste of time. The exception should include a server name with a start code (which it already includes as part of the message in some cases), so that the caller could check that. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
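A minimal sketch of the improvement being requested, assuming the canonical `host,port,startcode` server-name form seen in the logs. The class below is illustrative only, not the real `org.apache.hadoop.hbase.ipc.ServerNotRunningYetException`:

```java
// Illustrative sketch: carry the not-yet-running server's name as structured
// data on the exception, not only inside the message text, so a caller can
// compare the actual start code against the one it intended to talk to.
public class ServerNotRunningSketch {
    public static class ServerNotRunningYetException extends Exception {
        private final String serverName; // e.g. "server_48c,17020,1548727752747"
        public ServerNotRunningYetException(String serverName) {
            super("Server " + serverName + " is not running yet");
            this.serverName = serverName;
        }
        public String getServerName() { return serverName; }
        /** Start code parsed from the canonical host,port,startcode form. */
        public long getStartCode() {
            return Long.parseLong(serverName.substring(serverName.lastIndexOf(',') + 1));
        }
    }

    public static void main(String[] args) {
        ServerNotRunningYetException e =
            new ServerNotRunningYetException("server_48c,17020,1548727752747");
        long intendedStartCode = 1548726406632L; // the instance the caller meant
        // Different start code -> the intended instance is gone; stop retrying
        // against this server instead of retrying "later on the same server".
        System.out.println(e.getStartCode() != intendedStartCode); // true
    }
}
```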
[jira] [Commented] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756721#comment-16756721 ] Hadoop QA commented on HBASE-21773: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 7s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 3m 15s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 20s{color} | {color:red} hbase-mapreduce in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 20s{color} | {color:red} hbase-mapreduce in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} hbase-mapreduce: The patch generated 2 new + 43 unchanged - 1 fixed = 45 total (was 44) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedjars {color} | {color:red} 3m 23s{color} | {color:red} patch has 14 errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 57s{color} | {color:red} The patch causes 14 errors with Hadoop v2.7.4. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 57s{color} | {color:red} The patch causes 14 errors with Hadoop v3.0.0. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s{color} | {color:red} hbase-mapreduce in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 21s{color} | {color:red} hbase-mapreduce in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 59s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21773 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956975/HBASE-21773.master.001.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux fb943158aa18 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 513ba9ac59 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | mvninstall | https://builds.apache.org/job/PreCommit-HBASE-Build/15798/artifact/patchprocess/patch-mvninstall-root.txt | | compile |
[jira] [Commented] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756697#comment-16756697 ] Wellington Chevreuil commented on HBASE-21773: -- Submitted a new one with the missing class from previous. > rowcounter utility should respond to pleas for help > --- > > Key: HBASE-21773 > URL: https://issues.apache.org/jira/browse/HBASE-21773 > Project: HBase > Issue Type: Bug > Components: tooling >Affects Versions: 2.1.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-21773.master.001.patch, > HBASE-21773.master.002.patch > > > {{hbase rowcounter}} does not respond to reasonable requests for help, i.e. > {{--help}}, {{-h}}, or {{-?}} > {code} > [systest@busbey-training-1 root]$ hbase rowcounter -? > OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and > will likely be removed in a future release > 19/01/24 12:30:00 INFO client.RMProxy: Connecting to ResourceManager at > busbey-training-1.gce.cloudera.com/172.31.116.31:8032 > 19/01/24 12:30:01 INFO hdfs.DFSClient: Created token for systest: > HDFS_DELEGATION_TOKEN owner=syst...@gce.cloudera.com, renewer=yarn, > realUser=, issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8 on 172.31.116.31:8020 > 19/01/24 12:30:01 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-3.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.52:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361801965, maxDate=1548966601965, > sequenceNumber=5, masterKeyId=17)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: HDFS_DELEGATION_TOKEN, > Service: 172.31.116.31:8020, Ident: (token for systest: HDFS_DELEGATION_TOKEN > 
owner=syst...@gce.cloudera.com, renewer=yarn, realUser=, > issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.52:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361801965, maxDate=1548966601965, sequenceNumber=5, > masterKeyId=17) > 19/01/24 12:30:02 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-4.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.50:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361802363, maxDate=1548966602363, > sequenceNumber=6, masterKeyId=18)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.50:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361802363, maxDate=1548966602363, sequenceNumber=6, > masterKeyId=18) > 19/01/24 12:30:02 INFO mapreduce.JobResourceUploader: Disabling Erasure > Coding for path: /user/systest/.staging/job_1548349234632_0003 > 19/01/24 12:30:03 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /user/systest/.staging/job_1548349234632_0003 > Exception in thread "main" java.lang.IllegalArgumentException: Illegal first > character <45> at 0. User-space table qualifiers can only start with > 'alphanumeric characters' from any language: -? 
> at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:193) > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156) > at org.apache.hadoop.hbase.TableName.(TableName.java:346) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:469) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:198) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:243) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:310) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:327) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) >
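The stack trace above shows `-?` being handed straight to `TableName.valueOf` as a table name. The fix direction is simple to sketch: recognize common help flags before the first argument is ever interpreted as a table name. This is an illustrative sketch, not the attached patch:

```java
// Illustrative sketch: a CLI entry point that answers pleas for help
// (--help, -h, -?) and an empty argument list with usage text, instead of
// passing the flag on as a table name.
public class RowCounterHelp {
    static final String USAGE = "Usage: hbase rowcounter <tablename> [options]";

    public static boolean isHelpRequest(String[] args) {
        if (args.length == 0) return true;
        String a = args[0];
        return a.equals("-h") || a.equals("--help") || a.equals("-?");
    }

    public static void main(String[] args) {
        if (isHelpRequest(args)) {
            System.out.println(USAGE);
            return;
        }
        // ... normal MapReduce job setup would follow here ...
    }
}
```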
[jira] [Commented] (HBASE-21775) The BufferedMutator doesn't ever refresh region location cache
[ https://issues.apache.org/jira/browse/HBASE-21775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756690#comment-16756690 ] stack commented on HBASE-21775: --- I pushed addendum on branch-2.0+ branch-1 doesn't do this static messing w/ CONF so it should be ok. Resolving. > The BufferedMutator doesn't ever refresh region location cache > -- > > Key: HBASE-21775 > URL: https://issues.apache.org/jira/browse/HBASE-21775 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Tommy Li >Assignee: Tommy Li >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.1.3, 2.0.5, 1.3.4 > > Attachments: HBASE-21775-ADDENDUM.master.001.patch, > HBASE-21775.master.001.patch, > org.apache.hadoop.hbase.client.TestAsyncProcess-with-HBASE-21775.txt, > org.apache.hadoop.hbase.client.TestAsyncProcess-without-HBASE-21775.txt > > > {color:#22}I noticed in some of my writing jobs that the BufferedMutator > would get stuck retrying writes against a dead server.{color} > {code:java} > 19/01/18 15:15:47 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:15:54 WARN [htable-pool3-t56] client.AsyncRequestFutureImpl: > id=2, table=dummy_table, attempt=15/21, failureCount=1ops, last > exception=org.apache.hadoop.hbase.DoNotRetryIOException: Operation rpcTimeout > on ,17020,1547848193782, tracking started Fri Jan 18 14:55:37 PST > 2019; NOT retrying, failed=1 -- final attempt! 
> 19/01/18 15:15:54 ERROR [Executor task launch worker for task 0] > IngestRawData.map(): [B@258bc2c7: > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: Operation rpcTimeout: 1 time, servers with issues: > ,17020,1547848193782 > {code} > > After the single remaining action permanently failed, it would resume > progress only to get stuck again retrying against the same dead server: > {code:java} > 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:21:20 INFO [htable-pool3-t55] client.AsyncRequestFutureImpl: > id=2, table=dummy_table, attempt=6/21, failureCount=1ops, last > exception=java.net.ConnectException: Call to failed on connection > exception: > org.apache.hbase.thirdparty.io.netty.channel.ConnectTimeoutException: > connection timed out: on ,17020,1547848193782, tracking > started null, retrying after=20089ms, operationsToReplay=1 > {code} > > Only restarting the client process to generate a new BufferedMutator instance > would fix the issue, at least until the next regionserver crash > The logs I've pasted show the issue happening with a > ConnectionTimeoutException, but we've also seen it with > NotServingRegionException and some others -- This message was sent by Atlassian JIRA (v7.6.3#76005)
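The underlying fix idea — on a connection-level failure, evict the cached region location so the next retry re-resolves the region from meta instead of going back to the dead server — can be sketched like this. The cache, its keys, and the handler are illustrative, not the real HBase client internals:

```java
// Illustrative sketch: drop a stale cached region location when a write
// against it fails, so the retry path performs a fresh lookup rather than
// retrying forever against a server that no longer hosts the region.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LocationCacheSketch {
    // region key -> cached "host,port,startcode" location (illustrative)
    public static final Map<String, String> locationCache = new ConcurrentHashMap<>();

    /** Evict the stale entry and return it; a retry will look it up again. */
    public static String handleWriteFailure(String regionKey, Exception cause) {
        return locationCache.remove(regionKey);
    }

    public static void main(String[] args) {
        locationCache.put("row-0000", "server_dead,17020,1547848193782");
        String evicted = handleWriteFailure("row-0000",
            new java.net.ConnectException("connection timed out"));
        System.out.println(evicted);                 // the stale location
        System.out.println(locationCache.isEmpty()); // true: next retry re-resolves
    }
}
```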
[jira] [Commented] (HBASE-21796) RecoverableZooKeeper indefinitely retries a client stuck in AUTH_FAILED
[ https://issues.apache.org/jira/browse/HBASE-21796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756688#comment-16756688 ] Enis Soztutar commented on HBASE-21796: --- The patch seems to at least solve the problem of Zookeeper object going bust after AUTH_FAILED, so it does not change the behaviour in terms of infinite retries. The only question remaining is whether AUTH_FAILED is a non-recoverable event that we should immediately throw back, or is it (at least in some cases) recoverable. > RecoverableZooKeeper indefinitely retries a client stuck in AUTH_FAILED > --- > > Key: HBASE-21796 > URL: https://issues.apache.org/jira/browse/HBASE-21796 > Project: HBase > Issue Type: Bug > Components: Zookeeper >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Fix For: 1.5.0 > > Attachments: HBASE-21796.001.branch-1.patch, > HBASE-21796.002.branch-1.patch > > > We've observed the following situation inside of a RegionServer which leaves > an HConnection in a broken state as a result of the ZooKeeper client having > received an AUTH_FAILED case in the Phoenix secondary indexing code-path. The > result was that the HConnection used to write the secondary index updates > failed every time the client re-attempted the write but we had no outward > signs from the HConnection that there was a problem with that HConnection > instance. > ZooKeeper programmer docs tell us that if a ZooKeeper instance goes to the > {{AUTH_FAILED}} state that we must open a new ZooKeeper instance: > [https://zookeeper.apache.org/doc/r3.4.13/zookeeperProgrammers.html#ch_zkSessions] > When a new HConnection (or one without a cached meta location) tries to > access ZooKeeper to find meta's location or the cluster ID, this spin > indefinitely because we can never access ZooKeeper because our client is > broken from the AUTH_FAILED. For the Phoenix use-case (where we're trying to > use this HConnection within the RS), this breaks things pretty fast. 
> The circumstances that caused us to observe this are not an HBase (or Phoenix > or ZooKeeper) problem. The AUTH_FAILED exception we see is a result of > networking issues on a user's system. Despite this, we can make our handling > of this situation better. > We already have logic inside of RecoverableZooKeeper to re-create a ZooKeeper > object when we need one (e.g. session expired/closed). We can extend this > same logic to also re-create the ZK client object if we observe an > AUTH_FAILED state. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
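The extension described — treating AUTH_FAILED the same way as session expiry, i.e. discarding the broken handle and building a fresh one rather than retrying forever on it — can be sketched as follows. The types here are illustrative stand-ins for the real ZooKeeper handle and RecoverableZooKeeper logic:

```java
// Illustrative sketch: both an expired session and a failed auth leave the
// ZooKeeper handle permanently unusable, so the recovery check replaces the
// client in either case instead of retrying on the dead handle.
public class ZkRecreateSketch {
    public enum State { CONNECTED, EXPIRED, AUTH_FAILED }

    public interface ClientFactory { String newClient(); }

    /** Return a usable client: the existing one, or a freshly built one. */
    public static String checkZk(String client, State state, ClientFactory factory) {
        if (state == State.EXPIRED || state == State.AUTH_FAILED) {
            return factory.newClient();
        }
        return client;
    }

    public static void main(String[] args) {
        // AUTH_FAILED: the old handle is discarded and a new one is created,
        // matching the docs' requirement to open a new ZooKeeper instance.
        String fresh = checkZk("zk-old", State.AUTH_FAILED, () -> "zk-new");
        System.out.println(fresh); // zk-new
        // A healthy connection keeps its existing handle.
        System.out.println(checkZk("zk-old", State.CONNECTED, () -> "zk-new")); // zk-old
    }
}
```

Whether AUTH_FAILED is ever recoverable (Enis's remaining question) only affects whether the caller should also get an immediate exception; the handle replacement itself is needed either way.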
[jira] [Updated] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-21773: - Attachment: HBASE-21773.master.001.patch > rowcounter utility should respond to pleas for help > --- > > Key: HBASE-21773 > URL: https://issues.apache.org/jira/browse/HBASE-21773 > Project: HBase > Issue Type: Bug > Components: tooling >Affects Versions: 2.1.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-21773.master.001.patch > > > {{hbase rowcounter}} does not respond to reasonable requests for help, i.e. > {{--help}}, {{-h}}, or {{-?}} > {code} > [systest@busbey-training-1 root]$ hbase rowcounter -? > OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and > will likely be removed in a future release > 19/01/24 12:30:00 INFO client.RMProxy: Connecting to ResourceManager at > busbey-training-1.gce.cloudera.com/172.31.116.31:8032 > 19/01/24 12:30:01 INFO hdfs.DFSClient: Created token for systest: > HDFS_DELEGATION_TOKEN owner=syst...@gce.cloudera.com, renewer=yarn, > realUser=, issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8 on 172.31.116.31:8020 > 19/01/24 12:30:01 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-3.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.52:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361801965, maxDate=1548966601965, > sequenceNumber=5, masterKeyId=17)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: HDFS_DELEGATION_TOKEN, > Service: 172.31.116.31:8020, Ident: (token for systest: HDFS_DELEGATION_TOKEN > owner=syst...@gce.cloudera.com, renewer=yarn, realUser=, > issueDate=1548361801519, 
maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.52:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361801965, maxDate=1548966601965, sequenceNumber=5, > masterKeyId=17) > 19/01/24 12:30:02 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-4.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.50:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361802363, maxDate=1548966602363, > sequenceNumber=6, masterKeyId=18)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.50:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361802363, maxDate=1548966602363, sequenceNumber=6, > masterKeyId=18) > 19/01/24 12:30:02 INFO mapreduce.JobResourceUploader: Disabling Erasure > Coding for path: /user/systest/.staging/job_1548349234632_0003 > 19/01/24 12:30:03 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /user/systest/.staging/job_1548349234632_0003 > Exception in thread "main" java.lang.IllegalArgumentException: Illegal first > character <45> at 0. User-space table qualifiers can only start with > 'alphanumeric characters' from any language: -? 
> at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:193) > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156) > at org.apache.hadoop.hbase.TableName.(TableName.java:346) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:469) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:198) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:243) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:310) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:327) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567) > at
[jira] [Updated] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-21773: - Status: Patch Available (was: In Progress) > rowcounter utility should respond to pleas for help > --- > > Key: HBASE-21773 > URL: https://issues.apache.org/jira/browse/HBASE-21773 > Project: HBase > Issue Type: Bug > Components: tooling >Affects Versions: 2.1.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-21773.master.001.patch, > HBASE-21773.master.002.patch > > > {{hbase rowcounter}} does not respond to reasonable requests for help, i.e. > {{--help}}, {{-h}}, or {{-?}} > {code} > [systest@busbey-training-1 root]$ hbase rowcounter -? > OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and > will likely be removed in a future release > 19/01/24 12:30:00 INFO client.RMProxy: Connecting to ResourceManager at > busbey-training-1.gce.cloudera.com/172.31.116.31:8032 > 19/01/24 12:30:01 INFO hdfs.DFSClient: Created token for systest: > HDFS_DELEGATION_TOKEN owner=syst...@gce.cloudera.com, renewer=yarn, > realUser=, issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8 on 172.31.116.31:8020 > 19/01/24 12:30:01 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-3.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.52:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361801965, maxDate=1548966601965, > sequenceNumber=5, masterKeyId=17)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: HDFS_DELEGATION_TOKEN, > Service: 172.31.116.31:8020, Ident: (token for systest: HDFS_DELEGATION_TOKEN > owner=syst...@gce.cloudera.com, renewer=yarn, realUser=, > 
issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.52:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361801965, maxDate=1548966601965, sequenceNumber=5, > masterKeyId=17) > 19/01/24 12:30:02 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-4.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.50:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361802363, maxDate=1548966602363, > sequenceNumber=6, masterKeyId=18)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.50:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361802363, maxDate=1548966602363, sequenceNumber=6, > masterKeyId=18) > 19/01/24 12:30:02 INFO mapreduce.JobResourceUploader: Disabling Erasure > Coding for path: /user/systest/.staging/job_1548349234632_0003 > 19/01/24 12:30:03 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /user/systest/.staging/job_1548349234632_0003 > Exception in thread "main" java.lang.IllegalArgumentException: Illegal first > character <45> at 0. User-space table qualifiers can only start with > 'alphanumeric characters' from any language: -? 
> at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:193) > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156) > at org.apache.hadoop.hbase.TableName.(TableName.java:346) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:469) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:198) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:243) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:310) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:327) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) > at
[jira] [Work started] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-21773 started by Wellington Chevreuil. > rowcounter utility should respond to pleas for help > --- > > Key: HBASE-21773 > URL: https://issues.apache.org/jira/browse/HBASE-21773 > Project: HBase > Issue Type: Bug > Components: tooling >Affects Versions: 2.1.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-21773.master.001.patch, > HBASE-21773.master.002.patch > > > {{hbase rowcounter}} does not respond to reasonable requests for help, i.e. > {{--help}}, {{-h}}, or {{-?}} > {code} > [systest@busbey-training-1 root]$ hbase rowcounter -? > OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and > will likely be removed in a future release > 19/01/24 12:30:00 INFO client.RMProxy: Connecting to ResourceManager at > busbey-training-1.gce.cloudera.com/172.31.116.31:8032 > 19/01/24 12:30:01 INFO hdfs.DFSClient: Created token for systest: > HDFS_DELEGATION_TOKEN owner=syst...@gce.cloudera.com, renewer=yarn, > realUser=, issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8 on 172.31.116.31:8020 > 19/01/24 12:30:01 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-3.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.52:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361801965, maxDate=1548966601965, > sequenceNumber=5, masterKeyId=17)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: HDFS_DELEGATION_TOKEN, > Service: 172.31.116.31:8020, Ident: (token for systest: HDFS_DELEGATION_TOKEN > owner=syst...@gce.cloudera.com, renewer=yarn, realUser=, > issueDate=1548361801519, 
maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.52:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361801965, maxDate=1548966601965, sequenceNumber=5, > masterKeyId=17) > 19/01/24 12:30:02 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-4.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.50:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361802363, maxDate=1548966602363, > sequenceNumber=6, masterKeyId=18)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.50:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361802363, maxDate=1548966602363, sequenceNumber=6, > masterKeyId=18) > 19/01/24 12:30:02 INFO mapreduce.JobResourceUploader: Disabling Erasure > Coding for path: /user/systest/.staging/job_1548349234632_0003 > 19/01/24 12:30:03 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /user/systest/.staging/job_1548349234632_0003 > Exception in thread "main" java.lang.IllegalArgumentException: Illegal first > character <45> at 0. User-space table qualifiers can only start with > 'alphanumeric characters' from any language: -? 
> at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:193) > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156) > at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:469) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:198) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:243) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:310) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:327) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567) > at
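The trace above shows the root cause: `-?` is never recognized as a help flag and falls straight through to `TableName.valueOf`, which rejects it as an illegal table name. A minimal sketch of the kind of guard this calls for (class and method names here are hypothetical, not the actual RowCounter code): check the leading argument against the usual help flags before any job setup touches it.

```java
// Hypothetical sketch, not the actual RowCounter implementation: recognize
// pleas for help before any argument is interpreted as a table name.
public class RowCounterHelpGuard {
    static final String USAGE =
        "Usage: hbase rowcounter <tablename> [<column1> <column2>...]";

    // True when usage should be printed instead of submitting a MapReduce job.
    static boolean isHelpRequest(String[] args) {
        if (args.length == 0) {
            return true; // no table name given at all
        }
        String first = args[0];
        return first.equals("-h") || first.equals("--help") || first.equals("-?");
    }
}
```

With such a guard run first, `hbase rowcounter -?` would print usage and exit rather than attempting to create a table name from `-?`.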
[jira] [Updated] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-21773: - Attachment: HBASE-21773.master.002.patch
[jira] [Commented] (HBASE-21634) Print error message when user uses unacceptable values for LIMIT while setting quotas.
[ https://issues.apache.org/jira/browse/HBASE-21634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756675#comment-16756675 ] Hadoop QA commented on HBASE-21634: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.0 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 51s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} branch-2.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} rubocop {color} | {color:green} 0m 9s{color} | {color:green} The patch generated 0 new + 111 unchanged - 11 fixed = 111 total (was 122) {color} | | {color:orange}-0{color} | {color:orange} ruby-lint {color} | {color:orange} 0m 6s{color} | {color:orange} The patch generated 82 new + 220 unchanged - 7 fixed = 302 total (was 227) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 49s{color} | {color:green} hbase-shell in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 | | JIRA Issue | HBASE-21634 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956969/hbase-21634.branch-2.0.002.patch | | Optional Tests | dupname asflicense javac javadoc unit rubocop ruby_lint | | uname | Linux a90d543580e3 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | branch-2.0 / d94a719e8f | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | rubocop | v0.60.0 | | ruby-lint | v2.3.1 | | ruby-lint | https://builds.apache.org/job/PreCommit-HBASE-Build/15797/artifact/patchprocess/diff-patch-ruby-lint.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/15797/testReport/ | | Max. process+thread count | 2105 (vs. ulimit of 1) | | modules | C: hbase-shell U: hbase-shell | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/15797/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. 
> Print error message when user uses unacceptable values for LIMIT while > setting quotas. > -- > > Key: HBASE-21634 > URL: https://issues.apache.org/jira/browse/HBASE-21634 > Project: HBase > Issue Type: Improvement >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 3.0.0, 2.2.0 > > Attachments: hbase-21634.branch-2.0.001.patch, > hbase-21634.branch-2.0.002.patch, hbase-21634.master.001.patch, > hbase-21634.master.002.patch, hbase-21634.master.003.patch, > hbase-21634.master.004.patch > > > When unacceptable value(like 2.3G or 70H) to LIMIT are passed while setting > quotas, we currently do not print any error message (informing the user about > the erroneous input). Like below: > {noformat} > hbase(main):002:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '2.3G', > POLICY => NO_WRITES > Took 0.0792 seconds > hbase(main):003:0> list_quotas > OWNERQUOTAS >
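The shell transcript above shows a malformed limit such as `2.3G` being accepted silently. A hedged sketch of the validation the issue asks for (the regex, class, and method are illustrative assumptions, not the shell's actual parsing code): accept only an integer with an optional recognized size unit and reject everything else so an error can be reported to the user.

```java
import java.util.regex.Pattern;

// Hypothetical sketch of LIMIT validation, not the actual HBase shell code:
// an integer optionally followed by one recognized size unit letter.
public class QuotaLimitCheck {
    private static final Pattern LIMIT = Pattern.compile("^\\d+[BKMGTP]?$");

    // False for inputs like "2.3G" (fractional) or "70H" (unknown unit),
    // which should trigger an error message instead of a silent no-op.
    static boolean isValidLimit(String limit) {
        return LIMIT.matcher(limit.toUpperCase()).matches();
    }
}
```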
[jira] [Updated] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-21773: - Status: Open (was: Patch Available) Just realised it's incomplete.
[jira] [Updated] (HBASE-21775) The BufferedMutator doesn't ever refresh region location cache
[ https://issues.apache.org/jira/browse/HBASE-21775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21775: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Resolving. > The BufferedMutator doesn't ever refresh region location cache > -- > > Key: HBASE-21775 > URL: https://issues.apache.org/jira/browse/HBASE-21775 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Tommy Li >Assignee: Tommy Li >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.1.3, 2.0.5, 1.3.4 > > Attachments: HBASE-21775-ADDENDUM.master.001.patch, > HBASE-21775.master.001.patch, > org.apache.hadoop.hbase.client.TestAsyncProcess-with-HBASE-21775.txt, > org.apache.hadoop.hbase.client.TestAsyncProcess-without-HBASE-21775.txt > > > {color:#22}I noticed in some of my writing jobs that the BufferedMutator > would get stuck retrying writes against a dead server.{color} > {code:java} > 19/01/18 15:15:47 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:15:54 WARN [htable-pool3-t56] client.AsyncRequestFutureImpl: > id=2, table=dummy_table, attempt=15/21, failureCount=1ops, last > exception=org.apache.hadoop.hbase.DoNotRetryIOException: Operation rpcTimeout > on ,17020,1547848193782, tracking started Fri Jan 18 14:55:37 PST > 2019; NOT retrying, failed=1 -- final attempt! 
> 19/01/18 15:15:54 ERROR [Executor task launch worker for task 0] > IngestRawData.map(): [B@258bc2c7: > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: Operation rpcTimeout: 1 time, servers with issues: > ,17020,1547848193782 > {code} > > After the single remaining action permanently failed, it would resume > progress only to get stuck again retrying against the same dead server: > {code:java} > 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:21:18 INFO [Executor task launch worker for task 0] > client.AsyncRequestFutureImpl: #2, waiting for 1 actions to finish on table: > dummy_table > 19/01/18 15:21:20 INFO [htable-pool3-t55] client.AsyncRequestFutureImpl: > id=2, table=dummy_table, attempt=6/21, failureCount=1ops, last > exception=java.net.ConnectException: Call to failed on connection > exception: > org.apache.hbase.thirdparty.io.netty.channel.ConnectTimeoutException: > connection timed out: on ,17020,1547848193782, tracking > started null, retrying after=20089ms, operationsToReplay=1 > {code} > > Only restarting the client process to generate a new BufferedMutator instance > would fix the issue, at least until the next regionserver crash > The logs I've pasted show the issue happening with a > ConnectionTimeoutException, but we've also seen it with > NotServingRegionException and some others -- This message was sent by Atlassian JIRA (v7.6.3#76005)
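The failure mode described above is a location cache that is never invalidated, so every retry goes back to the same dead server. A toy sketch of the invalidate-on-failure idea (a simplified stand-in, not HBase's actual connection internals): drop the cached location when a request to its server fails, forcing a fresh lookup on the next attempt.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for a region location cache, illustrating the fix's
// idea: evict a cached location on failure so the retry re-resolves it.
public class RegionLocationCache {
    private final Map<String, String> rowKeyToServer = new ConcurrentHashMap<>();

    // Returns the cached server for the row, or caches and returns the
    // freshly resolved one (passed in here for simplicity).
    String locate(String rowKey, String resolvedServer) {
        return rowKeyToServer.computeIfAbsent(rowKey, k -> resolvedServer);
    }

    // Called from the retry path when a request to the cached server fails.
    void invalidate(String rowKey) {
        rowKeyToServer.remove(rowKey);
    }
}
```

Without the `invalidate` call, `locate` keeps returning the stale server forever, which matches the stuck-retry behavior in the logs.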
[jira] [Updated] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-21773: - Status: Patch Available (was: In Progress) Attaching patch proposal addressing this.
[jira] [Commented] (HBASE-21775) The BufferedMutator doesn't ever refresh region location cache
[ https://issues.apache.org/jira/browse/HBASE-21775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756687#comment-16756687 ] Tommy Li commented on HBASE-21775: -- Sorry about that [~stack], yeah I need to update my editor settings to match this project's styleguide -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21775) The BufferedMutator doesn't ever refresh region location cache
[ https://issues.apache.org/jira/browse/HBASE-21775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756679#comment-16756679 ] stack commented on HBASE-21775: --- Thank you [~tommyzli] On the addendum, it looks good but in future please don't do this (maybe you have to change your config on IDE?) -import org.junit.Assert; -import org.junit.BeforeClass; -import org.junit.ClassRule; -import org.junit.Test; +import org.junit.*; I changed the addendum and fixed the checkstyle and applied the patch. Thank you. Smile one of the checkstyle complaints was ./hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java:71:import org.junit.*;: Using the '.*' form of import should be avoided - org.junit.*. [AvoidStarImport] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21739) Move grant/revoke from regionserver to master
[ https://issues.apache.org/jira/browse/HBASE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756669#comment-16756669 ] Hudson commented on HBASE-21739: Results for branch branch-2 [build #1647 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1647/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1647//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1647//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1647//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Move grant/revoke from regionserver to master > - > > Key: HBASE-21739 > URL: https://issues.apache.org/jira/browse/HBASE-21739 > Project: HBase > Issue Type: Sub-task >Affects Versions: 3.0.0, 2.2.0, 2.3.0 >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0 > > Attachments: HBASE-21739.branch-2.001.patch, > HBASE-21739.master.001.patch, HBASE-21739.master.002.patch, > HBASE-21739.master.003.patch, HBASE-21739.master.003.patch, > HBASE-21739.master.004.patch, HBASE-21739.master.005.patch > > > Create a sub-task to move grant/revoke from regionserver to master. Other > access control operations(getUserPermissions/ checkPermissions/ > hasPermission) will be moved in another sub-task. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21812) Address ruby static analysis for shell/bin modules [2nd pass]
[ https://issues.apache.org/jira/browse/HBASE-21812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756650#comment-16756650 ] Sakthi commented on HBASE-21812: Heads Up: In shell module: 190 files inspected, 3062 offenses detected, 1452 offenses corrected In bin module: 7 files inspected, 104 offenses detected, 15 offenses corrected If we have a "go-ahead", I can put up the patch here and we can look at the tests. > Address ruby static analysis for shell/bin modules [2nd pass] > - > > Key: HBASE-21812 > URL: https://issues.apache.org/jira/browse/HBASE-21812 > Project: HBase > Issue Type: Improvement >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > > -HBASE-18237- did a pass in the shell and bin directories. I think we can go > for another round. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21812) Address ruby static analysis for shell module [2nd pass]
[ https://issues.apache.org/jira/browse/HBASE-21812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-21812: --- Description: -HBASE-18237- did a pass in the shell and bin directories. I think we can go for another round. (was: HBASE-18239 did a pass. I think we can go for another round. ) > Address ruby static analysis for shell module [2nd pass] > > > Key: HBASE-21812 > URL: https://issues.apache.org/jira/browse/HBASE-21812 > Project: HBase > Issue Type: Improvement >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > > -HBASE-18237- did a pass in the shell and bin directories. I think we can go > for another round. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21775) The BufferedMutator doesn't ever refresh region location cache
[ https://issues.apache.org/jira/browse/HBASE-21775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756653#comment-16756653 ] Hadoop QA commented on HBASE-21775: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 54s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 0s{color} | {color:red} hbase-client: The patch generated 3 new + 16 unchanged - 1 fixed = 19 total (was 17) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 52s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 10m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 54s{color} | {color:green} hbase-client in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21775 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956966/HBASE-21775-ADDENDUM.master.001.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 55f2d5de9471 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 5ddda1a1f6 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/15795/artifact/patchprocess/diff-checkstyle-hbase-client.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/15795/testReport/ | | Max. process+thread count | 260 (vs. ulimit of 1) | | modules | C: hbase-client U: hbase-client | | Console
[jira] [Updated] (HBASE-21812) Address ruby static analysis for shell/bin modules [2nd pass]
[ https://issues.apache.org/jira/browse/HBASE-21812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-21812: --- Summary: Address ruby static analysis for shell/bin modules [2nd pass] (was: Address ruby static analysis for shell module [2nd pass]) > Address ruby static analysis for shell/bin modules [2nd pass] > - > > Key: HBASE-21812 > URL: https://issues.apache.org/jira/browse/HBASE-21812 > Project: HBase > Issue Type: Improvement >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > > -HBASE-18237- did a pass in the shell and bin directories. I think we can go > for another round. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden reassigned HBASE-20993: Assignee: Reid Chan (was: Jack Bearden) > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, IPC/RPC, security >Affects Versions: 1.2.6, 1.3.2, 1.2.7, 1.4.7 >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Critical > Fix For: 1.5.0 > > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.008.patch, > HBASE-20993.branch-1.009.patch, HBASE-20993.branch-1.009.patch, > HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, > HBASE-20993.branch-1.wip.patch, yetus-local-testpatch-output-009.txt > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are right set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at > org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55)
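For reference, the client-side configuration described in the report corresponds to an hbase-site.xml along these lines (the property names are taken from the report itself; this reproduces the failing scenario and is not a recommended setup):

```xml
<!-- Reproduction scenario from HBASE-20993: a kerberized client talking to a
     simple-auth cluster, expecting the fallback flag to take effect. -->
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hbase.ipc.client.fallback-to-simple-auth-allowed</name>
    <value>true</value>
  </property>
</configuration>
```

With keytab and principal set correctly, the report says the client still throws the GSSException above instead of falling back to simple auth.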
[jira] [Assigned] (HBASE-21112) [Auth] IPC client fallback to simple auth (forward-port to master)
[ https://issues.apache.org/jira/browse/HBASE-21112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden reassigned HBASE-21112: Assignee: (was: Jack Bearden) > [Auth] IPC client fallback to simple auth (forward-port to master) > -- > > Key: HBASE-21112 > URL: https://issues.apache.org/jira/browse/HBASE-21112 > Project: HBase > Issue Type: Bug > Components: Client, IPC/RPC, security >Affects Versions: 2.1.0, 2.0.2 >Reporter: Jack Bearden >Priority: Critical > Fix For: 3.0.0, 2.2.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21812) Address ruby static analysis for shell module [2nd pass]
[ https://issues.apache.org/jira/browse/HBASE-21812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756644#comment-16756644 ] Sakthi commented on HBASE-21812: what do you think [~stack] ? > Address ruby static analysis for shell module [2nd pass] > > > Key: HBASE-21812 > URL: https://issues.apache.org/jira/browse/HBASE-21812 > Project: HBase > Issue Type: Improvement >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > > -HBASE-18237- did a pass in the shell and bin directories. I think we can go > for another round. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21812) Address ruby static analysis for shell module [2nd pass]
Sakthi created HBASE-21812: -- Summary: Address ruby static analysis for shell module [2nd pass] Key: HBASE-21812 URL: https://issues.apache.org/jira/browse/HBASE-21812 Project: HBase Issue Type: Improvement Reporter: Sakthi Assignee: Sakthi HBASE-18239 did a pass. I think we can go for another round. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21634) Print error message when user uses unacceptable values for LIMIT while setting quotas.
[ https://issues.apache.org/jira/browse/HBASE-21634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-21634: --- Attachment: hbase-21634.branch-2.0.002.patch > Print error message when user uses unacceptable values for LIMIT while > setting quotas. > -- > > Key: HBASE-21634 > URL: https://issues.apache.org/jira/browse/HBASE-21634 > Project: HBase > Issue Type: Improvement >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 3.0.0, 2.2.0 > > Attachments: hbase-21634.branch-2.0.001.patch, > hbase-21634.branch-2.0.002.patch, hbase-21634.master.001.patch, > hbase-21634.master.002.patch, hbase-21634.master.003.patch, > hbase-21634.master.004.patch > > > When unacceptable values (like 2.3G or 70H) are passed to LIMIT while setting > quotas, we currently do not print any error message informing the user about > the erroneous input. Like below: > {noformat} > hbase(main):002:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '2.3G', > POLICY => NO_WRITES > Took 0.0792 seconds > hbase(main):003:0> list_quotas > OWNERQUOTAS > TABLE => t2 TYPE => SPACE, > TABLE => t2, LIMIT => 2B, VIOLATION_POLICY => NO_WRITES > 1 row(s) > Took 0.0512 seconds > hbase(main):006:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '70H', > POLICY => NO_WRITES > Took 0.0088 seconds > hbase(main):007:0> list_quotas > OWNERQUOTAS > TABLE => t2 TYPE => SPACE, > TABLE => t2, LIMIT => 70B, VIOLATION_POLICY => NO_WRITES > 1 row(s) > Took 0.0448 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
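A minimal sketch of the kind of validation being requested (a hypothetical helper: the class, method, and regex are illustrative and not from the actual patch): reject a LIMIT whose numeric part is fractional or whose unit suffix is unknown, instead of silently misreading '2.3G' as 2B or '70H' as 70B.

```java
// Illustrative sketch, not the actual HBase patch: validate a quota LIMIT
// string of the form <integer><unit> before applying it, so inputs such as
// '2.3G' (fractional) or '70H' (unknown unit) produce an error message
// rather than being silently misread as 2B / 70B.
public final class QuotaLimitCheck {
  // Accepted size units here mirror the ones the shell transcript uses;
  // the real set of accepted suffixes may differ.
  private static final java.util.regex.Pattern LIMIT =
      java.util.regex.Pattern.compile("\\d+[BKMGTP]");

  public static boolean isValidLimit(String limit) {
    // matches() requires the whole string to match, so a stray '.' or an
    // unknown suffix character fails validation.
    return limit != null && LIMIT.matcher(limit).matches();
  }

  public static void main(String[] args) {
    System.out.println(isValidLimit("2G"));    // valid
    System.out.println(isValidLimit("2.3G"));  // fractional -> reject
    System.out.println(isValidLimit("70H"));   // unknown unit -> reject
  }
}
```

With a check like this in the set_quota path, the shell could print the offending input back to the user instead of accepting it.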
[jira] [Commented] (HBASE-18822) Create table for peer cluster automatically when creating table in source cluster of using namespace replication.
[ https://issues.apache.org/jira/browse/HBASE-18822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756631#comment-16756631 ] Hadoop QA commented on HBASE-18822: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HBASE-18822 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.8.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-18822 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12933059/HBASE-18822.v1.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/15796/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Create table for peer cluster automatically when creating table in source > cluster of using namespace replication. > - > > Key: HBASE-18822 > URL: https://issues.apache.org/jira/browse/HBASE-18822 > Project: HBase > Issue Type: Improvement > Components: Replication >Affects Versions: 2.0.0-alpha-2 >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-18822.v1.patch, HBASE-18822.v1.patch > > > In our cluster using namespace replication, we always forget to create > the table in the peer cluster, which leads to replication getting stuck. > We have implemented the feature in our cluster: create the table in the peer > cluster automatically when a table is created in the source cluster using > namespace replication. > > I'm not sure if someone else needs this feature, so I created an issue here for > discussion. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21811) region can be opened on two servers due to race condition with procedures and server reports
[ https://issues.apache.org/jira/browse/HBASE-21811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-21811: Component/s: amv2 > region can be opened on two servers due to race condition with procedures and > server reports > > > Key: HBASE-21811 > URL: https://issues.apache.org/jira/browse/HBASE-21811 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 3.0.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > > Looks like the region server responses are being processed incorrectly in > places, allowing the region to be opened on two servers. > * The region server report handling in procedures should check which server > is reporting. > * Also, although I didn't check (and it isn't implicated in this bug), the RS must > check in OPEN that it's actually the correct RS the master sent the open to (w.r.t. > start timestamp) > This was previously "mitigated" by the master killing the RS with incorrect > reports, but due to race conditions with reports and assignment the kill was > replaced with a warning, so now this condition persists. > Regardless, the kill approach is not a good fix because there's still a > window when a region can be opened on two servers. > A region is being opened by server_48c. The server dies, and we process the > retry correctly (retry=3 because 2 previous similar open failures were > processed correctly). > We start opening it on server_1aa now. 
> {noformat} > 2019-01-28 18:12:09,862 INFO [KeepAlivePEWorker-104] > assignment.RegionStateStore: pid=4915 updating hbase:meta > row=8be2a423b16471b9417f0f7de04281c6, regionState=ABNORMALLY_CLOSED > 2019-01-28 18:12:09,862 INFO [KeepAlivePEWorker-104] > procedure.ServerCrashProcedure: pid=11944, > state=RUNNABLE:SERVER_CRASH_ASSIGN, hasLock=true; ServerCrashProcedure > server=server_48c,17020,1548726406632, splitWal=true, meta=false found RIT > pid=4915, ppid=7, state=WAITING:REGION_STATE_TRANSITION_CONFIRM_OPENED, > hasLock=true; TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=OPENING, > location=server_48c,17020,1548726406632, table=table, > region=8be2a423b16471b9417f0f7de04281c6 > 2019-01-28 18:12:10,778 INFO [KeepAlivePEWorker-80] > assignment.TransitRegionStateProcedure: Retry=3 of max=2147483647; pid=4915, > ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; > TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=ABNORMALLY_CLOSED, > location=null > ... > 2019-01-28 18:12:10,902 INFO [KeepAlivePEWorker-80] > assignment.TransitRegionStateProcedure: Starting pid=4915, ppid=7, > state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; > TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=ABNORMALLY_CLOSED, > location=null; forceNewPlan=true, retain=false > 2019-01-28 18:12:11,114 INFO [PEWorker-7] assignment.RegionStateStore: > pid=4915 updating hbase:meta row=8be2a423b16471b9417f0f7de04281c6, > regionState=OPENING, regionLocation=server_1aa,17020,1548727658713 > {noformat} > However, we get the remote procedure failure from 48c after we've already > started that. > It actually tried to open on the restarted RS, which makes me wonder if this > is safe also w.r.t. other races - what if RS already initialized and didn't > error out? 
> Need to check if we verify the start code expected by master on RS when > opening. > {noformat} > 2019-01-28 18:12:12,179 WARN [RSProcedureDispatcher-pool4-t362] > assignment.RegionRemoteProcedureBase: The remote operation pid=11050, > ppid=4915, state=SUCCESS, hasLock=false; > org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region > {ENCODED => 8be2a423b16471b9417f0f7de04281c6 ... to server > server_48c,17020,1548726406632 failed > org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: > org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server > server_48c,17020,1548727752747 is not running yet > 2019-01-28 18:12:12,179 WARN [RSProcedureDispatcher-pool4-t362] > procedure.RSProcedureDispatcher: server server_48c,17020,1548726406632 is not > up for a while; try a new one > {noformat} > Without any other reason (at least logged), the RIT immediately retries again > and chooses a new candidate. It then retries again and goes to the new 48c, > but that's unrelated. > {noformat} > 2019-01-28 18:12:12,289 INFO [KeepAlivePEWorker-100] > assignment.TransitRegionStateProcedure: Retry=4 of max=2147483647; pid=4915, > ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; > TransitRegionStateProcedure table=table, >
[jira] [Commented] (HBASE-21811) region can be opened on two servers due to race condition with procedures and server reports
[ https://issues.apache.org/jira/browse/HBASE-21811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756626#comment-16756626 ] Sergey Shelukhin commented on HBASE-21811: -- We are not running branch-2, but I expect it will affect at least 2.2, because the assignment is very similar. W.r.t. 2.1 I'm not sure; I suspect it is affected, but there have been some changes to RITs since then. > region can be opened on two servers due to race condition with procedures and > server reports > > > Key: HBASE-21811 > URL: https://issues.apache.org/jira/browse/HBASE-21811 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > > Looks like the region server responses are being processed incorrectly in > places, allowing the region to be opened on two servers. > * The region server report handling in procedures should check which server > is reporting. > * Also, although I didn't check (and it isn't implicated in this bug), the RS must > check in OPEN that it's actually the correct RS the master sent the open to (w.r.t. > start timestamp) > This was previously "mitigated" by the master killing the RS with incorrect > reports, but due to race conditions with reports and assignment the kill was > replaced with a warning, so now this condition persists. > Regardless, the kill approach is not a good fix because there's still a > window when a region can be opened on two servers. > A region is being opened by server_48c. The server dies, and we process the > retry correctly (retry=3 because 2 previous similar open failures were > processed correctly). > We start opening it on server_1aa now. 
> {noformat} > 2019-01-28 18:12:09,862 INFO [KeepAlivePEWorker-104] > assignment.RegionStateStore: pid=4915 updating hbase:meta > row=8be2a423b16471b9417f0f7de04281c6, regionState=ABNORMALLY_CLOSED > 2019-01-28 18:12:09,862 INFO [KeepAlivePEWorker-104] > procedure.ServerCrashProcedure: pid=11944, > state=RUNNABLE:SERVER_CRASH_ASSIGN, hasLock=true; ServerCrashProcedure > server=server_48c,17020,1548726406632, splitWal=true, meta=false found RIT > pid=4915, ppid=7, state=WAITING:REGION_STATE_TRANSITION_CONFIRM_OPENED, > hasLock=true; TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=OPENING, > location=server_48c,17020,1548726406632, table=table, > region=8be2a423b16471b9417f0f7de04281c6 > 2019-01-28 18:12:10,778 INFO [KeepAlivePEWorker-80] > assignment.TransitRegionStateProcedure: Retry=3 of max=2147483647; pid=4915, > ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; > TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=ABNORMALLY_CLOSED, > location=null > ... > 2019-01-28 18:12:10,902 INFO [KeepAlivePEWorker-80] > assignment.TransitRegionStateProcedure: Starting pid=4915, ppid=7, > state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; > TransitRegionStateProcedure table=table, > region=8be2a423b16471b9417f0f7de04281c6, ASSIGN; rit=ABNORMALLY_CLOSED, > location=null; forceNewPlan=true, retain=false > 2019-01-28 18:12:11,114 INFO [PEWorker-7] assignment.RegionStateStore: > pid=4915 updating hbase:meta row=8be2a423b16471b9417f0f7de04281c6, > regionState=OPENING, regionLocation=server_1aa,17020,1548727658713 > {noformat} > However, we get the remote procedure failure from 48c after we've already > started that. > It actually tried to open on the restarted RS, which makes me wonder if this > is safe also w.r.t. other races - what if RS already initialized and didn't > error out? 
> Need to check if we verify the start code expected by master on RS when > opening. > {noformat} > 2019-01-28 18:12:12,179 WARN [RSProcedureDispatcher-pool4-t362] > assignment.RegionRemoteProcedureBase: The remote operation pid=11050, > ppid=4915, state=SUCCESS, hasLock=false; > org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region > {ENCODED => 8be2a423b16471b9417f0f7de04281c6 ... to server > server_48c,17020,1548726406632 failed > org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: > org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server > server_48c,17020,1548727752747 is not running yet > 2019-01-28 18:12:12,179 WARN [RSProcedureDispatcher-pool4-t362] > procedure.RSProcedureDispatcher: server server_48c,17020,1548726406632 is not > up for a while; try a new one > {noformat} > Without any other reason (at least logged), the RIT immediately retries again > and chooses a new candidate. It then retries again and goes to the new 48c, > but that's unrelated. > {noformat} > 2019-01-28 18:12:12,289 INFO [KeepAlivePEWorker-100]
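The check suggested in the report can be sketched like this (a hypothetical helper, not actual HBase code): treat two server names as the same live instance only when host, port, and start code all agree, so a response attributed to server_48c,17020,1548726406632 is not confused with the restarted server_48c,17020,1548727752747.

```java
// Hypothetical helper, not actual HBase code: a ServerName-style triple of
// host, port, and start code. Two names refer to the same live region server
// instance only if all three fields match; a restarted process on the same
// host:port gets a new start code and must be treated as a different server.
public final class ServerInstance {
  public final String host;
  public final int port;
  public final long startCode;

  public ServerInstance(String host, int port, long startCode) {
    this.host = host;
    this.port = port;
    this.startCode = startCode;
  }

  public boolean sameInstanceAs(ServerInstance other) {
    return host.equals(other.host) && port == other.port
        && startCode == other.startCode;
  }

  public static void main(String[] args) {
    ServerInstance before = new ServerInstance("server_48c", 17020, 1548726406632L);
    ServerInstance after  = new ServerInstance("server_48c", 17020, 1548727752747L);
    // Same host:port, different start code: a report or failure from 'after'
    // must not be credited to a procedure that targeted 'before'.
    System.out.println(before.sameInstanceAs(after));  // false
    System.out.println(before.sameInstanceAs(before)); // true
  }
}
```

Applied to report handling in procedures, a comparison like this would let the master ignore (rather than act on) responses from a server instance other than the one the procedure dispatched to.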
[jira] [Commented] (HBASE-18822) Create table for peer cluster automatically when creating table in source cluster of using namespace replication.
[ https://issues.apache.org/jira/browse/HBASE-18822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756622#comment-16756622 ] Nihal Jain commented on HBASE-18822: [~openinx] I have prepared a patch considering the above points. It has been tested in our environment. Do you mind if I attach it here? > Create table for peer cluster automatically when creating table in source > cluster of using namespace replication. > - > > Key: HBASE-18822 > URL: https://issues.apache.org/jira/browse/HBASE-18822 > Project: HBase > Issue Type: Improvement > Components: Replication >Affects Versions: 2.0.0-alpha-2 >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-18822.v1.patch, HBASE-18822.v1.patch > > > In our cluster using namespace replication, we always forget to create > the table in the peer cluster, which leads to replication getting stuck. > We have implemented the feature in our cluster: create the table in the peer > cluster automatically when a table is created in the source cluster using > namespace replication. > > I'm not sure if someone else needs this feature, so I created an issue here for > discussion. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HBASE-21772) hbase cli help does not mention 'zkcli'
[ https://issues.apache.org/jira/browse/HBASE-21772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-21772 started by Sakthi. -- > hbase cli help does not mention 'zkcli' > --- > > Key: HBASE-21772 > URL: https://issues.apache.org/jira/browse/HBASE-21772 > Project: HBase > Issue Type: Bug > Components: Operability, Zookeeper >Affects Versions: 2.1.0 >Reporter: Sean Busbey >Assignee: Sakthi >Priority: Minor > > the hbase command's help doesn't mention zkcli > {code} > hbase > Usage: hbase [] [] > Options: > --config DIR Configuration direction to use. Default: ./conf > --hosts HOSTSOverride the list in 'regionservers' file > --auth-as-server Authenticate to ZooKeeper using servers configuration > --internal-classpath Skip attempting to use client facing jars (WARNING: > unstable results between versions) > Commands: > Some commands take arguments. Pass no args or -h for usage. > shell Run the HBase shell > hbckRun the HBase 'fsck' tool. Defaults read-only hbck1. > Pass '-j /path/to/HBCK2.jar' to run hbase-2.x HBCK2. > snapshotTool for managing snapshots > classpath Dump hbase CLASSPATH > mapredcpDump CLASSPATH entries required by mapreduce > pe Run PerformanceEvaluation > ltt Run LoadTestTool > canary Run the Canary tool > version Print the version > regionsplitter Run RegionSplitter tool > rowcounter Run RowCounter tool > cellcounter Run CellCounter tool > pre-upgrade Run Pre-Upgrade validator tool > CLASSNAME Run the class named CLASSNAME > {code} > the command itself still appears to work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21800) RegionServer aborted due to NPE from MetaTableMetrics coprocessor
[ https://issues.apache.org/jira/browse/HBASE-21800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756614#comment-16756614 ] Sakthi commented on HBASE-21800: [~apurtell], can we commit this yet? > RegionServer aborted due to NPE from MetaTableMetrics coprocessor > - > > Key: HBASE-21800 > URL: https://issues.apache.org/jira/browse/HBASE-21800 > Project: HBase > Issue Type: Bug > Components: Coprocessors, meta, metrics, Operability >Reporter: Sakthi >Assignee: Sakthi >Priority: Critical > Labels: Meta > Attachments: hbase-21800.master.001.patch, > hbase-21800.master.002.patch, hbase-21800.master.003.patch > > > I was just playing around with the code, trying to capture "Top k" table metrics > from MetaMetrics, when I bumped into this issue. Though we do not currently > capture "Top k" table metrics, we can still encounter this issue because of the > "Top k Clients" metric that is implemented using the lossy counting algorithm. > > The RegionServer gets aborted due to an NPE from the MetaTableMetrics coprocessor. 
The > log looks somewhat like this: > {code:java} > 2019-01-28 23:31:10,311 ERROR > [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=16020] > coprocessor.CoprocessorHost: The coprocessor > org.apache.hadoop.hbase.coprocessor.MetaTableMetrics threw > java.lang.NullPointerException > java.lang.NullPointerException > at > org.apache.hadoop.hbase.coprocessor.MetaTableMetrics$ExampleRegionObserverMeta.markMeterIfPresent(MetaTableMetrics.java:123) > at > org.apache.hadoop.hbase.coprocessor.MetaTableMetrics$ExampleRegionObserverMeta.tableMetricRegisterAndMark2(MetaTableMetrics.java:233) > at > org.apache.hadoop.hbase.coprocessor.MetaTableMetrics$ExampleRegionObserverMeta.preGetOp(MetaTableMetrics.java:82) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$19.call(RegionCoprocessorHost.java:840) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$19.call(RegionCoprocessorHost.java:837) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:551) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:625) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preGet(RegionCoprocessorHost.java:837) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2608) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2547) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41998) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > 2019-01-28 23:31:10,314 ERROR > [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=16020] > regionserver.HRegionServer: * ABORTING 
region server > 10.0.0.24,16020,1548747043814: The coprocessor > org.apache.hadoop.hbase.coprocessor.MetaTableMetrics threw > java.lang.NullPointerException * > java.lang.NullPointerException > at > org.apache.hadoop.hbase.coprocessor.MetaTableMetrics$ExampleRegionObserverMeta.markMeterIfPresent(MetaTableMetrics.java:123) > at > org.apache.hadoop.hbase.coprocessor.MetaTableMetrics$ExampleRegionObserverMeta.tableMetricRegisterAndMark2(MetaTableMetrics.java:233) > at > org.apache.hadoop.hbase.coprocessor.MetaTableMetrics$ExampleRegionObserverMeta.preGetOp(MetaTableMetrics.java:82) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$19.call(RegionCoprocessorHost.java:840) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$19.call(RegionCoprocessorHost.java:837) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:551) > at > org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:625) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preGet(RegionCoprocessorHost.java:837) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2608) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2547) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41998) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) > at