[jira] [Comment Edited] (HBASE-23887) BlockCache performance improve by reduce eviction rate

2020-06-01 Thread Danil Lipovoy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123362#comment-17123362
 ] 

Danil Lipovoy edited comment on HBASE-23887 at 6/2/20, 5:54 AM:


Did more tests with the same tables, but this time _recordcount_ = the number of 
records in the table, and

*hbase.lru.cache.heavy.eviction.count.limit* = 0

*hbase.lru.cache.heavy.eviction.mb.size.limit* = 200

The results:

!requests_new_100p.png!

 

And YCSB stats:
| |*original*|*feature*|*%*|
|tbl1-u (ops/sec)|29,601|39,088|132|
|tbl2-u (ops/sec)|38,793|61,692|159|
|tbl3-u (ops/sec)|38,216|60,415|158|
|tbl4-u (ops/sec)|325|657|202|
|tbl1-z (ops/sec)|46,990|58,252|124|
|tbl2-z (ops/sec)|54,401|72,484|133|
|tbl3-z (ops/sec)|57,100|69,984|123|
|tbl4-z (ops/sec)|452|763|169|
|tbl1-l (ops/sec)|56,001|63,804|114|
|tbl2-l (ops/sec)|68,700|76,074|111|
|tbl3-l (ops/sec)|64,189|72,229|113|
|tbl4-l (ops/sec)|619|897|145|
| | | | |
| | | | |
| |*original*|*feature*|*%*|
|tbl1-u AverageLatency(us)|1,686|1,276|76|
|tbl2-u AverageLatency(us)|1,287|808|63|
|tbl3-u AverageLatency(us)|1,306|825|63|
|tbl4-u AverageLatency(us)|76,810|38,007|49|
|tbl1-z AverageLatency(us)|1,061|856|81|
|tbl2-z AverageLatency(us)|917|688|75|
|tbl3-z AverageLatency(us)|873|712|82|
|tbl4-z AverageLatency(us)|55,114|32,670|59|
|tbl1-l AverageLatency(us)|890|781|88|
|tbl2-l AverageLatency(us)|726|655|90|
|tbl3-l AverageLatency(us)|777|690|89|
|tbl4-l AverageLatency(us)|40,235|27,774|69|
| | | | |
| | | | |
| |*original*|*feature*|*%*|
|tbl1-u 95thPercentileLatency(us)|2,831|2,569|91|
|tbl2-u 95thPercentileLatency(us)|1,266|1,073|85|
|tbl3-u 95thPercentileLatency(us)|1,497|1,194|80|
|tbl4-u 95thPercentileLatency(us)|370,943|49,471|13|
|tbl1-z 95thPercentileLatency(us)|1,784|1,669|94|
|tbl2-z 95thPercentileLatency(us)|918|871|95|
|tbl3-z 95thPercentileLatency(us)|978|933|95|
|tbl4-z 95thPercentileLatency(us)|336,639|48,863|15|
|tbl1-l 95thPercentileLatency(us)|1,523|1,441|95|
|tbl2-l 95thPercentileLatency(us)|820|825|101|
|tbl3-l 95thPercentileLatency(us)|918|907|99|
|tbl4-l 95thPercentileLatency(us)|77,951|48,575|62|
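The *%* column in the tables above is the feature result expressed as a percentage of the original run, so values above 100 are better for throughput (ops/sec) and values below 100 are better for latency. A minimal sketch of that computation (the class and method names are illustrative, not part of YCSB or HBase):

```java
// Computes the "%" column: the feature value as a percentage of the
// original value, rounded to the nearest integer. For ops/sec a result
// >100 means the feature is faster; for latency <100 means it is faster.
public class PercentColumn {
    static long pct(double original, double feature) {
        return Math.round(feature / original * 100);
    }

    public static void main(String[] args) {
        System.out.println(pct(29_601, 39_088)); // tbl1-u ops/sec -> 132
        System.out.println(pct(1_686, 1_276));   // tbl1-u avg latency -> 76
    }
}
```

For example, tbl4-u's 95th percentile drops from 370,943 us to 49,471 us, which this formula reports as 13%.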



> BlockCache performance improve by reduce eviction rate
> --
>
> Key: HBASE-23887
> URL: https://issues.apache.org/jira/browse/HBASE-23887
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache, Performance
>Reporter: Danil Lipovoy
>Priority: Minor
> Attachments: 1582787018434_rs_metrics.jpg, 
> 1582801838065_rs_metrics_new.png, BC_LongRun.png, 
> BlockCacheEvictionProcess.gif, cmp.png, evict_BC100_vs_BC23.png, 
> eviction_100p.png, eviction_100p.png, eviction_100p.png, gc_100p.png, 
> read_requests_100pBC_vs_23pBC.png, requests_100p.png, requests_100p.png, 
> requests_new_100p.png
>
>
> Hi!
> This is my first time here, please correct me if something is wrong.
> I want to propose how to improve performance 

[jira] [Commented] (HBASE-23887) BlockCache performance improve by reduce eviction rate

2020-06-01 Thread Danil Lipovoy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123362#comment-17123362
 ] 

Danil Lipovoy commented on HBASE-23887:
---

Did more tests with the same tables, but this time _recordcount_ = the number of 
records in the table, and

*hbase.lru.cache.heavy.eviction.count.limit* = 0

*hbase.lru.cache.heavy.eviction.mb.size.limit* = 200

The results:

!requests_new_100p.png!

 

And YCSB stats:
| |*original*|*feature*|*%*|
|tbl1-u (ops/sec)|29,601|39,088|132|
|tbl2-u (ops/sec)|38,793|61,692|159|
|tbl3-u (ops/sec)|38,216|60,415|158|
|tbl4-u (ops/sec)|325|657|202|
|tbl1-z (ops/sec)|46,990|58,252|124|
|tbl2-z (ops/sec)|54,401|72,484|133|
|tbl3-z (ops/sec)|57,100|69,984|123|
|tbl4-z (ops/sec)|452|763|169|
|tbl1-l (ops/sec)|56,001|63,804|114|
|tbl2-l (ops/sec)|68,700|76,074|111|
|tbl3-l (ops/sec)|64,189|72,229|113|
|tbl4-l (ops/sec)|619|897|145|
| | | | |
| | | | |
| |*original*|*feature*|*%*|
|tbl1-u AverageLatency(us)|1,686|1,276|76|
|tbl2-u AverageLatency(us)|1,287|808|63|
|tbl3-u AverageLatency(us)|1,306|825|63|
|tbl4-u AverageLatency(us)|76,810|38,007|49|
|tbl1-z AverageLatency(us)|1,061|856|81|
|tbl2-z AverageLatency(us)|917|688|75|
|tbl3-z AverageLatency(us)|873|712|82|
|tbl4-z AverageLatency(us)|55,114|32,670|59|
|tbl1-l AverageLatency(us)|890|781|88|
|tbl2-l AverageLatency(us)|726|655|90|
|tbl3-l AverageLatency(us)|777|690|89|
|tbl4-l AverageLatency(us)|40,235|27,774|69|
| | | | |
| | | | |
| |*original*|*feature*|*%*|
|tbl1-u 95thPercentileLatency(us)|2,831|2,569|91|
|tbl2-u 95thPercentileLatency(us)|1,266|1,073|85|
|tbl3-u 95thPercentileLatency(us)|1,497|1,194|80|
|tbl4-u 95thPercentileLatency(us)|370,943|49,471|13|
|tbl1-z 95thPercentileLatency(us)|1,784|1,669|94|
|tbl2-z 95thPercentileLatency(us)|918|871|95|
|tbl3-z 95thPercentileLatency(us)|978|933|95|
|tbl4-z 95thPercentileLatency(us)|336,639|48,863|15|
|tbl1-l 95thPercentileLatency(us)|1,523|1,441|95|
|tbl2-l 95thPercentileLatency(us)|820|825|101|
|tbl3-l 95thPercentileLatency(us)|918|907|99|
|tbl4-l 95thPercentileLatency(us)|77,951|48,575|62|

> BlockCache performance improve by reduce eviction rate
> --
>
> Key: HBASE-23887
> URL: https://issues.apache.org/jira/browse/HBASE-23887
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache, Performance
>Reporter: Danil Lipovoy
>Priority: Minor
> Attachments: 1582787018434_rs_metrics.jpg, 
> 1582801838065_rs_metrics_new.png, BC_LongRun.png, 
> BlockCacheEvictionProcess.gif, cmp.png, evict_BC100_vs_BC23.png, 
> eviction_100p.png, eviction_100p.png, eviction_100p.png, gc_100p.png, 
> read_requests_100pBC_vs_23pBC.png, requests_100p.png, requests_100p.png, 
> requests_new_100p.png
>
>
> Hi!
> This is my first time here, please correct me if something is wrong.
> I want to propose how to improve performance when the data in HFiles is much 
> larger than the BlockCache (a usual story in Big Data). The idea is to cache 
> only part of the DATA blocks. This is good because LruBlockCache starts to 
> work and saves a huge amount of GC.
> Sometimes we have more data than can fit into the BlockCache, which causes a 
> high rate of evictions. In this case we can skip caching block N and instead 
> cache block N+1. We would evict block N quite soon anyway, which is why 
> skipping it is good for performance.
> Example:
> Imagine we have a little cache that can fit only 1 block, and we are trying 
> to read 3 blocks with offsets:
> 124
> 198
> 223
> The current way: we put block 124, then put 198, evict 124, put 223, evict 
> 198. A lot of work (5 actions).
> With the feature: the last few digits of the offsets are evenly distributed 
> from 0 to 99. When we take the offsets modulo 100 we get:
> 124 -> 24
> 198 -> 98
> 223 -> 23
> This helps to sort them. Some part, for example offsets below 50 (if we set 
> *hbase.lru.cache.data.block.percent* = 50), goes into the cache, and we skip 
> the others. It means we will not try to handle block 198 and can save CPU for 
> other work. As a result, we put block 124, then put 223, evict 124 (3 
> actions).
> See the picture in the attachment with the test below. Requests per second 
> are higher, GC is lower.
>  
> The key point of the code:
> Added the parameter *hbase.lru.cache.data.block.percent*, which by default = 
> 100.
>  
> But if we set it to 1-99, the following logic applies:
>  
>  
> {code:java}
> public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) {
>   if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) {
>     if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) {
>       return;
>     }
>   }
>   // ... the same code as usual
> }
> {code}
>  
> Other parameters control when this logic is enabled, so it works only while 
> heavy reading is going on:
> hbase.lru.cache.heavy.eviction.count.limit - how many times the eviction 
> process has to run before we start skipping data blocks
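The gating behaviour that the heavy-eviction limits describe can be sketched as a small state machine. The following is an illustrative reconstruction from the description above, not the actual HBase code; the class, method, and field names are hypothetical:

```java
// Illustrative sketch (not HBase source) of the heavy-eviction gate:
// skipping data blocks only turns on after the eviction process has
// evicted more than a byte limit for more than N consecutive runs, and
// turns off again as soon as one run evicts less than the limit.
public class HeavyEvictionGate {
    private final int countLimit;  // cf. hbase.lru.cache.heavy.eviction.count.limit
    private final long bytesLimit; // cf. hbase.lru.cache.heavy.eviction.bytes.size.limit
    private int heavyRuns = 0;     // consecutive heavy eviction runs seen so far

    public HeavyEvictionGate(int countLimit, long bytesLimit) {
        this.countLimit = countLimit;
        this.bytesLimit = bytesLimit;
    }

    // Called once per eviction run with the number of bytes just evicted.
    // Returns true while data-block skipping should be active.
    public boolean onEvictionRun(long bytesEvicted) {
        if (bytesEvicted > bytesLimit) {
            heavyRuns++;           // another consecutive heavy run
        } else {
            heavyRuns = 0;         // load dropped: cache everything again
        }
        return heavyRuns > countLimit;
    }

    public static void main(String[] args) {
        // With count.limit = 10, twelve consecutive 12 MB evictions
        // leave the gate open (skipping active).
        HeavyEvictionGate gate = new HeavyEvictionGate(10, 10L * 1024 * 1024);
        boolean skipping = false;
        for (int run = 1; run <= 12; run++) {
            skipping = gate.onEvictionRun(12L * 1024 * 1024);
        }
        System.out.println(skipping); // prints "true"
    }
}
```

With count.limit = 0 (as in the test at the top of this thread), the gate opens on the very first heavy run.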

[jira] [Updated] (HBASE-23887) BlockCache performance improve by reduce eviction rate

2020-06-01 Thread Danil Lipovoy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danil Lipovoy updated HBASE-23887:
--
Attachment: requests_new_100p.png

> BlockCache performance improve by reduce eviction rate
> --
>
> Key: HBASE-23887
> URL: https://issues.apache.org/jira/browse/HBASE-23887
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache, Performance
>Reporter: Danil Lipovoy
>Priority: Minor
> Attachments: 1582787018434_rs_metrics.jpg, 
> 1582801838065_rs_metrics_new.png, BC_LongRun.png, 
> BlockCacheEvictionProcess.gif, cmp.png, evict_BC100_vs_BC23.png, 
> eviction_100p.png, eviction_100p.png, eviction_100p.png, gc_100p.png, 
> read_requests_100pBC_vs_23pBC.png, requests_100p.png, requests_100p.png, 
> requests_new_100p.png
>
>
> Hi!
> This is my first time here, please correct me if something is wrong.
> I want to propose how to improve performance when the data in HFiles is much 
> larger than the BlockCache (a usual story in Big Data). The idea is to cache 
> only part of the DATA blocks. This is good because LruBlockCache starts to 
> work and saves a huge amount of GC.
> Sometimes we have more data than can fit into the BlockCache, which causes a 
> high rate of evictions. In this case we can skip caching block N and instead 
> cache block N+1. We would evict block N quite soon anyway, which is why 
> skipping it is good for performance.
> Example:
> Imagine we have a little cache that can fit only 1 block, and we are trying 
> to read 3 blocks with offsets:
> 124
> 198
> 223
> The current way: we put block 124, then put 198, evict 124, put 223, evict 
> 198. A lot of work (5 actions).
> With the feature: the last few digits of the offsets are evenly distributed 
> from 0 to 99. When we take the offsets modulo 100 we get:
> 124 -> 24
> 198 -> 98
> 223 -> 23
> This helps to sort them. Some part, for example offsets below 50 (if we set 
> *hbase.lru.cache.data.block.percent* = 50), goes into the cache, and we skip 
> the others. It means we will not try to handle block 198 and can save CPU for 
> other work. As a result, we put block 124, then put 223, evict 124 (3 
> actions).
> See the picture in the attachment with the test below. Requests per second 
> are higher, GC is lower.
>  
> The key point of the code:
> Added the parameter *hbase.lru.cache.data.block.percent*, which by default = 
> 100.
>  
> But if we set it to 1-99, the following logic applies:
>  
>  
> {code:java}
> public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) {
>   if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) {
>     if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) {
>       return;
>     }
>   }
>   // ... the same code as usual
> }
> {code}
>  
> Other parameters control when this logic is enabled, so it works only while 
> heavy reading is going on:
> hbase.lru.cache.heavy.eviction.count.limit - how many times the eviction 
> process has to run before we start skipping data blocks
> hbase.lru.cache.heavy.eviction.bytes.size.limit - how many bytes have to be 
> evicted on each run before we start skipping data blocks
> By default: if 10 runs (100 seconds) each evicted more than 10 MB, then we 
> start to skip 50% of data blocks.
> When the heavy eviction process ends, the new logic turns off and all blocks 
> are put into the BlockCache again.
>  
> Description of the test:
> 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB RAM.
> 4 RegionServers
> 4 tables by 64 regions by 1.88 GB of data each = 600 GB total (only FAST_DIFF)
> Total BlockCache size = 48 GB (8% of the data in HFiles)
> Random read in 20 threads
>  
> I am going to make a Pull Request; I hope this is the right way to make a 
> contribution to this cool product.
>  
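The offset-modulus admission rule from the description can be exercised on its own. The sketch below reimplements just that check outside of HBase; the class and method names are illustrative, not HBase API:

```java
// Standalone sketch of the offset-modulus admission rule described above.
// A DATA block is cached only if the last two decimal digits of its file
// offset fall below the configured percentage; since those digits are
// roughly uniform in [0, 99], this admits about `percent`% of blocks.
public class CacheAdmissionSketch {

    // Mirrors the role of hbase.lru.cache.data.block.percent.
    static boolean shouldCache(long offset, int cacheDataBlockPercent) {
        if (cacheDataBlockPercent == 100) {
            return true; // feature disabled: cache every data block
        }
        return offset % 100 < cacheDataBlockPercent;
    }

    public static void main(String[] args) {
        long[] offsets = {124, 198, 223};
        for (long off : offsets) {
            System.out.println(off + " -> " + (off % 100) + " : "
                + (shouldCache(off, 50) ? "cached" : "skipped"));
        }
    }
}
```

With the percentage set to 50, offsets 124 and 223 are admitted while 198 is skipped, which is exactly the 3-actions-instead-of-5 scenario from the example above.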



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1737:
URL: https://github.com/apache/hbase/pull/1737#issuecomment-637292793


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 41s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 42s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 46s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 39s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 125m 35s |  hbase-server in the patch passed.  
|
   |  |   | 151m 16s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/11/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1737 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 12e81f4322c4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / bad2d4e409 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/11/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/11/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/11/testReport/
 |
   | Max. process+thread count | 4576 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/11/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1774: HBASE-24389 Introduce new master rpc methods to locate meta region through root region

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1774:
URL: https://github.com/apache/hbase/pull/1774#issuecomment-637285917


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  1s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 50s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   9m  5s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 33s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  The patch passed checkstyle 
in hbase-protocol-shaded  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  hbase-client: The patch 
generated 0 new + 242 unchanged - 2 fixed = 242 total (was 244)  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  The patch passed checkstyle 
in hbase-zookeeper  |
   | +1 :green_heart: |  checkstyle  |   1m 36s |  hbase-server: The patch 
generated 0 new + 454 unchanged - 6 fixed = 454 total (was 460)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 13s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  hbaseprotoc  |   2m 41s |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   9m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 49s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  63m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/11/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1774 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle cc hbaseprotoc prototool |
   | uname | Linux a7732b19fe9b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / bad2d4e409 |
   | Max. process+thread count | 94 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-zookeeper 
hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/11/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1648: HBASE-8458 Support for batch version of checkAndMutate()

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1648:
URL: https://github.com/apache/hbase/pull/1648#issuecomment-637273320


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 12s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 20s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 13s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 15s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 10s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 48s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m  8s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 226m 19s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   5m 51s |  hbase-thrift in the patch passed.  
|
   | +1 :green_heart: |  unit  |   3m 51s |  hbase-rest in the patch passed.  |
   |  |   | 273m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1648 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4bb98e0c2895 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/testReport/
 |
   | Max. process+thread count | 2904 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift 
hbase-rest U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1648: HBASE-8458 Support for batch version of checkAndMutate()

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1648:
URL: https://github.com/apache/hbase/pull/1648#issuecomment-637271325


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 59s |  master passed  |
   | +1 :green_heart: |  compile  |   4m 12s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 33s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 27s |  hbase-client in master failed.  |
   | -0 :warning: |  javadoc  |   0m 21s |  hbase-rest in master failed.  |
   | -0 :warning: |  javadoc  |   0m 48s |  hbase-server in master failed.  |
   | -0 :warning: |  javadoc  |   1m  1s |  hbase-thrift in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 57s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 39s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 32s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 49s |  hbase-server in the patch failed.  |
   | -0 :warning: |  javadoc  |   1m 12s |  hbase-thrift in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 23s |  hbase-rest in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 16s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 33s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 213m 36s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   5m 59s |  hbase-thrift in the patch passed.  
|
   | +1 :green_heart: |  unit  |   3m 55s |  hbase-rest in the patch passed.  |
   |  |   | 267m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1648 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e76416ddb173 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-rest.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-thrift.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-thrift.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-rest.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/testReport/
 |
   | Max. process+thread count | 3130 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift 
hbase-rest U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hbase] Apache-HBase commented on pull request #1730: HBASE-24289 Heterogeneous Storage for Date Tiered Compaction

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1730:
URL: https://github.com/apache/hbase/pull/1730#issuecomment-637262268


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 58s |  master passed  |
   | +1 :green_heart: |  spotbugs  |  11m  5s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 20s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | -0 :warning: |  whitespace  |   0m  0s |  The patch has 5 line(s) that end 
in whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  hadoopcheck  |  11m  7s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |  11m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 41s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  54m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1730/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1730 |
   | Optional Tests | dupname asflicense shellcheck shelldocs spotbugs 
hadoopcheck hbaseanti checkstyle |
   | uname | Linux a826f45a0cc8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / bad2d4e409 |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1730/4/artifact/yetus-general-check/output/whitespace-eol.txt
 |
   | Max. process+thread count | 137 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-server . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1730/4/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
shellcheck=0.4.6 spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1828: HBASE-24446 Use EnvironmentEdgeManager to compute clock skew in Master

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1828:
URL: https://github.com/apache/hbase/pull/1828#issuecomment-637257780


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 55s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m  6s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 45s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m 10s |  hbase-server: The patch 
generated 4 new + 15 unchanged - 0 fixed = 19 total (was 15)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 13s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  37m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1828/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1828 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 5f8850c79c28 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / bad2d4e409 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1828/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1828/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] Apache-HBase commented on pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1737:
URL: https://github.com/apache/hbase/pull/1737#issuecomment-637255117


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  8s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 20s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m 15s |  hbase-server: The patch 
generated 7 new + 272 unchanged - 0 fixed = 279 total (was 272)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m  0s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  36m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/11/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1737 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 07a8760acafd 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / bad2d4e409 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/11/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/11/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[jira] [Resolved] (HBASE-24474) Rename LocalRegion to MasterRegion

2020-06-01 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-24474.
---
Resolution: Fixed

Pushed to branch-2.3+.

Thanks all for reviewing.

> Rename LocalRegion to MasterRegion
> --
>
> Key: HBASE-24474
> URL: https://issues.apache.org/jira/browse/HBASE-24474
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> This is a suggestion by [~ndimiduk] when reviewing the PR for HBASE-24408.
> https://github.com/apache/hbase/pull/1753#discussion_r432783115
> I think this can make the code less confusing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-24489) Rewrite TestClusterRestartFailover.test since namespace table is gone on master

2020-06-01 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-24489:
-

 Summary: Rewrite TestClusterRestartFailover.test since namespace table is gone on master
 Key: HBASE-24489
 URL: https://issues.apache.org/jira/browse/HBASE-24489
 Project: HBase
  Issue Type: Task
  Components: test
Reporter: Duo Zhang


We still have this
{code}
// Find server that does not have hbase:namespace on it. This test holds up SCPs. If it
// holds up the server w/ hbase:namespace, the Master initialization will be held up
// because this table is not online and the test fails.
for (JVMClusterUtil.RegionServerThread rst :
    UTIL.getHBaseCluster().getLiveRegionServerThreads()) {
  HRegionServer rs = rst.getRegionServer();
  if (rs.getRegions(TableName.NAMESPACE_TABLE_NAME).isEmpty()) {
    SERVER_FOR_TEST = rs.getServerName();
  }
}
{code}





[GitHub] [hbase] Apache-HBase commented on pull request #1786: HBASE-24418 Consolidate Normalizer implementations

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1786:
URL: https://github.com/apache/hbase/pull/1786#issuecomment-637254074


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 15s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 36s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m  0s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 34s |  hbase-server generated 25 new + 1 
unchanged - 0 fixed = 26 total (was 1)  |
   | -0 :warning: |  javadoc  |   1m 59s |  root generated 25 new + 88 
unchanged - 0 fixed = 113 total (was 88)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 280m 35s |  root in the patch failed.  |
   |  |   | 312m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1786 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 2eca36fca414 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 1.8.0_232 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-root.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/testReport/
 |
   | Max. process+thread count | 4755 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-server . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] anoopsjohn commented on a change in pull request #1825: HBASE-24189 WALSplit recreates region dirs for deleted table with rec…

2020-06-01 Thread GitBox


anoopsjohn commented on a change in pull request #1825:
URL: https://github.com/apache/hbase/pull/1825#discussion_r433601716



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
##
@@ -285,23 +286,35 @@ boolean splitLogFile(FileStatus logfile, 
CancelableProgressable reporter) throws
 String encodedRegionNameAsStr = Bytes.toString(region);
 lastFlushedSequenceId = 
lastFlushedSequenceIds.get(encodedRegionNameAsStr);
 if (lastFlushedSequenceId == null) {
-  if (sequenceIdChecker != null) {
-RegionStoreSequenceIds ids = 
sequenceIdChecker.getLastSequenceId(region);
-Map maxSeqIdInStores = new 
TreeMap<>(Bytes.BYTES_COMPARATOR);
-for (StoreSequenceId storeSeqId : ids.getStoreSequenceIdList()) {
-  maxSeqIdInStores.put(storeSeqId.getFamilyName().toByteArray(),
-storeSeqId.getSequenceId());
+  if (!(isRegionDirPresentUnderRoot(entry.getKey().getTableName(), 
encodedRegionNameAsStr))) {
+// The region directory itself is not present in the WAL FS. This 
indicates that

Review comment:
   Oh ya, wrong comment, I need to change it. We check the data FS only to know whether this is still a valid region or not. The WAL FS might not have the dir unless a WAL split happened (in 1.x). From 2.x the WAL FS will always have the region dir once the region is first opened.
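
   The check being described can be illustrated with a minimal standalone sketch; the directory layout and names below are simplified assumptions for illustration, not HBase's actual layout or API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RegionDirCheckDemo {
  /**
   * Decide whether WAL entries for a region should be replayed: if the region
   * directory is gone from the data FS, the table/region was deleted and the
   * edits can be skipped. (Paths and names here are illustrative only.)
   */
  static boolean shouldReplay(Path dataRoot, String tableName, String encodedRegionName) {
    return Files.isDirectory(dataRoot.resolve("data").resolve("default")
      .resolve(tableName).resolve(encodedRegionName));
  }

  public static void main(String[] args) throws IOException {
    Path root = Files.createTempDirectory("hbase-root");
    // Create a region dir for one region only.
    Path regionDir = root.resolve("data").resolve("default")
      .resolve("tbl1").resolve("abc123");
    Files.createDirectories(regionDir);
    System.out.println(shouldReplay(root, "tbl1", "abc123"));   // true
    System.out.println(shouldReplay(root, "tbl1", "deadbeef")); // false
  }
}
```

   The point of checking the data FS rather than the WAL FS is exactly what the comment says: only the data FS reliably reflects whether the region still exists.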









[GitHub] [hbase] Apache-HBase commented on pull request #1786: HBASE-24418 Consolidate Normalizer implementations

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1786:
URL: https://github.com/apache/hbase/pull/1786#issuecomment-637248419


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m  9s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 48s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   7m 36s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 19s |  hbase-common in master failed.  |
   | -0 :warning: |  javadoc  |   0m 48s |  hbase-server in master failed.  |
   | -0 :warning: |  javadoc  |   0m 15s |  root in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 38s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 38s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 28s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 22s |  hbase-common in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 54s |  hbase-server in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 18s |  root in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 253m 14s |  root in the patch passed.  |
   |  |   | 292m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1786 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f8bd796aaad9 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/testReport/
 |
   | Max. process+thread count | 5092 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-server . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] sguggilam opened a new pull request #1828: HBASE-24446 Use EnvironmentEdgeManager to compute clock skew in Master

2020-06-01 Thread GitBox


sguggilam opened a new pull request #1828:
URL: https://github.com/apache/hbase/pull/1828


   







[GitHub] [hbase] bsglz commented on pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-01 Thread GitBox


bsglz commented on pull request #1737:
URL: https://github.com/apache/hbase/pull/1737#issuecomment-637243836


   rebase







[jira] [Commented] (HBASE-23296) Add CompositeBucketCache to support tiered BC

2020-06-01 Thread chenxu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121511#comment-17121511
 ] 

chenxu commented on HBASE-23296:


bq. Why don't use bigger heap as L1, because gc problem?
Do you mean BucketCache with a heap ioEngine? That seems not to be supported now. If we use LruBlockCache there may be some GC problems.

> Add CompositeBucketCache to support tiered BC
> -
>
> Key: HBASE-23296
> URL: https://issues.apache.org/jira/browse/HBASE-23296
> Project: HBase
>  Issue Type: New Feature
>  Components: BlockCache
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
>
> LruBlockCache is not suitable in the following scenarios:
> (1) cache size too large (will take too much heap memory, and 
> evictBlocksByHfileName is not so efficient, as HBASE-23277 mentioned)
> (2) block evicted frequently, especially cacheOnWrite & prefetchOnOpen are 
> enabled.
> Since block‘s data is reclaimed by GC, this may affect GC performance.
> So how about enabling a Bucket based L1 Cache.





[jira] [Updated] (HBASE-24474) Rename LocalRegion to MasterRegion

2020-06-01 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-24474:
--
Hadoop Flags: Reviewed

> Rename LocalRegion to MasterRegion
> --
>
> Key: HBASE-24474
> URL: https://issues.apache.org/jira/browse/HBASE-24474
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> This is a suggestion by [~ndimiduk] when reviewing the PR for HBASE-24408.
> https://github.com/apache/hbase/pull/1753#discussion_r432783115
> I think this can make the code less confusing.





[GitHub] [hbase] songxincun commented on pull request #1819: HBASE-24478 The regionInfo parameter for MasterProcedureScheduler#wai…

2020-06-01 Thread GitBox


songxincun commented on pull request #1819:
URL: https://github.com/apache/hbase/pull/1819#issuecomment-637233794


   > May I recommend regionsInfo instead of regionInfos?
   
   IMO, the variable type is final RegionInfo..., so it represents a list of RegionInfo, not the info of a region. So I think regionInfos may be better.







[GitHub] [hbase] Apache9 merged pull request #1811: HBASE-24474 Rename LocalRegion to MasterRegion

2020-06-01 Thread GitBox


Apache9 merged pull request #1811:
URL: https://github.com/apache/hbase/pull/1811


   







[GitHub] [hbase] Apache9 commented on a change in pull request #1811: HBASE-24474 Rename LocalRegion to MasterRegion

2020-06-01 Thread GitBox


Apache9 commented on a change in pull request #1811:
URL: https://github.com/apache/hbase/pull/1811#discussion_r433584900



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##
@@ -1563,7 +1564,7 @@ protected void stopServiceThreads() {
   private void createProcedureExecutor() throws IOException {
 MasterProcedureEnv procEnv = new MasterProcedureEnv(this);
 procedureStore =
-  new RegionProcedureStore(this, localStore, new 
MasterProcedureEnv.FsUtilsLeaseRecovery(this));
+  new RegionProcedureStore(this, masterRegion, new 
MasterProcedureEnv.FsUtilsLeaseRecovery(this));

Review comment:
   The idea is to use different families. There is a known risk that, if someone stores a lot of data in one of the families, it will slow down the startup of the whole HMaster, even when it is not necessary. We should document this in our ref guide. Can this be a follow-on issue?









[GitHub] [hbase] Apache9 commented on a change in pull request #1811: HBASE-24474 Rename LocalRegion to MasterRegion

2020-06-01 Thread GitBox


Apache9 commented on a change in pull request #1811:
URL: https://github.com/apache/hbase/pull/1811#discussion_r433583919



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
##
@@ -258,6 +259,14 @@
   public static final String SPECIAL_RECOVERED_EDITS_DIR =
 "hbase.hregion.special.recovered.edits.dir";
 
+  /**
+   * Whether to use {@link MetaCellComparator} even if we are not meta region. 
Used when creating
+   * master local region.
+   */
+  public static final String USE_META_CELL_COMPARATOR = 
"hbase.region.use.meta.cell.comparator";

Review comment:
   For now it is only used in HMaster but I do not think it should be 
prefixed with hbase.master, as it is a configuration for HRegion.









[GitHub] [hbase] Apache9 commented on a change in pull request #1811: HBASE-24474 Rename LocalRegion to MasterRegion

2020-06-01 Thread GitBox


Apache9 commented on a change in pull request #1811:
URL: https://github.com/apache/hbase/pull/1811#discussion_r433582964



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/region/MasterRegion.java
##
@@ -79,14 +79,14 @@
  * Notice that, you can use different root file system and WAL file system. 
Then the above directory
  * will be on two file systems, the root file system will have the data 
directory while the WAL
  * filesystem will have the WALs directory. The archived HFile will be moved 
to the global HFile
- * archived directory with the {@link LocalRegionParams#archivedWalSuffix()} 
suffix. The archived
+ * archived directory with the {@link MasterRegionParams#archivedWalSuffix()} 
suffix. The archived
  * WAL will be moved to the global WAL archived directory with the
- * {@link LocalRegionParams#archivedHFileSuffix()} suffix.
+ * {@link MasterRegionParams#archivedHFileSuffix()} suffix.
  */
 @InterfaceAudience.Private
-public final class LocalRegion {
+public final class MasterRegion {

Review comment:
   This is intentional. As you can see in the implementation of MasterRegion.update, we have to call `flusherAndCompactor.onUpdate();` after each update. If we exposed the HRegion directly, callers would have to do this on their own, and I believe it would be easy to forget and then cause big problems...
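
   The wrapper rationale described here can be sketched as a tiny facade; all class and method names below are illustrative stand-ins, not the real HBase types. Because update() is the only public write path, the onUpdate() hook can never be skipped by a caller:

```java
import java.util.ArrayList;
import java.util.List;

public class MasterRegionFacadeDemo {

  /** Stand-in for the wrapped HRegion; holds rows in memory. */
  static class BackingRegion {
    final List<String> rows = new ArrayList<>();
    void put(String row) { rows.add(row); }
  }

  /** Stand-in for the flusher/compactor that must observe every update. */
  static class FlusherAndCompactor {
    int onUpdateCalls = 0;
    void onUpdate() { onUpdateCalls++; } // real code would check flush/compaction thresholds
  }

  /** The facade: the only write path, so the onUpdate() hook is enforced in one place. */
  static class MasterRegionFacade {
    private final BackingRegion region = new BackingRegion();
    final FlusherAndCompactor flusherAndCompactor = new FlusherAndCompactor();

    void update(String row) {
      region.put(row);
      flusherAndCompactor.onUpdate(); // invariant callers cannot forget
    }

    int size() { return region.rows.size(); }
  }

  public static void main(String[] args) {
    MasterRegionFacade facade = new MasterRegionFacade();
    facade.update("r1");
    facade.update("r2");
    System.out.println(facade.size() + " updates, "
      + facade.flusherAndCompactor.onUpdateCalls + " onUpdate calls");
  }
}
```

   This is the classic argument for a facade over direct exposure: the invariant lives in one method instead of at every call site.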









[GitHub] [hbase] saintstack commented on a change in pull request #1648: HBASE-8458 Support for batch version of checkAndMutate()

2020-06-01 Thread GitBox


saintstack commented on a change in pull request #1648:
URL: https://github.com/apache/hbase/pull/1648#discussion_r433579358



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncTableImpl.java
##
@@ -318,38 +319,28 @@ private void preCheck() {
 public CompletableFuture thenPut(Put put) {
   validatePut(put, conn.connConf.getMaxKeyValueSize());
   preCheck();
-  return RawAsyncTableImpl.this. newCaller(row, 
put.getPriority(), rpcTimeoutNs)
-.action((controller, loc, stub) -> 
RawAsyncTableImpl.mutate(controller, loc,
-  stub, put,
-  (rn, p) -> RequestConverter.buildMutateRequest(rn, row, family, 
qualifier, op, value,
-null, timeRange, p),
-  (c, r) -> r.getProcessed()))
-.call();
+  return checkAndMutate(CheckAndMutate.newBuilder(row)
+.ifMatches(family, qualifier, op, value)
+.timeRange(timeRange)
+.build(put));
 }
 
 @Override
 public CompletableFuture thenDelete(Delete delete) {
   preCheck();
-  return RawAsyncTableImpl.this. newCaller(row, 
delete.getPriority(), rpcTimeoutNs)
-.action((controller, loc, stub) -> RawAsyncTableImpl.mutate(controller,
-  loc, stub, delete,
-  (rn, d) -> RequestConverter.buildMutateRequest(rn, row, family, 
qualifier, op, value,
-null, timeRange, d),
-  (c, r) -> r.getProcessed()))
-.call();
+  return checkAndMutate(CheckAndMutate.newBuilder(row)
+.ifMatches(family, qualifier, op, value)
+.timeRange(timeRange)
+.build(delete));

Review comment:
   Ditto

##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/CheckAndMutate.java
##
@@ -0,0 +1,362 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.NavigableMap;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellBuilder;
+import org.apache.hadoop.hbase.CellBuilderType;
+import org.apache.hadoop.hbase.CompareOperator;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
+/**
+ * Used to perform CheckAndMutate operations. Currently {@link Put}, {@link 
Delete}
+ * and {@link RowMutations} are supported.
+ * 
+ * Use the builder class to instantiate a CheckAndMutate object.
+ * This builder class is fluent style APIs, the code are like:
+ * 
+ * 
+ * // A CheckAndMutate operation where do the specified action if the column 
(specified by the
+ * // family and the qualifier) of the row equals to the specified value
+ * CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+ *   .ifEquals(family, qualifier, value)
+ *   .build(put);
+ *
+ * // A CheckAndMutate operation where do the specified action if the column 
(specified by the
+ * // family and the qualifier) of the row doesn't exist
+ * CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+ *   .ifNotExists(family, qualifier)
+ *   .build(put);
+ *
+ * // A CheckAndMutate operation where do the specified action if the row 
matches the filter
+ * CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+ *   .ifMatches(filter)
+ *   .build(delete);
+ * 
+ * 
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public final class CheckAndMutate extends Mutation {
+
+  /**
+   * A builder class for building a CheckAndMutate object.
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public static final class Builder {
+private final byte[] row;
+private byte[] family;
+private byte[] qualifier;
+private CompareOperator op;
+private byte[] value;
+private Filter filter;
+private TimeRange timeRange;
+
+private Builder(byte[] row) {
+  this.row = Preconditions.checkNotNull(row, "row is null");
+}
+
+

[GitHub] [hbase] Apache9 commented on pull request #1826: HBASE-24438 Don't update TaskMonitor when deserializing ServerCrashProcedure

2020-06-01 Thread GitBox


Apache9 commented on pull request #1826:
URL: https://github.com/apache/hbase/pull/1826#issuecomment-637222180


   I think the problem here is in the implementation of updateProgress. We pass
   updateState as false when calling it in the deserialize method, but the only
   place currentRunningState is updated is inside updateProgress itself, so when
   called from deserialize, currentRunningState will always be null.
   
   And the updateProgress method is a mess; the order of execution is really
   strange.
   I think it should be like this
   ```
   String msg = "Processing ServerCrashProcedure of " + serverName;
   if (status == null) {
 status = TaskMonitor.get().createStatus(msg);
   }
   if (updateState) {
 currentRunningState = getCurrentState();
   }
   if (currentRunningState == ServerCrashState.SERVER_CRASH_FINISH) {
 status.markComplete(msg + " done");
 return;
   }
   int childrenLatch = getChildrenLatch();
   status.setStatus(msg + " current State " + currentRunningState
       + (childrenLatch > 0 ? "; remaining num of running child procedures = " + childrenLatch
           : ""));
   ```
   
   And we should call updateProgress(true) in deserializeStateData. Could you 
please try if this can fix your problem?
   
   Thanks.
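   As a minimal, self-contained sketch of the behavior described above (class and
   method names here are simplified stand-ins, not the real HBase code): calling
   updateProgress(false) before the state has ever been set leaves
   currentRunningState null, while the reordered logic refreshes it first when asked.
   
   ```java
   // Illustrative stand-in for ServerCrashProcedure's progress reporting.
   public class UpdateProgressSketch {
   
     enum ServerCrashState { SERVER_CRASH_START, SERVER_CRASH_FINISH }
   
     static class Procedure {
       ServerCrashState currentRunningState; // stays null until updateProgress(true) runs
       String status;
   
       // Pretend the procedure framework reports that we are already finished.
       ServerCrashState getCurrentState() { return ServerCrashState.SERVER_CRASH_FINISH; }
   
       // Mirrors the reordered logic proposed above: refresh state first,
       // then decide whether to mark the task complete.
       void updateProgress(boolean updateState) {
         String msg = "Processing ServerCrashProcedure of server1";
         if (updateState) {
           currentRunningState = getCurrentState();
         }
         if (currentRunningState == ServerCrashState.SERVER_CRASH_FINISH) {
           status = msg + " done";
           return;
         }
         status = msg + " current State " + currentRunningState;
       }
     }
   
     public static void main(String[] args) {
       Procedure p = new Procedure();
       p.updateProgress(false); // as deserialize calls it today: state stays null
       System.out.println(p.status); // "... current State null"
       p.updateProgress(true);  // the proposed deserializeStateData call
       System.out.println(p.status); // "... done"
     }
   }
   ```
   
   With updateState=false the status line still reports a null state, which is the
   symptom this PR is chasing.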



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24184) listSnapshots returns empty when just use simple acl but not use authentication

2020-06-01 Thread tianhang tang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121480#comment-17121480
 ] 

tianhang tang commented on HBASE-24184:
---

[~binlijin] Hi, I notice that 
[HBASE-23896|https://issues.apache.org/jira/browse/HBASE-23896] only fixed 2.x, 
so I have kept a PR open for branch-1. Could you help me review this patch?

> listSnapshots returns empty when just use simple acl but not use 
> authentication
> ---
>
> Key: HBASE-24184
> URL: https://issues.apache.org/jira/browse/HBASE-24184
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: tianhang tang
>Assignee: tianhang tang
>Priority: Minor
>
> For the owner of snapshots (not a global admin user), list_snapshots currently 
> returns empty if I use simple ACLs for authorization without authentication.
> The code in AccessController.preListSnapshot:
> {code:java}
> if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, user)) {
> // list it, if user is the owner of snapshot
> AuthResult result = AuthResult.allow("listSnapshot " + snapshot.getName(),
> "Snapshot owner check allowed", user, null, null, null);
> accessChecker.logResult(result);
> }{code}
> And SnapshotManager.takeSnapshotInternal:
> {code:java}
> if (User.isHBaseSecurityEnabled(master.getConfiguration()) && user != null) {
>   builder.setOwner(user.getShortName());
> }
> {code}
> User.isHBaseSecurityEnabled:
> {code:java}
> public static boolean isHBaseSecurityEnabled(Configuration conf) {
>   return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY));
> }
> {code}
> So I think setOwner serves authorization, not authentication. SnapshotManager 
> should not call setOwner only when hbase.security.authentication = kerberos, 
> since that causes listSnapshots to return empty when only simple ACLs are used.
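The interaction quoted above can be sketched in a few lines (simplified stand-ins
for the HBase methods, not the real API): because the owner is recorded only when
authentication is kerberos, the owner check in preListSnapshot never matches under
simple ACLs.

```java
// Illustrative sketch of the quoted logic; conf is a plain map stand-in
// for Hadoop's Configuration.
import java.util.HashMap;
import java.util.Map;

public class SnapshotOwnerSketch {

  // Mirrors User.isHBaseSecurityEnabled: true only under kerberos.
  static boolean isHBaseSecurityEnabled(Map<String, String> conf) {
    return "kerberos".equalsIgnoreCase(conf.get("hbase.security.authentication"));
  }

  // Mirrors SnapshotManager.takeSnapshotInternal: the owner is set
  // only when security (authentication) is enabled.
  static String takeSnapshot(Map<String, String> conf, String user) {
    return isHBaseSecurityEnabled(conf) ? user : null;
  }

  // Mirrors the owner check in AccessController.preListSnapshot.
  static boolean visibleTo(String owner, String user) {
    return user.equals(owner);
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("hbase.security.authorization", "true"); // simple ACLs, no kerberos
    String owner = takeSnapshot(conf, "alice");
    System.out.println(visibleTo(owner, "alice")); // false: owner was never recorded
  }
}
```

Under this sketch the snapshot owner "alice" cannot see her own snapshot, which
matches the reported symptom.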



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1826: HBASE-24438 Don't update TaskMonitor when deserializing ServerCrashProcedure

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1826:
URL: https://github.com/apache/hbase/pull/1826#issuecomment-637221760


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 13s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 13s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  9s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  0s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 216m  6s |  hbase-server in the patch passed.  
|
   |  |   | 242m 39s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1826 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f673620e475f 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/testReport/
 |
   | Max. process+thread count | 3675 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] thangTang closed pull request #1530: HBASE-24184 listSnapshots returns empty when just use simple acl but not use authentication

2020-06-01 Thread GitBox


thangTang closed pull request #1530:
URL: https://github.com/apache/hbase/pull/1530


   







[jira] [Work started] (HBASE-24485) Backport to branch-1 HBASE-17738 BucketCache startup is slow

2020-06-01 Thread tianhang tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-24485 started by tianhang tang.
-
> Backport to branch-1 HBASE-17738 BucketCache startup is slow
> 
>
> Key: HBASE-24485
> URL: https://issues.apache.org/jira/browse/HBASE-24485
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Reporter: tianhang tang
>Assignee: tianhang tang
>Priority: Minor
>
> I'd like to backport 
> [HBASE-17738|https://issues.apache.org/jira/browse/HBASE-17738] to branch-1.
> Also remove some unnecessary locks related to 
> [HBASE-15785|https://issues.apache.org/jira/browse/HBASE-15785].





[GitHub] [hbase] Apache-HBase commented on pull request #1648: HBASE-8458 Support for batch version of checkAndMutate()

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1648:
URL: https://github.com/apache/hbase/pull/1648#issuecomment-637212740


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 54s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   8m 34s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 45s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m  8s |  The patch passed checkstyle 
in hbase-protocol-shaded  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  hbase-client: The patch 
generated 0 new + 130 unchanged - 5 fixed = 130 total (was 135)  |
   | -0 :warning: |  checkstyle  |   1m 12s |  hbase-server: The patch 
generated 1 new + 67 unchanged - 0 fixed = 68 total (was 67)  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  The patch passed checkstyle 
in hbase-thrift  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  The patch passed checkstyle 
in hbase-rest  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 12s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  hbaseprotoc  |   2m 52s |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   9m 18s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  58m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1648 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle cc hbaseprotoc prototool |
   | uname | Linux 955386b0c255 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift 
hbase-rest U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1648/5/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] Apache-HBase commented on pull request #1826: HBASE-24438 Don't update TaskMonitor when deserializing ServerCrashProcedure

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1826:
URL: https://github.com/apache/hbase/pull/1826#issuecomment-637196973


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 38s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 15s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 48s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 41s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 42s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 40s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 126m  2s |  hbase-server in the patch passed.  
|
   |  |   | 153m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1826 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 8f066f9262d1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/testReport/
 |
   | Max. process+thread count | 4411 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] brfrn169 commented on pull request #1648: HBASE-8458 Support for batch version of checkAndMutate()

2020-06-01 Thread GitBox


brfrn169 commented on pull request #1648:
URL: https://github.com/apache/hbase/pull/1648#issuecomment-637195780


   @joshelser Thank you for reviewing this! I just modified the patch for your 
review. Thanks.







[GitHub] [hbase] Apache-HBase commented on pull request #1786: HBASE-24418 Consolidate Normalizer implementations

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1786:
URL: https://github.com/apache/hbase/pull/1786#issuecomment-637184230


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 56s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 16s |  master passed  |
   | +0 :ok: |  refguide  |   5m 26s |  branch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   | +1 :green_heart: |  spotbugs  |  12m 16s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 42s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 16s |  root: The patch generated 0 
new + 100 unchanged - 1 fixed = 100 total (was 101)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +0 :ok: |  refguide  |   5m 31s |  patch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   | +1 :green_heart: |  hadoopcheck  |  12m  9s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |  12m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  70m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1786 |
   | Optional Tests | dupname asflicense refguide xml spotbugs hadoopcheck 
hbaseanti checkstyle |
   | uname | Linux bc1af458458d 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-general-check/output/branch-site/book.html
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/artifact/yetus-general-check/output/patch-site/book.html
 |
   | Max. process+thread count | 122 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-server . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] Apache-HBase commented on pull request #1827: HBASE-24488 Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1827:
URL: https://github.com/apache/hbase/pull/1827#issuecomment-637183803


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 44s |  master passed  |
   | +0 :ok: |  refguide  |   4m 59s |  branch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 18s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +0 :ok: |  refguide  |   4m 59s |  patch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 17s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  19m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1827 |
   | Optional Tests | dupname asflicense refguide |
   | uname | Linux 3160e8155625 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/2/artifact/yetus-general-check/output/branch-site/book.html
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/2/artifact/yetus-general-check/output/patch-site/book.html
 |
   | Max. process+thread count | 57 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] Apache-HBase commented on pull request #1827: HBASE-24488 Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1827:
URL: https://github.com/apache/hbase/pull/1827#issuecomment-637178699


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1827 |
   | Optional Tests |  |
   | uname | Linux e6e71160b2d5 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Max. process+thread count | 43 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] Apache-HBase commented on pull request #1827: HBASE-24488 Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1827:
URL: https://github.com/apache/hbase/pull/1827#issuecomment-637178526


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   1m 35s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1827 |
   | Optional Tests |  |
   | uname | Linux 2bad30dbed4c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Max. process+thread count | 47 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] Apache-HBase commented on pull request #1827: HBASE-24488 Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1827:
URL: https://github.com/apache/hbase/pull/1827#issuecomment-637177676


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  master passed  |
   | +0 :ok: |  refguide  |   4m 48s |  branch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +0 :ok: |  refguide  |   4m 51s |  patch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 17s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  19m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1827 |
   | Optional Tests | dupname asflicense refguide |
   | uname | Linux 6588853f2b4f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/1/artifact/yetus-general-check/output/branch-site/book.html
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/1/artifact/yetus-general-check/output/patch-site/book.html
 |
   | Max. process+thread count | 78 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[jira] [Commented] (HBASE-24478) The regionInfo parameter for MasterProcedureScheduler#waitRegions and MasterProcedureScheduler#wakeRegions should be plural

2020-06-01 Thread Clara Xiong (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121397#comment-17121397
 ] 

Clara Xiong commented on HBASE-24478:
-

May I recommend RegionsInfo instead of RegionInfos?  

> The regionInfo parameter for MasterProcedureScheduler#waitRegions and 
> MasterProcedureScheduler#wakeRegions should be plural 
> 
>
> Key: HBASE-24478
> URL: https://issues.apache.org/jira/browse/HBASE-24478
> Project: HBase
>  Issue Type: Improvement
>  Components: proc-v2
>Affects Versions: 3.0.0-alpha-1
>Reporter: song XinCun
>Assignee: song XinCun
>Priority: Minor
>
> MasterProcedureScheduler#waitRegions and MasterProcedureScheduler#wakeRegions 
> deal with a list of regions, so the variable name of region info should be 
> plural





[GitHub] [hbase] ndimiduk commented on pull request #1827: HBASE-24488 Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread GitBox


ndimiduk commented on pull request #1827:
URL: https://github.com/apache/hbase/pull/1827#issuecomment-637174500


   Did some archeology and learned about 
[HBASE-16598](https://issues.apache.org/jira/browse/HBASE-16598).







[jira] [Commented] (HBASE-24412) Canary support check only one column family per RegionTask

2020-06-01 Thread Clara Xiong (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121394#comment-17121394
 ] 

Clara Xiong commented on HBASE-24412:
-

Patch looks good. 
 # Could you provide some info on the use case to support checking all families 
and a random family?
 # Could you expose the new option in usage?
 # It was nice to clean up the old code by reformatting, but it makes the review 
difficult. Could you undo the reformatting and apply it as the last commit instead?
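To make the use case concrete, here is a minimal sketch (hypothetical helper, not the actual Canary code) of the two modes discussed above — checking every column family of a region versus sampling a single random family to cut the per-region read load:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class CanaryFamilySketch {
  // All-families mode: one canary read per column family (heavier, full coverage).
  // Single-family mode: one read against a randomly chosen family per RegionTask.
  static List<String> familiesToCheck(List<String> families, boolean checkAll, Random rng) {
    if (checkAll) {
      return families;
    }
    return Collections.singletonList(families.get(rng.nextInt(families.size())));
  }

  public static void main(String[] args) {
    List<String> families = Arrays.asList("cf1", "cf2", "cf3");
    // All families are read in the first mode, exactly one in the second.
    if (familiesToCheck(families, true, new Random(42)).size() != 3) throw new AssertionError();
    if (familiesToCheck(families, false, new Random(42)).size() != 1) throw new AssertionError();
  }
}
```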

> Canary support check only one column family per RegionTask
> --
>
> Key: HBASE-24412
> URL: https://issues.apache.org/jira/browse/HBASE-24412
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Reporter: niuyulin
>Assignee: niuyulin
>Priority: Major
>






[GitHub] [hbase] Apache-HBase commented on pull request #1827: HBASE-24488 Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1827:
URL: https://github.com/apache/hbase/pull/1827#issuecomment-637172175


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m 16s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1827 |
   | Optional Tests |  |
   | uname | Linux 758edea1a21b 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Max. process+thread count | 44 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   
   







[GitHub] [hbase] Apache-HBase commented on pull request #1827: HBASE-24488 Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1827:
URL: https://github.com/apache/hbase/pull/1827#issuecomment-637171949


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   1m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1827 |
   | Optional Tests |  |
   | uname | Linux c77a9e3c0aa1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Max. process+thread count | 47 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1827/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   
   







[jira] [Commented] (HBASE-24280) Hadoop2 and Hadoop3 profiles being activated simultaneously causing test failures

2020-06-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121389#comment-17121389
 ] 

Hudson commented on HBASE-24280:


Results for branch branch-2.2
[build #883 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/883/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/883//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/883//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/883//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Hadoop2 and Hadoop3 profiles being activated simultaneously causing test 
> failures
> -
>
> Key: HBASE-24280
> URL: https://issues.apache.org/jira/browse/HBASE-24280
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 2.3.0
>
> Attachments: HBASE-24280.master.001.patch, 
> TEST-org.apache.hadoop.hbase.rest.TestSecureRESTServer.xml
>
>
> [~ndimiduk] pointed out that, after this change went in, TestSecureRESTServer 
> started failing with Hadoop3 on branch-2.3
> https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/56/
> Of course, I ran this with 1.8.0_241 and Maven 3.6.33 and it passed :) {{mvn 
> clean package -Dtest=TestSecureRESTServer -Dhadoop.profile=3.0 
> -DfailIfNoTests=false}}
> FYI [~stoty] in case you can repro a failure and want to dig in. Feel free to 
> re-assign.
> It looks like we didn't have a nightly run of branch-2.2 due to docker 
> container build issues. Will be interesting to see if it fails there. It did 
> not fail the master nightly.





[GitHub] [hbase] ndimiduk commented on pull request #1827: HBASE-24488 Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread GitBox


ndimiduk commented on pull request #1827:
URL: https://github.com/apache/hbase/pull/1827#issuecomment-637169917


   https://hbase.apache.org/book.html#zookeeper.requirements says "ZooKeeper 
3.4.x is required." without saying which HBase version or feature demands that 
version. Seems we should say more.







[jira] [Commented] (HBASE-24132) Upgrade to Apache ZooKeeper 3.5.7

2020-06-01 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121387#comment-17121387
 ] 

Nick Dimiduk commented on HBASE-24132:
--

I've updated the release note here, and I've added a comment about this to the 
upgrade section of our book via HBASE-24488. The PR is available; please have a 
look.

> Upgrade to Apache ZooKeeper 3.5.7
> -
>
> Key: HBASE-24132
> URL: https://issues.apache.org/jira/browse/HBASE-24132
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha-1, 2.2.3
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> Apache ZooKeeper 3.5.7 has been released, and HDFS and other projects have 
> updated their ZooKeeper dependency. Perhaps HBase should update as well. 
> Some of the interfaces have changed in this ZooKeeper version.





[GitHub] [hbase] ndimiduk opened a new pull request #1827: HBASE-24488 Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread GitBox


ndimiduk opened a new pull request #1827:
URL: https://github.com/apache/hbase/pull/1827


   Add a note to the 2.3 upgrade section regarding the ZooKeeper version
   bump and include a link off to ZooKeeper's FAQ.







[jira] [Work started] (HBASE-24488) Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-24488 started by Nick Dimiduk.

> Update docs re: ZooKeeper compatibility of 2.3.x release
> 
>
> Key: HBASE-24488
> URL: https://issues.apache.org/jira/browse/HBASE-24488
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
>
> HBASE-24132 bumps the ZooKeeper version, which itself has some known upgrade 
> steps. We have a release note, but we can call this out in our [2.3 upgrading 
> section|https://hbase.apache.org/book.html#upgrade2.3] of the book.





[jira] [Assigned] (HBASE-24488) Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk reassigned HBASE-24488:


Assignee: Nick Dimiduk

> Update docs re: ZooKeeper compatibility of 2.3.x release
> 
>
> Key: HBASE-24488
> URL: https://issues.apache.org/jira/browse/HBASE-24488
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
>
> HBASE-24132 bumps the ZooKeeper version, which itself has some known upgrade 
> steps. We have a release note, but we can call this out in our [2.3 upgrading 
> section|https://hbase.apache.org/book.html#upgrade2.3] of the book.





[GitHub] [hbase] saintstack commented on a change in pull request #1802: HBASE-24444 Should shutdown mini cluster after class in TestMetaAssignmentWithStopMaster

2020-06-01 Thread GitBox


saintstack commented on a change in pull request #1802:
URL: https://github.com/apache/hbase/pull/1802#discussion_r432911333



##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMetaAssignmentWithStopMaster.java
##
@@ -57,6 +58,15 @@ public static void setUp() throws Exception {
 UTIL.startMiniCluster(option);
   }
 
+  @AfterClass
+  public static void cleanup() {
+try {
+  UTIL.shutdownMiniCluster();
+} catch (Exception e) {

Review comment:
   What @Apache9 said









[GitHub] [hbase] Apache-HBase commented on pull request #1761: HBASE-21406 "status 'replication'" should not show SINK if the cluste…

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1761:
URL: https://github.com/apache/hbase/pull/1761#issuecomment-637166866


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 21s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 40s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 16s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 56s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 32s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 47s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 47s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  9s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 47s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 36s |  hbase-hadoop-compat in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 13s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 201m  9s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   8m 18s |  hbase-shell in the patch passed.  |
   |  |   | 246m 41s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1761 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 28a7f6cd352c 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/testReport/
 |
   | Max. process+thread count | 3485 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-hadoop-compat hbase-client 
hbase-server hbase-shell U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   
   







[GitHub] [hbase] saintstack commented on a change in pull request #1825: HBASE-24189 WALSplit recreates region dirs for deleted table with rec…

2020-06-01 Thread GitBox


saintstack commented on a change in pull request #1825:
URL: https://github.com/apache/hbase/pull/1825#discussion_r433525389



##
File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
##
@@ -423,6 +423,19 @@ public static Path getTableDir(Path rootdir, final 
TableName tableName) {
 tableName.getQualifierAsString());
   }
 
+  /**
+   * Returns the {@link org.apache.hadoop.fs.Path} object representing the 
region directory under
+   * path rootdir
+   *
+   * @param rootdir qualified path of HBase root directory
+   * @param tableName name of table
+   * @param regionName The encoded region name
+   * @return {@link org.apache.hadoop.fs.Path} for region
+   */
+  public static Path getRegionDir(Path rootdir, TableName tableName, String 
regionName) {

Review comment:
   Should hbase-common know about region dirs? It already knows about 
table dirs, so I suppose this is ok.
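As a standalone illustration of what the new helper computes (using plain strings rather than Hadoop's Path, and a simplified layout — the real CommonFSUtils helper composes Hadoop Path objects under the table directory):

```java
public class RegionDirSketch {
  // Simplified layout: <rootdir>/<namespace>/<qualifier>/<encodedRegionName>.
  // The region directory is nested under the table directory under the root.
  static String getRegionDir(String rootdir, String namespace, String qualifier,
      String encodedRegionName) {
    return String.join("/", rootdir, namespace, qualifier, encodedRegionName);
  }

  public static void main(String[] args) {
    String dir = getRegionDir("/hbase", "default", "t1", "abc123");
    if (!dir.equals("/hbase/default/t1/abc123")) throw new AssertionError(dir);
  }
}
```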

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
##
@@ -285,23 +286,35 @@ boolean splitLogFile(FileStatus logfile, 
CancelableProgressable reporter) throws
 String encodedRegionNameAsStr = Bytes.toString(region);
 lastFlushedSequenceId = 
lastFlushedSequenceIds.get(encodedRegionNameAsStr);
 if (lastFlushedSequenceId == null) {
-  if (sequenceIdChecker != null) {
-RegionStoreSequenceIds ids = 
sequenceIdChecker.getLastSequenceId(region);
-Map maxSeqIdInStores = new 
TreeMap<>(Bytes.BYTES_COMPARATOR);
-for (StoreSequenceId storeSeqId : ids.getStoreSequenceIdList()) {
-  maxSeqIdInStores.put(storeSeqId.getFamilyName().toByteArray(),
-storeSeqId.getSequenceId());
+  if (!(isRegionDirPresentUnderRoot(entry.getKey().getTableName(), 
encodedRegionNameAsStr))) {
+// The region directory itself is not present in the WAL FS. This 
indicates that

Review comment:
   Is it WAL FS dir or the data/hfile FS?









[jira] [Created] (HBASE-24488) Update docs re: ZooKeeper compatibility of 2.3.x release

2020-06-01 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-24488:


 Summary: Update docs re: ZooKeeper compatibility of 2.3.x release
 Key: HBASE-24488
 URL: https://issues.apache.org/jira/browse/HBASE-24488
 Project: HBase
  Issue Type: Sub-task
Reporter: Nick Dimiduk


HBASE-24132 bumps the ZooKeeper version, which itself has some known upgrade 
steps. We have a release note, but we can call this out in our [2.3 upgrading 
section|https://hbase.apache.org/book.html#upgrade2.3] of the book.





[jira] [Updated] (HBASE-23785) Update docs re: Hadoop compatibility of 2.3.x release

2020-06-01 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-23785:
-
Parent: HBASE-24012
Issue Type: Sub-task  (was: Task)

> Update docs re: Hadoop compatibility of 2.3.x release
> -
>
> Key: HBASE-23785
> URL: https://issues.apache.org/jira/browse/HBASE-23785
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> Update the [book|http://hbase.apache.org/book.html#hadoop] regarding our 
> Hadoop compatibility for the 2.3 release lines.





[GitHub] [hbase] Apache-HBase commented on pull request #1826: HBASE-24438 Don't update TaskMonitor when deserializing ServerCrashProcedure

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1826:
URL: https://github.com/apache/hbase/pull/1826#issuecomment-637160620


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 12s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 19s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  38m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1826 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux df9af8824035 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1826/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   
   







[GitHub] [hbase] ndimiduk commented on a change in pull request #1786: HBASE-24418 Consolidate Normalizer implementations

2020-06-01 Thread GitBox


ndimiduk commented on a change in pull request #1786:
URL: https://github.com/apache/hbase/pull/1786#discussion_r433523464



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
##
@@ -18,126 +17,436 @@
  */
 package org.apache.hadoop.hbase.master.normalizer;
 
+import java.io.IOException;
+import java.time.Instant;
+import java.time.Period;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Comparator;
 import java.util.List;
-
-import org.apache.hadoop.hbase.HBaseIOException;
+import java.util.Objects;
+import java.util.function.BooleanSupplier;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Size;
 import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.MasterSwitchType;
 import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.master.assignment.RegionStates;
 import org.apache.hadoop.hbase.master.normalizer.NormalizationPlan.PlanType;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import 
org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUtils;
 
 /**
  * Simple implementation of region normalizer. Logic in use:
  * 
- * Get all regions of a given table
- * Get avg size S of each region (by total size of store files reported in 
RegionMetrics)
- * Seek every single region one by one. If a region R0 is bigger than S * 
2, it is kindly
- * requested to split. Thereon evaluate the next region R1
- * Otherwise, if R0 + R1 is smaller than S, R0 and R1 are kindly requested 
to merge. Thereon
- * evaluate the next region R2
- * Otherwise, R1 is evaluated
+ *   Get all regions of a given table
+ *   Get avg size S of the regions in the table (by total size of store 
files reported in
+ * RegionMetrics)
+ *   For each region R0, if R0 is bigger than S * 2, it is kindly 
requested to split.
+ *   Otherwise, for the next region in the chain R1, if R0 + R1 is smaller 
then S, R0 and R1
+ * are kindly requested to merge.
+ * 
+ * 
+ * The following parameters are configurable:
+ * 
+ *   Whether to split a region as part of normalization. Configuration:
+ * {@value SPLIT_ENABLED_KEY}, default: {@value 
DEFAULT_SPLIT_ENABLED}.
+ *   Whether to merge a region as part of normalization. Configuration:
+ * {@value MERGE_ENABLED_KEY}, default: {@value 
DEFAULT_MERGE_ENABLED}.
+ *   The minimum number of regions in a table to consider it for 
normalization. Configuration:
+ * {@value MIN_REGION_COUNT_KEY}, default: {@value 
DEFAULT_MIN_REGION_COUNT}.
+ *   The minimum age for a region to be considered for a merge, in days. 
Configuration:
+ * {@value MERGE_MIN_REGION_AGE_DAYS_KEY}, default:
+ * {@value DEFAULT_MERGE_MIN_REGION_AGE_DAYS}.
+ *   The minimum size for a region to be considered for a merge, in whole 
MBs. Configuration:
+ * {@value MERGE_MIN_REGION_SIZE_MB_KEY}, default:
+ * {@value DEFAULT_MERGE_MIN_REGION_SIZE_MB}.
  * 
  * 
- * Region sizes are coarse and approximate on the order of megabytes. 
Additionally, "empty" regions
- * (less than 1MB, with the previous note) are not merged away. This is by 
design to prevent
- * normalization from undoing the pre-splitting of a table.
+ * To see detailed logging of the application of these configuration values, 
set the log level for
+ * this class to `TRACE`.
  */
-@InterfaceAudience.Private
-public class SimpleRegionNormalizer extends AbstractRegionNormalizer {
-
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG)
+public class SimpleRegionNormalizer implements RegionNormalizer {
   private static final Logger LOG = 
LoggerFactory.getLogger(SimpleRegionNormalizer.class);
-  private static long[] skippedCount = new 
long[NormalizationPlan.PlanType.values().length];
+
+  static final String SPLIT_ENABLED_KEY = "hbase.normalizer.split.enabled";
+  static final boolean DEFAULT_SPLIT_ENABLED = true;
+  static final String MERGE_ENABLED_KEY = "hbase.normalizer.merge.enabled";
+  static final boolean DEFAULT_MERGE_ENABLED = true;
+  // TODO: after HBASE-24416, `min.region.count` only applies to merge plans; 
should
+  //  deprecate/rename the configuration key.
+  static final String MIN_REGION_COUNT_KEY = 
"hbase.normalizer.min.region.count";
+  static final int DEFAULT_MIN_REGION_COUNT = 3;
+  static final String MERGE_MIN_REGION_AGE_DAYS_KEY = 
"hbase.normalizer.merge.min_region_age.days";
+  static final int DEFAULT_MERGE_MIN_REGION_AGE_DAYS = 3;
+  static final String 
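The split/merge heuristic described in the javadoc above (split a region bigger than twice the average, merge an adjacent pair smaller than the average) can be sketched roughly as follows — a simplified standalone model, not the actual SimpleRegionNormalizer, with sizes in MB:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NormalizerSketch {
  // Returns human-readable plans: split regions larger than 2x the average,
  // otherwise merge adjacent pairs whose combined size is below the average.
  static List<String> computePlans(long[] sizesMb) {
    long total = 0;
    for (long s : sizesMb) total += s;
    double avg = (double) total / sizesMb.length;
    List<String> plans = new ArrayList<>();
    for (int i = 0; i < sizesMb.length; i++) {
      if (sizesMb[i] > 2 * avg) {
        plans.add("SPLIT R" + i);
      } else if (i + 1 < sizesMb.length && sizesMb[i] + sizesMb[i + 1] < avg) {
        plans.add("MERGE R" + i + "+R" + (i + 1));
        i++; // both regions are consumed; continue after the merged pair
      }
    }
    return plans;
  }

  public static void main(String[] args) {
    // avg = (100+10+5+300+25)/5 = 88; R1+R2 = 15 < 88 -> merge; R3 = 300 > 176 -> split
    List<String> plans = computePlans(new long[] {100, 10, 5, 300, 25});
    if (!plans.equals(Arrays.asList("MERGE R1+R2", "SPLIT R3"))) throw new AssertionError(plans);
  }
}
```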

[GitHub] [hbase] clarax commented on pull request #1826: HBASE-24438 Don't update TaskMonitor when deserializing ServerCrashProcedure

2020-06-01 Thread GitBox


clarax commented on pull request #1826:
URL: https://github.com/apache/hbase/pull/1826#issuecomment-637158651


   Patch LGTM, but we need to check with @jingyuntian, the author of 
https://issues.apache.org/jira/browse/HBASE-21647, to see if this will break 
other cases. 







[GitHub] [hbase] joshelser commented on a change in pull request #1648: HBASE-8458 Support for batch version of checkAndMutate()

2020-06-01 Thread GitBox


joshelser commented on a change in pull request #1648:
URL: https://github.com/apache/hbase/pull/1648#discussion_r433439130



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/CheckAndMutate.java
##
@@ -0,0 +1,360 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.NavigableMap;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellBuilder;
+import org.apache.hadoop.hbase.CellBuilderType;
+import org.apache.hadoop.hbase.CompareOperator;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
+/**
+ * Used to perform CheckAndMutate operations. Currently {@link Put}, {@link 
Delete}
+ * and {@link RowMutations} are supported.
+ * 
+ * Use the builder class to instantiate a CheckAndMutate object.
+ * This builder class provides a fluent-style API; the code looks like:
+ * 
+ * 
+ * // A CheckAndMutate operation that performs the specified action if the column
+ * // (specified by the family and the qualifier) of the row equals the specified value
+ * CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+ *   .ifEquals(family, qualifier, value)
+ *   .build(put);
+ *
+ * // A CheckAndMutate operation that performs the specified action if the column
+ * // (specified by the family and the qualifier) of the row doesn't exist
+ * CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+ *   .ifNotExists(family, qualifier)
+ *   .build(put);
+ *
+ * // A CheckAndMutate operation that performs the specified action if the row
+ * // matches the filter
+ * CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+ *   .ifMatches(filter)
+ *   .build(delete);
+ * 
+ * 
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public final class CheckAndMutate extends Mutation {
+
+  /**
+   * A builder class for building a CheckAndMutate object.
+   */
+  public static final class Builder {

Review comment:
   Best practice to have interface audience/stability here too, I think.









[jira] [Commented] (HBASE-8458) Support for batch version of checkAndMutate()

2020-06-01 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121379#comment-17121379
 ] 

Josh Elser commented on HBASE-8458:
---

One final, gentle nudge for those not following in 
[https://github.com/apache/hbase/pull/1648] – I think this is ready. I'll 
commit this tomorrow if I don't hear back from anyone.

> Support for batch version of checkAndMutate()
> -
>
> Key: HBASE-8458
> URL: https://issues.apache.org/jira/browse/HBASE-8458
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver
>Reporter: Hari Mankude
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> The use case is that the user has multiple threads loading hundreds of keys 
> into an HBase table. Occasionally there are collisions in the keys being 
> uploaded by different threads, so for correctness it is required to do a 
> checkAndMutate() instead of a put(). However, doing a checkAndMutate() RPC 
> for every key update is suboptimal. It would be good to have a batch version 
> of checkAndMutate(), similar to batch put(). The client can partition the keys 
> on region boundaries.
> The jira is NOT looking for any type of cross-row locking or multi-row 
> atomicity with checkAndMutate().
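
The client-side partitioning mentioned in the description can be sketched in plain Java. This is an illustrative model only (String keys instead of byte[], no HBase API): group each row key under the region whose start key is the greatest one not exceeding it.

```java
import java.util.*;

/** Illustrative sketch of partitioning row keys on region boundaries.
 *  Region boundaries are modeled as sorted start keys; the first region
 *  must start at the empty string so every row key finds a home. */
class RegionPartitioner {
  static Map<String, List<String>> partition(SortedSet<String> regionStartKeys,
                                             Collection<String> rowKeys) {
    TreeMap<String, List<String>> byRegion = new TreeMap<>();
    for (String start : regionStartKeys) {
      byRegion.put(start, new ArrayList<>());
    }
    for (String row : rowKeys) {
      // floorKey: the greatest region start key <= this row key
      byRegion.get(byRegion.floorKey(row)).add(row);
    }
    return byRegion;
  }
}
```

Each per-region group could then be submitted as one batched request instead of one RPC per key.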



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24455) Correct the doc of "On the number of column families"

2020-06-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121370#comment-17121370
 ] 

Hudson commented on HBASE-24455:


Results for branch branch-2.3
[build #117 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/117/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/117/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/117/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Correct the doc of "On the number of column families"
> -
>
> Key: HBASE-24455
> URL: https://issues.apache.org/jira/browse/HBASE-24455
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0, 1.7.0, 2.2.6
>
>
> Currently all compaction is done on a per-store basis, so correct the content accordingly.





[GitHub] [hbase] Apache-HBase commented on pull request #1761: HBASE-21406 "status 'replication'" should not show SINK if the cluste…

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1761:
URL: https://github.com/apache/hbase/pull/1761#issuecomment-637139463


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 21s |  master passed  |
   | +1 :green_heart: |  compile  |   4m 13s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 11s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 31s |  hbase-client in master failed.  |
   | -0 :warning: |  javadoc  |   0m 22s |  hbase-hadoop-compat in master 
failed.  |
   | -0 :warning: |  javadoc  |   0m 49s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m  4s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m  4s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 22s |  hbase-hadoop-compat in the patch 
failed.  |
   | -0 :warning: |  javadoc  |   0m 31s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 47s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 10s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 40s |  hbase-hadoop-compat in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 21s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 136m 26s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   8m  4s |  hbase-shell in the patch passed.  |
   |  |   | 192m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1761 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c3cc0b913af1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-hadoop-compat.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-hadoop-compat.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/testReport/
 |
   | Max. process+thread count | 4058 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-hadoop-compat hbase-client 
hbase-server hbase-shell U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] timoha opened a new pull request #1826: HBASE-24438 Don't update TaskMonitor when deserializing ServerCrashProcedure

2020-06-01 Thread GitBox


timoha opened a new pull request #1826:
URL: https://github.com/apache/hbase/pull/1826


   The ServerCrashProcedure could have been completed by a previously active
   HBase Master, which results in a stale ServerCrashProcedure task in TaskMonitor.
   The TaskMonitor should only reflect the procedure if the procedure has actually
   been started/resumed, which happens when ServerCrashProcedure.executeFromState is called.







[jira] [Commented] (HBASE-13541) Deprecate Scan caching in 2.0.0

2020-06-01 Thread Andrew Olson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121317#comment-17121317
 ] 

Andrew Olson commented on HBASE-13541:
--

I realize it's been over 5 years since this issue was opened, but I'm curious 
what the current status of it is?

> Deprecate Scan caching in 2.0.0
> ---
>
> Key: HBASE-13541
> URL: https://issues.apache.org/jira/browse/HBASE-13541
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jonathan Lawlor
>Priority: Major
> Attachments: HBASE-13541-WIP.patch
>
>
> The public Scan API exposes caching to the application. Caching deals with 
> the number of rows that are transferred per scan RPC request issued to the 
> server. It does not seem like a detail that users of a scan should control, 
> and it introduces some unneeded complication. It seems more like a detail that 
> should be controlled from the server based on the current scan request RPC 
> load. This issue proposes that we deprecate the caching API in 2.0.0 so that 
> it can be removed later. Of course, if there are any concerns please raise 
> them here.
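
To make the caching knob concrete, here is a toy model (plain Java, all names illustrative) of what scan caching controls: the number of rows the client pulls per round trip to the server.

```java
import java.util.*;

/** Toy model of scan "caching": the client fetches up to `caching` rows per
 *  round trip instead of one row per call. Names are illustrative only;
 *  this is not the HBase client API. */
class CachingScanner {
  private final Iterator<String> serverRows;   // stands in for the server side
  private final int caching;
  private final Deque<String> buffer = new ArrayDeque<>();
  int rpcCount = 0;                            // round trips actually made

  CachingScanner(List<String> rows, int caching) {
    this.serverRows = rows.iterator();
    this.caching = caching;
  }

  /** Returns the next row, or null when exhausted, refilling the
   *  client-side buffer with one "RPC" whenever it runs empty. */
  String next() {
    if (buffer.isEmpty() && serverRows.hasNext()) {
      rpcCount++;
      for (int i = 0; i < caching && serverRows.hasNext(); i++) {
        buffer.add(serverRows.next());
      }
    }
    return buffer.poll();
  }
}
```

With caching = 2, reading five rows costs three round trips in this model; the proposal here is about who chooses that number, not about removing the batching itself.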





[GitHub] [hbase] Apache-HBase commented on pull request #1825: HBASE-24189 WALSplit recreates region dirs for deleted table with rec…

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1825:
URL: https://github.com/apache/hbase/pull/1825#issuecomment-637085971


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 15s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 50s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 19s |  hbase-common in master failed.  |
   | -0 :warning: |  javadoc  |   0m 42s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 49s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 18s |  hbase-common in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 39s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 22s |  hbase-common in the patch passed.  
|
   | -1 :x: |  unit  | 135m 58s |  hbase-server in the patch failed.  |
   |  |   | 165m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1825 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux b0ffb9558fd9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/testReport/
 |
   | Max. process+thread count | 4264 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[GitHub] [hbase] Apache-HBase commented on pull request #1825: HBASE-24189 WALSplit recreates region dirs for deleted table with rec…

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1825:
URL: https://github.com/apache/hbase/pull/1825#issuecomment-637084325


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 39s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  hbase-common in the patch passed.  
|
   | -1 :x: |  unit  | 134m 15s |  hbase-server in the patch failed.  |
   |  |   | 161m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1825 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 34d2211d4b30 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 1.8.0_232 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/testReport/
 |
   | Max. process+thread count | 4708 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[jira] [Comment Edited] (HBASE-24440) Prevent temporal misordering on timescales smaller than one clock tick

2020-06-01 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121293#comment-17121293
 ] 

Andrew Kyle Purtell edited comment on HBASE-24440 at 6/1/20, 8:12 PM:
--

I am aware. If we do this I don’t think we will need it at all, configurable or 
not. But that is out of scope for this issue.

Edit: Some might respond, validly, that this is splitting hairs, because one 
follows the other: If we will never have two exact keys including timestamps 
ever committed to a row, then we don't need a sorting rule by operator 
precedence for a case that, after this proposed change, can never happen. I am 
proposing we do it in steps, with small reversible changes, because this is 
such a critical area for correctness, but if the consensus is to do it 
together, I would not oppose that for what it's worth.


was (Author: apurtell):
I am aware. If we do this I don’t think we will need it at all, configurable or 
not. But that is out of scope for this issue.

Edit: Some might respond, validly, that this is splitting hairs, because one 
follows the other: If we will never have two exact keys including timestamps 
ever committed to a row, then we don't need a sorting rule by operator 
precedence. I am proposing we do it in steps, with small reversible changes, 
because this is such a critical area for correctness, but if the consensus is 
to do it together, I would not oppose that for what it's worth.

> Prevent temporal misordering on timescales smaller than one clock tick
> --
>
> Key: HBASE-24440
> URL: https://issues.apache.org/jira/browse/HBASE-24440
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> When mutations are sent to the servers without a timestamp explicitly 
> assigned by the client the server will substitute the current wall clock 
> time. There are edge cases where it is at least theoretically possible for 
> more than one mutation to be committed to a given row within the same clock 
> tick. When this happens we have to track and preserve the ordering of these 
> mutations in some other way besides the timestamp component of the key. Let 
> me bypass most discussion here by noting that whether we do this or not, we 
> do not pass such ordering information in the cross cluster replication 
> protocol. We also have interesting edge cases regarding key type precedence 
> when mutations arrive "simultaneously": we sort deletes ahead of puts. This, 
> especially in the presence of replication, can lead to visible anomalies for 
> clients able to interact with both source and sink. 
> There is a simple solution that removes the possibility that these edge cases 
> can occur: 
> We can detect, when we are about to commit a mutation to a row, if we have 
> already committed a mutation to this same row in the current clock tick. 
> Occurrences of this condition will be rare. We are already tracking current 
> time. We have to know this in order to assign the timestamp. Where this 
> becomes interesting is how we might track the last commit time per row. 
> Making the detection of this case efficient for the normal code path is the 
> bulk of the challenge. One option is to keep track of the last locked time 
> for row locks. (Todo: How would we track and garbage collect this efficiently 
> and correctly. Not the ideal option.) We might also do this tracking somehow 
> via the memstore. (At least in this case the lifetime and distribution of in 
> memory row state, including the proposed timestamps, would align.) Assuming 
> we can efficiently know if we are about to commit twice to the same row 
> within a single clock tick, we would simply sleep/yield the current thread 
> until the clock ticks over, and then proceed. 
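
The sleep/yield idea in the last sentence can be sketched as follows. This is a hypothetical stand-alone model: TickGuard and commitTimestamp are invented names, and a plain HashMap stands in for whatever per-row state tracking the row locks or memstore would actually provide.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch: never hand out the same timestamp twice for one
 *  row. If the row was already committed to in the current clock tick,
 *  sleep until the clock ticks over, as proposed in the description. */
class TickGuard {
  private final Map<String, Long> lastCommitMs = new HashMap<>();

  /** Returns the wall-clock timestamp assigned to this commit, strictly
   *  greater than any timestamp previously assigned for the same row. */
  long commitTimestamp(String row) {
    long now = System.currentTimeMillis();
    Long last = lastCommitMs.get(row);
    while (last != null && now <= last) {
      try {
        Thread.sleep(1);               // yield until the clock advances
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new RuntimeException(e);
      }
      now = System.currentTimeMillis();
    }
    lastCommitMs.put(row, now);
    return now;
  }
}
```

The hard part the description calls out, tracking and garbage-collecting this per-row state efficiently, is deliberately ignored here; only the rare-case sleep on a same-tick collision is modeled.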







[jira] [Commented] (HBASE-23959) Fix javadoc for JDK11

2020-06-01 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121308#comment-17121308
 ] 

Nick Dimiduk commented on HBASE-23959:
--

From a random jenkins job,

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.2.0:javadoc (default-cli) on 
project hbase-logging: An error has occurred in Javadoc report generation: 
[ERROR] Exit code: 1 - javadoc: error - The code being documented uses modules 
but the packages defined in https://docs.oracle.com/javase/8/docs/api/ are in 
the unnamed module.
[ERROR] 
[ERROR] Command line was: /usr/lib/jvm/jdk-11.0.6+10/bin/javadoc @options 
@packages
[ERROR] 
[ERROR] Refer to the generated Javadoc files in 
'/home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1786/yetus-jdk11-hadoop3-check/src/hbase-logging/target/site/apidocs'
 dir.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hbase-logging
{noformat}

> Fix javadoc for JDK11
> -
>
> Key: HBASE-23959
> URL: https://issues.apache.org/jira/browse/HBASE-23959
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1, 2.3.0
>Reporter: Nick Dimiduk
>Priority: Major
>
> Javadoc build fails with JDK11. See if this can be fixed to pass on both 8 
> and 11.





[GitHub] [hbase] saintstack commented on a change in pull request #1811: HBASE-24474 Rename LocalRegion to MasterRegion

2020-06-01 Thread GitBox


saintstack commented on a change in pull request #1811:
URL: https://github.com/apache/hbase/pull/1811#discussion_r433453029



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/region/MasterRegion.java
##
@@ -284,18 +284,21 @@ public static LocalRegion create(LocalRegionParams 
params) throws IOException {
 Configuration conf = new Configuration(baseConf);
 CommonFSUtils.setRootDir(conf, rootDir);
 CommonFSUtils.setWALRootDir(conf, walRootDir);
-LocalRegionFlusherAndCompactor.setupConf(conf, params.flushSize(), 
params.flushPerChanges(),
+MasterRegionFlusherAndCompactor.setupConf(conf, params.flushSize(), 
params.flushPerChanges(),
   params.flushIntervalMs());
 conf.setInt(AbstractFSWAL.MAX_LOGS, params.maxWals());
 if (params.useHsync() != null) {
   conf.setBoolean(HRegion.WAL_HSYNC_CONF_KEY, params.useHsync());
 }
+if (params.useMetaCellComparator() != null) {

Review comment:
   ok

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/region/MasterRegionFactory.java
##
@@ -89,45 +82,8 @@
   private static final TableDescriptor TABLE_DESC = 
TableDescriptorBuilder.newBuilder(TABLE_NAME)
 .setColumnFamily(ColumnFamilyDescriptorBuilder.of(PROC_FAMILY)).build();
 
-  private final LocalRegion region;
-
-  private LocalStore(LocalRegion region) {
-this.region = region;
-  }
-
-  public void update(UpdateLocalRegion action) throws IOException {
-region.update(action);
-  }
-
-  public Result get(Get get) throws IOException {
-return region.get(get);
-  }
-
-  public RegionScanner getScanner(Scan scan) throws IOException {
-return region.getScanner(scan);
-  }
-
-  public void close(boolean abort) {
-region.close(abort);
-  }
-
-  @VisibleForTesting
-  public FlushResult flush(boolean force) throws IOException {
-return region.flush(force);
-  }
-
-  @VisibleForTesting
-  public void requestRollAll() {
-region.requestRollAll();
-  }
-
-  @VisibleForTesting
-  public void waitUntilWalRollFinished() throws InterruptedException {
-region.waitUntilWalRollFinished();
-  }
-
-  public static LocalStore create(Server server) throws IOException {
-LocalRegionParams params = new LocalRegionParams().server(server)

Review comment:
   ok









[jira] [Commented] (HBASE-24440) Prevent temporal misordering on timescales smaller than one clock tick

2020-06-01 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121293#comment-17121293
 ] 

Andrew Kyle Purtell commented on HBASE-24440:
-

I am aware. If we do this I don’t think we will need it at all, configurable or 
not. But that is out of scope for this issue. 

> Prevent temporal misordering on timescales smaller than one clock tick
> --
>
> Key: HBASE-24440
> URL: https://issues.apache.org/jira/browse/HBASE-24440
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> When mutations are sent to the servers without a timestamp explicitly 
> assigned by the client the server will substitute the current wall clock 
> time. There are edge cases where it is at least theoretically possible for 
> more than one mutation to be committed to a given row within the same clock 
> tick. When this happens we have to track and preserve the ordering of these 
> mutations in some other way besides the timestamp component of the key. Let 
> me bypass most discussion here by noting that whether we do this or not, we 
> do not pass such ordering information in the cross cluster replication 
> protocol. We also have interesting edge cases regarding key type precedence 
> when mutations arrive "simultaneously": we sort deletes ahead of puts. This, 
> especially in the presence of replication, can lead to visible anomalies for 
> clients able to interact with both source and sink. 
> There is a simple solution that removes the possibility that these edge cases 
> can occur: 
> We can detect, when we are about to commit a mutation to a row, if we have 
> already committed a mutation to this same row in the current clock tick. 
> Occurrences of this condition will be rare. We are already tracking current 
> time. We have to know this in order to assign the timestamp. Where this 
> becomes interesting is how we might track the last commit time per row. 
> Making the detection of this case efficient for the normal code path is the 
> bulk of the challenge. One option is to keep track of the last locked time 
> for row locks. (Todo: How would we track and garbage collect this efficiently 
> and correctly. Not the ideal option.) We might also do this tracking somehow 
> via the memstore. (At least in this case the lifetime and distribution of in 
> memory row state, including the proposed timestamps, would align.) Assuming 
> we can efficiently know if we are about to commit twice to the same row 
> within a single clock tick, we would simply sleep/yield the current thread 
> until the clock ticks over, and then proceed. 
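The sleep-until-next-tick idea in the description can be illustrated with a minimal sketch. This is not HBase code; the class and method names are hypothetical, and it only shows the rare-path behavior the issue proposes (block until the wall clock advances past the last commit time for the row):

```java
public class ClockTickGuard {
  /**
   * Sketch of the proposed rare path: if a mutation is about to commit in the
   * same millisecond tick as the previous commit to the row, sleep/yield until
   * the clock ticks over, then use the fresh timestamp.
   */
  static long nextTimestampAfter(long lastCommitTimeMs) throws InterruptedException {
    long now = System.currentTimeMillis();
    while (now <= lastCommitTimeMs) {
      Thread.sleep(1); // yield the current thread until the clock advances
      now = System.currentTimeMillis();
    }
    return now;
  }

  public static void main(String[] args) throws InterruptedException {
    long last = System.currentTimeMillis();
    long next = nextTimestampAfter(last);
    assert next > last : "timestamp must land in a later clock tick";
  }
}
```

As the description notes, the hard part is not this wait loop but efficiently knowing `lastCommitTimeMs` per row without burdening the normal code path.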



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1761: HBASE-21406 "status 'replication'" should not show SINK if the cluste…

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1761:
URL: https://github.com/apache/hbase/pull/1761#issuecomment-637061626


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 13s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   6m 50s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 11s |  the patch passed  |
   | -0 :warning: |  rubocop  |   0m  8s |  The patch generated 22 new + 325 
unchanged - 4 fixed = 347 total (was 329)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m  2s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  hbaseprotoc  |   2m 38s |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   7m 51s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 49s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  51m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1761 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle cc hbaseprotoc prototool rubocop |
   | uname | Linux 250082e1e68a 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | rubocop | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/artifact/yetus-general-check/output/diff-patch-rubocop.txt
 |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-hadoop-compat hbase-client 
hbase-server hbase-shell U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1761/6/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 rubocop=0.80.0 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[jira] [Commented] (HBASE-24347) Hadoop2 profiles are both active when pre-commit PR builds run

2020-06-01 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121266#comment-17121266
 ] 

Josh Elser commented on HBASE-24347:


Thanks Duo and Guanghao :)

> Hadoop2 profiles are both active when pre-commit PR builds run
> --
>
> Key: HBASE-24347
> URL: https://issues.apache.org/jira/browse/HBASE-24347
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Michael Stack
>Assignee: Josh Elser
>Priority: Major
> Fix For: 2.3.0, 2.4.0, 2.2.6
>
> Attachments: HBASE-24280.001.branch-2.3.patch, 
> HBASE-24280.001.branch-2.patch
>
>
> We need the magic done in the parent out in our precommit builds too. See how 
> https://github.com/apache/hbase/pull/1664 fails in hbase-rest w/ complaint 
> about jersey; this is a symptom of double hadoop2+hadoop3 profile activation.





[jira] [Resolved] (HBASE-7345) subprocedure zk info should be dumpable from the shell

2020-06-01 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-7345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-7345.
--
Resolution: Incomplete

Resolving old issue. I think it's about the stages of a snapshot, which we probably 
still need. Let's open a new issue with new context.

> subprocedure zk info should be dumpable from the shell
> --
>
> Key: HBASE-7345
> URL: https://issues.apache.org/jira/browse/HBASE-7345
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: hbase-6055
>Reporter: Jonathan Hsieh
>Priority: Major
>
> For debugging by admins, we should include the ability to dump subprocedure 
> information either as part of the hbase shell's zk_dump or via some new 
> command.  It should include all the status of the different procedure 
> portions and include timestamp information.





[GitHub] [hbase] anoopsjohn commented on a change in pull request #1818: HBASE-24456 : Create ImmutableScan and use it for CustomizedScanInfoBuilder

2020-06-01 Thread GitBox


anoopsjohn commented on a change in pull request #1818:
URL: https://github.com/apache/hbase/pull/1818#discussion_r433431811



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ImmutableScan.java
##
@@ -0,0 +1,477 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Map;
+import java.util.NavigableSet;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.hadoop.hbase.security.visibility.Authorizations;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Immutable version of Scan
+ */
+@InterfaceAudience.Public
+public final class ImmutableScan extends Scan {
+
+  /**
+   * Create Immutable instance of Scan from given Scan object
+   *
+   * @param scan Copy all values from Scan
+   * @throws IOException From parent constructor
+   */
+  public ImmutableScan(Scan scan) throws IOException {
+    super(scan);
+    super.setIsolationLevel(scan.getIsolationLevel());
+    Map<byte[], NavigableSet<byte[]>> familyMap = scan.getFamilyMap();
+    for (Map.Entry<byte[], NavigableSet<byte[]>> entry : familyMap.entrySet()) {
+      byte[] family = entry.getKey();
+      NavigableSet<byte[]> cols = entry.getValue();
+      if (cols != null && cols.size() > 0) {
+        for (byte[] col : cols) {
+          super.addColumn(family, col);
+        }
+      } else {
+        super.addFamily(family);
+      }
+    }
+    for (Map.Entry<String, byte[]> attr : scan.getAttributesMap().entrySet()) {
+      super.setAttribute(attr.getKey(), attr.getValue());
+    }
+    for (Map.Entry<byte[], TimeRange> entry : scan.getColumnFamilyTimeRange().entrySet()) {
+      TimeRange tr = entry.getValue();
+      super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+    }
+    super.setPriority(scan.getPriority());
+  }
+
+  /**
+   * Create Immutable instance of Scan from given Get object
+   *
+   * @param get Get to model Scan after
+   */
+  public ImmutableScan(Get get) {
+    super(get);
+    super.setIsolationLevel(get.getIsolationLevel());
+    for (Map.Entry<String, byte[]> attr : get.getAttributesMap().entrySet()) {
+      super.setAttribute(attr.getKey(), attr.getValue());
+    }
+    for (Map.Entry<byte[], TimeRange> entry : get.getColumnFamilyTimeRange().entrySet()) {
+      TimeRange tr = entry.getValue();
+      super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+    }
+    super.setPriority(get.getPriority());
+  }
+
+  /**
+   * Create a new Scan with a cursor. It only sets the position information like start row key.
+   * The others (like cfs, stop row, limit) should still be filled in by the user.
+   * {@link Result#isCursor()}
+   * {@link Result#getCursor()}
+   * {@link Cursor}
+   */
+  public static Scan createScanFromCursor(Cursor cursor) {

Review comment:
   It is not just a wrapper. ImmutableScan will have both an is-a and a has-a 
relationship. So what we pass to CPs will be of type Scan only; in reality 
it will be a wrapper.









[GitHub] [hbase] ndimiduk commented on a change in pull request #1811: HBASE-24474 Rename LocalRegion to MasterRegion

2020-06-01 Thread GitBox


ndimiduk commented on a change in pull request #1811:
URL: https://github.com/apache/hbase/pull/1811#discussion_r433425854



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##
@@ -450,7 +451,7 @@ public void run() {
   private ProcedureStore procedureStore;
 
   // the master local storage to store procedure data, etc.
-  private LocalStore localStore;
+  private MasterRegion masterRegion;

Review comment:
   Good.

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/region/MasterRegion.java
##
@@ -79,14 +79,14 @@
  * Notice that, you can use different root file system and WAL file system. 
Then the above directory
  * will be on two file systems, the root file system will have the data 
directory while the WAL
  * filesystem will have the WALs directory. The archived HFile will be moved 
to the global HFile
- * archived directory with the {@link LocalRegionParams#archivedWalSuffix()} 
suffix. The archived
+ * archived directory with the {@link MasterRegionParams#archivedWalSuffix()} 
suffix. The archived
  * WAL will be moved to the global WAL archived directory with the
- * {@link LocalRegionParams#archivedHFileSuffix()} suffix.
+ * {@link MasterRegionParams#archivedHFileSuffix()} suffix.
  */
 @InterfaceAudience.Private
-public final class LocalRegion {
+public final class MasterRegion {

Review comment:
   Do we still need the wrapper class with delegation? How about having the 
factory manage creation of the `HRegion` (wiring up the wal, ) and having `HMaster` 
hold the `HRegion` instance directly?

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##
@@ -1563,7 +1564,7 @@ protected void stopServiceThreads() {
   private void createProcedureExecutor() throws IOException {
 MasterProcedureEnv procEnv = new MasterProcedureEnv(this);
 procedureStore =
-  new RegionProcedureStore(this, localStore, new 
MasterProcedureEnv.FsUtilsLeaseRecovery(this));
+  new RegionProcedureStore(this, masterRegion, new 
MasterProcedureEnv.FsUtilsLeaseRecovery(this));

Review comment:
   Having everything use a single region has me a little nervous. It seems 
like it'll make it easy for two unrelated subsystems to step on each other's 
toes later on -- conflicting row keys, columns,  This should be fine for 
initial work, though.

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
##
@@ -258,6 +259,14 @@
   public static final String SPECIAL_RECOVERED_EDITS_DIR =
 "hbase.hregion.special.recovered.edits.dir";
 
+  /**
+   * Whether to use {@link MetaCellComparator} even if we are not meta region. 
Used when creating
+   * master local region.
+   */
+  public static final String USE_META_CELL_COMPARATOR = 
"hbase.region.use.meta.cell.comparator";

Review comment:
   If this configuration point is specific to master side, should it have 
`master` in its name?

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/region/MasterRegionFactory.java
##
@@ -89,45 +82,8 @@
   private static final TableDescriptor TABLE_DESC = 
TableDescriptorBuilder.newBuilder(TABLE_NAME)
 .setColumnFamily(ColumnFamilyDescriptorBuilder.of(PROC_FAMILY)).build();
 
-  private final LocalRegion region;
-
-  private LocalStore(LocalRegion region) {
-this.region = region;
-  }
-
-  public void update(UpdateLocalRegion action) throws IOException {
-region.update(action);
-  }
-
-  public Result get(Get get) throws IOException {
-return region.get(get);
-  }
-
-  public RegionScanner getScanner(Scan scan) throws IOException {
-return region.getScanner(scan);
-  }
-
-  public void close(boolean abort) {
-region.close(abort);
-  }
-
-  @VisibleForTesting
-  public FlushResult flush(boolean force) throws IOException {
-return region.flush(force);
-  }
-
-  @VisibleForTesting
-  public void requestRollAll() {
-region.requestRollAll();
-  }
-
-  @VisibleForTesting
-  public void waitUntilWalRollFinished() throws InterruptedException {
-region.waitUntilWalRollFinished();
-  }
-
-  public static LocalStore create(Server server) throws IOException {
-LocalRegionParams params = new LocalRegionParams().server(server)

Review comment:
    

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
##
@@ -258,6 +259,14 @@
   public static final String SPECIAL_RECOVERED_EDITS_DIR =
 "hbase.hregion.special.recovered.edits.dir";
 
+  /**
+   * Whether to use {@link MetaCellComparator} even if we are not meta region. 
Used when creating
+   * master local region.
+   */
+  public static final String USE_META_CELL_COMPARATOR = 
"hbase.region.use.meta.cell.comparator";

Review comment:
   It's a little nit-pick, but I like my configurations to specify 
components via `.`-separator, and use `_` for component names. So 

[jira] [Resolved] (HBASE-7368) Add shell tricks documentation to the refguide

2020-06-01 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-7368.
--
Resolution: Incomplete

Resolving old, incomplete.

> Add shell tricks documentation to the refguide
> --
>
> Key: HBASE-7368
> URL: https://issues.apache.org/jira/browse/HBASE-7368
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Jonathan Hsieh
>Priority: Major
>
> bq. Consider adding a sentence to 
> http://hbase.apache.org/book.html#shell_tricks on your new fangled assignable 
> additions. ...  (Should do same for your count change too). 





[jira] [Resolved] (HBASE-7161) Table does not come out of 'enabling' state

2020-06-01 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-7161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-7161.
--
Resolution: Later

Resolving old issue as 'later'/no-longer-pertinent

> Table does not come out of 'enabling' state
> ---
>
> Key: HBASE-7161
> URL: https://issues.apache.org/jira/browse/HBASE-7161
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2
>Reporter: Devaraj Das
>Priority: Major
>
> I was running a test, and the test failed because a table didn't get 
> 'enabled' in the timeframe the test expected. When I checked the state of the 
> table on ZK, it showed the state as 'enabling'. 
> When I dug up the master logs, found that the BulkAssigner.bulkAssign 
> returned false (the first line in the logs below), and the table never became 
> 'enabled'. There was one region which could not be opened in the time 
> bulkAssign ran (and that got 'opened' after the bulkAssign method returned). 
> Also the table could not be enabled later on (from the hbase shell, for 
> example), since the state of the table on ZK was 'enabling' (as opposed to 
> 'disabled' and the table-state checks would fail).
> {noformat}
> 2012-11-13 06:41:27,257 INFO 
> org.apache.hadoop.hbase.master.handler.EnableTableHandler: Enabled table is 
> done=false 
> 2012-11-13 06:41:49,569 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Handling 
> transition=RS_ZK_REGION_OPENING, server=hrt20n32.foo.net,60020,1352782575357, 
> region=0c0f9c71a81112f07c8f0ea130a65d05
> 2012-11-13 06:41:49,579 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Handling 
> transition=RS_ZK_REGION_OPENING, server=hrt20n32.foo.net,60020,1352782575357, 
> region=0c0f9c71a81112f07c8f0ea130a65d05
> 2012-11-13 06:41:49,586 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Handling 
> transition=RS_ZK_REGION_OPENED, server=hrt20n32.foo.net,60020,1352782575357, 
> region=0c0f9c71a81112f07c8f0ea130a65d05
> 2012-11-13 06:41:49,586 DEBUG 
> org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: Handling OPENED 
> event for 
> loadtest_d1,,1352788441221.0c0f9c71a81112f07c8f0ea130a65d05. from 
> hrt20n32.foo.net,60020,1352782575357; deleting unassigned node
> 2012-11-13 06:41:49,586 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
> master:6-0x13af81eec6f0004 Deleting existing unassigned node for 
> 0c0f9c71a81112f07c8f0ea130a65d05 that is in expected state RS_ZK_REGION_OPENED
> 2012-11-13 06:41:49,589 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: The znode of region 
> loadtest_d1,,1352788441221.0c0f9c71a81112f07c8f0ea130a65d05. has been 
> deleted.
> 2012-11-13 06:41:49,589 INFO 
> org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the 
> region loadtest_d1,,1352788441221.0c0f9c71a81112f07c8f0ea130a65d05. 
> that was online on hrt20n32.foo.net,60020,1352782575357
> {noformat}
> The client (that invoked HBA.enableTable) gave up eventually (enableTable 
> invokes isTableEnabled in a loop and in this case it was always returning 
> false).
> The handling on the master side for regions that take longer to get 'opened' 
> can be improved.





[jira] [Commented] (HBASE-24347) Hadoop2 profiles are both active when pre-commit PR builds run

2020-06-01 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121248#comment-17121248
 ] 

Nick Dimiduk commented on HBASE-24347:
--

[~zghao] I think much of the CI refactoring I did to support JDK11 didn't go 
back to branch-2.2 since it wasn't to be supported there. I think that would be 
the primary reason for differences.

> Hadoop2 profiles are both active when pre-commit PR builds run
> --
>
> Key: HBASE-24347
> URL: https://issues.apache.org/jira/browse/HBASE-24347
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Michael Stack
>Assignee: Josh Elser
>Priority: Major
> Fix For: 2.3.0, 2.4.0, 2.2.6
>
> Attachments: HBASE-24280.001.branch-2.3.patch, 
> HBASE-24280.001.branch-2.patch
>
>
> We need the magic done in the parent out in our precommit builds too. See how 
> https://github.com/apache/hbase/pull/1664 fails in hbase-rest w/ complaint 
> about jersey; this is a symptom of double hadoop2+hadoop3 profile activation.





[jira] [Commented] (HBASE-24440) Prevent temporal misordering on timescales smaller than one clock tick

2020-06-01 Thread Geoffrey Jacoby (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121247#comment-17121247
 ] 

Geoffrey Jacoby commented on HBASE-24440:
-

[~apurtell] - in HBase 2.x and above, the sort-delete-before-put rule is 
configurable (see 29.3 in the HBase book). It can be disabled at the cost of 
some CPU performance on reads. 
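For reference, assuming the switch meant here is the new-version-behavior column-family attribute from the HBase book, it is toggled per column family in the hbase shell. The table and family names below are hypothetical:

```ruby
# hbase shell -- hypothetical table/family names. With this attribute set,
# cells sharing a timestamp are ordered by sequence id rather than sorting
# deletes ahead of puts, at some read-time CPU cost.
alter 'my_table', { NAME => 'cf', NEW_VERSION_BEHAVIOR => 'true' }
```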

> Prevent temporal misordering on timescales smaller than one clock tick
> --
>
> Key: HBASE-24440
> URL: https://issues.apache.org/jira/browse/HBASE-24440
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> When mutations are sent to the servers without a timestamp explicitly 
> assigned by the client the server will substitute the current wall clock 
> time. There are edge cases where it is at least theoretically possible for 
> more than one mutation to be committed to a given row within the same clock 
> tick. When this happens we have to track and preserve the ordering of these 
> mutations in some other way besides the timestamp component of the key. Let 
> me bypass most discussion here by noting that whether we do this or not, we 
> do not pass such ordering information in the cross cluster replication 
> protocol. We also have interesting edge cases regarding key type precedence 
> when mutations arrive "simultaneously": we sort deletes ahead of puts. This, 
> especially in the presence of replication, can lead to visible anomalies for 
> clients able to interact with both source and sink. 
> There is a simple solution that removes the possibility that these edge cases 
> can occur: 
> We can detect, when we are about to commit a mutation to a row, if we have 
> already committed a mutation to this same row in the current clock tick. 
> Occurrences of this condition will be rare. We are already tracking current 
> time. We have to know this in order to assign the timestamp. Where this 
> becomes interesting is how we might track the last commit time per row. 
> Making the detection of this case efficient for the normal code path is the 
> bulk of the challenge. One option is to keep track of the last locked time 
> for row locks. (Todo: How would we track and garbage collect this efficiently 
> and correctly. Not the ideal option.) We might also do this tracking somehow 
> via the memstore. (At least in this case the lifetime and distribution of in 
> memory row state, including the proposed timestamps, would align.) Assuming 
> we can efficiently know if we are about to commit twice to the same row 
> within a single clock tick, we would simply sleep/yield the current thread 
> until the clock ticks over, and then proceed. 





[GitHub] [hbase] Apache-HBase commented on pull request #1818: HBASE-24456 : Create ImmutableScan and use it for CustomizedScanInfoBuilder

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1818:
URL: https://github.com/apache/hbase/pull/1818#issuecomment-637039368


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m  3s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 32s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 21s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 12s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 216m 35s |  hbase-server in the patch passed.  
|
   |  |   | 248m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.9 Server=19.03.9 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1818/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1818 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f367dd013346 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1818/4/testReport/
 |
   | Max. process+thread count | 3046 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1818/4/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   







[jira] [Commented] (HBASE-19455) Try to re-enable a disabled BucketCache after a set timeout

2020-06-01 Thread Zach York (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-19455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121243#comment-17121243
 ] 

Zach York commented on HBASE-19455:
---

[~anoop.hbase] I probably won't be able to take this up anytime soon... You can 
do this and pull me in for a review

> Try to re-enable a disabled BucketCache after a set timeout
> ---
>
> Key: HBASE-19455
> URL: https://issues.apache.org/jira/browse/HBASE-19455
> Project: HBase
>  Issue Type: Bug
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
>
> This JIRA is a follow-up to HBASE-19435. Currently, if the BucketCache is 
> disabled, the cache will try to enable itself again. This isn’t ideal for 
> situations where the cache is disabled because of transient issues. Instead, 
> we should have BucketCache try to re-enable itself after a time period.
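The re-enable-after-a-timeout idea can be sketched generically. The class below is a hypothetical helper, not the BucketCache API; it only shows the timing logic of "stay disabled until a configured window elapses":

```java
import java.util.concurrent.TimeUnit;

/**
 * Sketch of the retry-after-timeout idea: once the cache is disabled, it
 * stays off until a configured window has elapsed, after which callers may
 * attempt to re-enable it. Hypothetical names, not HBase's BucketCache API.
 */
public class DisableWindow {
  private final long retryAfterMs;
  private volatile long disabledAtMs = -1; // -1 means never disabled

  public DisableWindow(long retryAfter, TimeUnit unit) {
    this.retryAfterMs = unit.toMillis(retryAfter);
  }

  /** Record the moment the cache was disabled. */
  public void markDisabled(long nowMs) {
    disabledAtMs = nowMs;
  }

  /** True while the cache was disabled and the retry window has not yet passed. */
  public boolean stillDisabled(long nowMs) {
    return disabledAtMs >= 0 && nowMs - disabledAtMs < retryAfterMs;
  }

  public static void main(String[] args) {
    DisableWindow w = new DisableWindow(60, TimeUnit.SECONDS);
    w.markDisabled(1_000L);
    assert w.stillDisabled(30_000L);   // inside the window: stay disabled
    assert !w.stillDisabled(61_001L);  // window elapsed: eligible to re-enable
  }
}
```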



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] virajjasani commented on a change in pull request #1818: HBASE-24456 : Create ImmutableScan and use it for CustomizedScanInfoBuilder

2020-06-01 Thread GitBox


virajjasani commented on a change in pull request #1818:
URL: https://github.com/apache/hbase/pull/1818#discussion_r433410406



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ImmutableScan.java
##
@@ -0,0 +1,477 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Map;
+import java.util.NavigableSet;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.hadoop.hbase.security.visibility.Authorizations;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Immutable version of Scan
+ */
+@InterfaceAudience.Public
+public final class ImmutableScan extends Scan {
+
+  /**
+   * Create Immutable instance of Scan from given Scan object
+   *
+   * @param scan Copy all values from Scan
+   * @throws IOException From parent constructor
+   */
+  public ImmutableScan(Scan scan) throws IOException {
+    super(scan);
+    super.setIsolationLevel(scan.getIsolationLevel());
+    Map<byte[], NavigableSet<byte[]>> familyMap = scan.getFamilyMap();
+    for (Map.Entry<byte[], NavigableSet<byte[]>> entry : familyMap.entrySet()) {
+      byte[] family = entry.getKey();
+      NavigableSet<byte[]> cols = entry.getValue();
+      if (cols != null && cols.size() > 0) {
+        for (byte[] col : cols) {
+          super.addColumn(family, col);
+        }
+      } else {
+        super.addFamily(family);
+      }
+    }
+    for (Map.Entry<String, byte[]> attr : scan.getAttributesMap().entrySet()) {
+      super.setAttribute(attr.getKey(), attr.getValue());
+    }
+    for (Map.Entry<byte[], TimeRange> entry : scan.getColumnFamilyTimeRange().entrySet()) {
+      TimeRange tr = entry.getValue();
+      super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+    }
+    super.setPriority(scan.getPriority());
+  }
+
+  /**
+   * Create Immutable instance of Scan from given Get object
+   *
+   * @param get Get to model Scan after
+   */
+  public ImmutableScan(Get get) {
+    super(get);
+    super.setIsolationLevel(get.getIsolationLevel());
+    for (Map.Entry<String, byte[]> attr : get.getAttributesMap().entrySet()) {
+      super.setAttribute(attr.getKey(), attr.getValue());
+    }
+    for (Map.Entry<byte[], TimeRange> entry : get.getColumnFamilyTimeRange().entrySet()) {
+      TimeRange tr = entry.getValue();
+      super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+    }
+    super.setPriority(get.getPriority());
+  }
+
+  /**
+   * Create a new Scan with a cursor. It only sets the position information like start row key.
+   * The others (like cfs, stop row, limit) should still be filled in by the user.
+   * {@link Result#isCursor()}
+   * {@link Result#getCursor()}
+   * {@link Cursor}
+   */
+  public static Scan createScanFromCursor(Cursor cursor) {

Review comment:
   We can have a wrapper, but providing a Scan object (which is immutable by 
nature here) to coprocessor hooks sounds better. Even though we provide a Scan 
object to store scanner coproc hooks, it is immutable as per the use-case, and 
that is something HBase can take care of.
   Moreover, devs adding a new method to Scan are more likely to override it in 
the ImmutableScan class, because all methods are overridden (they get IDE 
hints), but they are less likely to realize the usage of each method in some 
wrapper class and add the method to the wrapper.
   
   However, if there are stronger pros for using a wrapper, I am fine with that too.
   Thoughts?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1825: HBASE-24189 WALSplit recreates region dirs for deleted table with rec…

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1825:
URL: https://github.com/apache/hbase/pull/1825#issuecomment-637026477


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 35s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 38s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 23s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m  2s |  hbase-server: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  11m  5s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 26s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  35m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |------:|:---------|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1825 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux a9761ed4ab9a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 716702a349 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 94 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1825/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1823: HBASE-24485 Backport to branch-1 HBASE-17738 BucketCache startup is slow

2020-06-01 Thread GitBox


Apache-HBase commented on pull request #1823:
URL: https://github.com/apache/hbase/pull/1823#issuecomment-637026079


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   8m 12s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m 15s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m 23s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   2m 59s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   2m 38s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 54s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javac  |   1m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  hbase-common: The patch 
generated 0 new + 1 unchanged - 3 fixed = 1 total (was 4)  |
   | +1 :green_heart: |  checkstyle  |   1m 30s |  The patch passed checkstyle 
in hbase-server  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   2m 50s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 36s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  the patch passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  findbugs  |   4m  1s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 45s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  | 122m 53s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 174m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |------:|:---------|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1823/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1823 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 0dc6d451dcc2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1823/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / 4096925 |
   | Default Java | 1.7.0_262 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 
/usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1823/3/testReport/
 |
   | Max. process+thread count | 4428 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1823/3/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24440) Prevent temporal misordering on timescales smaller than one clock tick

2020-06-01 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121223#comment-17121223
 ] 

Andrew Kyle Purtell commented on HBASE-24440:
-

Correct [~anoop.hbase], two versions with two distinct timestamps instead 
of duplicate row keys with only something like an internal-only seqno to 
differentiate them (which is not replicated).

We can also consider removing the implicit sort-delete-before-put rule that can 
cause temporal anomalies under some conditions, but that is out of scope for 
this proposal.

> Prevent temporal misordering on timescales smaller than one clock tick
> --
>
> Key: HBASE-24440
> URL: https://issues.apache.org/jira/browse/HBASE-24440
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> When mutations are sent to the servers without a timestamp explicitly 
> assigned by the client the server will substitute the current wall clock 
> time. There are edge cases where it is at least theoretically possible for 
> more than one mutation to be committed to a given row within the same clock 
> tick. When this happens we have to track and preserve the ordering of these 
> mutations in some other way besides the timestamp component of the key. Let 
> me bypass most discussion here by noting that whether we do this or not, we 
> do not pass such ordering information in the cross cluster replication 
> protocol. We also have interesting edge cases regarding key type precedence 
> when mutations arrive "simultaneously": we sort deletes ahead of puts. This, 
> especially in the presence of replication, can lead to visible anomalies for 
> clients able to interact with both source and sink. 
> There is a simple solution that removes the possibility that these edge cases 
> can occur: 
> We can detect, when we are about to commit a mutation to a row, if we have 
> already committed a mutation to this same row in the current clock tick. 
> Occurrences of this condition will be rare. We are already tracking current 
> time. We have to know this in order to assign the timestamp. Where this 
> becomes interesting is how we might track the last commit time per row. 
> Making the detection of this case efficient for the normal code path is the 
> bulk of the challenge. One option is to keep track of the last locked time 
> for row locks. (Todo: How would we track and garbage collect this efficiently 
> and correctly. Not the ideal option.) We might also do this tracking somehow 
> via the memstore. (At least in this case the lifetime and distribution of in 
> memory row state, including the proposed timestamps, would align.) Assuming 
> we can efficiently know if we are about to commit twice to the same row 
> within a single clock tick, we would simply sleep/yield the current thread 
> until the clock ticks over, and then proceed. 
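The "sleep until the clock ticks over" idea above can be sketched in a few lines. This is only an illustration of the proposed guard, not HBase code: the class and method names are hypothetical, the per-row map stands in for whatever row-lock or memstore tracking would really be used, and garbage collection of old entries is deliberately elided.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the guard described above: before committing a server-timestamped
// mutation, check when this row was last committed. If that was within the
// current millisecond, sleep until the clock ticks over so the two mutations
// get distinct timestamps. All names here are hypothetical.
class RowCommitClock {
  private final ConcurrentMap<String, Long> lastCommitMillis = new ConcurrentHashMap<>();

  /** Returns the timestamp to assign, strictly later than the row's previous one. */
  long nextTimestampFor(String rowKey) throws InterruptedException {
    long now = System.currentTimeMillis();
    Long last = lastCommitMillis.get(rowKey);
    while (last != null && now <= last) {
      Thread.sleep(1); // rare case: wait out the rest of the clock tick
      now = System.currentTimeMillis();
    }
    lastCommitMillis.put(rowKey, now); // GC of stale entries elided in this sketch
    return now;
  }
}
```

Note the loop only runs when two commits to the same row land in one tick, so the normal path stays a single map lookup.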



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] saintstack commented on a change in pull request #1814: HBASE-20904 Prometheus /metrics http endpoint for monitoring

2020-06-01 Thread GitBox


saintstack commented on a change in pull request #1814:
URL: https://github.com/apache/hbase/pull/1814#discussion_r433401973



##
File path: hbase-common/src/main/resources/hbase-default.xml
##
@@ -1727,6 +1727,15 @@ possible configurations would overwhelm and obscure the important.
   ThreadPool.
 
   
+  <property>
+    <name>hbase.http.enable.prometheus.servlets</name>
+    <value>false</value>
+    <description>
+      Enable prometheus servlets /prom and /prom2 for prometheus based monitoring.
+      /prom is based on the new HBase metrics API and not all metrics are exported for now.
+      /prom2 is based on the old hadoop2 metrics API and has all the metrics.
+    </description>
+  </property>

Review comment:
   Thanks. Now I see where you are coming from.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-20904) Prometheus /metrics http endpoint for monitoring integration

2020-06-01 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-20904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121216#comment-17121216
 ] 

Michael Stack commented on HBASE-20904:
---

[~mmpataki] added helpful note up on the PR. I'm adding it here too.

bq. HBASE-9774 brought in hbase-native metrics collection for the coprocessors 
and a decision was made to use this API to record all the other metrics as 
well (hbase-metrics-api/README.txt). I am referring to this as the "new API". Now 
all the work to consume this new API isn't done, so we still have to depend on 
the "old", hadoop2-based metric collection.

> Prometheus /metrics http endpoint for monitoring integration
> 
>
> Key: HBASE-20904
> URL: https://issues.apache.org/jira/browse/HBASE-20904
> Project: HBase
>  Issue Type: New Feature
>  Components: metrics, monitoring
>Reporter: Hari Sekhon
>Priority: Major
>
> Feature Request to add Prometheus /metrics http endpoint for monitoring 
> integration:
> [https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cscrape_config%3E]
> Prometheus metrics format for that endpoint:
> [https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md]
>  
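For context, the scrape configuration Prometheus would point at such an endpoint looks roughly like the following. The host names, ports, and path here are illustrative assumptions, not part of this request:

```yaml
scrape_configs:
  - job_name: 'hbase'
    metrics_path: /metrics   # the endpoint proposed in this issue
    static_configs:
      # illustrative targets: RegionServer and Master info-server ports
      - targets: ['regionserver1.example.com:16030', 'master1.example.com:16010']
```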



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] huaxiangsun commented on pull request #1786: HBASE-24418 Consolidate Normalizer implementations

2020-06-01 Thread GitBox


huaxiangsun commented on pull request #1786:
URL: https://github.com/apache/hbase/pull/1786#issuecomment-637017304


   +1 for the new diff, looks great!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-24487) Add 2.3 Documentation to the website

2020-06-01 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-24487:


 Summary: Add 2.3 Documentation to the website
 Key: HBASE-24487
 URL: https://issues.apache.org/jira/browse/HBASE-24487
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Nick Dimiduk






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] anoopsjohn commented on a change in pull request #1818: HBASE-24456 : Create ImmutableScan and use it for CustomizedScanInfoBuilder

2020-06-01 Thread GitBox


anoopsjohn commented on a change in pull request #1818:
URL: https://github.com/apache/hbase/pull/1818#discussion_r433387520



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ImmutableScan.java
##
@@ -0,0 +1,477 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Map;
+import java.util.NavigableSet;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.hadoop.hbase.security.visibility.Authorizations;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Immutable version of Scan
+ */
+@InterfaceAudience.Public
+public final class ImmutableScan extends Scan {
+
+  /**
+   * Create Immutable instance of Scan from given Scan object
+   *
+   * @param scan Copy all values from Scan
+   * @throws IOException From parent constructor
+   */
+  public ImmutableScan(Scan scan) throws IOException {
+    super(scan);
+    super.setIsolationLevel(scan.getIsolationLevel());
+    Map<byte[], NavigableSet<byte[]>> familyMap = scan.getFamilyMap();
+    for (Map.Entry<byte[], NavigableSet<byte[]>> entry : familyMap.entrySet()) {
+      byte[] family = entry.getKey();
+      NavigableSet<byte[]> cols = entry.getValue();
+      if (cols != null && cols.size() > 0) {
+        for (byte[] col : cols) {
+          super.addColumn(family, col);
+        }
+      } else {
+        super.addFamily(family);
+      }
+    }
+    for (Map.Entry<String, byte[]> attr : scan.getAttributesMap().entrySet()) {
+      super.setAttribute(attr.getKey(), attr.getValue());
+    }
+    for (Map.Entry<byte[], TimeRange> entry : scan.getColumnFamilyTimeRange().entrySet()) {
+      TimeRange tr = entry.getValue();
+      super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+    }
+    super.setPriority(scan.getPriority());
+  }
+
+  /**
+   * Create Immutable instance of Scan from given Get object
+   *
+   * @param get Get to model Scan after
+   */
+  public ImmutableScan(Get get) {
+    super(get);
+    super.setIsolationLevel(get.getIsolationLevel());
+    for (Map.Entry<String, byte[]> attr : get.getAttributesMap().entrySet()) {
+      super.setAttribute(attr.getKey(), attr.getValue());
+    }
+    for (Map.Entry<byte[], TimeRange> entry : get.getColumnFamilyTimeRange().entrySet()) {
+      TimeRange tr = entry.getValue();
+      super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+    }
+    super.setPriority(get.getPriority());
+  }
+
+  /**
+   * Create a new Scan with a cursor. It only sets the position information, like the start
+   * row key. The others (like cfs, stop row, limit) should still be filled in by the user.
+   * {@link Result#isCursor()}
+   * {@link Result#getCursor()}
+   * {@link Cursor}
+   */
+  public static Scan createScanFromCursor(Cursor cursor) {

Review comment:
   Can we try making ImmutableScan a wrapper over the actual Scan object? The 
setters in ImmutableScan could throw an exception and the getters could just 
delegate to the original Scan object.
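The delegating-wrapper alternative being suggested can be sketched as follows. This is a toy illustration of the pattern only: `SimpleScan` is a hypothetical stand-in for the real Scan class so the example stays self-contained.

```java
// Toy sketch of the wrapper idea: setters reject mutation, getters delegate
// to the wrapped instance. "SimpleScan" is a hypothetical stand-in for Scan;
// it is not the real HBase class.
class SimpleScan {
  private byte[] startRow = new byte[0];

  public SimpleScan withStartRow(byte[] row) {
    this.startRow = row;
    return this;
  }

  public byte[] getStartRow() {
    return startRow;
  }
}

final class ImmutableScanWrapper extends SimpleScan {
  private final SimpleScan delegate;

  ImmutableScanWrapper(SimpleScan delegate) {
    this.delegate = delegate;
  }

  @Override
  public SimpleScan withStartRow(byte[] row) {
    // every setter fails fast instead of silently mutating shared state
    throw new UnsupportedOperationException("ImmutableScanWrapper does not allow withStartRow");
  }

  @Override
  public byte[] getStartRow() {
    // every getter just forwards to the wrapped scan
    return delegate.getStartRow();
  }
}
```

The maintenance risk raised above shows up here: a new setter added to `SimpleScan` but not overridden in the wrapper would silently mutate the wrapper's own unused superclass state rather than throw.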





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (HBASE-24189) WALSplit recreates region dirs for deleted table with recovered edits data

2020-06-01 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118699#comment-17118699
 ] 

Anoop Sam John edited comment on HBASE-24189 at 6/1/20, 5:37 PM:
-

bq. Will be created at region open only, not at WAL split time
I was partially right here. For 2.x this is the case; on 1.x it seems not (I 
need to confirm once more, though).
In fact the region dir check, for knowing whether this region is still present, 
can be done on the rootFS. The latest PR does that.


was (Author: anoop.hbase):
bq.Will be created at region open only not at WAL split time
I was partially right here.  For 2.x this is the case.  On 1.x seems not (I 
need to confirm once more though)

> WALSplit recreates region dirs for deleted table with recovered edits data
> --
>
> Key: HBASE-24189
> URL: https://issues.apache.org/jira/browse/HBASE-24189
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, wal
>Affects Versions: 2.2.4
> Environment: * HDFS 3.1.3
>  * HBase 2.1.4
>  * OpenJDK 8
>Reporter: Andrey Elenskiy
>Assignee: Anoop Sam John
>Priority: Major
>
> Under the following scenario region directories in HDFS can be recreated with 
> only recovered.edits in them:
>  # Create table "test"
>  # Put into "test"
>  # Delete table "test"
>  # Create table "test" again
>  # Crash the regionserver to which the put has went to force the WAL replay
>  # Region directory in old table is recreated in new table
>  # hbase hbck returns inconsistency
> This appears to happen due to the fact that WALs are not cleaned up once a 
> table is deleted and they still contain the edits from old table. I've tried 
> wal_roll command on the regionserver before crashing it, but it doesn't seem 
> to help as under some circumstances there are still WAL files around. The 
> only solution that works consistently is to restart regionserver before 
> creating the table at step 4 because that triggers log cleanup on startup: 
> [https://github.com/apache/hbase/blob/f3ee9b8aa37dd30d34ff54cd39fb9b4b6d22e683/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L508]
>  
> Truncating a table would also be a workaround, but in our case it's a no-go as 
> we create and delete tables in our tests which run back to back (create table 
> at the beginning of the test and delete it at the end of the test).
> A nice option in our case would be to provide hbase shell utility to force 
> clean up of log files manually as I realize that it's not really viable to 
> clean all of those up every time some table is removed.
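The fix direction discussed for this issue — skip writing recovered edits when the region no longer exists — can be sketched roughly as below. The directory layout and all names are simplified assumptions (plain java.nio.file instead of the HDFS FileSystem API), not the actual patch.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Rough sketch of the guard discussed in this issue: before the WAL splitter
// writes recovered.edits for a region, check on the root filesystem whether
// the region directory still exists. If the table was deleted, the edits are
// dropped instead of recreating the directory. Layout and names are
// hypothetical simplifications.
class WalSplitGuard {
  static boolean shouldWriteRecoveredEdits(Path rootDir, String table, String encodedRegionName) {
    Path regionDir = rootDir.resolve("data").resolve("default")
        .resolve(table).resolve(encodedRegionName);
    return Files.isDirectory(regionDir); // region dir gone => skip its edits
  }
}
```

Checking on the root filesystem (rather than the WAL filesystem) is what distinguishes this from the earlier behavior, per the comment above.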



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] anoopsjohn opened a new pull request #1825: HBASE-24189 WALSplit recreates region dirs for deleted table with rec…

2020-06-01 Thread GitBox


anoopsjohn opened a new pull request #1825:
URL: https://github.com/apache/hbase/pull/1825


   …overed edits data.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] virajjasani commented on a change in pull request #1818: HBASE-24456 : Create ImmutableScan and use it for CustomizedScanInfoBuilder

2020-06-01 Thread GitBox


virajjasani commented on a change in pull request #1818:
URL: https://github.com/apache/hbase/pull/1818#discussion_r433122495



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ImmutableScan.java
##
@@ -0,0 +1,477 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Map;
+import java.util.NavigableSet;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.hadoop.hbase.security.visibility.Authorizations;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Immutable version of Scan
+ */
+@InterfaceAudience.Public
+public final class ImmutableScan extends Scan {
+
+  /**
+   * Create Immutable instance of Scan from given Scan object
+   *
+   * @param scan Copy all values from Scan
+   * @throws IOException From parent constructor
+   */
+  public ImmutableScan(Scan scan) throws IOException {
+    super(scan);
+    super.setIsolationLevel(scan.getIsolationLevel());
+    Map<byte[], NavigableSet<byte[]>> familyMap = scan.getFamilyMap();
+    for (Map.Entry<byte[], NavigableSet<byte[]>> entry : familyMap.entrySet()) {
+      byte[] family = entry.getKey();
+      NavigableSet<byte[]> cols = entry.getValue();
+      if (cols != null && cols.size() > 0) {
+        for (byte[] col : cols) {
+          super.addColumn(family, col);
+        }
+      } else {
+        super.addFamily(family);
+      }
+    }
+    for (Map.Entry<String, byte[]> attr : scan.getAttributesMap().entrySet()) {
+      super.setAttribute(attr.getKey(), attr.getValue());
+    }
+    for (Map.Entry<byte[], TimeRange> entry : scan.getColumnFamilyTimeRange().entrySet()) {
+      TimeRange tr = entry.getValue();
+      super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+    }
+    super.setPriority(scan.getPriority());
+  }
+
+  /**
+   * Create Immutable instance of Scan from given Get object
+   *
+   * @param get Get to model Scan after
+   */
+  public ImmutableScan(Get get) {
+    super(get);
+    super.setIsolationLevel(get.getIsolationLevel());
+    for (Map.Entry<String, byte[]> attr : get.getAttributesMap().entrySet()) {
+      super.setAttribute(attr.getKey(), attr.getValue());
+    }
+    for (Map.Entry<byte[], TimeRange> entry : get.getColumnFamilyTimeRange().entrySet()) {
+      TimeRange tr = entry.getValue();
+      super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+    }
+    super.setPriority(get.getPriority());
+  }
+
+  /**
+   * Create a new Scan with a cursor. It only sets the position information, like the start
+   * row key. The others (like cfs, stop row, limit) should still be filled in by the user.
+   * {@link Result#isCursor()}
+   * {@link Result#getCursor()}
+   * {@link Cursor}
+   */
+  public static Scan createScanFromCursor(Cursor cursor) {
+    Scan scan = new Scan().withStartRow(cursor.getRow());
+    try {
+      return new ImmutableScan(scan);
+    } catch (IOException e) {
+      throw new RuntimeException("Scan should not throw IOException", e);
+    }
+  }
+
+  @Override
+  public Scan addFamily(byte[] family) {
+    throw new IllegalStateException("ImmutableScan does not allow access to addFamily");
+  }
+
+  @Override
+  public Scan addColumn(byte[] family, byte[] qualifier) {
+    throw new IllegalStateException("ImmutableScan does not allow access to addColumn");
+  }
+
+  @Override
+  public Scan setTimeRange(long minStamp, long maxStamp) {
+    throw new IllegalStateException("ImmutableScan does not allow access to setTimeRange");
+  }
+
+  @Deprecated
+  @Override
+  public Scan setTimeStamp(long timestamp) {
+    throw new IllegalStateException("ImmutableScan does not allow access to setTimeStamp");
+  }
+
+  @Override
+  public Scan setTimestamp(long timestamp) {
+    throw new IllegalStateException("ImmutableScan does not allow access to setTimestamp");
+  }
+
+  @Override
+  public Scan setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp) {
+    throw new IllegalStateException(
+      "ImmutableScan does not allow access to setColumnFamilyTimeRange");
+  }
+
+  @Override
+  public Scan withStartRow(byte[] 

[GitHub] [hbase] virajjasani commented on a change in pull request #1818: HBASE-24456 : Create ImmutableScan and use it for CustomizedScanInfoBuilder

2020-06-01 Thread GitBox


virajjasani commented on a change in pull request #1818:
URL: https://github.com/apache/hbase/pull/1818#discussion_r433122495



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ImmutableScan.java
##
@@ -0,0 +1,477 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Map;
+import java.util.NavigableSet;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.hadoop.hbase.security.visibility.Authorizations;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Immutable version of Scan
+ */
+@InterfaceAudience.Public
+public final class ImmutableScan extends Scan {
+
+  /**
+   * Create Immutable instance of Scan from given Scan object
+   *
+   * @param scan Copy all values from Scan
+   * @throws IOException From parent constructor
+   */
+  public ImmutableScan(Scan scan) throws IOException {
+super(scan);
+super.setIsolationLevel(scan.getIsolationLevel());
+Map> familyMap = scan.getFamilyMap();
+for (Map.Entry> entry : familyMap.entrySet()) 
{
+  byte[] family = entry.getKey();
+  NavigableSet cols = entry.getValue();
+  if (cols != null && cols.size() > 0) {
+for (byte[] col : cols) {
+  super.addColumn(family, col);
+}
+  } else {
+super.addFamily(family);
+  }
+}
+for (Map.Entry attr : scan.getAttributesMap().entrySet()) {
+  super.setAttribute(attr.getKey(), attr.getValue());
+}
+for (Map.Entry entry : 
scan.getColumnFamilyTimeRange().entrySet()) {
+  TimeRange tr = entry.getValue();
+  super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+}
+super.setPriority(scan.getPriority());
+  }
+
+  /**
+   * Create Immutable instance of Scan from given Get object
+   *
+   * @param get Get to model Scan after
+   */
+  public ImmutableScan(Get get) {
+super(get);
+super.setIsolationLevel(get.getIsolationLevel());
+for (Map.Entry attr : get.getAttributesMap().entrySet()) {
+  super.setAttribute(attr.getKey(), attr.getValue());
+}
+for (Map.Entry entry : 
get.getColumnFamilyTimeRange().entrySet()) {
+  TimeRange tr = entry.getValue();
+  super.setColumnFamilyTimeRange(entry.getKey(), tr.getMin(), tr.getMax());
+}
+super.setPriority(get.getPriority());
+  }
+
+  /**
+   * Create a new Scan with a cursor. It only sets the position information, like the start
+   * row key. The others (like cfs, stop row, limit) should still be filled in by the user.
+   * {@link Result#isCursor()}
+   * {@link Result#getCursor()}
+   * {@link Cursor}
+   */
+  public static Scan createScanFromCursor(Cursor cursor) {
+    Scan scan = new Scan().withStartRow(cursor.getRow());
+    try {
+      return new ImmutableScan(scan);
+    } catch (IOException e) {
+      throw new RuntimeException("Scan should not throw IOException", e);
+    }
+  }
+
+  @Override
+  public Scan addFamily(byte[] family) {
+    throw new IllegalStateException("ImmutableScan does not allow access to addFamily");
+  }
+
+  @Override
+  public Scan addColumn(byte[] family, byte[] qualifier) {
+    throw new IllegalStateException("ImmutableScan does not allow access to addColumn");
+  }
+
+  @Override
+  public Scan setTimeRange(long minStamp, long maxStamp) {
+    throw new IllegalStateException("ImmutableScan does not allow access to setTimeRange");
+  }
+
+  @Deprecated
+  @Override
+  public Scan setTimeStamp(long timestamp) {
+    throw new IllegalStateException("ImmutableScan does not allow access to setTimeStamp");
+  }
+
+  @Override
+  public Scan setTimestamp(long timestamp) {
+    throw new IllegalStateException("ImmutableScan does not allow access to setTimestamp");
+  }
+
+  @Override
+  public Scan setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp) {
+    throw new IllegalStateException(
+      "ImmutableScan does not allow access to setColumnFamilyTimeRange");
+  }
+
+  @Override
+  public Scan withStartRow(byte[]
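The pattern in the diff above — copy state in the constructor by calling the parent's setters explicitly through `super` (which bypasses the throwing overrides), then reject all mutation afterwards — can be sketched in a self-contained way. The class and method names below are illustrative stand-ins, not HBase's API:

```java
import java.util.HashMap;
import java.util.Map;

class Settings {
    private final Map<String, String> attrs = new HashMap<>();

    Settings setAttribute(String key, String value) {
        attrs.put(key, value);
        return this;
    }

    String getAttribute(String key) {
        return attrs.get(key);
    }

    Map<String, String> getAttributesMap() {
        return new HashMap<>(attrs);
    }
}

final class ImmutableSettings extends Settings {
    ImmutableSettings(Settings src) {
        // super.setAttribute bypasses the throwing override below,
        // so copying is possible during construction only.
        for (Map.Entry<String, String> e : src.getAttributesMap().entrySet()) {
            super.setAttribute(e.getKey(), e.getValue());
        }
    }

    @Override
    Settings setAttribute(String key, String value) {
        throw new IllegalStateException("ImmutableSettings does not allow setAttribute");
    }
}

public class ImmutableSketch {
    public static void main(String[] args) {
        Settings frozen = new ImmutableSettings(new Settings().setAttribute("priority", "1"));
        System.out.println(frozen.getAttribute("priority")); // prints "1"
        try {
            frozen.setAttribute("priority", "2");
        } catch (IllegalStateException e) {
            System.out.println("mutation rejected"); // prints "mutation rejected"
        }
    }
}
```

The design choice is the same as in the diff: no mutable state leaks out, because every mutator is overridden to throw, while the constructor still reuses the parent's storage via explicit `super` calls.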

[GitHub] [hbase] virajjasani commented on a change in pull request #1818: HBASE-24456 : Create ImmutableScan and use it for CustomizedScanInfoBuilder

2020-06-01 Thread GitBox


virajjasani commented on a change in pull request #1818:
URL: https://github.com/apache/hbase/pull/1818#discussion_r433237718



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CustomizedScanInfoBuilder.java
##
@@ -42,13 +43,18 @@
 
   public CustomizedScanInfoBuilder(ScanInfo scanInfo) {
     this.scanInfo = scanInfo;
-    this.scan = new Scan();
+    try {
+      this.scan = new ImmutableScan(new Scan());

Review comment:
   I just provided 2 constructors to ImmutableScan:
   1) ImmutableScan(Scan scan)
   2) ImmutableScan(Get get)
   
   I believe, given that this is an Immutable subclass, providing a default 
   constructor might not be very useful. Even if we provided one, it would 
   internally use `this(new Scan())`. Hence, I thought of not providing a default 
   constructor at all. Ultimately, we want the Immutable class to construct 
   everything during initialization only; after init, no setters should be 
   accessible. Thoughts?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-24472) Enhanced version of KeyPrefixRegionSplitPolicy

2020-06-01 Thread Anil Sadineni (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Sadineni updated HBASE-24472:
--
Attachment: HBASE-24472.001.patch
Status: Patch Available  (was: Open)

Attached patch contains a modified version of 
DelimitedKeyPrefixRegionSplitPolicy.

Summary of change - introduced a new attribute for the ordinal of the delimiter - 
"DelimitedKeyPrefixRegionSplitPolicy.delimiterOrdinal"

Please review and suggest any changes needed. 
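To make the idea concrete: splitting on the Nth occurrence of a delimiter means the split point is derived from the row-key prefix up to that occurrence. The sketch below is illustrative only — the method name, the fallback behavior, and the example key layout are assumptions, not the patch's actual code:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class DelimiterPrefix {

    /**
     * Return the row-key prefix up to (excluding) the Nth occurrence of the
     * delimiter. If the key contains fewer than {@code ordinal} delimiters,
     * fall back to the whole key, mirroring how a split policy would then
     * split on the full row key.
     */
    static byte[] prefixUpToNthDelimiter(byte[] row, byte delimiter, int ordinal) {
        int seen = 0;
        for (int i = 0; i < row.length; i++) {
            if (row[i] == delimiter && ++seen == ordinal) {
                return Arrays.copyOf(row, i); // prefix excludes the delimiter itself
            }
        }
        return row; // fewer than `ordinal` delimiters: use the full key
    }

    public static void main(String[] args) {
        // Hypothetical ATSv2-style row key: cluster!user!flow!run
        byte[] row = "cluster!user!flow!run42".getBytes(StandardCharsets.UTF_8);
        byte[] prefix = prefixUpToNthDelimiter(row, (byte) '!', 2);
        System.out.println(new String(prefix, StandardCharsets.UTF_8)); // prints "cluster!user"
    }
}
```

With a fixed-length or single-delimiter policy, only "cluster" could be used as the grouping prefix; letting the user choose the ordinal keeps, for example, all rows of one user on one region.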

> Enhanced version of KeyPrefixRegionSplitPolicy
> --
>
> Key: HBASE-24472
> URL: https://issues.apache.org/jira/browse/HBASE-24472
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Anil Sadineni
>Priority: Major
> Attachments: HBASE-24472.001.patch
>
>
> With KeyPrefixRegionSplitPolicy and DelimitedKeyPrefixRegionSplitPolicy, the 
> region splitting policy is limited to either a fixed length or a delimiter. 
> With Yarn Application Timeline Server V2, as discussed in YARN-10077, it 
> will be nice to have an enhanced version of 
> DelimitedKeyPrefixRegionSplitPolicy that gives users more flexibility to 
> define the ordinal of the delimiter to consider.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-24480) Deflake TestRSGroupsBasics#testClearDeadServers

2020-06-01 Thread Bharath Vissapragada (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharath Vissapragada resolved HBASE-24480.
--
Resolution: Fixed

Thanks Viraj/Reid for the reviews.

> Deflake TestRSGroupsBasics#testClearDeadServers
> ---
>
> Key: HBASE-24480
> URL: https://issues.apache.org/jira/browse/HBASE-24480
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 1.7.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
> Fix For: 1.7.0
>
>
> Ran into this on our internal forks based on branch-1. It also applies to 
> branch-2 but not master because the code has been re-implemented without 
> co-proc due to HBASE-22514
> Running into this exception in the test run..
> {noformat}
> org.apache.hadoop.hbase.constraint.ConstraintException: The set of servers to 
> remove cannot be null or empty.at 
> org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeServers(RSGroupAdminServer.java:391)
>at 
> org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.postClearDeadServers(RSGroupAdminEndpoint.java:1175)
>at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost$104.call(MasterCoprocessorHost.java:1251)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:1507)
>  at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost.postClearDeadServers(MasterCoprocessorHost.java:1247)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.clearDeadServers(MasterRpcServices.java:1167)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2421) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
>at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)"
>  
> type="org.apache.hadoop.hbase.constraint.ConstraintException">org.apache.hadoop.hbase.constraint.ConstraintException:
>  
> org.apache.hadoop.hbase.constraint.ConstraintException: The set of servers to 
> remove cannot be null or empty.
>   at 
> org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeServers(RSGroupAdminServer.java:391)
>   at 
> org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.postClearDeadServers(RSGroupAdminEndpoint.java:1175)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost$104.call(MasterCoprocessorHost.java:1251)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:1507)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost.postClearDeadServers(MasterCoprocessorHost.java:1247)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.clearDeadServers(MasterRpcServices.java:1167)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2421)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
>   at 
> org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.testClearDeadServers(TestRSGroupsBasics.java:215)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: 
> org.apache.hadoop.hbase.constraint.ConstraintException: The set of servers to 
> remove cannot be null or empty.
>   at 
> org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeServers(RSGroupAdminServer.java:391)
>   at 
> org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.postClearDeadServers(RSGroupAdminEndpoint.java:1175)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost$104.call(MasterCoprocessorHost.java:1251)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:1507)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost.postClearDeadServers(MasterCoprocessorHost.java:1247)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.clearDeadServers(MasterRpcServices.java:1167)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2421)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
>   at 
> 

[jira] [Commented] (HBASE-24480) Deflake TestRSGroupsBasics#testClearDeadServers

2020-06-01 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121170#comment-17121170
 ] 

Bharath Vissapragada commented on HBASE-24480:
--

While back-porting to branch-2, I figured HBASE-20927 already addressed the 
server side issue. So it is very unlikely that we will run into this failure on 
branch-2 and above. 


[jira] [Updated] (HBASE-24480) Deflake TestRSGroupsBasics#testClearDeadServers

2020-06-01 Thread Bharath Vissapragada (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharath Vissapragada updated HBASE-24480:
-
Affects Version/s: (was: 2.3.0)

