[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache
[ https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15525151#comment-15525151 ] Ben Manes commented on HBASE-15560: ---

Thanks [~busbey]! I made the update and will fix the definition in my build.

> TinyLFU-based BlockCache
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
> Issue Type: Improvement
> Components: BlockCache
> Affects Versions: 2.0.0
> Reporter: Ben Manes
> Assignee: Ben Manes
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, tinylfu.patch
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture the frequency and recency of the working set. It achieves concurrency by using an O(n) background thread to prioritize the entries and evict. Accessing an entry is O(1): a hash table lookup, recording the logical access time, and setting a frequency flag. A write is performed in O(1) time by updating the hash table and triggering an async eviction thread. This provides ideal concurrency and minimizes latencies by penalizing the background thread instead of the caller. However, the policy does not age the frequencies and may not be resilient to various workload patterns.
>
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the frequency in a counting sketch, ages periodically by halving the counters, and orders entries by SLRU. An entry is discarded by comparing the frequency of the new arrival (the candidate) to the SLRU's victim, and keeping the one with the higher frequency. This allows the operations to be performed in O(1) time and, through the use of a compact sketch, a much larger history is retained beyond the current working set. In a variety of real-world traces the policy had [near-optimal hit rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
>
> Concurrency is achieved by buffering and replaying the operations, similar to a write-ahead log. A read is recorded into a striped ring buffer and a write into a queue. The operations are applied in batches under a try-lock by an asynchronous thread, thereby tracking the usage pattern without incurring high latencies ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
>
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit rates) the two caches have near-identical throughput and latencies, with LruBlockCache narrowly winning. At medium and small cache sizes, TinyLFU had a 1-4% hit-rate improvement and therefore lower latencies. The lackluster result is because a synthetic Zipfian distribution is used, on which SLRU performs optimally. In a more varied, real-world workload we'd expect to see improvements from being able to make smarter predictions.
>
> The provided patch implements BlockCache using the [Caffeine|https://github.com/ben-manes/caffeine] caching library (see the HighScalability [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]). Edward Bortnikov and Eshcar Hillel have graciously provided guidance for evaluating this patch ([github branch|https://github.com/ben-manes/hbase/tree/tinylfu]).
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
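The admission scheme the description outlines, comparing the candidate's estimated frequency against the SLRU victim's and keeping the more frequent entry, can be illustrated with a toy sketch. This is a minimal illustration under stated assumptions: the class and method names here are invented, and Caffeine's real implementation uses a 4-bit CountMin sketch rather than a hash map.

```java
import java.util.HashMap;
import java.util.Map;

public class TinyLfuAdmission {
  /** A toy frequency sketch; Caffeine uses a compact 4-bit CountMin sketch instead. */
  public static class FrequencySketch {
    private final Map<Object, Integer> counts = new HashMap<>();
    private final int sampleSize;
    private int size;

    public FrequencySketch(int sampleSize) { this.sampleSize = sampleSize; }

    public void increment(Object key) {
      counts.merge(key, 1, Integer::sum);
      if (++size >= sampleSize) {
        reset();
      }
    }

    /** Periodic aging: halve every counter so stale entries fade out. */
    private void reset() {
      counts.replaceAll((k, v) -> v / 2);
      size /= 2;
    }

    public int frequency(Object key) { return counts.getOrDefault(key, 0); }
  }

  /** Admit the candidate only if it is estimated to be hotter than the victim. */
  public static boolean admit(FrequencySketch sketch, Object candidate, Object victim) {
    return sketch.frequency(candidate) > sketch.frequency(victim);
  }

  public static void main(String[] args) {
    FrequencySketch sketch = new FrequencySketch(1000);
    for (int i = 0; i < 5; i++) sketch.increment("hot");
    sketch.increment("cold");
    System.out.println(admit(sketch, "hot", "cold"));  // hot candidate evicts cold victim
    System.out.println(admit(sketch, "cold", "hot"));  // cold candidate is rejected
  }
}
```

The aging step is what distinguishes TinyLFU from plain LFU: halving the counters at a sample boundary lets the cache forget entries that were hot long ago.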
[jira] [Updated] (HBASE-15560) TinyLFU-based BlockCache
[ https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ben Manes updated HBASE-15560: --

Attachment: HBASE-15560.patch

> TinyLFU-based BlockCache
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
> Issue Type: Improvement
> Components: BlockCache
> Affects Versions: 2.0.0
> Reporter: Ben Manes
> Assignee: Ben Manes
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, tinylfu.patch
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
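The buffered-read scheme the issue description outlines, where reads land in striped buffers and a single thread drains them in batches under a try-lock so readers never block on policy maintenance, can be sketched as follows. Names, stripe counts, and buffer sizes are illustrative assumptions, not Caffeine's internals.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

public class ReadBuffer {
  private static final int STRIPES = 4;
  private final List<BlockingQueue<Object>> stripes = new ArrayList<>();
  private final ReentrantLock evictionLock = new ReentrantLock();
  private int drained;  // reads applied to the (stub) eviction policy

  public ReadBuffer() {
    for (int i = 0; i < STRIPES; i++) {
      stripes.add(new ArrayBlockingQueue<>(128));
    }
  }

  /** Record a read; stripe by thread to reduce contention. Lossy on overflow. */
  public void recordRead(Object key) {
    int i = (int) (Thread.currentThread().getId() % STRIPES);
    stripes.get(i).offer(key);  // a dropped read only costs policy accuracy
    tryDrain();
  }

  /** Apply buffered reads in a batch, but only if no other thread is draining. */
  private void tryDrain() {
    if (evictionLock.tryLock()) {
      try {
        for (BlockingQueue<Object> stripe : stripes) {
          Object key;
          while ((key = stripe.poll()) != null) {
            drained++;  // stand-in for "update LRU order / frequency sketch"
          }
        }
      } finally {
        evictionLock.unlock();
      }
    }
  }

  public int drainedCount() { return drained; }

  public static void main(String[] args) {
    ReadBuffer buffer = new ReadBuffer();
    for (int i = 0; i < 10; i++) buffer.recordRead("key-" + i);
    System.out.println(buffer.drainedCount());
  }
}
```

The key property is that a reader never waits for the eviction policy: if the lock is held, the read stays buffered and some later caller (or the async maintenance thread) applies it.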
[jira] [Commented] (HBASE-16653) Backport HBASE-11393 to all branches which support namespace
[ https://issues.apache.org/jira/browse/HBASE-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15525138#comment-15525138 ] Hadoop QA commented on HBASE-16653: ---
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 42s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 17 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 8s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 8s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s {color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s {color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 45s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 4s {color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s {color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 10 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 17m 2s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s {color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 50s {color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 46s {color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 156m 3s
[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache
[ https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15525123#comment-15525123 ] Sean Busbey commented on HBASE-15560: -

The patch failures are because the caffeine library improperly refers to the ALv2:
{code}
The Apache Software License, Version 2.0
http://www.apache.org/licenses/LICENSE-2.0.txt
repo
{code}
The correct name is "Apache License, Version 2.0". You should update the supplemental license information; there are a bunch of examples there from ASF projects that used the wrong name for years. The file is {{hbase-resource-bundle/src/main/resources/supplemental-models.xml}}.

> TinyLFU-based BlockCache
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
> Issue Type: Improvement
> Components: BlockCache
> Affects Versions: 2.0.0
> Reporter: Ben Manes
> Assignee: Ben Manes
> Attachments: HBASE-15560.patch, HBASE-15560.patch, tinylfu.patch
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
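For reference, an entry in {{supplemental-models.xml}} of the kind Sean describes might look like the following. The groupId/artifactId are Caffeine's published Maven coordinates, but the exact structure of HBase's file should be checked against the repository; this is an illustrative sketch, not a copy of the real file.

```xml
<supplement>
  <project>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <licenses>
      <license>
        <!-- the corrected license name -->
        <name>Apache License, Version 2.0</name>
        <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
      </license>
    </licenses>
  </project>
</supplement>
```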
[jira] [Commented] (HBASE-16712) fix hadoop-3.0 profile mvn install
[ https://issues.apache.org/jira/browse/HBASE-16712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15525057#comment-15525057 ] Hadoop QA commented on HBASE-16712: ---
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 28m 15s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 6s {color} | {color:green} hbase-resource-bundle in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 6s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 54s {color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830440/hbase-16712.v1.patch |
| JIRA Issue | HBASE-16712 |
| Optional Tests | asflicense javac javadoc unit xml |
| uname | Linux e1ebdde433d9 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / db394f5 |
| Default Java | 1.8.0_101 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/3725/testReport/ |
| modules | C: hbase-resource-bundle U: hbase-resource-bundle |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/3725/console |
| Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |
This message was automatically generated.
> fix hadoop-3.0 profile mvn install
>
> Key: HBASE-16712
> URL: https://issues.apache.org/jira/browse/HBASE-16712
> Project: HBase
> Issue Type: Bug
> Components: build, hadoop3
> Affects Versions: 2.0.0
> Reporter: Jonathan Hsieh
> Assignee: Jonathan Hsieh
> Fix For: 2.0.0
> Attachments: hbase-16712.v0.patch, hbase-16712.v1.patch
>
> After the compile is fixed, mvn install fails due to transitive dependencies coming from hadoop3.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16604) Scanner retries on IOException can cause the scans to miss data
[ https://issues.apache.org/jira/browse/HBASE-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15525026#comment-15525026 ] ramkrishna.s.vasudevan commented on HBASE-16604:

bq. Thus, ClientScanner will re-open a new RegionScanner by sending a new scan request and get a new scanner name.
Let me check this; I may have been wrong.

bq. The heap will be reset correctly, because the region scanner is closed for good. A completely new RegionScanner will be constructed from scratch.
Ok.

bq. Is it the case that if the scanner is already closed, shipped() will not free up the blocks?
The problem here is that the finally block resets the rpcCallBack with the RpcShippedCallBack, and since the lease is already removed we don't add it back, so the lease-expiry logic that actually returns the blocks never runs. Anyway, after seeing your comments I think I have a better way to fix this. Will be back here.

bq. Yes, I have checked that in other contexts where we close the scanner in case of exception, we still call the coprocessor methods.
Ok. If you have verified it then it is fine. Thanks a lot [~enis].

> Scanner retries on IOException can cause the scans to miss data
>
> Key: HBASE-16604
> URL: https://issues.apache.org/jira/browse/HBASE-16604
> Project: HBase
> Issue Type: Bug
> Components: regionserver, Scanners
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
> Attachments: HBASE-16604-branch-1.3-addendum.patch, hbase-16604_v1.patch, hbase-16604_v2.patch, hbase-16604_v3.branch-1.patch, hbase-16604_v3.patch
>
> Debugging an ITBLL failure, where the Verify did not "see" all the data in the cluster, I've noticed that if we end up getting a generic IOException from the HFileReader level, we may end up missing the rest of the data in the region.
I was able to manually test this, and this stack trace helps to > understand what is going on: > {code} > 2016-09-09 16:27:15,633 INFO [hconnection-0x71ad3d8a-shared--pool21-t9] > client.ScannerCallable(376): Open scanner=1 for > scan={"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":2097152,"families":{"testFamily":["testFamily"]},"caching":100,"maxVersions":1,"timeRange":[0,9223372036854775807]} > on region > region=testScanThrowsException,,1473463632707.b2adfb618e5d0fe225c1dc40c0eabfee., > hostname=hw10676,51833,1473463626529, seqNum=2 > 2016-09-09 16:27:15,634 INFO > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] > regionserver.RSRpcServices(2196): scan request:scanner_id: 1 number_of_rows: > 100 close_scanner: false next_call_seq: 0 client_handles_partials: true > client_handles_heartbeats: true renew: false > 2016-09-09 16:27:15,635 INFO > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] > regionserver.RSRpcServices(2510): Rolling back next call seqId > 2016-09-09 16:27:15,635 INFO > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] > regionserver.RSRpcServices(2565): Throwing new > ServiceExceptionjava.io.IOException: Could not reseek > StoreFileScanner[HFileScanner for reader > reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c, > compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, > currentSize=1567264, freeSize=1525578848, maxSize=1527146112, > heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, > multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, > 
lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, > avgValueLen=3, entries=17576, length=866998, > cur=/testFamily:/OLDEST_TIMESTAMP/Minimum/vlen=0/seqid=0] to key > /testFamily:testFamily/LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0 > 2016-09-09 16:27:15,635 DEBUG > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] ipc.CallRunner(110): > B.fifo.QRpcServer.handler=5,queue=0,port=51833: callId: 26 service: > ClientService methodName: Scan size: 26 connection: 192.168.42.75:51903 > java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for > reader > reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c, > compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, >
[jira] [Commented] (HBASE-16698) Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload
[ https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524987#comment-15524987 ] Heng Chen commented on HBASE-16698: ---

How much will performance degrade when the ops all target a single region? [~carp84] do you have some performance results? In our production cluster (not a big cluster), many tables have just a few regions but QPS is high, so I'm a little worried about it if we make this the default.

> Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload
>
> Key: HBASE-16698
> URL: https://issues.apache.org/jira/browse/HBASE-16698
> Project: HBase
> Issue Type: Improvement
> Components: Performance
> Affects Versions: 1.1.6, 1.2.3
> Reporter: Yu Li
> Assignee: Yu Li
> Attachments: HBASE-16698.patch, HBASE-16698.v2.patch, hadoop0495.et2.jstack
>
> As titled, on our production environment we observed 98 out of 128 handlers get stuck waiting for the CountDownLatch {{seqNumAssignedLatch}} inside {{WALKey#getWriteEntry}} under a high writing workload.
>
> After digging into the problem, we found that it is mainly caused by advancing mvcc in the append logic. Below is some detailed analysis:
>
> Under current branch-1 code logic, all batch puts will call {{WALKey#getWriteEntry}} after appending the edit to the WAL, and {{seqNumAssignedLatch}} is only released when the relative append call is handled by RingBufferEventHandler (see {{FSWALEntry#stampRegionSequenceId}}). Because we currently use a single event handler for the ringbuffer, the append calls are handled one by one (actually, lots of our current logic depends on this sequential handling), and this becomes a bottleneck under a high writing workload. The worst part is that by default we only use one WAL per RS, so appends on all regions are handled sequentially, which causes contention among different regions...
>
> To fix this, we could make use of the "sequential appends" mechanism: grab the WriteEntry before publishing the append onto the ringbuffer and use it as the sequence id, only that we need to add a lock to make "grab WriteEntry" and "append edit" a transaction. This will still cause contention inside a region but avoids contention between different regions. This solution is already verified in our online environment and proved to be effective.
>
> Notice that for the master (2.0) branch, since we already changed the write pipeline to sync before writing the memstore (HBASE-15158), this issue only exists for the ASYNC_WAL write scenario.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
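The proposed fix, assigning the sequence id and publishing the append as one locked step so callers no longer latch on the single ring-buffer consumer, can be sketched roughly as below. All names (WalAppender, WriteEntry, the queue standing in for the ring buffer) are illustrative assumptions, not HBase's actual classes.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

public class WalAppender {
  public static final class WriteEntry {
    public final long seqId;
    WriteEntry(long seqId) { this.seqId = seqId; }
  }

  private final AtomicLong nextSeqId = new AtomicLong(1);
  private final ReentrantLock appendLock = new ReentrantLock();
  // Stand-in for the WAL ring buffer drained by an async consumer thread.
  private final BlockingQueue<WriteEntry> ringBuffer = new ArrayBlockingQueue<>(1024);

  /** Assign the sequence id and publish the append as one atomic step. */
  public WriteEntry append(byte[] edit) {
    appendLock.lock();
    try {
      WriteEntry entry = new WriteEntry(nextSeqId.getAndIncrement());
      ringBuffer.offer(entry);  // consumer syncs the WAL asynchronously
      return entry;             // caller gets its seq id without latching
    } finally {
      appendLock.unlock();
    }
  }

  public static void main(String[] args) {
    WalAppender wal = new WalAppender();
    System.out.println(wal.append(new byte[0]).seqId);  // 1
    System.out.println(wal.append(new byte[0]).seqId);  // 2
  }
}
```

The lock serializes appends within one WAL (so contention inside a region remains), but the caller returns with a valid sequence id immediately instead of waiting for the consumer thread to stamp it.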
[jira] [Updated] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
[ https://issues.apache.org/jira/browse/HBASE-16714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-16714: ---

Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
Status: Resolved (was: Patch Available)

> Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
>
> Key: HBASE-16714
> URL: https://issues.apache.org/jira/browse/HBASE-16714
> Project: HBase
> Issue Type: Improvement
> Components: proc-v2, test
> Affects Versions: 2.0.0
> Reporter: Stephen Yuan Jiang
> Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
> Attachments: HBASE-16714.v1-master.patch
>
> All table DDL procedure tests have the same set-up. To avoid duplicate code and help maintain the existing tests, we should move the shared set-up into a base class.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache
[ https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524953#comment-15524953 ] Ben Manes commented on HBASE-15560: ---

[~eshcar] None of the issues appear to be related to my changes. Do you know if there is anything I can do about it?

> TinyLFU-based BlockCache
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
> Issue Type: Improvement
> Components: BlockCache
> Affects Versions: 2.0.0
> Reporter: Ben Manes
> Assignee: Ben Manes
> Attachments: HBASE-15560.patch, HBASE-15560.patch, tinylfu.patch
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
[ https://issues.apache.org/jira/browse/HBASE-16714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524948#comment-15524948 ] Stephen Yuan Jiang commented on HBASE-16714:

The change is local to table DDL procedure tests; it should not affect other tests.

> Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
>
> Key: HBASE-16714
> URL: https://issues.apache.org/jira/browse/HBASE-16714
> Project: HBase
> Issue Type: Improvement
> Components: proc-v2, test
> Affects Versions: 2.0.0
> Reporter: Stephen Yuan Jiang
> Assignee: Stephen Yuan Jiang
> Attachments: HBASE-16714.v1-master.patch
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16712) fix hadoop-3.0 profile mvn install
[ https://issues.apache.org/jira/browse/HBASE-16712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-16712: --- Attachment: hbase-16712.v1.patch v1 fixes spaces / tabs. > fix hadoop-3.0 profile mvn install > -- > > Key: HBASE-16712 > URL: https://issues.apache.org/jira/browse/HBASE-16712 > Project: HBase > Issue Type: Bug > Components: build, hadoop3 >Affects Versions: 2.0.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0 > > Attachments: hbase-16712.v0.patch, hbase-16712.v1.patch > > > After the compile is fixed, mvn install fails due to transitive dependencies > coming from hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache
[ https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524912#comment-15524912 ] Hadoop QA commented on HBASE-15560: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 53m 28s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 54s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 49s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 0s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 0s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s {color} | {color:red} hbase-common in master has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 13s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 4m 14s {color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 4s {color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 4s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 1m 4s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 12s {color} | {color:red} The patch causes 11 errors with Hadoop v2.4.0. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 7s {color} | {color:red} The patch causes 11 errors with Hadoop v2.4.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 13m 7s {color} | {color:red} The patch causes 11 errors with Hadoop v2.5.0. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 17m 7s {color} | {color:red} The patch causes 11 errors with Hadoop v2.5.1. 
{color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 21m 7s {color} | {color:red} The patch causes 11 errors with Hadoop v2.5.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 25m 17s {color} | {color:red} The patch causes 11 errors with Hadoop v2.6.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 29m 25s {color} | {color:red} The patch causes 11 errors with Hadoop v2.6.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 33m 29s {color} | {color:red} The patch causes 11 errors with Hadoop v2.6.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 37m 38s {color} | {color:red} The patch causes 11 errors with Hadoop v2.7.1. {color} | | {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 2m 10s {color} | {color:red} root in the patch failed. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 49s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 47s {color} | {color:red} hbase-server generated 1
[jira] [Commented] (HBASE-16694) Reduce garbage for onDiskChecksum in HFileBlock
[ https://issues.apache.org/jira/browse/HBASE-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524883#comment-15524883 ] Hudson commented on HBASE-16694: FAILURE: Integrated in Jenkins build HBase-1.4 #432 (See [https://builds.apache.org/job/HBase-1.4/432/]) HBASE-16694 Reduce garbage for onDiskChecksum in HFileBlock (binlijin) (apurtell: rev 67a43c30594329dc4a0de19787d2796e05f0b2c8) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java > Reduce garbage for onDiskChecksum in HFileBlock > --- > > Key: HBASE-16694 > URL: https://issues.apache.org/jira/browse/HBASE-16694 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0, 1.4.0, 0.98.23 > > Attachments: HBASE-16694-master.patch > > > Currently, finishing an HFileBlock creates a new byte[] for onDiskChecksum; > we can reuse that buffer instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
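The buffer-reuse idea can be sketched as follows. This is an illustration of the general technique, not the actual HFileBlock code; the class and method names are hypothetical:

```java
// Sketch of reusing a checksum buffer across block finishes instead of
// allocating a fresh byte[] every time, reallocating only on growth.
public class ChecksumBufferReuse {
    private byte[] onDiskChecksum = new byte[0];

    // Returns a buffer of at least `size` bytes; smaller requests reuse
    // the existing array, so steady-state allocation drops to zero.
    byte[] checksumBuffer(int size) {
        if (onDiskChecksum.length < size) {
            onDiskChecksum = new byte[size];
        }
        return onDiskChecksum;
    }

    public static void main(String[] args) {
        ChecksumBufferReuse b = new ChecksumBufferReuse();
        byte[] first = b.checksumBuffer(16);
        byte[] second = b.checksumBuffer(8); // smaller request reuses the array
        if (first != second) throw new AssertionError("expected reuse");
        byte[] third = b.checksumBuffer(32); // growth forces one reallocation
        if (third == first) throw new AssertionError("expected growth");
        System.out.println("reuse ok");
    }
}
```

One caveat with this pattern: the reused array may be larger than the current checksum, so callers must track the valid length separately rather than relying on `array.length`.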
[jira] [Commented] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524882#comment-15524882 ] Hudson commented on HBASE-16649: FAILURE: Integrated in Jenkins build HBase-1.4 #432 (See [https://builds.apache.org/job/HBase-1.4/432/]) HBASE-16649 Truncate table with splits preserved can cause both data (matteo.bertozzi: rev 4566e4df58bdd176228aab2cd3cfd80dd983072f) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.3.0, 1.1.7, 0.98.23, 1.2.4 > > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Since truncating a table with splits preserved deletes the hfiles but reuses the > previous regioninfo, it can cause odd behaviors > - Case 1: *Data appeared after truncate* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', make sure memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. start the regionserver, now it is the time to witness the miracle! 
the > truncated data appeared in table 'test' > - Case 2: *Data loss* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write some data to 'test', no matter how much > 3. truncate 'test' with splits preserved > 4. restart the regionserver to reset the seqid > 5. write some data, but less than in step 2, since we don't want the seqid to run over > the one from step 2 > 6. kill the regionserver hosting the region(s) of 'test' > 7. restart the regionserver. Congratulations! the data written in step 5 is now all > lost > *Why?* > for case 1 > Since preserving splits in the truncate table procedure does not change the > regioninfo, when log replay happens the 'unflushed' data is restored back > to the region > for case 2 > the flushedSequenceIdByRegion map is stored in the Master, keyed by the > region's encodedName. Although the table is truncated, the region's name is > not changed since we chose to preserve the splits. So after truncating the > table, the region's sequenceid is reset in the regionserver, but not in the > master. When a flush completes and reports to the master, the master will reject the update > of the sequenceid since the new one is smaller than the old one. The same happens > in log replay: all the edits written in step 5 will be skipped since they have a > smaller seqid -- This message was sent by Atlassian JIRA (v6.3.4#6332)
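The master-side bookkeeping described for case 2 can be sketched directly. This is an illustrative model of the described behavior, not the actual ServerManager code; the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the master's highest-flushed-sequence-id map, keyed by the
// region's encoded name, where a smaller reported seqid is rejected.
public class FlushedSeqIdSketch {
    private final Map<String, Long> flushedSequenceIdByRegion = new HashMap<>();

    // Returns true if accepted. A report lower than the recorded value is
    // ignored, which is exactly why a truncated region that keeps its old
    // name (splits preserved) loses the edits written after truncation.
    boolean reportFlushedSeqId(String encodedRegionName, long seqId) {
        Long prev = flushedSequenceIdByRegion.get(encodedRegionName);
        if (prev != null && seqId < prev) {
            return false; // rejected: master still remembers the pre-truncate seqid
        }
        flushedSequenceIdByRegion.put(encodedRegionName, seqId);
        return true;
    }

    public static void main(String[] args) {
        FlushedSeqIdSketch master = new FlushedSeqIdSketch();
        if (!master.reportFlushedSeqId("region-a", 100)) throw new AssertionError();
        // Region truncated with splits preserved: same name, seqid reset low.
        if (master.reportFlushedSeqId("region-a", 5)) {
            throw new AssertionError("should have been rejected");
        }
        System.out.println("smaller seqid rejected, as the report describes");
    }
}
```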
[jira] [Commented] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator
[ https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524881#comment-15524881 ] Hudson commented on HBASE-16705: FAILURE: Integrated in Jenkins build HBase-1.4 #432 (See [https://builds.apache.org/job/HBase-1.4/432/]) HBASE-16705 Eliminate long to Long auto boxing in LongComparator. (apurtell: rev a3485cc5ab42cf20f1edd53f1d235d2be3038a80) * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/filter/LongComparator.java > Eliminate long to Long auto boxing in LongComparator > > > Key: HBASE-16705 > URL: https://issues.apache.org/jira/browse/HBASE-16705 > Project: HBase > Issue Type: Improvement > Components: Filters >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0, 1.4.0, 0.98.23 > > Attachments: HBASE-16705-master.patch > > > LongComparator > @Override > public int compareTo(byte[] value, int offset, int length) { > Long that = Bytes.toLong(value, offset, length); > return this.longValue.compareTo(that); > } > Every call boxes the long into a Long, which is not necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
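A boxing-free version compares the primitives directly with `Long.compare`. The sketch below illustrates the fix in isolation; `toLong` is a minimal stand-in for `Bytes.toLong`, not the actual HBase utility:

```java
// Sketch of eliminating the per-call Long allocation in compareTo by
// comparing primitive longs with the static Long.compare method.
public class LongComparatorSketch {
    private final long longValue;

    LongComparatorSketch(long value) { this.longValue = value; }

    // Before: Long that = Bytes.toLong(...); return this.longValue.compareTo(that);
    // After: no boxing, no allocation per call.
    public int compareTo(byte[] value, int offset, int length) {
        long that = toLong(value, offset);
        return Long.compare(longValue, that);
    }

    // Minimal big-endian decode, standing in for Bytes.toLong.
    static long toLong(byte[] b, int offset) {
        long v = 0;
        for (int i = 0; i < 8; i++) {
            v = (v << 8) | (b[offset + i] & 0xFF);
        }
        return v;
    }

    public static void main(String[] args) {
        byte[] buf = new byte[8];
        buf[7] = 42; // big-endian encoding of 42L
        LongComparatorSketch cmp = new LongComparatorSketch(42L);
        if (cmp.compareTo(buf, 0, 8) != 0) throw new AssertionError("expected equal");
        System.out.println("ok");
    }
}
```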
[jira] [Updated] (HBASE-11354) HConnectionImplementation#DelayedClosing does not start
[ https://issues.apache.org/jira/browse/HBASE-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-11354: --- Resolution: Fixed Status: Resolved (was: Patch Available) > HConnectionImplementation#DelayedClosing does not start > --- > > Key: HBASE-11354 > URL: https://issues.apache.org/jira/browse/HBASE-11354 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.99.0, 0.98.3 >Reporter: Qianxi Zhang >Assignee: Qianxi Zhang >Priority: Minor > Fix For: 0.98.23 > > Attachments: HBASE-11354-0.98.patch, HBASE_11354 (1).patch, > HBASE_11354.patch, HBASE_11354.patch, HBASE_11354.patch > > > The method "createAndStart" in class DelayedClosing only creates an instance > but forgets to start it, so the delayedClosing thread never runs. > ConnectionManager#1623 > {code} > static DelayedClosing createAndStart(HConnectionImplementation hci){ > Stoppable stoppable = new Stoppable() { > private volatile boolean isStopped = false; > @Override public void stop(String why) { isStopped = true;} > @Override public boolean isStopped() {return isStopped;} > }; > return new DelayedClosing(hci, stoppable); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
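The fix the report implies is that a `createAndStart` factory must actually start what it creates. A minimal sketch of that contract, using plain threads rather than HBase's real classes (all names here are illustrative):

```java
// Sketch of a createAndStart factory that honors its name: the reported bug
// was a factory that created the worker but never called start() on it.
public class CreateAndStartSketch {
    static Thread createAndStart(Runnable task) {
        Thread t = new Thread(task, "delayedClosing");
        t.start(); // the missing call in the reported bug
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        final boolean[] ran = { false };
        Thread t = createAndStart(() -> ran[0] = true);
        t.join();
        if (!ran[0]) throw new AssertionError("thread did not run");
        System.out.println("started and ran");
    }
}
```

Factories named `createAndStart` are easy to get wrong precisely because the compiler cannot check the "AndStart" half; a unit test that joins the returned thread and asserts it ran catches this class of bug.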
[jira] [Commented] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
[ https://issues.apache.org/jira/browse/HBASE-16714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524852#comment-15524852 ] Hadoop QA commented on HBASE-16714: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 12 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 42s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 32m 15s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 14s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 74m 12s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.filter.TestFilter | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830429/HBASE-16714.v1-master.patch | | JIRA Issue | HBASE-16714 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 6347fb6f22a8 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / b9ec59e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/3723/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/3723/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/3723/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/3723/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Procedure V2 - use base class to remove duplicate set up test code in table > DDL procedures >
[jira] [Commented] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524804#comment-15524804 ] Hudson commented on HBASE-16649: FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #24 (See [https://builds.apache.org/job/HBase-1.3-JDK8/24/]) HBASE-16649 Truncate table with splits preserved can cause both data (matteo.bertozzi: rev 1441b7c795292ce5b056ff9e3b5b2443ecd8e8cb) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.3.0, 1.1.7, 0.98.23, 1.2.4 > > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Since truncate table with splits preserved will delete hfiles and use the > previous regioninfo. It can cause odd behaviors > - Case 1: *Data appeared after truncate* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', make sure memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. start the regionserver, now it is the time to witness the miracle! 
the > truncated data appeared in table 'test' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524799#comment-15524799 ] Hudson commented on HBASE-16649: FAILURE: Integrated in Jenkins build HBase-1.2-JDK8 #30 (See [https://builds.apache.org/job/HBase-1.2-JDK8/30/]) HBASE-16649 Truncate table with splits preserved can cause both data (matteo.bertozzi: rev 2733e24d3f2f110ac98d8876ee1de1fd9740b51e) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.3.0, 1.1.7, 0.98.23, 1.2.4 > > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Since truncate table with splits preserved will delete hfiles and use the > previous regioninfo. It can cause odd behaviors > - Case 1: *Data appeared after truncate* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', make sure memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. start the regionserver, now it is the time to witness the miracle! 
the > truncated data appeared in table 'test' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524798#comment-15524798 ] Hudson commented on HBASE-16649: FAILURE: Integrated in Jenkins build HBase-1.2-JDK7 #33 (See [https://builds.apache.org/job/HBase-1.2-JDK7/33/]) HBASE-16649 Truncate table with splits preserved can cause both data (matteo.bertozzi: rev 2733e24d3f2f110ac98d8876ee1de1fd9740b51e) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.3.0, 1.1.7, 0.98.23, 1.2.4 > > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Since truncate table with splits preserved will delete hfiles and use the > previous regioninfo. It can cause odd behaviors > - Case 1: *Data appeared after truncate* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', make sure memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. start the regionserver, now it is the time to witness the miracle! 
the > truncated data appeared in table 'test' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16711) Fix hadoop-3.0 profile compile
[ https://issues.apache.org/jira/browse/HBASE-16711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524793#comment-15524793 ] Hadoop QA commented on HBASE-16711: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 39m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 10s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 2s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 42s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 33s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile 
{color} | {color:green} 1m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 38m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 1s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s {color} | {color:green} hbase-thrift in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 122m 53s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.filter.TestFilter | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.0 Server=1.12.0 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830420/hbase-16711.v0.patch | | JIRA Issue | HBASE-16711 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 0a6be6cd315f 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / b9ec59e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/3720/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/3720/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/3720/testReport/ | |
[jira] [Commented] (HBASE-11354) HConnectionImplementation#DelayedClosing does not start
[ https://issues.apache.org/jira/browse/HBASE-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524789#comment-15524789 ] Hadoop QA commented on HBASE-11354: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 57s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 32s {color} | {color:green} 0.98 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} 0.98 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s {color} | {color:green} 0.98 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s {color} | {color:green} 0.98 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} 0.98 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s {color} | {color:red} hbase-client in 0.98 has 16 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s {color} | {color:red} hbase-client in 0.98 failed with JDK v1.8.0_101. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} 0.98 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 11m 38s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 16s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s {color} | {color:red} hbase-client in the patch failed with JDK v1.8.0_101. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 18s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 54s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-27 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830428/HBASE-11354-0.98.patch | | JIRA Issue | HBASE-11354 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux b55475204fa2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Updated] (HBASE-16653) Backport HBASE-11393 to all branches which support namespace
[ https://issues.apache.org/jira/browse/HBASE-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-16653: --- Attachment: HBASE-16653-branch-1-v2.patch Attaching a v2 to fix the findbugs warning; the whitespace is introduced by protobuf-generated code. Retrying the unit tests. > Backport HBASE-11393 to all branches which support namespace > > > Key: HBASE-16653 > URL: https://issues.apache.org/jira/browse/HBASE-16653 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0, 1.0.5, 1.3.1, 0.98.22, 1.1.7, 1.2.4 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang > Fix For: 1.4.0 > > Attachments: HBASE-16653-branch-1-v1.patch, > HBASE-16653-branch-1-v2.patch > > > As HBASE-11386 mentioned, the parsing code for the replication table-cfs config > is wrong when a table name contains a namespace, so only the default > namespace's tables can be configured in the peer. This is a bug in all branches that > support namespaces. HBASE-11393 resolved it by using a pb object, but that fix was > only merged to the master branch; the other branches still have this problem. We > should fix this bug in all branches that support namespaces. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16694) Reduce garbage for onDiskChecksum in HFileBlock
[ https://issues.apache.org/jira/browse/HBASE-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524707#comment-15524707 ] binlijin commented on HBASE-16694: -- Thanks very much for the review and commit! [~andrew.purt...@gmail.com] > Reduce garbage for onDiskChecksum in HFileBlock > --- > > Key: HBASE-16694 > URL: https://issues.apache.org/jira/browse/HBASE-16694 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0, 1.4.0, 0.98.23 > > Attachments: HBASE-16694-master.patch > > > Currently, finishing an HFileBlock creates a new byte[] for onDiskChecksum; > we can reuse the buffer instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
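The buffer-reuse idea described in HBASE-16694 can be sketched as below. This is a minimal illustration, not the actual HFileBlock code; `ChecksumBufferHolder` and `ensureCapacity` are hypothetical names chosen for the example.

```java
// Hypothetical sketch of the reuse pattern: instead of allocating a fresh
// byte[] every time a block is finished, keep one array per writer and grow
// it only when the required checksum size exceeds the current capacity.
class ChecksumBufferHolder {
    private byte[] onDiskChecksum = new byte[0];

    // Returns a buffer of at least `size` bytes, reusing the previous
    // allocation whenever it is already large enough.
    byte[] ensureCapacity(int size) {
        if (onDiskChecksum.length < size) {
            onDiskChecksum = new byte[size];
        }
        return onDiskChecksum;
    }
}
```

Because checksum sizes for same-sized blocks are stable, the steady state allocates nothing, which is the garbage reduction the issue is after.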
[jira] [Commented] (HBASE-16691) Optimize KeyOnlyFilter by utilizing KeyOnlyCell
[ https://issues.apache.org/jira/browse/HBASE-16691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524689#comment-15524689 ] binlijin commented on HBASE-16691: -- Thanks very much for the review and commit! [~anoop.hbase] [~ramkrishna.s.vasude...@gmail.com] [~te...@apache.org] > Optimize KeyOnlyFilter by utilizing KeyOnlyCell > --- > > Key: HBASE-16691 > URL: https://issues.apache.org/jira/browse/HBASE-16691 > Project: HBase > Issue Type: Improvement >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16691-master.patch > > > KeyOnlyFilter#transformCell returns a KeyOnlyCell that has no value (or has the > valueLength as its value). Currently it copies the whole row key into a new byte[] > and creates a new KeyValue; we can eliminate the copy by using a wrapping > KeyOnlyCell that ignores the cell's value. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
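The wrapping approach described in HBASE-16691 can be sketched as follows. `SimpleCell` is a reduced stand-in for HBase's `Cell` interface, limited to the methods the example needs, and `KeyOnlyCellSketch` is a hypothetical name; the real patch implements the full interface.

```java
// Reduced stand-in for HBase's Cell interface.
interface SimpleCell {
    byte[] getRowArray();
    byte[] getValueArray();
    int getValueLength();
}

// Wraps the original cell and hides its value, instead of copying the key
// into a new byte[] and constructing a fresh KeyValue.
class KeyOnlyCellSketch implements SimpleCell {
    private static final byte[] EMPTY = new byte[0];
    private final SimpleCell delegate;

    KeyOnlyCellSketch(SimpleCell delegate) { this.delegate = delegate; }

    // Key components are forwarded without any copying.
    @Override public byte[] getRowArray() { return delegate.getRowArray(); }

    // The value is simply not exposed.
    @Override public byte[] getValueArray() { return EMPTY; }
    @Override public int getValueLength() { return 0; }
}
```

The wrapper is allocation-free apart from the small wrapper object itself, which is the copy elimination the issue describes.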
[jira] [Commented] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524685#comment-15524685 ] Hudson commented on HBASE-16649: FAILURE: Integrated in Jenkins build HBase-1.3-JDK7 #23 (See [https://builds.apache.org/job/HBase-1.3-JDK7/23/]) HBASE-16649 Truncate table with splits preserved can cause both data (matteo.bertozzi: rev 1441b7c795292ce5b056ff9e3b5b2443ecd8e8cb) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.3.0, 1.1.7, 0.98.23, 1.2.4 > > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Truncating a table with splits preserved deletes the hfiles but reuses the > previous regioninfo. This can cause odd behaviors:
> - Case 1: *Data appeared after truncate* > reproduction procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', making sure the memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. start the regionserver; now it is time to witness the miracle: the > truncated data reappears in table 'test' > - Case 2: *Data loss* > reproduction procedure: > 1. create a table, let's say 'test' > 2. write some data to 'test', no matter how much > 3. truncate 'test' with splits preserved > 4. restart the regionserver to reset the seqid > 5. write some data, but less than in step 2, since we don't want the seqid to run over > the one from step 2 > 6. kill the regionserver hosting the region(s) of 'test' > 7. restart the regionserver. Congratulations! The data written in step 5 is now all > lost > *Why?* > For case 1: > since preserving splits in the truncate table procedure does not change the > regioninfo, the 'unflushed' data is restored to the region when log replay happens. > For case 2: > the flushedSequenceIdByRegion values are stored in the Master in a map keyed by the > region's encodedName. Although the table is truncated, the region's name is > unchanged since we chose to preserve the splits. So after the table is truncated, the > region's sequenceid is reset in the regionserver but not in the Master. When a flush > reports to the Master, the Master rejects the sequenceid update since the new one is > smaller than the old one. The same happens in log replay: all the edits written in > step 5 are skipped since they have a smaller seqid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
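The seqid-rejection behavior behind case 2 can be sketched as below. This is a hypothetical mirror of the described logic, not the Master's actual code; `FlushedSeqIdTracker` and `reportFlushedSeqId` are illustrative names.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the behavior described above: the master tracks the
// highest flushed sequence id per region and ignores any report that would
// move it backwards. After a truncate-with-splits, the region name (and so
// the map key) is unchanged while the regionserver's seqid was reset, so all
// subsequent reports and replayed edits look stale and get rejected.
class FlushedSeqIdTracker {
    private final Map<String, Long> flushedSequenceIdByRegion = new HashMap<>();

    // Returns true if the update was accepted.
    boolean reportFlushedSeqId(String encodedRegionName, long seqId) {
        Long existing = flushedSequenceIdByRegion.get(encodedRegionName);
        if (existing != null && seqId <= existing) {
            return false;  // reject: not newer than the recorded seqid
        }
        flushedSequenceIdByRegion.put(encodedRegionName, seqId);
        return true;
    }
}
```

The fix in the attached patches works by clearing the stale entry when the table is truncated, so a reset regionserver seqid is accepted again.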
[jira] [Updated] (HBASE-16715) Signing keys could not be imported
[ https://issues.apache.org/jira/browse/HBASE-16715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francis Chuang updated HBASE-16715: --- Description: I am trying to import the signing keys to verify downloaded hbase releases, but it appears to fail: {code} $ wget -O /tmp/KEYS https://www-us.apache.org/dist/hbase/KEYS Connecting to www-us.apache.org (140.211.11.105:443) KEYS 100% |***| 50537 0:00:00 ETA $ gpg --import /tmp/KEYS gpg: directory '/root/.gnupg' created gpg: new configuration file '/root/.gnupg/dirmngr.conf' created gpg: new configuration file '/root/.gnupg/gpg.conf' created gpg: keybox '/root/.gnupg/pubring.kbx' created gpg: /root/.gnupg/trustdb.gpg: trustdb created gpg: key 945D66AF: public key "Jean-Daniel Cryans (ASF key)" imported gpg: key D34B98D6: public key "Michael Stack " imported gpg: key 30CD0996: public key "Michael Stack " imported gpg: key AEC77EAF: public key "Todd Lipcon " imported gpg: key F48B08A4: public key "Ted Yu (Apache Public Key) " imported gpg: key 867B57B8: public key "Ramkrishna S Vasudevan (for code checkin) " imported gpg: key 7CA45750: public key "Lars Hofhansl (CODE SIGNING KEY) " imported gpg: key A1AC25A9: public key "Lars Hofhansl (CODE SIGNING KEY) " imported gpg: key C7CFE328: public key "Lars Hofhansl (CODE SIGNING KEY) " imported gpg: key E964B5FF: public key "Enis Soztutar (CODE SIGNING KEY) " imported gpg: key 0D80DB7C: public key "Sean Busbey (CODE SIGNING KEY) " imported gpg: key 8644EEB6: public key "Nick Dimiduk " imported gpg: invalid radix64 character 3A skipped gpg: CRC error; E1B6C3 - DFECFB gpg: [don't know]: invalid packet (ctb=55) gpg: read_block: read error: Invalid packet gpg: import from '/tmp/KEYS' failed: Invalid keyring gpg: Total number processed: 12 gpg: imported: 12 gpg: no ultimately trusted keys found {code} was: I am trying to import the signing keys to verify downloaded hbase releases, but it appears to fail: $ wget -O /tmp/KEYS 
https://www-us.apache.org/dist/hbase/KEYS Connecting to www-us.apache.org (140.211.11.105:443) KEYS 100% |***| 50537 0:00:00 ETA $ gpg --import /tmp/KEYS gpg: directory '/root/.gnupg' created gpg: new configuration file '/root/.gnupg/dirmngr.conf' created gpg: new configuration file '/root/.gnupg/gpg.conf' created gpg: keybox '/root/.gnupg/pubring.kbx' created gpg: /root/.gnupg/trustdb.gpg: trustdb created gpg: key 945D66AF: public key "Jean-Daniel Cryans (ASF key) " imported gpg: key D34B98D6: public key "Michael Stack " imported gpg: key 30CD0996: public key "Michael Stack " imported gpg: key AEC77EAF: public key "Todd Lipcon " imported gpg: key F48B08A4: public key "Ted Yu (Apache Public Key) " imported gpg: key 867B57B8: public key "Ramkrishna S Vasudevan (for code checkin) " imported gpg: key 7CA45750: public key "Lars Hofhansl (CODE SIGNING KEY) " imported gpg: key A1AC25A9: public key "Lars Hofhansl (CODE SIGNING KEY) " imported gpg: key C7CFE328: public key "Lars Hofhansl (CODE SIGNING KEY) " imported gpg: key E964B5FF: public key "Enis Soztutar (CODE SIGNING KEY) " imported gpg: key 0D80DB7C: public key "Sean Busbey (CODE SIGNING KEY) " imported gpg: key 8644EEB6: public key "Nick Dimiduk " imported gpg: invalid radix64 character 3A skipped gpg: CRC error; E1B6C3 - DFECFB gpg: [don't know]: invalid packet (ctb=55) gpg: read_block: read error: Invalid packet gpg: import from '/tmp/KEYS' failed: Invalid keyring gpg: Total number processed: 12 gpg: imported: 12 gpg: no ultimately trusted keys found > Signing keys could not be imported > -- > > Key: HBASE-16715 > URL: https://issues.apache.org/jira/browse/HBASE-16715 > Project: HBase > Issue Type: Bug >Reporter: Francis Chuang > > I am trying to import the signing keys to verify downloaded hbase releases, > but it appears to fail: > {code} > $ wget -O /tmp/KEYS https://www-us.apache.org/dist/hbase/KEYS > Connecting to www-us.apache.org (140.211.11.105:443) > KEYS 100% |***| 50537 0:00:00 > ETA > $ gpg --import 
/tmp/KEYS > gpg: directory '/root/.gnupg' created > gpg: new configuration file '/root/.gnupg/dirmngr.conf' created > gpg: new configuration file '/root/.gnupg/gpg.conf' created > gpg: keybox '/root/.gnupg/pubring.kbx' created >
[jira] [Created] (HBASE-16715) Signing keys could not be imported
Francis Chuang created HBASE-16715: -- Summary: Signing keys could not be imported Key: HBASE-16715 URL: https://issues.apache.org/jira/browse/HBASE-16715 Project: HBase Issue Type: Bug Reporter: Francis Chuang I am trying to import the signing keys to verify downloaded hbase releases, but it appears to fail: $ wget -O /tmp/KEYS https://www-us.apache.org/dist/hbase/KEYS Connecting to www-us.apache.org (140.211.11.105:443) KEYS 100% |***| 50537 0:00:00 ETA $ gpg --import /tmp/KEYS gpg: directory '/root/.gnupg' created gpg: new configuration file '/root/.gnupg/dirmngr.conf' created gpg: new configuration file '/root/.gnupg/gpg.conf' created gpg: keybox '/root/.gnupg/pubring.kbx' created gpg: /root/.gnupg/trustdb.gpg: trustdb created gpg: key 945D66AF: public key "Jean-Daniel Cryans (ASF key)" imported gpg: key D34B98D6: public key "Michael Stack " imported gpg: key 30CD0996: public key "Michael Stack " imported gpg: key AEC77EAF: public key "Todd Lipcon " imported gpg: key F48B08A4: public key "Ted Yu (Apache Public Key) " imported gpg: key 867B57B8: public key "Ramkrishna S Vasudevan (for code checkin) " imported gpg: key 7CA45750: public key "Lars Hofhansl (CODE SIGNING KEY) " imported gpg: key A1AC25A9: public key "Lars Hofhansl (CODE SIGNING KEY) " imported gpg: key C7CFE328: public key "Lars Hofhansl (CODE SIGNING KEY) " imported gpg: key E964B5FF: public key "Enis Soztutar (CODE SIGNING KEY) " imported gpg: key 0D80DB7C: public key "Sean Busbey (CODE SIGNING KEY) " imported gpg: key 8644EEB6: public key "Nick Dimiduk " imported gpg: invalid radix64 character 3A skipped gpg: CRC error; E1B6C3 - DFECFB gpg: [don't know]: invalid packet (ctb=55) gpg: read_block: read error: Invalid packet gpg: import from '/tmp/KEYS' failed: Invalid keyring gpg: Total number processed: 12 gpg: imported: 12 gpg: no ultimately trusted keys found -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
[ https://issues.apache.org/jira/browse/HBASE-16714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-16714: --- Comment: was deleted (was: The patch can be viewed in the RB: https://reviews.apache.org/r/52293/) > Procedure V2 - use base class to remove duplicate set up test code in table > DDL procedures > --- > > Key: HBASE-16714 > URL: https://issues.apache.org/jira/browse/HBASE-16714 > Project: HBase > Issue Type: Improvement > Components: proc-v2, test >Affects Versions: 2.0.0 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Attachments: HBASE-16714.v1-master.patch > > > All table DDL procedure tests have the same set up. To avoid duplicate code > and help maintain the existing tests, we should move the shared set up into a > base class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
[ https://issues.apache.org/jira/browse/HBASE-16714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-16714: --- Status: Patch Available (was: Open) > Procedure V2 - use base class to remove duplicate set up test code in table > DDL procedures > --- > > Key: HBASE-16714 > URL: https://issues.apache.org/jira/browse/HBASE-16714 > Project: HBase > Issue Type: Improvement > Components: proc-v2, test >Affects Versions: 2.0.0 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Attachments: HBASE-16714.v1-master.patch > > > All table DDL procedure tests have the same set up. To avoid duplicate code > and help maintain the existing tests, we should move the shared set up into a > base class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16712) fix hadoop-3.0 profile mvn install
[ https://issues.apache.org/jira/browse/HBASE-16712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524634#comment-15524634 ] Hadoop QA commented on HBASE-16712: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 33s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 7s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch 1 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 17s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s {color} | {color:green} hbase-resource-bundle in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 8s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 32m 14s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830421/hbase-16712.v0.patch | | JIRA Issue | HBASE-16712 | | Optional Tests | asflicense javac javadoc unit xml | | uname | Linux f765ce717f96 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / b9ec59e | | Default Java | 1.8.0_101 | | whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/3721/artifact/patchprocess/whitespace-eol.txt | | whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/3721/artifact/patchprocess/whitespace-tabs.txt | | Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3721/testReport/ | | modules | C: hbase-resource-bundle U: hbase-resource-bundle | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/3721/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > fix hadoop-3.0 profile mvn install > -- > > Key: HBASE-16712 > URL: https://issues.apache.org/jira/browse/HBASE-16712 > Project: HBase > Issue Type: Bug > Components: build, hadoop3 >Affects Versions: 2.0.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0 > > Attachments: hbase-16712.v0.patch > > > After the compile is fixed, mvn install fails due to transitive dependencies > coming from hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
[ https://issues.apache.org/jira/browse/HBASE-16714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang updated HBASE-16714: --- Attachment: HBASE-16714.v1-master.patch > Procedure V2 - use base class to remove duplicate set up test code in table > DDL procedures > --- > > Key: HBASE-16714 > URL: https://issues.apache.org/jira/browse/HBASE-16714 > Project: HBase > Issue Type: Improvement > Components: proc-v2, test >Affects Versions: 2.0.0 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > Attachments: HBASE-16714.v1-master.patch > > > All table DDL procedure tests have the same set up. To avoid duplicate code > and help maintain the existing tests, we should move the shared set up into a > base class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11354) HConnectionImplementation#DelayedClosing does not start
[ https://issues.apache.org/jira/browse/HBASE-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-11354: --- Fix Version/s: 0.98.23 Status: Patch Available (was: Reopened) > HConnectionImplementation#DelayedClosing does not start > --- > > Key: HBASE-11354 > URL: https://issues.apache.org/jira/browse/HBASE-11354 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.98.3, 0.99.0 >Reporter: Qianxi Zhang >Assignee: Qianxi Zhang >Priority: Minor > Fix For: 0.98.23 > > Attachments: HBASE-11354-0.98.patch, HBASE_11354 (1).patch, > HBASE_11354.patch, HBASE_11354.patch, HBASE_11354.patch > > > The method "createAndStart" in class DelayedClosing only creates an instance > but forgets to start it, so the delayedClosing thread never runs. > ConnectionManager#1623 > {code} > static DelayedClosing createAndStart(HConnectionImplementation hci){ > Stoppable stoppable = new Stoppable() { > private volatile boolean isStopped = false; > @Override public void stop(String why) { isStopped = true;} > @Override public boolean isStopped() {return isStopped;} > }; > return new DelayedClosing(hci, stoppable); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
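The shape of the fix the issue describes can be sketched as below. `DelayedClosingSketch` is a hypothetical stand-in that uses a plain `Thread` rather than HBase's actual chore machinery; the point is only that `createAndStart` must start what it creates.

```java
// Hypothetical stand-in for the described fix: createAndStart must not only
// construct the chore but also start its thread.
class DelayedClosingSketch extends Thread {
    static DelayedClosingSketch createAndStart() {
        DelayedClosingSketch chore = new DelayedClosingSketch();
        chore.setDaemon(true);
        chore.start();  // the call the original createAndStart forgot to make
        return chore;
    }

    @Override
    public void run() {
        // periodic cleanup of idle connections would run here
    }
}
```

Without the `start()` call the factory returns a thread stuck in the NEW state, which matches the reported symptom of the cleanup never running.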
[jira] [Updated] (HBASE-11354) HConnectionImplementation#DelayedClosing does not start
[ https://issues.apache.org/jira/browse/HBASE-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-11354: --- Attachment: HBASE-11354-0.98.patch 0.98 patch provided by [~vincentpoon] > HConnectionImplementation#DelayedClosing does not start > --- > > Key: HBASE-11354 > URL: https://issues.apache.org/jira/browse/HBASE-11354 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.99.0, 0.98.3 >Reporter: Qianxi Zhang >Assignee: Qianxi Zhang >Priority: Minor > Attachments: HBASE-11354-0.98.patch, HBASE_11354 (1).patch, > HBASE_11354.patch, HBASE_11354.patch, HBASE_11354.patch > > > The method "createAndStart" in class DelayedClosing only creates an instance > but forgets to start it, so the delayedClosing thread never runs. > ConnectionManager#1623 > {code} > static DelayedClosing createAndStart(HConnectionImplementation hci){ > Stoppable stoppable = new Stoppable() { > private volatile boolean isStopped = false; > @Override public void stop(String why) { isStopped = true;} > @Override public boolean isStopped() {return isStopped;} > }; > return new DelayedClosing(hci, stoppable); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524616#comment-15524616 ] Hudson commented on HBASE-16649: FAILURE: Integrated in Jenkins build HBase-1.1-JDK8 #1871 (See [https://builds.apache.org/job/HBase-1.1-JDK8/1871/]) HBASE-16649 Truncate table with splits preserved can cause both data (matteo.bertozzi: rev 88512be52b8707fb87ab2c5979fd71664a417a90) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TruncateTableHandler.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.3.0, 1.1.7, 0.98.23, 1.2.4 > > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Truncating a table with splits preserved deletes the hfiles but reuses the > previous regioninfo. This can cause odd behaviors:
> - Case 1: *Data appeared after truncate* > reproduction procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', making sure the memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. start the regionserver; now it is time to witness the miracle: the > truncated data reappears in table 'test' > - Case 2: *Data loss* > reproduction procedure: > 1. create a table, let's say 'test' > 2. write some data to 'test', no matter how much > 3. truncate 'test' with splits preserved > 4. restart the regionserver to reset the seqid > 5. write some data, but less than in step 2, since we don't want the seqid to run over > the one from step 2 > 6. kill the regionserver hosting the region(s) of 'test' > 7. restart the regionserver. Congratulations! The data written in step 5 is now all > lost > *Why?* > For case 1: > since preserving splits in the truncate table procedure does not change the > regioninfo, the 'unflushed' data is restored to the region when log replay happens. > For case 2: > the flushedSequenceIdByRegion values are stored in the Master in a map keyed by the > region's encodedName. Although the table is truncated, the region's name is > unchanged since we chose to preserve the splits. So after the table is truncated, the > region's sequenceid is reset in the regionserver but not in the Master. When a flush > reports to the Master, the Master rejects the sequenceid update since the new one is > smaller than the old one. The same happens in log replay: all the edits written in > step 5 are skipped since they have a smaller seqid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
[ https://issues.apache.org/jira/browse/HBASE-16714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524613#comment-15524613 ] Stephen Yuan Jiang commented on HBASE-16714: The patch can be viewed in the RB: https://reviews.apache.org/r/52293/ > Procedure V2 - use base class to remove duplicate set up test code in table > DDL procedures > --- > > Key: HBASE-16714 > URL: https://issues.apache.org/jira/browse/HBASE-16714 > Project: HBase > Issue Type: Improvement > Components: proc-v2, test >Affects Versions: 2.0.0 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > > All table DDL procedure tests have the same set up. To avoid duplicate code > and help maintain the existing tests, we should move the shared set up into a > base class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
[ https://issues.apache.org/jira/browse/HBASE-16714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524612#comment-15524612 ] Stephen Yuan Jiang commented on HBASE-16714: The patch can be viewed in the RB: https://reviews.apache.org/r/52293/ > Procedure V2 - use base class to remove duplicate set up test code in table > DDL procedures > --- > > Key: HBASE-16714 > URL: https://issues.apache.org/jira/browse/HBASE-16714 > Project: HBase > Issue Type: Improvement > Components: proc-v2, test >Affects Versions: 2.0.0 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang > > All table DDL procedure tests have the same setup. To avoid duplicate code > and help maintain the existing tests, we should move the shared setup into a base > class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures
Stephen Yuan Jiang created HBASE-16714: -- Summary: Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures Key: HBASE-16714 URL: https://issues.apache.org/jira/browse/HBASE-16714 Project: HBase Issue Type: Improvement Components: proc-v2, test Affects Versions: 2.0.0 Reporter: Stephen Yuan Jiang Assignee: Stephen Yuan Jiang All table DDL procedure tests have the same setup. To avoid duplicate code and help maintain the existing tests, we should move the shared setup into a base class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HBASE-11354) HConnectionImplementation#DelayedClosing does not start
[ https://issues.apache.org/jira/browse/HBASE-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-11354: We came across this in a 0.98 install, let's apply just to 0.98 > HConnectionImplementation#DelayedClosing does not start > --- > > Key: HBASE-11354 > URL: https://issues.apache.org/jira/browse/HBASE-11354 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.99.0, 0.98.3 >Reporter: Qianxi Zhang >Assignee: Qianxi Zhang >Priority: Minor > Attachments: HBASE_11354 (1).patch, HBASE_11354.patch, > HBASE_11354.patch, HBASE_11354.patch > > > The method "createAndStart" in class DelayedClosing only creates an instance > but forgets to start it, so the delayedClosing thread is never running. > ConnectionManager#1623 > {code} > static DelayedClosing createAndStart(HConnectionImplementation hci){ > Stoppable stoppable = new Stoppable() { > private volatile boolean isStopped = false; > @Override public void stop(String why) { isStopped = true;} > @Override public boolean isStopped() {return isStopped;} > }; > return new DelayedClosing(hci, stoppable); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524575#comment-15524575 ] Hudson commented on HBASE-16649: FAILURE: Integrated in Jenkins build HBase-1.1-JDK7 #1787 (See [https://builds.apache.org/job/HBase-1.1-JDK7/1787/]) HBASE-16649 Truncate table with splits preserved can cause both data (matteo.bertozzi: rev 88512be52b8707fb87ab2c5979fd71664a417a90) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TruncateTableHandler.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.3.0, 1.1.7, 0.98.23, 1.2.4 > > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Since truncating a table with splits preserved deletes the hfiles but reuses the > previous regioninfo, it can cause odd behaviors: > - Case 1: *Data appeared after truncate* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', make sure the memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. 
start the regionserver, now it is the time to witness the miracle! the > truncated data appeared in table 'test' > - Case 2: *Data loss* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write some data to 'test', no matter how much > 3. truncate 'test' with splits preserved > 4. restart the regionserver to reset the seqid > 5. write some data, but less than in step 2 since we don't want the seqid to run over > the one from step 2 > 6. kill the regionserver hosting the region(s) of 'test' > 7. restart the regionserver. Congratulations! the data written in step 5 is now all > lost > *Why?* > For case 1: > since preserving splits in the truncate table procedure does not change the > regioninfo, when log replay happens, the 'unflushed' data is restored back > to the region. > For case 2: > the flushedSequenceIdByRegion map is stored in the Master, keyed by the > region's encodedName. Although the table is truncated, the region's name is > not changed since we chose to preserve the splits. So after truncating the > table, the region's sequenceid is reset on the regionserver, but not in the > master. When a flush happens and reports to the master, the master will reject the update > of the sequenceid since the new one is smaller than the old one. The same happens > in log replay: all the edits written in step 5 will be skipped since they have a > smaller seqid -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524530#comment-15524530 ] Hudson commented on HBASE-16649: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1679 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1679/]) HBASE-16649 Truncate table with splits preserved can cause both data (matteo.bertozzi: rev f06c0060aa13a2b5b18edeb66b7479bdd3c6fdc8) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.3.0, 1.1.7, 0.98.23, 1.2.4 > > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Since truncating a table with splits preserved deletes the hfiles but reuses the > previous regioninfo, it can cause odd behaviors: > - Case 1: *Data appeared after truncate* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', make sure the memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. start the regionserver, now it is the time to witness the miracle! 
the > truncated data appeared in table 'test' > - Case 2: *Data loss* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write some data to 'test', no matter how much > 3. truncate 'test' with splits preserved > 4. restart the regionserver to reset the seqid > 5. write some data, but less than in step 2 since we don't want the seqid to run over > the one from step 2 > 6. kill the regionserver hosting the region(s) of 'test' > 7. restart the regionserver. Congratulations! the data written in step 5 is now all > lost > *Why?* > For case 1: > since preserving splits in the truncate table procedure does not change the > regioninfo, when log replay happens, the 'unflushed' data is restored back > to the region. > For case 2: > the flushedSequenceIdByRegion map is stored in the Master, keyed by the > region's encodedName. Although the table is truncated, the region's name is > not changed since we chose to preserve the splits. So after truncating the > table, the region's sequenceid is reset on the regionserver, but not in the > master. When a flush happens and reports to the master, the master will reject the update > of the sequenceid since the new one is smaller than the old one. The same happens > in log replay: all the edits written in step 5 will be skipped since they have a > smaller seqid -- This message was sent by Atlassian JIRA (v6.3.4#6332)
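The master-side rejection described above can be sketched in a few lines of Java. This is an illustrative model only (class and method names are hypothetical, not the actual ServerManager code): the master tracks the highest flushed sequence id per encoded region name and silently drops any report that goes backwards, which is exactly what happens to the freshly reset seqids after a truncate that preserves splits.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of why case 2 loses data: the master keeps the last
// flushed sequence id per encoded region name and rejects updates that go
// backwards. After a truncate that preserves splits, the region keeps its
// old encoded name, so the regionserver's reset seqids all look "stale".
public class FlushedSeqIdSketch {
    private final Map<String, Long> flushedSequenceIdByRegion = new ConcurrentHashMap<>();

    // Returns true if the reported sequence id was accepted.
    public boolean reportFlushedSequenceId(String encodedRegionName, long seqId) {
        Long last = flushedSequenceIdByRegion.get(encodedRegionName);
        if (last != null && seqId <= last) {
            return false; // rejected: the new id is not larger than the recorded one
        }
        flushedSequenceIdByRegion.put(encodedRegionName, seqId);
        return true;
    }
}
```

Under this model, a post-truncate flush report with a small seqid is rejected for an already-known region name, while a brand-new region name would be accepted, matching the behavior the report describes.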
[jira] [Commented] (HBASE-16694) Reduce garbage for onDiskChecksum in HFileBlock
[ https://issues.apache.org/jira/browse/HBASE-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524532#comment-15524532 ] Hudson commented on HBASE-16694: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1679 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1679/]) HBASE-16694 Reduce garbage for onDiskChecksum in HFileBlock (binlijin) (apurtell: rev b9ec59ebbe0ea392bfe742a9f3774d9447722d42) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java > Reduce garbage for onDiskChecksum in HFileBlock > --- > > Key: HBASE-16694 > URL: https://issues.apache.org/jira/browse/HBASE-16694 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0, 1.4.0, 0.98.23 > > Attachments: HBASE-16694-master.patch > > > Currently, when an HFileBlock is finished, a new byte[] is created for > onDiskChecksum; we can reuse it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16713) Bring back connection caching as a client API
Enis Soztutar created HBASE-16713: - Summary: Bring back connection caching as a client API Key: HBASE-16713 URL: https://issues.apache.org/jira/browse/HBASE-16713 Project: HBase Issue Type: New Feature Components: Client Reporter: Enis Soztutar Fix For: 2.0.0, 1.4.0 Connection.getConnection() is removed in master for good reasons. The connection lifecycle should always be explicit. We have replaced some of the functionality with ConnectionCache for rest and thrift servers internally, but it is not exposed to clients. Turns out our friends doing the hbase-spark connector work need a similar connection caching behavior to what we have in the rest and thrift servers. At a higher level we want: - Spark executors should be able to run short-lived hbase tasks with low latency - Short-lived tasks should be able to share the same connection, and should not pay the price of instantiating the cluster connection (which means zk connection, meta cache, 200+ threads, etc) - Connections to the cluster should be closed if they are not used for some time. Spark executors are used for other tasks as well. - Spark jobs may be launched with different configuration objects, possibly connecting to different clusters between different jobs. - Although not a direct requirement for spark, different users should not share the same connection object. Looking at the old code that we have in branch-1 for {{ConnectionManager}}, managed connections and the code in ConnectionCache, I think we should do a first-class client-level API called ConnectionCache which will be a hybrid between ConnectionCache and the old ConnectionManager. The lifecycle of the ConnectionCache is still explicit, so I think API-design-wise, this will fit into the current model. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16604) Scanner retries on IOException can cause the scans to miss data
[ https://issues.apache.org/jira/browse/HBASE-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524489#comment-15524489 ] Enis Soztutar commented on HBASE-16604: --- bq. Now the client on seeing this exception will try to retry this Exception. Since the scanner is removed from the scanner's map and already we have a scannerId associated with the scan request, No, both UnknownScannerException and ScannerResetException extend DoNotRetryIOException, so the client will not retry with the same scanner id. This means that the RPC retrying mechanism (RPCRetryingCaller, ScannerCallableWithReplicas, etc.) will not be retried. However, at a higher level, there is a retry-from-where-you-left-off mechanism within ClientScanner. Thus, ClientScanner will re-open a new RegionScanner by sending a new scan request and getting a new scanner name. This logic is in ClientScanner: {code} // If exception is any but the list below throw it back to the client; else setup // the scanner and retry. Throwable cause = e.getCause(); if ((cause != null && cause instanceof NotServingRegionException) || (cause != null && cause instanceof RegionServerStoppedException) || e instanceof OutOfOrderScannerNextException || e instanceof UnknownScannerException || e instanceof ScannerResetException) { // Pass. It is easier writing the if loop test as list of what is allowed rather than // as a list of what is not allowed... so if in here, it means we do not throw. } else { throw e; } {code} The client will also toss away any partial results so far, and continue the scan from the last known row. bq. ->In case of actual retries whether the scanner internals and its heap are reset properly The heap will be reset correctly, because the region scanner is closed for good. A completely new RegionScanner will be constructed from scratch. bq. -> In case my retries are over how am I cleaning up the heap and also the blocks. 
This will happen only for master branch I think and we need to fix only in 2.0. We close the scanner and remove the lease already. We set the rpcCallback which will get run and call shipped(), no? Is it the case that if the scanner is already closed, shipped() will not free up the blocks? bq. One more thing is that since closeScanner is getting called even on exception the CP hooks preScannerClose and postScannerClose are getting called. Is that expected? Yes, I have checked that in other contexts where we close the scanner in case of exception, we still call the coprocessor methods. > Scanner retries on IOException can cause the scans to miss data > > > Key: HBASE-16604 > URL: https://issues.apache.org/jira/browse/HBASE-16604 > Project: HBase > Issue Type: Bug > Components: regionserver, Scanners >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4 > > Attachments: HBASE-16604-branch-1.3-addendum.patch, > hbase-16604_v1.patch, hbase-16604_v2.patch, hbase-16604_v3.branch-1.patch, > hbase-16604_v3.patch > > > Debugging an ITBLL failure, where the Verify did not "see" all the data in > the cluster, I've noticed that if we end up getting a generic IOException > from the HFileReader level, we may end up missing the rest of the data in the > region. 
I was able to manually test this, and this stack trace helps to > understand what is going on: > {code} > 2016-09-09 16:27:15,633 INFO [hconnection-0x71ad3d8a-shared--pool21-t9] > client.ScannerCallable(376): Open scanner=1 for > scan={"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":2097152,"families":{"testFamily":["testFamily"]},"caching":100,"maxVersions":1,"timeRange":[0,9223372036854775807]} > on region > region=testScanThrowsException,,1473463632707.b2adfb618e5d0fe225c1dc40c0eabfee., > hostname=hw10676,51833,1473463626529, seqNum=2 > 2016-09-09 16:27:15,634 INFO > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] > regionserver.RSRpcServices(2196): scan request:scanner_id: 1 number_of_rows: > 100 close_scanner: false next_call_seq: 0 client_handles_partials: true > client_handles_heartbeats: true renew: false > 2016-09-09 16:27:15,635 INFO > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] > regionserver.RSRpcServices(2510): Rolling back next call seqId > 2016-09-09 16:27:15,635 INFO > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] > regionserver.RSRpcServices(2565): Throwing new > ServiceExceptionjava.io.IOException: Could not reseek > StoreFileScanner[HFileScanner for reader >
[jira] [Commented] (HBASE-16712) fix hadoop-3.0 profile mvn install
[ https://issues.apache.org/jira/browse/HBASE-16712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524472#comment-15524472 ] Jonathan Hsieh commented on HBASE-16712: After including HBASE-16711, this command line completes successfully: mvn clean test -DskipTests -Dhadoop.profile=3.0 install site [~busbey], you might want to take a look at this one. > fix hadoop-3.0 profile mvn install > -- > > Key: HBASE-16712 > URL: https://issues.apache.org/jira/browse/HBASE-16712 > Project: HBase > Issue Type: Bug > Components: build, hadoop3 >Affects Versions: 2.0.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0 > > Attachments: hbase-16712.v0.patch > > > After the compile is fixed, mvn install fails due to transitive dependencies > coming from hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16712) fix hadoop-3.0 profile mvn install
[ https://issues.apache.org/jira/browse/HBASE-16712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-16712: --- Status: Patch Available (was: Open) > fix hadoop-3.0 profile mvn install > -- > > Key: HBASE-16712 > URL: https://issues.apache.org/jira/browse/HBASE-16712 > Project: HBase > Issue Type: Bug > Components: build, hadoop3 >Affects Versions: 2.0.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0 > > Attachments: hbase-16712.v0.patch > > > After the compile is fixed, mvn install fails due to transitive dependencies > coming from hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16711) Fix hadoop-3.0 profile compile
[ https://issues.apache.org/jira/browse/HBASE-16711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-16711: --- Status: Patch Available (was: Open) > Fix hadoop-3.0 profile compile > -- > > Key: HBASE-16711 > URL: https://issues.apache.org/jira/browse/HBASE-16711 > Project: HBase > Issue Type: Bug > Components: build, hadoop3 >Affects Versions: 2.0.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0 > > Attachments: hbase-16711.v0.patch > > > The -Dhadoop.profile=3.0 build is failing currently due to code deprecated in > hadoop2 and removed in hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16711) Fix hadoop-3.0 profile compile
[ https://issues.apache.org/jira/browse/HBASE-16711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524468#comment-15524468 ] Jonathan Hsieh commented on HBASE-16711: compiles with mvn clean test -DskipTests -Dhadoop.profile=3.0 > Fix hadoop-3.0 profile compile > -- > > Key: HBASE-16711 > URL: https://issues.apache.org/jira/browse/HBASE-16711 > Project: HBase > Issue Type: Bug > Components: build, hadoop3 >Affects Versions: 2.0.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0 > > Attachments: hbase-16711.v0.patch > > > The -Dhadoop.profile=3.0 build is failing currently due to code deprecated in > hadoop2 and removed in hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16711) Fix hadoop-3.0 profile compile
[ https://issues.apache.org/jira/browse/HBASE-16711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-16711: --- Attachment: hbase-16711.v0.patch > Fix hadoop-3.0 profile compile > -- > > Key: HBASE-16711 > URL: https://issues.apache.org/jira/browse/HBASE-16711 > Project: HBase > Issue Type: Bug > Components: build, hadoop3 >Affects Versions: 2.0.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0 > > Attachments: hbase-16711.v0.patch > > > The -Dhadoop.profile=3.0 build is failing currently due to code deprecated in > hadoop2 and removed in hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16712) fix hadoop-3.0 profile mvn install
[ https://issues.apache.org/jira/browse/HBASE-16712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-16712: --- Attachment: hbase-16712.v0.patch > fix hadoop-3.0 profile mvn install > -- > > Key: HBASE-16712 > URL: https://issues.apache.org/jira/browse/HBASE-16712 > Project: HBase > Issue Type: Bug > Components: build, hadoop3 >Affects Versions: 2.0.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 2.0.0 > > Attachments: hbase-16712.v0.patch > > > After the compile is fixed, mvn install fails due to transitive dependencies > coming from hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16712) fix hadoop-3.0 profile mvn install
Jonathan Hsieh created HBASE-16712: -- Summary: fix hadoop-3.0 profile mvn install Key: HBASE-16712 URL: https://issues.apache.org/jira/browse/HBASE-16712 Project: HBase Issue Type: Bug Components: build, hadoop3 Affects Versions: 2.0.0 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 2.0.0 After the compile is fixed, mvn install fails due to transitive dependencies coming from hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16711) Fix hadoop-3.0 profile compile
Jonathan Hsieh created HBASE-16711: -- Summary: Fix hadoop-3.0 profile compile Key: HBASE-16711 URL: https://issues.apache.org/jira/browse/HBASE-16711 Project: HBase Issue Type: Bug Components: hadoop3, build Affects Versions: 2.0.0 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 2.0.0 The -Dhadoop.profile=3.0 build is failing currently due to code deprecated in hadoop2 and removed in hadoop3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator
[ https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-16705: --- Fix Version/s: 0.98.23 1.4.0 > Eliminate long to Long auto boxing in LongComparator > > > Key: HBASE-16705 > URL: https://issues.apache.org/jira/browse/HBASE-16705 > Project: HBase > Issue Type: Improvement > Components: Filters >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0, 1.4.0, 0.98.23 > > Attachments: HBASE-16705-master.patch > > > LongComparator > @Override > public int compareTo(byte[] value, int offset, int length) { > Long that = Bytes.toLong(value, offset, length); > return this.longValue.compareTo(that); > } > Every call needs to convert a long to a Long, which is not necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15560) TinyLFU-based BlockCache
[ https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ben Manes updated HBASE-15560: -- Attachment: HBASE-15560.patch Attached patch from review board > TinyLFU-based BlockCache > > > Key: HBASE-15560 > URL: https://issues.apache.org/jira/browse/HBASE-15560 > Project: HBase > Issue Type: Improvement > Components: BlockCache >Affects Versions: 2.0.0 >Reporter: Ben Manes >Assignee: Ben Manes > Attachments: HBASE-15560.patch, HBASE-15560.patch, tinylfu.patch > > > LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and > recency of the working set. It achieves concurrency by using an O( n ) > background thread to prioritize the entries and evict. Accessing an entry is > O(1) by a hash table lookup, recording its logical access time, and setting a > frequency flag. A write is performed in O(1) time by updating the hash table > and triggering an async eviction thread. This provides ideal concurrency and > minimizes the latencies by penalizing the thread instead of the caller. > However, the policy does not age the frequencies and may not be resilient to > various workload patterns. > W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the > frequency in a counting sketch, ages periodically by halving the counters, > and orders entries by SLRU. An entry is discarded by comparing the frequency > of the new arrival (candidate) to the SLRU's victim, and keeping the one with > the highest frequency. This allows the operations to be performed in O(1) > time and, through the use of a compact sketch, a much larger history is > retained beyond the current working set. In a variety of real world traces > the policy had [near optimal hit > rates|https://github.com/ben-manes/caffeine/wiki/Efficiency]. > Concurrency is achieved by buffering and replaying the operations, similar to > a write-ahead log. A read is recorded into a striped ring buffer and a write > into a queue. 
The operations are applied in batches under a try-lock by an > asynchronous thread, thereby tracking the usage pattern without incurring high > latencies > ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]). > In YCSB benchmarks the results were inconclusive. For a large cache (99% hit > rates) the two caches have near-identical throughput and latencies, with > LruBlockCache narrowly winning. At medium and small cache sizes, TinyLFU had a > 1-4% hit rate improvement and therefore lower latencies. The lackluster > result is because a synthetic Zipfian distribution is used, on which SLRU > performs optimally. In a more varied, real-world workload we'd expect to see > improvements by being able to make smarter predictions. > The provided patch implements BlockCache using the > [Caffeine|https://github.com/ben-manes/caffeine] caching library (see > HighScalability > [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]). > Edward Bortnikov and Eshcar Hillel have graciously provided guidance for > evaluating this patch ([github > branch|https://github.com/ben-manes/hbase/tree/tinylfu]). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
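The buffering-and-replay scheme the issue describes can be illustrated with a minimal, single-stripe sketch in plain Java. Caffeine's real implementation uses striped ring buffers and a far more elaborate drain policy; every name and size below is illustrative only.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

// Minimal single-stripe sketch of "buffer and replay": reads are appended to
// a bounded buffer and applied to the frequency table in batches under a
// try-lock, so readers never block on the eviction policy.
public class ReadBufferSketch {
    private final ArrayBlockingQueue<String> buffer = new ArrayBlockingQueue<>(64);
    private final ReentrantLock evictionLock = new ReentrantLock();
    private final Map<String, Integer> frequency = new HashMap<>();

    // Record a cache read; if the buffer is full the event is simply dropped,
    // which loses a little policy precision but never correctness.
    public void recordRead(String key) {
        buffer.offer(key);
        tryDrain();
    }

    // Apply buffered events in a batch if no other thread is already draining.
    private void tryDrain() {
        if (evictionLock.tryLock()) {
            try {
                String key;
                while ((key = buffer.poll()) != null) {
                    frequency.merge(key, 1, Integer::sum);
                }
            } finally {
                evictionLock.unlock();
            }
        }
    }

    public int frequencyOf(String key) {
        tryDrain(); // make sure pending events are applied before reading
        return frequency.getOrDefault(key, 0);
    }
}
```

The key design point is the `tryLock`: a reader that loses the race simply leaves its event in the buffer for whichever thread currently holds the lock, instead of waiting, which is what keeps read latency flat under contention.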
[jira] [Commented] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator
[ https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524383#comment-15524383 ] Andrew Purtell commented on HBASE-16705: This should be committed everywhere we have LongComparator. Let me do that now. > Eliminate long to Long auto boxing in LongComparator > > > Key: HBASE-16705 > URL: https://issues.apache.org/jira/browse/HBASE-16705 > Project: HBase > Issue Type: Improvement > Components: Filters >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16705-master.patch > > > LongComparator > @Override > public int compareTo(byte[] value, int offset, int length) { > Long that = Bytes.toLong(value, offset, length); > return this.longValue.compareTo(that); > } > Every call needs to convert a long to a Long, which is not necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
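The fix amounts to comparing the primitives directly, for example via Long.compare, so no Long is allocated per call. A minimal sketch (the class name is hypothetical and the byte[] decoding is omitted; this is not the actual patch):

```java
// Hypothetical sketch of the autoboxing fix: compare the primitive directly
// instead of boxing into a Long on every call.
public class LongComparatorSketch {
    private final long longValue;

    public LongComparatorSketch(long value) {
        this.longValue = value;
    }

    // Before: Long that = Bytes.toLong(...); return this.longValue.compareTo(that);
    // After: Long.compare works on primitives, so nothing is boxed.
    public int compareTo(long that) {
        return Long.compare(this.longValue, that);
    }
}
```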
[jira] [Commented] (HBASE-16686) Add latency metrics for REST
[ https://issues.apache.org/jira/browse/HBASE-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524367#comment-15524367 ] Andrew Purtell commented on HBASE-16686: Convention across the code base is to use EnvironmentEdgeManager#getTime() instead of System#currentTimeMillis so the time can be controlled in unit tests: {code} @@ -72,6 +73,7 @@ public class MultiRowResource extends ResourceBase implements Constants { MultivaluedMap<String, String> params = uriInfo.getQueryParameters(); servlet.getMetrics().incrementRequests(1); +final long startTime = System.currentTimeMillis(); try { CellSetModel model = new CellSetModel(); for (String rk : params.get(ROW_KEYS_PARAM_NAME)) { {code} While a simple count of exceptions could be useful, does it make sense to break down the counts for common exceptions of interest? I would think so: {code} diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/ResourceBase.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/ResourceBase.java index f71d848..f2a6c46 100644 --- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/ResourceBase.java +++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/ResourceBase.java @@ -45,6 +45,7 @@ public class ResourceBase implements Constants { } protected Response processException(Throwable exp) { +servlet.getMetrics().incrementProcessException(1); Throwable curr = exp; if(accessDeniedClazz != null) { //some access denied exceptions are buried {code} Also, note that the REST gateway embeds the HBase client so the client metrics (HBASE-12911) could be made available. > Add latency metrics for REST > > > Key: HBASE-16686 > URL: https://issues.apache.org/jira/browse/HBASE-16686 > Project: HBase > Issue Type: New Feature > Components: monitoring, REST >Reporter: Guang Yang >Priority: Minor > Attachments: HBASE-16686_v0.patch > > > It would be helpful to have the latency metrics for rest for various > operations. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
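The injectable-clock convention the review comment refers to can be sketched generically: route every time lookup through a supplier so tests can substitute a deterministic clock. The names below are illustrative, not the EnvironmentEdgeManager API itself.

```java
import java.util.function.LongSupplier;

// Sketch of an injectable clock for latency metrics: production code passes
// a real millisecond clock, unit tests pass a controllable fake. Class and
// method names are hypothetical.
public class LatencyTimerSketch {
    private final LongSupplier clock;

    public LatencyTimerSketch(LongSupplier clock) {
        this.clock = clock;
    }

    // Measure elapsed milliseconds around an operation.
    public long timeMillis(Runnable operation) {
        long start = clock.getAsLong();
        operation.run();
        return clock.getAsLong() - start;
    }
}
```

In production one would construct it with `System::currentTimeMillis` (or the project's time source); in a test, with a fake whose value the test advances by hand, so the recorded latency is exact rather than wall-clock-dependent.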
[jira] [Resolved] (HBASE-16694) Reduce garbage for onDiskChecksum in HFileBlock
[ https://issues.apache.org/jira/browse/HBASE-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-16694. Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 0.98.23 1.4.0 2.0.0 > Reduce garbage for onDiskChecksum in HFileBlock > --- > > Key: HBASE-16694 > URL: https://issues.apache.org/jira/browse/HBASE-16694 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0, 1.4.0, 0.98.23 > > Attachments: HBASE-16694-master.patch > > > Currently, when an HFileBlock is finished, a new byte[] is created for > onDiskChecksum; we can reuse it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
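The garbage reduction boils down to keeping one checksum buffer per writer and growing it only when a block needs more space. A minimal sketch of the general technique (names are illustrative; this is not the actual HFileBlock patch):

```java
// Illustrative sketch of buffer reuse: instead of allocating a fresh
// checksum byte[] every time a block is finished, keep one buffer and
// replace it only when a block needs a larger one.
public class ChecksumBufferSketch {
    private byte[] onDiskChecksum = new byte[0];

    // Returns a buffer of at least `needed` bytes, reusing the old one
    // whenever it is already large enough.
    public byte[] checksumBuffer(int needed) {
        if (onDiskChecksum.length < needed) {
            onDiskChecksum = new byte[needed];
        }
        return onDiskChecksum;
    }
}
```

The trade-off is holding on to the largest buffer seen so far; since checksum arrays are small relative to block data, that retained memory is usually a good price for removing a per-block allocation.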
[jira] [Updated] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-16649: Resolution: Fixed Fix Version/s: 1.2.4 0.98.23 1.1.7 1.3.0 2.0.0 Status: Resolved (was: Patch Available) > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.3.0, 1.1.7, 0.98.23, 1.2.4 > > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Since truncating a table with splits preserved deletes the hfiles but reuses the > previous regioninfo, it can cause odd behaviors: > - Case 1: *Data appeared after truncate* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', make sure the memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. start the regionserver, now it is the time to witness the miracle! the > truncated data appeared in table 'test' > - Case 2: *Data loss* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write some data to 'test', no matter how much > 3. truncate 'test' with splits preserved > 4. restart the regionserver to reset the seqid > 5. write some data, but less than in step 2 since we don't want the seqid to run over > the one from step 2 > 6. kill the regionserver hosting the region(s) of 'test' > 7. restart the regionserver. Congratulations! the data written in step 5 is now all > lost > *Why?* > For case 1: > since preserving splits in the truncate table procedure does not change the > regioninfo, when log replay happens, the 'unflushed' data is restored back > to the region. > For case 2: > the flushedSequenceIdByRegion map is stored in the Master, keyed by the > region's encodedName. 
Although the table is truncated, the region's name is > not changed since we chose to preserve the splits. So after truncate the > table, the region's sequenceid is reset in the regionserver, but not reset in > master. When flush comes and report to master, master will reject the update > of sequenceid since the new one is smaller than the old one. The same happens > in log replay, all the edits writen in 4 will be skipped since they have a > smaller seqid -- This message was sent by Atlassian JIRA (v6.3.4#6332)
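The master-side bookkeeping described above can be sketched as follows. This is a hypothetical, simplified illustration of a flushed-seqid map keyed by encoded region name; the class and method names are illustrative, not HBase's actual ServerManager API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of why truncate-with-splits-preserved loses data: the map survives
// the truncate because the encoded region name does not change, so the fresh
// (smaller) seqids reported after truncate are rejected.
public class FlushedSeqIdTracker {
    private final Map<String, Long> flushedSequenceIdByRegion = new HashMap<>();

    /** Returns true if the reported seqid was accepted. */
    public boolean reportFlushedSeqId(String encodedRegionName, long seqId) {
        Long prev = flushedSequenceIdByRegion.get(encodedRegionName);
        if (prev != null && seqId <= prev) {
            return false; // rejected: smaller than the previously recorded id
        }
        flushedSequenceIdByRegion.put(encodedRegionName, seqId);
        return true;
    }

    public static void main(String[] args) {
        FlushedSeqIdTracker tracker = new FlushedSeqIdTracker();
        // Before truncate: the region flushed up to seqid 100.
        tracker.reportFlushedSeqId("abc123", 100);
        // After truncate with splits preserved, the region keeps the same
        // encoded name but its seqid restarts low; the report is rejected,
        // so log replay later skips the fresh edits -- the data-loss case.
        boolean accepted = tracker.reportFlushedSeqId("abc123", 5);
        System.out.println("accepted=" + accepted); // accepted=false
    }
}
```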
[jira] [Commented] (HBASE-16345) RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer Exceptions
[ https://issues.apache.org/jira/browse/HBASE-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524094#comment-15524094 ] Matteo Bertozzi commented on HBASE-16345: - patch looks good to me, [~enis] do you have more comments or are you ok with it? > RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer > Exceptions > -- > > Key: HBASE-16345 > URL: https://issues.apache.org/jira/browse/HBASE-16345 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun > Attachments: HBASE-16345-v001.patch, HBASE-16345.master.001.patch, > HBASE-16345.master.002.patch, HBASE-16345.master.003.patch, > HBASE-16345.master.004.patch, HBASE-16345.master.005.patch, > HBASE-16345.master.005.patch > > > Update for the description. Debugged more at this front based on the comments > from Enis. > The cause is that for the primary replica, if its retry is exhausted too > fast, f.get() [1] returns ExecutionException. This Exception needs to be > ignored and continue with the replicas. > The other issue is that after adding calls for the replicas, if the first > completed task gets ExecutionException (due to the retry exhausted), it > throws the exception to the client[2]. > In this case, it needs to loop through these tasks, waiting for the success > one. If no one succeeds, throw exception. > Similar for the scan as well > [1] > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L197 > [2] > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L219 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
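The fix described above — ignore a call whose retries were exhausted (surfacing as an ExecutionException) and keep waiting for the first call that succeeds, failing only if every call failed — can be sketched roughly like this. It is a simplified stand-in, not the actual RpcRetryingCallerWithReadReplicas code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FirstSuccess {
    public static <T> T firstSuccessful(ExecutorService pool,
                                        List<Callable<T>> calls) throws Exception {
        CompletionService<T> cs = new ExecutorCompletionService<>(pool);
        for (Callable<T> c : calls) {
            cs.submit(c);
        }
        ExecutionException last = null;
        for (int i = 0; i < calls.size(); i++) {
            try {
                return cs.take().get(); // first task to complete successfully wins
            } catch (ExecutionException e) {
                last = e; // this call exhausted its retries; wait for the next one
            }
        }
        throw last; // no call succeeded: surface the failure to the client
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Callable<String>> calls = new ArrayList<>();
        // Primary fails fast (retries exhausted); a replica succeeds.
        calls.add(() -> { throw new java.io.IOException("primary retries exhausted"); });
        calls.add(() -> "row-from-replica");
        System.out.println(firstSuccessful(pool, calls)); // prints "row-from-replica"
        pool.shutdown();
    }
}
```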
[jira] [Commented] (HBASE-16644) Errors when reading legit HFile' Trailer on branch 1.3
[ https://issues.apache.org/jira/browse/HBASE-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15524001#comment-15524001 ] stack commented on HBASE-16644: --- Let me build a 0.92 and write a few examples of this file v2.0 hfile... and then try and reproduce this issue. > Errors when reading legit HFile' Trailer on branch 1.3 > -- > > Key: HBASE-16644 > URL: https://issues.apache.org/jira/browse/HBASE-16644 > Project: HBase > Issue Type: Bug > Components: HFile >Affects Versions: 1.3.0, 1.4.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov >Priority: Critical > Fix For: 1.3.0 > > Attachments: HBASE-16644.branch-1.3.patch > > > There seems to be a regression in branch 1.3 where we can't read HFile > trailer(getting "CorruptHFileException: Problem reading HFile Trailer") on > some HFiles that could be successfully read on 1.2. > I've seen this error manifesting in two ways so far. > {code}Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: > Problem reading HFile Trailer from file > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497) > at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525) > at > org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1164) > at > org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516) > ... 
6 more > Caused by: java.io.IOException: Invalid HFile block magic: > \x00\x00\x04\x00\x00\x00\x00\x00 > at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:155) > at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:344) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1735) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.(HFileReaderV2.java:156) > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:485) > {code} > and second > {code} > Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem > reading HFile Trailer from file > at > org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497) > at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525) > at > org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1164) > at > org.apache.hadoop.hbase.io.HalfStoreFileReader.(HalfStoreFileReader.java:104) > at > org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:256) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519) > at 
org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516) > ... 6 more > Caused by: java.io.IOException: Premature EOF from inputStream (read returned > -1, was trying to read 10083 necessary bytes and 24 extra bytes, successfully > read 1072 > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:737) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1459) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1712) > at >
[jira] [Commented] (HBASE-16587) Procedure v2 - Cleanup suspended proc execution
[ https://issues.apache.org/jira/browse/HBASE-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523979#comment-15523979 ] Hudson commented on HBASE-16587: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1678 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1678/]) HBASE-16587 Procedure v2 - Cleanup suspended proc execution (matteo.bertozzi: rev e01e05cc0ef5255c549d3c7bb87be38d34f13d94) * (edit) hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/RemoteProcedureException.java * (edit) hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/StateMachineProcedure.java * (add) hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/TestProcedureSuspended.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureScheduler.java * (edit) hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/Procedure.java * (edit) hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedurePrepareLatch.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedureSyncWait.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterProcedureEvents.java > Procedure v2 - Cleanup suspended proc execution > --- > > Key: HBASE-16587 > URL: https://issues.apache.org/jira/browse/HBASE-16587 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 2.0.0 > > Attachments: HBASE-16587-v0.patch, HBASE-16587-v1.patch, > HBASE-16587-v2.patch, HBASE-16587-v3.patch, HBASE-16587-v4.patch > > > for procedures like the assignment or the lock one we need to be able to hold > on locks while suspended. At the moment the way to do that is up to the proc > implementation. 
This patch moves the logic to the base Procedure and > ProcedureExecutor. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16695) Procedure v2 - Support for parent holding locks
[ https://issues.apache.org/jira/browse/HBASE-16695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523980#comment-15523980 ] Hudson commented on HBASE-16695: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1678 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1678/]) HBASE-16695 Procedure v2 - Support for parent holding locks (matteo.bertozzi: rev 8da0500e7d494f45cded7c3cb3423401a73e21fb) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureScheduler.java * (edit) hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/Procedure.java * (add) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterProcedureSchedulerConcurrency.java * (edit) hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureRunnableSet.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterProcedureScheduler.java > Procedure v2 - Support for parent holding locks > --- > > Key: HBASE-16695 > URL: https://issues.apache.org/jira/browse/HBASE-16695 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 2.0.0 > > Attachments: HBASE-16695-v0.patch > > > Add the logic to allow child procs to be executed when the parent is holding > the xlock. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
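The "parent holding locks" idea can be sketched as follows; the class and method names are hypothetical, not the real MasterProcedureScheduler API:

```java
// Sketch: an exclusive lock that a child procedure of the owner may pass
// through, so child procs can execute while the parent holds the xlock.
public class ParentHeldLock {
    private Long exclusiveOwnerProcId = null;

    /** parentProcId is null for root procedures. */
    public synchronized boolean tryExclusiveLock(long procId, Long parentProcId) {
        if (exclusiveOwnerProcId == null || exclusiveOwnerProcId == procId) {
            exclusiveOwnerProcId = procId; // free, or re-entrant acquire
            return true;
        }
        // a child of the current owner proceeds without taking over the lock
        return parentProcId != null && parentProcId.longValue() == exclusiveOwnerProcId;
    }

    public static void main(String[] args) {
        ParentHeldLock tableLock = new ParentHeldLock();
        System.out.println(tableLock.tryExclusiveLock(1L, null)); // parent acquires: true
        System.out.println(tableLock.tryExclusiveLock(2L, 1L));   // child of owner: true
        System.out.println(tableLock.tryExclusiveLock(3L, null)); // unrelated proc: false
    }
}
```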
[jira] [Commented] (HBASE-16691) Optimize KeyOnlyFilter by utilizing KeyOnlyCell
[ https://issues.apache.org/jira/browse/HBASE-16691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523977#comment-15523977 ] Hudson commented on HBASE-16691: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1678 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1678/]) HBASE-16691 Optimize KeyOnlyFilter by utilizing KeyOnlyCell (binlijin) (tedyu: rev 890e3f223f778395ada0c008b90630259b5a7e7f) * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java * (add) hbase-client/src/test/java/org/apache/hadoop/hbase/filter/TestKeyOnlyFilter.java > Optimize KeyOnlyFilter by utilizing KeyOnlyCell > --- > > Key: HBASE-16691 > URL: https://issues.apache.org/jira/browse/HBASE-16691 > Project: HBase > Issue Type: Improvement >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16691-master.patch > > > KeyOnlyFilter#transformCell returns a KeyOnlyCell that has no value, or that > exposes the value length as its value. Currently it copies the whole key into a > new byte[] and allocates a new KeyValue; we can eliminate the copy by using a > wrapping KeyOnlyCell that ignores the cell's value. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
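The wrap-instead-of-copy idea can be sketched like this. The SimpleCell interface below is a simplified stand-in for HBase's Cell interface, so all names here are illustrative only:

```java
public class KeyOnlyDemo {
    interface SimpleCell {
        byte[] getRowArray();
        int getValueLength();
    }

    /** Wrapper that shares the underlying key bytes and hides the value. */
    static final class KeyOnlyCell implements SimpleCell {
        private final SimpleCell delegate;
        KeyOnlyCell(SimpleCell delegate) { this.delegate = delegate; }
        public byte[] getRowArray() { return delegate.getRowArray(); } // no copy
        public int getValueLength() { return 0; } // the value is dropped
    }

    public static void main(String[] args) {
        byte[] row = "row1".getBytes();
        SimpleCell original = new SimpleCell() {
            public byte[] getRowArray() { return row; }
            public int getValueLength() { return 42; }
        };
        SimpleCell keyOnly = new KeyOnlyCell(original);
        // Same backing array (no allocation), but the value is gone.
        System.out.println((keyOnly.getRowArray() == row) + " " + keyOnly.getValueLength());
    }
}
```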
[jira] [Commented] (HBASE-16704) Scan will be broken while working with DBE and KeyValueCodecWithTags
[ https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523978#comment-15523978 ] Hudson commented on HBASE-16704: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1678 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1678/]) HBASE-16704 Scan will be broken while working with DBE and (anoopsamjohn: rev 43f47a8e73792b4934b3b53d0b8ee880b5edd8c8) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java > Scan will be broken while working with DBE and KeyValueCodecWithTags > > > Key: HBASE-16704 > URL: https://issues.apache.org/jira/browse/HBASE-16704 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Yu Sun >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-16704.patch > > > A scan will always break if we set LIMIT to more than 1 with the rs > hbase.client.rpc.codec set to > org.apache.hadoop.hbase.codec.KeyValueCodecWithTags. > How to reproduce: > 1. 1 master + 1 rs, with the codec set to KeyValueCodecWithTags. > 2. create a table table_1024B_30g with 1 cf and only 1 qualifier, then load > some data with YCSB, using DIFF DataBlockEncoding. > 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}; STARTROW can > be set to any valid start row. > 4. the scan fails. > This appears to be a bug in KeyValueCodecWithTags; after some investigation, I > found some keys are not serialized correctly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
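The shared-buffer pitfall behind "keys not serialized correctly" can be illustrated generically. This is not the real BufferedDataBlockEncoder or KeyValueCodecWithTags code — just a minimal sketch of why a cell backed by a decoder's reused buffer gets clobbered by the next decode unless it is deep-copied first:

```java
import java.util.Arrays;

public class SharedBufferPitfall {
    /** Decoder-style reader that reuses one internal buffer across next() calls. */
    static class ReusingDecoder {
        private final byte[] buf = new byte[3];
        byte[] next(String key) {
            byte[] src = key.getBytes();
            System.arraycopy(src, 0, buf, 0, src.length);
            return buf; // same backing array every call
        }
    }

    public static void main(String[] args) {
        ReusingDecoder dec = new ReusingDecoder();
        byte[] first = dec.next("aaa");
        byte[] firstCopy = Arrays.copyOf(first, first.length); // safe: deep copy
        dec.next("bbb"); // clobbers the shared buffer
        // The lazily-held reference now shows the wrong key; the copy is intact.
        System.out.println(new String(first) + " " + new String(firstCopy)); // bbb aaa
    }
}
```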
[jira] [Commented] (HBASE-16694) Reduce garbage for onDiskChecksum in HFileBlock
[ https://issues.apache.org/jira/browse/HBASE-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523797#comment-15523797 ] Andrew Purtell commented on HBASE-16694: If there are no objections I will commit this later today > Reduce garbage for onDiskChecksum in HFileBlock > --- > > Key: HBASE-16694 > URL: https://issues.apache.org/jira/browse/HBASE-16694 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Attachments: HBASE-16694-master.patch > > > Currently, when an HFileBlock is finished, a new byte[] is created for > onDiskChecksum; we can reuse it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16694) Reduce garbage for onDiskChecksum in HFileBlock
[ https://issues.apache.org/jira/browse/HBASE-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523793#comment-15523793 ] Andrew Purtell commented on HBASE-16694: +1 > Reduce garbage for onDiskChecksum in HFileBlock > --- > > Key: HBASE-16694 > URL: https://issues.apache.org/jira/browse/HBASE-16694 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Attachments: HBASE-16694-master.patch > > > Currently, when an HFileBlock is finished, a new byte[] is created for > onDiskChecksum; we can reuse it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
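The reuse pattern the patch aims for can be sketched as follows (names illustrative, not the real HFileBlock fields): keep one checksum buffer and grow it only when a block needs more room, instead of allocating a fresh byte[] per finished block.

```java
public class ChecksumBufferReuse {
    private byte[] onDiskChecksum = new byte[0];

    /** Returns a buffer of at least the requested size, reusing the old one. */
    byte[] checksumBuffer(int neededBytes) {
        if (onDiskChecksum.length < neededBytes) {
            onDiskChecksum = new byte[neededBytes]; // grow only when needed
        }
        return onDiskChecksum;
    }

    public static void main(String[] args) {
        ChecksumBufferReuse blocks = new ChecksumBufferReuse();
        byte[] first = blocks.checksumBuffer(16);
        byte[] second = blocks.checksumBuffer(8); // smaller request: buffer reused
        System.out.println(first == second); // true, no new allocation
    }
}
```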
[jira] [Commented] (HBASE-16649) Truncate table with splits preserved can cause both data loss and truncated data appeared again
[ https://issues.apache.org/jira/browse/HBASE-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523740#comment-15523740 ] Stephen Yuan Jiang commented on HBASE-16649: +1. V2 looks good to me. (The only thing I am unsure of is whether we catch all cases here. For truncate table, the patch would fix the corruption.) I think this change should go to all branches (including non-proc-v2 based branches such as 0.98 and 1.0). > Truncate table with splits preserved can cause both data loss and truncated > data appeared again > --- > > Key: HBASE-16649 > URL: https://issues.apache.org/jira/browse/HBASE-16649 > Project: HBase > Issue Type: Bug >Affects Versions: 1.1.3 >Reporter: Allan Yang >Assignee: Matteo Bertozzi > Attachments: HBASE-16649-v0.patch, HBASE-16649-v1.patch, > HBASE-16649-v2.patch > > > Since truncating a table with splits preserved deletes the hfiles but reuses the > previous regioninfo, it can cause odd behaviors: > - Case 1: *Data appeared after truncate* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write data to 'test', making sure the memstore of 'test' is not empty > 3. truncate 'test' with splits preserved > 4. kill the regionserver hosting the region(s) of 'test' > 5. start the regionserver; now it is time to witness the miracle! the > truncated data appears again in table 'test' > - Case 2: *Data loss* > reproduce procedure: > 1. create a table, let's say 'test' > 2. write some data to 'test', no matter how much > 3. truncate 'test' with splits preserved > 4. restart the regionserver to reset the seqid > 5. write some data, but less than in step 2, since we don't want the seqid to run > past the one from step 2 > 6. kill the regionserver hosting the region(s) of 'test' > 7. restart the regionserver. Congratulations!
the data written in step 5 is now all > lost > *Why?* > For case 1: > since preserving splits in the truncate table procedure does not change the > regioninfo, when log replay happens the 'unflushed' data is restored back > to the region. > For case 2: > the flushedSequenceIdByRegion map is stored in the Master, keyed by the > region's encodedName. Although the table is truncated, the region's name is > not changed since we chose to preserve the splits. So after truncating the > table, the region's sequenceid is reset in the regionserver, but not in the > master. When a flush completes and reports to the master, the master rejects the > update of the sequenceid since the new one is smaller than the old one. The same > happens in log replay: all the edits written in step 5 will be skipped since they > have a smaller seqid -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage
[ https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523710#comment-15523710 ] Hadoop QA commented on HBASE-16608: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 22s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 25m 43s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 51s {color} | {color:red} hbase-server generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 38s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 121m 21s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-server | | | Possible null pointer dereference of iterator in org.apache.hadoop.hbase.regionserver.MemStoreCompactor.createSubstitution() on exception path Dereferenced at MemStoreCompactor.java:iterator in org.apache.hadoop.hbase.regionserver.MemStoreCompactor.createSubstitution() on exception path Dereferenced at MemStoreCompactor.java:[line 262] | | | Switch statement found in org.apache.hadoop.hbase.regionserver.MemStoreCompactor.doCompaction() where one case falls through to the next case At MemStoreCompactor.java:where one case falls through to the next case At MemStoreCompactor.java:[lines 192-220] | | Timed out junit tests | org.apache.hadoop.hbase.client.TestFromClientSide | | | org.apache.hadoop.hbase.client.TestScannerTimeout | | | org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas | | | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient | | | org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830347/HBASE-16608-V01.patch | | JIRA Issue | HBASE-16608 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 5e2b8d03207e 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64
[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments
[ https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523687#comment-15523687 ] ramkrishna.s.vasudevan commented on HBASE-16643: bq. It just init reverse KVHeap.. Within the init we do seekToLast/seekPrevious stuff.. This is what I asked previously. Can we do like initReverseKVHeapIfNeeded do just heap create (as in other method) and the actual methods do seekToLast/seekPrevious work? I think I replied to this in the RB. Yes, this will init the heap after doing the seek; in the above case it is seekToLastRow(). I tried making the change to just create the heap and then let the API call do the actual seek, but it caused test failures, so it requires some more investigation as to why that fails. My guess is that the heap creation depends on what cell the scanners can peek: after doing the seek, the peek (this is specific to reverse scans) gives the ReverseKVHeap the right cells to build from. So if that change has to be done, we need more investigation and it may lead to more changes as part of this patch. Let me know what you think. I believe we could have this committed and address those in another JIRA. 
> Reverse scanner heap creation may not allow MSLAB closure due to improper ref > counting of segments > -- > > Key: HBASE-16643 > URL: https://issues.apache.org/jira/browse/HBASE-16643 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16643.patch, HBASE-16643_1.patch, > HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, > HBASE-16643_5.patch, HBASE-16643_6.patch, HBASE-16643_7.patch, > HBASE-16643_8.patch > > > In the reverse scanner case, > While doing 'initBackwardHeapIfNeeded' in MemstoreScanner for setting the > backward heap, we do a > {code} > if ((backwardHeap == null) && (forwardHeap != null)) { > forwardHeap.close(); > forwardHeap = null; > // before building the heap seek for the relevant key on the scanners, > // for the heap to be built from the scanners correctly > for (KeyValueScanner scan : scanners) { > if (toLast) { > res |= scan.seekToLastRow(); > } else { > res |= scan.backwardSeek(cell); > } > } > {code} > forwardHeap.close(). This would internally decrement the MSLAB ref counter > for the current active segment and snapshot segment. > When the scan is actually closed again we do close() and that will again > decrement the count. Here chances are there that the count would go negative > and hence the actual MSLAB closure that checks for refCount==0 will fail. > Apart from this, when the refCount becomes 0 after the firstClose if any > other thread requests to close the segment, then we will end up in corrupted > segment because the segment could be put back to the MSLAB pool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
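The double-decrement hazard described here can be sketched with an idempotent per-scanner close. The class below is a hypothetical stand-in for the Segment/MSLAB ref-counting, not HBase code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// If close() is not idempotent, closing the forward heap and then closing the
// scanner decrements the MSLAB ref count twice, the count can go negative, and
// the refCount==0 check that gates the real MSLAB closure never fires.
public class RefCountedSegment {
    private final AtomicInteger refCount = new AtomicInteger(1);
    private boolean closedByThisScanner = false;

    /** Idempotent per-scanner close: the second call is a no-op. */
    public synchronized int close() {
        if (closedByThisScanner) {
            return refCount.get(); // already released our reference
        }
        closedByThisScanner = true;
        return refCount.decrementAndGet();
    }

    public static void main(String[] args) {
        RefCountedSegment seg = new RefCountedSegment();
        seg.close();              // forwardHeap.close() releases the reference
        int count = seg.close();  // scan close() must not decrement again
        System.out.println(count); // stays at 0, never goes negative
    }
}
```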
[jira] [Created] (HBASE-16710) Add ZStandard Codec to Compression.java
churro morales created HBASE-16710: -- Summary: Add ZStandard Codec to Compression.java Key: HBASE-16710 URL: https://issues.apache.org/jira/browse/HBASE-16710 Project: HBase Issue Type: Task Affects Versions: 2.0.0 Reporter: churro morales Assignee: churro morales Priority: Minor HADOOP-13578 is adding the ZStandardCodec to hadoop. This is a placeholder to ensure it gets added to hbase once this gets upstream. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16574) Add backup / restore feature to refguide
[ https://issues.apache.org/jira/browse/HBASE-16574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-16574: --- Attachment: hbase_reference_guide.v1.pdf pdf version of refguide corresponding to v1 patch. > Add backup / restore feature to refguide > > > Key: HBASE-16574 > URL: https://issues.apache.org/jira/browse/HBASE-16574 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu > Labels: backup > Attachments: Backup-and-Restore-Apache_19Sep2016.pdf, > HBASE-16574.001.patch, hbase_reference_guide.v1.pdf > > > This issue is to add backup / restore feature description to hbase refguide. > The description should cover: > scenarios where backup / restore is used > backup / restore commands and sample usage > considerations in setup -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16604) Scanner retries on IOException can cause the scans to miss data
[ https://issues.apache.org/jira/browse/HBASE-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523661#comment-15523661 ] ramkrishna.s.vasudevan commented on HBASE-16604: One more thing is that since closeScanner is getting called even on exception the CP hooks preScannerClose and postScannerClose are getting called. Is that expected? > Scanner retries on IOException can cause the scans to miss data > > > Key: HBASE-16604 > URL: https://issues.apache.org/jira/browse/HBASE-16604 > Project: HBase > Issue Type: Bug > Components: regionserver, Scanners >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4 > > Attachments: HBASE-16604-branch-1.3-addendum.patch, > hbase-16604_v1.patch, hbase-16604_v2.patch, hbase-16604_v3.branch-1.patch, > hbase-16604_v3.patch > > > Debugging an ITBLL failure, where the Verify did not "see" all the data in > the cluster, I've noticed that if we end up getting a generic IOException > from the HFileReader level, we may end up missing the rest of the data in the > region. 
I was able to manually test this, and this stack trace helps to > understand what is going on: > {code} > 2016-09-09 16:27:15,633 INFO [hconnection-0x71ad3d8a-shared--pool21-t9] > client.ScannerCallable(376): Open scanner=1 for > scan={"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":2097152,"families":{"testFamily":["testFamily"]},"caching":100,"maxVersions":1,"timeRange":[0,9223372036854775807]} > on region > region=testScanThrowsException,,1473463632707.b2adfb618e5d0fe225c1dc40c0eabfee., > hostname=hw10676,51833,1473463626529, seqNum=2 > 2016-09-09 16:27:15,634 INFO > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] > regionserver.RSRpcServices(2196): scan request:scanner_id: 1 number_of_rows: > 100 close_scanner: false next_call_seq: 0 client_handles_partials: true > client_handles_heartbeats: true renew: false > 2016-09-09 16:27:15,635 INFO > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] > regionserver.RSRpcServices(2510): Rolling back next call seqId > 2016-09-09 16:27:15,635 INFO > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] > regionserver.RSRpcServices(2565): Throwing new > ServiceExceptionjava.io.IOException: Could not reseek > StoreFileScanner[HFileScanner for reader > reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c, > compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, > currentSize=1567264, freeSize=1525578848, maxSize=1527146112, > heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, > multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, > 
lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, > avgValueLen=3, entries=17576, length=866998, > cur=/testFamily:/OLDEST_TIMESTAMP/Minimum/vlen=0/seqid=0] to key > /testFamily:testFamily/LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0 > 2016-09-09 16:27:15,635 DEBUG > [B.fifo.QRpcServer.handler=5,queue=0,port=51833] ipc.CallRunner(110): > B.fifo.QRpcServer.handler=5,queue=0,port=51833: callId: 26 service: > ClientService methodName: Scan size: 26 connection: 192.168.42.75:51903 > java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for > reader > reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c, > compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, > currentSize=1567264, freeSize=1525578848, maxSize=1527146112, > heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, > multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, > lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, > avgValueLen=3, entries=17576, length=866998, > cur=/testFamily:/OLDEST_TIMESTAMP/Minimum/vlen=0/seqid=0] to key > /testFamily:testFamily/LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0 > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:224) >
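The safe client-side behavior the fix aims for — never blindly resume a server-side scanner that just threw, but reopen positioned just past the last row actually received, so no rows are silently skipped — can be sketched with a toy server. This is purely illustrative, not the ClientScanner/RSRpcServices code:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SafeScanRetry {
    /** Simulated server: returns the first row >= startRow, failing once at failAt. */
    static class FlakyServer {
        private boolean failed = false;
        String next(List<String> data, String startRow, String failAt) throws IOException {
            for (String row : data) {
                if (row.compareTo(startRow) >= 0) {
                    if (!failed && row.equals(failAt)) {
                        failed = true;
                        throw new IOException("Could not reseek"); // mid-scan failure
                    }
                    return row;
                }
            }
            return null; // end of region
        }
    }

    /** Client: on IOException, retry from the position after the last verified row. */
    static List<String> scanAll(FlakyServer server, List<String> data, String failAt) {
        List<String> results = new ArrayList<>();
        String startRow = "";
        while (true) {
            String row;
            try {
                row = server.next(data, startRow, failAt);
            } catch (IOException e) {
                continue; // reopen from the same verified position: nothing skipped
            }
            if (row == null) return results;
            results.add(row);
            startRow = row + "\0"; // smallest key strictly greater than row
        }
    }

    public static void main(String[] args) {
        FlakyServer server = new FlakyServer();
        List<String> data = Arrays.asList("aaa", "bbb", "ccc", "ddd");
        // Despite the mid-scan IOException, every row is returned exactly once.
        System.out.println(scanAll(server, data, "ccc"));
    }
}
```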
[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments
[ https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523656#comment-15523656 ] Anoop Sam John commented on HBASE-16643: {code} public boolean seekToLastRow() throws IOException { return initReverseKVHeapIfNeeded(KeyValue.LOWESTKEY, comparator, scanners); } {code} It just inits the reverse KVHeap. Within the init we do the seekToLast/seekPrevious stuff. This is what I asked previously: can initReverseKVHeapIfNeeded do just the heap creation (as in the other method) and let the actual methods do the seekToLast/seekPrevious work? Don't we need the closed state to be checked in some other places? Will another jira around reverse scan use this? Else LGTM. > Reverse scanner heap creation may not allow MSLAB closure due to improper ref > counting of segments > -- > > Key: HBASE-16643 > URL: https://issues.apache.org/jira/browse/HBASE-16643 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16643.patch, HBASE-16643_1.patch, > HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, > HBASE-16643_5.patch, HBASE-16643_6.patch, HBASE-16643_7.patch, > HBASE-16643_8.patch > > > In the reverse scanner case, > While doing 'initBackwardHeapIfNeeded' in MemstoreScanner for setting the > backward heap, we do a > {code} > if ((backwardHeap == null) && (forwardHeap != null)) { > forwardHeap.close(); > forwardHeap = null; > // before building the heap seek for the relevant key on the scanners, > // for the heap to be built from the scanners correctly > for (KeyValueScanner scan : scanners) { > if (toLast) { > res |= scan.seekToLastRow(); > } else { > res |= scan.backwardSeek(cell); > } > } > {code} > forwardHeap.close(). This would internally decrement the MSLAB ref counter > for the current active segment and snapshot segment. 
> When the scan is actually closed again we do close() and that will again > decrement the count. Here chances are there that the count would go negative > and hence the actual MSLAB closure that checks for refCount==0 will fail. > Apart from this, when the refCount becomes 0 after the firstClose if any > other thread requests to close the segment, then we will end up in corrupted > segment because the segment could be put back to the MSLAB pool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
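The closed-flag approach added in the _8 patch can be sketched in plain Java. The classes below (SegmentRef, GuardedScanner) are hypothetical stand-ins for the real MemStoreScanner and segment code, not the HBase API; they only illustrate why guarding the second close() keeps the MSLAB ref counter from going negative:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch (hypothetical classes, not the actual HBase code): a close()
// guarded by a "closed" flag, so a second close() cannot decrement the
// segment refCount again and drive it negative.
class SegmentRef {
    final AtomicInteger refCount = new AtomicInteger(1); // one open scanner

    void decScannerRef() {
        refCount.decrementAndGet();
    }
}

class GuardedScanner {
    private final SegmentRef segment;
    private boolean closed = false;

    GuardedScanner(SegmentRef segment) {
        this.segment = segment;
    }

    void close() {
        if (closed) {
            return; // already closed: never decrement the refCount twice
        }
        closed = true;
        segment.decScannerRef();
    }
}
```

With the guard in place, closing the scanner twice leaves the count at zero instead of below it, so a refCount == 0 check in the eventual MSLAB closure can still pass.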
[jira] [Updated] (HBASE-16704) Scan will be broken while working with DBE and KeyValueCodecWithTags
[ https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-16704: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Pushed to master. Thanks for the reviews. > Scan will be broken while working with DBE and KeyValueCodecWithTags > > > Key: HBASE-16704 > URL: https://issues.apache.org/jira/browse/HBASE-16704 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Yu Sun >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-16704.patch > > > scan will always be broken if we set LIMIT to more than 1 with the rs > hbase.client.rpc.codec set to > org.apache.hadoop.hbase.codec.KeyValueCodecWithTags. > How to reproduce: > 1. 1 master + 1 rs, codec use KeyValueCodecWithTags. > 2. create a table table_1024B_30g, 1 cf with only 1 qualifier, then load > some data with ycsb. Use Diff DataBlockEncoding. > 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}; STARTROW can be > set to any valid start row. > 4. scan failed. > this should be a bug in KeyValueCodecWithTags; after some investigation, I > found that some keys were not serialized correctly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments
[ https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-16643: --- Attachment: HBASE-16643_8.patch One more addition to patch. Added a closed flag in case MemstoreScanner is closed for the second time. > Reverse scanner heap creation may not allow MSLAB closure due to improper ref > counting of segments > -- > > Key: HBASE-16643 > URL: https://issues.apache.org/jira/browse/HBASE-16643 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16643.patch, HBASE-16643_1.patch, > HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, > HBASE-16643_5.patch, HBASE-16643_6.patch, HBASE-16643_7.patch, > HBASE-16643_8.patch > > > In the reverse scanner case, > While doing 'initBackwardHeapIfNeeded' in MemstoreScanner for setting the > backward heap, we do a > {code} > if ((backwardHeap == null) && (forwardHeap != null)) { > forwardHeap.close(); > forwardHeap = null; > // before building the heap seek for the relevant key on the scanners, > // for the heap to be built from the scanners correctly > for (KeyValueScanner scan : scanners) { > if (toLast) { > res |= scan.seekToLastRow(); > } else { > res |= scan.backwardSeek(cell); > } > } > {code} > forwardHeap.close(). This would internally decrement the MSLAB ref counter > for the current active segment and snapshot segment. > When the scan is actually closed again we do close() and that will again > decrement the count. Here chances are there that the count would go negative > and hence the actual MSLAB closure that checks for refCount==0 will fail. > Apart from this, when the refCount becomes 0 after the firstClose if any > other thread requests to close the segment, then we will end up in corrupted > segment because the segment could be put back to the MSLAB pool. 
[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments
[ https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523588#comment-15523588 ] ramkrishna.s.vasudevan commented on HBASE-16643: All the above tests are passing locally. Can I get a +1 here? I have addressed the comments here. TestBlockEvictionClient is being tracked separately. > Reverse scanner heap creation may not allow MSLAB closure due to improper ref > counting of segments > -- > > Key: HBASE-16643 > URL: https://issues.apache.org/jira/browse/HBASE-16643 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16643.patch, HBASE-16643_1.patch, > HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, > HBASE-16643_5.patch, HBASE-16643_6.patch, HBASE-16643_7.patch > > > In the reverse scanner case, > While doing 'initBackwardHeapIfNeeded' in MemstoreScanner for setting the > backward heap, we do a > {code} > if ((backwardHeap == null) && (forwardHeap != null)) { > forwardHeap.close(); > forwardHeap = null; > // before building the heap seek for the relevant key on the scanners, > // for the heap to be built from the scanners correctly > for (KeyValueScanner scan : scanners) { > if (toLast) { > res |= scan.seekToLastRow(); > } else { > res |= scan.backwardSeek(cell); > } > } > {code} > forwardHeap.close(). This would internally decrement the MSLAB ref counter > for the current active segment and snapshot segment. > When the scan is actually closed again we do close() and that will again > decrement the count. Here chances are there that the count would go negative > and hence the actual MSLAB closure that checks for refCount==0 will fail. > Apart from this, when the refCount becomes 0 after the firstClose if any > other thread requests to close the segment, then we will end up in corrupted > segment because the segment could be put back to the MSLAB pool. 
[jira] [Updated] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments
[ https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-16643: --- Status: Open (was: Patch Available) > Reverse scanner heap creation may not allow MSLAB closure due to improper ref > counting of segments > -- > > Key: HBASE-16643 > URL: https://issues.apache.org/jira/browse/HBASE-16643 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16643.patch, HBASE-16643_1.patch, > HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, > HBASE-16643_5.patch, HBASE-16643_6.patch, HBASE-16643_7.patch > > > In the reverse scanner case, > While doing 'initBackwardHeapIfNeeded' in MemstoreScanner for setting the > backward heap, we do a > {code} > if ((backwardHeap == null) && (forwardHeap != null)) { > forwardHeap.close(); > forwardHeap = null; > // before building the heap seek for the relevant key on the scanners, > // for the heap to be built from the scanners correctly > for (KeyValueScanner scan : scanners) { > if (toLast) { > res |= scan.seekToLastRow(); > } else { > res |= scan.backwardSeek(cell); > } > } > {code} > forwardHeap.close(). This would internally decrement the MSLAB ref counter > for the current active segment and snapshot segment. > When the scan is actually closed again we do close() and that will again > decrement the count. Here chances are there that the count would go negative > and hence the actual MSLAB closure that checks for refCount==0 will fail. > Apart from this, when the refCount becomes 0 after the firstClose if any > other thread requests to close the segment, then we will end up in corrupted > segment because the segment could be put back to the MSLAB pool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments
[ https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523576#comment-15523576 ] ramkrishna.s.vasudevan commented on HBASE-16643: I think some commit has caused the TestFlushSnapshotFromClient and TestMobFlushSnapshotFromClient tests to fail as well. Are those failing in the trunk builds also? > Reverse scanner heap creation may not allow MSLAB closure due to improper ref > counting of segments > -- > > Key: HBASE-16643 > URL: https://issues.apache.org/jira/browse/HBASE-16643 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16643.patch, HBASE-16643_1.patch, > HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, > HBASE-16643_5.patch, HBASE-16643_6.patch, HBASE-16643_7.patch > > > In the reverse scanner case, > While doing 'initBackwardHeapIfNeeded' in MemstoreScanner for setting the > backward heap, we do a > {code} > if ((backwardHeap == null) && (forwardHeap != null)) { > forwardHeap.close(); > forwardHeap = null; > // before building the heap seek for the relevant key on the scanners, > // for the heap to be built from the scanners correctly > for (KeyValueScanner scan : scanners) { > if (toLast) { > res |= scan.seekToLastRow(); > } else { > res |= scan.backwardSeek(cell); > } > } > {code} > forwardHeap.close(). This would internally decrement the MSLAB ref counter > for the current active segment and snapshot segment. > When the scan is actually closed again we do close() and that will again > decrement the count. Here chances are there that the count would go negative > and hence the actual MSLAB closure that checks for refCount==0 will fail. > Apart from this, when the refCount becomes 0 after the firstClose if any > other thread requests to close the segment, then we will end up in corrupted > segment because the segment could be put back to the MSLAB pool. 
[jira] [Resolved] (HBASE-16709) Drop hadoop-1.1 profile in pom.xml for master branch
[ https://issues.apache.org/jira/browse/HBASE-16709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HBASE-16709. Resolution: Duplicate > Drop hadoop-1.1 profile in pom.xml for master branch > > > Key: HBASE-16709 > URL: https://issues.apache.org/jira/browse/HBASE-16709 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > > Currently the following modules have hadoop-1.1 profile in pom.xml: > {code} > hadoop-1.1 > ./hbase-client/pom.xml > hadoop-1.1 > ./hbase-common/pom.xml > hadoop-1.1 > ./hbase-examples/pom.xml > hadoop-1.1 > ./hbase-external-blockcache/pom.xml > hadoop-1.1 > ./hbase-it/pom.xml > hadoop-1.1 > ./hbase-prefix-tree/pom.xml > hadoop-1.1 > ./hbase-procedure/pom.xml > hadoop-1.1 > ./hbase-server/pom.xml > hadoop-1.1 > ./hbase-shell/pom.xml > hadoop-1.1 > ./hbase-testing-util/pom.xml > hadoop-1.1 > ./hbase-thrift/pom.xml > {code} > hadoop-1.1 profile can be dropped in the above pom.xml for hbase 2.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-14776) Rewrite smart-apply-patch.sh to use 'git am' or 'git apply' rather than 'patch'
[ https://issues.apache.org/jira/browse/HBASE-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey resolved HBASE-14776. - Resolution: Won't Fix Assignee: (was: Sean Busbey) Fix Version/s: (was: 2.0.0) obviated by our move to yetus. > Rewrite smart-apply-patch.sh to use 'git am' or 'git apply' rather than > 'patch' > --- > > Key: HBASE-14776 > URL: https://issues.apache.org/jira/browse/HBASE-14776 > Project: HBase > Issue Type: Bug > Components: scripts >Affects Versions: 2.0.0 >Reporter: Misty Stanley-Jones > Attachments: HBASE-14776.patch > > > We require patches to be created using 'git format-patch' or 'git diff', so > patches should be tested using 'git am' or 'git apply', not 'patch -pX'. This > causes false errors in the Jenkins patch tester. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-16019) Cut HBase 1.2.2 release
[ https://issues.apache.org/jira/browse/HBASE-16019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey resolved HBASE-16019. - Resolution: Fixed this got finished some time ago (we've even had a 1.2.3 since). not sure what I was waiting for. maybe the announce email? > Cut HBase 1.2.2 release > --- > > Key: HBASE-16019 > URL: https://issues.apache.org/jira/browse/HBASE-16019 > Project: HBase > Issue Type: Task > Components: community >Reporter: Sean Busbey >Assignee: Sean Busbey > Fix For: 1.2.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16709) Drop hadoop-1.1 profile in pom.xml for master branch
[ https://issues.apache.org/jira/browse/HBASE-16709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523534#comment-15523534 ] Sean Busbey commented on HBASE-16709: - Did HBASE-12088 miss these? > Drop hadoop-1.1 profile in pom.xml for master branch > > > Key: HBASE-16709 > URL: https://issues.apache.org/jira/browse/HBASE-16709 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > > Currently the following modules have hadoop-1.1 profile in pom.xml: > {code} > hadoop-1.1 > ./hbase-client/pom.xml > hadoop-1.1 > ./hbase-common/pom.xml > hadoop-1.1 > ./hbase-examples/pom.xml > hadoop-1.1 > ./hbase-external-blockcache/pom.xml > hadoop-1.1 > ./hbase-it/pom.xml > hadoop-1.1 > ./hbase-prefix-tree/pom.xml > hadoop-1.1 > ./hbase-procedure/pom.xml > hadoop-1.1 > ./hbase-server/pom.xml > hadoop-1.1 > ./hbase-shell/pom.xml > hadoop-1.1 > ./hbase-testing-util/pom.xml > hadoop-1.1 > ./hbase-thrift/pom.xml > {code} > hadoop-1.1 profile can be dropped in the above pom.xml for hbase 2.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16704) Scan will be broken while working with DBE and KeyValueCodecWithTags
[ https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523529#comment-15523529 ] ramkrishna.s.vasudevan commented on HBASE-16704: Ok +1 to commit. > Scan will be broken while working with DBE and KeyValueCodecWithTags > > > Key: HBASE-16704 > URL: https://issues.apache.org/jira/browse/HBASE-16704 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Yu Sun >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-16704.patch > > > scan will always be broken if we set LIMIT to more than 1 with the rs > hbase.client.rpc.codec set to > org.apache.hadoop.hbase.codec.KeyValueCodecWithTags. > How to reproduce: > 1. 1 master + 1 rs, codec use KeyValueCodecWithTags. > 2. create a table table_1024B_30g, 1 cf with only 1 qualifier, then load > some data with ycsb. Use Diff DataBlockEncoding. > 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}; STARTROW can be > set to any valid start row. > 4. scan failed. > this should be a bug in KeyValueCodecWithTags; after some investigation, I > found that some keys were not serialized correctly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16660) ArrayIndexOutOfBounds during the majorCompactionCheck in DateTieredCompaction
[ https://issues.apache.org/jira/browse/HBASE-16660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523512#comment-15523512 ] Andrew Purtell commented on HBASE-16660: lgtm [~abhishek.chouhan]. Yes, please provide a patch for master. Happy to take it from there during commit > ArrayIndexOutOfBounds during the majorCompactionCheck in DateTieredCompaction > - > > Key: HBASE-16660 > URL: https://issues.apache.org/jira/browse/HBASE-16660 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 0.98.20 >Reporter: Abhishek Singh Chouhan >Assignee: Abhishek Singh Chouhan > Fix For: 2.0.0, 1.4.0, 0.98.23 > > Attachments: HBASE-16660-0.98.patch > > > We get an ArrayIndexOutOfBoundsException during the major compaction check as > follows > {noformat} > 2016-09-19 05:04:18,287 ERROR [20.compactionChecker] > regionserver.HRegionServer$CompactionChecker - Caught exception > java.lang.ArrayIndexOutOfBoundsException: -2 > at > org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy.shouldPerformMajorCompaction(DateTieredCompactionPolicy.java:159) > at > org.apache.hadoop.hbase.regionserver.HStore.isMajorCompaction(HStore.java:1412) > at > org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker.chore(HRegionServer.java:1532) > at org.apache.hadoop.hbase.Chore.run(Chore.java:80) > at java.lang.Thread.run(Thread.java:745) > {noformat} > This happens due to the following lines in > org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy.selectMajorCompaction > {noformat} > int lowerWindowIndex = Collections.binarySearch(boundaries, > minTimestamp == null ? Long.MAX_VALUE : file.getMinimumTimestamp()); > int upperWindowIndex = Collections.binarySearch(boundaries, > file.getMaximumTimestamp() == null ? Long.MAX_VALUE : > file.getMaximumTimestamp()); > {noformat} > These return negative values if the element is not found and in the case the > values are equal we get the exception. 
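The root cause above is the contract of Collections.binarySearch, which encodes a missing key as (-(insertionPoint) - 1). A minimal sketch, with an illustrative windowIndexFor helper that is not the actual DateTieredCompactionPolicy code, shows the normalization that must happen before the result is used as an index:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Plain-Java illustration of the failure mode (hypothetical helper, not the
// real compaction policy code): Collections.binarySearch returns
// (-(insertionPoint) - 1) when the key is absent, so the raw result must be
// normalized before it is used as a window index.
class WindowIndex {
    // Returns the index of the boundary that starts the window containing
    // the timestamp; "boundaries" must be sorted ascending.
    static int windowIndexFor(List<Long> boundaries, long timestamp) {
        int idx = Collections.binarySearch(boundaries, timestamp);
        if (idx < 0) {
            // Decode the insertion point; the containing window starts at
            // the boundary just before it.
            idx = -idx - 2;
        }
        return idx;
    }
}
```

Without the idx < 0 branch, any timestamp that falls between boundaries yields a negative index, which is exactly how the -2 in the reported ArrayIndexOutOfBoundsException arises.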
[jira] [Created] (HBASE-16709) Drop hadoop-1.1 profile in pom.xml for master branch
Ted Yu created HBASE-16709: -- Summary: Drop hadoop-1.1 profile in pom.xml for master branch Key: HBASE-16709 URL: https://issues.apache.org/jira/browse/HBASE-16709 Project: HBase Issue Type: Bug Reporter: Ted Yu Priority: Minor Currently the following modules have hadoop-1.1 profile in pom.xml: {code} hadoop-1.1 ./hbase-client/pom.xml hadoop-1.1 ./hbase-common/pom.xml hadoop-1.1 ./hbase-examples/pom.xml hadoop-1.1 ./hbase-external-blockcache/pom.xml hadoop-1.1 ./hbase-it/pom.xml hadoop-1.1 ./hbase-prefix-tree/pom.xml hadoop-1.1 ./hbase-procedure/pom.xml hadoop-1.1 ./hbase-server/pom.xml hadoop-1.1 ./hbase-shell/pom.xml hadoop-1.1 ./hbase-testing-util/pom.xml hadoop-1.1 ./hbase-thrift/pom.xml {code} hadoop-1.1 profile can be dropped in the above pom.xml for hbase 2.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16691) Optimize KeyOnlyFilter by utilizing KeyOnlyCell
[ https://issues.apache.org/jira/browse/HBASE-16691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-16691: --- Resolution: Fixed Status: Resolved (was: Patch Available) Thanks for the patch, binlijin. Thanks for the reviews. > Optimize KeyOnlyFilter by utilizing KeyOnlyCell > --- > > Key: HBASE-16691 > URL: https://issues.apache.org/jira/browse/HBASE-16691 > Project: HBase > Issue Type: Improvement >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16691-master.patch > > > In KeyOnlyFilter#transformCell will return a KeyOnlyCell that have no value > or has valueLength as value, current will copy all row keys into a new byte[] > and new a KeyValue, we can eliminate the copy and have a wrap KeyOnlyCell > that ignore the cell's value. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
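The copy elimination described in this issue can be sketched with a deliberately simplified cell interface. SimpleCell and KeyOnlyCellSketch below are made-up names for illustration; the real filter works against org.apache.hadoop.hbase.Cell:

```java
// Simplified sketch of the wrapping idea (hypothetical interface, not the
// real HBase Cell API): instead of copying the row key into a freshly
// allocated KeyValue, wrap the original cell and report an empty value.
interface SimpleCell {
    byte[] getRowArray();
    byte[] getValueArray();
    int getValueLength();
}

class KeyOnlyCellSketch implements SimpleCell {
    private static final byte[] EMPTY = new byte[0];
    private final SimpleCell delegate;

    KeyOnlyCellSketch(SimpleCell delegate) {
        this.delegate = delegate;
    }

    @Override
    public byte[] getRowArray() {
        return delegate.getRowArray(); // key parts pass through without copying
    }

    @Override
    public byte[] getValueArray() {
        return EMPTY; // the value is hidden rather than copied
    }

    @Override
    public int getValueLength() {
        return 0;
    }
}
```

Per cell, the wrapper allocates nothing beyond the wrapper object itself; all key-side accessors simply delegate to the wrapped cell.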
[jira] [Updated] (HBASE-16691) Optimize KeyOnlyFilter by utilizing KeyOnlyCell
[ https://issues.apache.org/jira/browse/HBASE-16691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-16691: --- Summary: Optimize KeyOnlyFilter by utilizing KeyOnlyCell (was: optimize KeyOnlyFilter) > Optimize KeyOnlyFilter by utilizing KeyOnlyCell > --- > > Key: HBASE-16691 > URL: https://issues.apache.org/jira/browse/HBASE-16691 > Project: HBase > Issue Type: Improvement >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16691-master.patch > > > In KeyOnlyFilter#transformCell will return a KeyOnlyCell that have no value > or has valueLength as value, current will copy all row keys into a new byte[] > and new a KeyValue, we can eliminate the copy and have a wrap KeyOnlyCell > that ignore the cell's value. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16653) Backport HBASE-11393 to all branches which support namespace
[ https://issues.apache.org/jira/browse/HBASE-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523436#comment-15523436 ] Hadoop QA commented on HBASE-16653: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 27m 22s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 17 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 19s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 17s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s {color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s {color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 41s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 58s {color} | {color:green} branch-1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 7s {color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s {color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s {color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 43s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 10 line(s) that end in whitespace. Use git apply --whitespace=fix. 
{color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 23m 38s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 23s {color} | {color:red} hbase-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s {color} | {color:green} hbase-protocol in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 59s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 124m 52s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 15s {color} | {color:green} The patch does not generate ASF License warnings. {color} | |
[jira] [Commented] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage
[ https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523399#comment-15523399 ] Anastasia Braginsky commented on HBASE-16608: - I have addressed all [~stack] comments. Thank you, [~stack]! [~anoop.hbase], I have fixed the issue with the boolean useSQM inside MemStoreCompactorIterator (St.Ack also didn't like it :) ), now it elegantly works with two types of iterators. You can take a look on RB. [~ram_krish], we have addressed the issue of lots of blocking writes, that you have raised above. Can you please give it another run and see if it happens or not? The recent patch (that includes it all) is published here and on the review board. I have switched the patch name to HBASE-16608-V* so now all the problems will vanish away! :) :) Waiting for your comments guys! > Introducing the ability to merge ImmutableSegments without copy-compaction or > SQM usage > --- > > Key: HBASE-16608 > URL: https://issues.apache.org/jira/browse/HBASE-16608 > Project: HBase > Issue Type: Sub-task >Reporter: Anastasia Braginsky >Assignee: Anastasia Braginsky > Attachments: HBASE-16417-V02.patch, HBASE-16417-V04.patch, > HBASE-16417-V06.patch, HBASE-16417-V07.patch, HBASE-16417-V08.patch, > HBASE-16417-V10.patch, HBASE-16608-V01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16587) Procedure v2 - Cleanup suspended proc execution
[ https://issues.apache.org/jira/browse/HBASE-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-16587: Resolution: Fixed Status: Resolved (was: Patch Available) > Procedure v2 - Cleanup suspended proc execution > --- > > Key: HBASE-16587 > URL: https://issues.apache.org/jira/browse/HBASE-16587 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 2.0.0 > > Attachments: HBASE-16587-v0.patch, HBASE-16587-v1.patch, > HBASE-16587-v2.patch, HBASE-16587-v3.patch, HBASE-16587-v4.patch > > > for procedures like the assignment or the lock one we need to be able to hold > on locks while suspended. At the moment the way to do that is up to the proc > implementation. This patch moves the logic to the base Procedure and > ProcedureExecutor. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16695) Procedure v2 - Support for parent holding locks
[ https://issues.apache.org/jira/browse/HBASE-16695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-16695: Resolution: Fixed Status: Resolved (was: Patch Available) > Procedure v2 - Support for parent holding locks > --- > > Key: HBASE-16695 > URL: https://issues.apache.org/jira/browse/HBASE-16695 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 2.0.0 > > Attachments: HBASE-16695-v0.patch > > > Add the logic to allow child procs to be executed when the parent is holding > the xlock. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16706) Allow users to have Custom tags on Cells
[ https://issues.apache.org/jira/browse/HBASE-16706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523402#comment-15523402 ] Vrushali C commented on HBASE-16706: Posting as a discussion note: In the next gen Timeline Service (v2) in YARN, we are using cell tags in coprocessors for aggregation. We have custom tags which are not needed in the client but are used in the coprocessor on a single row+column. All versions of cells in a particular row+column are read and then cell tags are used to retain/discard values for next steps in processing. YARN-3901 > Allow users to have Custom tags on Cells > > > Key: HBASE-16706 > URL: https://issues.apache.org/jira/browse/HBASE-16706 > Project: HBase > Issue Type: Improvement >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0 > > > The Codec based strip of tags was done as a temp solution not to pass the > critical system tags from server back to client. This also imposes the > limitation that Tags can not be used by users. Tags are a system side feature > alone. In the past there were some Qs in user@ for using custom tags. > We should allow users to set tags on Cell and pass them while write. Also > these custom tags must be returned back to users (Irrespective of codec and > all). The system tags (like ACL, visibility) should not get transferred btw > client and server. And when the client is run by a super user, we should pass > all tags (including system tags). This way we can make sure that all tags are > passed while replication and also tool like Export gets all tags. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
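The proposed client/server tag policy can be sketched as a filtering step. The tag type constants below are invented for illustration and are not the reserved HBase tag types:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the policy described above: system tags (e.g. ACL,
// visibility) are stripped for ordinary clients, while a superuser (e.g.
// replication or the Export tool) receives every tag.
class TagFilterSketch {
    // Illustrative "system" tag types; the real HBase reserves its own values.
    static final byte ACL_TAG_TYPE = 1;
    static final byte VISIBILITY_TAG_TYPE = 2;

    static List<Byte> tagsForClient(List<Byte> tagTypes, boolean superUser) {
        if (superUser) {
            return tagTypes; // superuser path: pass all tags through
        }
        List<Byte> visible = new ArrayList<>();
        for (byte type : tagTypes) {
            if (type != ACL_TAG_TYPE && type != VISIBILITY_TAG_TYPE) {
                visible.add(type); // custom (user) tags pass through
            }
        }
        return visible;
    }
}
```

The design choice mirrored here is that the filtering lives on the server side of the RPC boundary, so it applies regardless of which codec the client configured.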
[jira] [Updated] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage
[ https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anastasia Braginsky updated HBASE-16608: Attachment: HBASE-16608-V01.patch > Introducing the ability to merge ImmutableSegments without copy-compaction or > SQM usage > --- > > Key: HBASE-16608 > URL: https://issues.apache.org/jira/browse/HBASE-16608 > Project: HBase > Issue Type: Sub-task >Reporter: Anastasia Braginsky >Assignee: Anastasia Braginsky > Attachments: HBASE-16417-V02.patch, HBASE-16417-V04.patch, > HBASE-16417-V06.patch, HBASE-16417-V07.patch, HBASE-16417-V08.patch, > HBASE-16417-V10.patch, HBASE-16608-V01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16703) Explore object pooling of SeekerState
[ https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523359#comment-15523359 ] Andrew Purtell commented on HBASE-16703: Bear in mind, what I am seeing in the JFR trace is that 35% of the time when the JVM goes to allocate a TLAB, it is because we are asking for a SeekerState object; this is the top line of the allocation profile of the RPC workers. Seen in 0.98 up to 1.2. Hence I'm wondering if there is an opportunity for object reuse, of both the SeekerState and its twin byte arrays. > Explore object pooling of SeekerState > - > > Key: HBASE-16703 > URL: https://issues.apache.org/jira/browse/HBASE-16703 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: ramkrishna.s.vasudevan > > > In read workloads, 35% of the allocation pressure produced by servicing RPC > requests, when block encoding is enabled, comes from > BufferedDataBlockEncoder$SeekerState., where we allocate two byte > arrays of INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for > object pooling of SeekerState here. Subsequent code checks whether those byte > arrays are sized sufficiently to handle the incoming data to copy, and the > arrays are resized if needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
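One shape the pooling idea could take is a per-thread free list of SeekerState-like objects whose byte arrays grow on demand, exactly as the existing code already resizes them. A minimal sketch, assuming hypothetical names (`PooledSeekerState`, a 512-byte `INITIAL_KEY_BUFFER_SIZE`) that stand in for, but are not, HBase's actual classes:

```java
import java.util.ArrayDeque;

// Hypothetical sketch of SeekerState pooling (HBASE-16703): each RPC handler
// thread keeps a small free list, so hot read paths reuse objects and their
// backing arrays instead of allocating fresh ones on every seek.
public class PooledSeekerState {
    static final int INITIAL_KEY_BUFFER_SIZE = 512; // illustrative size

    byte[] keyBuffer = new byte[INITIAL_KEY_BUFFER_SIZE];
    byte[] valueBuffer = new byte[INITIAL_KEY_BUFFER_SIZE];

    private static final int MAX_POOLED_PER_THREAD = 16;

    // One pool per handler thread: no locking, no cross-thread sharing.
    private static final ThreadLocal<ArrayDeque<PooledSeekerState>> POOL =
        ThreadLocal.withInitial(ArrayDeque::new);

    /** Take a pooled instance if one is available, else allocate. */
    public static PooledSeekerState borrow() {
        PooledSeekerState s = POOL.get().pollFirst();
        return s != null ? s : new PooledSeekerState();
    }

    /** Return an instance to this thread's pool, capping retained objects. */
    public static void release(PooledSeekerState s) {
        ArrayDeque<PooledSeekerState> pool = POOL.get();
        if (pool.size() < MAX_POOLED_PER_THREAD) {
            pool.addFirst(s);
        } // else drop it and let GC reclaim it
    }

    /** Grow-on-demand, mirroring what the existing resize check does. */
    public void ensureKeyCapacity(int needed) {
        if (keyBuffer.length < needed) {
            keyBuffer = new byte[Math.max(needed, keyBuffer.length * 2)];
        }
    }
}
```

A thread-local pool sidesteps the contention a shared pool would add to the very threads being profiled, at the cost of some retained memory per handler.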
[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments
[ https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523291#comment-15523291 ] Ted Yu commented on HBASE-16643: 77 * Creates either a forward KeyValue heap or reverse KeyValue heap based on the type of scan. For the two-argument MemStoreScanner ctor, it is always forward. > Reverse scanner heap creation may not allow MSLAB closure due to improper ref > counting of segments > -- > > Key: HBASE-16643 > URL: https://issues.apache.org/jira/browse/HBASE-16643 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-16643.patch, HBASE-16643_1.patch, > HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, > HBASE-16643_5.patch, HBASE-16643_6.patch, HBASE-16643_7.patch > > > In the reverse scanner case, > while doing 'initBackwardHeapIfNeeded' in MemStoreScanner to set up the > backward heap, we do > {code} > if ((backwardHeap == null) && (forwardHeap != null)) { > forwardHeap.close(); > forwardHeap = null; > // before building the heap seek for the relevant key on the scanners, > // for the heap to be built from the scanners correctly > for (KeyValueScanner scan : scanners) { > if (toLast) { > res |= scan.seekToLastRow(); > } else { > res |= scan.backwardSeek(cell); > } > } > {code} > Here forwardHeap.close() internally decrements the MSLAB ref counter > for the current active segment and the snapshot segment. > When the scan is actually closed we call close() again, and that decrements > the count a second time. The count can therefore go negative, and the actual > MSLAB closure, which checks for refCount == 0, will fail. > Apart from this, if the refCount becomes 0 after the first close and any > other thread then requests to close the segment, we end up with a corrupted > segment, because the segment could be put back into the MSLAB pool. 
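The double-decrement described above can be guarded against in two complementary ways: make each closer's close() idempotent, and make the ref count itself refuse to go negative. A minimal sketch with hypothetical class names (`RefCountedSegment`, `GuardedScanner`), not HBase's actual Segment/MSLAB code:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch for the HBASE-16643 failure mode: a ref-counted segment
// plus a scanner whose close() runs at most once, so heap-close followed by
// scan-close cannot decrement the MSLAB ref count twice.
public class RefCountedSegment {
    private final AtomicInteger refCount = new AtomicInteger(1); // owner's ref
    private volatile boolean recycled = false;

    public void retain() {
        refCount.incrementAndGet();
    }

    /** Returns true when this call released the last reference. */
    public boolean release() {
        int now = refCount.decrementAndGet();
        if (now < 0) {
            // This is exactly the corruption the issue describes; fail loudly.
            throw new IllegalStateException("segment released more times than retained");
        }
        if (now == 0) {
            recycled = true; // safe: exactly one thread observes the 0
        }
        return now == 0;
    }

    public boolean isRecycled() {
        return recycled;
    }
}

// Scanner-side guard: compareAndSet makes a second close() a no-op.
class GuardedScanner {
    private final RefCountedSegment segment;
    private final AtomicBoolean closed = new AtomicBoolean(false);

    GuardedScanner(RefCountedSegment segment) {
        this.segment = segment;
        segment.retain(); // the scanner holds its own reference
    }

    public void close() {
        if (closed.compareAndSet(false, true)) {
            segment.release();
        }
    }
}
```

With this shape, rebuilding the heap (which closes the forward heap) and later closing the scan each release at most one reference, and only the final release can recycle the segment back to the pool.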
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages
Nick Dimiduk created HBASE-16708: Summary: Expose endpoint Coprocessor name in "responseTooSlow" log messages Key: HBASE-16708 URL: https://issues.apache.org/jira/browse/HBASE-16708 Project: HBase Issue Type: Improvement Reporter: Nick Dimiduk Fix For: 1.1.2 Operational diagnostics of a Phoenix install would be easier if we included which endpoint coprocessor was being called in this responseTooSlow WARN message. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments
[ https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523238#comment-15523238 ] Hadoop QA commented on HBASE-16643: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 24s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 26m 26s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 3s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 144m 26s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush | | | hadoop.hbase.client.TestBlockEvictionFromClient | | Timed out junit tests | org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot | | | org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient | | | org.apache.hadoop.hbase.client.TestHCM | | | org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient | | | org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830284/HBASE-16643_7.patch | | JIRA Issue | HBASE-16643 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 4fc29757887f 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 5f7e642 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/3717/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/3717/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/3717/testReport/ | |
[jira] [Commented] (HBASE-16698) Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload
[ https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523055#comment-15523055 ] Yu Li commented on HBASE-16698: --- bq. Why not? If a false positive and you can't clean it up... Because doMiniBatchMutate is a big and critical method, I'm afraid adding such a suppression would make us ignore real bugs in future changes... Is this a valid concern, or should I still add the suppression? [~stack] bq. On the patch, I'd be good w/ it going in as off by default in branch-1 and on by default in master branch. OK, let me prepare a branch-1 patch > Performance issue: handlers stuck waiting for CountDownLatch inside > WALKey#getWriteEntry under high writing workload > > > Key: HBASE-16698 > URL: https://issues.apache.org/jira/browse/HBASE-16698 > Project: HBase > Issue Type: Improvement > Components: Performance >Affects Versions: 1.1.6, 1.2.3 >Reporter: Yu Li >Assignee: Yu Li > Attachments: HBASE-16698.patch, HBASE-16698.v2.patch, > hadoop0495.et2.jstack > > > As titled, in our production environment we observed 98 out of 128 handlers > stuck waiting for the CountDownLatch {{seqNumAssignedLatch}} inside > {{WALKey#getWriteEntry}} under a high writing workload. > After digging into the problem, we found that it is mainly caused by > advancing the mvcc in the append logic. Below is some detailed analysis: > Under the current branch-1 code logic, all batch puts call > {{WALKey#getWriteEntry}} after appending the edit to the WAL, and > {{seqNumAssignedLatch}} is only released when the corresponding append call is > handled by RingBufferEventHandler (see {{FSWALEntry#stampRegionSequenceId}}). > Because we currently use a single event handler for the ring buffer, the > append calls are handled one by one (actually, a lot of our current logic > depends on this sequential handling), and this becomes a bottleneck > under a high writing workload. 
> The worst part is that by default we only use one WAL per RS, so appends on > all regions are handled sequentially, which causes contention among > different regions... > To fix this, we could make use of the "sequential appends" mechanism: > we could grab the WriteEntry before publishing the append onto the ring buffer > and use it as the sequence id, except that we need to add a lock to make "grab > WriteEntry" and "append edit" a single transaction. This will still cause contention > within a region but avoids contention between different regions. This > solution has already been verified in our online environment and proved > effective. > Notice that for the master (2.0) branch, since we already changed the write > pipeline to sync before writing the memstore (HBASE-15158), this issue only > exists for the ASYNC_WAL write scenario. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
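The proposed fix, grabbing the sequence id under a per-region lock so "assign seqId" and "publish append" form one atomic step, can be sketched as follows. This is a simplified model, not HBase's WAL code: `RegionSequencer` and its nested `WriteEntry` are hypothetical stand-ins for the per-region mvcc/WALKey machinery.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the HBASE-16698 idea: assign the sequence id BEFORE publishing to
// the ring buffer, under a per-region lock, so handlers no longer block on a
// latch waiting for the single ring-buffer consumer. One sequencer per region
// keeps contention within a region instead of across the whole WAL.
public class RegionSequencer {
    private final AtomicLong nextSeqId = new AtomicLong(1);
    private final ReentrantLock appendLock = new ReentrantLock();

    /** Hypothetical stand-in for the WriteEntry inside WALKey. */
    public static final class WriteEntry {
        public final long seqId;
        WriteEntry(long seqId) {
            this.seqId = seqId;
        }
    }

    /**
     * Atomically assigns a sequence id and publishes the append, so the ids
     * observed by the ring buffer are in the same order they were assigned.
     */
    public WriteEntry append(Runnable publishToRingBuffer) {
        appendLock.lock(); // contention is scoped to this region only
        try {
            WriteEntry entry = new WriteEntry(nextSeqId.getAndIncrement());
            publishToRingBuffer.run(); // publish while still holding the id
            return entry;              // caller proceeds without any latch
        } finally {
            appendLock.unlock();
        }
    }
}
```

The lock is what makes "grab WriteEntry" and "append edit" a single transaction; without it, two handlers could publish appends in an order different from their sequence ids, and the ring-buffer consumer would see out-of-order ids.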
[jira] [Commented] (HBASE-16682) Fix Shell tests failure. NoClassDefFoundError for MiniKdc
[ https://issues.apache.org/jira/browse/HBASE-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523018#comment-15523018 ] Hudson commented on HBASE-16682: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1676 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1676/]) HBASE-16682 Fix Shell tests failure. NoClassDefFoundError for MiniKdc. (appy: rev 5f7e642fed2e393831f630233e93bd20801ec70a) * (edit) hbase-shell/pom.xml * (edit) hbase-testing-util/pom.xml > Fix Shell tests failure. NoClassDefFoundError for MiniKdc > - > > Key: HBASE-16682 > URL: https://issues.apache.org/jira/browse/HBASE-16682 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0 > > Attachments: HBASE-16682.master.001.patch, > HBASE-16682.master.002.patch, HBASE-16682.master.003.patch > > > Stacktrace > {noformat} > java.lang.NoClassDefFoundError: org/apache/hadoop/minikdc/MiniKdc > at java.lang.Class.getDeclaredMethods0(Native Method) > at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) > at java.lang.Class.getDeclaredMethods(Class.java:1975) > at org.jruby.javasupport.JavaClass.getMethods(JavaClass.java:2110) > at org.jruby.javasupport.JavaClass.setupClassMethods(JavaClass.java:955) > at org.jruby.javasupport.JavaClass.access$700(JavaClass.java:99) > at > org.jruby.javasupport.JavaClass$ClassInitializer.initialize(JavaClass.java:650) > at org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:689) > at org.jruby.javasupport.Java.createProxyClass(Java.java:526) > at org.jruby.javasupport.Java.getProxyClass(Java.java:455) > at org.jruby.javasupport.Java.getInstance(Java.java:364) > at > org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:166) > at > org.jruby.javasupport.JavaEmbedUtils.javaToRuby(JavaEmbedUtils.java:291) > at > org.jruby.embed.variable.AbstractVariable.updateByJavaObject(AbstractVariable.java:81) > at > org.jruby.embed.variable.GlobalVariable.(GlobalVariable.java:69) > at > 
org.jruby.embed.variable.GlobalVariable.getInstance(GlobalVariable.java:60) > at > org.jruby.embed.variable.VariableInterceptor.getVariableInstance(VariableInterceptor.java:97) > at org.jruby.embed.internal.BiVariableMap.put(BiVariableMap.java:321) > at org.jruby.embed.ScriptingContainer.put(ScriptingContainer.java:1123) > at > org.apache.hadoop.hbase.client.AbstractTestShell.setUpBeforeClass(AbstractTestShell.java:61) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at org.junit.runners.Suite.runChild(Suite.java:128) > at org.junit.runners.Suite.runChild(Suite.java:27) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at org.junit.runner.JUnitCore.run(JUnitCore.java:137) > at org.junit.runner.JUnitCore.run(JUnitCore.java:115) > at > org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:108) > at > 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:78) > at > org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:54) > at > org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:144) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155) > at >
[jira] [Commented] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator
[ https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15523019#comment-15523019 ] Hudson commented on HBASE-16705: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1676 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1676/]) HBASE-16705 Eliminate long to Long auto boxing in LongComparator. (anoopsamjohn: rev da37fd9cdc9cd3bffa6a863be45dff4ba49be89e) * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/filter/LongComparator.java > Eliminate long to Long auto boxing in LongComparator > > > Key: HBASE-16705 > URL: https://issues.apache.org/jira/browse/HBASE-16705 > Project: HBase > Issue Type: Improvement > Components: Filters >Affects Versions: 2.0.0 >Reporter: binlijin >Assignee: binlijin >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16705-master.patch > > > LongComparator > @Override > public int compareTo(byte[] value, int offset, int length) { > Long that = Bytes.toLong(value, offset, length); > return this.longValue.compareTo(that); > } > Every call converts a long to a Long, which is not necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
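The fix boils down to comparing primitives with the static `Long.compare(long, long)` instead of boxing into a `Long` and calling `compareTo`. A self-contained sketch (the class name is illustrative, not HBase's actual LongComparator, and the byte-array decoding step is omitted):

```java
// Before: Long that = Bytes.toLong(value, offset, length);
//         return this.longValue.compareTo(that);   // boxes a Long per call
// After:  Long.compare(...) compares primitives with no allocation at all.
public class LongComparatorDemo {
    private final long longValue;

    public LongComparatorDemo(long longValue) {
        this.longValue = longValue;
    }

    /** Compare the stored primitive against another primitive: no boxing. */
    public int compareTo(long that) {
        return Long.compare(this.longValue, that);
    }
}
```

`Long.compare` has been in the JDK since Java 7, so the change is a pure allocation win with identical comparison semantics.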