[jira] [Commented] (HBASE-17863) Procedure V2: Some cleanup around Procedure.isFinished() and procedure executor
[ https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954586#comment-15954586 ] stack commented on HBASE-17863: --- This failed [~uagashe] Results : Tests in error: TestAcidGuarantees.testMixedAtomicity:461->runTestAtomicity:348->runTestAtomicity:357->runTestAtomicity:421 ? Runtime Tests run: 3449, Failures: 0, Errors: 1, Skipped: 51 It fails often enough (we need to dig in... ) so let me retry to see if it is related. > Procedure V2: Some cleanup around Procedure.isFinished() and procedure > executor > --- > > Key: HBASE-17863 > URL: https://issues.apache.org/jira/browse/HBASE-17863 > Project: HBase > Issue Type: Bug > Components: proc-v2 >Reporter: Umesh Agashe >Assignee: Umesh Agashe > Attachments: HBASE-17863.v1.patch, HBASE-17863.v2.patch, > HBASE-17863.v3.patch, HBASE-17863.v3.patch > > > Clean up around isFinished() and procedure executor -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17863) Procedure V2: Some cleanup around Procedure.isFinished() and procedure executor
[ https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-17863: -- Attachment: HBASE-17863.v3.patch Retry > Procedure V2: Some cleanup around Procedure.isFinished() and procedure > executor > --- > > Key: HBASE-17863 > URL: https://issues.apache.org/jira/browse/HBASE-17863 > Project: HBase > Issue Type: Bug > Components: proc-v2 >Reporter: Umesh Agashe >Assignee: Umesh Agashe > Attachments: HBASE-17863.v1.patch, HBASE-17863.v2.patch, > HBASE-17863.v3.patch, HBASE-17863.v3.patch > > > Clean up around isFinished() and procedure executor -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954585#comment-15954585 ] Hadoop QA commented on HBASE-17861: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 52s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 49s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} branch-1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 55s {color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 14m 52s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 82m 58s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 122m 45s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:e01ee2f | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861809/HBASE-17861.branch-1.V4.patch | | JIRA Issue | HBASE-17861 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 069d176cb5e4 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Resolved] (HBASE-17868) Backport HBASE-10205 to branch-1.3
[ https://issues.apache.org/jira/browse/HBASE-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan resolved HBASE-17868. Resolution: Duplicate Dup of HBASE-15691. > Backport HBASE-10205 to branch-1.3 > -- > > Key: HBASE-17868 > URL: https://issues.apache.org/jira/browse/HBASE-17868 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 1.3.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 1.3.1 > > > I got a similar ConcurrentModificationException with hbase-1.3.0 while > working with bucket cache. On verifying, it seems the fix has not been added to > hbase-1.3.0. > We need to back port it to hbase-1.3 and to other branches wherever it was not > applied. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17868) Backport HBASE-10205 to branch-1.3
[ https://issues.apache.org/jira/browse/HBASE-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954580#comment-15954580 ] ramkrishna.s.vasudevan commented on HBASE-17868: Thanks for that note [~andrew.purt...@gmail.com]. Let me close this, mark it as a dup, and go back to HBASE-15691. > Backport HBASE-10205 to branch-1.3 > -- > > Key: HBASE-17868 > URL: https://issues.apache.org/jira/browse/HBASE-17868 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 1.3.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 1.3.1 > > > I got a similar ConcurrentModificationException with hbase-1.3.0 while > working with bucket cache. On verifying, it seems the fix has not been added to > hbase-1.3.0. > We need to back port it to hbase-1.3 and to other branches wherever it was not > applied. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
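As an aside, the failure mode behind HBASE-10205/HBASE-15691 can be illustrated in a few lines. This is a hedged, self-contained sketch and not the actual BucketCache code: a plain HashMap fails fast with ConcurrentModificationException when it is structurally modified during iteration, while a ConcurrentHashMap's weakly consistent iterators tolerate concurrent mutation. The class and method names here are illustrative only.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CmeSketch {
    // Mutate the map while iterating its key set; returns true if a
    // ConcurrentModificationException was thrown.
    static boolean mutateWhileIterating(Map<Integer, String> map) {
        map.put(1, "a");
        map.put(2, "b");
        try {
            for (Integer key : map.keySet()) {
                map.put(key + 10, "new"); // structural change mid-iteration
            }
            return false; // iteration completed without an exception
        } catch (ConcurrentModificationException e) {
            return true;  // plain HashMap fails fast
        }
    }

    public static void main(String[] args) {
        // HashMap throws; ConcurrentHashMap's weakly consistent iterator does not.
        assert mutateWhileIterating(new HashMap<>());
        assert !mutateWhileIterating(new ConcurrentHashMap<>());
        System.out.println("ok");
    }
}
```

Swapping in a concurrent collection (or copying before iterating) is the general shape of fix such reports lead to; the actual change applied in HBASE-10205 is in the linked issue.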
[jira] [Commented] (HBASE-17857) Remove IS annotations from IA.Public classes
[ https://issues.apache.org/jira/browse/HBASE-17857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954577#comment-15954577 ] Jerry He commented on HBASE-17857: -- Is it the case that, from now on, we will intentionally mark unstable new-feature APIs as IA.Private? > Remove IS annotations from IA.Public classes > > > Key: HBASE-17857 > URL: https://issues.apache.org/jira/browse/HBASE-17857 > Project: HBase > Issue Type: Sub-task > Components: API >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-17857.patch, HBASE-17857-v1.patch, > HBASE-17857-v2.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it
[ https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954573#comment-15954573 ] ramkrishna.s.vasudevan commented on HBASE-16438: IMHO the whole new cell should come only when we have CellChunkMap. The cell in the current patch is fine, as it does not add much overhead except for adding an entry in the ChunkID map. But the new cell, where the seqId has to be embedded in the cell, should come only when moving over to the CellChunk representation. In the default memstore case it is unwanted. > Create a cell type so that chunk id is embedded in it > - > > Key: HBASE-16438 > URL: https://issues.apache.org/jira/browse/HBASE-16438 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Attachments: HBASE-16438_1.patch, > HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, > HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, > HBASE-16438_8_ChunkCreatorwrappingChunkPool_withchunkRef.patch, > HBASE-16438_9_ChunkCreatorwrappingChunkPool_withchunkRef.patch, > HBASE-16438.patch, MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, > MemstoreChunkCell_trunk.patch > > > For CellChunkMap we may need a cell such that the id of the chunk out of > which it was created is embedded in it, so that when doing flattening > we can use the chunk id as metadata. More details will follow once the > initial tasks are completed. > Why we need to embed the chunkid in the Cell is described by [~anastas] in > this remark over in parent issue > https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17849) PE tool randomness is not totally random
[ https://issues.apache.org/jira/browse/HBASE-17849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954571#comment-15954571 ] ramkrishna.s.vasudevan commented on HBASE-17849: I tried out the patch. Seems to work fine. So at least for randomReads and randomSeekScan we can now have a totally random way of selecting rows out of the total data set. Any chance of reviews here? > PE tool randomness is not totally random > > > Key: HBASE-17849 > URL: https://issues.apache.org/jira/browse/HBASE-17849 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: HBASE-17849.patch > > > Recently we were using the PE tool for doing some bucket cache related > performance tests. One thing that we noted was that the way the random read > works is not totally random. > Suppose we load 200G of data using the --size param and then use --rows=50 > to do the randomRead. The assumption was that among the 200G of data it would > randomly generate 50 row keys to read. > But it so happens that the PE tool generates random rows only from the set of > row keys that falls within the first 50 rows. > This was quite evident when we tried to use HBASE-15314 in our testing. > Suppose we split the bucket cache of size 200G into 2 files, each 100G: the > randomReads with --rows=50 always land in the first file and never in the > 2nd file. Better to make PE purely random. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
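The bias described above boils down to which upper bound the random generator is given. The following is a minimal sketch, not the actual PerformanceEvaluation code, with hypothetical method names: the biased variant draws indexes only from [0, rows), i.e. the first --rows keys, while a purely random variant draws from the full loaded keyspace of totalRows keys.

```java
import java.util.Random;

public class RandomRowSketch {
    // Biased: every read lands inside [0, rows), i.e. the first `rows` keys
    // of the data set, regardless of how much data was loaded.
    static int biasedIndex(Random rand, int rows) {
        return rand.nextInt(rows);
    }

    // Purely random: each of the `rows` reads may pick any index in the
    // full data set of `totalRows` keys.
    static int randomIndex(Random rand, int totalRows) {
        return rand.nextInt(totalRows);
    }

    public static void main(String[] args) {
        Random rand = new Random(42);
        int totalRows = 1_000_000, reads = 50;
        for (int i = 0; i < reads; i++) {
            assert biasedIndex(rand, reads) < reads;         // never leaves the first 50 keys
            assert randomIndex(rand, totalRows) < totalRows; // may land anywhere in the data set
        }
        System.out.println("ok");
    }
}
```

With the biased form, a bucket cache split across two files will only ever see hits in the region holding the first 50 rows, which matches the symptom reported in the issue.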
[jira] [Commented] (HBASE-17871) scan#setBatch(int) call leads wrong result of VerifyReplication
[ https://issues.apache.org/jira/browse/HBASE-17871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954567#comment-15954567 ] Hadoop QA commented on HBASE-17871: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 44s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 35m 53s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 120m 4s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 171m 1s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861798/HBASE-17871.master.001.patch | | JIRA Issue | HBASE-17871 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 52e9578b5d5a 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / e916b79 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/6306/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/6306/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/6306/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > scan#setBatch(int) call leads wrong result of VerifyReplication > --- > > Key: HBASE-17871 > URL: https://issues.apache.org/jira/browse/HBASE-17871 >
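For context on the HBASE-17871 symptom, the mechanism can be sketched without a cluster. This is a hedged illustration, not VerifyReplication's code: when scan#setBatch(int) caps the number of cells per Result, one wide row comes back as several partial Results, so any verifier that treats each Result as a whole row compares the wrong units. The class and method names below are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // Split a row's `cellsPerRow` cells into Result-sized chunks of at most
    // `batch` cells, mimicking what scan#setBatch(int) does to a wide row.
    static List<Integer> resultsForRow(int cellsPerRow, int batch) {
        List<Integer> results = new ArrayList<>();
        for (int left = cellsPerRow; left > 0; left -= batch) {
            results.add(Math.min(batch, left));
        }
        return results;
    }

    public static void main(String[] args) {
        // One 10-cell row with batch=3 arrives as four results (3+3+3+1):
        // a verifier counting results as rows would see four "rows", not one.
        assert BatchSketch.resultsForRow(10, 3).size() == 4;
        // Without a batch limit the same row is a single result.
        assert BatchSketch.resultsForRow(10, Integer.MAX_VALUE).size() == 1;
        System.out.println("ok");
    }
}
```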
[jira] [Commented] (HBASE-17863) Procedure V2: Some cleanup around Procedure.isFinished() and procedure executor
[ https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954549#comment-15954549 ] Hadoop QA commented on HBASE-17863: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 39s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 59s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 43s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 56s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 33s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 33m 41s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 55s {color} | {color:green} hbase-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s {color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 50s {color} | {color:green} hbase-procedure in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 123m 0s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 7s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 197m 14s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861791/HBASE-17863.v3.patch | | JIRA Issue | HBASE-17863 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc | | uname | Linux 18c422814f5e 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | |
[jira] [Commented] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit
[ https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954538#comment-15954538 ] Hudson commented on HBASE-16780: SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2795 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2795/]) HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where (stack: rev e916b79db58bb9be806a833b2c0e675f1136c15a) * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/BoolValue.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Timestamp.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/ipc/protobuf/generated/TestProtos.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/LoadBalancerProtos.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/DoubleValueOrBuilder.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/StringValueOrBuilder.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/TimestampOrBuilder.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Option.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/DescriptorProtos.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/BytesValueOrBuilder.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/MapEntry.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/CellProtos.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/BytesValue.java * (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/SnapshotProtos.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/GeneratedMessageV3.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Type.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/BoolValueOrBuilder.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Method.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Duration.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/FieldMaskProto.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Struct.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Value.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/NullValue.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Descriptors.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/ExtensionRegistryLite.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Any.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/EnumValue.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/FieldSet.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/StringValue.java * (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/MixinOrBuilder.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/ByteBufferWriter.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/LazyFieldLite.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/GeneratedMessageLite.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/MethodOrBuilder.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/ValueOrBuilder.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/CodedInputStream.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/UnknownFieldSet.java * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Enum.java * (edit)
[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it
[ https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954534#comment-15954534 ] Anoop Sam John commented on HBASE-16438: On which chunk the seqId lives in: we should take this call carefully. One disadvantage of having it in the data chunk is that, when the Cell is in the active segment (not ChunkMapped) and a parallel read happens, we will have to match its seqId against the read point of that read request. For this, if the seqId is in the Cell object itself as a long field, we have that value ready for compare. If instead we moved it out as 8 bytes after the key and value bytes in the data chunk, we would have to decode the seqId from those 8 bytes, and this would happen for every cell. Moreover, this MSLAB copy is a top-level step which happens even for DefaultMemstore. So we may end up doing this encoding of 8 bytes in the chunk for all cases, including when there is no in-memory compaction and no chunk-map based flush. In all such cases this impact will come, so please be careful about making such a choice. > Create a cell type so that chunk id is embedded in it > - > > Key: HBASE-16438 > URL: https://issues.apache.org/jira/browse/HBASE-16438 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Attachments: HBASE-16438_1.patch, > HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, > HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, > HBASE-16438_8_ChunkCreatorwrappingChunkPool_withchunkRef.patch, > HBASE-16438_9_ChunkCreatorwrappingChunkPool_withchunkRef.patch, > HBASE-16438.patch, MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, > MemstoreChunkCell_trunk.patch > > > For CellChunkMap we may need a cell such that the id of the chunk out of > which it was created is embedded in it, so that when doing flattening > we can use the chunk id as metadata. More details will follow once the > initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in > this remark over in parent issue > https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
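The trade-off Anoop describes above can be made concrete with a small sketch. This is illustrative only, assuming hypothetical names rather than HBase's actual Cell classes: a seqId held as a long field on the cell object is ready for the read-point comparison, while a seqId serialized as 8 bytes after the key/value bytes in the chunk must be decoded on every visibility check.

```java
import java.nio.ByteBuffer;

public class SeqIdSketch {
    // Variant 1: seqId kept as a long field on the cell object.
    // The read-point check is a direct comparison, no decode.
    static boolean visibleByField(long seqId, long readPoint) {
        return seqId <= readPoint;
    }

    // Variant 2: seqId serialized as 8 bytes at `offset` inside the chunk
    // (conceptually, after the key and value bytes). Every read-point check
    // pays an extra 8-byte decode.
    static boolean visibleByChunk(ByteBuffer chunk, int offset, long readPoint) {
        return chunk.getLong(offset) <= readPoint;
    }

    public static void main(String[] args) {
        ByteBuffer chunk = ByteBuffer.allocate(16);
        chunk.putLong(8, 42L); // seqId written into the chunk at offset 8
        assert SeqIdSketch.visibleByField(42L, 100L);
        assert SeqIdSketch.visibleByChunk(chunk, 8, 100L);
        assert !SeqIdSketch.visibleByChunk(chunk, 8, 10L); // above the read point
        System.out.println("ok");
    }
}
```

Per the comment, since the MSLAB copy happens even for DefaultMemstore, variant 2 would impose that per-cell decode on configurations that never use the CellChunkMap representation at all.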
[jira] [Commented] (HBASE-17863) Procedure V2: Some cleanup around Procedure.isFinished() and procedure executor
[ https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954529#comment-15954529 ] Hadoop QA commented on HBASE-17863: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 14s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 19s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 39s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 0s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 33m 28s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s {color} | {color:green} hbase-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s {color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 59s {color} | {color:green} hbase-procedure in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 115m 16s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 55s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 185m 21s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861788/HBASE-17863.v2.patch | | JIRA Issue | HBASE-17863 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc | | uname | Linux 54ee3f1ad476 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954524#comment-15954524 ] Yi Liang commented on HBASE-17861: -- New patch carries Ted's review comments. > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861.branch-1.V3.patch, > HBASE-17861.branch-1.V4.patch, HBASE-17861-V1.patch > > > Found some issues when setting up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region servers are shut down when I add the following config to > hbase-site.xml: > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > The error is below: > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that when HBase enables secure bulk load, it creates a > folder in S3 but cannot set the above permission, because in S3 all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See "Object stores have different authorization > models" in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-17861: - Attachment: HBASE-17861.branch-1.V4.patch -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954519#comment-15954519 ] Ted Yu commented on HBASE-17861: I don't think the failure of TestReplicasClient had anything to do with bulk load. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954512#comment-15954512 ] Yi Liang edited comment on HBASE-17861 at 4/4/17 3:25 AM: -- OK, let me check whether there are too many schemes for these three filesystems. Also, is hadoop.hbase.client.TestReplicasClient a flaky test? I have seen it fail many times. was (Author: easyliangjob): ok, let me check if there are too many schemes -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954512#comment-15954512 ] Yi Liang commented on HBASE-17861: -- ok, let me check if there are too many schemes -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954506#comment-15954506 ] Ted Yu commented on HBASE-17861: How about: {code} Set h = new HashSet(Arrays.asList("s3a", "s3", "s3n", "wasb", "wasbs", ...)); {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
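Ted's suggestion above amounts to enumerating the object-store schemes that cannot honor POSIX permissions and checking the configured root-dir scheme against that set. Here is a minimal, self-contained sketch of that idea; the class and method names and the exact scheme list are illustrative, not taken from the patch:

```java
import java.net.URI;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SchemeCheck {
    // Illustrative list of object-store schemes with no POSIX permission support.
    static final Set<String> FS_NO_PERMISSION_SCHEMES =
        new HashSet<>(Arrays.asList("s3a", "s3", "s3n", "wasb", "wasbs"));

    // True when the root dir lives on a store where chmod-style checks must be skipped.
    static boolean skipPermissionCheck(String rootDir) {
        String scheme = URI.create(rootDir).getScheme();
        return scheme != null && FS_NO_PERMISSION_SCHEMES.contains(scheme.toLowerCase());
    }

    public static void main(String[] args) {
        System.out.println(skipPermissionCheck("s3a://bucket/hbase"));   // true
        System.out.println(skipPermissionCheck("hdfs://nn:8020/hbase")); // false
    }
}
```

Parsing the exact scheme out of the URI is what makes a plain `Set.contains` sufficient here.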
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954502#comment-15954502 ] Yi Liang commented on HBASE-17861: -- Hi Ted, thanks for the review. The reasons I did not use a Set are: (1) In Java, initializing a Set still means converting an array, as below, or calling add() multiple times, which does not seem very efficient: Set h = new HashSet(Arrays.asList("a", "b")); or h.add(a); h.add(b); (2) In the check condition (scheme.startsWith(FsNotSupportPermission[i])) I use the startsWith() method, since there are schemes like s3a, s3, s3n, wasb, wasbs, and so on; the Set.contains(FsNotSupportPermission[i]) method does not seem to work well in our case. Let me know if you have a better way to implement it, thanks :) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
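The trade-off the two comments are discussing can be shown in a few lines: prefix matching with startsWith also accepts schemes nobody listed, while an exact Set lookup only matches schemes that were enumerated. A runnable sketch (the scheme lists here are illustrative, not the patch's actual constants):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PrefixVsExact {
    // Prefix approach: a short list of stems matched with startsWith.
    static final String[] PREFIXES = {"s3", "wasb"};
    // Exact approach: every known scheme enumerated up front.
    static final Set<String> EXACT =
        new HashSet<>(Arrays.asList("s3", "s3a", "s3n", "wasb", "wasbs"));

    static boolean prefixMatch(String scheme) {
        for (String p : PREFIXES) {
            if (scheme.startsWith(p)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Both approaches agree on the known object-store schemes...
        System.out.println(prefixMatch("s3a") + " " + EXACT.contains("s3a")); // true true
        // ...but prefix matching also accepts unknown schemes such as "s3x".
        System.out.println(prefixMatch("s3x") + " " + EXACT.contains("s3x")); // true false
    }
}
```

Which behavior is correct depends on whether future s3*/wasb* variants should be caught automatically (startsWith) or added deliberately (an exact set).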
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954497#comment-15954497 ] Hadoop QA commented on HBASE-17861: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} branch-1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s {color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 14m 44s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 25s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 112m 30s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.client.TestReplicasClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861789/HBASE-17861.branch-1.V3.patch | | JIRA Issue | HBASE-17861 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux a858e26b4706 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | |
[jira] [Commented] (HBASE-14141) HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits from backup tables
[ https://issues.apache.org/jira/browse/HBASE-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954491#comment-15954491 ] Hadoop QA commented on HBASE-14141: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 32s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 41s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s {color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s {color} | {color:red} hbase-server generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 114m 22s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 155m 24s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-server | | | Load of known null value in org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader(Path, Configuration) At AbstractFSWALProvider.java:in org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader(Path, Configuration) At AbstractFSWALProvider.java:[line 424] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861781/HBASE-14141.v5.patch | | JIRA Issue | HBASE-14141 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 7bba8377e917 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / e916b79 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HBASE-Build/6302/artifact/patchprocess/new-findbugs-hbase-server.html | | javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6302/artifact/patchprocess/diff-javadoc-javadoc-hbase-server.txt | | unit
[jira] [Updated] (HBASE-17871) scan#setBatch(int) call leads wrong result of VerifyReplication
[ https://issues.apache.org/jira/browse/HBASE-17871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomu Tsuruhara updated HBASE-17871: --- Status: Patch Available (was: Open) > scan#setBatch(int) call leads wrong result of VerifyReplication > --- > > Key: HBASE-17871 > URL: https://issues.apache.org/jira/browse/HBASE-17871 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.4.0 >Reporter: Tomu Tsuruhara >Assignee: Tomu Tsuruhara >Priority: Minor > Attachments: HBASE-17871.master.001.patch > > > The VerifyReplication tool printed weird logs. > {noformat} > 2017-04-03 23:30:50,252 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > CONTENT_DIFFERENT_ROWS, rowkey=a100193 > 2017-04-03 23:30:50,280 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > ONLY_IN_PEER_TABLE_ROWS, rowkey=a100193 > 2017-04-03 23:30:50,387 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > CONTENT_DIFFERENT_ROWS, rowkey=a100385 > 2017-04-03 23:30:50,414 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > ONLY_IN_PEER_TABLE_ROWS, rowkey=a100385 > 2017-04-03 23:30:50,480 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > CONTENT_DIFFERENT_ROWS, rowkey=a100532 > 2017-04-03 23:30:50,508 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > ONLY_IN_PEER_TABLE_ROWS, rowkey=a100532 > {noformat} > Here, each bad row was marked as both {{CONTENT_DIFFERENT_ROWS}} and > {{ONLY_IN_PEER_TABLE_ROWS}}. > This should never happen, so I took a look at the code and found the > scan.setBatch call. > {code} > @Override > public void map(ImmutableBytesWritable row, final Result value, > Context context) > throws IOException { > if (replicatedScanner == null) { > ... > final Scan scan = new Scan(); > scan.setBatch(batch); > {code} > As stated in HBASE-16376, a {{scan#setBatch(int)}} call implicitly allows scan > results to be partial. > Since {{VerifyReplication}} assumes each {{scanner.next()}} call returns an > entire row, > partial results break the compare logic. > We should avoid the setBatch call here. > Thanks to RPC chunking (explained in this blog post: > https://blogs.apache.org/hbase/entry/scan_improvements_in_hbase_1), > I think it is safe and acceptable. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
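The double report described above (the same rowkey flagged as {{CONTENT_DIFFERENT_ROWS}} and then {{ONLY_IN_PEER_TABLE_ROWS}}) follows directly from batching: one logical row comes back as several partial results, so a comparator that treats every next() result as a complete row counts the same row more than once. A toy, self-contained simulation of that effect (not VerifyReplication's actual code; all names are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

public class PartialRowCompare {
    // Simulates a scanner: with a small batch, a multi-cell row is
    // returned as several partial results instead of one complete row.
    static List<List<String>> scan(List<String> row, int batch) {
        if (batch >= row.size()) {
            return Arrays.asList(row); // one complete row
        }
        return Arrays.asList(row.subList(0, batch), row.subList(batch, row.size()));
    }

    public static void main(String[] args) {
        List<String> row = Arrays.asList("cf:a=1", "cf:b=2");
        // Without batching, one scanner.next() yields the whole row.
        System.out.println(scan(row, Integer.MAX_VALUE).size()); // 1
        // With batch=1, the same row arrives as two partial results, so a
        // per-result comparator sees the rowkey twice and reports it twice.
        System.out.println(scan(row, 1).size()); // 2
    }
}
```

This is why dropping the setBatch call, as the issue proposes, makes the per-row comparison sound again.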
[jira] [Commented] (HBASE-14925) Develop HBase shell command/tool to list table's region info through command line
[ https://issues.apache.org/jira/browse/HBASE-14925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954480#comment-15954480 ] Karan Mehta commented on HBASE-14925: - [~ashish singhi]
{code}
def command(table_name, region_server_name = "")
  admin_instance = admin.instance_variable_get("@admin")
  conn_instance = admin_instance.getConnection()
  cluster_status = admin_instance.getClusterStatus()
  hregion_locator_instance = conn_instance.getRegionLocator(TableName.valueOf(table_name))
  list = hregion_locator_instance.getAllRegionLocations()
  results = Array.new
  begin
    list.each do |hregion|
      if hregion.getServerName().toString.start_with? region_server_name
        startKey = Bytes.toString(hregion.getRegionInfo().getStartKey())
        endKey = Bytes.toString(hregion.getRegionInfo().getEndKey())
        puts "All = #{hregion} , Start = #{startKey}, End = #{endKey}"
      end
    end
  ensure
    hregion_locator_instance.close()
  end
end
{code}
How does this code look? It filters the regions by the user-provided {{Server Name}} prefix. > Develop HBase shell command/tool to list table's region info through command > line > - > > Key: HBASE-14925 > URL: https://issues.apache.org/jira/browse/HBASE-14925 > Project: HBase > Issue Type: Improvement > Components: shell >Reporter: Romil Choksi >Assignee: Karan Mehta > Attachments: HBASE-14925.patch > > > I am going through the hbase shell commands to see if there is anything I can > use to get all the regions info just for a particular table. I don't see any > such command that provides me that information. > It would be better to have a command that provides region info, start key, > end key, etc., taking a table name as the input parameter. This is available > through the HBase UI on clicking on a particular table's link. > A tool/shell command to get a list of regions for a table or all tables in a > tabular structured output (that is machine readable) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17871) scan#setBatch(int) call leads wrong result of VerifyReplication
[ https://issues.apache.org/jira/browse/HBASE-17871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomu Tsuruhara updated HBASE-17871: --- Attachment: HBASE-17871.master.001.patch > scan#setBatch(int) call leads wrong result of VerifyReplication > --- > > Key: HBASE-17871 > URL: https://issues.apache.org/jira/browse/HBASE-17871 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.4.0 >Reporter: Tomu Tsuruhara >Assignee: Tomu Tsuruhara >Priority: Minor > Attachments: HBASE-17871.master.001.patch > > > VerifyReplication tool printed weird logs. > {noformat} > 2017-04-03 23:30:50,252 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > CONTENT_DIFFERENT_ROWS, rowkey=a100193 > 2017-04-03 23:30:50,280 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > ONLY_IN_PEER_TABLE_ROWS, rowkey=a100193 > 2017-04-03 23:30:50,387 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > CONTENT_DIFFERENT_ROWS, rowkey=a100385 > 2017-04-03 23:30:50,414 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > ONLY_IN_PEER_TABLE_ROWS, rowkey=a100385 > 2017-04-03 23:30:50,480 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > CONTENT_DIFFERENT_ROWS, rowkey=a100532 > 2017-04-03 23:30:50,508 ERROR [main] > org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: > ONLY_IN_PEER_TABLE_ROWS, rowkey=a100532 > {noformat} > Here, each bad rows were marked as both {{CONTENT_DIFFERENT_ROWS}} and > {{ONLY_IN_PEER_TABLE_ROWS}}. > This should never happen so I took a look at code and found scan.setBatch > call. > {code} > @Override > public void map(ImmutableBytesWritable row, final Result value, > Context context) > throws IOException { > if (replicatedScanner == null) { > ... 
> final Scan scan = new Scan(); > scan.setBatch(batch); {code} As stated in HBASE-16376, a {{scan#setBatch(int)}} call implicitly allows scan > results to be partial. > Since {{VerifyReplication}} assumes each {{scanner.next()}} call returns an > entire row, > partial results break the compare logic. > We should avoid the setBatch call here. > Thanks to RPC chunking (explained in this blog post > https://blogs.apache.org/hbase/entry/scan_improvements_in_hbase_1), > it's safe and acceptable, I think. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
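The row-splitting failure described above can be illustrated without a cluster. The sketch below is hypothetical (it is not VerifyReplication's actual code): it regroups per-partial records by row key, which is effectively what a batch-free scan gives you, so each logical row is compared exactly once instead of being seen as several mismatching fragments:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RowRegrouper {
    // Each String[] is {rowKey, cellValue}: a stand-in for one partial Result.
    // Regrouping partials by row key reconstructs full logical rows, the shape
    // scanner.next() returns when batching is not set on the Scan.
    public static Map<String, List<String>> regroupByRow(List<String[]> partials) {
        Map<String, List<String>> rows = new LinkedHashMap<>();
        for (String[] partial : partials) {
            rows.computeIfAbsent(partial[0], k -> new ArrayList<>()).add(partial[1]);
        }
        return rows;
    }
}
```

Comparing the two clusters per entry of this map cannot double-count a row, whereas comparing raw partials one-by-one misaligns the two sides as soon as a row is split.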
[jira] [Created] (HBASE-17871) scan#setBatch(int) call leads wrong result of VerifyReplication
Tomu Tsuruhara created HBASE-17871: -- Summary: scan#setBatch(int) call leads wrong result of VerifyReplication Key: HBASE-17871 URL: https://issues.apache.org/jira/browse/HBASE-17871 Project: HBase Issue Type: Bug Affects Versions: 2.0.0, 1.4.0 Reporter: Tomu Tsuruhara Assignee: Tomu Tsuruhara Priority: Minor The VerifyReplication tool printed weird logs. {noformat} 2017-04-03 23:30:50,252 ERROR [main] org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: CONTENT_DIFFERENT_ROWS, rowkey=a100193 2017-04-03 23:30:50,280 ERROR [main] org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: ONLY_IN_PEER_TABLE_ROWS, rowkey=a100193 2017-04-03 23:30:50,387 ERROR [main] org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: CONTENT_DIFFERENT_ROWS, rowkey=a100385 2017-04-03 23:30:50,414 ERROR [main] org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: ONLY_IN_PEER_TABLE_ROWS, rowkey=a100385 2017-04-03 23:30:50,480 ERROR [main] org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: CONTENT_DIFFERENT_ROWS, rowkey=a100532 2017-04-03 23:30:50,508 ERROR [main] org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: ONLY_IN_PEER_TABLE_ROWS, rowkey=a100532 {noformat} Here, each bad row was marked as both {{CONTENT_DIFFERENT_ROWS}} and {{ONLY_IN_PEER_TABLE_ROWS}}. This should never happen, so I took a look at the code and found the scan.setBatch call. {code} @Override public void map(ImmutableBytesWritable row, final Result value, Context context) throws IOException { if (replicatedScanner == null) { ... final Scan scan = new Scan(); scan.setBatch(batch); {code} As stated in HBASE-16376, a {{scan#setBatch(int)}} call implicitly allows scan results to be partial. Since {{VerifyReplication}} assumes each {{scanner.next()}} call returns an entire row, partial results break the compare logic. We should avoid the setBatch call here. 
Thanks to RPC chunking (explained in this blog post https://blogs.apache.org/hbase/entry/scan_improvements_in_hbase_1), it's safe and acceptable, I think. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17870) Backport HBASE-12770 to branch-1.3
[ https://issues.apache.org/jira/browse/HBASE-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954474#comment-15954474 ] Hadoop QA commented on HBASE-17870: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 28s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 10s {color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s {color} | {color:green} branch-1.3 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s {color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s {color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s {color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s {color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s {color} | {color:green} branch-1.3 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s {color} | {color:green} branch-1.3 
passed with JDK v1.7.0_80 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 17m 56s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 40s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 87m 8s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 140m 32s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:66fbe99 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861778/HBASE-17870.branch-1.3.001.patch | | JIRA Issue | HBASE-17870 | | Optional Tests | asflicense
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954443#comment-15954443 ] Ted Yu commented on HBASE-17861: FsNotSupportPermission -> FsWithoutPermissionSupport Why do you use an array instead of a Set? > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861.branch-1.V3.patch, > HBASE-17861-V1.patch > > > Found some issues when setting up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region servers are shut down when I add the following config to > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > The error is below > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when HBase enables secure bulk load, HBase will create a > folder in S3; it cannot set the above permission, because in S3 all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See "Object stores have different authorization > models" in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17863) Procedure V2: Some cleanup around Procedure.isFinished() and procedure executor
[ https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Umesh Agashe updated HBASE-17863: - Attachment: HBASE-17863.v3.patch Fixed a few typos. > Procedure V2: Some cleanup around Procedure.isFinished() and procedure > executor > --- > > Key: HBASE-17863 > URL: https://issues.apache.org/jira/browse/HBASE-17863 > Project: HBase > Issue Type: Bug > Components: proc-v2 >Reporter: Umesh Agashe >Assignee: Umesh Agashe > Attachments: HBASE-17863.v1.patch, HBASE-17863.v2.patch, > HBASE-17863.v3.patch > > > Clean up around isFinished() and procedure executor -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-17861: - Attachment: HBASE-17861.branch-1.V3.patch > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861.branch-1.V3.patch, > HBASE-17861-V1.patch > > > Found some issue, when set up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region server are showdown when I add following config into > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > Error is below > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when hbase enable securebulkload, hbase will create a > folder in s3, it can not set above permission, because in s3, all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See Object stores have differerent authorization > models in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954424#comment-15954424 ] Yi Liang edited comment on HBASE-17861 at 4/4/17 12:55 AM: --- new patch attached was (Author: easyliangjob): new patch attaced > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861.branch-1.V3.patch, > HBASE-17861-V1.patch > > > Found some issue, when set up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region server are showdown when I add following config into > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > Error is below > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when hbase enable securebulkload, hbase will create a > folder in s3, it can not set above permission, because in s3, all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See Object stores have differerent authorization > models in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954424#comment-15954424 ] Yi Liang commented on HBASE-17861: -- new patch attaced > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861.branch-1.V3.patch, > HBASE-17861-V1.patch > > > Found some issue, when set up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region server are showdown when I add following config into > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > Error is below > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when hbase enable securebulkload, hbase will create a > folder in s3, it can not set above permission, because in s3, all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See Object stores have differerent authorization > models in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17863) Procedure V2: Some cleanup around Procedure.isFinished() and procedure executor
[ https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Umesh Agashe updated HBASE-17863: - Attachment: HBASE-17863.v2.patch Currently, a procedure that has an exception and is in the FINISHED state is considered failed and needs rollback. But though the state is FINISHED, it is not a terminal state. Added a FAILED state; all procedures in the FAILED state need to be rolled back. Changed FINISHED to SUCCESS to make it clearer that it is a positive terminal state. > Procedure V2: Some cleanup around Procedure.isFinished() and procedure > executor > --- > > Key: HBASE-17863 > URL: https://issues.apache.org/jira/browse/HBASE-17863 > Project: HBase > Issue Type: Bug > Components: proc-v2 >Reporter: Umesh Agashe >Assignee: Umesh Agashe > Attachments: HBASE-17863.v1.patch, HBASE-17863.v2.patch > > > Clean up around isFinished() and procedure executor -- This message was sent by Atlassian JIRA (v6.3.15#6346)
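The state split described in that comment can be sketched as an enum. SUCCESS and FAILED as the two terminal states follow the comment's wording, but the exact shape below is a guess for illustration, not the patch's code:

```java
public class ProcedureStates {
    // Hypothetical sketch of the state split: SUCCESS and FAILED are both
    // terminal, but only FAILED marks a procedure that must be rolled back.
    enum State {
        RUNNABLE,
        SUCCESS,  // positive terminal state (formerly FINISHED)
        FAILED;   // terminal state whose procedures still need rollback

        boolean isTerminal() {
            return this == SUCCESS || this == FAILED;
        }

        boolean needsRollback() {
            return this == FAILED;
        }
    }
}
```

Separating "terminal" from "needs rollback" is the point of the cleanup: under the old scheme a single FINISHED state had to answer both questions, which is what made isFinished() ambiguous.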
[jira] [Updated] (HBASE-17863) Procedure V2: Some cleanup around Procedure.isFinished() and procedure executor
[ https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Umesh Agashe updated HBASE-17863: - Summary: Procedure V2: Some cleanup around Procedure.isFinished() and procedure executor (was: Procedure V2: Some cleanup around isFinished() and procedure executor) > Procedure V2: Some cleanup around Procedure.isFinished() and procedure > executor > --- > > Key: HBASE-17863 > URL: https://issues.apache.org/jira/browse/HBASE-17863 > Project: HBase > Issue Type: Bug > Components: proc-v2 >Reporter: Umesh Agashe >Assignee: Umesh Agashe > Attachments: HBASE-17863.v1.patch > > > Clean up around isFinished() and procedure executor -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954405#comment-15954405 ] Hadoop QA commented on HBASE-17861: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} branch-1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s {color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 14m 53s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 17s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 112m 43s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestScannerHeartbeatMessages | | | hadoop.hbase.client.TestReplicasClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861769/HBASE-17861.branch-1.V2.patch | | JIRA Issue | HBASE-17861 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux d024ff5a0daf 3.13.0-107-generic #154-Ubuntu SMP
[jira] [Updated] (HBASE-14141) HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits from backup tables
[ https://issues.apache.org/jira/browse/HBASE-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-14141: -- Attachment: HBASE-14141.v5.patch v5 addresses recent RB comments > HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits > from backup tables > > > Key: HBASE-14141 > URL: https://issues.apache.org/jira/browse/HBASE-14141 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Blocker > Labels: backup > Fix For: 2.0.0 > > Attachments: HBASE-14141.HBASE-14123.v1.patch, HBASE-14141.v1.patch, > HBASE-14141.v2.patch, HBASE-14141.v4.patch, HBASE-14141.v5.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17870) Backport HBASE-12770 to branch-1.3
[ https://issues.apache.org/jira/browse/HBASE-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashu Pachauri updated HBASE-17870: -- Status: Patch Available (was: Open) > Backport HBASE-12770 to branch-1.3 > -- > > Key: HBASE-17870 > URL: https://issues.apache.org/jira/browse/HBASE-17870 > Project: HBase > Issue Type: Improvement > Components: Replication >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Attachments: HBASE-17870.branch-1.3.001.patch > > > Based on discussion on HBASE-12770, let's backport it to branch-1.3. This > combined with zookeeper transport limit breaks replication quite often in > large clusters. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954327#comment-15954327 ] Zach York commented on HBASE-17861: --- bq. There may be more file system(s) added in the future, how about putting the 3 known FS in a private Set so that the above check is more readable ? +1 I meant to say this earlier, but forgot before you uploaded the patch. > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861-V1.patch > > > Found some issue, when set up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region server are showdown when I add following config into > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > Error is below > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when hbase enable securebulkload, hbase will create a > folder in s3, it can not set above permission, because in s3, all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See Object stores have differerent authorization > models in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17870) Backport HBASE-12770 to branch-1.3
[ https://issues.apache.org/jira/browse/HBASE-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashu Pachauri updated HBASE-17870: -- Attachment: HBASE-17870.branch-1.3.001.patch > Backport HBASE-12770 to branch-1.3 > -- > > Key: HBASE-17870 > URL: https://issues.apache.org/jira/browse/HBASE-17870 > Project: HBase > Issue Type: Improvement > Components: Replication >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Attachments: HBASE-17870.branch-1.3.001.patch > > > Based on discussion on HBASE-12770, let's backport it to branch-1.3. This > combined with zookeeper transport limit breaks replication quite often in > large clusters. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954318#comment-15954318 ] Ted Yu commented on HBASE-17861:
{code}
157     if (!scheme.startsWith("s3") && !scheme.startsWith("wasb")
            && !scheme.startsWith("swift") && !status.getPermission().equals(PERM_HIDDEN)) {
{code}
There may be more file systems added in the future; how about putting the 3 known FS schemes in a private Set so that the above check is more readable? > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861-V1.patch > > > Found some issues when setting up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region servers are shut down when I add the following config to > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > The error is below > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when HBase enables secure bulk load, HBase will create a > folder in S3; it cannot set the above permission, because in S3 all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See "Object stores have different authorization > models" in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
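The suggested Set-based check might look like the following. The scheme prefixes mirror the three families in the snippet above; the class and method names are hypothetical, not from the patch:

```java
import java.util.Set;

public class ObjectStoreSchemes {
    // Scheme prefixes of filesystems whose permission bits are cosmetic:
    // object stores report blanket rwx, so the staging-dir permission
    // check must be skipped for them. "s3" covers s3, s3a, and s3n.
    private static final Set<String> SCHEMES_WITHOUT_PERMISSIONS =
        Set.of("s3", "wasb", "swift");

    public static boolean supportsPermissions(String scheme) {
        return SCHEMES_WITHOUT_PERMISSIONS.stream().noneMatch(scheme::startsWith);
    }
}
```

Because the original code matches by prefix (startsWith), the Set holds prefixes and the lookup streams over it; a plain contains() lookup would need every concrete scheme (s3, s3a, s3n, wasbs, ...) enumerated explicitly.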
[jira] [Commented] (HBASE-15143) Procedure v2 - Web UI displaying queues
[ https://issues.apache.org/jira/browse/HBASE-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954294#comment-15954294 ] Hadoop QA commented on HBASE-15143: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s {color} | {color:blue} rubocop was not available. {color} | | {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s {color} | {color:blue} Ruby-lint was not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 59s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 2s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 25 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch 13 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 35s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 30s {color} | {color:red} hbase-protocol-shaded generated 1 new + 23 unchanged - 1 fixed = 24 total (was 24) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 0s {color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s {color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 51s {color} | {color:green} hbase-procedure in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 23s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 104m 37s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 0s {color} | {color:green}
[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954246#comment-15954246 ] Yi Liang commented on HBASE-17861: -- New patch carries Jerry's and Zach's comments > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861-V1.patch > > > Found some issues when setting up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region servers are shut down when I add the following config into > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > The error is below: > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when HBase enables secure bulk load, it creates a > folder in S3 but cannot set the above permissions on it, because in S3 all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See "Object stores have different authorization > models" in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
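The failure described above can be reasoned about without a cluster at hand: object stores like S3 report fixed, full permissions, so a strict create-then-verify permission cycle can never converge. A minimal sketch, assuming a hypothetical helper (this is not HBase's actual code), of deciding whether POSIX-style permission checks are even meaningful for a given filesystem URI:

```java
import java.net.URI;
import java.util.Set;

// Hypothetical sketch: object-store schemes where POSIX-style permission
// checks are meaningless, because the store reports fixed full permissions
// for every file and directory.
public final class PermissionCheckPolicy {
    private static final Set<String> OBJECT_STORE_SCHEMES =
        Set.of("s3", "s3a", "s3n", "wasb", "gs", "swift");

    // Returns true when a '-rwx--x--x'-style staging-dir check should be
    // enforced; false when the filesystem cannot honor chmod anyway.
    public static boolean shouldEnforcePermissions(URI fsUri) {
        String scheme = fsUri.getScheme();
        return scheme == null || !OBJECT_STORE_SCHEMES.contains(scheme.toLowerCase());
    }
}
```

A patch along these lines would skip the staging-dir permission assertion when `hbase.rootdir` points at one of these schemes, rather than letting the region server abort.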
[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-17861: - Attachment: (was: HBASE-17861.branch-1.V2.patch) > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861-V1.patch > > > Found some issues when setting up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region servers are shut down when I add the following config into > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > The error is below: > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when HBase enables secure bulk load, it creates a > folder in S3 but cannot set the above permissions on it, because in S3 all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See "Object stores have different authorization > models" in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-17861: - Attachment: HBASE-17861.branch-1.V2.patch HBASE-17861.branch-1.V2.patch > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861-V1.patch > > > Found some issues when setting up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region servers are shut down when I add the following config into > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > The error is below: > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when HBase enables secure bulk load, it creates a > folder in S3 but cannot set the above permissions on it, because in S3 all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See "Object stores have different authorization > models" in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-17861: - Attachment: (was: HBASE-17861.branch-1.V2.patch) > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861-V1.patch > > > Found some issues when setting up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region servers are shut down when I add the following config into > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > The error is below: > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when HBase enables secure bulk load, it creates a > folder in S3 but cannot set the above permissions on it, because in S3 all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See "Object stores have different authorization > models" in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
[ https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-17861: - Attachment: HBASE-17861.branch-1.V2.patch > Regionserver down when checking the permission of staging dir if > hbase.rootdir is on S3 > --- > > Key: HBASE-17861 > URL: https://issues.apache.org/jira/browse/HBASE-17861 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Yi Liang >Assignee: Yi Liang > Labels: filesystem, s3, wal > Fix For: 1.4.0 > > Attachments: HBASE-17861.branch-1.V1.patch, > HBASE-17861.branch-1.V2.patch, HBASE-17861-V1.patch > > > Found some issues when setting up HBASE-17437: Support specifying a WAL directory > outside of the root directory. > The region servers are shut down when I add the following config into > hbase-site.xml > hbase.rootdir = s3a://xx//xx > hbase.wal.dir = hdfs://xx/xx > hbase.coprocessor.region.classes = > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > The error is below: > {noformat} > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw > java.lang.IllegalStateException: Directory already exists but permissions > aren't set to '-rwx--x--x' > {noformat} > The reason is that, when HBase enables secure bulk load, it creates a > folder in S3 but cannot set the above permissions on it, because in S3 all files are > listed as having full read/write permissions and all directories appear to > have full rwx permissions. See "Object stores have different authorization > models" in > https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HBASE-17870) Backport HBASE-12770 to branch-1.3
Ashu Pachauri created HBASE-17870: - Summary: Backport HBASE-12770 to branch-1.3 Key: HBASE-17870 URL: https://issues.apache.org/jira/browse/HBASE-17870 Project: HBase Issue Type: Improvement Components: Replication Reporter: Ashu Pachauri Assignee: Ashu Pachauri Based on the discussion on HBASE-12770, let's backport it to branch-1.3. This, combined with the ZooKeeper transport limit, breaks replication quite often in large clusters. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17273) Create an hbase-coprocessor module as repository for generally useful coprocessors
[ https://issues.apache.org/jira/browse/HBASE-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954082#comment-15954082 ] Vrushali C commented on HBASE-17273: bq. Do we need anything else here to get started? No, not waiting on anything else other than finding the time to work on this. > Create an hbase-coprocessor module as repository for generally useful > coprocessors > -- > > Key: HBASE-17273 > URL: https://issues.apache.org/jira/browse/HBASE-17273 > Project: HBase > Issue Type: Task > Components: Coprocessors >Reporter: stack > Attachments: HBASE-17273.master.001.patch, > HBASE-17273.master.002.patch > > > The idea here is a module where we can carry coprocessors that are of general > utility. In particular, I am thinking of the coprocessor used by the YARN > timeline server v2, which does aggregation and then replaces increments with > a sum cell. This seems generally useful. Other candidates might include our > cross-row transactions coprocessor, which could be pulled out > of hbase-server, and so on. > Currently the only coprocessors bundled in hbase are examples and endpoints. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
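The timeline-server coprocessor mentioned in the issue aggregates a run of increment cells into a single sum cell. Independent of the actual coprocessor API, the core transform can be sketched as follows (a hypothetical, simplified cell model where each increment is just a long delta — not the real coprocessor code):

```java
import java.util.List;

// Simplified model of the aggregation: N increment cells for the same
// row/column collapse into one summed value at read/compaction time.
public final class IncrementFlattener {
    public static long sumCell(List<Long> increments) {
        long sum = 0L;
        for (long delta : increments) {
            sum += delta;  // fold each increment into the single sum cell
        }
        return sum;
    }
}
```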
[jira] [Updated] (HBASE-15143) Procedure v2 - Web UI displaying queues
[ https://issues.apache.org/jira/browse/HBASE-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros updated HBASE-15143: Attachment: HBASE-15143-BM-0009.patch > Procedure v2 - Web UI displaying queues > --- > > Key: HBASE-15143 > URL: https://issues.apache.org/jira/browse/HBASE-15143 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, UI >Reporter: Matteo Bertozzi >Assignee: Balazs Meszaros >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-15143-BM-0001.patch, HBASE-15143-BM-0002.patch, > HBASE-15143-BM-0003.patch, HBASE-15143-BM-0004.patch, > HBASE-15143-BM-0005.patch, HBASE-15143-BM-0006.patch, > HBASE-15143-BM-0006.patch, HBASE-15143-BM-0007.patch, > HBASE-15143-BM-0008.patch, HBASE-15143-BM-0009.patch, screenshot.png > > > We can query MasterProcedureScheduler to display the various procedures and > who is holding table/region locks. > Each procedure is in a TableQueue or ServerQueue, so it is easy to display > the procedures in its own group. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17869) UnsafeAvailChecker wrongly returns false on ppc
[ https://issues.apache.org/jira/browse/HBASE-17869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954054#comment-15954054 ] Ted Yu commented on HBASE-17869: lgtm nit: {code} 58} 59else { {code} Put else on the same line as the right curly when committing. > UnsafeAvailChecker wrongly returns false on ppc > --- > > Key: HBASE-17869 > URL: https://issues.apache.org/jira/browse/HBASE-17869 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.4 >Reporter: Jerry He >Assignee: Jerry He >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17869.patch > > > On ppc64 arch, java.nio.Bits.unaligned() wrongly returns false due to a JDK > bug. > https://bugs.openjdk.java.net/browse/JDK-8165231 > This causes some problems for HBase, e.g. the FuzzyRowFilter test fails. > Fix it by providing a hard-coded workaround for the JDK bug. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17273) Create an hbase-coprocessor module as repository for generally useful coprocessors
[ https://issues.apache.org/jira/browse/HBASE-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954019#comment-15954019 ] Sean Busbey commented on HBASE-17273: - Do we need anything else here to get started? An update to the docs on coprocessors that calls out this location maybe? > Create an hbase-coprocessor module as repository for generally useful > coprocessors > -- > > Key: HBASE-17273 > URL: https://issues.apache.org/jira/browse/HBASE-17273 > Project: HBase > Issue Type: Task > Components: Coprocessors >Reporter: stack > Attachments: HBASE-17273.master.001.patch, > HBASE-17273.master.002.patch > > > The idea here is a module where we can carry coprocessors that are of general > utility. In particular, I am thinking of the coprocessor used by the YARN > timeline server v2, which does aggregation and then replaces increments with > a sum cell. This seems generally useful. Other candidates might include our > cross-row transactions coprocessor, which could be pulled out > of hbase-server, and so on. > Currently the only coprocessors bundled in hbase are examples and endpoints. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17868) Backport HBASE-10205 to branch-1.3
[ https://issues.apache.org/jira/browse/HBASE-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954005#comment-15954005 ] Andrew Purtell commented on HBASE-17868: There were concerns about the HBASE-10205 patch. Followup issue is HBASE-15691 > Backport HBASE-10205 to branch-1.3 > -- > > Key: HBASE-17868 > URL: https://issues.apache.org/jira/browse/HBASE-17868 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 1.3.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 1.3.1 > > > I got a similar ConcurrentModificationException with hbase-1.3.0 while > working with the bucket cache. On verifying, it seems the fix has not been added to > hbase-1.3.0. > We need to back-port it to hbase-1.3 and to other branches wherever it was not > applied. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it
[ https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953981#comment-15953981 ] Anastasia Braginsky commented on HBASE-16438: - [~anoop.hbase] and [~ram_krish], it looks like we all agree to put the seqID in the data-chunk (not in the index) together with the key, value, etc. This can be done by creating a new type of Cell that doesn't include seqID as a field. bq. Am just getting confused here. When you say ChunkCell are you telling the BBChunkCell in current patch? In current patch there is no extra overhead at all. But if you are talking about the cell to be moved to CellChunkMap - yes then it will have some overhead in terms of serialization and not in terms of heap overhead. I am talking about the Cell representation that is going to be part of CellChunkMap. I am myself confused here and don't know exactly what the name of such a cell is. But I need a constant saying how many bytes such a Cell takes... > Create a cell type so that chunk id is embedded in it > - > > Key: HBASE-16438 > URL: https://issues.apache.org/jira/browse/HBASE-16438 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Attachments: HBASE-16438_1.patch, > HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, > HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, > HBASE-16438_8_ChunkCreatorwrappingChunkPool_withchunkRef.patch, > HBASE-16438_9_ChunkCreatorwrappingChunkPool_withchunkRef.patch, > HBASE-16438.patch, MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, > MemstoreChunkCell_trunk.patch > > > For CellChunkMap we may need a cell such that the id of the chunk out of which it was > created is embedded in it, so that when doing flattening > we can use the chunk id as metadata. More details will follow once the > initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in > this remark over in parent issue > https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
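One way to read the "constant saying how many bytes such a Cell takes" question: if a CellChunkMap index entry references a cell by chunk id, offset, and length — a hypothetical layout for illustration, not the committed design — its index footprint is a fixed three ints per entry:

```java
import java.nio.ByteBuffer;

// Hypothetical fixed-width index entry for a cell that lives in a data
// chunk: the index chunk stores (chunkId, offset, length) per cell.
final class CellChunkRef {
    static final int SIZE_IN_BYTES = 3 * Integer.BYTES; // chunkId, offset, length

    static void write(ByteBuffer index, int chunkId, int offset, int length) {
        index.putInt(chunkId).putInt(offset).putInt(length);
    }
}
```

With such a layout, the seqID stays in the data chunk next to the key and value, and the index stays fixed-width, which is what makes in-place flattening possible.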
[jira] [Updated] (HBASE-17869) UnsafeAvailChecker wrongly returns false on ppc
[ https://issues.apache.org/jira/browse/HBASE-17869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jerry He updated HBASE-17869: - Fix Version/s: 1.4.0 2.0.0 Status: Patch Available (was: Open) > UnsafeAvailChecker wrongly returns false on ppc > --- > > Key: HBASE-17869 > URL: https://issues.apache.org/jira/browse/HBASE-17869 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.4 >Reporter: Jerry He >Assignee: Jerry He >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17869.patch > > > On ppc64 arch, java.nio.Bits.unaligned() wrongly returns false due to a JDK > bug. > https://bugs.openjdk.java.net/browse/JDK-8165231 > This causes some problems for HBase, e.g. the FuzzyRowFilter test fails. > Fix it by providing a hard-coded workaround for the JDK bug. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17869) UnsafeAvailChecker wrongly returns false on ppc
[ https://issues.apache.org/jira/browse/HBASE-17869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jerry He updated HBASE-17869: - Attachment: HBASE-17869.patch > UnsafeAvailChecker wrongly returns false on ppc > --- > > Key: HBASE-17869 > URL: https://issues.apache.org/jira/browse/HBASE-17869 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.4 >Reporter: Jerry He >Assignee: Jerry He >Priority: Minor > Attachments: HBASE-17869.patch > > > On ppc64 arch, java.nio.Bits.unaligned() wrongly returns false due to a JDK > bug. > https://bugs.openjdk.java.net/browse/JDK-8165231 > This causes some problems for HBase, e.g. the FuzzyRowFilter test fails. > Fix it by providing a hard-coded workaround for the JDK bug. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-10205) ConcurrentModificationException in BucketAllocator
[ https://issues.apache.org/jira/browse/HBASE-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953955#comment-15953955 ] Andrew Purtell commented on HBASE-10205: Followup issue is HBASE-15691 > ConcurrentModificationException in BucketAllocator > -- > > Key: HBASE-10205 > URL: https://issues.apache.org/jira/browse/HBASE-10205 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 0.89-fb >Reporter: Arjen Roodselaar >Assignee: Arjen Roodselaar >Priority: Minor > Fix For: 0.89-fb, 0.99.0, 2.0.0, 0.98.6 > > Attachments: hbase-10205-trunk.patch > > > The BucketCache WriterThread calls BucketCache.freeSpace() upon draining the > RAM queue containing entries to be cached. freeSpace() in turn calls > BucketSizeInfo.statistics() through BucketAllocator.getIndexStatistics(), > which iterates over 'bucketList'. At the same time another WriterThread might > call BucketAllocator.allocateBlock(), which may call > BucketSizeInfo.allocateBlock(), add a bucket to 'bucketList' and consequently > cause a ConcurrentModificationException. Calls to > BucketAllocator.allocateBlock() are synchronized, but calls to > BucketAllocator.getIndexStatistics() are not, which allows this race to occur. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
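The race described in HBASE-10205 — an unsynchronized iteration in the statistics path overlapping a synchronized mutation in the allocation path — can be modeled in a few lines. This toy class (not the real BucketAllocator) shows the shape of the fix: making the statistics method hold the same monitor as allocateBlock(), so iteration can never overlap a concurrent add.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the race: allocateBlock() mutates bucketList while
// getIndexStatistics() iterates it. In the original code only
// allocateBlock() was synchronized, allowing ConcurrentModificationException.
final class ToyBucketAllocator {
    private final List<Integer> bucketList = new ArrayList<>();

    // Already synchronized in the real allocator.
    public synchronized void allocateBlock(int size) {
        bucketList.add(size);
    }

    // The fix: hold the same monitor while iterating, so no WriterThread
    // can add a bucket mid-iteration.
    public synchronized long getIndexStatistics() {
        long total = 0;
        for (int bucket : bucketList) {
            total += bucket;
        }
        return total;
    }
}
```

An alternative fix with less lock contention would be to snapshot the list (e.g. copy it under the monitor) and iterate the copy; synchronizing both paths is the simplest correct version.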
[jira] [Created] (HBASE-17869) UnsafeAvailChecker wrongly returns false on ppc
Jerry He created HBASE-17869: Summary: UnsafeAvailChecker wrongly returns false on ppc Key: HBASE-17869 URL: https://issues.apache.org/jira/browse/HBASE-17869 Project: HBase Issue Type: Bug Affects Versions: 1.2.4 Reporter: Jerry He Assignee: Jerry He Priority: Minor On ppc64 arch, java.nio.Bits.unaligned() wrongly returns false due to a JDK bug. https://bugs.openjdk.java.net/browse/JDK-8165231 This causes some problems for HBase, e.g. the FuzzyRowFilter test fails. Fix it by providing a hard-coded workaround for the JDK bug. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
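The hard-coded workaround can be sketched like this (a hypothetical method, not necessarily the exact patch): instead of trusting the buggy java.nio.Bits.unaligned() probe, match os.arch against architectures known to support unaligned memory access, ppc64 included.

```java
import java.util.regex.Pattern;

// Sketch: trust a known-architecture whitelist instead of the JDK probe
// that JDK-8165231 makes unreliable on ppc64.
public final class UnalignedCheck {
    private static final Pattern UNALIGNED_ARCH =
        Pattern.compile("^(i[3-6]86|x86(_64)?|x64|amd64|ppc64(le)?)$");

    public static boolean unaligned(String osArch) {
        return osArch != null && UNALIGNED_ARCH.matcher(osArch).matches();
    }
}
```

A caller would pass `System.getProperty("os.arch")` and fall back to the reflective Bits.unaligned() check only for architectures outside the whitelist.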
[jira] [Commented] (HBASE-17863) Procedure V2: Some cleanup around isFinished() and procedure executor
[ https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953875#comment-15953875 ] Umesh Agashe commented on HBASE-17863: -- [~saint@gmail.com], yes, looking into it. > Procedure V2: Some cleanup around isFinished() and procedure executor > - > > Key: HBASE-17863 > URL: https://issues.apache.org/jira/browse/HBASE-17863 > Project: HBase > Issue Type: Bug > Components: proc-v2 >Reporter: Umesh Agashe >Assignee: Umesh Agashe > Attachments: HBASE-17863.v1.patch > > > Clean up around isFinished() and procedure executor -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Issue Comment Deleted] (HBASE-17854) Use StealJobQueue in HFileCleaner after HBASE-17215
[ https://issues.apache.org/jira/browse/HBASE-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun updated HBASE-17854: - Comment: was deleted (was: Looks good to me, thanks [~carp84]. I checked that the initial capacity for PriorityBlockingQueue is 11 in Java 8; is this too small? Maybe we can add a new constructor for StealJobQueue to specify initialCapacity.) > Use StealJobQueue in HFileCleaner after HBASE-17215 > --- > > Key: HBASE-17854 > URL: https://issues.apache.org/jira/browse/HBASE-17854 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Yu Li >Assignee: Yu Li > Attachments: HBASE-17854.patch, HBASE-17854.v2.patch, > HBASE-17854.v3.patch > > > In HBASE-17215 we use specific threads for deleting large/small (archived) > hfiles, and will improve it in the following aspects in this JIRA: > 1. Using {{StealJobQueue}} to allow the large-file deletion thread to steal jobs > from the small queue, based on the experience that in the real world there will be many > more small hfiles > 2. {{StealJobQueue}} is a kind of {{PriorityQueue}}, so we can also delete > the larger files in the queue first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17854) Use StealJobQueue in HFileCleaner after HBASE-17215
[ https://issues.apache.org/jira/browse/HBASE-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953743#comment-15953743 ] huaxiang sun commented on HBASE-17854: -- Looks good to me, thanks [~carp84]. I checked that the initial capacity for PriorityBlockingQueue is 11 in Java 8; is this too small? Maybe we can add a new constructor for StealJobQueue to specify initialCapacity. > Use StealJobQueue in HFileCleaner after HBASE-17215 > --- > > Key: HBASE-17854 > URL: https://issues.apache.org/jira/browse/HBASE-17854 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Yu Li >Assignee: Yu Li > Attachments: HBASE-17854.patch, HBASE-17854.v2.patch, > HBASE-17854.v3.patch > > > In HBASE-17215 we use specific threads for deleting large/small (archived) > hfiles, and will improve it in the following aspects in this JIRA: > 1. Using {{StealJobQueue}} to allow the large-file deletion thread to steal jobs > from the small queue, based on the experience that in the real world there will be many > more small hfiles > 2. {{StealJobQueue}} is a kind of {{PriorityQueue}}, so we can also delete > the larger files in the queue first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
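On the initial-capacity point above: PriorityBlockingQueue is unbounded, so the default capacity of 11 only sizes the initial backing array — the queue grows on demand — but passing an explicit capacity avoids early resizes. The JDK already provides a two-argument (capacity, comparator) constructor a StealJobQueue-style subclass could delegate to; the helper name below is illustrative only:

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public final class QueueCapacityDemo {
    // Builds a max-first queue with an explicit initial capacity.
    // Note: the capacity is NOT a bound; the queue grows past it freely.
    public static PriorityBlockingQueue<Integer> maxFirst(int initialCapacity) {
        return new PriorityBlockingQueue<>(initialCapacity, Comparator.reverseOrder());
    }
}
```

So a larger initialCapacity is a minor allocation optimization rather than a correctness concern, which matches the "is this too small?" framing of the comment.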
[jira] [Commented] (HBASE-17854) Use StealJobQueue in HFileCleaner after HBASE-17215
[ https://issues.apache.org/jira/browse/HBASE-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953741#comment-15953741 ] huaxiang sun commented on HBASE-17854: -- Looks good to me, thanks [~carp84]. I checked that the initial capacity for PriorityBlockingQueue is 11 in Java 8; is this too small? Maybe we can add a new constructor for StealJobQueue to specify initialCapacity. > Use StealJobQueue in HFileCleaner after HBASE-17215 > --- > > Key: HBASE-17854 > URL: https://issues.apache.org/jira/browse/HBASE-17854 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Yu Li >Assignee: Yu Li > Attachments: HBASE-17854.patch, HBASE-17854.v2.patch, > HBASE-17854.v3.patch > > > In HBASE-17215 we use specific threads for deleting large/small (archived) > hfiles, and will improve it in the following aspects in this JIRA: > 1. Using {{StealJobQueue}} to allow the large-file deletion thread to steal jobs > from the small queue, based on the experience that in the real world there will be many > more small hfiles > 2. {{StealJobQueue}} is a kind of {{PriorityQueue}}, so we can also delete > the larger files in the queue first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17854) Use StealJobQueue in HFileCleaner after HBASE-17215
[ https://issues.apache.org/jira/browse/HBASE-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953490#comment-15953490 ] Hadoop QA commented on HBASE-17854: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 22s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} 
| {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 100m 16s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 140m 5s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861694/HBASE-17854.v3.patch | | JIRA Issue | HBASE-17854 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 63d8aebefe85 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 73e1bcd | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/6297/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/6297/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Use StealJobQueue in HFileCleaner after HBASE-17215 > --- > > Key: HBASE-17854 > URL: https://issues.apache.org/jira/browse/HBASE-17854 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Yu Li >Assignee: Yu Li > Attachments: HBASE-17854.patch, HBASE-17854.v2.patch, > HBASE-17854.v3.patch > > > In HBASE-17215 we use
[jira] [Commented] (HBASE-17857) Remove IS annotations from IA.Public classes
[ https://issues.apache.org/jira/browse/HBASE-17857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953371#comment-15953371 ] Duo Zhang commented on HBASE-17857:
---
AsyncAdmin is still under development, so I think it is OK to declare it as IA.Private. As for the backoff classes, yes, they have been released as IA.Public for two years and are still unstable... If we want to declare AsyncAdmin as IA.Public, then HBASE-17359 becomes a blocker for the 2.0 release, which is what I do not want to see... For the backoff classes, my only concern is lack of maintenance: they have been there for two years, are still unstable, and seem to be used only in AsyncProcess. Anyway, I think it is OK to only remove the IS annotations here and open a new issue to address these two classes, but we need to mark that new issue as a blocker for the 2.0 release. Thanks.

> Remove IS annotations from IA.Public classes
>
> Key: HBASE-17857
> URL: https://issues.apache.org/jira/browse/HBASE-17857
> Project: HBase
> Issue Type: Sub-task
> Components: API
> Affects Versions: 2.0.0
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17857.patch, HBASE-17857-v1.patch, HBASE-17857-v2.patch
>
--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17854) Use StealJobQueue in HFileCleaner after HBASE-17215
[ https://issues.apache.org/jira/browse/HBASE-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li updated HBASE-17854:
--
Attachment: HBASE-17854.v3.patch

> Use StealJobQueue in HFileCleaner after HBASE-17215
> ---
>
> Key: HBASE-17854
> URL: https://issues.apache.org/jira/browse/HBASE-17854
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 2.0.0
> Reporter: Yu Li
> Assignee: Yu Li
> Attachments: HBASE-17854.patch, HBASE-17854.v2.patch, HBASE-17854.v3.patch
>
> In HBASE-17215 we use specific threads for deleting large/small (archived)
> hfiles, and will improve it in the following ways in this JIRA:
> 1. Use {{StealJobQueue}} to allow the large-file deletion thread to steal jobs
> from the small-file queue, based on the experience that in the real world there
> will be many more small hfiles
> 2. {{StealJobQueue}} is a kind of {{PriorityQueue}}, so we could also delete
> the larger files in the queues first.
[jira] [Created] (HBASE-17868) Backport HBASE-10205 to branch-1.3
ramkrishna.s.vasudevan created HBASE-17868:
--
Summary: Backport HBASE-10205 to branch-1.3
Key: HBASE-17868
URL: https://issues.apache.org/jira/browse/HBASE-17868
Project: HBase
Issue Type: Bug
Components: BucketCache
Affects Versions: 1.3.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Fix For: 1.3.1

I hit a similar ConcurrentModificationException with hbase-1.3.0 while working with the bucket cache. On verifying, it seems the fix was never applied to hbase-1.3.0. We need to backport it to branch-1.3 and to any other branches where it is missing.
[jira] [Commented] (HBASE-10205) ConcurrentModificationException in BucketAllocator
[ https://issues.apache.org/jira/browse/HBASE-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953265#comment-15953265 ] ramkrishna.s.vasudevan commented on HBASE-10205:
This issue is not fixed in branch-1.3.

> ConcurrentModificationException in BucketAllocator
> --
>
> Key: HBASE-10205
> URL: https://issues.apache.org/jira/browse/HBASE-10205
> Project: HBase
> Issue Type: Bug
> Components: regionserver
> Affects Versions: 0.89-fb
> Reporter: Arjen Roodselaar
> Assignee: Arjen Roodselaar
> Priority: Minor
> Fix For: 0.89-fb, 0.99.0, 2.0.0, 0.98.6
>
> Attachments: hbase-10205-trunk.patch
>
> The BucketCache WriterThread calls BucketCache.freeSpace() upon draining the
> RAM queue containing entries to be cached. freeSpace() in turn calls
> BucketSizeInfo.statistics() through BucketAllocator.getIndexStatistics(),
> which iterates over 'bucketList'. At the same time another WriterThread might
> call BucketAllocator.allocateBlock(), which may call
> BucketSizeInfo.allocateBlock(), add a bucket to 'bucketList' and consequently
> cause a ConcurrentModificationException. Calls to
> BucketAllocator.allocateBlock() are synchronized, but calls to
> BucketAllocator.getIndexStatistics() are not, which allows this race to occur.
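The race described above can be reproduced in miniature: one thread iterates a plain ArrayList while another, synchronized, thread appends to it. A self-contained sketch (class and method names are illustrative, not the actual HBase code; as the description notes, synchronizing the statistics path as well closes the race):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

// Miniature model of the race: statistics() iterates bucketList without
// synchronization while allocateBlock() (synchronized, like the real
// allocation path) appends to it.
public class BucketListRace {
    private final List<Integer> bucketList = new ArrayList<>();

    // Modeled after the synchronized BucketAllocator.allocateBlock().
    public synchronized void allocateBlock(int bucket) {
        bucketList.add(bucket);
    }

    // Modeled after the unsynchronized BucketAllocator.getIndexStatistics().
    // Adding "synchronized" here is the shape of the fix.
    public long statistics() {
        long sum = 0;
        for (int b : bucketList) { // iterator may observe a concurrent add
            sum += b;
        }
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        BucketListRace race = new BucketListRace();
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                race.allocateBlock(i);
            }
        });
        writer.start();
        boolean sawRace = false;
        while (writer.isAlive() && !sawRace) {
            try {
                race.statistics();
            } catch (ConcurrentModificationException | IndexOutOfBoundsException e) {
                sawRace = true; // the unsynchronized read raced with a write
            }
        }
        writer.join();
        System.out.println("race observed: " + sawRace);
    }
}
```

The exception is timing-dependent, so a given run may or may not trigger it; single-threaded use of the same methods is always safe.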
[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it
[ https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953244#comment-15953244 ] ramkrishna.s.vasudevan commented on HBASE-16438:
bq. In ByteBufferChunkCell, please explain me why to add this new class? Why can not the existing BBKV just have a new method - getChunkId() - to return the chunk id in the 0th offset of the backing BB?
We now have BBKV everywhere in the write path, and we can also make use of it in the read path to form cells coming out of hfile blocks. Since getChunkId() was added to ExtendedCell, any cell can make use of it (though not generic, it was added to keep things simple). Because we deal with ExtendedCells, we create a specific impl of BBKV that only overrides the chunkId; by default it returns -1.
bq. In ByteBufferKeyValue or in MSLAB or anywhere else, please add constant saying what is the size in bytes of the ChunkCell or what I call cell-representation (chunkId + offset + length + seqId), so I can use it later.
Ok.
bq. OK. So lets have a new cell representation.
Ok, fine, we can make use of it.
bq. This is not a desired situation. We are increasing from 12 bytes to 20 bytes, almost twice... We should not do it unless it is very very necessary...
Somewhere we need the seqId of every cell getting written to the CellChunkMap. How do you think we can avoid it? Do you have an idea for that?
bq. In ByteBufferKeyValue or in MSLAB or anywhere else, please add constant saying what is the size in bytes of the ChunkCell
I am just getting confused here. When you say ChunkCell, do you mean the BBChunkCell in the current patch? In the current patch there is no extra overhead at all. But if you are talking about the cell to be moved to the CellChunkMap, then yes, it will have some overhead in terms of serialization, though not in terms of heap overhead.
> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
> Issue Type: Sub-task
> Affects Versions: 2.0.0
> Reporter: ramkrishna.s.vasudevan
> Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch,
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch,
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch,
> HBASE-16438_8_ChunkCreatorwrappingChunkPool_withchunkRef.patch,
> HBASE-16438_9_ChunkCreatorwrappingChunkPool_withchunkRef.patch,
> HBASE-16438.patch, MemstoreChunkCell_memstoreChunkCreator_oldversion.patch,
> MemstoreChunkCell_trunk.patch
>
> For CellChunkMap we may need a cell such that the chunk out of which it was
> created, the id of the chunk be embedded in it so that when doing flattening
> we can use the chunk id as a meta data. More details will follow once the
> initial tasks are completed.
> Why we need to embed the chunkid in the Cell is described by [~anastas] in
> this remark over in parent issue
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119
[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it
[ https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953207#comment-15953207 ] Anoop Sam John commented on HBASE-16438:
bq. Do you mean the seqID is going to be written in index-chunk only and is not going to be written in the main-chunk, holding key, value and etc.? So no duplication? Are you sure? If so, then already little better, but still I would like to keep the Cell meta data smaller.
Yes, either to the main chunk or to the metadata chunk; as of now it has to be in the meta chunk. We can change that if we make a new Cell impl representing the cells added to the MSLAB (it seems we have one already). One more issue: you will need some more refactoring at the base-class level so that you have a class to extend here. You cannot extend BBKV, as it already has a long field for the seqId (wasting 8 bytes of heap space per cell).
bq. Why can not the existing BBKV just have a new method - getChunkId() - to return the chunk id in the 0th offset of the backing BB
The cell impl BBKV might be used elsewhere, where its data is NOT in MSLAB chunks, so blindly reading the first 4 bytes of the chunk's backing buffer is not correct. You need a new Cell impl with this implementation of the getChunkId() method. I see this method is added to ExtendedCell now; so will other impls of ExtendedCell (like KV) throw an Exception?
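The approach debated above — a dedicated cell implementation that reads the chunk id from offset 0 of its backing MSLAB chunk, while cells not backed by a chunk report -1 — might be sketched roughly like this (ChunkBackedCell and its members are hypothetical names for illustration, not the ByteBufferChunkCell from the patch):

```java
import java.nio.ByteBuffer;

// Sketch of the idea under discussion: a cell whose data lives in an
// MSLAB chunk reads its chunk id from the first 4 bytes of the chunk's
// backing ByteBuffer; a cell not backed by a chunk must not read those
// bytes and reports -1 instead.
public class ChunkBackedCell {
    public static final int CHUNK_ID_OFFSET = 0; // id written by the chunk creator
    public static final int NO_CHUNK_ID = -1;

    private final ByteBuffer chunk; // backing MSLAB chunk, or null
    private final int offset;       // start of this cell's bytes in the chunk
    private final int length;       // length of this cell's bytes

    public ChunkBackedCell(ByteBuffer chunk, int offset, int length) {
        this.chunk = chunk;
        this.offset = offset;
        this.length = length;
    }

    // Only chunk-backed cells may decode the id; blindly reading the
    // first 4 bytes of an arbitrary buffer would be wrong, which is why
    // a generic BBKV cannot simply grow this method.
    public int getChunkId() {
        return chunk == null ? NO_CHUNK_ID : chunk.getInt(CHUNK_ID_OFFSET);
    }
}
```

A chunk creator that stamps the id with `chunk.putInt(0, id)` when handing out chunks would then let any cell allocated from that chunk recover its chunk id in O(1).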
[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it
[ https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953177#comment-15953177 ] Anastasia Braginsky commented on HBASE-16438:
-
bq. What specific question in RB are you looking out for?
OK, I will write here the questions that bother me and that I don't see responses to:
1. In ByteBufferChunkCell, please explain me why to add this new class? Why can not the existing BBKV just have a new method - getChunkId() - to return the chunk id in the 0th offset of the backing BB?
2. In ByteBufferKeyValue or in MSLAB or anywhere else, please add constant saying what is the size in bytes of the ChunkCell or what I call cell-representation (chunkId + offset + length + seqId), so I can use it later.
I will review the existing patch once again.
bq. ChunkId is per ByteBuffer backing the chunk. I can change the chunkId to be an int.
You got it yourself; I also thought so for a moment. I am talking about the ChunkID of where each cell is located, which is saved per cell. Please do change chunkID to int, but check for overflow (at least log an error). I believe we should strive to decrease the number of bytes the cell representation takes, because that is the reason we are doing the CellChunkMap...
bq. My Q was, this Cell meta data (ChunkId, offset, length) also we planned to write to chunks. So what is the difference? In this chunk or that chunk?
Do you mean the seqID is going to be written in index-chunk only and is not going to be written in the main-chunk, holding key, value and etc.? So no duplication? Are you sure? If so, then already little better, but still I would like to keep the Cell meta data smaller. The smaller the Cell metadata is (hopefully only chunkId, offset and length: just 12 bytes), the smaller the metadata overhead per cell and the more we can squeeze into a single index chunk (CellChunkMap). The smaller the CellChunkMap is, the more we all enjoy the locality for scans, and the binary search can hit the processor cache easily.
bq. The only thing is we should go with fixed 8 bytes for that.
This is not a desired situation. We are increasing from 12 bytes to 20 bytes, almost twice... We should not do it unless it is very very necessary...
bq. So now if you are going to write the seqId in the BB backing every cell, then the seqId as the state variable is not needed at all and hence you may need a new cell representation for it.
OK. So lets have a new cell representation.
bq. Otherwise we should still go with it and use the seqID as a caching value in addition to having it in the BB.
Why have the duplication of the same value?
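The 12-byte vs. 20-byte arithmetic above can be made concrete. A small sketch under the stated assumption that chunkId, offset and length are ints and the seqId is a long (CellRefLayout and its members are illustrative names, not HBase code):

```java
// Back-of-the-envelope for the index-entry sizes debated above: a
// CellChunkMap entry of chunkId + offset + length is three ints
// (12 bytes); adding a long seqId grows each entry to 20 bytes, so
// fewer entries fit in one index chunk.
public class CellRefLayout {
    public static final int CHUNK_ID_BYTES = Integer.BYTES; // 4
    public static final int OFFSET_BYTES   = Integer.BYTES; // 4
    public static final int LENGTH_BYTES   = Integer.BYTES; // 4
    public static final int SEQ_ID_BYTES   = Long.BYTES;    // 8

    public static final int ENTRY_WITHOUT_SEQID =
        CHUNK_ID_BYTES + OFFSET_BYTES + LENGTH_BYTES;   // 12 bytes
    public static final int ENTRY_WITH_SEQID =
        ENTRY_WITHOUT_SEQID + SEQ_ID_BYTES;             // 20 bytes

    // How many entries fit in an index chunk of the given size.
    public static int entriesPerChunk(int chunkSizeBytes, int entrySize) {
        return chunkSizeBytes / entrySize;
    }
}
```

For a hypothetical 2 MB index chunk this is roughly 174k entries at 12 bytes versus 104k at 20 bytes, which is the density loss being argued about.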
[jira] [Commented] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT
[ https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953073#comment-15953073 ] Hadoop QA commented on HBASE-9393:
--
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 102m 13s {color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 142m 49s {color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861657/HBASE-9393.v16.patch |
| JIRA Issue | HBASE-9393 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux aa34e09c1275 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 73e1bcd |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/6296/artifact/patchprocess/whitespace-eol.txt |
| unit | https://builds.apache.org/job/PreCommit-HBASE-Build/6296/artifact/patchprocess/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/6296/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/6296/console |
| Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |
This message was automatically generated.
> Hbase does not closing a closed socket resulting in many CLOSE_WAIT >
[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it
[ https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953063#comment-15953063 ] ramkrishna.s.vasudevan commented on HBASE-16438:
Ok, got it now. You are asking for the CellChunk representation. Yes, we need chunkId + offset + length + seqId. If the seqId is embedded with the Cell data, it is easier to retrieve: just doing getSeqId() can decode the value from the backing BB. The only thing is we should go with fixed 8 bytes for that. So now if you are going to write the seqId in the BB backing every cell, then the seqId as the state variable is not needed at all and hence you may need a new cell representation for it. Otherwise we should still go with it and use the seqID as a caching value in addition to having it in the BB.
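The option discussed here — serializing the seqId as a fixed 8-byte long into the backing ByteBuffer and decoding it on demand, instead of caching it in a per-cell long field — might look roughly like this (BufferSeqIdCell is a hypothetical name; for simplicity the sketch stores the seqId immediately after the cell bytes):

```java
import java.nio.ByteBuffer;

// Sketch of the trade-off above: the seqId lives in the backing buffer
// as a fixed 8-byte long right after the cell bytes and is decoded on
// demand, so there is no per-cell long field costing 8 bytes of heap.
public class BufferSeqIdCell {
    private final ByteBuffer buf;
    private final int offset; // start of this cell's bytes
    private final int length; // cell bytes, excluding the trailing seqId

    public BufferSeqIdCell(ByteBuffer buf, int offset, int length) {
        this.buf = buf;
        this.offset = offset;
        this.length = length;
    }

    // Decodes the fixed 8-byte seqId from the buffer on each call; the
    // alternative is to also cache it in a field, duplicating the value.
    public long getSeqId() {
        return buf.getLong(offset + length);
    }
}
```

The cost of this layout is the extra 8 bytes serialized per cell in the chunk and a buffer read per getSeqId() call; the benefit is a smaller on-heap cell object.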
[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it
[ https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953048#comment-15953048 ] ramkrishna.s.vasudevan commented on HBASE-16438:
[~anastas] What specific question in RB are you looking out for? Is it about the length of every cell along with the metadata for the chunks? If so, I think I answered it, and going through the latest comments I can change the chunkId to be an int. As I replied in that RB, the chunkId is stored once per ByteBuffer and not per cell, so when you convert to the chunk map you will only need the length, offset and seqId per cell. ChunkId is per ByteBuffer backing the chunk.