[jira] [Commented] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks
[ https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183707#comment-16183707 ] Anoop Sam John commented on HBASE-18898: Will try working on this post 2.0
> Provide way for the core flow to know whether CP implemented each of the hooks
> --
>
> Key: HBASE-18898
> URL: https://issues.apache.org/jira/browse/HBASE-18898
> Project: HBase
> Issue Type: Improvement
> Components: Coprocessors
> Reporter: Anoop Sam John
> Assignee: Anoop Sam John
>
> This came up as a discussion topic at the tail of HBASE-17732.
> Can we have a way in the code (before trying to call the hook) to know whether the user has implemented one particular hook or not? E.g., among the write-related hooks, prePut() might be the only one the user CP implemented; all the others are just dummy impls from the interface. Can we have a way for the core code to know this and avoid the calls to the other dummy hooks entirely? Sometimes we do some processing just for calling CP hooks (say, we have to make a POJO out of a PB object for the call), and if the user CP did not implement that hook, we could avoid this extra work entirely. The pain of this will be greater when we later have to deprecate one hook and add a new one: the dummy impl of the new hook has to call the old one, and that might normally do some extra work.
> If the CP framework itself has a way to tell this, the core code can make use of it. What I am expecting is something PB-like, where we can call CPObject.hasPre(), then CPObject.pre(). We should not be asking users to implement this extra ugly thing. When the CP instance is loaded in the RS/HM, that object will carry this info as well.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
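The hasPre()-style check the description asks for can be sketched without any new user-facing API: since observer hooks are default methods, the framework can discover at load time where each hook is declared. This is a hedged illustration, not the real HBase coprocessor API; `RegionObserverSketch`, `PutOnlyObserver`, and `implementsHook` are invented names.

```java
import java.lang.reflect.Method;

// Hedged sketch: detect whether a coprocessor overrides a hook by checking
// where the method is declared. All names here are illustrative stand-ins
// for the real HBase types.
public class HookDetection {
    // Stand-in observer interface whose hooks are no-op default methods.
    interface RegionObserverSketch {
        default void prePut(String mutation) {}
        default void preDelete(String mutation) {}
    }

    // A user coprocessor that only cares about prePut.
    static class PutOnlyObserver implements RegionObserverSketch {
        @Override public void prePut(String mutation) { /* real work */ }
    }

    /** True if cpClass overrides the named hook, i.e. the resolved method is
     *  not the inherited default from the observer interface itself. */
    static boolean implementsHook(Class<?> cpClass, String hook, Class<?>... sig) {
        try {
            Method m = cpClass.getMethod(hook, sig);
            return m.getDeclaringClass() != RegionObserverSketch.class;
        } catch (NoSuchMethodException e) {
            throw new IllegalStateException("unknown hook: " + hook, e);
        }
    }

    public static void main(String[] args) {
        // The core flow could cache these booleans when the CP is loaded.
        System.out.println(implementsHook(PutOnlyObserver.class, "prePut", String.class));    // true
        System.out.println(implementsHook(PutOnlyObserver.class, "preDelete", String.class)); // false
    }
}
```

The RS/HM could run this reflection once per loaded coprocessor and cache one boolean per hook, so skipping a dummy hook (and the POJO-from-PB conversion that precedes it) costs only a field read per request.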
[jira] [Assigned] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks
[ https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John reassigned HBASE-18898: -- Assignee: Anoop Sam John
[jira] [Commented] (HBASE-18826) Use HStore instead of Store in our own code base and remove unnecessary methods in Store interface
[ https://issues.apache.org/jira/browse/HBASE-18826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183703#comment-16183703 ] Anoop Sam John commented on HBASE-18826:
bq. One thing I can see in the current code is that the readPt for the store is anyway obtained by calling HRegion.getsmallestReadPt(). So in the Region refactoring JIRA, are we going to expose readPt or not?
When I looked at the old jira, I had not yet removed this from Region. Working on that big patch now. Let's keep it in Region as of that jira; later we can decide whether to remove it or not. Whether all that StoreScanner creation is to be allowed or not is the key!
> Use HStore instead of Store in our own code base and remove unnecessary methods in Store interface
> --
>
> Key: HBASE-18826
> URL: https://issues.apache.org/jira/browse/HBASE-18826
> Project: HBase
> Issue Type: Sub-task
> Components: Coprocessors
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18826.patch, HBASE-18826-v1.patch, HBASE-18826-v1.patch, HBASE-18826-v2.patch, HBASE-18826-v3.patch, HBASE-18826-v4.patch, HBASE-18826-v5.patch, HBASE-18826-v6.patch
[jira] [Updated] (HBASE-18010) Connect CellChunkMap to be used for flattening in CompactingMemStore
[ https://issues.apache.org/jira/browse/HBASE-18010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-18010: --- Status: Open (was: Patch Available) retry > Connect CellChunkMap to be used for flattening in CompactingMemStore > > > Key: HBASE-18010 > URL: https://issues.apache.org/jira/browse/HBASE-18010 > Project: HBase > Issue Type: Sub-task >Reporter: Anastasia Braginsky >Assignee: Anastasia Braginsky > Fix For: 2.0.0, 3.0.0 > > Attachments: HBASE-18010-branch-2.patch, HBASE-18010-branch-2.patch, > HBASE-18010.branch-2.v1.patch, HBASE-18010-V04.patch, HBASE-18010-V06.patch, > HBASE-18010-V07.patch, HBASE-18010-V08.patch, HBASE-18010-V09.patch, > HBASE-18010-V10.patch, HBASE-18010-V11.patch > > > The CellChunkMap helps to create a new type of ImmutableSegment, where the > index (CellSet's delegatee) is going to be CellChunkMap. No big cells or > upserted cells are going to be supported here. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18826) Use HStore instead of Store in our own code base and remove unnecessary methods in Store interface
[ https://issues.apache.org/jira/browse/HBASE-18826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183697#comment-16183697 ] ramkrishna.s.vasudevan commented on HBASE-18826:
One thing I can see in the current code is that the readPt for the store is anyway obtained by calling HRegion.getsmallestReadPt(). So in the Region refactoring JIRA, are we going to expose readPt or not?
[jira] [Commented] (HBASE-18127) Enable state to be passed between the region observer coprocessor hook calls
[ https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183694#comment-16183694 ] Hadoop QA commented on HBASE-18127: ---
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HBASE-18127 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for help. {color} |
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18127 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889426/HBASE-18127.master.005.patch |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8841/console |
| Powered by | Apache Yetus 0.4.0 http://yetus.apache.org |
This message was automatically generated.
> Enable state to be passed between the region observer coprocessor hook calls
> --
>
> Key: HBASE-18127
> URL: https://issues.apache.org/jira/browse/HBASE-18127
> Project: HBase
> Issue Type: New Feature
> Reporter: Lars Hofhansl
> Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-18127.master.001.patch, HBASE-18127.master.002.patch, HBASE-18127.master.002.patch, HBASE-18127.master.003.patch, HBASE-18127.master.004.patch, HBASE-18127.master.005.patch
>
> Allow a RegionObserver to optionally skip postPut/postDelete when postBatchMutate was called.
> Right now a RegionObserver can only statically implement one or the other. In scenarios where we need to work sometimes on the single postPut and postDelete hooks and sometimes on the batchMutate hooks, there is currently no place to convey this information to the single hooks, i.e. that the work has been done in the batch, so the single hooks should be skipped.
> There are various solutions:
> 1. Allow some state to be passed _per operation_.
> 2. Remove the single hooks and always call only the batch hooks (with a default wrapper for the single hooks).
> 3. More?
> [~apurtell], what we had discussed a few days back.
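Option 1 above (state passed per operation) can be sketched as a scratch-pad context the framework threads through all hooks of one operation. `ObserverContext` and the hook signatures below are illustrative assumptions, not the actual patch.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of per-operation state between coprocessor hooks: the
// batch hook records that it did the work, and the later single-op hooks
// check that flag and skip themselves. Names are invented for illustration.
public class OperationContextSketch {
    /** Per-operation scratch space; the framework would create one per batch. */
    static final class ObserverContext {
        private final Map<String, Object> state = new HashMap<>();
        void put(String key, Object value) { state.put(key, value); }
        Object get(String key) { return state.get(key); }
    }

    static final String DONE_IN_BATCH = "doneInBatch";

    // Observer-side logic: do the work once in the batch hook...
    static void postBatchMutate(ObserverContext ctx) {
        // ... heavy per-batch work would run here ...
        ctx.put(DONE_IN_BATCH, Boolean.TRUE);
    }

    // ...and skip it in the single-op hook when the batch already ran.
    static boolean postPut(ObserverContext ctx) {
        if (Boolean.TRUE.equals(ctx.get(DONE_IN_BATCH))) {
            return false; // work already done in postBatchMutate, skip
        }
        // per-Put work for non-batch code paths would run here
        return true;
    }

    public static void main(String[] args) {
        ObserverContext batchOp = new ObserverContext();
        postBatchMutate(batchOp);
        System.out.println(postPut(batchOp));  // false: batch did the work

        ObserverContext singleOp = new ObserverContext();
        System.out.println(postPut(singleOp)); // true: no batch ran
    }
}
```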
[jira] [Updated] (HBASE-18127) Enable state to be passed between the region observer coprocessor hook calls
[ https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhishek Singh Chouhan updated HBASE-18127: --- Attachment: HBASE-18127.master.005.patch
[jira] [Commented] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks
[ https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183692#comment-16183692 ] ramkrishna.s.vasudevan commented on HBASE-18898:
bq. CPObject.hasPre(), then CPObject.pre()... We should not be asking users to impl this extra ugly thing. When the CP instance is loaded in the RS/HM, that object will be having this info also.
This is a good one. A nice-to-have feature.
[jira] [Commented] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183685#comment-16183685 ] Anoop Sam John commented on HBASE-17732: Done. HBASE-18898
> Coprocessor Design Improvements
> ---
>
> Key: HBASE-17732
> URL: https://issues.apache.org/jira/browse/HBASE-17732
> Project: HBase
> Issue Type: Improvement
> Components: Coprocessors
> Reporter: Appy
> Assignee: Appy
> Priority: Critical
> Labels: incompatible
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-17732.master.001.patch, HBASE-17732.master.002.patch, HBASE-17732.master.003.patch, HBASE-17732.master.004.patch, HBASE-17732.master.005.patch, HBASE-17732.master.006.patch, HBASE-17732.master.007.patch, HBASE-17732.master.008.patch, HBASE-17732.master.009.patch, HBASE-17732.master.010.patch, HBASE-17732.master.011.patch, HBASE-17732.master.012.patch, HBASE-17732.master.013.patch, HBASE-17732.master.014.patch
>
> The two main changes are:
> * *Adding a template for the coprocessor type to CoprocessorEnvironment, i.e. {{interface CoprocessorEnvironment}}*
> ** Enables us to load only relevant coprocessors in hosts. Right now each type of host loads all types of coprocs, and it is only during execOperation that it checks whether the coproc is of the correct type, i.e. XCoprocessorHost will load XObserver, YObserver, and all the others, and will check in execOperation whether {{coproc instanceOf XObserver}} and ignore the rest.
> ** Allows sharing of a bunch of functions/classes which are currently duplicated in each host, e.g. CoprocessorOperations, CoprocessorOperationWithResult, execOperations().
> * *Introduce 4 coprocessor classes and use composition between these new classes and the old observers*
> ** The real gold here is that, moving forward, we'll be able to break down giant everything-in-one observers (MasterObserver has 100+ functions) into smaller, more focused observers. These smaller observers can then have different compat guarantees!!
> Here's a more detailed design doc: https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit?usp=sharing
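The first change above, parameterizing the host/environment by coprocessor type so a host loads only coprocessors of its own kind instead of instanceof-checking at call time, can be illustrated in miniature. All class names below are simplified stand-ins for the real HBase types.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of a type-parameterized coprocessor host. The real design
// templates CoprocessorEnvironment; here a toy Host<C> filters at load time.
public class TypedHostSketch {
    interface Coprocessor {}
    interface RegionObserver extends Coprocessor { String preGet(String row); }
    interface MasterObserver extends Coprocessor {}

    static final class Host<C extends Coprocessor> {
        private final Class<C> type;
        private final List<C> loaded = new ArrayList<>();
        Host(Class<C> type) { this.type = type; }

        /** Load only coprocessors assignable to this host's type;
         *  everything else is ignored instead of being carried around. */
        void tryLoad(Coprocessor cp) {
            if (type.isInstance(cp)) loaded.add(type.cast(cp));
        }

        int loadedCount() { return loaded.size(); }
    }

    public static void main(String[] args) {
        Host<RegionObserver> regionHost = new Host<>(RegionObserver.class);
        regionHost.tryLoad((RegionObserver) row -> "get:" + row); // accepted
        regionHost.tryLoad(new MasterObserver() {});              // ignored: wrong type
        System.out.println(regionHost.loadedCount());             // 1
    }
}
```

With this shape, execOperation can iterate a list that is already of the right observer type, so no per-call instanceof checks are needed.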
[jira] [Updated] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks
[ https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-18898: --- Component/s: Coprocessors
[jira] [Created] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks
Anoop Sam John created HBASE-18898: -- Summary: Provide way for the core flow to know whether CP implemented each of the hooks Key: HBASE-18898 URL: https://issues.apache.org/jira/browse/HBASE-18898 Project: HBase Issue Type: Improvement Reporter: Anoop Sam John
[jira] [Updated] (HBASE-18884) Coprocessor Design Improvements follow up of HBASE-17732
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-18884: --- Issue Type: Improvement (was: Bug)
> Coprocessor Design Improvements follow up of HBASE-17732
> --
>
> Key: HBASE-18884
> URL: https://issues.apache.org/jira/browse/HBASE-18884
> Project: HBase
> Issue Type: Improvement
> Reporter: Appy
> Assignee: Appy
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18884.master.001.patch
>
> Creating a new jira to track suggestions that came up in review (https://reviews.apache.org/r/62141/) but are not blockers and can be done separately.
> Suggestions by [~apurtell]:
> - Change {{Service Coprocessor#getService()}} to {{List<Service> Coprocessor#getServices()}}
> - I think we overstepped by offering [table resource management via this interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. There are a lot of other internal resource types which could/should be managed this way, but they are all left up to the implementor. Perhaps we should remove the table ref management and leave it up to them as well.
> - Check in the finalized design doc into the repo (https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit) (fyi: [~stack])
> - Add an example to the javadoc of the Coprocessor base interface on how to implement one in the new design
[jira] [Commented] (HBASE-18826) Use HStore instead of Store in our own code base and remove unnecessary methods in Store interface
[ https://issues.apache.org/jira/browse/HBASE-18826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183681#comment-16183681 ] Anoop Sam John commented on HBASE-18826:
I mean we should perhaps comment on that jira and show alternate ways for CP users from 2.0 onwards, so that we know the usages better. The above jira says people were creating a StoreScanner on their own from pre hooks. Maybe let the system make the scanner, and in the post hook they can wrap it with custom scanners. Also, the pattern of the pre hook returning the Scanner and the core code thereby bypassing the actual scanner creation (eg: only), we may have to change now?
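The wrap-in-post-hook idea above can be sketched with a toy scanner: the system builds the base scanner, and the hook returns a delegating wrapper instead of constructing a StoreScanner itself. The `Scanner` interface and names here are invented for illustration, not the real HBase `KeyValueScanner`/`StoreScanner` API.

```java
import java.util.Iterator;
import java.util.List;

// Hedged sketch of the delegating-wrapper pattern for scanner hooks.
public class WrappingScannerSketch {
    /** Toy scanner: returns the next cell, or null when exhausted. */
    interface Scanner { String next(); }

    /** Stands in for the scanner the system would build itself. */
    static Scanner systemScanner(List<String> cells) {
        Iterator<String> it = cells.iterator();
        return () -> it.hasNext() ? it.next() : null;
    }

    /** What a postScannerOpen-style hook could return: a wrapper that
     *  delegates to the system scanner and filters on the way out. */
    static Scanner filteringWrapper(Scanner delegate, String forbiddenPrefix) {
        return () -> {
            String cell;
            while ((cell = delegate.next()) != null) {
                if (!cell.startsWith(forbiddenPrefix)) return cell; // pass through
            }
            return null; // delegate exhausted
        };
    }

    public static void main(String[] args) {
        Scanner s = filteringWrapper(systemScanner(List.of("a1", "x2", "a3")), "x");
        for (String c; (c = s.next()) != null; ) System.out.println(c); // a1, a3
    }
}
```

The design benefit is that the core always owns scanner construction (read point, heap setup), while the CP only decorates behavior, which is much easier to keep compatible across versions than letting CPs build StoreScanner instances directly.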
[jira] [Commented] (HBASE-18891) Upgrade netty-all jar
[ https://issues.apache.org/jira/browse/HBASE-18891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183679#comment-16183679 ] Hadoop QA commented on HBASE-18891: ---
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 3s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 7m 16s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red} 0m 41s{color} | {color:red} patch has 11 errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 15m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 15s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}334m 56s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 3m 0s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}374m 19s{color} | {color:black} {color} |
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.util.TestHBaseFsck |
| Timed out junit tests | org.apache.hadoop.hbase.replication.TestReplicationKillSlaveRS |
| | org.apache.hadoop.hbase.replication.TestReplicationDisableInactivePeer |
| | org.apache.hadoop.hbase.master.TestMasterMetricsWrapper |
| | org.apache.hadoop.hbase.client.TestScanWithoutFetchingData |
| | org.apache.hadoop.hbase.master.procedure.TestServerCrashProcedure |
| | org.apache.hadoop.hbase.trace.TestHTraceHooks |
| | org.apache.hadoop.hbase.master.TestMasterFailover |
| | org.apache.hadoop.hbase.TestFullLogReconstruction |
| | org.apache.hadoop.hbase.mapred.TestTableInputFormat |
| | org.apache.hadoop.hbase.TestHBaseTestingUtility |
| |
[jira] [Updated] (HBASE-18888) StealJobQueue should call super() to init the PriorityBlockingQueue
[ https://issues.apache.org/jira/browse/HBASE-18888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-18888: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available)
I just noticed the +1. Pushed to master and branch-2. Thanks for the reviews. QA is also green.
> StealJobQueue should call super() to init the PriorityBlockingQueue
> ---
>
> Key: HBASE-18888
> URL: https://issues.apache.org/jira/browse/HBASE-18888
> Project: HBase
> Issue Type: Bug
> Affects Versions: 2.0.0-alpha-3
> Reporter: ramkrishna.s.vasudevan
> Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18888.patch
>
> {code}
> ERROR: java.io.IOException: org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner cannot be cast to java.lang.Comparable
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:465)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258)
> Caused by: java.lang.ClassCastException: org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner cannot be cast to java.lang.Comparable
> at java.util.concurrent.PriorityBlockingQueue.siftUpComparable(PriorityBlockingQueue.java:357)
> at java.util.concurrent.PriorityBlockingQueue.offer(PriorityBlockingQueue.java:489)
> at org.apache.hadoop.hbase.util.StealJobQueue.offer(StealJobQueue.java:103)
> at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1361)
> at org.apache.hadoop.hbase.regionserver.CompactSplit.requestCompactionInternal(CompactSplit.java:291)
> at org.apache.hadoop.hbase.regionserver.CompactSplit.requestCompactionInternal(CompactSplit.java:248)
> at org.apache.hadoop.hbase.regionserver.CompactSplit.requestCompaction(CompactSplit.java:236)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.compactRegion(RSRpcServices.java:1591)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26856)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
> {code}
> Seems to be a simple miss. StealJobQueue does not init the PriorityBlockingQueue that it extends, and so major compaction/compaction just fails with the above stack trace.
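The miss is easy to reproduce in miniature: a PriorityBlockingQueue subclass that accepts a Comparator but never passes it to super() leaves the parent using natural ordering, so offering non-Comparable jobs throws exactly this ClassCastException. The classes below are illustrative, not the actual StealJobQueue code.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Hedged sketch of the HBASE-18888 bug and its fix.
public class StealJobQueueSketch {
    /** Stand-in for CompactionRunner: deliberately NOT Comparable. */
    static final class Job { final int priority; Job(int p) { priority = p; } }

    /** The bug: the comparator never reaches the parent, so the parent
     *  falls back to natural ordering of the (non-Comparable) elements. */
    static final class BuggyQueue<T> extends PriorityBlockingQueue<T> {
        BuggyQueue(Comparator<? super T> cmp) {
            // implicit super(): natural ordering is used, cmp is ignored
        }
    }

    /** The fix: initialize the parent with the comparator. */
    static final class FixedQueue<T> extends PriorityBlockingQueue<T> {
        FixedQueue(Comparator<? super T> cmp) {
            super(11, cmp); // 11 is the default initial capacity
        }
    }

    public static void main(String[] args) {
        Comparator<Job> byPriority = Comparator.comparingInt(j -> j.priority);
        try {
            BuggyQueue<Job> bad = new BuggyQueue<>(byPriority);
            bad.offer(new Job(1));
            bad.offer(new Job(2)); // forces a compare; Job is not Comparable
        } catch (ClassCastException e) {
            System.out.println("buggy queue: " + e); // Job cannot be cast to Comparable
        }
        FixedQueue<Job> good = new FixedQueue<>(byPriority);
        good.offer(new Job(2));
        good.offer(new Job(1));
        System.out.println(good.poll().priority); // 1: comparator honored
    }
}
```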
[jira] [Updated] (HBASE-18649) Deprecate KV Usage in MR to move to Cells in 3.0
[ https://issues.apache.org/jira/browse/HBASE-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-18649: --- Attachment: HBASE-18649-branch-2_1.patch
Patch for branch-2 for QA. Cleared the whitespace warnings. All tests seem to have already passed.
> Deprecate KV Usage in MR to move to Cells in 3.0
> --
>
> Key: HBASE-18649
> URL: https://issues.apache.org/jira/browse/HBASE-18649
> Project: HBase
> Issue Type: Improvement
> Components: mapreduce
> Affects Versions: 2.0.0-alpha-2
> Reporter: ramkrishna.s.vasudevan
> Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 3.0.0, 2.1.0, 2.0.0-alpha-4
>
> Attachments: HBASE-18649-branch-2_1.patch, HBASE-18649_branch-2.patch, HBASE-18649-branch-2.patch, HBASE-18649_master_2.patch, HBASE-18649_master_3.patch, HBASE-18649_master_5.patch, HBASE-18649_master_6.patch, HBASE-18649_master.patch
[jira] [Updated] (HBASE-18649) Deprecate KV Usage in MR to move to Cells in 3.0
[ https://issues.apache.org/jira/browse/HBASE-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-18649: --- Status: Patch Available (was: Open)
[jira] [Updated] (HBASE-18649) Deprecate KV Usage in MR to move to Cells in 3.0
[ https://issues.apache.org/jira/browse/HBASE-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-18649: --- Status: Open (was: Patch Available)
[jira] [Commented] (HBASE-18846) Accommodate the hbase-indexer/lily/SEP consumer deploy-type
[ https://issues.apache.org/jira/browse/HBASE-18846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183661#comment-16183661 ] stack commented on HBASE-18846: --- .002 another hack patch that has too much in it and needs a load of cleanup but this one includes a viable route to getting hbase-indexer out of hbase privates. h2. Proposal The hbase-indexer, instead of starting a cluster of hacked up servers made of hbase's RpcServer hosting an Admin Service with all methods but the replication method shutdown, will run a stock RegionServer that has had the bulk of its threads/services/chores/etc. disabled via configuration. The hbase-indexer then adds an implementation of the hbase Connection Interface just so it can return Table implementations. All Table overridden methods throw unsupported EXCEPT the #batch method; it is what the Replication sink calls to farm out edits in the sink cluster. The #batch method does the index insertion returning Result instance to keep the Replication stream flowing (and retrying). Let me go and get the hbase-indexer folks and see what they think. > Accommodate the hbase-indexer/lily/SEP consumer deploy-type > --- > > Key: HBASE-18846 > URL: https://issues.apache.org/jira/browse/HBASE-18846 > Project: HBase > Issue Type: Bug >Reporter: stack > Attachments: HBASE-18846.master.001.patch, > HBASE-18846.master.002.patch, javadoc.txt > > > This is a follow-on from HBASE-10504, Define a Replication Interface. There > we defined a new, flexible replication endpoint for others to implement but > it did little to help the case of the lily hbase-indexer. This issue takes up > the case of the hbase-indexer. > The hbase-indexer poses to hbase as a 'fake' peer cluster (For why > hbase-indexer is implemented so, the advantage to having the indexing done in > a separate process set that can be independently scaled, can participate in > the same security realm, etc., see discussion in HBASE-10504). 
The > hbase-indexer will start up a cut-down "RegionServer" processes that are just > an instance of hbase RpcServer hosting an AdminProtos Service. They make > themselves 'appear' to the Replication Source by hoisting up an ephemeral > znode 'registering' as a RegionServer. The source cluster then streams > WALEdits to the Admin Protos method: > {code} > public ReplicateWALEntryResponse replicateWALEntry(final RpcController > controller, > final ReplicateWALEntryRequest request) throws ServiceException { > {code} > The hbase-indexer relies on other hbase internals like Server so it can get a > ZooKeeperWatcher instance and know the 'name' to use for this cut-down server. > Thoughts on how to proceed include: > > * Better formalize its current digestion of hbase internals; make it so > rpcserver is allowed to be used by others, etc. This would be hard to do > given they use basics like Server, Protobuf serdes for WAL types, and > AdminProtos Service. Any change in this wide API breaks (again) > hbase-indexer. We have made a 'channel' for Coprocessor Endpoints so they > continue to work though they use 'internal' types. They can use protos in > hbase-protocol. hbase-protocol protos are in a limbo currently where they are > sort-of 'public'; a TODO. Perhaps the hbase-indexer could do similar relying > on the hbase-protocol (pb2.5) content and we could do something to reveal > rpcserver and zk for hbase-indexer safe use. > * Start an actual RegionServer only have it register the AdminProtos Service > only -- not ClientProtos and the Service that does Master interaction, etc. > [I checked, this is not as easy to do as I at first thought -- St.Ack] Then > have the hbase-indexer implement an AdminCoprocessor to override the > replicateWALEntry method (the Admin CP implementation may need work). This > would narrow the hbase-indexer exposure to that of the Admin Coprocessor > Interface > * Over in HBASE-10504, [~enis] suggested "... 
if we want to provide > isolation for the replication services in hbase, we can have a simple host as > another daemon which hosts the ReplicationEndpoint implementation. RS's will > use a built-in RE to send the edits to this layer, and the host will delegate > it to the RE implementation. The flow would be something like: RS --> RE > inside RS --> Host daemon for RE --> Actual RE implementation --> third party > system..." > > Other crazy notions occur including the setup of an Admin Interface > Coprocessor Endpoint. A new ReplicationEndpoint would feed the replication > stream to the remote cluster via the CPEP registered channel. > But time is short. Hopefully we can figure something that will work in 2.0 > timeframe w/o too much code movement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
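[Editor's note] The batch-only Table idea in the proposal above can be sketched in plain Java. The types below are minimal hypothetical stand-ins for the real org.apache.hadoop.hbase.client interfaces, which have many more methods; this only illustrates the "everything throws except #batch" shape:

```java
import java.util.List;

// Hypothetical stand-ins for the HBase client types the proposal mentions.
interface Row {}
class Result {}

interface Table {
    void put(Row put);
    Object[] batch(List<? extends Row> actions);
}

// Every method throws except batch(), which is what the Replication sink
// calls to farm out edits; batch() does the index insertion and returns
// Result instances so the replication stream keeps flowing (and retrying).
class IndexerTable implements Table {
    @Override
    public void put(Row put) {
        throw new UnsupportedOperationException("indexer supports batch() only");
    }

    @Override
    public Object[] batch(List<? extends Row> actions) {
        Object[] results = new Object[actions.size()];
        for (int i = 0; i < actions.size(); i++) {
            // real code would feed the edit to the indexer here
            results[i] = new Result();
        }
        return results;
    }
}
```

A matching Connection implementation would exist only to hand out such Table instances, per the proposal.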
[jira] [Updated] (HBASE-18846) Accommodate the hbase-indexer/lily/SEP consumer deploy-type
[ https://issues.apache.org/jira/browse/HBASE-18846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-18846: -- Attachment: HBASE-18846.master.002.patch > Accommodate the hbase-indexer/lily/SEP consumer deploy-type > --- > > Key: HBASE-18846 > URL: https://issues.apache.org/jira/browse/HBASE-18846 > Project: HBase > Issue Type: Bug >Reporter: stack > Attachments: HBASE-18846.master.001.patch, > HBASE-18846.master.002.patch, javadoc.txt > > > This is a follow-on from HBASE-10504, Define a Replication Interface. There > we defined a new, flexible replication endpoint for others to implement but > it did little to help the case of the lily hbase-indexer. This issue takes up > the case of the hbase-indexer. > The hbase-indexer poses to hbase as a 'fake' peer cluster (For why > hbase-indexer is implemented so, the advantage to having the indexing done in > a separate process set that can be independently scaled, can participate in > the same security realm, etc., see discussion in HBASE-10504). The > hbase-indexer will start up a cut-down "RegionServer" processes that are just > an instance of hbase RpcServer hosting an AdminProtos Service. They make > themselves 'appear' to the Replication Source by hoisting up an ephemeral > znode 'registering' as a RegionServer. The source cluster then streams > WALEdits to the Admin Protos method: > {code} > public ReplicateWALEntryResponse replicateWALEntry(final RpcController > controller, > final ReplicateWALEntryRequest request) throws ServiceException { > {code} > The hbase-indexer relies on other hbase internals like Server so it can get a > ZooKeeperWatcher instance and know the 'name' to use for this cut-down server. > Thoughts on how to proceed include: > > * Better formalize its current digestion of hbase internals; make it so > rpcserver is allowed to be used by others, etc. 
This would be hard to do > given they use basics like Server, Protobuf serdes for WAL types, and > AdminProtos Service. Any change in this wide API breaks (again) > hbase-indexer. We have made a 'channel' for Coprocessor Endpoints so they > continue to work though they use 'internal' types. They can use protos in > hbase-protocol. hbase-protocol protos are in a limbo currently where they are > sort-of 'public'; a TODO. Perhaps the hbase-indexer could do similar relying > on the hbase-protocol (pb2.5) content and we could do something to reveal > rpcserver and zk for hbase-indexer safe use. > * Start an actual RegionServer only have it register the AdminProtos Service > only -- not ClientProtos and the Service that does Master interaction, etc. > [I checked, this is not as easy to do as I at first thought -- St.Ack] Then > have the hbase-indexer implement an AdminCoprocessor to override the > replicateWALEntry method (the Admin CP implementation may need work). This > would narrow the hbase-indexer exposure to that of the Admin Coprocessor > Interface > * Over in HBASE-10504, [~enis] suggested "... if we want to provide > isolation for the replication services in hbase, we can have a simple host as > another daemon which hosts the ReplicationEndpoint implementation. RS's will > use a built-in RE to send the edits to this layer, and the host will delegate > it to the RE implementation. The flow would be something like: RS --> RE > inside RS --> Host daemon for RE --> Actual RE implementation --> third party > system..." > > Other crazy notions occur including the setup of an Admin Interface > Coprocessor Endpoint. A new ReplicationEndpoint would feed the replication > stream to the remote cluster via the CPEP registered channel. > But time is short. Hopefully we can figure something that will work in 2.0 > timeframe w/o too much code movement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183648#comment-16183648 ] Appy commented on HBASE-17732: -- bq. Can we have a way in the code (before trying to call the hook) to know whether the user has implemented one particular hook or not? You're right. I too gave a lot of thought on this unnecessary creation of extra 'Operation' object for each and every hook if there's a loaded CP which only overrides few (or even worse, one) hook(s). It is definitely doable using reflection! I have a rough design in mind but not the bandwidth to work on it right now. But let's get the discussion going. Create a jira? > Coprocessor Design Improvements > --- > > Key: HBASE-17732 > URL: https://issues.apache.org/jira/browse/HBASE-17732 > Project: HBase > Issue Type: Improvement > Components: Coprocessors >Reporter: Appy >Assignee: Appy >Priority: Critical > Labels: incompatible > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-17732.master.001.patch, > HBASE-17732.master.002.patch, HBASE-17732.master.003.patch, > HBASE-17732.master.004.patch, HBASE-17732.master.005.patch, > HBASE-17732.master.006.patch, HBASE-17732.master.007.patch, > HBASE-17732.master.008.patch, HBASE-17732.master.009.patch, > HBASE-17732.master.010.patch, HBASE-17732.master.011.patch, > HBASE-17732.master.012.patch, HBASE-17732.master.013.patch, > HBASE-17732.master.014.patch > > > The two main changes are: > * *Adding template for coprocessor type to CoprocessorEnvironment i.e. > {{interface CoprocessorEnvironment}}* > ** Enables us to load only relevant coprocessors in hosts. Right now each > type of host loads all types of coprocs and it's only during execOperation > that it checks if the coproc is of correct type i.e. XCoprocessorHost will > load XObserver, YObserver, and all others, and will check in execOperation if > {{coproc instanceOf XObserver}} and ignore the rest. 
> ** Allow sharing of a bunch functions/classes which are currently > duplicated in each host. For eg. CoprocessorOperations, > CoprocessorOperationWithResult, execOperations(). > * *Introduce 4 coprocessor classes and use composition between these new > classes and and old observers* > ** The real gold here is, moving forward, we'll be able to break down giant > everything-in-one observers (masterobserver has 100+ functions) into smaller, > more focused observers. These smaller observer can then have different compat > guarantees!! > Here's a more detailed design doc: > https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029)
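[Editor's note] The reflection-based detection Appy says is "definitely doable" can be sketched against a hypothetical two-hook observer interface (not the real RegionObserver): a hook counts as implemented when the method resolves to a declaration outside the interface that holds the default.

```java
import java.lang.reflect.Method;

// Hypothetical observer interface; the default methods are the "dummy" impls.
interface WriteObserver {
    default void prePut() {}
    default void preDelete() {}
}

// A user coprocessor that only overrides one hook.
class PutOnlyCoprocessor implements WriteObserver {
    @Override
    public void prePut() { /* real work */ }
}

class HookDetector {
    // getMethod() resolves default methods to the declaring interface, so a
    // declaring class other than WriteObserver means the user overrode it.
    static boolean implementsHook(Class<? extends WriteObserver> cp, String hook) {
        try {
            Method m = cp.getMethod(hook);
            return m.getDeclaringClass() != WriteObserver.class;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```

The framework could run this check once per loaded coprocessor and skip dispatch for hooks that are still defaults.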
[jira] [Commented] (HBASE-18826) Use HStore instead of Store in our own code base and remove unnecessary methods in Store interface
[ https://issues.apache.org/jira/browse/HBASE-18826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183642#comment-16183642 ] Duo Zhang commented on HBASE-18826: --- Anyway for now it is not safe for users to create a StoreScanner. If they want to get a StoreScanner for compaction and also specify a filter then the boom... The ScanQueryMatcher for compaction will not consider a filter... > Use HStore instead of Store in our own code base and remove unnecessary > methods in Store interface > -- > > Key: HBASE-18826 > URL: https://issues.apache.org/jira/browse/HBASE-18826 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18826.patch, HBASE-18826-v1.patch, > HBASE-18826-v1.patch, HBASE-18826-v2.patch, HBASE-18826-v3.patch, > HBASE-18826-v4.patch, HBASE-18826-v5.patch, HBASE-18826-v6.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183640#comment-16183640 ] Anoop Sam John commented on HBASE-17732: This is great stuff from u. One Q... Now we can have diff kind of Observer hooks going into diff classes. This is nice. Can we have a way in the code (before trying to call the hook) to know whether the user has implemented one particular hook or not? eg: On write related hooks only prePut() might be what the user CP implemented. All others are just dummy impl from the interface. Can we have a way for the core code to know this and avoid the call to other dummy hooks fully? Some times we do some processing for just calling CP hooks (Say we have to make a POJO out of PB object for calling) and if the user CP not impl this hook, we can avoid this extra work fully. The pain of this will be more when we have to later deprecate one hook and add new. So the dummy impl in new hook has to call the old one and that might be doing some extra work normally. If the CP f/w itself is having a way to tell this, the core code can make use. What am expecting is some thing like in PB way where we can call CPObject.hasPre(), then CPObject. pre ().. Should not like asking users to impl this extra ugly thing. When the CP instance is loaded in the RS/HM, that object will be having this info also. Just some thinking and sharing with u. May be for thinking and later changes. Thanks for this nice refactoring. 
> Coprocessor Design Improvements > --- > > Key: HBASE-17732 > URL: https://issues.apache.org/jira/browse/HBASE-17732 > Project: HBase > Issue Type: Improvement > Components: Coprocessors >Reporter: Appy >Assignee: Appy >Priority: Critical > Labels: incompatible > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-17732.master.001.patch, > HBASE-17732.master.002.patch, HBASE-17732.master.003.patch, > HBASE-17732.master.004.patch, HBASE-17732.master.005.patch, > HBASE-17732.master.006.patch, HBASE-17732.master.007.patch, > HBASE-17732.master.008.patch, HBASE-17732.master.009.patch, > HBASE-17732.master.010.patch, HBASE-17732.master.011.patch, > HBASE-17732.master.012.patch, HBASE-17732.master.013.patch, > HBASE-17732.master.014.patch > > > The two main changes are: > * *Adding template for coprocessor type to CoprocessorEnvironment i.e. > {{interface CoprocessorEnvironment}}* > ** Enables us to load only relevant coprocessors in hosts. Right now each > type of host loads all types of coprocs and it's only during execOperation > that it checks if the coproc is of correct type i.e. XCoprocessorHost will > load XObserver, YObserver, and all others, and will check in execOperation if > {{coproc instanceOf XObserver}} and ignore the rest. > ** Allow sharing of a bunch functions/classes which are currently > duplicated in each host. For eg. CoprocessorOperations, > CoprocessorOperationWithResult, execOperations(). > * *Introduce 4 coprocessor classes and use composition between these new > classes and and old observers* > ** The real gold here is, moving forward, we'll be able to break down giant > everything-in-one observers (masterobserver has 100+ functions) into smaller, > more focused observers. These smaller observer can then have different compat > guarantees!! > Here's a more detailed design doc: > https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029)
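[Editor's note] Anoop's hasPre()-style check need not be implemented by users: it could be precomputed once when the CP instance is loaded in the RS/HM, as he suggests. A rough sketch, again with hypothetical names, showing the load-time cache plus a guarded call site that skips the expensive PB-to-POJO work:

```java
import java.util.EnumSet;

class CoprocessorHostSketch {
    enum Hook { PRE_PUT, PRE_DELETE }

    // Hypothetical observer interface with default (dummy) hook impls.
    interface WriteObserver {
        default void prePut() {}
        default void preDelete() {}
    }

    // Computed once at coprocessor load time: which hooks are real overrides.
    static EnumSet<Hook> implementedHooks(Class<? extends WriteObserver> cp) {
        EnumSet<Hook> hooks = EnumSet.noneOf(Hook.class);
        try {
            if (cp.getMethod("prePut").getDeclaringClass() != WriteObserver.class) {
                hooks.add(Hook.PRE_PUT);
            }
            if (cp.getMethod("preDelete").getDeclaringClass() != WriteObserver.class) {
                hooks.add(Hook.PRE_DELETE);
            }
        } catch (NoSuchMethodException e) {
            // defaults always exist on the interface, so this cannot happen
        }
        return hooks;
    }

    // Call site in the core flow: do the expensive argument-building only
    // when the hook was actually overridden.
    static void onPut(WriteObserver cp, EnumSet<Hook> hooks) {
        if (hooks.contains(Hook.PRE_PUT)) {
            // expensive PB -> POJO conversion would happen here, then:
            cp.prePut();
        }
    }
}

// Sample user coprocessor overriding only one hook.
class OnlyPut implements CoprocessorHostSketch.WriteObserver {
    @Override
    public void prePut() { /* real work */ }
}
```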
[jira] [Resolved] (HBASE-18896) Work around incompatible change to DistCpOptions for hadoop-3
[ https://issues.apache.org/jira/browse/HBASE-18896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser resolved HBASE-18896. Resolution: Won't Fix Being addressed in HBASE-18843. > Work around incompatible change to DistCpOptions for hadoop-3 > - > > Key: HBASE-18896 > URL: https://issues.apache.org/jira/browse/HBASE-18896 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Vladimir Rodionov >Priority: Blocker > Fix For: 2.0.0-alpha-4 > > > HADOOP-14267 change methods on DistCpOptions and introduced a new Builder > class which doesn't exist on any previous releases. > We'll have to shim/reflect around this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183639#comment-16183639 ] Ted Yu commented on HBASE-18885: Please go ahead with the commit. > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI >Assignee: Shaofeng SHI > Fix For: 1.4.0, 1.5.0, 2.0.0-alpha-4 > > Attachments: HBASE-18885.branch-1.001.patch, > HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original reporting is in KYLIN-2788[1]. After some investigation, we > found this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration; so it always writing to "_temporary" > folder. Since AWS EMR configured to use DirectOutputCommitter for S3, then > this problem occurs: Hadoop expects to see the file directly under output > path, while the RecordWriter generates them in "_temporary" folder. This > caused no data be loaded to HTable. > Seems this problem exists in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
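[Editor's note] The fix direction for the bug above is to resolve the committer from the job's configuration instead of always constructing a FileOutputCommitter. A minimal reflection-based sketch; the class names and the "output.committer.class" key are hypothetical stand-ins (real code would go through Hadoop's OutputFormat/job configuration, not this mock):

```java
import java.util.Map;

interface OutputCommitter {}
class FileOutputCommitter implements OutputCommitter {}   // the Hadoop default
class DirectOutputCommitter implements OutputCommitter {} // e.g. what EMR sets for S3

class CommitterResolver {
    // Look up the configured committer class; fall back to the default
    // committer only when nothing is configured, rather than hardcoding it.
    static OutputCommitter resolve(Map<String, String> conf) {
        String cls = conf.getOrDefault("output.committer.class",
                FileOutputCommitter.class.getName());
        try {
            return (OutputCommitter) Class.forName(cls)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("bad committer class: " + cls, e);
        }
    }
}
```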
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183636#comment-16183636 ] Josh Elser commented on HBASE-18843: bq. Doing this now. bq. Reverted from branch-2 and master. Oh... I guess I won't revert it then. I had builds running locally to make sure the branches were actually "good". > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183634#comment-16183634 ] Appy commented on HBASE-18843: -- Reverted from branch-2 and master. > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18601) Update Htrace to 4.2
[ https://issues.apache.org/jira/browse/HBASE-18601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183630#comment-16183630 ] Hadoop QA commented on HBASE-18601: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 6m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 2s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hbase-testing-util . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 14s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s{color} | {color:red} hbase-rest in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 34s{color} | {color:red} hbase-spark in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 6m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} rubocop {color} | {color:green} 0m 4s{color} | {color:green} There were no new rubocop issues. {color} | | {color:green}+1{color} | {color:green} ruby-lint {color} | {color:green} 0m 1s{color} | {color:green} There were no new ruby-lint issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 25s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 23s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 37m 13s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hbase-testing-util . {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s{color} | {color:red} hbase-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s{color} | {color:green} hbase-common in the patch
[jira] [Commented] (HBASE-18826) Use HStore instead of Store in our own code base and remove unnecessary methods in Store interface
[ https://issues.apache.org/jira/browse/HBASE-18826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183629#comment-16183629 ] Anoop Sam John commented on HBASE-18826: On StoreScanner creation from pre hooks, see this issue HBASE-16962. > Use HStore instead of Store in our own code base and remove unnecessary > methods in Store interface > -- > > Key: HBASE-18826 > URL: https://issues.apache.org/jira/browse/HBASE-18826 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18826.patch, HBASE-18826-v1.patch, > HBASE-18826-v1.patch, HBASE-18826-v2.patch, HBASE-18826-v3.patch, > HBASE-18826-v4.patch, HBASE-18826-v5.patch, HBASE-18826-v6.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183628#comment-16183628 ] Chia-Ping Tsai commented on HBASE-18885: This is a bug fix. Perhaps we need to patch against branch-1.3 and branch-1.2. [~tedyu] WDYT? > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI >Assignee: Shaofeng SHI > Fix For: 1.4.0, 1.5.0, 2.0.0-alpha-4 > > Attachments: HBASE-18885.branch-1.001.patch, > HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original reporting is in KYLIN-2788[1]. After some investigation, we > found this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration; so it always writing to "_temporary" > folder. Since AWS EMR configured to use DirectOutputCommitter for S3, then > this problem occurs: Hadoop expects to see the file directly under output > path, while the RecordWriter generates them in "_temporary" folder. This > caused no data be loaded to HTable. > Seems this problem exists in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183624#comment-16183624 ] Appy commented on HBASE-18843: -- Thanks a lot [~elserj] for prompt back-and-forth! Yeah, let's revert it for now. I looked around a bit more, and maybe have a more elegant solution. Just override {{protected void doBuildListing(Path pathToListingFile, DistCpContext context) throws IOException }} which is marked protected, so is meant to be. For each path, do path=path.getParent() for 1 less time than is being done in current {{computeSourceRootPath}}. Set the context appropriately so that the default implementation of {{computeSourceRootPath}} does one more path.getParent. And that should be enough! > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
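[Editor's note] The "one less time than is being done" path arithmetic in Appy's suggestion, shown in isolation; java.nio paths stand in for Hadoop Paths here, and this only illustrates the parent-trimming idea, not the actual DistCp doBuildListing override:

```java
import java.nio.file.Path;

class ListingPathDemo {
    // Walk up one level fewer than requested, leaving the final getParent()
    // step to the default computeSourceRootPath implementation.
    static Path trimParents(Path p, int levels) {
        Path out = p;
        for (int i = 0; i < levels - 1 && out.getParent() != null; i++) {
            out = out.getParent();
        }
        return out;
    }
}
```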
[jira] [Updated] (HBASE-18826) Use HStore instead of Store in our own code base and remove unnecessary methods in Store interface
[ https://issues.apache.org/jira/browse/HBASE-18826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-18826: -- Attachment: HBASE-18826-v6.patch Remove Store.triggerMajorCompaction. Users are expected to trigger a compaction at region level. > Use HStore instead of Store in our own code base and remove unnecessary > methods in Store interface > -- > > Key: HBASE-18826 > URL: https://issues.apache.org/jira/browse/HBASE-18826 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18826.patch, HBASE-18826-v1.patch, > HBASE-18826-v1.patch, HBASE-18826-v2.patch, HBASE-18826-v3.patch, > HBASE-18826-v4.patch, HBASE-18826-v5.patch, HBASE-18826-v6.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18839) Apply RegionInfo to code base
[ https://issues.apache.org/jira/browse/HBASE-18839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-18839: --- Status: Patch Available (was: Open) > Apply RegionInfo to code base > - > > Key: HBASE-18839 > URL: https://issues.apache.org/jira/browse/HBASE-18839 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18839.v0.patch, HBASE-18839.v1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18839) Apply RegionInfo to code base
[ https://issues.apache.org/jira/browse/HBASE-18839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-18839: --- Attachment: HBASE-18839.v1.patch v1: rebase > Apply RegionInfo to code base > - > > Key: HBASE-18839 > URL: https://issues.apache.org/jira/browse/HBASE-18839 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18839.v0.patch, HBASE-18839.v1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183617#comment-16183617 ] Josh Elser commented on HBASE-18843: bq. I guess followup is fine. Nah, you're right. It was an unintended de-stabilization. Let's revert for now. This isn't blocking folks if it's *not* in the tree. Doing this now. > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183616#comment-16183616 ] Hadoop QA commented on HBASE-18883: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 14m 29s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 14s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 41m 13s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}141m 12s{color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}213m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 | | JIRA Issue | HBASE-18883 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889340/HBASE-18883.v2.patch | | Optional Tests | asflicense shadedjars javac javadoc unit xml compile | | uname | Linux 6a68f60907cd 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 9751346 | | Default Java | 1.8.0_144 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/8835/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8835/console | | Powered by | Apache Yetus 0.4.0 http://yetus.apache.org | This message was automatically generated. > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch, HBASE-18883.v2.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183614#comment-16183614 ] Appy edited comment on HBASE-18843 at 9/28/17 3:26 AM: --- My main concern is, if Vlad is not around in HBase world after some years, there's no knowing why he did what he did. And it'll be a mess for anyone to figure it out for any refactoring/deletion/etc. was (Author: appy): My main concern is, if Vlad is not around in some years, there's no knowing why he did what he did. And it'll be a mess for anyone to figure it out for any refactoring/deletion/etc. > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183614#comment-16183614 ] Appy commented on HBASE-18843: -- My main concern is, if Vlad is not around in some years, there's no knowing why he did what he did. And it'll be a mess for anyone to figure it out for any refactoring/deletion/etc. > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183613#comment-16183613 ] Appy commented on HBASE-18843: -- I guess a followup is fine. But I have a bigger question: why is this class almost a duplicate of SimpleCopyListing? I see that [~tedyu] also raised that question and the answer was - need to overwrite {{computeSourceRootPath}}. But why? And if so, all you needed was to override the only method which actually calls it - {{public void doBuildListing(SequenceFile.Writer fileListWriter, DistCpContext options) throws IOException}} I spent some time looking around. I see the class is dynamically loaded by setting a conf. When does it come into the picture? Can we do the custom logic before triggering the copy, and inject it somehow? In any case, it's not good that such a blatant copy of code got checked in and the class doesn't have a big fat comment explaining 'Why the need? What was the exact pain point?, etc etc'. > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
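The alternative Appy sketches - override the narrowest method instead of duplicating the whole class - is the classic template-method shape. The following is a generic, hedged illustration of that pattern; the class and method names are hypothetical stand-ins, not the actual DistCp types:

```java
// Sketch of "override only the hook, not the class". CopyListingBase plays
// the role of SimpleCopyListing; buildListing() is the single entry point
// (analogous to doBuildListing) and computeSourceRootPath() is the one
// behavior the backup code needed to change.
class CopyListingBase {
    // Entry point: fixed algorithm, delegates root computation to the hook.
    public final String buildListing(String sourcePath) {
        return "listing:" + computeSourceRootPath(sourcePath);
    }

    // The only extension point subclasses should need.
    protected String computeSourceRootPath(String sourcePath) {
        return sourcePath;
    }
}

class FixedRelativePathListing extends CopyListingBase {
    @Override
    protected String computeSourceRootPath(String sourcePath) {
        // Pin every entry under a fixed root instead of the per-file parent.
        return "/backup-root" + sourcePath;
    }
}

public class OverrideSketch {
    public static void main(String[] args) {
        System.out.println(new CopyListingBase().buildListing("/data/f1"));
        System.out.println(new FixedRelativePathListing().buildListing("/data/f1"));
    }
}
```

With this shape the subclass is a handful of lines, and there is nothing left to drift out of sync with the parent when the parent's listing logic changes.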
[jira] [Created] (HBASE-18897) Substitute MemStore for Memstore
Chia-Ping Tsai created HBASE-18897: -- Summary: Substitute MemStore for Memstore Key: HBASE-18897 URL: https://issues.apache.org/jira/browse/HBASE-18897 Project: HBase Issue Type: Task Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai Memstore/MemStore is our core component, but we have two ways of writing its name. We should unify its name in our code base. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18897) Substitute MemStore for Memstore
[ https://issues.apache.org/jira/browse/HBASE-18897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-18897: --- Fix Version/s: 2.0.0 > Substitute MemStore for Memstore > > > Key: HBASE-18897 > URL: https://issues.apache.org/jira/browse/HBASE-18897 > Project: HBase > Issue Type: Task >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0 > > > Memstore/MemStore is our core component, but we have two ways of writing its > name. We should unify its name in our code base. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-16290) Dump summary of callQueue content; can help debugging
[ https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183589#comment-16183589 ] Chia-Ping Tsai commented on HBASE-16290: {code} +if(null != rpcCall.getMethod()) { + method = rpcCall.getMethod().getName(); +} else { + method = ""; +} {code} What about naming it "Unknown"? The {{""}} is ambiguous. {code} + @Override + public CallQueueInfo getCallQueueInfo() { +String queueName = "Fifo Queue"; +CallQueueInfo callQueueInfo = new CallQueueInfo(); + +HashMap<String, Long> callQueueMethodTotalCount = new HashMap<>(); +HashMap<String, Long> callQueueMethodTotalSize = new HashMap<>(); + +callQueueMethodTotalCount.put("", queueSize.longValue()); +callQueueMethodTotalSize.put("", 0L); + +callQueueInfo.setCallMethodCount(queueName, callQueueMethodTotalCount); +callQueueInfo.setCallMethodSize(queueName, callQueueMethodTotalSize); + +return callQueueInfo; + } {code} Can we have FifoRpcScheduler provide more information about the CallRunners it is holding? What about wrapping the {{CallRunner}} as {{Runnable + getCallRunner()}}? We can call {{ThreadPoolExecutor#getQueue}} to list the held {{Runnable + getCallRunner()}} wrappers. BTW, could you put your patch on review board? see [reviewboard|http://hbase.apache.org/book.html#reviewboard] > Dump summary of callQueue content; can help debugging > - > > Key: HBASE-16290 > URL: https://issues.apache.org/jira/browse/HBASE-16290 > Project: HBase > Issue Type: Bug > Components: Operability >Affects Versions: 2.0.0 >Reporter: stack >Assignee: Sreeram Venkatasubramanian > Labels: beginner > Fix For: 2.0.0 > > Attachments: DebugDump_screenshot.png, HBASE-16290.master.001.patch, > HBASE-16290.master.002.patch, HBASE-16290.master.003.patch, > HBASE-16290.master.004.patch, HBASE-16290.master.005.patch, Sample Summary.txt > > > Being able to get a clue what is in a backedup callQueue could give insight > on what is going on on a jacked server. Just needs to summarize count, sizes, > call types. Useful debugging. 
In a servlet? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
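The wrapper Chia-Ping suggests can be sketched with plain JDK types. Below, {{Call}} and {{CallTask}} are hypothetical stand-ins for HBase's CallRunner and its wrapper - a sketch of the idea, not the actual scheduler code:

```java
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: enqueue a Runnable that still exposes the call it carries, so a
// scheduler can walk its work queue (ThreadPoolExecutor#getQueue returns the
// same kind of BlockingQueue) and summarize pending calls without dequeueing.
public class QueueDumpSketch {
    interface Call { String method(); }

    static final class CallTask implements Runnable {
        private final Call call;
        CallTask(Call call) { this.call = call; }
        Call getCall() { return call; }   // analogous to the proposed getCallRunner()
        @Override public void run() { /* dispatch the call in the real scheduler */ }
    }

    public static void main(String[] args) {
        LinkedBlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();
        workQueue.add(new CallTask(() -> "Get"));
        workQueue.add(new CallTask(() -> "Multi"));
        // Iterating a BlockingQueue is a snapshot-style read; nothing is removed.
        for (Runnable r : workQueue) {
            if (r instanceof CallTask) {
                System.out.println(((CallTask) r).getCall().method());
            }
        }
    }
}
```

The point of the wrapper is that the executor only sees {{Runnable}}, while the dump code can downcast to recover the call metadata (method name, request size) for the summary.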
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183582#comment-16183582 ] Josh Elser commented on HBASE-18843: bq. The one which was supposed to test building with hadoop3 Hah! I thought it was curious that we didn't catch this :) [~appy] you think we should revert temporarily to get this fixed instead of handling as a follow-on fix? (was my original thought -- see the linked issue) > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183576#comment-16183576 ] Appy commented on HBASE-18843: -- Not your fault [~vrodionov], our precommit is wrong. The one which was supposed to test building with hadoop3 (https://builds.apache.org/job/PreCommit-HBASE-Build/8813/artifact/patchprocess/patch-javac-3.0.0-alpha4.txt/*view*/) was still downloading 2.7.1 jars. > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Reopened] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy reopened HBASE-18843: -- > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183574#comment-16183574 ] Appy commented on HBASE-18843: -- Fails to build with hadoop3. {noformat} $ mvn clean install -DskipTests -Dhadoop.profile=3.0 ... ... - [ERROR] COMPILATION ERROR : - [ERROR] /Users/appy/apache/hbase/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/FixedRelativePathCopyListing.java:[76,3] method does not override or implement a method from a supertype [ERROR] /Users/appy/apache/hbase/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/FixedRelativePathCopyListing.java:[98,3] method does not override or implement a method from a supertype [ERROR] /Users/appy/apache/hbase/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/FixedRelativePathCopyListing.java:[107,50] cannot find symbol symbol: method shouldPreserveRawXattrs() location: variable options of type org.apache.hadoop.tools.DistCpOptions [ERROR] /Users/appy/apache/hbase/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/FixedRelativePathCopyListing.java:[117,24] method toCopyListingFileStatus in class org.apache.hadoop.tools.util.DistCpUtils cannot be applied to given types; required: org.apache.hadoop.fs.FileSystem,org.apache.hadoop.fs.FileStatus,boolean,boolean,boolean,int found: org.apache.hadoop.fs.FileSystem,org.apache.hadoop.fs.FileStatus,boolean,boolean,boolean reason: actual and formal argument lists differ in length [ERROR] /Users/appy/apache/hbase/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/FixedRelativePathCopyListing.java:[128,26] method toCopyListingFileStatus in class org.apache.hadoop.tools.util.DistCpUtils cannot be applied to given types; required: org.apache.hadoop.fs.FileSystem,org.apache.hadoop.fs.FileStatus,boolean,boolean,boolean,int found: org.apache.hadoop.fs.FileSystem,org.apache.hadoop.fs.FileStatus,boolean,boolean,boolean reason: actual and formal argument lists differ 
in length [ERROR] /Users/appy/apache/hbase/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/FixedRelativePathCopyListing.java:[170,3] method does not override or implement a method from a supertype [ERROR] /Users/appy/apache/hbase/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/FixedRelativePathCopyListing.java:[222,46] cannot find symbol symbol: method shouldPreserveRawXattrs() location: variable options of type org.apache.hadoop.tools.DistCpOptions [ERROR] /Users/appy/apache/hbase/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/FixedRelativePathCopyListing.java:[232,22] method toCopyListingFileStatus in class org.apache.hadoop.tools.util.DistCpUtils cannot be applied to given types; required: org.apache.hadoop.fs.FileSystem,org.apache.hadoop.fs.FileStatus,boolean,boolean,boolean,int found: org.apache.hadoop.fs.FileSystem,org.apache.hadoop.fs.FileStatus,boolean,boolean,boolean reason: actual and formal argument lists differ in length [ERROR] /Users/appy/apache/hbase/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/FixedRelativePathCopyListing.java:[273,25] incompatible types: org.apache.hadoop.tools.CopyListingFileStatus cannot be converted to org.apache.hadoop.fs.FileStatus 9 errors {noformat} > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-18896) Work around incompatible change to DistCpOptions for hadoop-3
Josh Elser created HBASE-18896: -- Summary: Work around incompatible change to DistCpOptions for hadoop-3 Key: HBASE-18896 URL: https://issues.apache.org/jira/browse/HBASE-18896 Project: HBase Issue Type: Bug Reporter: Josh Elser Assignee: Vladimir Rodionov Priority: Blocker Fix For: 2.0.0-alpha-4 HADOOP-14267 changed methods on DistCpOptions and introduced a new Builder class which doesn't exist in any previous release. We'll have to shim/reflect around this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183568#comment-16183568 ] Josh Elser commented on HBASE-18843: Looks like Hadoop made a non-backwards compatible change affecting DistCpOptions in 3.0.0-alpha4 in HADOOP-14267. It appears that we'll have to work around this one with reflection (goody..) > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
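The reflection workaround Josh mentions usually starts with a runtime probe: check whether the Hadoop-3 {{DistCpOptions$Builder}} class exists and branch accordingly. The Hadoop class name below is real, but this is only a sketch of the probing approach, not the committed fix:

```java
// Probe-based shim sketch: if the Builder introduced by HADOOP-14267 is on
// the classpath we are on Hadoop 3 and must build DistCpOptions through it
// (reflectively); otherwise we can call the Hadoop 2.x constructor directly.
public class ApiProbe {
    static boolean classExists(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Prints false unless Hadoop is actually on the classpath; that is the
        // branch where a shim would fall back to the 2.x constructor.
        System.out.println(classExists("org.apache.hadoop.tools.DistCpOptions$Builder"));
        // A class that certainly exists, to show the positive branch.
        System.out.println(classExists("java.lang.StringBuilder"));
    }
}
```

Once the probe answers, the rest of the shim is mechanical: look up the Builder's constructor and {{build()}} via reflection on Hadoop 3, or the plain {{DistCpOptions(List, Path)}} constructor on Hadoop 2, and keep all Hadoop-version-specific symbols behind strings so both branches compile against either line.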
[jira] [Updated] (HBASE-18839) Apply RegionInfo to code base
[ https://issues.apache.org/jira/browse/HBASE-18839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-18839: --- Status: Open (was: Patch Available) > Apply RegionInfo to code base > - > > Key: HBASE-18839 > URL: https://issues.apache.org/jira/browse/HBASE-18839 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18839.v0.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18391) List the stuffs which are using the patent grant license (PATENTS file) of Facebook; And then discuss and remove them.
[ https://issues.apache.org/jira/browse/HBASE-18391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183561#comment-16183561 ] Chia-Ping Tsai commented on HBASE-18391: bq. sounds like this is all taken care of now? +1 to close this and update the release note. > List the stuffs which are using the patent grant license (PATENTS file) of > Facebook; And then discuss and remove them. > -- > > Key: HBASE-18391 > URL: https://issues.apache.org/jira/browse/HBASE-18391 > Project: HBase > Issue Type: Task > Components: community, dependencies >Reporter: Chia-Ping Tsai >Priority: Blocker > Labels: incompatible > Fix For: 2.0.0-beta-1 > > > See ["Apache Foundation disallows use of the Facebook “BSD+Patent” > license"|https://news.ycombinator.com/item?id=14779881] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18887) Full backup passed on hdfs root but incremental failed. Not able to clean full backup
[ https://issues.apache.org/jira/browse/HBASE-18887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183559#comment-16183559 ] Ted Yu commented on HBASE-18887: Vishal: Can you verify that when subdir under hdfs root is used, the above problem doesn't exist ? Thanks > Full backup passed on hdfs root but incremental failed. Not able to clean > full backup > - > > Key: HBASE-18887 > URL: https://issues.apache.org/jira/browse/HBASE-18887 > Project: HBase > Issue Type: Bug >Reporter: Vishal Khandelwal >Assignee: Vladimir Rodionov > Labels: backup > Attachments: HBASE-18887-v1.patch > > > >> > ./bin/hbase backup create full hdfs://localhost:8020/ -t test1 > 2017-09-27 10:19:38,885 INFO [main] impl.BackupManifest: Manifest file > stored to hdfs://localhost:8020/backup_1506487766386/.backup.manifest > 2017-09-27 10:19:38,937 INFO [main] impl.TableBackupClient: Backup > backup_1506487766386 completed. > Backup session backup_1506487766386 finished. Status: SUCCESS > >> > 2017-09-27 10:20:48,211 INFO [main] mapreduce.JobSubmitter: Cleaning up the > staging area > /tmp/hadoop-yarn/staging/vkhandelwal/.staging/job_1506419443344_0045 > 2017-09-27 10:20:48,215 ERROR [main] impl.TableBackupClient: Unexpected > exception in incremental-backup: incremental copy backup_1506487845361Can not > convert from directory (check Hadoop, HBase and WALPlayer M/R job logs) > java.io.IOException: Can not convert from directory (check Hadoop, HBase and > WALPlayer M/R job logs) > at > org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.walToHFiles(IncrementalTableBackupClient.java:363) > at > {code} ./bin/hbase backup create full hdfs://localhost:8020/ -t test1 {code} > 2017-09-27 10:19:38,885 INFO [main] impl.BackupManifest: Manifest file > stored to hdfs://localhost:8020/backup_1506487766386/.backup.manifest > 2017-09-27 10:19:38,937 INFO [main] impl.TableBackupClient: Backup > backup_1506487766386 completed. > Backup session backup_1506487766386 finished. 
Status: SUCCESS > {code} ./bin/hbase backup create incremental hdfs://localhost:8020/ -t test1 > {code} > 2017-09-27 10:20:48,215 ERROR [main] impl.TableBackupClient: Unexpected > exception in incremental-backup: incremental copy backup_1506487845361Can not > convert from directory (check Hadoop, HBase and WALPlayer M/R job logs) > java.io.IOException: Can not convert from directory (check Hadoop, HBase and > WALPlayer M/R job logs) > at > org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.walToHFiles(IncrementalTableBackupClient.java:363) > at > org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.convertWALsToHFiles(IncrementalTableBackupClient.java:322) > at > org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:232) > at > org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:601) > at > org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336) > at > org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137) > at > org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170) > at > org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178) > Caused by: java.lang.IllegalArgumentException: Can not create a Path from an > empty string > at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126) > at org.apache.hadoop.fs.Path.<init>(Path.java:134) > at org.apache.hadoop.util.StringUtils.stringToPath(StringUtils.java:245) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat.getInputPaths(WALInputFormat.java:301) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:274) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:264) > at > 
org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at
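The trace bottoms out in Hadoop's {{new Path("")}} check. One plausible way a backup root of {{hdfs://localhost:8020/}} produces that empty string is relativizing a path against the filesystem root itself; the JDK-only sketch below shows that shape (the real code path goes through WALInputFormat and StringUtils.stringToPath, which this does not reproduce):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustration of the failure shape: relativizing the root against itself
// yields an empty path string, and Hadoop's Path constructor rejects ""
// with "Can not create a Path from an empty string".
public class EmptyRelativePath {
    public static void main(String[] args) {
        Path root = Paths.get("/");
        Path walDir = Paths.get("/");   // backup root == filesystem root
        String rel = root.relativize(walDir).toString();
        System.out.println(rel.isEmpty());
    }
}
```

That is consistent with Ted's question above: with a subdirectory (e.g. {{hdfs://localhost:8020/backups}}) as the backup root, the relative portion is non-empty and the same code path should not hit the empty-string check.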
[jira] [Commented] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183542#comment-16183542 ] Shaofeng SHI commented on HBASE-18885: -- Thanks Ted for merging it! > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI >Assignee: Shaofeng SHI > Fix For: 1.4.0, 1.5.0, 2.0.0-alpha-4 > > Attachments: HBASE-18885.branch-1.001.patch, > HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original report is in KYLIN-2788[1]. After some investigation, we > found this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration, so it always writes to the "_temporary" > folder. Since AWS EMR is configured to use DirectOutputCommitter for S3, > this problem occurs: Hadoop expects to see the files directly under the output > path, while the RecordWriter generates them in the "_temporary" folder. This > caused no data to be loaded into the HTable. > This problem seems to exist in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18884) Coprocessor Design Improvements follow up of HBASE-17732
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-18884: - Status: Patch Available (was: In Progress) > Coprocessor Design Improvements follow up of HBASE-17732 > > > Key: HBASE-18884 > URL: https://issues.apache.org/jira/browse/HBASE-18884 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18884.master.001.patch > > > Creating new jira to track suggestions that came in review > (https://reviews.apache.org/r/62141/) but are not blocker and can be done > separately. > Suggestions by [~apurtell] > - Change {{Service Coprocessor#getService()}} to {{List > Coprocessor#getServices()}} > - I think we overstepped by offering [table resource management via this > interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. > There are a lot of other internal resource types which could/should be > managed this way but they are all left up to the implementor. Perhaps we > should remove the table ref management and leave it up to them as well. > > - Checkin the finalized design doc into repo > (https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit) > (fyi: [~stack]) > - Added example to javadoc of Coprocessor base interface on how to implement > one in the new design -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Work started] (HBASE-18884) Coprocessor Design Improvements follow up of HBASE-17732
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-18884 started by Appy. > Coprocessor Design Improvements follow up of HBASE-17732 > > > Key: HBASE-18884 > URL: https://issues.apache.org/jira/browse/HBASE-18884 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18884.master.001.patch > > > Creating new jira to track suggestions that came in review > (https://reviews.apache.org/r/62141/) but are not blocker and can be done > separately. > Suggestions by [~apurtell] > - Change {{Service Coprocessor#getService()}} to {{List > Coprocessor#getServices()}} > - I think we overstepped by offering [table resource management via this > interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. > There are a lot of other internal resource types which could/should be > managed this way but they are all left up to the implementor. Perhaps we > should remove the table ref management and leave it up to them as well. > > - Checkin the finalized design doc into repo > (https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit) > (fyi: [~stack]) > - Added example to javadoc of Coprocessor base interface on how to implement > one in the new design -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18884) Coprocessor Design Improvements follow up of HBASE-17732
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183541#comment-16183541 ] Appy commented on HBASE-18884: -- Uploaded patch addressing 3 of 4 tasks. [~apurtell] About the table resource management, it's for private use only since CoprocessorEnvironment is marked IS.private. Do you think we still need to remove it? > Coprocessor Design Improvements follow up of HBASE-17732 > > > Key: HBASE-18884 > URL: https://issues.apache.org/jira/browse/HBASE-18884 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18884.master.001.patch > > > Creating new jira to track suggestions that came in review > (https://reviews.apache.org/r/62141/) but are not blocker and can be done > separately. > Suggestions by [~apurtell] > - Change {{Service Coprocessor#getService()}} to {{List > Coprocessor#getServices()}} > - I think we overstepped by offering [table resource management via this > interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. > There are a lot of other internal resource types which could/should be > managed this way but they are all left up to the implementor. Perhaps we > should remove the table ref management and leave it up to them as well. > > - Checkin the finalized design doc into repo > (https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit) > (fyi: [~stack]) > - Added example to javadoc of Coprocessor base interface on how to implement > one in the new design -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (HBASE-18712) Specify -X for precommit unit tests
[ https://issues.apache.org/jira/browse/HBASE-18712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HBASE-18712. Resolution: Won't Fix > Specify -X for precommit unit tests > --- > > Key: HBASE-18712 > URL: https://issues.apache.org/jira/browse/HBASE-18712 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 18712.v1.txt, 18712.v2.txt, 18712.v3.txt > > > Add -X in dev-support/hbase-personality.sh for precommit unit tests so that > we have more information when "The forked VM terminated without saying > properly goodbye" happens again. > The following (initial proposal) doesn't apply to jdk 1.8 and has limited > benefit: > Currently hbase-surefire.argLine doesn't specify MaxPermSize for the test > run(s). > This sometimes resulted in mvn build prematurely exiting, leaving some large > tests behind. > The tests would be deemed timed out. > As indicated by the following post: > https://stackoverflow.com/questions/23260057/the-forked-vm-terminated-without-saying-properly-goodbye-vm-crash-or-system-exi > We should specify large enough MaxPermSize so that mvn build doesn't end > prematurely. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
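The two knobs discussed in HBASE-18712 can be illustrated as below. This is a hedged sketch, not the patch itself: the property name {{surefire.argLine}} stands in for HBase's actual {{hbase-surefire.argLine}} wiring in dev-support/hbase-personality.sh, and the MaxPermSize part only applies to JDK 7 and earlier (PermGen was replaced by Metaspace in JDK 8, which is why the initial proposal was dropped).

```shell
# Run the unit tests with Maven debug output (-X) so that a forked-VM crash
# ("The forked VM terminated without saying properly goodbye") leaves more
# diagnostics behind in the precommit logs.
mvn -X test

# Initial (abandoned) proposal: enlarge PermGen for the forked test JVMs.
# Only meaningful on JDK <= 7; ignored or rejected on JDK 8+.
mvn test -Dsurefire.argLine="-Xmx2g -XX:MaxPermSize=256m"
```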
[jira] [Updated] (HBASE-18884) Coprocessor Design Improvements follow up of HBASE-17732
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-18884: - Attachment: HBASE-18884.master.001.patch
[jira] [Updated] (HBASE-13844) Move static helper methods from KeyValue into CellUtils
[ https://issues.apache.org/jira/browse/HBASE-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Yang updated HBASE-13844: -- Status: Patch Available (was: Open) > Move static helper methods from KeyValue into CellUtils > --- > > Key: HBASE-13844 > URL: https://issues.apache.org/jira/browse/HBASE-13844 > Project: HBase > Issue Type: Improvement >Reporter: Lars George >Assignee: Andy Yang >Priority: Minor > Labels: beginner > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-13844.1.patch, HBASE-13844.2.patch, > HBASE-13844.3.patch, HBASE-13844.branch-2.v0.patch, > HBASE-13844.branch-2.v1.patch, HBASE-13844.branch-2.v2.patch, > HBASE-13844.branch-2.v3.patch, HBASE-13844.branch-2.v4.patch, > HBASE-13844.branch-2.v4.patch, HBASE-13844.branch-2.v5.patch > > > Add KeyValue.parseColumn() to CellUtils (also any other public static helper) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18884) Coprocessor Design Improvements follow up of HBASE-17732
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-18884: - Attachment: (was: HBASE-18884.master.001.patch)
[jira] [Updated] (HBASE-13844) Move static helper methods from KeyValue into CellUtils
[ https://issues.apache.org/jira/browse/HBASE-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Yang updated HBASE-13844: -- Status: Open (was: Patch Available)
[jira] [Updated] (HBASE-18884) Coprocessor Design Improvements follow up of HBASE-17732
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-18884: - Attachment: HBASE-18884.master.001.patch
[jira] [Updated] (HBASE-18884) Coprocessor Design Improvements follow up of HBASE-17732
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-18884: - Summary: Coprocessor Design Improvements follow up of HBASE-17732 (was: Coprocessor Design Improvements 2 (Follow up of HBASE-17732))
[jira] [Updated] (HBASE-18887) Full backup passed on hdfs root but incremental failed. Not able to clean full backup
[ https://issues.apache.org/jira/browse/HBASE-18887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-18887: -- Attachment: HBASE-18887-v1.patch Patch v1 disables backup into DFS root > Full backup passed on hdfs root but incremental failed. Not able to clean > full backup > - > > Key: HBASE-18887 > URL: https://issues.apache.org/jira/browse/HBASE-18887 > Project: HBase > Issue Type: Bug >Reporter: Vishal Khandelwal > Labels: backup > Attachments: HBASE-18887-v1.patch > > > {code} ./bin/hbase backup create full hdfs://localhost:8020/ -t test1 {code} > 2017-09-27 10:19:38,885 INFO [main] impl.BackupManifest: Manifest file > stored to hdfs://localhost:8020/backup_1506487766386/.backup.manifest > 2017-09-27 10:19:38,937 INFO [main] impl.TableBackupClient: Backup > backup_1506487766386 completed. > Backup session backup_1506487766386 finished. Status: SUCCESS > {code} ./bin/hbase backup create incremental hdfs://localhost:8020/ -t test1 {code} > 2017-09-27 10:20:48,211 INFO [main] mapreduce.JobSubmitter: Cleaning up the > staging area > /tmp/hadoop-yarn/staging/vkhandelwal/.staging/job_1506419443344_0045 > 2017-09-27 10:20:48,215 ERROR [main] impl.TableBackupClient: Unexpected > exception in incremental-backup: incremental copy backup_1506487845361 Can not > convert from directory (check Hadoop, HBase and WALPlayer M/R job logs) > java.io.IOException: Can not convert from directory (check Hadoop, HBase and > WALPlayer M/R job logs) > at > org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.walToHFiles(IncrementalTableBackupClient.java:363) > at > org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.convertWALsToHFiles(IncrementalTableBackupClient.java:322) > at > org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:232) > at > org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:601) > at > org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336) > at > org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137) > at > org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170) > at > org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178) > Caused by: java.lang.IllegalArgumentException: Can not create a Path from an > empty string > at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126) > at org.apache.hadoop.fs.Path.<init>(Path.java:134) > at org.apache.hadoop.util.StringUtils.stringToPath(StringUtils.java:245) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat.getInputPaths(WALInputFormat.java:301) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:274) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:264) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) > at
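The root cause above — "Can not create a Path from an empty string" — is what patch v1 guards against: when the DFS root itself is the backup destination, relativizing paths under it yields an empty string, which Hadoop's Path constructor rejects. A minimal plain-Java sketch of such a guard follows; the method name {{isFsRoot}} is hypothetical and this is not the patch's actual code.

```java
import java.net.URI;

public class BackupRootGuard {

    // Hedged sketch of the check the patch introduces: reject a backup
    // destination whose path component is the filesystem root (or empty),
    // since relativizing child paths against it produces "", and
    // new org.apache.hadoop.fs.Path("") throws IllegalArgumentException.
    static boolean isFsRoot(String backupRootDir) {
        String path = URI.create(backupRootDir).getPath();
        return path.isEmpty() || path.equals("/");
    }

    public static void main(String[] args) {
        // DFS root as destination: should be rejected up front.
        System.out.println(isFsRoot("hdfs://localhost:8020/"));
        // A dedicated backup directory: fine.
        System.out.println(isFsRoot("hdfs://localhost:8020/backups"));
    }
}
```

Failing fast at backup-create time gives the user a clear error instead of the WALPlayer M/R job dying deep inside split computation.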
[jira] [Assigned] (HBASE-18887) Full backup passed on hdfs root but incremental failed. Not able to clean full backup
[ https://issues.apache.org/jira/browse/HBASE-18887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov reassigned HBASE-18887: - Assignee: Vladimir Rodionov
[jira] [Updated] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-17732: -- Hadoop Flags: Incompatible change > Coprocessor Design Improvements > --- > > Key: HBASE-17732 > URL: https://issues.apache.org/jira/browse/HBASE-17732 > Project: HBase > Issue Type: Improvement > Components: Coprocessors >Reporter: Appy >Assignee: Appy >Priority: Critical > Labels: incompatible > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-17732.master.001.patch, > HBASE-17732.master.002.patch, HBASE-17732.master.003.patch, > HBASE-17732.master.004.patch, HBASE-17732.master.005.patch, > HBASE-17732.master.006.patch, HBASE-17732.master.007.patch, > HBASE-17732.master.008.patch, HBASE-17732.master.009.patch, > HBASE-17732.master.010.patch, HBASE-17732.master.011.patch, > HBASE-17732.master.012.patch, HBASE-17732.master.013.patch, > HBASE-17732.master.014.patch > > > The two main changes are: > * *Adding a template for the coprocessor type to CoprocessorEnvironment i.e. > {{interface CoprocessorEnvironment<C extends Coprocessor>}}* > ** Enables us to load only relevant coprocessors in hosts. Right now each > type of host loads all types of coprocs and it's only during execOperation > that it checks whether the coproc is of the correct type i.e. XCoprocessorHost will > load XObserver, YObserver, and all others, and will check in execOperation whether > {{coproc instanceOf XObserver}} and ignore the rest. > ** Allows sharing a bunch of functions/classes which are currently > duplicated in each host. For eg. CoprocessorOperations, > CoprocessorOperationWithResult, execOperations(). > * *Introduce 4 coprocessor classes and use composition between these new > classes and the old observers* > ** The real gold here is that, moving forward, we'll be able to break down giant > everything-in-one observers (MasterObserver has 100+ functions) into smaller, > more focused observers. These smaller observers can then have different compat > guarantees! 
> Here's a more detailed design doc: > https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029)
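The templating idea in the first bullet can be sketched in a few lines. This is a hedged, simplified illustration of the pattern, not HBase's actual host code: with the environment parameterized on its coprocessor type, a host can reject coprocessors of the wrong kind once at load time, instead of instanceof-filtering on every execOperation call.

```java
public class CoprocessorTemplateSketch {

    interface Coprocessor { }
    interface RegionCoprocessor extends Coprocessor { }

    // Environment parameterized on the coprocessor type it hosts
    // (simplified stand-in for CoprocessorEnvironment<C extends Coprocessor>).
    interface CoprocessorEnvironment<C extends Coprocessor> {
        C getInstance();
    }

    // Hypothetical load-time check: a host for type C only accepts Cs.
    static <C extends Coprocessor> C checkAndLoad(Class<C> type, Coprocessor impl) {
        if (!type.isInstance(impl)) {
            throw new IllegalArgumentException(
                impl.getClass().getName() + " is not a " + type.getSimpleName());
        }
        return type.cast(impl);
    }

    public static void main(String[] args) {
        RegionCoprocessor rc = new RegionCoprocessor() { };
        // Accepted at load time; no per-operation instanceof filtering needed later.
        System.out.println(checkAndLoad(RegionCoprocessor.class, rc) == rc);
    }
}
```

Once the wrong-typed coprocessors are rejected up front, every coprocessor a host holds is known to be of its own type, which is what lets the duplicated execOperation plumbing be shared.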
[jira] [Commented] (HBASE-18894) null pointer exception in list_regions in shell command
[ https://issues.apache.org/jira/browse/HBASE-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183490#comment-16183490 ] Mike Drob commented on HBASE-18894: --- Can you take a look at the new rubocop/ruby-lint warnings? > null pointer exception in list_regions in shell command > --- > > Key: HBASE-18894 > URL: https://issues.apache.org/jira/browse/HBASE-18894 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-alpha-3 >Reporter: Yi Liang >Assignee: Yi Liang > Fix For: 2.0.0 > > Attachments: HBASE-18894-v1-master.patch > > > This error appears when running the list_regions shell command after disabling 't1', > or after running split 't1' before the split completes; it is caused by the > region being disabled or still in transition. > {quote} > list_regions 't1' > ERROR: undefined method `getDataLocality' for nil:NilClass > {quote} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
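The "undefined method `getDataLocality' for nil:NilClass" error above is Ruby's equivalent of a null pointer: for a disabled or in-transition region there is no server-side load object, so the shell calls getDataLocality on nil. A hedged sketch of the guard pattern such a fix would use (the helper name and FakeLoad stand-in are hypothetical, not the shell's actual code):

```ruby
# Guard against regions with no load info (disabled or in transition),
# where the load object is nil and any method call on it raises
# NoMethodError instead of returning a locality value.
def data_locality(region_load)
  region_load.nil? ? 0.0 : region_load.getDataLocality
end

# Stand-in for the Java RegionLoad object the shell normally receives.
FakeLoad = Struct.new(:locality) do
  def getDataLocality
    locality
  end
end

puts data_locality(nil)               # region in transition: falls back to 0.0
puts data_locality(FakeLoad.new(0.9)) # assigned region: real locality
```

With the nil case handled, list_regions can print a placeholder row for in-transition regions instead of aborting the whole listing.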
[jira] [Commented] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183482#comment-16183482 ] Hudson commented on HBASE-17732: FAILURE: Integrated in Jenkins build HBase-2.0 #588 (See [https://builds.apache.org/job/HBase-2.0/588/]) HBASE-17732 Coprocessor Design Improvements (appy: rev 0c883a23c57e16b212c69193edd8e5d01306b823) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckMOB.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/WALObserver.java * (edit) hbase-endpoint/src/test/java/org/apache/hadoop/hbase/coprocessor/TestAsyncCoprocessorEndpoint.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java * (add) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterCoprocessor.java * (edit) hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/BulkDeleteEndpoint.java * (edit) hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ExampleMasterObserverWithMetrics.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckReplicas.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerAbort.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController3.java * (edit) hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ExampleRegionObserverWithMetrics.java * (edit) hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/ErrorThrowingGetObserver.java * (edit) hbase-endpoint/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorEndpoint.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/util/BaseTestHBaseFsck.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java * (edit) hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java * (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithAbort.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpointNoMaster.java * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/NoOpScanPolicyObserver.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestResultFromCoprocessor.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableWrapper.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/namespace/TestNamespaceAuditor.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCoprocessorWhitelistMasterObserver.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/CoprocessorWhitelistMasterObserver.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityController.java * (add) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionCoprocessor.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncAdminBuilder.java * (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorMetrics.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionLocatorTimeout.java * (add) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/WALCoprocessor.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestScanEarlyTermination.java * (add) hbase-endpoint/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorServiceBackwardCompatibility.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/ConstraintProcessor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MultiRowMutationEndpoint.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorStop.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/SingletonCoprocessorService.java * (edit)
[jira] [Commented] (HBASE-18601) Update Htrace to 4.2
[ https://issues.apache.org/jira/browse/HBASE-18601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183477#comment-16183477 ] Mike Drob commented on HBASE-18601: --- +1 pending QA. Thanks for this huge effort, [~tamaas]! > Update Htrace to 4.2 > > > Key: HBASE-18601 > URL: https://issues.apache.org/jira/browse/HBASE-18601 > Project: HBase > Issue Type: Task >Affects Versions: 2.0.0, 3.0.0 >Reporter: Tamas Penzes >Assignee: Tamas Penzes > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18601.master.001.patch, > HBASE-18601.master.002.patch, HBASE-18601.master.003 (3).patch, > HBASE-18601.master.003.patch, HBASE-18601.master.004.patch, > HBASE-18601.master.004.patch, HBASE-18601.master.005.patch > > > HTrace is not perfectly integrated into HBase: version 3.2.0 is buggy, > and the upgrade to 4.x is not trivial and would take time. It might not be worth > keeping it in this state, so it would be better to remove it. > Of course this doesn't mean tracing would be useless, just that in this form > the use of HTrace 3.2 might not add any value to the project, and fixing it > would be far too much effort. > - > Based on the decision of the community, we keep HTrace for now and update the version. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18601) Update Htrace to 4.2
[ https://issues.apache.org/jira/browse/HBASE-18601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183475#comment-16183475 ] Tamas Penzes commented on HBASE-18601: -- hi [~mdrob], please check my newest patch.
[jira] [Updated] (HBASE-18601) Update Htrace to 4.2
[ https://issues.apache.org/jira/browse/HBASE-18601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Penzes updated HBASE-18601: - Status: Patch Available (was: In Progress)
[jira] [Updated] (HBASE-18601) Update Htrace to 4.2
[ https://issues.apache.org/jira/browse/HBASE-18601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Penzes updated HBASE-18601: - Attachment: HBASE-18601.master.005.patch
[jira] [Commented] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183473#comment-16183473 ] Hudson commented on HBASE-17732: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3789 (See [https://builds.apache.org/job/HBase-Trunk_matrix/3789/]) HBASE-17732 Coprocessor Design Improvements (appy: rev 97513466c05f5eaadb94425c98098063ac374098) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java * (edit) hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java * (edit) hbase-endpoint/src/test/java/org/apache/hadoop/hbase/coprocessor/ColumnAggregationEndpointNullResponse.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpointNoMaster.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverForAddingMutationsFromCoprocessors.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckMOB.java * (add) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorConfiguration.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerRetriableFailure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java * (edit) hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/BulkDeleteEndpoint.java * (edit) hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithTTLs.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCoprocessorHost.java * (delete) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SampleRegionWALObserver.java * (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncAdminBuilder.java * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSettingTimeoutOnBlockingPoint.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java * (add) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SampleRegionWALCoprocessor.java * (edit) hbase-endpoint/src/main/java/org/apache/hadoop/hbase/coprocessor/AggregateImplementation.java * (add) hbase-endpoint/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorServiceBackwardCompatibility.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/ConstraintProcessor.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithAbort.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsReplication.java * (edit) hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ExampleMasterObserverWithMetrics.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/mob/compactions/TestMobCompactor.java * (add) hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityReplication.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMobCloneSnapshotFromClient.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionLocatorTimeout.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestResultFromCoprocessor.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/AbstractTestFSWAL.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestWithDisabledAuthorization.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java * (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientOperationInterrupt.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionServerObserver.java * (edit) hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterSpaceQuotaObserver.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestBlockEvictionFromClient.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestServerBusyException.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/CoprocessorWhitelistMasterObserver.java * (edit)
[jira] [Commented] (HBASE-18891) Upgrade netty-all jar
[ https://issues.apache.org/jira/browse/HBASE-18891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183467#comment-16183467 ] Hadoop QA commented on HBASE-18891: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 8s{color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 49s{color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 53s{color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 59s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 54s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_151 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 17s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 18m 53s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
[jira] [Commented] (HBASE-18894) null pointer exception in list_regions in shell command
[ https://issues.apache.org/jira/browse/HBASE-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183460#comment-16183460 ] Hadoop QA commented on HBASE-18894: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 28s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} rubocop {color} | {color:red} 0m 5s{color} | {color:red} The patch generated 9 new + 27 unchanged - 7 fixed = 36 total (was 34) {color} | | {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red} 0m 2s{color} | {color:red} The patch generated 1 new + 15 unchanged - 0 fixed = 16 total (was 15) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 29s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 33m 12s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s{color} | {color:green} hbase-shell in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 | | JIRA Issue | HBASE-18894 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889386/HBASE-18894-v1-master.patch | | Optional Tests | asflicense shadedjars javac javadoc unit rubocop ruby_lint | | uname | Linux 94ff60fd0bfa 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 9751346 | | Default Java | 1.8.0_144 | | rubocop | v0.50.0 | | rubocop | https://builds.apache.org/job/PreCommit-HBASE-Build/8834/artifact/patchprocess/diff-patch-rubocop.txt | | ruby-lint | v2.3.1 | | ruby-lint | https://builds.apache.org/job/PreCommit-HBASE-Build/8834/artifact/patchprocess/diff-patch-ruby-lint.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/8834/testReport/ | | modules | C: hbase-shell U: hbase-shell | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8834/console | | Powered by | Apache Yetus 0.4.0 http://yetus.apache.org | This message was automatically generated. > null pointer exception in list_regions in shell command > --- > > Key: HBASE-18894 > URL: https://issues.apache.org/jira/browse/HBASE-18894 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-alpha-3 >Reporter: Yi Liang >Assignee: Yi Liang > Fix For: 2.0.0 > > Attachments:
[jira] [Created] (HBASE-18895) Implement changes eliminated during HTrace update
Tamas Penzes created HBASE-18895: Summary: Implement changes eliminated during HTrace update Key: HBASE-18895 URL: https://issues.apache.org/jira/browse/HBASE-18895 Project: HBase Issue Type: Improvement Affects Versions: 2.0.0-alpha-3 Reporter: Tamas Penzes Priority: Minor HTrace 4 is not fully compatible with HTrace 3. Some functionality changed substantially and could not be migrated directly; this ticket covers handling or removing those pieces. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18894) null pointer exception in list_regions in shell command
[ https://issues.apache.org/jira/browse/HBASE-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183422#comment-16183422 ] Ted Yu commented on HBASE-18894: lgtm > null pointer exception in list_regions in shell command > --- > > Key: HBASE-18894 > URL: https://issues.apache.org/jira/browse/HBASE-18894 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-alpha-3 >Reporter: Yi Liang >Assignee: Yi Liang > Fix For: 2.0.0 > > Attachments: HBASE-18894-v1-master.patch > > > This error appears when running the list_regions command after disabling 't1', > or after running split 't1' (before the split completes). > It is caused by the region being disabled or still in transition. > {quote} > list_regions 't1' > ERROR: undefined method `getDataLocality' for nil:NilClass > {quote} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18894) null pointer exception in list_regions in shell command
[ https://issues.apache.org/jira/browse/HBASE-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-18894: - Affects Version/s: 2.0.0-alpha-3
[jira] [Updated] (HBASE-18894) null pointer exception in list_regions in shell command
[ https://issues.apache.org/jira/browse/HBASE-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-18894: - Fix Version/s: 2.0.0
[jira] [Updated] (HBASE-18894) null pointer exception in list_regions in shell command
[ https://issues.apache.org/jira/browse/HBASE-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-18894: - Status: Patch Available (was: Open)
[jira] [Updated] (HBASE-18894) null pointer exception in list_regions in shell command
[ https://issues.apache.org/jira/browse/HBASE-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-18894: - Attachment: HBASE-18894-v1-master.patch
[jira] [Updated] (HBASE-18891) Upgrade netty-all jar
[ https://issues.apache.org/jira/browse/HBASE-18891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-18891: --- Attachment: HBASE-18891.002.branch-1.3.patch .002 Let's try 4.0.52.Final > Upgrade netty-all jar > - > > Key: HBASE-18891 > URL: https://issues.apache.org/jira/browse/HBASE-18891 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Critical > Fix For: 1.3.2, 1.2.7, 1.1.13 > > Attachments: HBASE-18891.001.branch-1.3.patch, > HBASE-18891.002.branch-1.3.patch > > > Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities > reported. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-18894) null pointer exception in list_regions in shell command
Yi Liang created HBASE-18894: Summary: null pointer exception in list_regions in shell command Key: HBASE-18894 URL: https://issues.apache.org/jira/browse/HBASE-18894 Project: HBase Issue Type: Bug Reporter: Yi Liang Assignee: Yi Liang This error appears when running the list_regions command after disabling 't1', or after running split 't1' (before the split completes). It is caused by the region being disabled or still in transition. {quote} list_regions 't1' ERROR: undefined method `getDataLocality' for nil:NilClass {quote} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
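The failure above boils down to calling a method on a nil region-load record: when a region is disabled or in transition, no load entry exists for it. A minimal Ruby sketch of the guard pattern such a fix would presumably use (the `RegionLoad` struct and `locality_of` helper are illustrative, not HBase's actual shell code):

```ruby
# Illustrative stand-in for a per-region load record; in the real shell this
# comes from the cluster status and can be nil for regions in transition.
RegionLoad = Struct.new(:data_locality)

# Guard against a nil load record instead of calling a method on nil,
# which is what raises "undefined method `getDataLocality' for nil:NilClass".
def locality_of(region_load)
  region_load.nil? ? "n/a" : region_load.data_locality
end

puts locality_of(RegionLoad.new(0.87))  # => 0.87
puts locality_of(nil)                   # => n/a
```

The same shape applies to any per-region attribute the shell prints: check for a missing record first, emit a placeholder, and keep listing the remaining regions.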
[jira] [Commented] (HBASE-18891) Upgrade netty-all jar
[ https://issues.apache.org/jira/browse/HBASE-18891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183391#comment-16183391 ] Sean Busbey commented on HBASE-18891: - build #8822 got killed with 6 hour timeout before it could post back here, FYI. > Upgrade netty-all jar > - > > Key: HBASE-18891 > URL: https://issues.apache.org/jira/browse/HBASE-18891 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Critical > Fix For: 1.3.2, 1.2.7, 1.1.13 > > Attachments: HBASE-18891.001.branch-1.3.patch > > > Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities > reported. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183385#comment-16183385 ] Sean Busbey commented on HBASE-18883: - {quote}Verified manually that hbase-shaded-client jar does not have any unrelocated guava classes. Sean Busbey - do we need to add a test for this in the invariants check? {quote} There already is such a test, [ensure-jars-have-correct-contents.sh|https://github.com/apache/hbase/blob/bc5478f947c41f3f710c1117e0d604f0b28f72eb/hbase-shaded/hbase-shaded-check-invariants/src/test/resources/ensure-jars-have-correct-contents.sh#L25] will flag if any path doesn't match our known-good list. that'll catch unrelocated guava. > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch, HBASE-18883.v2.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
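The invariant check Sean describes can be approximated as a filter over the entry paths of the shaded jar: anything outside a known-good (relocated) prefix list is flagged. A minimal Ruby sketch under assumed prefixes (the real list lives in ensure-jars-have-correct-contents.sh, and the actual relocation prefixes may differ):

```ruby
# Illustrative known-good prefixes for a shaded jar; the real check reads
# its allow-list from ensure-jars-have-correct-contents.sh.
ALLOWED_PREFIXES = %w[org/apache/hadoop/hbase/ META-INF/].freeze

# Return every jar entry that does not fall under an allowed prefix --
# e.g. an unrelocated guava class under com/google/common.
def bad_entries(jar_entries)
  jar_entries.reject { |e| ALLOWED_PREFIXES.any? { |p| e.start_with?(p) } }
end

entries = %w[org/apache/hadoop/hbase/HBase.class com/google/common/base/Joiner.class]
p bad_entries(entries)  # => ["com/google/common/base/Joiner.class"]
```

In practice the entry list would come from `jar tf hbase-shaded-client.jar`; the point is that a single prefix allow-list catches any unrelocated third-party package, not just guava.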
[jira] [Commented] (HBASE-18349) Enable disabled tests in TestFavoredStochasticLoadBalancer that were disabled by Proc-V2 AM in HBASE-14614
[ https://issues.apache.org/jira/browse/HBASE-18349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183365#comment-16183365 ] Thiruvel Thirumoolan commented on HBASE-18349: -- Was on vacation for a while, will get back to this one. > Enable disabled tests in TestFavoredStochasticLoadBalancer that were disabled > by Proc-V2 AM in HBASE-14614 > -- > > Key: HBASE-18349 > URL: https://issues.apache.org/jira/browse/HBASE-18349 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 2.0.0-alpha-1 >Reporter: Stephen Yuan Jiang >Assignee: Thiruvel Thirumoolan > > The following 3 tests in TestFavoredStochasticLoadBalancerwere disabled by > HBASE-14614 (Core Proc-V2 AM): > - testAllFavoredNodesDead > - testAllFavoredNodesDeadMasterRestarted > - testMisplacedRegions > This JIRA is tracking necessary work to re-able (or remove/change if not > applicable) these UTs -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18349) Enable disabled tests in TestFavoredStochasticLoadBalancer that were disabled by Proc-V2 AM in HBASE-14614
[ https://issues.apache.org/jira/browse/HBASE-18349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183357#comment-16183357 ] Sean Busbey commented on HBASE-18349: - still working on this [~thiruvel]?
[jira] [Commented] (HBASE-18351) Fix tests that carry meta in Master that were disabled by Proc-V2 AM in HBASE-14614
[ https://issues.apache.org/jira/browse/HBASE-18351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183354#comment-16183354 ] Sean Busbey commented on HBASE-18351: - these tests might be fixed by default now that Master doesn't carry meta by default. > Fix tests that carry meta in Master that were disabled by Proc-V2 AM in > HBASE-14614 > --- > > Key: HBASE-18351 > URL: https://issues.apache.org/jira/browse/HBASE-18351 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 2.0.0-alpha-1 >Reporter: Stephen Yuan Jiang >Assignee: Vladimir Rodionov > > The following tests were disabled as part of Core Proc-V2 AM in HBASE-14614 > - TestRegionRebalancing is disabled because doesn't consider the fact that > Master carries system tables only (fix of average in RegionStates brought out > the issue). > - Disabled testMetaAddressChange in TestMetaWithReplicas because presumes can > move meta... you can't > - TestAsyncTableGetMultiThreaded wants to move hbase:meta...Balancer does > NPEs. AMv2 won't let you move hbase:meta off Master. > - TestMasterFailover needs to be rewritten for AMv2. It uses tricks not > ordained when up on AMv2. The test is also hobbled by fact that we > religiously enforce that only master can carry meta, something we are lose > about in old AM > This JIRA is tracking the work to enable/modify them. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-16010) Put draining function through Admin API
[ https://issues.apache.org/jira/browse/HBASE-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183340#comment-16183340 ] Jerry He edited comment on HBASE-16010 at 9/27/17 10:06 PM: I'd like to get closure on this ticket. I am ready to add the 'decommission' logic as suggested by the folks here. Decommission: Mark the region servers as 'draining' so that no regions will be added. Also unload the regions on them. What about 'recommission'? Remove the 'draining' mode. Then load the regions? What regions to load? Should the API accept a list of regions to region servers? Or only remove the 'draining' mode, and let the balancer take care of it? was (Author: jinghe): I'd like to get a closure on this ticket. I am ready to add the 'decommission' logic as suggested by the folks here. Decommission: Mark the region servers as 'draining' so that no regions will be added. Also unload the regions on them. What about 'recommission'? Remove the 'draining' mode. Then load the regions? What regions to load? API accepts a list of regions to region servers? Or only remove the 'draining' mode, and let the balancer take care? > Put draining function through Admin API > --- > > Key: HBASE-16010 > URL: https://issues.apache.org/jira/browse/HBASE-16010 > Project: HBase > Issue Type: Improvement >Reporter: Jerry He >Assignee: Jerry He >Priority: Minor > Fix For: 2.0.0 > > Attachments: hbase-16010-v1.patch, hbase-16010-v2.patch, > HBASE-16010-v3.patch > > > Currently, there is no Admin API for the draining function. The client has to > interact directly with the Zookeeper draining node to add and remove draining > servers. 
> For example, in draining_servers.rb: > {code} > zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, > "draining_servers", nil) > parentZnode = zkw.drainingZNode > begin > for server in servers > node = ZKUtil.joinZNode(parentZnode, server) > ZKUtil.createAndFailSilent(zkw, node) > end > ensure > zkw.close() > end > {code} > This is not good in cases like secure clusters with protected Zookeeper nodes. > Let's put draining function through Admin API. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-16010) Put draining function through Admin API
[ https://issues.apache.org/jira/browse/HBASE-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183340#comment-16183340 ] Jerry He commented on HBASE-16010: -- I'd like to get closure on this ticket. I am ready to add the 'decommission' logic as suggested by the folks here. Decommission: Mark the region servers as 'draining' so that no regions will be added. Also unload the regions on them. What about 'recommission'? Remove the 'draining' mode. Then load the regions? What regions to load? Should the API accept a list of regions to region servers? Or only remove the 'draining' mode, and let the balancer take care of it? > Put draining function through Admin API > --- > > Key: HBASE-16010 > URL: https://issues.apache.org/jira/browse/HBASE-16010 > Project: HBase > Issue Type: Improvement >Reporter: Jerry He >Assignee: Jerry He >Priority: Minor > Fix For: 2.0.0 > > Attachments: hbase-16010-v1.patch, hbase-16010-v2.patch, > HBASE-16010-v3.patch > > > Currently, there is no Admin API for the draining function. The client has to > interact directly with the Zookeeper draining node to add and remove draining > servers. > For example, in draining_servers.rb: > {code} > zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, > "draining_servers", nil) > parentZnode = zkw.drainingZNode > begin > for server in servers > node = ZKUtil.joinZNode(parentZnode, server) > ZKUtil.createAndFailSilent(zkw, node) > end > ensure > zkw.close() > end > {code} > This is not good in cases like secure clusters with protected Zookeeper nodes. > Let's put draining function through Admin API. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
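The decommission/recommission semantics Jerry is weighing can be sketched as a small state tracker: mark a server draining and remember what was unloaded, then on recommission either hand those regions back for reloading or leave placement to the balancer. This is a hypothetical illustration of the proposed Admin-level flow, not HBase's actual API; every name here is invented:

```ruby
# Hypothetical tracker for the proposed decommission/recommission semantics.
class DrainingTracker
  def initialize
    @draining = {}  # server name => regions that were unloaded from it
  end

  # Decommission: mark the server draining and record its unloaded regions
  # so a later recommission can optionally restore them.
  def decommission(server, regions)
    @draining[server] = regions
  end

  # Recommission: clear the draining mark. With reload: true, return the
  # recorded regions for reloading; otherwise return nothing and let the
  # balancer place regions on the server over time.
  def recommission(server, reload: false)
    regions = @draining.delete(server) || []
    reload ? regions : []
  end

  def draining?(server)
    @draining.key?(server)
  end
end
```

The open design question in the comment maps directly onto the `reload:` flag: whether recommission should take (or remember) an explicit region list, or simply lift the draining mark and defer to the balancer.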
[jira] [Commented] (HBASE-18391) List the stuffs which are using the patent grant license (PATENTS file) of Facebook; And then discuss and remove them.
[ https://issues.apache.org/jira/browse/HBASE-18391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183313#comment-16183313 ] Sean Busbey commented on HBASE-18391: - sounds like this is all taken care of now? > List the stuffs which are using the patent grant license (PATENTS file) of > Facebook; And then discuss and remove them. > -- > > Key: HBASE-18391 > URL: https://issues.apache.org/jira/browse/HBASE-18391 > Project: HBase > Issue Type: Task > Components: community, dependencies >Reporter: Chia-Ping Tsai >Priority: Blocker > Labels: incompatible > Fix For: 2.0.0-beta-1 > > > See ["Apache Foundation disallows use of the Facebook “BSD+Patent” > license"|https://news.ycombinator.com/item?id=14779881] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183300#comment-16183300 ] Hadoop QA commented on HBASE-18883:
---
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 3m 34s | master passed |
| +1 | compile | 3m 16s | master passed |
| +1 | mvneclipse | 1m 54s | master passed |
| +1 | shadedjars | 12m 38s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 2m 54s | master passed |
| +1 | mvninstall | 4m 12s | the patch passed |
| +1 | compile | 3m 44s | the patch passed |
| +1 | javac | 3m 44s | the patch passed |
| +1 | mvneclipse | 1m 56s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedjars | 4m 1s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 39m 2s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. |
| +1 | javadoc | 2m 39s | the patch passed |
| -1 | unit | 74m 19s | root in the patch failed. |
| +1 | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 142m 30s | |

|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.master.procedure.TestDisableTableProcedure |
| | org.apache.hadoop.hbase.master.procedure.TestEnableTableProcedure |
| | org.apache.hadoop.hbase.client.TestAsyncReplicationAdminApi |
| | org.apache.hadoop.hbase.snapshot.TestSnapshotClientRetries |
| | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
| | org.apache.hadoop.hbase.master.TestMasterFailover |
| | org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook |
| | org.apache.hadoop.hbase.namespace.TestNamespaceAuditor |
| | org.apache.hadoop.hbase.wal.TestWALFiltering |
| | org.apache.hadoop.hbase.master.TestMasterFileSystemWithWALDir |
| | org.apache.hadoop.hbase.replication.TestReplicationStateHBaseImpl |
| | org.apache.hadoop.hbase.quotas.TestMasterSpaceQuotaObserver |
| | org.apache.hadoop.hbase.replication.regionserver.TestWALEntryStream |
| | org.apache.hadoop.hbase.wal.TestWALSplitCompressed |
| | org.apache.hadoop.hbase.client.TestAsyncTableNoncedRetry |
| | org.apache.hadoop.hbase.quotas.TestRegionSizeUse |
| | org.apache.hadoop.hbase.master.balancer.TestFavoredStochasticLoadBalancer |
| | org.apache.hadoop.hbase.client.TestClientTimeouts |
| | org.apache.hadoop.hbase.io.asyncfs.TestSaslFanOutOneBlockAsyncDFSOutput |
| | org.apache.hadoop.hbase.util.TestHBaseFsckEncryption |
| | org.apache.hadoop.hbase.master.assignment.TestRogueRSAssignment |
| | org.apache.hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
[jira] [Created] (HBASE-18893) shell 'alter' command no longer distinguishes column add/modify/delete
Mike Drob created HBASE-18893: - Summary: shell 'alter' command no longer distinguishes column add/modify/delete Key: HBASE-18893 URL: https://issues.apache.org/jira/browse/HBASE-18893 Project: HBase Issue Type: Bug Components: shell Reporter: Mike Drob After HBASE-15641 all 'alter' commands go through a single modifyTable call at the end, so we can no longer easily distinguish add, modify, and delete column events. This potentially affects coprocessors that needed the update notifications for new or removed columns. Let's let the shell still make the separate calls it made before, without undoing the batching, which seems pretty useful. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
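One way the add/modify/delete events could be recovered from a single modifyTable call is to diff the old and new table descriptors. The sketch below is purely illustrative and not actual HBase code: plain maps stand in for the table's column-family descriptors, and all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ColumnDiffDemo {
    // A family-name -> serialized-attributes map stands in for the real
    // column descriptors carried by the old and new table descriptors.
    static Map<String, List<String>> diff(Map<String, String> oldCfs,
                                          Map<String, String> newCfs) {
        Map<String, List<String>> events = new LinkedHashMap<>();
        events.put("added", new ArrayList<>());
        events.put("modified", new ArrayList<>());
        events.put("deleted", new ArrayList<>());
        for (Map.Entry<String, String> e : newCfs.entrySet()) {
            if (!oldCfs.containsKey(e.getKey())) {
                events.get("added").add(e.getKey());        // new family
            } else if (!oldCfs.get(e.getKey()).equals(e.getValue())) {
                events.get("modified").add(e.getKey());     // attributes changed
            }
        }
        for (String cf : oldCfs.keySet()) {
            if (!newCfs.containsKey(cf)) {
                events.get("deleted").add(cf);              // family removed
            }
        }
        return events;
    }

    public static void main(String[] args) {
        Map<String, String> before = new LinkedHashMap<>();
        before.put("f1", "versions=1");
        before.put("f2", "versions=1");
        Map<String, String> after = new LinkedHashMap<>();
        after.put("f1", "versions=3");   // modified
        after.put("f3", "versions=1");   // added; f2 deleted
        System.out.println(diff(before, after));
    }
}
```

With such a diff, the shell (or the core) could still fire per-column notifications on top of a single batched modifyTable.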
[jira] [Updated] (HBASE-18626) Handle the incompatible change about the replication TableCFs' config
[ https://issues.apache.org/jira/browse/HBASE-18626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-18626: Fix Version/s: (was: 2.0.0-beta-1) 2.0.0-beta-2 let's push rolling upgrade stuff to beta-2. Ability to do upgrade at all should be sufficient for beta-1. > Handle the incompatible change about the replication TableCFs' config > - > > Key: HBASE-18626 > URL: https://issues.apache.org/jira/browse/HBASE-18626 > Project: HBase > Issue Type: Bug >Reporter: Guanghao Zhang >Priority: Blocker > Fix For: 2.0.0-beta-2 > > > Regarding compatibility, there is one incompatible change to the replication > TableCFs' config. The old config is a string that concatenates the list of > tables and column families in the format "table1:cf1,cf2;table2:cfA,cfB" in > ZooKeeper for the table-cf to replication peer mapping. When parsing the config, it > uses ":" to split the string. If the table name includes a namespace, the parsing will be > wrong (see HBASE-11386). This has been a problem since we started supporting namespaces (0.98). > So HBASE-11393 (and HBASE-16653) changed it to a PB object. When rolling-upgrading a > cluster, you need to roll the master first, and the master will try to > translate the string config to a PB object. But there are two problems. > 1. Permission problem. The replication client can write to ZooKeeper > directly, so the znode may have a different owner, and the master may not have > write permission for the znode. It may then fail to translate the old > table-cfs string to the new PB object. See HBASE-16938. > 2. We usually keep compatibility between old clients and new servers. But an > old replication client may write a string config to the znode directly, and then the > new server can't parse it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
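The namespace ambiguity in the legacy string format can be demonstrated with a short sketch. The parser below is an illustration of the ":"-split logic the issue describes, not the actual HBase code; the class and method names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TableCfsParseDemo {
    // Legacy format: "table1:cf1,cf2;table2:cfA,cfB"
    static Map<String, String> parseLegacy(String config) {
        Map<String, String> tableCfs = new LinkedHashMap<>();
        for (String entry : config.split(";")) {
            String[] parts = entry.split(":");
            // Assumes at most one ':' per entry. This breaks when the table
            // name itself is namespaced, e.g. "ns:table1", because HBase
            // also uses ':' as the namespace/table separator.
            String table = parts[0];
            String cfs = parts.length > 1 ? parts[1] : "";
            tableCfs.put(table, cfs);
        }
        return tableCfs;
    }

    public static void main(String[] args) {
        // Works for default-namespace tables:
        System.out.println(parseLegacy("table1:cf1,cf2"));    // {table1=cf1,cf2}
        // Mis-parses a namespaced table: "ns" is taken as the table name
        // and "table1" as the column-family list.
        System.out.println(parseLegacy("ns:table1:cf1,cf2")); // {ns=table1}
    }
}
```

This is exactly the ambiguity that motivated moving the config to a structured PB object instead of a delimiter-based string.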
[jira] [Updated] (HBASE-16550) Procedure v2 - Add AM compatibility for 2.x Master and 1.x RSs; i.e. support Rolling Upgrade from hbase-1 to -2.
[ https://issues.apache.org/jira/browse/HBASE-16550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-16550: Fix Version/s: (was: 2.0.0-beta-1) 2.0.0-beta-2 let's push rolling upgrade stuff to beta-2. Ability to do upgrade at all should be sufficient for beta-1. > Procedure v2 - Add AM compatibility for 2.x Master and 1.x RSs; i.e. support > Rolling Upgrade from hbase-1 to -2. > > > Key: HBASE-16550 > URL: https://issues.apache.org/jira/browse/HBASE-16550 > Project: HBase > Issue Type: Bug > Components: proc-v2, Region Assignment >Affects Versions: 2.0.0 >Reporter: Matteo Bertozzi >Priority: Blocker > Fix For: 2.0.0-beta-2 > > > Core AM HBASE-14614 relies on the RS to be using zkless assignment. Add > support for the old, plain non-zkless AM. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-17143) Scan improvement
[ https://issues.apache.org/jira/browse/HBASE-17143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183290#comment-16183290 ] Sean Busbey commented on HBASE-17143: - these last 3 items making it into 2.0.0-beta-1 [~Apache9], or better to promote them to their own tasks and target a later release, e.g. 2.1? > Scan improvement > > > Key: HBASE-17143 > URL: https://issues.apache.org/jira/browse/HBASE-17143 > Project: HBase > Issue Type: Umbrella > Components: Client, scan >Affects Versions: 2.0.0, 1.4.0 >Reporter: Duo Zhang >Priority: Blocker > Fix For: 2.0.0-beta-1 > > > Parent issues to track some improvements of the current scan. > Timeout per scan, unify batch and allowPartial, add inclusive and exclusive > of startKey and endKey, start scan from the middle of a record, use mvcc to > keep row atomic when allowPartial, etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-12260) MasterServices - remove from coprocessor API (Discuss)
[ https://issues.apache.org/jira/browse/HBASE-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183288#comment-16183288 ] Andrew Purtell commented on HBASE-12260: Sure, we could replace MS with a new interface that only exposes a safe subset. > MasterServices - remove from coprocessor API (Discuss) > -- > > Key: HBASE-12260 > URL: https://issues.apache.org/jira/browse/HBASE-12260 > Project: HBase > Issue Type: Sub-task > Components: master >Reporter: ryan rawson >Priority: Critical > Fix For: 2.0.0-alpha-4 > > > A major issue with MasterServices is the MasterCoprocessorEnvironment exposes > this class even though MasterServices is tagged with > @InterfaceAudience.Private > This means that the entire internals of the HMaster is essentially part of > the coprocessor API. Many of the classes returned by the MasterServices API > are highly internal, extremely powerful, and subject to constant change. > Perhaps a new API to replace MasterServices that is use-case focused, and > justified based on real world co-processors would suit things better. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
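The "safe subset" idea from this comment can be illustrated with a hypothetical facade: a narrow, use-case-focused interface is exposed to coprocessors while the powerful internal service stays private. All interface and method names below are invented for illustration and do not match the actual HBase API.

```java
public class SafeSubsetDemo {
    // Stand-in for the internal, @InterfaceAudience.Private MasterServices.
    interface InternalMasterServices {
        void assignRegion(String region);  // powerful: should not leak to CPs
        void abortMaster();                // dangerous: should not leak to CPs
        String getClusterId();             // harmless, read-only
    }

    // Narrow facade a coprocessor environment could expose instead.
    // Only methods justified by real-world coprocessor use cases go here.
    interface CoprocessorMasterView {
        String getClusterId();
    }

    // Forward only the safe method; the dangerous ones are unreachable
    // through the view, even by casting, because the lambda is a fresh object.
    static CoprocessorMasterView restrict(InternalMasterServices internal) {
        return internal::getClusterId;
    }
}
```

Compared with handing out MasterServices directly, the facade keeps the master's internals free to change without breaking coprocessors, since only the small published surface is a compatibility commitment.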
[jira] [Commented] (HBASE-18891) Upgrade netty-all jar
[ https://issues.apache.org/jira/browse/HBASE-18891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183287#comment-16183287 ] Josh Elser commented on HBASE-18891: Just kidding, it is running: https://builds.apache.org/job/PreCommit-HBASE-Build/8822 I'll put up a v2 that updates to .52.Final when that job finishes > Upgrade netty-all jar > - > > Key: HBASE-18891 > URL: https://issues.apache.org/jira/browse/HBASE-18891 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Critical > Fix For: 1.3.2, 1.2.7, 1.1.13 > > Attachments: HBASE-18891.001.branch-1.3.patch > > > Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities > reported. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18891) Upgrade netty-all jar
[ https://issues.apache.org/jira/browse/HBASE-18891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183284#comment-16183284 ] Josh Elser commented on HBASE-18891: bq. Agreed, if moving up why wouldn't we try the latest first? Presumably those bug fixes and whatnot in the later revisions are worth something to someone Agreed. I was going to put up a new patch, but I figured precommit wouldn't handle that well. It seems like perhaps it didn't like me at all given no response yet :) > Upgrade netty-all jar > - > > Key: HBASE-18891 > URL: https://issues.apache.org/jira/browse/HBASE-18891 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Critical > Fix For: 1.3.2, 1.2.7, 1.1.13 > > Attachments: HBASE-18891.001.branch-1.3.patch > > > Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities > reported. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18891) Upgrade netty-all jar
[ https://issues.apache.org/jira/browse/HBASE-18891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-18891: --- Summary: Upgrade netty-all jar (was: Upgrade netty-all jar to 4.0.37.Final) > Upgrade netty-all jar > - > > Key: HBASE-18891 > URL: https://issues.apache.org/jira/browse/HBASE-18891 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Critical > Fix For: 1.3.2, 1.2.7, 1.1.13 > > Attachments: HBASE-18891.001.branch-1.3.patch > > > Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities > reported. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18891) Upgrade netty-all jar to 4.0.37.Final
[ https://issues.apache.org/jira/browse/HBASE-18891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183275#comment-16183275 ] Andrew Purtell commented on HBASE-18891: We are doing the same. This is a point version upgrade so "should have" no compatibility impact. > Well, 4.0.52.Final was released two weeks ago. Not sure why we'd have to > limit ourselves to .37... Agreed, if moving up why wouldn't we try the latest first? Presumably those bug fixes and whatnot in the later revisions are worth something to someone > Upgrade netty-all jar to 4.0.37.Final > - > > Key: HBASE-18891 > URL: https://issues.apache.org/jira/browse/HBASE-18891 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Critical > Fix For: 1.3.2, 1.2.7, 1.1.13 > > Attachments: HBASE-18891.001.branch-1.3.patch > > > Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities > reported. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
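For reference, a dependency bump like the one discussed usually amounts to a one-line pom change. The fragment below is a hypothetical sketch: the `netty.version` property name is an assumption, not necessarily what the HBase pom actually uses; only the `io.netty:netty-all` coordinates are taken from the discussion.

```xml
<!-- Hypothetical pom.xml fragment: bump the managed netty-all version.
     The property name is illustrative; check the actual pom. -->
<properties>
  <!-- comments above suggest going straight to the latest 4.0.x patch
       release (.52) rather than stopping at .37 -->
  <netty.version>4.0.52.Final</netty.version>
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <version>${netty.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Since this is a patch-level upgrade within the 4.0.x line, it "should have" no API compatibility impact, which is why precommit plus the unit test run is the main verification step.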