[jira] [Updated] (HBASE-17220) [C++] Address major issues from cpplint

2016-12-15 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-17220:
--
Attachment: hbase-17220_v1.patch

This patch removes the {{long}}s and {{short}}s in favor of {{int64_t}}, and 
addresses some other issues reported by {{make lint}}. 

[~sudeeps] do you mind taking a quick look? 

> [C++] Address major issues from cpplint 
> 
>
> Key: HBASE-17220
> URL: https://issues.apache.org/jira/browse/HBASE-17220
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Attachments: hbase-17220_v1.patch
>
>
> See HBASE-17218. 
> Some warnings seem important to address: 
> {code}
> core/time_range.cc:29:  Use int16/int64/etc, rather than the C type long  
> [runtime/int] [4]
> core/cell.h:39:  Use int16/int64/etc, rather than the C type long  
> [runtime/int] [4]
> core/cell.h:45:  Use int16/int64/etc, rather than the C type long  
> [runtime/int] [4]
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11392) add/remove peer requests should be routed through master

2016-12-15 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-11392:
---
Attachment: HBASE-11392-v3.patch

> add/remove peer requests should be routed through master
> 
>
> Key: HBASE-11392
> URL: https://issues.apache.org/jira/browse/HBASE-11392
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-11392-v1.patch, HBASE-11392-v2.patch, 
> HBASE-11392-v3.patch
>
>
> ReplicationAdmin directly operates on the ZooKeeper data for replication 
> setup. We should route these operations through the master for two 
> reasons: 
>  - Replication implementation details are exposed to the client. We should move 
> most of the replication-related classes to the hbase-server package. 
>  - Routing requests through the master is the standard practice for all other 
> operations. It decouples implementation details from the client code.
> Review board: https://reviews.apache.org/r/54730/
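
A minimal sketch of the intended direction only, with a hypothetical master-side
RPC (the class, interface, and method names below are illustrative assumptions,
not the attached patch): the client-side admin call becomes a thin wrapper
around a master request, and the master performs the ZooKeeper updates itself.

{code:title=MasterRoutedReplicationAdmin.java (illustrative sketch)|borderStyle=solid}
import java.io.IOException;

// Hypothetical client-side shape: peer changes go through a master RPC
// endpoint instead of the client touching ZooKeeper directly.
public class MasterRoutedReplicationAdmin {
  private final MasterRpcService master; // hypothetical master stub

  public MasterRoutedReplicationAdmin(MasterRpcService master) {
    this.master = master;
  }

  public void addPeer(String peerId, String clusterKey) throws IOException {
    // The master validates the request and updates the replication state
    // (e.g. in ZooKeeper) server-side; the client never sees those details.
    master.addReplicationPeer(peerId, clusterKey);
  }

  public void removePeer(String peerId) throws IOException {
    master.removeReplicationPeer(peerId);
  }

  // Hypothetical RPC interface that would be implemented by HMaster.
  public interface MasterRpcService {
    void addReplicationPeer(String peerId, String clusterKey) throws IOException;
    void removeReplicationPeer(String peerId) throws IOException;
  }
}
{code}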



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17314) Limit total buffered size for all replication sources

2016-12-15 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17314:
--
Attachment: HBASE-17314.v03.patch

> Limit total buffered size for all replication sources
> -
>
> Key: HBASE-17314
> URL: https://issues.apache.org/jira/browse/HBASE-17314
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17314.v01.patch, HBASE-17314.v02.patch, 
> HBASE-17314.v03.patch
>
>
> If we have many peers, or some servers have many recovered queues, we will 
> hold many entries in memory, which increases GC pressure and may even cause 
> OOM, because by default we read up to 64MB of entries into the buffer for 
> each source.
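
A minimal sketch of one way to bound the total buffered bytes across all
sources (the class name and the idea of a single shared quota object are
illustrative assumptions, not the attached patch): each source reserves quota
before buffering a batch of entries and releases it after shipping, so many
peers or recovered queues cannot collectively exceed one global limit.

{code:title=GlobalReplicationBufferQuota.java (illustrative sketch)|borderStyle=solid}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical global quota shared by every replication source on a
// region server: a source may only buffer more WAL entries while the
// total buffered size stays under the configured limit.
public class GlobalReplicationBufferQuota {
  private final long limitBytes;
  private final AtomicLong usedBytes = new AtomicLong();

  public GlobalReplicationBufferQuota(long limitBytes) {
    this.limitBytes = limitBytes;
  }

  /** Try to reserve space for a batch; returns false if the global limit is hit. */
  public boolean tryAcquire(long batchSizeBytes) {
    long newUsed = usedBytes.addAndGet(batchSizeBytes);
    if (newUsed > limitBytes) {
      usedBytes.addAndGet(-batchSizeBytes); // roll back and make the caller wait
      return false;
    }
    return true;
  }

  /** Release space once the batch has been shipped to the peer. */
  public void release(long batchSizeBytes) {
    usedBytes.addAndGet(-batchSizeBytes);
  }
}
{code}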



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17314) Limit total buffered size for all replication sources

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753682#comment-15753682
 ] 

Hadoop QA commented on HBASE-17314:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-17314 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843561/HBASE-17314.v02.patch 
|
| JIRA Issue | HBASE-17314 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4946/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Limit total buffered size for all replication sources
> -
>
> Key: HBASE-17314
> URL: https://issues.apache.org/jira/browse/HBASE-17314
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17314.v01.patch, HBASE-17314.v02.patch
>
>
> If we have many peers, or some servers have many recovered queues, we will 
> hold many entries in memory, which increases GC pressure and may even cause 
> OOM, because by default we read up to 64MB of entries into the buffer for 
> each source.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17314) Limit total buffered size for all replication sources

2016-12-15 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17314:
--
Status: Patch Available  (was: Open)

> Limit total buffered size for all replication sources
> -
>
> Key: HBASE-17314
> URL: https://issues.apache.org/jira/browse/HBASE-17314
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17314.v01.patch, HBASE-17314.v02.patch
>
>
> If we have many peers, or some servers have many recovered queues, we will 
> hold many entries in memory, which increases GC pressure and may even cause 
> OOM, because by default we read up to 64MB of entries into the buffer for 
> each source.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17314) Limit total buffered size for all replication sources

2016-12-15 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17314:
--
Attachment: HBASE-17314.v02.patch

Add UT

> Limit total buffered size for all replication sources
> -
>
> Key: HBASE-17314
> URL: https://issues.apache.org/jira/browse/HBASE-17314
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17314.v01.patch, HBASE-17314.v02.patch
>
>
> If we have many peers, or some servers have many recovered queues, we will 
> hold many entries in memory, which increases GC pressure and may even cause 
> OOM, because by default we read up to 64MB of entries into the buffer for 
> each source.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17282) Reduce the redundant requests to meta table

2016-12-15 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17282:
--
Attachment: HBASE-17282-v1.patch

Add testcase. Fix findbugs warnings.

> Reduce the redundant requests to meta table
> ---
>
> Key: HBASE-17282
> URL: https://issues.apache.org/jira/browse/HBASE-17282
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17282-v1.patch, HBASE-17282.patch
>
>
> This usually happens at startup when the meta cache is empty. There will be a 
> lot of locating requests, but most of them will have the same results. Things 
> become worse if we do batch operations with AsyncTable, as we will send a 
> locating request for each operation concurrently.
> We need to reduce the redundant requests to the meta table.
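
A minimal sketch of the coalescing idea, under the assumption that duplicate
locate calls can share one pending future (the class and field names are
illustrative, not the attached patch): concurrent lookups for the same key
reuse the in-flight meta request instead of each scanning meta.

{code:title=CoalescingLocator.java (illustrative sketch)|borderStyle=solid}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Hypothetical sketch: callers asking for the location of the same key
// while a meta lookup is already in flight share that lookup's future,
// so a batch of N operations issues far fewer than N meta scans.
// (Assumes metaLookup completes asynchronously.)
public class CoalescingLocator<K, L> {
  private final ConcurrentMap<K, CompletableFuture<L>> pending = new ConcurrentHashMap<>();
  private final Function<K, CompletableFuture<L>> metaLookup;

  public CoalescingLocator(Function<K, CompletableFuture<L>> metaLookup) {
    this.metaLookup = metaLookup;
  }

  public CompletableFuture<L> locate(K key) {
    return pending.computeIfAbsent(key, k ->
        metaLookup.apply(k).whenComplete((loc, err) -> pending.remove(k)));
  }
}
{code}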



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17325) Add batch delete capability to ImportTsv

2016-12-15 Thread zhangshibin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangshibin updated HBASE-17325:

Status: Patch Available  (was: Open)

> Add batch delete capability to ImportTsv 
> -
>
> Key: HBASE-17325
> URL: https://issues.apache.org/jira/browse/HBASE-17325
> Project: HBase
>  Issue Type: New Feature
>  Components: tooling
>Reporter: zhangshibin
>  Labels: patch
> Attachments: 
> 0001-HBASE-17325-Add-batch-delete-capability-to-ImportTsv.patch
>
>
> To batch-delete data that was previously loaded into a table from external 
> files, this feature adds a switch key to enable batch delete.
> First, the input file and the bulk-output function of ImportTsv are used to 
> generate hfiles in HDFS that contain KeyValues with the 'DeleteFamily' 
> marker. Then org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles is used 
> to load those hfiles into the table, so that the data of the whole family we 
> need to delete is covered.
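
A minimal sketch of the delete-marker generation step described above, assuming
a hypothetical batch-delete switch and a helper invoked from the mapper; this is
not the attached patch, only the shape of emitting DeleteFamily KeyValues into
the bulk-load output.

{code:title=DeleteFamilyEmitter.java (illustrative sketch)|borderStyle=solid}
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper helper: for every row key read from the input file,
// emit a KeyValue carrying a DeleteFamily marker so that the hfiles produced
// by the bulk-output step delete the whole family once they are loaded with
// LoadIncrementalHFiles.
public class DeleteFamilyEmitter {
  public static <KI, VI> void emit(
      Mapper<KI, VI, ImmutableBytesWritable, KeyValue>.Context context,
      byte[] row, String family, long timestamp)
      throws IOException, InterruptedException {
    KeyValue deleteMarker = new KeyValue(row, Bytes.toBytes(family),
        null, timestamp, KeyValue.Type.DeleteFamily);
    context.write(new ImmutableBytesWritable(row), deleteMarker);
  }
}
{code}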



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17257) Add column-aliasing capability to hbase-client

2016-12-15 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-17257:
--
Status: Patch Available  (was: Open)

> Add column-aliasing capability to hbase-client
> --
>
> Key: HBASE-17257
> URL: https://issues.apache.org/jira/browse/HBASE-17257
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>  Labels: features
> Attachments: HBASE-17257-v2.patch, HBASE-17257-v3.patch, 
> HBASE-17257-v4.patch, HBASE-17257.patch
>
>
> Review Board link: https://reviews.apache.org/r/54635/
> Column aliasing will provide the option for a 1, 2, or 4 byte alias value to 
> be stored in each cell of an "alias enabled" column-family, in place of the 
> full-length column-qualifier. Aliasing is intended to operate completely 
> invisibly to the end-user developer, with absolutely no "awareness" of 
> aliasing required to be coded into a front-end application. No new public 
> hbase-client interfaces are to be introduced, and only a few new public 
> methods should need to be added to existing interfaces, primarily to allow an 
> administrator to designate that a new column-family is to be alias-enabled by 
> setting its aliasSize attribute to 1, 2, or 4.
> To facilitate such functionality, new subclasses of HTable, 
> BufferedMutatorImpl, and HTableMultiplexer are to be provided. The overriding 
> methods of these new subclasses will invoke methods of the new AliasManager 
> class to facilitate qualifier-to-alias conversions (for user-submitted Gets, 
> Scans, and Mutations) and alias-to-qualifier conversions (for Results 
> returned from HBase) for any Table that has one or more alias-enabled column 
> families. All conversion logic will be encapsulated in the new AliasManager 
> class, and all qualifier-to-alias mappings will be persisted in a new 
> aliasMappingTable in a new, reserved namespace.
> An informal polling of HBase users at HBaseCon East and at the 
> Strata/Hadoop-World conference in Sept. 2016 showed that Column Aliasing 
> could be a popular enhancement to standard HBase functionality, due to the 
> fact that full column-qualifiers are stored in each cell, and reducing this 
> qualifier storage requirement down to 1, 2, or 4 bytes per cell could prove 
> beneficial in terms of reduced storage and bandwidth needs. Aliasing is 
> intended chiefly for column-families which are of the "narrow and tall" 
> variety (i.e., that are designed to use relatively few distinct 
> column-qualifiers throughout a large number of rows, throughout the lifespan 
> of the column-family). A column-family that is set up with an alias-size of 1 
> byte can contain up to 255 unique column-qualifiers; a 2 byte alias-size 
> allows for up to 65,535 unique column-qualifiers; and a 4 byte alias-size 
> allows for up to 4,294,967,295 unique column-qualifiers.
> Fuller specifications will be entered into the comments section below. Note 
> that it may well not be viable to add aliasing support in the new "async" 
> classes that appear to be currently under development.
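
A minimal sketch of the fixed-width encoding implied by the description,
assuming the alias is simply a 1-, 2-, or 4-byte integer value (the helper name
is illustrative and is not the proposed AliasManager API):

{code:title=AliasEncoding.java (illustrative sketch)|borderStyle=solid}
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper: encode a numeric alias id into the configured alias
// size (1, 2, or 4 bytes) that replaces the full column qualifier inside
// each cell of an alias-enabled family.
public class AliasEncoding {
  public static byte[] encode(long aliasId, int aliasSize) {
    switch (aliasSize) {
      case 1:
        if (aliasId > 0xFF) throw new IllegalArgumentException("alias overflows 1 byte");
        return new byte[] { (byte) aliasId };
      case 2:
        if (aliasId > 0xFFFF) throw new IllegalArgumentException("alias overflows 2 bytes");
        return Bytes.toBytes((short) aliasId);
      case 4:
        if (aliasId > 0xFFFFFFFFL) throw new IllegalArgumentException("alias overflows 4 bytes");
        return Bytes.toBytes((int) aliasId);
      default:
        throw new IllegalArgumentException("alias size must be 1, 2, or 4");
    }
  }
}
{code}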



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17325) Add batch delete capability to ImportTsv

2016-12-15 Thread zhangshibin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangshibin updated HBASE-17325:

Attachment: 0001-HBASE-17325-Add-batch-delete-capability-to-ImportTsv.patch

> Add batch delete capability to ImportTsv 
> -
>
> Key: HBASE-17325
> URL: https://issues.apache.org/jira/browse/HBASE-17325
> Project: HBase
>  Issue Type: New Feature
>  Components: tooling
>Reporter: zhangshibin
>  Labels: patch
> Attachments: 
> 0001-HBASE-17325-Add-batch-delete-capability-to-ImportTsv.patch
>
>
> To batch-delete data that was previously loaded into a table from external 
> files, this feature adds a switch key to enable batch delete.
> First, the input file and the bulk-output function of ImportTsv are used to 
> generate hfiles in HDFS that contain KeyValues with the 'DeleteFamily' 
> marker. Then org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles is used 
> to load those hfiles into the table, so that the data of the whole family we 
> need to delete is covered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17257) Add column-aliasing capability to hbase-client

2016-12-15 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-17257:
--
Attachment: HBASE-17257-v4.patch

Submitting new patch which takes into account changes to code-base made by 
HBASE-17277 and others.

> Add column-aliasing capability to hbase-client
> --
>
> Key: HBASE-17257
> URL: https://issues.apache.org/jira/browse/HBASE-17257
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>  Labels: features
> Attachments: HBASE-17257-v2.patch, HBASE-17257-v3.patch, 
> HBASE-17257-v4.patch, HBASE-17257.patch
>
>
> Review Board link: https://reviews.apache.org/r/54635/
> Column aliasing will provide the option for a 1, 2, or 4 byte alias value to 
> be stored in each cell of an "alias enabled" column-family, in place of the 
> full-length column-qualifier. Aliasing is intended to operate completely 
> invisibly to the end-user developer, with absolutely no "awareness" of 
> aliasing required to be coded into a front-end application. No new public 
> hbase-client interfaces are to be introduced, and only a few new public 
> methods should need to be added to existing interfaces, primarily to allow an 
> administrator to designate that a new column-family is to be alias-enabled by 
> setting its aliasSize attribute to 1, 2, or 4.
> To facilitate such functionality, new subclasses of HTable, 
> BufferedMutatorImpl, and HTableMultiplexer are to be provided. The overriding 
> methods of these new subclasses will invoke methods of the new AliasManager 
> class to facilitate qualifier-to-alias conversions (for user-submitted Gets, 
> Scans, and Mutations) and alias-to-qualifier conversions (for Results 
> returned from HBase) for any Table that has one or more alias-enabled column 
> families. All conversion logic will be encapsulated in the new AliasManager 
> class, and all qualifier-to-alias mappings will be persisted in a new 
> aliasMappingTable in a new, reserved namespace.
> An informal polling of HBase users at HBaseCon East and at the 
> Strata/Hadoop-World conference in Sept. 2016 showed that Column Aliasing 
> could be a popular enhancement to standard HBase functionality, due to the 
> fact that full column-qualifiers are stored in each cell, and reducing this 
> qualifier storage requirement down to 1, 2, or 4 bytes per cell could prove 
> beneficial in terms of reduced storage and bandwidth needs. Aliasing is 
> intended chiefly for column-families which are of the "narrow and tall" 
> variety (i.e., that are designed to use relatively few distinct 
> column-qualifiers throughout a large number of rows, throughout the lifespan 
> of the column-family). A column-family that is set up with an alias-size of 1 
> byte can contain up to 255 unique column-qualifiers; a 2 byte alias-size 
> allows for up to 65,535 unique column-qualifiers; and a 4 byte alias-size 
> allows for up to 4,294,967,295 unique column-qualifiers.
> Fuller specifications will be entered into the comments section below. Note 
> that it may well not be viable to add aliasing support in the new "async" 
> classes that appear to be currently under development.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17257) Add column-aliasing capability to hbase-client

2016-12-15 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-17257:
--
Status: Open  (was: Patch Available)

> Add column-aliasing capability to hbase-client
> --
>
> Key: HBASE-17257
> URL: https://issues.apache.org/jira/browse/HBASE-17257
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>  Labels: features
> Attachments: HBASE-17257-v2.patch, HBASE-17257-v3.patch, 
> HBASE-17257.patch
>
>
> Review Board link: https://reviews.apache.org/r/54635/
> Column aliasing will provide the option for a 1, 2, or 4 byte alias value to 
> be stored in each cell of an "alias enabled" column-family, in place of the 
> full-length column-qualifier. Aliasing is intended to operate completely 
> invisibly to the end-user developer, with absolutely no "awareness" of 
> aliasing required to be coded into a front-end application. No new public 
> hbase-client interfaces are to be introduced, and only a few new public 
> methods should need to be added to existing interfaces, primarily to allow an 
> administrator to designate that a new column-family is to be alias-enabled by 
> setting its aliasSize attribute to 1, 2, or 4.
> To facilitate such functionality, new subclasses of HTable, 
> BufferedMutatorImpl, and HTableMultiplexer are to be provided. The overriding 
> methods of these new subclasses will invoke methods of the new AliasManager 
> class to facilitate qualifier-to-alias conversions (for user-submitted Gets, 
> Scans, and Mutations) and alias-to-qualifier conversions (for Results 
> returned from HBase) for any Table that has one or more alias-enabled column 
> families. All conversion logic will be encapsulated in the new AliasManager 
> class, and all qualifier-to-alias mappings will be persisted in a new 
> aliasMappingTable in a new, reserved namespace.
> An informal polling of HBase users at HBaseCon East and at the 
> Strata/Hadoop-World conference in Sept. 2016 showed that Column Aliasing 
> could be a popular enhancement to standard HBase functionality, due to the 
> fact that full column-qualifiers are stored in each cell, and reducing this 
> qualifier storage requirement down to 1, 2, or 4 bytes per cell could prove 
> beneficial in terms of reduced storage and bandwidth needs. Aliasing is 
> intended chiefly for column-families which are of the "narrow and tall" 
> variety (i.e., that are designed to use relatively few distinct 
> column-qualifiers throughout a large number of rows, throughout the lifespan 
> of the column-family). A column-family that is set up with an alias-size of 1 
> byte can contain up to 255 unique column-qualifiers; a 2 byte alias-size 
> allows for up to 65,535 unique column-qualifiers; and a 4 byte alias-size 
> allows for up to 4,294,967,295 unique column-qualifiers.
> Fuller specifications will be entered into the comments section below. Note 
> that it may well not be viable to add aliasing support in the new "async" 
> classes that appear to be currently under development.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16398) optimize HRegion computeHDFSBlocksDistribution

2016-12-15 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16398:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> optimize HRegion computeHDFSBlocksDistribution
> --
>
> Key: HBASE-16398
> URL: https://issues.apache.org/jira/browse/HBASE-16398
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0, 1.4.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16398.branch-1.v1.patch, HBASE-16398.patch, 
> HBASE-16398_v2.patch, HBASE-16398_v3.patch, HBASE-16398_v4.patch, 
> HBASE-16398_v5.patch, LocatedBlockStatusComparison.java
>
>
> First, I assume there are no reference or link files in a region family's 
> directory. Without the patch, computeHDFSBlocksDistribution for a region 
> family makes 1+2*N RPC calls, where N is the number of hfiles: the first RPC 
> call is DistributedFileSystem#listStatus to get the hfiles, and for every 
> hfile there are two RPC calls, DistributedFileSystem#getFileStatus(path) and 
> then DistributedFileSystem#getFileBlockLocations(status, start, length).
> With the patch, computeHDFSBlocksDistribution for a region family makes only 
> 2 RPC calls: DistributedFileSystem#getFileStatus(path) and 
> DistributedFileSystem#listLocatedStatus(final Path p, final PathFilter 
> filter).
> So as long as there is at least one hfile, the patch results in fewer RPC 
> calls.
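
A minimal sketch of the single-listing approach described above (the helper
name is illustrative; the actual patch lives in the region/store code): one
listLocatedStatus call returns both the file statuses and their block
locations, so the per-hfile getFileStatus/getFileBlockLocations round trips
go away.

{code:title=BlocksDistributionSketch.java (illustrative sketch)|borderStyle=solid}
import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hbase.HDFSBlocksDistribution;

// Hypothetical helper: compute the HDFS blocks distribution for one column
// family directory with a single listing RPC instead of 2 RPCs per hfile.
public class BlocksDistributionSketch {
  public static HDFSBlocksDistribution compute(FileSystem fs, Path familyDir)
      throws IOException {
    HDFSBlocksDistribution distribution = new HDFSBlocksDistribution();
    RemoteIterator<LocatedFileStatus> files = fs.listLocatedStatus(familyDir);
    while (files.hasNext()) {
      LocatedFileStatus status = files.next();
      // Block locations come back with the listing, no extra call per file.
      for (BlockLocation block : status.getBlockLocations()) {
        distribution.addHostsAndBlockWeight(block.getHosts(), block.getLength());
      }
    }
    return distribution;
  }
}
{code}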



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16398) optimize HRegion computeHDFSBlocksDistribution

2016-12-15 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16398:
-
Affects Version/s: 1.4.0
Fix Version/s: 1.4.0

> optimize HRegion computeHDFSBlocksDistribution
> --
>
> Key: HBASE-16398
> URL: https://issues.apache.org/jira/browse/HBASE-16398
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0, 1.4.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16398.branch-1.v1.patch, HBASE-16398.patch, 
> HBASE-16398_v2.patch, HBASE-16398_v3.patch, HBASE-16398_v4.patch, 
> HBASE-16398_v5.patch, LocatedBlockStatusComparison.java
>
>
> First, I assume there are no reference or link files in a region family's 
> directory. Without the patch, computeHDFSBlocksDistribution for a region 
> family makes 1+2*N RPC calls, where N is the number of hfiles: the first RPC 
> call is DistributedFileSystem#listStatus to get the hfiles, and for every 
> hfile there are two RPC calls, DistributedFileSystem#getFileStatus(path) and 
> then DistributedFileSystem#getFileBlockLocations(status, start, length).
> With the patch, computeHDFSBlocksDistribution for a region family makes only 
> 2 RPC calls: DistributedFileSystem#getFileStatus(path) and 
> DistributedFileSystem#listLocatedStatus(final Path p, final PathFilter 
> filter).
> So as long as there is at least one hfile, the patch results in fewer RPC 
> calls.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17325) Add batch delete capability to ImportTsv

2016-12-15 Thread zhangshibin (JIRA)
zhangshibin created HBASE-17325:
---

 Summary: Add batch delete capability to ImportTsv 
 Key: HBASE-17325
 URL: https://issues.apache.org/jira/browse/HBASE-17325
 Project: HBase
  Issue Type: New Feature
  Components: tooling
Reporter: zhangshibin



To batch-delete data that was previously loaded into a table from external 
files, this feature adds a switch key to enable batch delete.
First, the input file and the bulk-output function of ImportTsv are used to 
generate hfiles in HDFS that contain KeyValues with the 'DeleteFamily' marker. 
Then org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles is used to load 
those hfiles into the table, so that the data of the whole family we need to 
delete is covered.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17262) Refactor RpcServer so as to make it extendable and/or pluggable

2016-12-15 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753576#comment-15753576
 ] 

binlijin commented on HBASE-17262:
--

OK, done.

> Refactor RpcServer so as to make it extendable and/or pluggable
> ---
>
> Key: HBASE-17262
> URL: https://issues.apache.org/jira/browse/HBASE-17262
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17262.master.V1.patch, HBASE-17262.master.V2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16398) optimize HRegion computeHDFSBlocksDistribution

2016-12-15 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753566#comment-15753566
 ] 

binlijin commented on HBASE-16398:
--

push to branch-1


> optimize HRegion computeHDFSBlocksDistribution
> --
>
> Key: HBASE-16398
> URL: https://issues.apache.org/jira/browse/HBASE-16398
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16398.branch-1.v1.patch, HBASE-16398.patch, 
> HBASE-16398_v2.patch, HBASE-16398_v3.patch, HBASE-16398_v4.patch, 
> HBASE-16398_v5.patch, LocatedBlockStatusComparison.java
>
>
> First, I assume there are no reference or link files in a region family's 
> directory. Without the patch, computeHDFSBlocksDistribution for a region 
> family makes 1+2*N RPC calls, where N is the number of hfiles: the first RPC 
> call is DistributedFileSystem#listStatus to get the hfiles, and for every 
> hfile there are two RPC calls, DistributedFileSystem#getFileStatus(path) and 
> then DistributedFileSystem#getFileBlockLocations(status, start, length).
> With the patch, computeHDFSBlocksDistribution for a region family makes only 
> 2 RPC calls: DistributedFileSystem#getFileStatus(path) and 
> DistributedFileSystem#listLocatedStatus(final Path p, final PathFilter 
> filter).
> So as long as there is at least one hfile, the patch results in fewer RPC 
> calls.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-17322) New API to get the list of draining region servers

2016-12-15 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He resolved HBASE-17322.
--
Resolution: Duplicate

> New API to get the list of draining region servers
> --
>
> Key: HBASE-17322
> URL: https://issues.apache.org/jira/browse/HBASE-17322
> Project: HBase
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>
> In various scenarios it would be useful to have a list of draining region 
> servers so as to avoid them while doing certain operations such as region 
> moving during batch rolling upgrades.
> Jira to add a method getDrainingServers() in ClusterStatus so that this info 
> can be retrieved through HBaseAdmin.
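
A minimal sketch of the proposed usage; note that getDrainingServers() is
exactly what this issue proposes and does not exist yet, and the helper class
is an illustrative assumption.

{code:title=DrainingAwareMove.java (illustrative sketch)|borderStyle=solid}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

// Hypothetical usage of the proposed API: skip draining region servers as
// move targets during a rolling upgrade. getDrainingServers() is the method
// this issue asks for; it is not part of ClusterStatus today.
public class DrainingAwareMove {
  public static boolean isUsableTarget(Admin admin, ServerName candidate) throws IOException {
    List<ServerName> draining = admin.getClusterStatus().getDrainingServers();
    return !draining.contains(candidate);
  }
}
{code}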



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17302) The region flush request disappeared from flushQueue

2016-12-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753516#comment-15753516
 ] 

Anoop Sam John commented on HBASE-17302:


Sorry for being late.. Just seeing this patch now
Why for region name compare we use getRegionNameAsString?  Can go for 
getRegionName and do bytes compare check no?

> The region flush request disappeared from flushQueue
> 
>
> Key: HBASE-17302
> URL: https://issues.apache.org/jira/browse/HBASE-17302
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 0.98.23, 1.2.4
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17302-branch-1.2-v1.patch, 
> HBASE-17302-branch-master-v1.patch
>
>
> A region with too many store files has its flush delayed by up to 
> blockingWaitTime ms, and the region flush request is requeued into the 
> flushQueue.
> When the region flush request is requeued into the flushQueue frequently, the 
> request sometimes disappears inexplicably.
> But regionsInQueue still contains the information for the region's request, 
> which means a new flush request cannot be inserted into the flushQueue.
> After that, the region is never flushed again.
> In order to locate the problem, I added a lot of logging to the code.
> {code:title=MemStoreFlusher.java|borderStyle=solid}
> private boolean flushRegion(final HRegion region, final boolean emergencyFlush) {
>   long startTime = 0;
>   synchronized (this.regionsInQueue) {
>     FlushRegionEntry fqe = this.regionsInQueue.remove(region);
>     // Use the start time of the FlushRegionEntry if available
>     if (fqe != null) {
>       startTime = fqe.createTime;
>     }
>     if (fqe != null && emergencyFlush) {
>       // Need to remove from region from delay queue.  When NOT an
>       // emergencyFlush, then item was removed via a flushQueue.poll.
>       flushQueue.remove(fqe);
>     }
>   }
> {code}
> When an emergencyFlush is encountered, the region's flush entry is removed 
> from the flushQueue.
> By comparing the flushQueue content before and after the remove, I found that 
> although RegionA should have been removed, it is possible that RegionB is 
> removed instead.
> {code:title=MemStoreFlusher.java|borderStyle=solid}
> public boolean equals(Object obj) {
>   if (this == obj) {
>     return true;
>   }
>   if (obj == null || getClass() != obj.getClass()) {
>     return false;
>   }
>   Delayed other = (Delayed) obj;
>   return compareTo(other) == 0;
> }
> {code}
> FlushRegionEntry's implementation of equals only compares the delay time, so 
> if different regions happen to have the same delay time, it is possible that 
> one entry is mistaken for the other.
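
A minimal sketch of the direction discussed in the comments (compare the region
as well as the delay, here via a byte comparison of the region name); the class
and field names are illustrative, not the committed patch.

{code:title=FlushRegionEntrySketch.java (illustrative sketch)|borderStyle=solid}
import java.util.Arrays;

// Hypothetical sketch: equals/hashCode that distinguish entries for
// different regions even when their delay happens to be identical,
// so removing RegionA's entry can never remove RegionB's.
public class FlushRegionEntrySketch {
  private final byte[] regionName;
  private final long createTime;

  public FlushRegionEntrySketch(byte[] regionName, long createTime) {
    this.regionName = regionName;
    this.createTime = createTime;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (obj == null || getClass() != obj.getClass()) {
      return false;
    }
    FlushRegionEntrySketch other = (FlushRegionEntrySketch) obj;
    // Compare the region name bytes, not only the remaining delay.
    return createTime == other.createTime
        && Arrays.equals(regionName, other.regionName);
  }

  @Override
  public int hashCode() {
    return 31 * Long.hashCode(createTime) + Arrays.hashCode(regionName);
  }
}
{code}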



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-17302) The region flush request disappeared from flushQueue

2016-12-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753516#comment-15753516
 ] 

Anoop Sam John edited comment on HBASE-17302 at 12/16/16 5:30 AM:
--

Sorry for being late.. Just seeing this patch now
Why for region name compare we use getRegionNameAsString?  Can go for 
getRegionName and do bytes equals check no?


was (Author: anoop.hbase):
Sorry for being late.. Just seeing this patch now
Why for region name compare we use getRegionNameAsString?  Can go for 
getRegionName and do bytes compare check no?

> The region flush request disappeared from flushQueue
> 
>
> Key: HBASE-17302
> URL: https://issues.apache.org/jira/browse/HBASE-17302
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 0.98.23, 1.2.4
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17302-branch-1.2-v1.patch, 
> HBASE-17302-branch-master-v1.patch
>
>
> A region with too many store files has its flush delayed by up to 
> blockingWaitTime ms, and the region flush request is requeued into the 
> flushQueue.
> When the region flush request is requeued into the flushQueue frequently, the 
> request sometimes disappears inexplicably.
> But regionsInQueue still contains the information for the region's request, 
> which means a new flush request cannot be inserted into the flushQueue.
> After that, the region is never flushed again.
> In order to locate the problem, I added a lot of logging to the code.
> {code:title=MemStoreFlusher.java|borderStyle=solid}
> private boolean flushRegion(final HRegion region, final boolean emergencyFlush) {
>   long startTime = 0;
>   synchronized (this.regionsInQueue) {
>     FlushRegionEntry fqe = this.regionsInQueue.remove(region);
>     // Use the start time of the FlushRegionEntry if available
>     if (fqe != null) {
>       startTime = fqe.createTime;
>     }
>     if (fqe != null && emergencyFlush) {
>       // Need to remove from region from delay queue.  When NOT an
>       // emergencyFlush, then item was removed via a flushQueue.poll.
>       flushQueue.remove(fqe);
>     }
>   }
> {code}
> When an emergencyFlush is encountered, the region's flush entry is removed 
> from the flushQueue.
> By comparing the flushQueue content before and after the remove, I found that 
> although RegionA should have been removed, it is possible that RegionB is 
> removed instead.
> {code:title=MemStoreFlusher.java|borderStyle=solid}
> public boolean equals(Object obj) {
>   if (this == obj) {
>     return true;
>   }
>   if (obj == null || getClass() != obj.getClass()) {
>     return false;
>   }
>   Delayed other = (Delayed) obj;
>   return compareTo(other) == 0;
> }
> {code}
> FlushRegionEntry's implementation of equals only compares the delay time, so 
> if different regions happen to have the same delay time, it is possible that 
> one entry is mistaken for the other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF

2016-12-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753500#comment-15753500
 ] 

Anoop Sam John edited comment on HBASE-16993 at 12/16/16 5:22 AM:
--

This is ready for commit [~saint@gmail.com]?   We may have to update the 
jira title and description.. Now we dont need the bucket sizes to be multiple 
of 256 after this patch.


was (Author: anoop.hbase):
This is ready for commit [~saint@gmail.com]?   We may have to update the 
jira title and description.. Now we dont need the bucket sizes to be multiple 
of 1024 after this patch.

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> DATA_BLOCK_ENCODING set to DIFF
> -
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 2.0.0
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960, 46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at 
> 

[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF

2016-12-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753500#comment-15753500
 ] 

Anoop Sam John commented on HBASE-16993:


This is ready for commit [~saint@gmail.com]?   We may have to update the 
jira title and description.. Now we dont need the bucket sizes to be multiple 
of 1024 after this patch.

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> DATA_BLOCK_ENCODING set to DIFF
> -
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 2.0.0
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960, 46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
> at 
> 

[jira] [Commented] (HBASE-16398) optimize HRegion computeHDFSBlocksDistribution

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753489#comment-15753489
 ] 

Hadoop QA commented on HBASE-16398:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
44s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 33s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 94m 14s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 137m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843538/HBASE-16398.branch-1.v1.patch
 |
| JIRA Issue | HBASE-16398 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 2d1af3bae57f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753491#comment-15753491
 ] 

ramkrishna.s.vasudevan commented on HBASE-17081:


Thanks [~ebortnik].
From where did you see this flaky test?
If you see the last pre commit build 
https://builds.apache.org/job/PreCommit-HBASE-Build/4933/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/.
 It seems to have passed.

And in the jenkins test result
https://builds.apache.org/job/HBase-Trunk_matrix/2137/. I could not spot any of 
the Large tests in the test result.
I backtracked up to build #2135 - I cannot see some of the tests, like 
TestHRegion and TestHRegionWithInMemoryFlush, in the report for any of them. So 
I think the report is not displaying the LargeTests.
[~saint@gmail.com]- FYI.

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-15787_8.patch, HBASE-17081-V01.patch, 
> HBASE-17081-V02.patch, HBASE-17081-V03.patch, HBASE-17081-V04.patch, 
> HBASE-17081-V05.patch, HBASE-17081-V06.patch, HBASE-17081-V06.patch, 
> HBASE-17081-V07.patch, HBaseMeetupDecember2016-V02.pptx, 
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.
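
A minimal sketch of the intent with purely illustrative names (not
CompactingMemStore's actual classes): on a flush-to-disk request, snapshot the
active segment together with every immutable segment in the compacting
pipeline, instead of only the pipeline tail.

{code:title=FullFlushSketch.java (illustrative sketch)|borderStyle=solid}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of "flush everything": the snapshot handed to the
// flusher is all pipeline segments plus the active segment, rather than
// only the oldest pipeline segment.
public class FullFlushSketch<SEGMENT> {
  private SEGMENT active;
  private final List<SEGMENT> pipeline = new ArrayList<>();

  public FullFlushSketch(SEGMENT initialActive) {
    this.active = initialActive;
  }

  public synchronized List<SEGMENT> snapshotAll(SEGMENT emptyActive) {
    List<SEGMENT> snapshot = new ArrayList<>(pipeline); // oldest ... newest
    snapshot.add(active);                               // newest data last
    pipeline.clear();
    active = emptyActive;                               // start a fresh active segment
    return snapshot;
  }
}
{code}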



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17319) Truncate table with preserve after split may cause truncation to fail

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753440#comment-15753440
 ] 

Hadoop QA commented on HBASE-17319:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 45s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 103m 51s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 141m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegion |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.procedure.TestTruncateTableProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843531/HBASE-17319.patch |
| JIRA Issue | HBASE-17319 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 649220c40985 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 35f0718 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4941/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/4941/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4941/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4941/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Truncate table with preserve after split may cause truncation 

[jira] [Commented] (HBASE-17262) Refactor RpcServer so as to make it extendable and/or pluggable

2016-12-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753410#comment-15753410
 ] 

Anoop Sam John commented on HBASE-17262:


Patch to RB pls.

> Refactor RpcServer so as to make it extendable and/or pluggable
> ---
>
> Key: HBASE-17262
> URL: https://issues.apache.org/jira/browse/HBASE-17262
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17262.master.V1.patch, HBASE-17262.master.V2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15432) TableInputFormat - support multiple column families scan

2016-12-15 Thread Xuesen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuesen Liang updated HBASE-15432:
-
Release Note: retrigger  (was: Patch is available.)
  Status: Patch Available  (was: Open)

> TableInputFormat - support multiple column families scan
> 
>
> Key: HBASE-15432
> URL: https://issues.apache.org/jira/browse/HBASE-15432
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.98.16.1, 0.98.16, 2.0.0
>Reporter: nirav patel
>Assignee: Xuesen Liang
> Fix For: 2.0.0
>
> Attachments: HBASE-15432.master.002.patch, HBASE-15432.master.patch
>
>
> Currently the HBase TableInputFormat class has SCAN_COLUMN_FAMILY and 
> SCAN_COLUMNS. SCAN_COLUMN_FAMILY can only scan a single column family. If we 
> need to scan multiple column families from a table, we must use 
> SCAN_COLUMNS, where we must provide both the columns and the column families, 
> which is not convenient. Can we have a SCAN_COLUMN_FAMILIES option that 
> supports scanning multiple column families?
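
Not part of the attached patches; a minimal sketch of the workaround available 
today, assuming a hypothetical table {{writetest}} with families {{cf1}} and 
{{cf2}}: build the {{Scan}} yourself with {{Scan#addFamily}} and hand it to 
{{TableMapReduceUtil}}, which is what a SCAN_COLUMN_FAMILIES property would let 
the string-based TableInputFormat configuration do directly.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class MultiFamilyScanJob {

  /** Trivial pass-through mapper, just to make the example self-contained. */
  static class IdentityMapper extends TableMapper<ImmutableBytesWritable, Result> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(key, value);  // emit rows unchanged
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "multi-family-scan");

    // Workaround today: add whole families on the Scan object instead of
    // listing family:qualifier pairs through SCAN_COLUMNS.
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf1"));
    scan.addFamily(Bytes.toBytes("cf2"));

    // initTableMapperJob serializes the Scan into the job configuration,
    // so TableInputFormat scans both families.
    TableMapReduceUtil.initTableMapperJob(
        "writetest",                    // placeholder table name
        scan,
        IdentityMapper.class,
        ImmutableBytesWritable.class,
        Result.class,
        job);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}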



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15432) TableInputFormat - support multiple column families scan

2016-12-15 Thread Xuesen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuesen Liang updated HBASE-15432:
-
Status: Open  (was: Patch Available)

Cancelling to re-trigger QA.

> TableInputFormat - support multiple column families scan
> 
>
> Key: HBASE-15432
> URL: https://issues.apache.org/jira/browse/HBASE-15432
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.98.16.1, 0.98.16, 2.0.0
>Reporter: nirav patel
>Assignee: Xuesen Liang
> Fix For: 2.0.0
>
> Attachments: HBASE-15432.master.002.patch, HBASE-15432.master.patch
>
>
> Currently the HBase TableInputFormat class has SCAN_COLUMN_FAMILY and 
> SCAN_COLUMNS. SCAN_COLUMN_FAMILY can only scan a single column family. If we 
> need to scan multiple column families from a table, we must use 
> SCAN_COLUMNS, where we must provide both the columns and the column families, 
> which is not convenient. Can we have a SCAN_COLUMN_FAMILIES option that 
> supports scanning multiple column families?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17090) Procedure v2 - fast wake if nothing else is running

2016-12-15 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753321#comment-15753321
 ] 

Stephen Yuan Jiang commented on HBASE-17090:


I think this looks good. 

> Procedure v2 - fast wake if nothing else is running
> ---
>
> Key: HBASE-17090
> URL: https://issues.apache.org/jira/browse/HBASE-17090
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17090-v0.patch
>
>
> We wait N msec to see if we can batch more procedures, but the pattern we 
> have allows us to wait only for what we know is running and to avoid waiting 
> for something that will never arrive. 
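
Not the attached patch; a generic sketch of the "fast wake" idea, with all 
class and method names hypothetical: the consumer sleeps for the batching 
window only while it knows another producer is still active, and is woken 
immediately when the last producer goes idle, so an idle system never pays the 
full timeout.

{code}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class FastWakeBatcher<T> {
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition moreWork = lock.newCondition();
  private final Queue<T> queue = new ArrayDeque<>();
  private int activeProducers;     // callers that may still enqueue work
  private final long maxWaitMs;    // batching window

  public FastWakeBatcher(long maxWaitMs) { this.maxWaitMs = maxWaitMs; }

  public void producerStart() {
    lock.lock();
    try { activeProducers++; } finally { lock.unlock(); }
  }

  public void producerDone() {
    lock.lock();
    try {
      if (--activeProducers == 0) moreWork.signalAll();  // fast wake: nothing else is running
    } finally { lock.unlock(); }
  }

  public void add(T item) {
    lock.lock();
    try { queue.add(item); moreWork.signalAll(); } finally { lock.unlock(); }
  }

  /** Drain a batch, waiting for more only while someone can still produce. */
  public Queue<T> takeBatch() throws InterruptedException {
    lock.lock();
    try {
      long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(maxWaitMs);
      while (activeProducers > 0) {
        long remaining = deadline - System.nanoTime();
        if (remaining <= 0) break;      // batching window exhausted
        moreWork.awaitNanos(remaining); // woken early by add() or producerDone()
      }
      Queue<T> batch = new ArrayDeque<>(queue);
      queue.clear();
      return batch;
    } finally {
      lock.unlock();
    }
  }
}
{code}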



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15432) TableInputFormat - support multiple column families scan

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753294#comment-15753294
 ] 

Hadoop QA commented on HBASE-15432:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
34s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 17s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 15s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 137m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mapreduce.TestImportExport |
|   | hadoop.hbase.client.TestSizeFailures |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843510/HBASE-15432.master.002.patch
 |
| JIRA Issue | HBASE-15432 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6cd303169545 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 35f0718 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4938/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/4938/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4938/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4938/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TableInputFormat - support multiple column families scan
> 

[jira] [Updated] (HBASE-17018) Spooling BufferedMutator

2016-12-15 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated HBASE-17018:
-
Attachment: HBASE-17018.master.002.patch

Added .master.002.patch:
Eliminated the SpoolingBufferedMutatorCoordinator interface.
Added initial unit tests (several more needed) to show that the basic approach 
works.
Tweaked some logic in the processor.

Still open for design feedback. Without it I'll keep going in the current 
direction.

> Spooling BufferedMutator
> 
>
> Key: HBASE-17018
> URL: https://issues.apache.org/jira/browse/HBASE-17018
> Project: HBase
>  Issue Type: New Feature
>Reporter: Joep Rottinghuis
> Attachments: HBASE-17018.master.001.patch, 
> HBASE-17018.master.002.patch, 
> HBASE-17018SpoolingBufferedMutatorDesign-v1.pdf, YARN-4061 HBase requirements 
> for fault tolerant writer.pdf
>
>
> For YARN Timeline Service v2 we use HBase as a backing store.
> A big concern we would like to address is what to do if HBase is 
> (temporarily) down, for example during an HBase upgrade.
> Most of the high-volume writes are on a best-effort basis, but 
> occasionally we do a flush. Mainly during application lifecycle events, 
> clients will call a flush on the timeline service API. In order to handle the 
> volume of writes we use a BufferedMutator. When flush gets called on our API, 
> we in turn call flush on the BufferedMutator.
> We would like our interface to HBase to be able to spool the mutations to a 
> filesystem in case of HBase errors. If we use the Hadoop filesystem 
> interface, this can then be HDFS, GCS, S3, or any other distributed storage. 
> The mutations can then later be re-played, for example through a MapReduce 
> job.
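
Not the attached design or patch; a minimal sketch of the spooling idea under 
stated assumptions (class and path names are hypothetical, and the spool format 
below is only a placeholder, not a replayable one): delegate to a real 
BufferedMutator and, when a mutate fails, append the affected mutations to a 
file on any Hadoop FileSystem.

{code}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.util.Bytes;

public class SpoolingMutatorSketch implements AutoCloseable {
  private final BufferedMutator delegate;
  private final FileSystem fs;
  private final Path spoolFile;   // e.g. /spool/<table>/<timestamp> (placeholder)

  public SpoolingMutatorSketch(BufferedMutator delegate, Configuration conf,
      Path spoolFile) throws IOException {
    this.delegate = delegate;
    this.fs = spoolFile.getFileSystem(conf);
    this.spoolFile = spoolFile;
  }

  public void mutate(List<? extends Mutation> mutations) throws IOException {
    try {
      delegate.mutate(mutations);
    } catch (IOException e) {
      spool(mutations);           // HBase unavailable: keep the data for replay
    }
  }

  public void flush() throws IOException {
    try {
      delegate.flush();
    } catch (IOException e) {
      // A real implementation would have to track which buffered mutations
      // the failed flush covered; omitted in this sketch.
    }
  }

  /** Placeholder serialization: a real spool would use a replayable format. */
  private void spool(List<? extends Mutation> mutations) throws IOException {
    try (FSDataOutputStream out =
        fs.exists(spoolFile) ? fs.append(spoolFile) : fs.create(spoolFile)) {
      for (Mutation m : mutations) {
        out.writeBytes(Bytes.toStringBinary(m.getRow()) + "\n");
      }
    }
  }

  @Override
  public void close() throws IOException {
    delegate.close();
  }
}
{code}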



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16398) optimize HRegion computeHDFSBlocksDistribution

2016-12-15 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753262#comment-15753262
 ] 

binlijin commented on HBASE-16398:
--

push to master

> optimize HRegion computeHDFSBlocksDistribution
> --
>
> Key: HBASE-16398
> URL: https://issues.apache.org/jira/browse/HBASE-16398
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16398.branch-1.v1.patch, HBASE-16398.patch, 
> HBASE-16398_v2.patch, HBASE-16398_v3.patch, HBASE-16398_v4.patch, 
> HBASE-16398_v5.patch, LocatedBlockStatusComparison.java
>
>
> First, I assume there are no references or links in a region family's directory. 
> Without the patch, computeHDFSBlocksDistribution for a region family makes 
> 1+2*N RPC calls, where N is the number of hfiles: the first RPC call is 
> DistributedFileSystem#listStatus to get the hfiles, and for every hfile there 
> are two RPC calls, DistributedFileSystem#getFileStatus(path) and then 
> DistributedFileSystem#getFileBlockLocations(status, start, length).
> With the patch, computeHDFSBlocksDistribution for a region family makes 
> 2 RPC calls: DistributedFileSystem#getFileStatus(path) and 
> DistributedFileSystem#listLocatedStatus(final Path p, final PathFilter 
> filter).
> So if there is at least one hfile, the patch results in fewer RPC calls.
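
Not the patch itself; a minimal sketch of the listLocatedStatus approach 
described above, assuming the family directory contains only plain hfiles (no 
references or links): the block locations come back with the listing, so no 
per-file getFileBlockLocations call is needed.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hbase.HDFSBlocksDistribution;

public final class FamilyBlocksDistribution {

  /**
   * Builds the block distribution for one family directory using
   * listLocatedStatus, which returns block locations together with the
   * file listing.
   */
  public static HDFSBlocksDistribution compute(FileSystem fs, Path familyDir)
      throws IOException {
    HDFSBlocksDistribution dist = new HDFSBlocksDistribution();
    RemoteIterator<LocatedFileStatus> it = fs.listLocatedStatus(familyDir);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      if (status.isDirectory()) {
        continue;  // assumption: only plain hfiles, no references or links
      }
      for (BlockLocation block : status.getBlockLocations()) {
        dist.addHostsAndBlockWeight(block.getHosts(), block.getLength());
      }
    }
    return dist;
  }

  private FamilyBlocksDistribution() { }
}
{code}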



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16398) optimize HRegion computeHDFSBlocksDistribution

2016-12-15 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16398:
-
Attachment: HBASE-16398.branch-1.v1.patch

> optimize HRegion computeHDFSBlocksDistribution
> --
>
> Key: HBASE-16398
> URL: https://issues.apache.org/jira/browse/HBASE-16398
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16398.branch-1.v1.patch, HBASE-16398.patch, 
> HBASE-16398_v2.patch, HBASE-16398_v3.patch, HBASE-16398_v4.patch, 
> HBASE-16398_v5.patch, LocatedBlockStatusComparison.java
>
>
> First, I assume there are no references or links in a region family's directory. 
> Without the patch, computeHDFSBlocksDistribution for a region family makes 
> 1+2*N RPC calls, where N is the number of hfiles: the first RPC call is 
> DistributedFileSystem#listStatus to get the hfiles, and for every hfile there 
> are two RPC calls, DistributedFileSystem#getFileStatus(path) and then 
> DistributedFileSystem#getFileBlockLocations(status, start, length).
> With the patch, computeHDFSBlocksDistribution for a region family makes 
> 2 RPC calls: DistributedFileSystem#getFileStatus(path) and 
> DistributedFileSystem#listLocatedStatus(final Path p, final PathFilter 
> filter).
> So if there is at least one hfile, the patch results in fewer RPC calls.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17319) Truncate table with preserve after split may cause truncation to fail

2016-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753225#comment-15753225
 ] 

Hudson commented on HBASE-17319:


SUCCESS: Integrated in Jenkins build HBase-1.4 #568 (See 
[https://builds.apache.org/job/HBase-1.4/568/])
HBASE-17319 Truncate table with preserve after split may cause (tedyu: rev 
f3a30697966afd3adac698b83be14b830eaad942)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedureSyncWait.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java


> Truncate table with preserve after split may cause truncation to fail
> -
>
> Key: HBASE-17319
> URL: https://issues.apache.org/jira/browse/HBASE-17319
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Affects Versions: 1.1.7, 1.2.4
>Reporter: Allan Yang
>Assignee: Allan Yang
> Fix For: 1.4.0
>
> Attachments: HBASE-17319-branch-1.patch, HBASE-17319.patch
>
>
> In TruncateTableProcedure, when getting the table's regions from meta to 
> recreate new regions, split parents are not excluded, so the new regions can 
> end up with the same start key and the same region dir:
> {noformat}
> 2016-12-14 20:15:22,231 WARN  [RegionOpenAndInitThread-writetest-1] 
> regionserver.HRegionFileSystem: Trying to create a region that already exists 
> on disk: 
> hdfs://hbasedev1/zhengyan-hbase11-func2/.tmp/data/default/writetest/9b2c8d1539cd92661703ceb8a4d518a1
> {noformat} 
> The TruncateTableProcedure will retry forever and never succeed.
> An attached unit test demonstrates the issue.
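
Not the attached patch; a minimal sketch of the kind of filtering described 
above (class name hypothetical), applied to a region list already read from 
meta: split parents, whose daughters carry the same keys, are dropped before 
the regions are recreated.

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.HRegionInfo;

public final class SplitParentFilter {

  /**
   * Drops split parents (and offline regions) from a list of regions read
   * from meta, so that "truncate with preserve splits" does not try to
   * recreate two regions with the same start key.
   */
  public static List<HRegionInfo> excludeSplitParents(List<HRegionInfo> regions) {
    List<HRegionInfo> result = new ArrayList<>(regions.size());
    for (HRegionInfo hri : regions) {
      if (hri.isSplitParent() || hri.isOffline()) {
        continue;  // parent of a completed split: its daughters carry the keys
      }
      result.add(hri);
    }
    return result;
  }

  private SplitParentFilter() { }
}
{code}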



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17319) Truncate table with preserve after split may cause truncation to fail

2016-12-15 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-17319:
---
Attachment: HBASE-17319.patch

patch for master branch attached

> Truncate table with preserve after split may cause truncation to fail
> -
>
> Key: HBASE-17319
> URL: https://issues.apache.org/jira/browse/HBASE-17319
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Affects Versions: 1.1.7, 1.2.4
>Reporter: Allan Yang
>Assignee: Allan Yang
> Fix For: 1.4.0
>
> Attachments: HBASE-17319-branch-1.patch, HBASE-17319.patch
>
>
> In TruncateTableProcedure, when getting the table's regions from meta to 
> recreate new regions, split parents are not excluded, so the new regions can 
> end up with the same start key and the same region dir:
> {noformat}
> 2016-12-14 20:15:22,231 WARN  [RegionOpenAndInitThread-writetest-1] 
> regionserver.HRegionFileSystem: Trying to create a region that already exists 
> on disk: 
> hdfs://hbasedev1/zhengyan-hbase11-func2/.tmp/data/default/writetest/9b2c8d1539cd92661703ceb8a4d518a1
> {noformat} 
> The TruncateTableProcedure will retry forever and never succeed.
> An attached unit test demonstrates the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17090) Procedure v2 - fast wake if nothing else is running

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753186#comment-15753186
 ] 

Hadoop QA commented on HBASE-17090:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 16s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 44s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
10s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 53s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838836/HBASE-17090-v0.patch |
| JIRA Issue | HBASE-17090 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux cef9db2d9cc7 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 35f0718 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4940/testReport/ |
| modules | C: hbase-procedure U: hbase-procedure |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4940/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Procedure v2 - fast wake if nothing else is running
> ---
>
> Key: HBASE-17090
> URL: https://issues.apache.org/jira/browse/HBASE-17090
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17090-v0.patch
>
>
> We wait Nmsec 

[jira] [Commented] (HBASE-17275) Assign timeout cause region unassign forever

2016-12-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753179#comment-15753179
 ] 

Ted Yu commented on HBASE-17275:


[~syuanjiang] is more familiar with region assignment.

Stephen:
Can you take a look at the JIRAs?

> Assign timeout cause region unassign forever
> 
>
> Key: HBASE-17275
> URL: https://issues.apache.org/jira/browse/HBASE-17275
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.2.3, 1.1.7
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17275-branch-1.patch
>
>
> This is a real case that happened in my test cluster.
> I had more than 8000 regions to assign when I restarted the cluster, but I 
> only started one regionserver. That means the master needed to assign these 
> 8000 regions to a single server (I know it is not right, but it was just for 
> testing).
> The RS received the open-region RPC and began to open regions. But due to 
> the huge number of regions, the master timed out the RPC call (though some 
> regions had actually already opened) after 1 minute, as you can see from log 1.
> {noformat}
> 1. 2016-11-22 10:17:32,285 INFO  [example.org:30001.activeMasterManager] 
> master.AssignmentManager: Unable to communicate with 
> example.org,30003,1479780976834 in order to assign regions,
> java.io.IOException: Call to /example.org:30003 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1, waitTime=60001, 
> operationTimeout=6 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1338)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:290)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:30177)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:1000)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1719)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2828)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2775)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2876)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:646)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:493)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:796)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:188)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1711)
> at java.lang.Thread.run(Thread.java:756)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1, 
> waitTime=60001, operationTimeout=6 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:81)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
> ... 14 more  
> {noformat}
> For the region 7e9aee32eb98a6fc9d503b99fc5f9615 (like many others), after the 
> timeout, the master used a pool to re-assign them, as in 2:
> {noformat}
> 2. 2016-11-22 10:17:32,303 DEBUG [AM.-pool1-t26] master.AssignmentManager: 
> Force region state offline {7e9aee32eb98a6fc9d503b99fc5f9615 
> state=PENDING_OPEN, ts=1479780992078, server=example.org,30003,1479780976834} 
>  
> {noformat}
> But this region was actually opened on the RS, and (maybe) due to the huge 
> pressure, the OPENED zk event was received by the master late, as you can 
> tell from 3, "which is more than 15 seconds late":
> {noformat}
> 3. 2016-11-22 10:17:32,304 DEBUG [AM.ZK.Worker-pool2-t3] 
> master.AssignmentManager: Handling RS_ZK_REGION_OPENED, 
> server=example.org,30003,1479780976834, 
> region=7e9aee32eb98a6fc9d503b99fc5f9615, which is more than 15 seconds late, 
> current_state={7e9aee32eb98a6fc9d503b99fc5f9615 state=PENDING_OPEN, 
> ts=1479780992078, server=example.org,30003,1479780976834}
> {noformat}
> In the meantime, the master still tried to re-assign this region in another 
> thread. The master first closed this region to avoid a multi-assign, then 
> changed the state of this region from PENDING_OPEN > OFFLINE > PENDING_OPEN. 
> Its RIT node in zk was also transitioned to OFFLINE, as in 4, 5, 6, 7:
> {noformat}
> 4. 2016-11-22 10:17:32,321 DEBUG 

[jira] [Commented] (HBASE-17319) Truncate table with preserve after split may cause truncation to fail

2016-12-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753165#comment-15753165
 ] 

Ted Yu commented on HBASE-17319:


Please name the patch for the master branch and attach it for a QA run.

Thanks

> Truncate table with preserve after split may cause truncation to fail
> -
>
> Key: HBASE-17319
> URL: https://issues.apache.org/jira/browse/HBASE-17319
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Affects Versions: 1.1.7, 1.2.4
>Reporter: Allan Yang
>Assignee: Allan Yang
> Fix For: 1.4.0
>
> Attachments: HBASE-17319-branch-1.patch
>
>
> In TruncateTableProcedure, when getting the table's regions from meta to 
> recreate new regions, split parents are not excluded, so the new regions can 
> end up with the same start key and the same region dir:
> {noformat}
> 2016-12-14 20:15:22,231 WARN  [RegionOpenAndInitThread-writetest-1] 
> regionserver.HRegionFileSystem: Trying to create a region that already exists 
> on disk: 
> hdfs://hbasedev1/zhengyan-hbase11-func2/.tmp/data/default/writetest/9b2c8d1539cd92661703ceb8a4d518a1
> {noformat} 
> The TruncateTableProcedure will retry forever and never succeed.
> An attached unit test demonstrates the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-17275) Assign timeout cause region unassign forever

2016-12-15 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753130#comment-15753130
 ] 

Allan Yang edited comment on HBASE-17275 at 12/16/16 1:58 AM:
--

[~tedyu] can you, or someone you find, look at HBASE-17264, HBASE-17265 and 
HBASE-17275? They are all related; can the fixes really take effect in our 
environment? 


was (Author: allan163):
[~ted_yu] can you, or someone you find, look at HBASE-17264, HBASE-17265 and 
HBASE-17275? They are all related; can the fixes really take effect in our 
environment? 

> Assign timeout cause region unassign forever
> 
>
> Key: HBASE-17275
> URL: https://issues.apache.org/jira/browse/HBASE-17275
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.2.3, 1.1.7
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17275-branch-1.patch
>
>
> This is a real case that happened in my test cluster.
> I had more than 8000 regions to assign when I restarted the cluster, but I 
> only started one regionserver. That means the master needed to assign these 
> 8000 regions to a single server (I know it is not right, but it was just for 
> testing).
> The RS received the open-region RPC and began to open regions. But due to 
> the huge number of regions, the master timed out the RPC call (though some 
> regions had actually already opened) after 1 minute, as you can see from log 1.
> {noformat}
> 1. 2016-11-22 10:17:32,285 INFO  [example.org:30001.activeMasterManager] 
> master.AssignmentManager: Unable to communicate with 
> example.org,30003,1479780976834 in order to assign regions,
> java.io.IOException: Call to /example.org:30003 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1, waitTime=60001, 
> operationTimeout=6 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1338)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:290)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:30177)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:1000)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1719)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2828)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2775)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2876)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:646)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:493)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:796)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:188)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1711)
> at java.lang.Thread.run(Thread.java:756)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1, 
> waitTime=60001, operationTimeout=6 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:81)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
> ... 14 more  
> {noformat}
> For the region 7e9aee32eb98a6fc9d503b99fc5f9615 (like many others), after the 
> timeout, the master used a pool to re-assign them, as in 2:
> {noformat}
> 2. 2016-11-22 10:17:32,303 DEBUG [AM.-pool1-t26] master.AssignmentManager: 
> Force region state offline {7e9aee32eb98a6fc9d503b99fc5f9615 
> state=PENDING_OPEN, ts=1479780992078, server=example.org,30003,1479780976834} 
>  
> {noformat}
> But this region was actually opened on the RS, and (maybe) due to the huge 
> pressure, the OPENED zk event was received by the master late, as you can 
> tell from 3, "which is more than 15 seconds late":
> {noformat}
> 3. 2016-11-22 10:17:32,304 DEBUG [AM.ZK.Worker-pool2-t3] 
> master.AssignmentManager: Handling RS_ZK_REGION_OPENED, 
> server=example.org,30003,1479780976834, 
> region=7e9aee32eb98a6fc9d503b99fc5f9615, which is more than 15 seconds late, 
> current_state={7e9aee32eb98a6fc9d503b99fc5f9615 state=PENDING_OPEN, 
> ts=1479780992078, server=example.org,30003,1479780976834}
> {noformat}
> In the meantime, master still try to 

[jira] [Commented] (HBASE-17148) Procedure v2 - add bulk proc submit

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753145#comment-15753145
 ] 

Hadoop QA commented on HBASE-17148:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 43s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 23s {color} 
| {color:red} hbase-procedure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.procedure2.store.wal.TestWALProcedureStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.2 Server=1.12.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843506/HBASE-17148.master.001.patch
 |
| JIRA Issue | HBASE-17148 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux f0f54f0b7c06 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 
16:48:51 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 35f0718 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4939/artifact/patchprocess/patch-unit-hbase-procedure.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/4939/artifact/patchprocess/patch-unit-hbase-procedure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4939/testReport/ |
| modules | C: hbase-procedure U: hbase-procedure |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4939/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Procedure v2 - add bulk proc submit
> ---
>
> Key: HBASE-17148
> 

[jira] [Comment Edited] (HBASE-17275) Assign timeout cause region unassign forever

2016-12-15 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753130#comment-15753130
 ] 

Allan Yang edited comment on HBASE-17275 at 12/16/16 1:56 AM:
--

[~ted_yu] can you, or someone you find, look at HBASE-17264, HBASE-17265 and 
HBASE-17275? They are all related; can the fixes really take effect in our 
environment? 


was (Author: allan163):
[~tedyu] can you, or someone you find, look at HBASE-17264, HBASE-17265 and 
HBASE-17275? They are all related; can the fixes really take effect in our 
environment? 

> Assign timeout cause region unassign forever
> 
>
> Key: HBASE-17275
> URL: https://issues.apache.org/jira/browse/HBASE-17275
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.2.3, 1.1.7
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17275-branch-1.patch
>
>
> This is a real case that happened in my test cluster.
> I had more than 8000 regions to assign when I restarted the cluster, but I 
> only started one regionserver. That means the master needed to assign these 
> 8000 regions to a single server (I know it is not right, but it was just for 
> testing).
> The RS received the open-region RPC and began to open regions. But due to 
> the huge number of regions, the master timed out the RPC call (though some 
> regions had actually already opened) after 1 minute, as you can see from log 1.
> {noformat}
> 1. 2016-11-22 10:17:32,285 INFO  [example.org:30001.activeMasterManager] 
> master.AssignmentManager: Unable to communicate with 
> example.org,30003,1479780976834 in order to assign regions,
> java.io.IOException: Call to /example.org:30003 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1, waitTime=60001, 
> operationTimeout=6 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1338)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:290)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:30177)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:1000)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1719)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2828)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2775)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2876)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:646)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:493)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:796)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:188)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1711)
> at java.lang.Thread.run(Thread.java:756)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1, 
> waitTime=60001, operationTimeout=6 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:81)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
> ... 14 more  
> {noformat}
> For the region 7e9aee32eb98a6fc9d503b99fc5f9615 (like many others), after the 
> timeout, the master used a pool to re-assign them, as in 2:
> {noformat}
> 2. 2016-11-22 10:17:32,303 DEBUG [AM.-pool1-t26] master.AssignmentManager: 
> Force region state offline {7e9aee32eb98a6fc9d503b99fc5f9615 
> state=PENDING_OPEN, ts=1479780992078, server=example.org,30003,1479780976834} 
>  
> {noformat}
> But this region was actually opened on the RS, and (maybe) due to the huge 
> pressure, the OPENED zk event was received by the master late, as you can 
> tell from 3, "which is more than 15 seconds late":
> {noformat}
> 3. 2016-11-22 10:17:32,304 DEBUG [AM.ZK.Worker-pool2-t3] 
> master.AssignmentManager: Handling RS_ZK_REGION_OPENED, 
> server=example.org,30003,1479780976834, 
> region=7e9aee32eb98a6fc9d503b99fc5f9615, which is more than 15 seconds late, 
> current_state={7e9aee32eb98a6fc9d503b99fc5f9615 state=PENDING_OPEN, 
> ts=1479780992078, server=example.org,30003,1479780976834}
> {noformat}
> In the meantime, master still try to 

[jira] [Commented] (HBASE-17275) Assign timeout cause region unassign forever

2016-12-15 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753130#comment-15753130
 ] 

Allan Yang commented on HBASE-17275:


[~tedyu] can you, or someone you find, look at HBASE-17264, HBASE-17265 and 
HBASE-17275? They are all related; can the fixes really take effect in our 
environment? 

> Assign timeout cause region unassign forever
> 
>
> Key: HBASE-17275
> URL: https://issues.apache.org/jira/browse/HBASE-17275
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.2.3, 1.1.7
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17275-branch-1.patch
>
>
> This is a real case that happened in my test cluster.
> I had more than 8000 regions to assign when I restarted the cluster, but I 
> only started one regionserver. That means the master needed to assign these 
> 8000 regions to a single server (I know it is not right, but it was just for 
> testing).
> The RS received the open-region RPC and began to open regions. But due to 
> the huge number of regions, the master timed out the RPC call (though some 
> regions had actually already opened) after 1 minute, as you can see from log 1.
> {noformat}
> 1. 2016-11-22 10:17:32,285 INFO  [example.org:30001.activeMasterManager] 
> master.AssignmentManager: Unable to communicate with 
> example.org,30003,1479780976834 in order to assign regions,
> java.io.IOException: Call to /example.org:30003 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1, waitTime=60001, 
> operationTimeout=6 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1338)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:290)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:30177)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:1000)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1719)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2828)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2775)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2876)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:646)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:493)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:796)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:188)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1711)
> at java.lang.Thread.run(Thread.java:756)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1, 
> waitTime=60001, operationTimeout=6 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:81)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
> ... 14 more  
> {noformat}
> For the region 7e9aee32eb98a6fc9d503b99fc5f9615 (like many others), after the 
> timeout, the master used a pool to re-assign them, as in 2:
> {noformat}
> 2. 2016-11-22 10:17:32,303 DEBUG [AM.-pool1-t26] master.AssignmentManager: 
> Force region state offline {7e9aee32eb98a6fc9d503b99fc5f9615 
> state=PENDING_OPEN, ts=1479780992078, server=example.org,30003,1479780976834} 
>  
> {noformat}
> But this region was actually opened on the RS, and (maybe) due to the huge 
> pressure, the OPENED zk event was received by the master late, as you can 
> tell from 3, "which is more than 15 seconds late":
> {noformat}
> 3. 2016-11-22 10:17:32,304 DEBUG [AM.ZK.Worker-pool2-t3] 
> master.AssignmentManager: Handling RS_ZK_REGION_OPENED, 
> server=example.org,30003,1479780976834, 
> region=7e9aee32eb98a6fc9d503b99fc5f9615, which is more than 15 seconds late, 
> current_state={7e9aee32eb98a6fc9d503b99fc5f9615 state=PENDING_OPEN, 
> ts=1479780992078, server=example.org,30003,1479780976834}
> {noformat}
> In the meantime, the master still tried to re-assign this region in another 
> thread. The master first closed this region to avoid a multi-assign, then 
> changed the state of this region from PENDING_OPEN > OFFLINE > PENDING_OPEN. 
> Its RIT node in zk was also transitioned to OFFLINE, 

[jira] [Commented] (HBASE-17319) Truncate table with preserve after split may cause truncation to fail

2016-12-15 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753122#comment-15753122
 ] 

Allan Yang commented on HBASE-17319:


Just confirmed that the master branch may have the same problem; this patch can 
apply to the master branch too.

> Truncate table with preserve after split may cause truncation to fail
> -
>
> Key: HBASE-17319
> URL: https://issues.apache.org/jira/browse/HBASE-17319
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Affects Versions: 1.1.7, 1.2.4
>Reporter: Allan Yang
>Assignee: Allan Yang
> Fix For: 1.4.0
>
> Attachments: HBASE-17319-branch-1.patch
>
>
> In truncateTableProcedue , when getting tables regions  from meta to recreate 
> new regions, split parents are not excluded, so the new regions can end up 
> with the same start key, and the same region dir:
> {noformat}
> 2016-12-14 20:15:22,231 WARN  [RegionOpenAndInitThread-writetest-1] 
> regionserver.HRegionFileSystem: Trying to create a region that already exists 
> on disk: 
> hdfs://hbasedev1/zhengyan-hbase11-func2/.tmp/data/default/writetest/9b2c8d1539cd92661703ceb8a4d518a1
> {noformat} 
> The truncateTableProcedue will retry forever and never get success.
> A attached unit test shows everything.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17318) Increment does not add new column if the increment amount is zero at first time writing

2016-12-15 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753097#comment-15753097
 ] 

Guangxu Cheng commented on HBASE-17318:
---

[~tedyu] [~anoop.hbase] Thanks for reviewing.

> Increment does not add new column if the increment amount is zero at first 
> time writing
> ---
>
> Key: HBASE-17318
> URL: https://issues.apache.org/jira/browse/HBASE-17318
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 0.98.23, 1.2.4
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17318-branch-1.2-v1.patch, 
> HBASE-17318-branch-1.2-v2.patch, HBASE-17318-master-v1.patch
>
>
> When the value written for the first time is 0, no new column is added.
> The code iterates over the input columns and updates existing values if they 
> are found; otherwise it adds a new column initialized to the increment amount.
> It does not add a new column if the increment amount is zero on the first 
> write.
> It is necessary to add the new column even when the first write is 0; 
> otherwise, the result of reading it through Phoenix is NULL.
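
Not from the attached patches; a minimal client-side sketch of the behavior 
under discussion, with table, family and qualifier names as placeholders: the 
very first write to a counter column uses an increment amount of 0, and with 
the fix the cell should still be created.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ZeroIncrementCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    byte[] row = Bytes.toBytes("r1");
    byte[] cf = Bytes.toBytes("cf");      // placeholder family
    byte[] q = Bytes.toBytes("counter");  // placeholder qualifier

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("t"))) {
      // First-ever write to this column, with an increment amount of zero.
      Increment inc = new Increment(row);
      inc.addColumn(cf, q, 0L);
      table.increment(inc);

      // With the fix, the cell should now exist with value 0 instead of
      // being silently skipped.
      Result result = table.get(new Get(row));
      System.out.println("column present: " + result.containsColumn(cf, q));
    }
  }
}
{code}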



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753072#comment-15753072
 ] 

Hadoop QA commented on HBASE-17149:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 42s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 33s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 57s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843490/HBASE-17149.master.002.patch
 |
| JIRA Issue | HBASE-17149 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux bd68bf5dce21 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 35f0718 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4936/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4936/testReport/ |
| modules | C: hbase-procedure hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4936/console |
| Powered by | Apache Yetus 0.3.0  

[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753062#comment-15753062
 ] 

Hadoop QA commented on HBASE-16993:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 32s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 26s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 6s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 133m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.security.access.TestAccessController |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.procedure.TestCreateTableProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures 
|
|   | org.apache.hadoop.hbase.master.procedure.TestSplitTableRegionProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843488/HBASE-16993.master.005.patch
 |
| JIRA Issue | HBASE-16993 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 14e9ff87f1e9 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 35f0718 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753042#comment-15753042
 ] 

Hadoop QA commented on HBASE-17149:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 57s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 45s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 26s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 42s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestSizeFailures |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843486/HBASE-17149.master.001.patch
 |
| JIRA Issue | HBASE-17149 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 0be6123c545d 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 35f0718 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4935/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/4935/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-17090) Procedure v2 - fast wake if nothing else is running

2016-12-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752995#comment-15752995
 ] 

stack commented on HBASE-17090:
---

Let me look at this some more

> Procedure v2 - fast wake if nothing else is running
> ---
>
> Key: HBASE-17090
> URL: https://issues.apache.org/jira/browse/HBASE-17090
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17090-v0.patch
>
>
> We wait Nmsec to see if we can batch more procedures, but the pattern that we 
> have allows us to wait only for what we know is running and avoid waiting for 
> something that will never get there. 
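
To make that concrete, here is a minimal, hypothetical sketch of the idea (illustrative names only, not the actual procedure-store code): the batching thread sleeps for the wait window only while submitters are known to be in flight, and returns immediately once nothing else can arrive.

{code}
// Hypothetical illustration only; class and field names are not from the HBase source.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class FastWakeBatcher<T> {
  private final List<T> slots = new ArrayList<>();
  private final AtomicInteger inFlightSubmitters = new AtomicInteger();
  private final long batchWaitMs;

  public FastWakeBatcher(long batchWaitMs) {
    this.batchWaitMs = batchWaitMs;
  }

  public void submit(T item) {
    inFlightSubmitters.incrementAndGet();
    try {
      synchronized (slots) {
        slots.add(item);
        slots.notifyAll();  // wake the batching thread
      }
    } finally {
      inFlightSubmitters.decrementAndGet();
    }
  }

  /** Waits at most batchWaitMs for more work, and only while submitters are in flight. */
  public List<T> takeBatch() throws InterruptedException {
    synchronized (slots) {
      while (slots.isEmpty()) {
        slots.wait();  // nothing to batch yet
      }
      long deadline = System.currentTimeMillis() + batchWaitMs;
      long remaining;
      // Fast wake: if nobody else is mid-submit, there is nothing worth waiting for.
      while (inFlightSubmitters.get() > 0
          && (remaining = deadline - System.currentTimeMillis()) > 0) {
        slots.wait(remaining);
      }
      List<T> batch = new ArrayList<>(slots);
      slots.clear();
      return batch;
    }
  }
}
{code}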



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15432) TableInputFormat - support multiple column families scan

2016-12-15 Thread Xuesen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuesen Liang updated HBASE-15432:
-
Attachment: HBASE-15432.master.002.patch

> TableInputFormat - support multiple column families scan
> 
>
> Key: HBASE-15432
> URL: https://issues.apache.org/jira/browse/HBASE-15432
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0, 0.98.16, 0.98.16.1
>Reporter: nirav patel
>Assignee: Xuesen Liang
> Fix For: 2.0.0
>
> Attachments: HBASE-15432.master.002.patch, HBASE-15432.master.patch
>
>
> Currently the HBase TableInputFormat class has SCAN_COLUMN_FAMILY and 
> SCAN_COLUMNS. SCAN_COLUMN_FAMILY can only scan a single column family. If we 
> need to scan multiple column families from a table then we must use 
> SCAN_COLUMNS, where we must provide both columns and column families, which is 
> not convenient. Can we have a SCAN_COLUMN_FAMILIES which supports scanning 
> multiple column families?
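
For illustration, a hedged sketch of the current workaround versus the proposal. INPUT_TABLE, SCAN_COLUMN_FAMILY and SCAN_COLUMNS are existing TableInputFormat constants; the commented-out property name at the end is hypothetical and only stands in for the proposed SCAN_COLUMN_FAMILIES.

{code}
// Sketch only. INPUT_TABLE, SCAN_COLUMN_FAMILY and SCAN_COLUMNS exist today;
// the commented-out property at the end stands in for the proposed addition.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;

public class MultiFamilyScanConfig {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    conf.set(TableInputFormat.INPUT_TABLE, "usertable");

    // Today SCAN_COLUMN_FAMILY takes a single family only:
    // conf.set(TableInputFormat.SCAN_COLUMN_FAMILY, "cf1");
    // so scanning several families means enumerating every column explicitly:
    conf.set(TableInputFormat.SCAN_COLUMNS, "cf1:a cf1:b cf2:a");

    // Proposed (hypothetical property name): just list the families.
    // conf.set("hbase.mapreduce.scan.column.families", "cf1 cf2");
    return conf;
  }
}
{code}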



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17090) Procedure v2 - fast wake if nothing else is running

2016-12-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17090:
--
Status: Patch Available  (was: Open)

Patch LGTM. What do you think, [~syuanjiang]? Let me submit to see how it does up 
on hadoopqa.

> Procedure v2 - fast wake if nothing else is running
> ---
>
> Key: HBASE-17090
> URL: https://issues.apache.org/jira/browse/HBASE-17090
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17090-v0.patch
>
>
> We wait Nmsec to see if we can batch more procedures, but the pattern that we 
> have allows us to wait only for what we know is running and avoid waiting for 
> something that will never get there. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17148) Procedure v2 - add bulk proc submit

2016-12-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17148:
--
Status: Patch Available  (was: Open)

Let's see what's broke (should be nothing -- though this patch adds a patch).

> Procedure v2 - add bulk proc submit
> ---
>
> Key: HBASE-17148
> URL: https://issues.apache.org/jira/browse/HBASE-17148
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17148-v0.patch, HBASE-17148.master.001.patch
>
>
> Add the ability to submit multiple procedures as a single operation. Useful 
> for the AM to reduce some lock/unlock/wait times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17148) Procedure v2 - add bulk proc submit

2016-12-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752941#comment-15752941
 ] 

stack commented on HBASE-17148:
---

Rebase.

> Procedure v2 - add bulk proc submit
> ---
>
> Key: HBASE-17148
> URL: https://issues.apache.org/jira/browse/HBASE-17148
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17148-v0.patch, HBASE-17148.master.001.patch
>
>
> Add the ability to submit multiple procedures as a single operation. Useful 
> for the AM to reduce some lock/unlock/wait times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17148) Procedure v2 - add bulk proc submit

2016-12-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17148:
--
Attachment: HBASE-17148.master.001.patch

> Procedure v2 - add bulk proc submit
> ---
>
> Key: HBASE-17148
> URL: https://issues.apache.org/jira/browse/HBASE-17148
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17148-v0.patch, HBASE-17148.master.001.patch
>
>
> Add the ability to submit multiple procedures as a single operation. Useful 
> for the AM to reduce some lock/unlock/wait times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17292) Add observer notification before bulk loaded hfile is moved to region directory

2016-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752912#comment-15752912
 ] 

Hudson commented on HBASE-17292:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2138 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2138/])
HBASE-17292 Add observer notification before bulk loaded hfile is moved (tedyu: 
rev 35f0718a418f1aaf4d86da5bf88d3b2db3d6be67)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java


> Add observer notification before bulk loaded hfile is moved to region 
> directory
> ---
>
> Key: HBASE-17292
> URL: https://issues.apache.org/jira/browse/HBASE-17292
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 17292.v1.txt, 17292.v2.txt, 17292.v3.txt
>
>
> Currently the postBulkLoadHFile() hook notifies the locations of bulk loaded 
> hfiles.
> However, if the bulk load fails after an hfile is moved to the region directory but 
> before the postBulkLoadHFile() hook is called, there is no way for pluggable 
> components (replication - see HBASE-17290, backup / restore) to know which 
> hfile(s) have been moved to the region directory.
> Even if postBulkLoadHFile() is called in a finally block, the write (to a backup 
> table or zookeeper) issued from postBulkLoadHFile() may fail, ending up in the 
> same situation.
> This issue adds a preCommitStoreFile() hook which notifies the path of the 
> to-be-committed hfile before the bulk loaded hfile is moved to the region 
> directory.
> With the preCommitStoreFile() hook, the write (to a backup table or zookeeper) can 
> be issued before the hfile is moved.
> If that write fails, the resulting IOException makes the bulk load fail, leaving no 
> hfile in the region directory.
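
A hedged sketch of how a pluggable component could use such a hook; the preCommitStoreFile() signature below is assumed from this description, not copied from the attached patch, so verify it against the committed RegionObserver interface.

{code}
// Sketch only; preCommitStoreFile() signature assumed, not taken from the patch.
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Pair;

public class BulkLoadAuditObserver extends BaseRegionObserver {
  @Override
  public void preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
      byte[] family, List<Pair<Path, Path>> pairs) throws IOException {
    for (Pair<Path, Path> p : pairs) {
      // p.getFirst(): staged hfile, p.getSecond(): destination under the region dir.
      // Record this pair (e.g. in a backup table or zookeeper) *before* the move;
      // throwing IOException here fails the bulk load and leaves no orphan hfile.
      recordPendingBulkLoad(family, p.getFirst(), p.getSecond());
    }
  }

  // Hypothetical helper; the persistence mechanism is up to the component.
  private void recordPendingBulkLoad(byte[] family, Path staged, Path dest)
      throws IOException {
    // e.g. write a row keyed by dest.toString() to a bookkeeping table.
  }
}
{code}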



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-17219) [C++] Reformat the code according to the style guidelines

2016-12-15 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-17219.
---
   Resolution: Fixed
Fix Version/s: HBASE-14850

> [C++] Reformat the code according to the style guidelines 
> --
>
> Key: HBASE-17219
> URL: https://issues.apache.org/jira/browse/HBASE-17219
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: HBASE-14850
>
> Attachments: hbase-17219_v1.patch
>
>
> See HBASE-17218. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17219) [C++] Reformat the code according to the style guidelines

2016-12-15 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-17219:
--
Attachment: hbase-17219_v1.patch

This is what I'll commit shortly. 

> [C++] Reformat the code according to the style guidelines 
> --
>
> Key: HBASE-17219
> URL: https://issues.apache.org/jira/browse/HBASE-17219
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: hbase-17219_v1.patch
>
>
> See HBASE-17218. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17324) PE Write workload results are wrong

2016-12-15 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752822#comment-15752822
 ] 

Appy commented on HBASE-17324:
--

oops, forgot to give proper credit. Bad of me. :(


> PE Write workload results are wrong
> ---
>
> Key: HBASE-17324
> URL: https://issues.apache.org/jira/browse/HBASE-17324
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>
> [~jmspaggi]  found this issue of write latencies being unreasonable.
> The reason is, we are using BufferedMutator with size 2MB, so most writes 
> return very quick because they get buffered. That's giving us avg. latency 
> like 39us. Needs fixing.
> {noformat}
> 16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Latency (us) : 
> mean=39,37, min=2,00, max=27193408,00, stdDev=3443,01, 50th=2,00, 75th=2,00, 
> 95th=3,00, 99th=5,00, 99.9th=9157,00, 99.99th=39685,59, 99.999th=160751,28
> 16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Num measures (latency) : 
> 1
> 16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Mean  = 39,37
> Min   = 2,00
> Max   = 27193408,00
> StdDev= 3443,01
> 50th  = 2,00
> 75th  = 2,00
> 95th  = 3,00
> 99th  = 5,00
> 99.9th= 9157,00
> 99.99th   = 39685,59
> 99.999th  = 160751,28
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17324) PE Write workload results are wrong

2016-12-15 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17324:
-
Description: 
[~jmspaggi]  found this issue of write latencies being unreasonable.

The reason is, we are using BufferedMutator with size 2MB, so most writes 
return very quick because they get buffered. That's giving us avg. latency like 
39us. Needs fixing.

{noformat}
16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Latency (us) : mean=39,37, 
min=2,00, max=27193408,00, stdDev=3443,01, 50th=2,00, 75th=2,00, 95th=3,00, 
99th=5,00, 99.9th=9157,00, 99.99th=39685,59, 99.999th=160751,28
16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Num measures (latency) : 
1
16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Mean  = 39,37
Min   = 2,00
Max   = 27193408,00
StdDev= 3443,01
50th  = 2,00
75th  = 2,00
95th  = 3,00
99th  = 5,00
99.9th= 9157,00
99.99th   = 39685,59
99.999th  = 160751,28
{noformat}


  was:
During writing, we are using BufferedMutator with size 2MB, so most writes 
return very quick because they get buffered. That's giving us avg. latency like 
39us. Needs fixing.

{noformat}
16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Latency (us) : mean=39,37, 
min=2,00, max=27193408,00, stdDev=3443,01, 50th=2,00, 75th=2,00, 95th=3,00, 
99th=5,00, 99.9th=9157,00, 99.99th=39685,59, 99.999th=160751,28
16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Num measures (latency) : 
1
16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Mean  = 39,37
Min   = 2,00
Max   = 27193408,00
StdDev= 3443,01
50th  = 2,00
75th  = 2,00
95th  = 3,00
99th  = 5,00
99.9th= 9157,00
99.99th   = 39685,59
99.999th  = 160751,28
{noformat}


> PE Write workload results are wrong
> ---
>
> Key: HBASE-17324
> URL: https://issues.apache.org/jira/browse/HBASE-17324
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>
> [~jmspaggi]  found this issue of write latencies being unreasonable.
> The reason is, we are using BufferedMutator with size 2MB, so most writes 
> return very quick because they get buffered. That's giving us avg. latency 
> like 39us. Needs fixing.
> {noformat}
> 16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Latency (us) : 
> mean=39,37, min=2,00, max=27193408,00, stdDev=3443,01, 50th=2,00, 75th=2,00, 
> 95th=3,00, 99th=5,00, 99.9th=9157,00, 99.99th=39685,59, 99.999th=160751,28
> 16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Num measures (latency) : 
> 1
> 16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Mean  = 39,37
> Min   = 2,00
> Max   = 27193408,00
> StdDev= 3443,01
> 50th  = 2,00
> 75th  = 2,00
> 95th  = 3,00
> 99th  = 5,00
> 99.9th= 9157,00
> 99.99th   = 39685,59
> 99.999th  = 160751,28
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17323) TestAsyncGetMultiThread fails in master

2016-12-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752799#comment-15752799
 ] 

Ted Yu commented on HBASE-17323:


There were a lot of errors in the form:
{code}
Caused by: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NotServingRegionException):
 org.apache.hadoop.hbase.NotServingRegionException:   Region 
async,555,1481831095337.f8cfe8be90c3055f9abcb0c257ffdd6d. is not online on 
cn012.a.com,42921,1481830976137
  at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3155)
  at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1229)
  at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2263)
{code}
{code}
grep ' is not online on' 
org.apache.hadoop.hbase.client.TestAsyncGetMultiThread-output.txt | wc
2131   20164  475147
{code}

> TestAsyncGetMultiThread fails in master
> ---
>
> Key: HBASE-17323
> URL: https://issues.apache.org/jira/browse/HBASE-17323
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: testAsyncGetMultiThread-output.gz
>
>
> From 
> https://builds.apache.org/job/HBase-Trunk_matrix/2137/jdk=JDK%201.8%20(latest),label=Hadoop/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncGetMultiThread/test/
>  :
> {code}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:1003)
>   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:980)
>   at 
> org.apache.hadoop.hbase.client.TestAsyncGetMultiThread.run(TestAsyncGetMultiThread.java:108)
>   at 
> org.apache.hadoop.hbase.client.TestAsyncGetMultiThread.lambda$null$1(TestAsyncGetMultiThread.java:122)
> {code}
> This can be reproduced locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752769#comment-15752769
 ] 

Hadoop QA commented on HBASE-14123:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 24 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 53s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
33s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-assembly . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 47s 
{color} | {color:red} hbase-client in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 25s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
4s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 1s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 4m 
30s {color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . 
hbase-assembly {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 44s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 

[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752761#comment-15752761
 ] 

stack commented on HBASE-17149:
---

-002 is the rebase that -001 didn't do.

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: HBASE-17149.master.001.patch, 
> HBASE-17149.master.002.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> In this case we can avoid calling the coprocessor twice and have clean 
> submit logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17149:
--
Attachment: HBASE-17149.master.002.patch

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: HBASE-17149.master.001.patch, 
> HBASE-17149.master.002.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> In this case we can avoid calling the coprocessor twice and have clean 
> submit logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17323) TestAsyncGetMultiThread fails in master

2016-12-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752743#comment-15752743
 ] 

Ted Yu commented on HBASE-17323:


{code}
grep 'Call queue is full' 
org.apache.hadoop.hbase.client.TestAsyncGetMultiThread-output.txt | wc
3796   49348  614952
{code}

> TestAsyncGetMultiThread fails in master
> ---
>
> Key: HBASE-17323
> URL: https://issues.apache.org/jira/browse/HBASE-17323
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: testAsyncGetMultiThread-output.gz
>
>
> From 
> https://builds.apache.org/job/HBase-Trunk_matrix/2137/jdk=JDK%201.8%20(latest),label=Hadoop/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncGetMultiThread/test/
>  :
> {code}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:1003)
>   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:980)
>   at 
> org.apache.hadoop.hbase.client.TestAsyncGetMultiThread.run(TestAsyncGetMultiThread.java:108)
>   at 
> org.apache.hadoop.hbase.client.TestAsyncGetMultiThread.lambda$null$1(TestAsyncGetMultiThread.java:122)
> {code}
> This can be reproduced locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF

2016-12-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16993:
--
Attachment: HBASE-16993.master.005.patch

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> DATA_BLOCK_ENCODING set to DIFF
> -
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 2.0.0
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> hbase.bucketcache.bucket.sizes = 16384,32768,40960,46000,49152,51200,65536,131072,524288
> hbase.bucketcache.size = 16384
> hbase.bucketcache.ioengine = offheap
> hfile.block.cache.size = 0.3
> hfile.block.bloom.cacheonwrite = true
> hbase.rs.cacheblocksonwrite = true
> hfile.block.index.cacheonwrite = true
>  n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6558)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6537)
> at 
> 

[jira] [Reopened] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reopened HBASE-17081:
---

Reopen till we figure out whether the failure is unrelated or fixed. Thanks.

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-15787_8.patch, HBASE-17081-V01.patch, 
> HBASE-17081-V02.patch, HBASE-17081-V03.patch, HBASE-17081-V04.patch, 
> HBASE-17081-V05.patch, HBASE-17081-V06.patch, HBASE-17081-V06.patch, 
> HBASE-17081-V07.patch, HBaseMeetupDecember2016-V02.pptx, 
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16700) Allow for coprocessor whitelisting

2016-12-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752690#comment-15752690
 ] 

stack commented on HBASE-16700:
---

[~clayb] Thanks for the patch. Suggest adding a release note since this is a nice new 
feature. Just write something that would work for an operator audience. I think 
you know this perspective (smile).

I see the added test can fail: 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=JDK%201.8%20(latest),label=Hadoop/2135/testReport/junit/org.apache.hadoop.hbase.security.access/TestCoprocessorWhitelistMasterObserver/org_apache_hadoop_hbase_security_access_TestCoprocessorWhitelistMasterObserver/

It also failed on a hadoopqa build. What do you think? Should it be a large test 
so it has more time to run, or did something go wrong in this run?

Thanks.

> Allow for coprocessor whitelisting
> --
>
> Key: HBASE-16700
> URL: https://issues.apache.org/jira/browse/HBASE-16700
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Minor
>  Labels: security
> Fix For: 2.0.0
>
> Attachments: HBASE-16700.000.patch, HBASE-16700.001.patch, 
> HBASE-16700.002.patch, HBASE-16700.003.patch, HBASE-16700.004.patch, 
> HBASE-16700.005.patch, HBASE-16700.006.patch, HBASE-16700.007.patch, 
> HBASE-16700.008.patch
>
>
> Today one can turn off all non-system coprocessors with 
> {{hbase.coprocessor.user.enabled}}; however, this disables very useful things 
> like Apache Phoenix's coprocessors. Some tenants of a multi-user HBase may 
> also need to run bespoke coprocessors. But as an operator I would not want 
> wanton coprocessor usage. Ideally, one could do one of two things:
> * Allow coprocessors defined in {{hbase-site.xml}} -- this can only be 
> administratively changed in most cases
> * Allow coprocessors from table descriptors but only if the coprocessor is 
> whitelisted
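
A hedged sketch of how an operator might wire this up: the observer class name matches the test referenced in the comments above, but the whitelist-paths property name is an assumption and should be taken from the release note rather than from this sketch.

{code}
// Sketch only; the "...whitelist.paths" key below is assumed, not confirmed.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WhitelistConfigSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Load the whitelisting observer on the master.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.security.access.CoprocessorWhitelistMasterObserver");
    // Only allow table-descriptor coprocessors whose jars live under these paths.
    conf.set("hbase.coprocessor.region.whitelist.paths", "hdfs:///hbase/coprocessors/*.jar");
    return conf;
  }
}
{code}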



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752684#comment-15752684
 ] 

stack commented on HBASE-17149:
---

Turns out that yeah, the UT failure is unrelated but this is a new test and 
seems to have failed soon after addition: 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-Trunk_matrix/2135/  
I'll ask over on the issue that added the test.

New patch fixes findbugs and javadoc.

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: HBASE-17149.master.001.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> In this case we can avoid calling the coprocessor twice and have clean 
> submit logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17324) PE Write workload results are wrong

2016-12-15 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752674#comment-15752674
 ] 

Jean-Marc Spaggiari commented on HBASE-17324:
-

This sounds familiar to me ;) Thanks for looking at it Appy!

> PE Write workload results are wrong
> ---
>
> Key: HBASE-17324
> URL: https://issues.apache.org/jira/browse/HBASE-17324
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>
> During writing, we are using BufferedMutator with size 2MB, so most writes 
> return very quick because they get buffered. That's giving us avg. latency 
> like 39us. Needs fixing.
> {noformat}
> 16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Latency (us) : 
> mean=39,37, min=2,00, max=27193408,00, stdDev=3443,01, 50th=2,00, 75th=2,00, 
> 95th=3,00, 99th=5,00, 99.9th=9157,00, 99.99th=39685,59, 99.999th=160751,28
> 16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Num measures (latency) : 
> 1
> 16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Mean  = 39,37
> Min   = 2,00
> Max   = 27193408,00
> StdDev= 3443,01
> 50th  = 2,00
> 75th  = 2,00
> 95th  = 3,00
> 99th  = 5,00
> 99.9th= 9157,00
> 99.99th   = 39685,59
> 99.999th  = 160751,28
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17318) Increment does not add new column if the increment amount is zero at first time writing

2016-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752669#comment-15752669
 ] 

Hudson commented on HBASE-17318:


FAILURE: Integrated in Jenkins build HBase-1.4 #567 (See 
[https://builds.apache.org/job/HBase-1.4/567/])
HBASE-17318 Increment does not add new column if the increment amount is 
(tedyu: rev ffe70158ccf99947a86ddbc0bf18e387581ebd63)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestIncrementsFromClientSide.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestDurability.java


> Increment does not add new column if the increment amount is zero at first 
> time writing
> ---
>
> Key: HBASE-17318
> URL: https://issues.apache.org/jira/browse/HBASE-17318
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 0.98.23, 1.2.4
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17318-branch-1.2-v1.patch, 
> HBASE-17318-branch-1.2-v2.patch, HBASE-17318-master-v1.patch
>
>
> When the value written for the first time is 0, no new column is added.
> The code iterates the input columns and updates existing values if they are found, 
> otherwise it adds a new column initialized to the increment amount.
> However, it does not add a new column if the increment amount is zero on the first 
> write.
> It is necessary to add a new column on the first write of 0; 
> otherwise the result seen from Phoenix is NULL.
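
A minimal sketch of the reported symptom (table, family and qualifier names are made up):

{code}
// Sketch of the symptom; names are illustrative.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class ZeroIncrementRepro {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("t"))) {
      // First write is an increment by 0.
      table.incrementColumnValue(Bytes.toBytes("row1"), Bytes.toBytes("f"),
          Bytes.toBytes("q"), 0L);
      Result r = table.get(new Get(Bytes.toBytes("row1")));
      // Expected: a counter cell holding 0; observed before the fix: no cell at all,
      // so clients such as Phoenix read NULL.
      System.out.println("cell present: " + !r.isEmpty());
    }
  }
}
{code}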



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17149:
--
Attachment: HBASE-17149.master.001.patch

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: HBASE-17149.master.001.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> In this case we can avoid calling the coprocessor twice and have clean 
> submit logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17319) Truncate table with preserve after split may cause truncation to fail

2016-12-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752603#comment-15752603
 ] 

Ted Yu commented on HBASE-17319:


This only affects branch-1 and earlier releases, right ?

> Truncate table with preserve after split may cause truncation to fail
> -
>
> Key: HBASE-17319
> URL: https://issues.apache.org/jira/browse/HBASE-17319
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Affects Versions: 1.1.7, 1.2.4
>Reporter: Allan Yang
>Assignee: Allan Yang
> Fix For: 1.4.0
>
> Attachments: HBASE-17319-branch-1.patch
>
>
> In TruncateTableProcedure, when getting the table's regions from meta to recreate 
> new regions, split parents are not excluded, so the new regions can end up 
> with the same start key, and the same region dir:
> {noformat}
> 2016-12-14 20:15:22,231 WARN  [RegionOpenAndInitThread-writetest-1] 
> regionserver.HRegionFileSystem: Trying to create a region that already exists 
> on disk: 
> hdfs://hbasedev1/zhengyan-hbase11-func2/.tmp/data/default/writetest/9b2c8d1539cd92661703ceb8a4d518a1
> {noformat} 
> The TruncateTableProcedure will retry forever and never succeed.
> The attached unit test shows everything.
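
A hedged sketch of the kind of filtering the fix needs when re-reading the regions from meta; this is illustrative only, not the attached patch:

{code}
// Sketch only; assumes a List<HRegionInfo> read from hbase:meta is available.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.HRegionInfo;

public class SplitParentFilter {
  /** Keep only regions that should be recreated: drop split parents / offline parents. */
  public static List<HRegionInfo> liveRegions(List<HRegionInfo> fromMeta) {
    List<HRegionInfo> result = new ArrayList<>();
    for (HRegionInfo hri : fromMeta) {
      if (hri.isSplit() || hri.isOffline()) {
        continue; // split parent; its daughters already cover the key range
      }
      result.add(hri);
    }
    return result;
  }
}
{code}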



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17319) Truncate table with preserve after split may cause truncation to fail

2016-12-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17319:
---
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0

+1

> Truncate table with preserve after split may cause truncation to fail
> -
>
> Key: HBASE-17319
> URL: https://issues.apache.org/jira/browse/HBASE-17319
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Affects Versions: 1.1.7, 1.2.4
>Reporter: Allan Yang
>Assignee: Allan Yang
> Fix For: 1.4.0
>
> Attachments: HBASE-17319-branch-1.patch
>
>
> In TruncateTableProcedure, when getting the table's regions from meta to recreate 
> new regions, split parents are not excluded, so the new regions can end up 
> with the same start key, and the same region dir:
> {noformat}
> 2016-12-14 20:15:22,231 WARN  [RegionOpenAndInitThread-writetest-1] 
> regionserver.HRegionFileSystem: Trying to create a region that already exists 
> on disk: 
> hdfs://hbasedev1/zhengyan-hbase11-func2/.tmp/data/default/writetest/9b2c8d1539cd92661703ceb8a4d518a1
> {noformat} 
> The TruncateTableProcedure will retry forever and never succeed.
> The attached unit test shows everything.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17319) Truncate table with preserve after split may cause truncation to fail

2016-12-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17319:
---
Summary: Truncate table with preserve after split may cause truncation to 
fail  (was: Truncate table with preserve after split may cause truncate fail)

> Truncate table with preserve after split may cause truncation to fail
> -
>
> Key: HBASE-17319
> URL: https://issues.apache.org/jira/browse/HBASE-17319
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Affects Versions: 1.1.7, 1.2.4
>Reporter: Allan Yang
>Assignee: Allan Yang
> Fix For: 1.4.0
>
> Attachments: HBASE-17319-branch-1.patch
>
>
> In TruncateTableProcedure, when getting the table's regions from meta to recreate 
> new regions, split parents are not excluded, so the new regions can end up 
> with the same start key, and the same region dir:
> {noformat}
> 2016-12-14 20:15:22,231 WARN  [RegionOpenAndInitThread-writetest-1] 
> regionserver.HRegionFileSystem: Trying to create a region that already exists 
> on disk: 
> hdfs://hbasedev1/zhengyan-hbase11-func2/.tmp/data/default/writetest/9b2c8d1539cd92661703ceb8a4d518a1
> {noformat} 
> The TruncateTableProcedure will retry forever and never succeed.
> The attached unit test shows everything.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17317) [branch-1] The updatePeerConfig method in ReplicationPeersZKImpl didn't update the table-cfs map

2016-12-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752592#comment-15752592
 ] 

Ted Yu commented on HBASE-17317:


+1

> [branch-1] The updatePeerConfig method in ReplicationPeersZKImpl didn't 
> update the table-cfs map
> 
>
> Key: HBASE-17317
> URL: https://issues.apache.org/jira/browse/HBASE-17317
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17317-branch-1.patch
>
>
> The updatePeerConfig method in ReplicationPeersZKImpl.java
> {code}
>   @Override
>   public void updatePeerConfig(String id, ReplicationPeerConfig newConfig)
> throws ReplicationException {
> ReplicationPeer peer = getPeer(id);
> if (peer == null){
>   throw new ReplicationException("Could not find peer Id " + id);
> }   
> ReplicationPeerConfig existingConfig = peer.getPeerConfig();
> if (newConfig.getClusterKey() != null && 
> !newConfig.getClusterKey().isEmpty() &&
> !newConfig.getClusterKey().equals(existingConfig.getClusterKey())){
>   throw new ReplicationException("Changing the cluster key on an existing 
> peer is not allowed."
>   + " Existing key '" + existingConfig.getClusterKey() + "' does not 
> match new key '"
>   + newConfig.getClusterKey() +
>   "'");
> }   
> String existingEndpointImpl = existingConfig.getReplicationEndpointImpl();
> if (newConfig.getReplicationEndpointImpl() != null &&
> !newConfig.getReplicationEndpointImpl().isEmpty() &&
> !newConfig.getReplicationEndpointImpl().equals(existingEndpointImpl)){
>   throw new ReplicationException("Changing the replication endpoint 
> implementation class " +
>   "on an existing peer is not allowed. Existing class '"
>   + existingConfig.getReplicationEndpointImpl()
>   + "' does not match new class '" + 
> newConfig.getReplicationEndpointImpl() + "'");
> }   
> //Update existingConfig's peer config and peer data with the new values, 
> but don't touch config
> // or data that weren't explicitly changed
> existingConfig.getConfiguration().putAll(newConfig.getConfiguration());
> existingConfig.getPeerData().putAll(newConfig.getPeerData());
>// Bug. We should update table-cfs map, too.
> try {
>   ZKUtil.setData(this.zookeeper, getPeerNode(id),
>   ReplicationSerDeHelper.toByteArray(existingConfig));
> }   
> catch(KeeperException ke){
>   throw new ReplicationException("There was a problem trying to save 
> changes to the " +
>   "replication peer " + id, ke);
> }   
>   }
> {code}
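
A hedged sketch of the missing step, to sit right after the getConfiguration()/getPeerData() copies above; the table-CFs accessor names are assumed and should be checked against branch-1's ReplicationPeerConfig:

{code}
// Sketch only; accessor names assumed, not copied from the committed fix.
if (newConfig.getTableCFsMap() != null) {
  existingConfig.setTableCFsMap(newConfig.getTableCFsMap());
}
// The ZKUtil.setData(...) call that follows then serializes a config carrying
// the updated table-cfs map as well.
{code}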



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752423#comment-15752423
 ] 

stack commented on HBASE-17149:
---

Sorry [~syuanjiang], I should have noted I am on this issue. The test fails 
reliably for me so I was trying to fix it. Will take care of the other issues too. 
Will put up a patch later, sir.

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> In this case we can avoid calling the coprocessor twice and have clean 
> submit logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17305) Two active HBase Masters can run at the same time under certain circumstances

2016-12-15 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752410#comment-15752410
 ] 

Esteban Gutierrez commented on HBASE-17305:
---

Was a regular restart [~enis]. I'm sure this is very rare. What I think is the 
culprit here is this:

{code}
blockUntilBecomingActiveMaster() {
...
this.clusterHasActiveMaster.set(true);
...
byte[] bytes = ZKUtil.getDataAndWatch(this.watcher, 
this.watcher.znodePaths.masterAddressZNode) <--- [0]
...
currentMaster = ProtobufUtil.parseServerNameFrom(bytes);
...
if (ServerName.isSameHostnameAndPort(currentMaster, this.sn)) { 
msg = ("Current master has this master's address, " +
  currentMaster + "; master was restarted? Deleting node.");
// Hurry along the expiration of the znode.
ZKUtil.deleteNode(this.watcher, 
this.watcher.znodePaths.masterAddressZNode); <--- [1]

// We may have failed to delete the znode at the previous step, but
//  we delete the file anyway: a second attempt to delete the znode 
is likely to fail again.
ZNodeClearer.deleteMyEphemeralNodeOnDisk();
  } else {
...
{code}

I think the problem lies between [0] and [1]: the old master thinks there 
was a restart, and between [0] and [1] a backup master becomes active. As I 
mentioned, this happened in a very short time, somewhere around 85ms, but it 
could be less due to clock jitter. 

One solution might be to update the znode instead of deleting it when there 
is a restart of the active master.
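
A hedged sketch of that alternative, mirroring the snippet above. encodeMasterAddress() is a hypothetical helper standing in for however the master-address znode payload is normally built, and this is an illustration of the suggestion, not a reviewed fix:

{code}
// Sketch only: overwrite the znode with this master's address instead of deleting
// it, so a backup master racing between [0] and [1] never observes a missing znode.
if (ServerName.isSameHostnameAndPort(currentMaster, this.sn)) {
  LOG.info("Current master has this master's address, " + currentMaster
      + "; master was restarted? Re-writing node instead of deleting it.");
  ZKUtil.setData(this.watcher, this.watcher.znodePaths.masterAddressZNode,
      encodeMasterAddress(this.sn));  // hypothetical helper for the znode payload
}
{code}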


> Two active HBase Masters can run at the same time under certain circumstances 
> --
>
> Key: HBASE-17305
> URL: https://issues.apache.org/jira/browse/HBASE-17305
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Critical
>
> This needs a little more investigation, but we found a very rare edge case: when 
> the active master is restarted and a stand-by master tries to become active, 
> the original active master was able to become the active master again, 
> and just before the standby master passed the point of the transition to 
> become active we ended up with two active masters running at the same time. 
> Assuming the clocks on both masters were accurate to milliseconds, this race 
> happened in less than 85ms. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17324) PE Write workload results are wrong

2016-12-15 Thread Appy (JIRA)
Appy created HBASE-17324:


 Summary: PE Write workload results are wrong
 Key: HBASE-17324
 URL: https://issues.apache.org/jira/browse/HBASE-17324
 Project: HBase
  Issue Type: Bug
Reporter: Appy


During writing, we are using BufferedMutator with size 2MB, so most writes 
return very quick because they get buffered. That's giving us avg. latency like 
39us. Needs fixing.

{noformat}
16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Latency (us) : mean=39,37, 
min=2,00, max=27193408,00, stdDev=3443,01, 50th=2,00, 75th=2,00, 95th=3,00, 
99th=5,00, 99.9th=9157,00, 99.99th=39685,59, 99.999th=160751,28
16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Num measures (latency) : 
1
16/12/14 12:46:04 INFO hbase.PerformanceEvaluation: Mean  = 39,37
Min   = 2,00
Max   = 27193408,00
StdDev= 3443,01
50th  = 2,00
75th  = 2,00
95th  = 3,00
99th  = 5,00
99.9th= 9157,00
99.99th   = 39685,59
99.999th  = 160751,28
{noformat}
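
A small self-contained sketch of why the numbers come out this way (table and column names are illustrative; this is not a PerformanceEvaluation patch): mutate() only appends to the client-side 2MB buffer, so the timed section almost never includes an RPC.

{code}
// Sketch only; illustrates the measurement problem, not the fix.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedLatencyDemo {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         BufferedMutator mutator = conn.getBufferedMutator(TableName.valueOf("usertable"))) {
      Put put = new Put(Bytes.toBytes("row1"))
          .addColumn(Bytes.toBytes("family"), Bytes.toBytes("q"), Bytes.toBytes("v"));

      long start = System.nanoTime();
      mutator.mutate(put);                         // returns once the Put is buffered
      long bufferedNs = System.nanoTime() - start; // microseconds; what PE currently reports

      start = System.nanoTime();
      mutator.flush();                             // pays the actual RPC cost
      long flushNs = System.nanoTime() - start;    // closer to the real write latency

      System.out.println("buffered=" + bufferedNs + "ns flush=" + flushNs + "ns");
    }
  }
}
{code}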



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-15 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752365#comment-15752365
 ] 

Stephen Yuan Jiang edited comment on HBASE-17149 at 12/15/16 8:02 PM:
--

The UT failure is unrelated to this change.

The javadoc warning (basically, @Override appears in the comments) - we can 
either ignore it or make the comment simpler.

I do not quite understand the FindBugs warning from looking at the code (maybe it 
complains that, by the time line 745 is reached, the key is guaranteed to be missing 
from the 'completed' map):
{code}
  public void setFailureResultForNonce(final NonceKey nonceKey, final String procName,
      final User procOwner, final IOException exception) {
    if (nonceKey == null) return;

    final Long procId = nonceKeysToProcIdsMap.get(nonceKey);
    if (procId == null || completed.containsKey(procId)) return;

    final long currentTime = EnvironmentEdgeManager.currentTime();
    final ProcedureInfo result = new ProcedureInfo(procId.longValue(),
        procName, procOwner != null ? procOwner.getShortName() : null,
        ProcedureUtil.convertToProcedureState(ProcedureState.ROLLEDBACK),
        -1, nonceKey, exception, currentTime, currentTime, null);
    completed.putIfAbsent(procId.longValue(), result);  <== Line 745
  }
{code}

[~stack], any insight?  If you don't see an issue either, then we can commit.


was (Author: syuanjiang):
The UT failure is unrelated to this change.

The javadoc warning (basically, @Override appears in the comments) - we can 
either ignore it or make the comment simpler.

I do not quite understand the FindBugs warning from looking at the code (maybe it 
complains that, by the time line 745 is reached, the key is guaranteed to be missing 
from the 'completed' map):
{code}
  public void setFailureResultForNonce(final NonceKey nonceKey, final String procName,
      final User procOwner, final IOException exception) {
    if (nonceKey == null) return;

    final Long procId = nonceKeysToProcIdsMap.get(nonceKey);
    if (procId == null || completed.containsKey(procId)) return;

    final long currentTime = EnvironmentEdgeManager.currentTime();
    final ProcedureInfo result = new ProcedureInfo(procId.longValue(),
        procName, procOwner != null ? procOwner.getShortName() : null,
        ProcedureUtil.convertToProcedureState(ProcedureState.ROLLEDBACK),
        -1, nonceKey, exception, currentTime, currentTime, null);
    completed.putIfAbsent(procId.longValue(), result);  <== Line 745
  }
{code}


> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: nonce.patch
>
>
> instead of having all the logic in submitProcedure(), split in 
> registerNonce() + submitProcedure().
> In this case we can avoid calling the coprocessor twice and having a clean 
> submit logic knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-15 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752365#comment-15752365
 ] 

Stephen Yuan Jiang commented on HBASE-17149:


The UT failure is unrelated to this change.

The javadoc warning (basically, @Override appears in the comments) - we can 
either ignore it or make the comment simpler.

I do not quite understand the FindBugs warning from looking at the code (maybe it 
complains that, by the time line 745 is reached, the key is guaranteed to be missing 
from the 'completed' map):
{code}
  public void setFailureResultForNonce(final NonceKey nonceKey, final String procName,
      final User procOwner, final IOException exception) {
    if (nonceKey == null) return;

    final Long procId = nonceKeysToProcIdsMap.get(nonceKey);
    if (procId == null || completed.containsKey(procId)) return;

    final long currentTime = EnvironmentEdgeManager.currentTime();
    final ProcedureInfo result = new ProcedureInfo(procId.longValue(),
        procName, procOwner != null ? procOwner.getShortName() : null,
        ProcedureUtil.convertToProcedureState(ProcedureState.ROLLEDBACK),
        -1, nonceKey, exception, currentTime, currentTime, null);
    completed.putIfAbsent(procId.longValue(), result);  <== Line 745
  }
{code}
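
One hedged reading of that warning, shown with a minimal standalone sketch (plain 
java.util.concurrent, not the HBase code): because of the early containsKey() return, 
the analyzer assumes the key cannot be present when putIfAbsent() runs, so the 
check-then-insert pair looks redundant; relying on putIfAbsent() alone keeps the test 
and the insert atomic.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class CompletedResultsSketch<K, V> {
  private final ConcurrentMap<K, V> completed = new ConcurrentHashMap<>();

  /** Record a result only if no result exists yet for this key. */
  boolean recordIfAbsent(K key, V result) {
    // No separate containsKey() pre-check: putIfAbsent() already performs the
    // "insert only when missing" test atomically and reports the outcome.
    return completed.putIfAbsent(key, result) == null;
  }
}
{code}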


> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: nonce.patch
>
>
> instead of having all the logic in submitProcedure(), split in 
> registerNonce() + submitProcedure().
> In this case we can avoid calling the coprocessor twice and having a clean 
> submit logic knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17323) TestAsyncGetMultiThread fails in master

2016-12-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17323:
---
Attachment: testAsyncGetMultiThread-output.gz

> TestAsyncGetMultiThread fails in master
> ---
>
> Key: HBASE-17323
> URL: https://issues.apache.org/jira/browse/HBASE-17323
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: testAsyncGetMultiThread-output.gz
>
>
> From 
> https://builds.apache.org/job/HBase-Trunk_matrix/2137/jdk=JDK%201.8%20(latest),label=Hadoop/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncGetMultiThread/test/
>  :
> {code}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:1003)
>   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:980)
>   at 
> org.apache.hadoop.hbase.client.TestAsyncGetMultiThread.run(TestAsyncGetMultiThread.java:108)
>   at 
> org.apache.hadoop.hbase.client.TestAsyncGetMultiThread.lambda$null$1(TestAsyncGetMultiThread.java:122)
> {code}
> This can be reproduced locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752341#comment-15752341
 ] 

stack commented on HBASE-17081:
---

If you have an addendum, we can get it in... else we can revert till fixed. 
Thanks [~anastas] (FYI [~ram_krish])

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-15787_8.patch, HBASE-17081-V01.patch, 
> HBASE-17081-V02.patch, HBASE-17081-V03.patch, HBASE-17081-V04.patch, 
> HBASE-17081-V05.patch, HBASE-17081-V06.patch, HBASE-17081-V06.patch, 
> HBASE-17081-V07.patch, HBaseMeetupDecember2016-V02.pptx, 
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17323) TestAsyncGetMultiThread fails in master

2016-12-15 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17323:
--

 Summary: TestAsyncGetMultiThread fails in master
 Key: HBASE-17323
 URL: https://issues.apache.org/jira/browse/HBASE-17323
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu


From 
https://builds.apache.org/job/HBase-Trunk_matrix/2137/jdk=JDK%201.8%20(latest),label=Hadoop/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncGetMultiThread/test/
 :
{code}
java.util.concurrent.ExecutionException: java.lang.NullPointerException
at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:1003)
at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:980)
at 
org.apache.hadoop.hbase.client.TestAsyncGetMultiThread.run(TestAsyncGetMultiThread.java:108)
at 
org.apache.hadoop.hbase.client.TestAsyncGetMultiThread.lambda$null$1(TestAsyncGetMultiThread.java:122)
{code}
This can be reproduced locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752296#comment-15752296
 ] 

Hudson commented on HBASE-17081:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2137 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2137/])
HBASE-17081 Flush the entire CompactingMemStore content to disk (ramkrishna: 
rev a2a7618d261bfe121f05821d89242d770cd7b7ec)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AbstractMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ImmutableSegment.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionPipeline.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreCompactor.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactingMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SegmentFactory.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompositeImmutableSegment.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWalAndCompactingMemStoreFlush.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemstoreSize.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java


> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-15787_8.patch, HBASE-17081-V01.patch, 
> HBASE-17081-V02.patch, HBASE-17081-V03.patch, HBASE-17081-V04.patch, 
> HBASE-17081-V05.patch, HBASE-17081-V06.patch, HBASE-17081-V06.patch, 
> HBASE-17081-V07.patch, HBaseMeetupDecember2016-V02.pptx, 
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17318) Increment does not add new column if the increment amount is zero at first time writing

2016-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752295#comment-15752295
 ] 

Hudson commented on HBASE-17318:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2137 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2137/])
HBASE-17318 Increment does not add new column if the increment amount is 
(tedyu: rev 401e83cee383508a2ccdf5b92e7c77a0be2e566a)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestDurability.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestIncrementsFromClientSide.java


> Increment does not add new column if the increment amount is zero at first 
> time writing
> ---
>
> Key: HBASE-17318
> URL: https://issues.apache.org/jira/browse/HBASE-17318
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 0.98.23, 1.2.4
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17318-branch-1.2-v1.patch, 
> HBASE-17318-branch-1.2-v2.patch, HBASE-17318-master-v1.patch
>
>
> When the data written for the first time is 0, no new column is added.
> The code iterates the input columns and updates existing values if they were 
> found, otherwise it adds a new column initialized to the increment amount - 
> but it does not add a new column if the increment amount is zero on the first 
> write.
> It is necessary to add the new column on that first write of 0; 
> if not, the result when reading through Phoenix is NULL.
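
A hedged usage sketch of the behaviour described in the quoted description above 
(table, family and qualifier names are made up): after the fix, incrementing a 
missing column by 0 should still materialize the column with value 0, so readers 
such as Phoenix no longer see NULL.

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ZeroIncrementSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("counters"))) {
      byte[] row = Bytes.toBytes("row1");
      byte[] cf = Bytes.toBytes("cf");
      byte[] q = Bytes.toBytes("hits");
      // First ever write for this column, with an increment amount of zero.
      long value = table.incrementColumnValue(row, cf, q, 0L);
      System.out.println("value after zero increment = " + value);   // expected: 0
      Result r = table.get(new Get(row).addColumn(cf, q));
      System.out.println("column present = " + !r.isEmpty());        // expected: true
    }
  }
}
{code}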



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15787) Change the flush related heuristics to work with offheap size configured

2016-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752297#comment-15752297
 ] 

Hudson commented on HBASE-15787:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2137 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2137/])
HBASE-15787 Change the flush related heuristics to work with offheap 
(ramkrishna: rev d1147eeb7e1d5f41161c7cf5bc5ddb4744ca5b57)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultHeapMemoryTuner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreChunkPool.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerAccounting.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/util/MemorySizeUtil.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerAccounting.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHeapMemoryManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java


> Change the flush related heuristics to work with offheap size configured
> 
>
> Key: HBASE-15787
> URL: https://issues.apache.org/jira/browse/HBASE-15787
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15787.patch, HBASE-15787_1.patch, 
> HBASE-15787_4.patch, HBASE-15787_5.patch, HBASE-15787_6.patch, 
> HBASE-15787_7.patch, HBASE-15787_8.patch, HBASE-15787_9.patch
>
>
> With offheap MSLAB in place we may have to change the flush related 
> heuristics to work with offheap size configured rather than the java heap 
> size.
> Since we now have clear seperation of the memstore data size and memstore 
> heap size, for offheap memstore
> -> Decide if the global.offheap.memstore.size is breached for blocking 
> updates and force flushes. 
> -> If the onheap global.memstore.size is breached (due to heap overhead) even 
> then block updates and force flushes.
> -> The global.memstore.size.lower.limit is now by default 95% of the 
> global.memstore.size. So now we apply this 95% on the 
> global.offheap.memstore.size and also on global.memstore.size (as it was done 
> for onheap case).
> -> We will have new FlushTypes introduced
> {code}
>   ABOVE_ONHEAP_LOWER_MARK, /* happens due to lower mark breach of onheap 
> memstore settings
>   An offheap memstore can even breach the 
> onheap_lower_mark*/
>   ABOVE_ONHEAP_HIGHER_MARK,/* happens due to higher mark breach of onheap 
> memstore settings
>   An offheap memstore can even breach the 
> onheap_higher_mark*/
>   ABOVE_OFFHEAP_LOWER_MARK,/* happens due to lower mark breach of offheap 
> memstore settings*/
>   ABOVE_OFFHEAP_HIGHER_MARK;
> {code}
> -> regionServerAccounting does all the accounting.
> -> HeapMemoryTuner is what is a little tricky here. The first thing to note is that 
> at no point it will try to increase or decrease the 
> global.offheap.memstore.size. If there is a heap pressure then it will try to 
> increase the memstore heap limit. 
> In case of offheap memstore there is always a chance that the heap pressure 
> does not increase. In that case we could ideally decrease the heap limit for 
> memstore. The current logic of heapmemory tuner is such that things will 
> naturally settle down. But on discussion what we thought is let us include 
> the flush count that happens due to offheap pressure but give that a lesser 
> weightage and thus ensure that the initial decrease on memstore heap limit 
> does not happen. Currently that fraction is set as 0.5. 
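
A minimal, hedged sketch of the decision flow the quoted description outlines 
(illustrative only: the FlushType names mirror the text, while the NORMAL value, 
field names and method shape are assumptions):

{code}
enum FlushType {
  NORMAL,
  ABOVE_ONHEAP_LOWER_MARK, ABOVE_ONHEAP_HIGHER_MARK,
  ABOVE_OFFHEAP_LOWER_MARK, ABOVE_OFFHEAP_HIGHER_MARK
}

class FlushHeuristicSketch {
  private final long globalMemstoreSize;          // onheap limit (heap overhead is accounted here)
  private final long globalOffheapMemstoreSize;   // offheap data limit
  private static final double LOWER_LIMIT = 0.95; // global.memstore.size.lower.limit default

  FlushHeuristicSketch(long onheapLimit, long offheapLimit) {
    this.globalMemstoreSize = onheapLimit;
    this.globalOffheapMemstoreSize = offheapLimit;
  }

  FlushType check(long offheapDataSize, long onheapOverheadSize) {
    // Higher marks breached: block updates and force flushes.
    if (offheapDataSize >= globalOffheapMemstoreSize) {
      return FlushType.ABOVE_OFFHEAP_HIGHER_MARK;
    }
    if (onheapOverheadSize >= globalMemstoreSize) {
      return FlushType.ABOVE_ONHEAP_HIGHER_MARK;
    }
    // Lower marks: the 95% lower limit applied to both the offheap and onheap limits.
    if (offheapDataSize >= LOWER_LIMIT * globalOffheapMemstoreSize) {
      return FlushType.ABOVE_OFFHEAP_LOWER_MARK;
    }
    if (onheapOverheadSize >= LOWER_LIMIT * globalMemstoreSize) {
      return FlushType.ABOVE_ONHEAP_LOWER_MARK;
    }
    return FlushType.NORMAL;
  }
}
{code}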



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17321) ExploringCompactionPolicy DEBUG message should provide region details.

2016-12-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17321:
--
Assignee: Carlos A. Morillo  (was: Jean-Marc Spaggiari)

> ExploringCompactionPolicy DEBUG message should provide region details.
> --
>
> Key: HBASE-17321
> URL: https://issues.apache.org/jira/browse/HBASE-17321
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.2.4
>Reporter: Jean-Marc Spaggiari
>Assignee: Carlos A. Morillo
>Priority: Minor
>  Labels: beginner
>
> ExploringCompactionPolicy says things like:
> {code}
> 2016-12-15 11:53:34,075 DEBUG 
> org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy: 
> Exploring compaction algorithm has selected 3 files of size 109411408 
> starting at candidate #0 after considering 1 permutations with 1 in ratio
> {code}
> It doesn't say anything about which region it is looking at. For debugging 
> purposes, it may be interesting to provide the region encoded name.
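
A hedged sketch of the kind of change being requested (not the actual patch; the 
logger and parameters are illustrative stand-ins for values the policy already 
computes):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CompactionSelectionLogSketch {
  private static final Logger LOG = LoggerFactory.getLogger(CompactionSelectionLogSketch.class);

  void logSelection(String regionEncodedName, int fileCount, long totalSize,
      int candidateStart, int permutations, int inRatio) {
    // Same message as today, plus the encoded region name so the line can be
    // grepped per region.
    LOG.debug("Exploring compaction algorithm has selected {} files of size {} for region {} "
        + "starting at candidate #{} after considering {} permutations with {} in ratio",
        fileCount, totalSize, regionEncodedName, candidateStart, permutations, inRatio);
  }
}
{code}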



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17298) remove unused code in HRegion#doMiniBatchMutation

2016-12-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752208#comment-15752208
 ] 

huaxiang sun commented on HBASE-17298:
--

Thanks [~anoop.hbase] for review.

> remove unused code in HRegion#doMiniBatchMutation
> -
>
> Key: HBASE-17298
> URL: https://issues.apache.org/jira/browse/HBASE-17298
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17298-master-001.patch
>
>
> In HRegion#doMiniBatchMutation(), there is the following code 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L3194
> which is not used anymore. It can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17321) ExploringCompactionPolicy DEBUG message should provide region details.

2016-12-15 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752103#comment-15752103
 ] 

Jean-Marc Spaggiari commented on HBASE-17321:
-

I'm not able to assign this to [~carlosmorillo]. [~saint@gmail.com], any 
chance to do it on your side?  Thanks.

> ExploringCompactionPolicy DEBUG message should provide region details.
> --
>
> Key: HBASE-17321
> URL: https://issues.apache.org/jira/browse/HBASE-17321
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.2.4
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
>
> ExploringCompactionPolicy says things like:
> {code}
> 2016-12-15 11:53:34,075 DEBUG 
> org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy: 
> Exploring compaction algorithm has selected 3 files of size 109411408 
> starting at candidate #0 after considering 1 permutations with 1 in ratio
> {code}
> It doesn't say anything about which region it is looking at. For debugging 
> purposes, it may be interesting to provide the region encoded name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17321) ExploringCompactionPolicy DEBUG message should provide region details.

2016-12-15 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-17321:

Assignee: Jean-Marc Spaggiari

> ExploringCompactionPolicy DEBUG message should provide region details.
> --
>
> Key: HBASE-17321
> URL: https://issues.apache.org/jira/browse/HBASE-17321
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.2.4
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
>
> ExploringCompactionPolicy says things like:
> {code}
> 2016-12-15 11:53:34,075 DEBUG 
> org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy: 
> Exploring compaction algorithm has selected 3 files of size 109411408 
> starting at candidate #0 after considering 1 permutations with 1 in ratio
> {code}
> It doesn't say anything about which region it is looking at. For debugging 
> purposes, it may be interesting to provide the region encoded name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17321) ExploringCompactionPolicy DEBUG message should provide region details.

2016-12-15 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752086#comment-15752086
 ] 

Jean-Marc Spaggiari commented on HBASE-17321:
-

[~carlosmorillo]

> ExploringCompactionPolicy DEBUG message should provide region details.
> --
>
> Key: HBASE-17321
> URL: https://issues.apache.org/jira/browse/HBASE-17321
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.2.4
>Reporter: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
>
> ExploringCompactionPolicy says things like:
> {code}
> 2016-12-15 11:53:34,075 DEBUG 
> org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy: 
> Exploring compaction algorithm has selected 3 files of size 109411408 
> starting at candidate #0 after considering 1 permutations with 1 in ratio
> {code}
> It doesn't say anything about which region it is looking at. For debugging 
> purposes, it may be interesting to provide the region encoded name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17321) ExploringCompactionPolicy DEBUG message should provide region details.

2016-12-15 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752080#comment-15752080
 ] 

Jean-Marc Spaggiari commented on HBASE-17321:
-

Carlos Morillo is going to provide a patch. When logs are huge, to make things 
easier I just pipe the output through grep with the region name. But since the 
region name is not logged, it requires a bit of manual work. Just trying to make 
debugging easier. Working on a cluster where there are 1000+ files in a single 
region ;) 

Patch to come sometime next week. 

> ExploringCompactionPolicy DEBUG message should provide region details.
> --
>
> Key: HBASE-17321
> URL: https://issues.apache.org/jira/browse/HBASE-17321
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.2.4
>Reporter: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
>
> ExploringCompactionPolicy says things like:
> {code}
> 2016-12-15 11:53:34,075 DEBUG 
> org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy: 
> Exploring compaction algorithm has selected 3 files of size 109411408 
> starting at candidate #0 after considering 1 permutations with 1 in ratio
> {code}
> It doesn't say anything about which region it is looking at. For debugging 
> purposes, it may be interesting to provide the region encoded name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-15 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752078#comment-15752078
 ] 

Edward Bortnikov commented on HBASE-17081:
--

[~anastas] Please take a look at the test result, seems to be related: 

Flaked tests: 
org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush.testWritesWhileScanning(org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush)
  Run 1: TestHRegionWithInMemoryFlush>TestHRegion.testWritesWhileScanning:3979 
expected null, but was:


> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-15787_8.patch, HBASE-17081-V01.patch, 
> HBASE-17081-V02.patch, HBASE-17081-V03.patch, HBASE-17081-V04.patch, 
> HBASE-17081-V05.patch, HBASE-17081-V06.patch, HBASE-17081-V06.patch, 
> HBASE-17081-V07.patch, HBaseMeetupDecember2016-V02.pptx, 
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17321) ExploringCompactionPolicy DEBUG message should provide region details.

2016-12-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752065#comment-15752065
 ] 

stack commented on HBASE-17321:
---

The region is in loglines before this one, no [~jmspaggi]? You can tell from 
the context which region is being compacted? You have a patch to add the 
encoded name at least?

> ExploringCompactionPolicy DEBUG message should provide region details.
> --
>
> Key: HBASE-17321
> URL: https://issues.apache.org/jira/browse/HBASE-17321
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.2.4
>Reporter: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
>
> ExploringCompactionPolicy says things like:
> {code}
> 2016-12-15 11:53:34,075 DEBUG 
> org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy: 
> Exploring compaction algorithm has selected 3 files of size 109411408 
> starting at candidate #0 after considering 1 permutations with 1 in ratio
> {code}
> It doesn't say anything about which region it is looking at. For debugging 
> purposes, it may be interesting to provide the region encoded name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752056#comment-15752056
 ] 

Hadoop QA commented on HBASE-17081:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 23s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
13s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843429/HBASE-17081-V07.patch 
|
| JIRA Issue | HBASE-17081 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a50f78535a43 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 401e83c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4933/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4933/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4933/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> 

[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2

2016-12-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752016#comment-15752016
 ] 

Ted Yu commented on HBASE-14123:


Precommit didn't go through last night.

Triggered a new QA run.

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: 14123-master.v14.txt, 14123-master.v15.txt, 
> 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, 
> 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, 
> 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v25.txt, 
> 14123-master.v27.txt, 14123-master.v28.txt, 14123-master.v29.full.txt, 
> 14123-master.v3.txt, 14123-master.v30.txt, 14123-master.v31.txt, 
> 14123-master.v32.txt, 14123-master.v33.txt, 14123-master.v34.txt, 
> 14123-master.v35.txt, 14123-master.v36.txt, 14123-master.v37.txt, 
> 14123-master.v38.txt, 14123-master.v5.txt, 14123-master.v6.txt, 
> 14123-master.v7.txt, 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, 
> 14123.master.v39.patch, 14123.master.v40.patch, 14123.master.v41.patch, 
> 14123.master.v42.patch, 14123.master.v44.patch, 
> HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, 
> HBASE-14123-v1.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, 
> HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, 
> HBASE-14123-v16.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, 
> HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, 
> HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17292) Add observer notification before bulk loaded hfile is moved to region directory

2016-12-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751973#comment-15751973
 ] 

Ted Yu commented on HBASE-17292:


Thanks for review, Jerry.

I updated the description, explaining why the current postBulkLoadHFile() hook 
doesn't suffice for fault-tolerance purposes.

> Add observer notification before bulk loaded hfile is moved to region 
> directory
> ---
>
> Key: HBASE-17292
> URL: https://issues.apache.org/jira/browse/HBASE-17292
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 17292.v1.txt, 17292.v2.txt, 17292.v3.txt
>
>
> Currently the postBulkLoadHFile() hook notifies the locations of bulk loaded 
> hfiles.
> However, if bulk load fails after hfile is moved to region directory but 
> before postBulkLoadHFile() hook is called, there is no way for pluggable 
> components (replication - see HBASE-17290, backup / restore) to know which 
> hfile(s) have been moved to region directory.
> Even if postBulkLoadHFile() is called in a finally block, the write (to a backup 
> table or zookeeper) issued from postBulkLoadHFile() may fail, ending up in the 
> same situation.
> This issue adds a preCommitStoreFile() hook which notifies the path of the 
> to-be-committed hfile before the bulk loaded hfile is moved to the region directory.
> With the preCommitStoreFile() hook, the write (to a backup table or zookeeper) can 
> be issued before the hfile is moved.
> If the write fails, the resulting IOException makes the bulk load fail, so no hfile 
> is left in the region directory.
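
A hedged sketch of the ordering described above (not HBase code; the helper methods 
are illustrative placeholders for the backup/replication write, the filesystem move 
and the existing post-bulk-load hook):

{code}
import java.io.IOException;

class BulkLoadCommitOrderingSketch {
  void commitBulkLoadedFile(String stagedHFilePath) throws IOException {
    // 1. Tell interested components (backup, replication) which file is about to be
    //    committed. If this write fails, the IOException aborts the bulk load and the
    //    hfile never reaches the region directory.
    notifyPreCommit(stagedHFilePath);
    // 2. Only then move the hfile into the region directory.
    moveToRegionDir(stagedHFilePath);
    // 3. The post-bulk-load notification may still fail, but by now the external
    //    record of the file already exists, so nothing is silently lost.
    notifyPostBulkLoad(stagedHFilePath);
  }

  void notifyPreCommit(String path) throws IOException { /* write to backup table / zookeeper */ }
  void moveToRegionDir(String path) throws IOException { /* rename into the region directory */ }
  void notifyPostBulkLoad(String path) throws IOException { /* existing post hook */ }
}
{code}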



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17292) Add observer notification before bulk loaded hfile is moved to region directory

2016-12-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17292:
---
Description: 
Currently the postBulkLoadHFile() hook notifies the locations of bulk loaded 
hfiles.
However, if bulk load fails after hfile is moved to region directory but before 
postBulkLoadHFile() hook is called, there is no way for pluggable components 
(replication - see HBASE-17290, backup / restore) to know which hfile(s) have 
been moved to region directory.

Even if postBulkLoadHFile() is called in a finally block, the write (to a backup 
table or zookeeper) issued from postBulkLoadHFile() may fail, ending up in the 
same situation.

This issue adds a preCommitStoreFile() hook which notifies the path of the 
to-be-committed hfile before the bulk loaded hfile is moved to the region directory.

With the preCommitStoreFile() hook, the write (to a backup table or zookeeper) can 
be issued before the hfile is moved.
If the write fails, the resulting IOException makes the bulk load fail, so no hfile 
is left in the region directory.

  was:
Currently the postBulkLoadHFile() hook notifies the locations of bulk loaded 
hfiles.
However, if bulk load fails after hfile is moved to region directory but before 
postBulkLoadHFile() hook is called, there is no way for pluggable components 
(replication - see HBASE-17290, backup / restore) to know which hfile(s) have 
been moved to region directory.

This issue adds a preCommitStoreFile() hook which notifies path of to be 
committed hfile before bulk loaded hfile is moved to region directory.


> Add observer notification before bulk loaded hfile is moved to region 
> directory
> ---
>
> Key: HBASE-17292
> URL: https://issues.apache.org/jira/browse/HBASE-17292
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 17292.v1.txt, 17292.v2.txt, 17292.v3.txt
>
>
> Currently the postBulkLoadHFile() hook notifies the locations of bulk loaded 
> hfiles.
> However, if bulk load fails after hfile is moved to region directory but 
> before postBulkLoadHFile() hook is called, there is no way for pluggable 
> components (replication - see HBASE-17290, backup / restore) to know which 
> hfile(s) have been moved to region directory.
> Even if postBulkLoadHFile() is called in a finally block, the write (to a backup 
> table or zookeeper) issued from postBulkLoadHFile() may fail, ending up in the 
> same situation.
> This issue adds a preCommitStoreFile() hook which notifies the path of the 
> to-be-committed hfile before the bulk loaded hfile is moved to the region directory.
> With the preCommitStoreFile() hook, the write (to a backup table or zookeeper) can 
> be issued before the hfile is moved.
> If the write fails, the resulting IOException makes the bulk load fail, so no hfile 
> is left in the region directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17282) Reduce the redundant requests to meta table

2016-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751970#comment-15751970
 ] 

Hadoop QA commented on HBASE-17282:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 59s 
{color} | {color:red} hbase-client in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 10s 
{color} | {color:red} hbase-client generated 2 new + 1 unchanged - 0 fixed = 3 
total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 94m 26s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 146m 45s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-client |
|  |  Dead store to future in 
org.apache.hadoop.hbase.client.AsyncNonMetaRegionLocator.getRegionLocation(TableName,
 byte[], boolean)  At 
AsyncNonMetaRegionLocator.java:org.apache.hadoop.hbase.client.AsyncNonMetaRegionLocator.getRegionLocation(TableName,
 byte[], boolean)  At AsyncNonMetaRegionLocator.java:[line 427] |
|  |  Private method 
org.apache.hadoop.hbase.client.AsyncNonMetaRegionLocator.addToCache(HRegionLocation)
 is never called  At AsyncNonMetaRegionLocator.java:called  At 
AsyncNonMetaRegionLocator.java:[lines 181-185] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843424/HBASE-17282.patch |
| JIRA Issue | HBASE-17282 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 46114c669ac3 3.13.0-93-generic #140-Ubuntu SMP Mon 

[jira] [Comment Edited] (HBASE-17322) New API to get the list of draining region servers

2016-12-15 Thread Abhishek Singh Chouhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751953#comment-15751953
 ] 

Abhishek Singh Chouhan edited comment on HBASE-17322 at 12/15/16 5:32 PM:
--

Yep that one covers this. Can you close this as dup?


was (Author: abhishek.chouhan):
Yep that one covers this. I'll mark this as dup.

> New API to get the list of draining region servers
> --
>
> Key: HBASE-17322
> URL: https://issues.apache.org/jira/browse/HBASE-17322
> Project: HBase
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>
> In various scenarios it would be useful to have a list of draining region 
> servers so as to avoid them while doing certain operations such as region 
> moving during batch rolling upgrades.
> Jira to add a method getDrainingServers() in ClusterStatus so that this info 
> can be retrieved through HBaseAdmin.
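
A hedged usage sketch of the API the quoted description asks for (getDrainingServers() 
is the proposed method and does not exist yet; its return type is assumed here to be a 
collection of ServerName):

{code}
import java.util.Collection;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DrainingServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      ClusterStatus status = admin.getClusterStatus();
      // Proposed call: skip these servers when moving regions during a rolling upgrade.
      Collection<ServerName> draining = status.getDrainingServers();
      for (ServerName sn : draining) {
        System.out.println("draining: " + sn);
      }
    }
  }
}
{code}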



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17322) New API to get the list of draining region servers

2016-12-15 Thread Abhishek Singh Chouhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751953#comment-15751953
 ] 

Abhishek Singh Chouhan commented on HBASE-17322:


Yep that one covers this. I'll mark this as dup.

> New API to get the list of draining region servers
> --
>
> Key: HBASE-17322
> URL: https://issues.apache.org/jira/browse/HBASE-17322
> Project: HBase
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>
> In various scenarios it would be useful to have a list of draining region 
> servers so as to avoid them while doing certain operations such as region 
> moving during batch rolling upgrades.
> Jira to add a method getDrainingServers() in ClusterStatus so that this info 
> can be retrieved through HBaseAdmin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

