[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343761#comment-15343761
 ] 

Hadoop QA commented on HBASE-16012:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-16012 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812390/HBASE-16012-0.98-v5.patch
 |
| JIRA Issue | HBASE-16012 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2321/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Major compaction can't work because left scanner read point in RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16012-0.98-v5.patch, 
> HBASE-16012-branch-1-v5.patch, HBASE-16012-v1.patch, HBASE-16012-v2.patch, 
> HBASE-16012-v3.patch, HBASE-16012-v4.patch, HBASE-16012-v5.patch, 
> HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if an exception is thrown after the read point is 
> added, the read point stays in the RegionServer, and deletes newer than this 
> mvcc number will never be compacted away.
> Our HBase version is based on 0.94, but the master branch has this bug too: 
> any exception thrown while initializing the RegionScanner triggers it.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more
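The leak described above can be sketched as follows. This is a minimal, hypothetical model, not the actual HRegion code: only the scannerReadPoints name mirrors the report, and openScanner/smallestReadPoint are illustrative stand-ins. The read point is registered before scanner initialization completes, so the fix is to remove it again when initialization throws.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the read-point leak and its fix.
public class ScannerReadPoints {
    // scanner id -> mvcc read point held open by that scanner
    private final ConcurrentHashMap<Long, Long> scannerReadPoints = new ConcurrentHashMap<>();
    private long nextScannerId = 1;

    /** Registers a read point, then initializes; undoes registration on failure. */
    public long openScanner(long readPoint, boolean failDuringInit) {
        long id = nextScannerId++;
        scannerReadPoints.put(id, readPoint);   // registered before initialization
        try {
            if (failDuringInit) {
                // simulates e.g. "Could not seek StoreFileScanner" from the report
                throw new RuntimeException("Could not seek StoreFileScanner");
            }
            return id;
        } catch (RuntimeException e) {
            scannerReadPoints.remove(id);       // the fix: don't leak the read point
            throw e;
        }
    }

    /** Smallest read point still held open; deletes newer than it survive compaction. */
    public long smallestReadPoint(long currentReadPoint) {
        long min = currentReadPoint;
        for (long rp : scannerReadPoints.values()) {
            min = Math.min(min, rp);
        }
        return min;
    }
}
```

Without the remove() in the catch block, a failed openScanner would pin the smallest read point forever, which is exactly why major compaction could never drop the old deletes.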



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15146) Don't block on Reader threads queueing to a scheduler queue

2016-06-21 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343753#comment-15343753
 ] 

Hiroshi Ikeda commented on HBASE-15146:
---

Oh, I just realized I misunderstood classification annotations. Thanks.

> Don't block on Reader threads queueing to a scheduler queue
> ---
>
> Key: HBASE-15146
> URL: https://issues.apache.org/jira/browse/HBASE-15146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15146-v7.patch, HBASE-15146-v8.patch, 
> HBASE-15146-v8.patch, HBASE-15146.0.patch, HBASE-15146.1.patch, 
> HBASE-15146.2.patch, HBASE-15146.3.patch, HBASE-15146.4.patch, 
> HBASE-15146.5.patch, HBASE-15146.6.patch
>
>
> Blocking on the epoll thread is awful. The new RPC scheduler can have lots of 
> different queues, and those queues have different capacity limits. Currently 
> the dispatch method can block trying to add to the blocking queue in any of 
> the schedulers.
> This causes readers to block, TCP ACKs are delayed, and everything slows down.
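The non-blocking behavior argued for above can be sketched with java.util.concurrent primitives. This is a minimal, hypothetical dispatcher, not the actual RWQueueRpcExecutor code: offer() returns false on a full queue where put() would park the reader thread, so the caller can reject the call immediately instead of stalling the epoll loop.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: dispatch must never block the reader thread.
public class NonBlockingDispatch {
    private final BlockingQueue<Runnable> queue;

    public NonBlockingDispatch(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    /**
     * Tries to enqueue a call without blocking.
     * @return true if queued; false if the queue was full, in which case the
     *         caller should fail the call fast (e.g. a "call queue too big" error).
     */
    public boolean dispatch(Runnable call) {
        return queue.offer(call);   // never blocks, unlike queue.put(call)
    }
}
```

Failing fast on a full queue pushes back on clients instead of delaying every connection multiplexed onto the same reader thread.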





[jira] [Commented] (HBASE-15945) Patch for Key Value, Bytes and Cell

2016-06-21 Thread Sudeep Sunthankar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343750#comment-15343750
 ] 

Sudeep Sunthankar commented on HBASE-15945:
---

Thanks for the feedback.
Please find my comments below and let me know your thoughts.
--- Will use the hbase namespace going forward.
--- Let's use std::string to represent Java's byte[]. As Elliott mentioned, 
even PB uses strings for the same purpose. We can use string references 
wherever required.
--- The first patch in this series exposed all of the KeyValue APIs. In the 
last one, we derived Cell from KeyValue to hide the implementation details. 
Will work on Enis's feedback to remove KeyValue implementation details from the 
patch.
--- "if/Cell.pb.h" is included to reuse the CellType enum defined in the PB 
instead of redefining the same enums. Should we define our own enums?
--- We have defined a custom exception class derived from std::exception, 
which we use to throw exceptions.
--- We have taken care not to add unwanted code. There might be some APIs or 
functions which provide functionality but are not called in the patch. Should 
we remove all such functions?

> Patch for Key Value, Bytes and Cell
> ---
>
> Key: HBASE-15945
> URL: https://issues.apache.org/jira/browse/HBASE-15945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
> Attachments: HBASE-15945-HBASE-14850.v2.patch, 
> HBASE-15945.HBASE-14850.v1.patch
>
>
> This patch contains an implementation of Key Value, Bytes and Cell modeled on 
> the lines of the Java implementation.





[jira] [Updated] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16012:
---
Fix Version/s: 1.2.3
   0.98.21
   1.3.1
   1.1.6

> Major compaction can't work because left scanner read point in RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16012-0.98-v5.patch, 
> HBASE-16012-branch-1-v5.patch, HBASE-16012-v1.patch, HBASE-16012-v2.patch, 
> HBASE-16012-v3.patch, HBASE-16012-v4.patch, HBASE-16012-v5.patch, 
> HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if an exception is thrown after the read point is 
> added, the read point stays in the RegionServer, and deletes newer than this 
> mvcc number will never be compacted away.
> Our HBase version is based on 0.94, but the master branch has this bug too: 
> any exception thrown while initializing the RegionScanner triggers it.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more





[jira] [Updated] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16012:
---
Attachment: HBASE-16012-0.98-v5.patch

> Major compaction can't work because left scanner read point in RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16012-0.98-v5.patch, 
> HBASE-16012-branch-1-v5.patch, HBASE-16012-v1.patch, HBASE-16012-v2.patch, 
> HBASE-16012-v3.patch, HBASE-16012-v4.patch, HBASE-16012-v5.patch, 
> HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if an exception is thrown after the read point is 
> added, the read point stays in the RegionServer, and deletes newer than this 
> mvcc number will never be compacted away.
> Our HBase version is based on 0.94, but the master branch has this bug too: 
> any exception thrown while initializing the RegionScanner triggers it.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more





[jira] [Updated] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16012:
---
Attachment: HBASE-16012-branch-1-v5.patch

> Major compaction can't work because left scanner read point in RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16012-0.98-v5.patch, 
> HBASE-16012-branch-1-v5.patch, HBASE-16012-v1.patch, HBASE-16012-v2.patch, 
> HBASE-16012-v3.patch, HBASE-16012-v4.patch, HBASE-16012-v5.patch, 
> HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if an exception is thrown after the read point is 
> added, the read point stays in the RegionServer, and deletes newer than this 
> mvcc number will never be compacted away.
> Our HBase version is based on 0.94, but the master branch has this bug too: 
> any exception thrown while initializing the RegionScanner triggers it.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more





[jira] [Assigned] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang reassigned HBASE-16012:
--

Assignee: Guanghao Zhang

> Major compaction can't work because left scanner read point in RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16012-0.98-v5.patch, 
> HBASE-16012-branch-1-v5.patch, HBASE-16012-v1.patch, HBASE-16012-v2.patch, 
> HBASE-16012-v3.patch, HBASE-16012-v4.patch, HBASE-16012-v5.patch, 
> HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if an exception is thrown after the read point is 
> added, the read point stays in the RegionServer, and deletes newer than this 
> mvcc number will never be compacted away.
> Our HBase version is based on 0.94, but the master branch has this bug too: 
> any exception thrown while initializing the RegionScanner triggers it.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more





[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343741#comment-15343741
 ] 

Lars Hofhansl commented on HBASE-16012:
---

Thanks [~Apache9].

> Major compaction can't work because left scanner read point in RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>Reporter: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16012-v1.patch, HBASE-16012-v2.patch, 
> HBASE-16012-v3.patch, HBASE-16012-v4.patch, HBASE-16012-v5.patch, 
> HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if an exception is thrown after the read point is 
> added, the read point stays in the RegionServer, and deletes newer than this 
> mvcc number will never be compacted away.
> Our HBase version is based on 0.94, but the master branch has this bug too: 
> any exception thrown while initializing the RegionScanner triggers it.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more





[jira] [Updated] (HBASE-16013) In-memory Compaction process can be improved for a default case

2016-06-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16013:
---
Affects Version/s: 2.0.0
Fix Version/s: 2.0.0

> In-memory Compaction process can be improved for a default case
> ---
>
> Key: HBASE-16013
> URL: https://issues.apache.org/jira/browse/HBASE-16013
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16013.patch
>
>
> There is an ongoing discussion on how to handle the default case (no 
> duplicates/deletes) and how to use the new CompactingMemstore in such a case.
> One improvement is to avoid the StoreScanner and go with a simple 
> MemstoreScanner, which avoids a lot of the comparisons that the StoreScanner 
> does. Just raising this sub-task as part of some ongoing analysis.
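The MemstoreScanner idea can be illustrated with a plain sorted-map scan. This is a minimal, hypothetical sketch: HBase's memstore keeps Cells in a concurrent skip list, but the simplified String row keys and the SimpleMemstoreScan name here are assumptions for illustration. When there are no duplicate versions or deletes to reconcile, iterating the sorted map directly skips the per-cell merge and filtering work a StoreScanner-style scan performs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical sketch: a direct memstore scan for the no-duplicates case.
public class SimpleMemstoreScan {
    /** Returns "row=value" entries for rows in [startRow, stopRow), in sorted order. */
    public static List<String> scan(ConcurrentSkipListMap<String, String> memstore,
                                    String startRow, String stopRow) {
        List<String> out = new ArrayList<>();
        // subMap walks the skip list directly; no heap merge, no version checks
        for (Map.Entry<String, String> e : memstore.subMap(startRow, stopRow).entrySet()) {
            out.add(e.getKey() + "=" + e.getValue());
        }
        return out;
    }
}
```

The design question raised in the thread is whether the default path needs the full scanner machinery at all, or whether a direct walk like this (or a straight flush from the pipeline) suffices.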





[jira] [Updated] (HBASE-16013) Compaction process can be improved for a default case

2016-06-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16013:
---
Status: Patch Available  (was: Open)

> Compaction process can be improved for a default case 
> --
>
> Key: HBASE-16013
> URL: https://issues.apache.org/jira/browse/HBASE-16013
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16013.patch
>
>
> There is an ongoing discussion on how to handle the default case (no 
> duplicates/deletes) and how to use the new CompactingMemstore in such a case.
> One improvement is to avoid the StoreScanner and go with a simple 
> MemstoreScanner, which avoids a lot of the comparisons that the StoreScanner 
> does. Just raising this sub-task as part of some ongoing analysis.





[jira] [Updated] (HBASE-16013) Compaction process can be improved for a default case

2016-06-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16013:
---
Attachment: HBASE-16013.patch

In the default case a simple memstore iterator makes things much faster; I 
could see the improvement in the cluster. But this needs to be discussed, to 
see how it can be implemented. In the default case maybe we don't even need a 
compaction and can just flush directly from the pipelines?

> Compaction process can be improved for a default case 
> --
>
> Key: HBASE-16013
> URL: https://issues.apache.org/jira/browse/HBASE-16013
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16013.patch
>
>
> There is an ongoing discussion on how to handle the default case (no 
> duplicates/deletes) and how to use the new CompactingMemstore in such a case.
> One improvement is to avoid the StoreScanner and go with a simple 
> MemstoreScanner, which avoids a lot of the comparisons that the StoreScanner 
> does. Just raising this sub-task as part of some ongoing analysis.





[jira] [Updated] (HBASE-16013) In-memory Compaction process can be improved for a default case

2016-06-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16013:
---
Summary: In-memory Compaction process can be improved for a default case  
(was: Compaction process can be improved for a default case )

> In-memory Compaction process can be improved for a default case
> ---
>
> Key: HBASE-16013
> URL: https://issues.apache.org/jira/browse/HBASE-16013
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16013.patch
>
>
> There is an ongoing discussion on how to handle the default case (no 
> duplicates/deletes) and how to use the new CompactingMemstore in such a case.
> One improvement is to avoid the StoreScanner and go with a simple 
> MemstoreScanner. This avoids lot of comparisons that the StoreScanner does. 
> Just raising this sub-task as part of some analysis going on.





[jira] [Commented] (HBASE-15146) Don't block on Reader threads queueing to a scheduler queue

2016-06-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343720#comment-15343720
 ] 

Sean Busbey commented on HBASE-15146:
-

RWQueueRpcExecutor is IA.LimitedPrivate, which is allowed to change on minor 
releases. AFAICT, that's what happened here. Since this change has already gone 
into releases, please open a new JIRA if you think something needs to change in 
the solution here.

> Don't block on Reader threads queueing to a scheduler queue
> ---
>
> Key: HBASE-15146
> URL: https://issues.apache.org/jira/browse/HBASE-15146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15146-v7.patch, HBASE-15146-v8.patch, 
> HBASE-15146-v8.patch, HBASE-15146.0.patch, HBASE-15146.1.patch, 
> HBASE-15146.2.patch, HBASE-15146.3.patch, HBASE-15146.4.patch, 
> HBASE-15146.5.patch, HBASE-15146.6.patch
>
>
> Blocking on the epoll thread is awful. The new rpc scheduler can have lots of 
> different queues. Those queues have different capacity limits. Currently the 
> dispatch method can block trying to add the the blocking queue in any of the 
> schedulers.
> This causes readers to block, tcp acks are delayed, and everything slows down.





[jira] [Commented] (HBASE-15146) Don't block on Reader threads queueing to a scheduler queue

2016-06-21 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343718#comment-15343718
 ] 

Hiroshi Ikeda commented on HBASE-15146:
---

Changing published methods such as RWQueueRpcExecutor.dispatch breaks the 
compatibility contract explicitly declared by InterfaceAudience.

It doesn't seem to make sense for reader threads to take the initiative away 
from worker threads and simply keep rejecting incoming requests. Moreover, in 
my past experience, Selector.select causes an immediate context switch when an 
event occurs, so this patch might worsen performance under subtly heavy 
congestion.

In general, gracefully degrading performance is preferable under heavy load.
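For readers following the thread: the behavior being debated is a dispatch that 
rejects a request when the scheduler queue is full instead of blocking the 
reader thread. A minimal, illustrative sketch of that pattern (plain C++, not 
HBase code; all names here are hypothetical):

```cpp
#include <cstddef>
#include <mutex>
#include <queue>

// Illustrative bounded queue: TryPush returns false instead of blocking
// when the queue is at capacity, so the calling (reader) thread can fail
// the request fast rather than stall while holding up the event loop.
template <typename T>
class BoundedQueue {
 public:
  explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

  bool TryPush(T item) {
    std::lock_guard<std::mutex> lock(mu_);
    if (items_.size() >= capacity_) return false;  // reject, don't block
    items_.push(std::move(item));
    return true;
  }

 private:
  std::size_t capacity_;
  std::mutex mu_;
  std::queue<T> items_;
};
```

The trade-off Hiroshi raises is exactly this: the fast rejection keeps readers 
responsive, but under brief congestion it may refuse work that a short block 
would have absorbed.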

> Don't block on Reader threads queueing to a scheduler queue
> ---
>
> Key: HBASE-15146
> URL: https://issues.apache.org/jira/browse/HBASE-15146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15146-v7.patch, HBASE-15146-v8.patch, 
> HBASE-15146-v8.patch, HBASE-15146.0.patch, HBASE-15146.1.patch, 
> HBASE-15146.2.patch, HBASE-15146.3.patch, HBASE-15146.4.patch, 
> HBASE-15146.5.patch, HBASE-15146.6.patch
>
>
> Blocking on the epoll thread is awful. The new rpc scheduler can have lots of 
> different queues. Those queues have different capacity limits. Currently the 
> dispatch method can block trying to add the the blocking queue in any of the 
> schedulers.
> This causes readers to block, tcp acks are delayed, and everything slows down.





[jira] [Commented] (HBASE-16049) TestRowProcessorEndpoint is failing on Apache Builds

2016-06-21 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343713#comment-15343713
 ] 

Mikhail Antonov commented on HBASE-16049:
-

Done (you were in the jira-users group but not in the list of contributors for 
the project, so I've added you).

Thanks for picking it up [~zghaobac]!

> TestRowProcessorEndpoint is failing on Apache Builds
> 
>
> Key: HBASE-16049
> URL: https://issues.apache.org/jira/browse/HBASE-16049
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>Assignee: Guanghao Zhang
>
> example log 
> https://paste.apache.org/46Uh





[jira] [Commented] (HBASE-16049) TestRowProcessorEndpoint is failing on Apache Builds

2016-06-21 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343715#comment-15343715
 ] 

Mikhail Antonov commented on HBASE-16049:
-

Also now you should be able to assign jiras to yourself.

> TestRowProcessorEndpoint is failing on Apache Builds
> 
>
> Key: HBASE-16049
> URL: https://issues.apache.org/jira/browse/HBASE-16049
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>Assignee: Guanghao Zhang
>
> example log 
> https://paste.apache.org/46Uh





[jira] [Updated] (HBASE-16049) TestRowProcessorEndpoint is failing on Apache Builds

2016-06-21 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-16049:

Assignee: Guanghao Zhang

> TestRowProcessorEndpoint is failing on Apache Builds
> 
>
> Key: HBASE-16049
> URL: https://issues.apache.org/jira/browse/HBASE-16049
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>Assignee: Guanghao Zhang
>
> example log 
> https://paste.apache.org/46Uh





[jira] [Commented] (HBASE-15871) Memstore flush doesn't finish because of backwardseek() in memstore scanner.

2016-06-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343711#comment-15343711
 ] 

ramkrishna.s.vasudevan commented on HBASE-15871:


bq.So, I changed some codes to ensure that the sequence id of HFile is always 
same as the sequence id of the flush operation which created the snapshot.
This seems to be a critical change. There are some test failures, as Ted 
pointed out. Can you take a look at them before we proceed with the reviews, 
[~Jeongdae Kim]?

> Memstore flush doesn't finish because of backwardseek() in memstore scanner.
> 
>
> Key: HBASE-15871
> URL: https://issues.apache.org/jira/browse/HBASE-15871
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.1.2
>Reporter: Jeongdae Kim
> Fix For: 1.1.2
>
> Attachments: HBASE-15871.branch-1.1.001.patch, 
> HBASE-15871.branch-1.1.002.patch, HBASE-15871.branch-1.1.003.patch, 
> memstore_backwardSeek().PNG
>
>
> Sometimes in our production HBase cluster it takes a long time (more than 30 
> minutes) to finish a memstore flush.
> The reason is that the memstore flusher thread calls 
> StoreScanner.updateReaders() and waits to acquire a lock that the store 
> scanner holds in StoreScanner.next(), while backwardSeek() in the memstore 
> scanner runs for a long time.
> I think this condition can occur in a reverse scan through the following 
> process.
> 1) Create a reversed store scanner by requesting a reverse scan.
> 2) Flush a memstore in the same HStore.
> 3) Put a lot of cells into the memstore until it is almost full.
> 4) Call the reversed scanner's next(); this re-creates all scanners in this 
> store, because all scanners were already closed by 2)'s flush(), and calls 
> backwardSeek() with the store's lastTop for all new scanners.
> 5) In this state the memstore is almost full, and all cells in the memstore 
> have a sequenceID greater than this scanner's readPoint because of 2)'s 
> flush(). This condition causes a search over all cells in the memstore, and 
> seekToPreviousRow() repeatedly searches cells that were already searched if a 
> row has one column. (Described in more detail in an attached file.)
> 6) Flush the memstore again in the same HStore; this waits until the 4)-5) 
> process finishes, in order to update the store files in the same HStore after 
> flushing.
> I searched HBase JIRA and found a similar issue (HBASE-14497), but 
> HBASE-14497's fix can't solve this issue because that fix just changed a 
> recursive call into a loop (and it is already applied to our HBase version).





[jira] [Updated] (HBASE-16049) TestRowProcessorEndpoint is failing on Apache Builds

2016-06-21 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-16049:

Assignee: (was: Mikhail Antonov)

> TestRowProcessorEndpoint is failing on Apache Builds
> 
>
> Key: HBASE-16049
> URL: https://issues.apache.org/jira/browse/HBASE-16049
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>
> example log 
> https://paste.apache.org/46Uh





[jira] [Updated] (HBASE-16049) TestRowProcessorEndpoint is failing on Apache Builds

2016-06-21 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-16049:

Assignee: Mikhail Antonov

> TestRowProcessorEndpoint is failing on Apache Builds
> 
>
> Key: HBASE-16049
> URL: https://issues.apache.org/jira/browse/HBASE-16049
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>
> example log 
> https://paste.apache.org/46Uh





[jira] [Commented] (HBASE-16076) Cannot configure split policy in HBase shell

2016-06-21 Thread Youngjoon Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343690#comment-15343690
 ] 

Youngjoon Kim commented on HBASE-16076:
---

The last command works correctly. :) Thank you, [~chenheng]

> Cannot configure split policy in HBase shell
> 
>
> Key: HBASE-16076
> URL: https://issues.apache.org/jira/browse/HBASE-16076
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Youngjoon Kim
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16076.patch, HBASE-16076_v1.patch
>
>
> The reference guide explains how to configure split policy in HBase 
> shell([link|http://hbase.apache.org/book.html#_custom_split_policies]).
> {noformat}
> Configuring the Split Policy On a Table Using HBase Shell
> hbase> create 'test', {METHOD => 'table_att', CONFIG => {'SPLIT_POLICY' => 
> 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}},
> {NAME => 'cf1'}
> {noformat}
> But if you run that command, the shell complains 'An argument ignored (unknown 
> or overridden): CONFIG', and the table description has no split policy.
> {noformat}
> hbase(main):067:0* create 'test', {METHOD => 'table_att', CONFIG => 
> {'SPLIT_POLICY' => 
> 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}}, {NAME 
> => 'cf1'}
> An argument ignored (unknown or overridden): CONFIG
> Created table test
> Took 1.2180 seconds
> hbase(main):068:0> describe 'test'
> Table test is ENABLED
> test
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'cf1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> 'FOREVER', MIN_VERSIONS => '0', IN_MEMORY_COMPACTION => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => '
> false', BLOCKCACHE => 'true'}
> 1 row(s)
> Took 0.0200 seconds
> {noformat}





[jira] [Updated] (HBASE-16062) Improper error handling in WAL Reader/Writer creation

2016-06-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16062:
---
   Resolution: Fixed
Fix Version/s: 1.2.2
   1.3.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Vlad.

> Improper error handling in WAL Reader/Writer creation
> -
>
> Key: HBASE-16062
> URL: https://issues.apache.org/jira/browse/HBASE-16062
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: 16062.branch-1.txt, HBASE-16062-v1.patch, 
> HBASE-16062-v2.patch
>
>
> If creation of WAL reader/ writer fails for some reason RS may leak hanging 
> socket in CLOSE_WAIT state. 





[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343650#comment-15343650
 ] 

Hadoop QA commented on HBASE-16012:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 32s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 73m 2s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 115m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812352/HBASE-16012-v5.patch |
| JIRA Issue | HBASE-16012 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ef90ecc |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 

[jira] [Commented] (HBASE-15945) Patch for Key Value, Bytes and Cell

2016-06-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343641#comment-15343641
 ] 

Elliott Clark commented on HBASE-15945:
---

This is the same Java-in-C++ stuff that's been posted a couple of times.
If we want something that can share the same buffers and carries an offset and 
so on, then use IOBuf. If you just want a straight array of bytes, then use 
string (the same thing that protobuf uses).


Exceptions should be meaningful or they should be runtime_error.

Code that's not used shouldn't be checked in. Code that's only there to add 
unused, maybe-needed functionality is just going to be bloat.

> Patch for Key Value, Bytes and Cell
> ---
>
> Key: HBASE-15945
> URL: https://issues.apache.org/jira/browse/HBASE-15945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
> Attachments: HBASE-15945-HBASE-14850.v2.patch, 
> HBASE-15945.HBASE-14850.v1.patch
>
>
> This patch contains an implementation of Key Value, Bytes and Cell modeled on 
> the lines of Java implementation.  





[jira] [Updated] (HBASE-16076) Cannot configure split policy in HBase shell

2016-06-21 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16076:
--
Attachment: HBASE-16076_v1.patch

> Cannot configure split policy in HBase shell
> 
>
> Key: HBASE-16076
> URL: https://issues.apache.org/jira/browse/HBASE-16076
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Youngjoon Kim
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16076.patch, HBASE-16076_v1.patch
>
>
> The reference guide explains how to configure split policy in HBase 
> shell([link|http://hbase.apache.org/book.html#_custom_split_policies]).
> {noformat}
> Configuring the Split Policy On a Table Using HBase Shell
> hbase> create 'test', {METHOD => 'table_att', CONFIG => {'SPLIT_POLICY' => 
> 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}},
> {NAME => 'cf1'}
> {noformat}
> But if you run that command, the shell complains 'An argument ignored (unknown 
> or overridden): CONFIG', and the table description has no split policy.
> {noformat}
> hbase(main):067:0* create 'test', {METHOD => 'table_att', CONFIG => 
> {'SPLIT_POLICY' => 
> 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}}, {NAME 
> => 'cf1'}
> An argument ignored (unknown or overridden): CONFIG
> Created table test
> Took 1.2180 seconds
> hbase(main):068:0> describe 'test'
> Table test is ENABLED
> test
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'cf1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> 'FOREVER', MIN_VERSIONS => '0', IN_MEMORY_COMPACTION => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => '
> false', BLOCKCACHE => 'true'}
> 1 row(s)
> Took 0.0200 seconds
> {noformat}





[jira] [Commented] (HBASE-15945) Patch for Key Value, Bytes and Cell

2016-06-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343262#comment-15343262
 ] 

Enis Soztutar commented on HBASE-15945:
---

Thanks for the updated patch. Do you mind putting this up on Review Board as 
well? You can use the fancy new dev-support/submit-patch.sh utility for 
convenience if you like.

- These should be under the hbase namespace, no?
- There seems to be some confusion about how we represent Java byte[]s in C++. 
In the patch I can see we are using all three of {{std::string}}, 
{{std::vector}}, and the {{Bytes}} class itself.
{code}
+using BYTE_ARRAY = std::vector;
+using ByteBuffer = std::vector;
{code}

We should stick with only one. We have a couple of options to represent these:
- (a) std::string,
- (b) std::vector,
- (c) std::array,
- (d) a custom class like LevelDB's Slice: a char* with a length (and maybe an 
offset?).

Object creation in Java is costly since everything is heap allocated; that is 
partially why we have Cell.getRowArray(), Cell.getRowOffset(), etc. versus 
CellUtil.cloneRow(cell). If we want to keep the API super-simple, we can just 
keep passing std::string's and have every string cloned by the API. Otherwise, 
in C++, we should be able to have a {{Buffer}} class which contains the char*, 
length, and offset, so that we can directly return these triplets.
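A minimal sketch of such a Slice-like view, in the spirit of option (d) above 
(illustrative only; the class and member names are hypothetical and not from 
any patch on this issue):

```cpp
#include <cstddef>
#include <string>

// Hypothetical non-owning view over a byte range, modeled on LevelDB's
// Slice: a pointer plus a length (any offset is folded into the pointer).
// It never copies or owns the underlying bytes.
class Buffer {
 public:
  Buffer() : data_(nullptr), size_(0) {}
  Buffer(const char* data, std::size_t size) : data_(data), size_(size) {}
  explicit Buffer(const std::string& s) : data_(s.data()), size_(s.size()) {}

  const char* data() const { return data_; }
  std::size_t size() const { return size_; }

  // Cloning is explicit, so callers pay the copy only when they ask for it.
  std::string ToString() const { return std::string(data_, size_); }

 private:
  const char* data_;
  std::size_t size_;
};
```

The point of the design is that accessors can return such triplets without 
forcing a heap allocation per call, while still letting a caller clone when it 
needs an owned copy.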

- Cell should not extend KeyValue. Cell should be an abstract class with 
virtual methods. 
{code}
+class Cell : public KeyValue{
{code}

KeyValue is an implementation that keeps all of the data in an underlying 
byte[] for historical reasons. On the C++ client side we are only constructing 
"Cells", either for {{Puts}} or for iterating over the {{Results}}. We have no 
reason to keep the data for a Cell in a single byte[], so I think we do not 
need KeyValue as it appears in the patch at all. We should still be able to 
carry Cells in Puts and to read results from the KeyValueCodec into a 
list-of-cells. For representing those, we can write a CellImpl class (not 
visible from the client API) and have it implement Cell's methods. The 
CellImpl can keep pointers to its underlying rowKey, column, value, etc., 
which are all byte arrays. This should be very similar to the PB cell message 
in cell.proto. We can move all of the KeyValue encoding complexity into 
KeyValueEncoder/Decoder classes, which will encode/decode given Cells to and 
from KeyValue-encoded serialized bytes.
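The interface-plus-implementation split being proposed could look roughly like 
this (an illustrative sketch only; method names and fields are hypothetical, 
and real Cells would carry family, qualifier, timestamp, and type as well):

```cpp
#include <cstdint>
#include <string>
#include <utility>

// Cell as a pure interface with virtual methods, as suggested above.
class Cell {
 public:
  virtual ~Cell() = default;
  virtual const std::string& Row() const = 0;
  virtual const std::string& Value() const = 0;
  virtual uint64_t SequenceId() const = 0;  // getter only, no setter
};

// Client-internal implementation: each component is held as its own byte
// string rather than one contiguous KeyValue-encoded array.
class CellImpl : public Cell {
 public:
  CellImpl(std::string row, std::string value, uint64_t seq_id)
      : row_(std::move(row)), value_(std::move(value)), seq_id_(seq_id) {}

  const std::string& Row() const override { return row_; }
  const std::string& Value() const override { return value_; }
  uint64_t SequenceId() const override { return seq_id_; }

 private:
  std::string row_;
  std::string value_;
  uint64_t seq_id_;
};
```

With this shape, KeyValue's wire format becomes purely an encoder/decoder 
concern rather than the in-memory representation of every Cell.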

- Cell should not have SetSequenceId. Only get. 
{code}
+void Cell::SetSequenceId(const long _id){
{code}

- Why do we need the PB here? 
{code}
+#include "if/Cell.pb.h"
{code}

> Patch for Key Value, Bytes and Cell
> ---
>
> Key: HBASE-15945
> URL: https://issues.apache.org/jira/browse/HBASE-15945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
> Attachments: HBASE-15945-HBASE-14850.v2.patch, 
> HBASE-15945.HBASE-14850.v1.patch
>
>
> This patch contains an implementation of Key Value, Bytes and Cell modeled on 
> the lines of Java implementation.  





[jira] [Commented] (HBASE-16076) Cannot configure split policy in HBase shell

2016-06-21 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343260#comment-15343260
 ] 

Heng Chen commented on HBASE-16076:
---

After checking the code, please try the command below:
{code}
hbase(main):003:0> create 'test2', 'cf2', {METADATA => {'SPLIT_POLICY' => 
'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}}
Created table test2
Took 1.2820 seconds
hbase(main):004:0> describe 'test2'
Table test2 is ENABLED
test2, {TABLE_ATTRIBUTES => {METADATA => {'SPLIT_POLICY' => 
'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}}
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf2', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', 
DATA_BLOCK_ENCODING => 'NONE',
TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 
'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
1 row(s)
Took 0.0100 seconds
hbase(main):005:0>
{code}


> Cannot configure split policy in HBase shell
> 
>
> Key: HBASE-16076
> URL: https://issues.apache.org/jira/browse/HBASE-16076
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Youngjoon Kim
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16076.patch
>
>
> The reference guide explains how to configure split policy in HBase 
> shell([link|http://hbase.apache.org/book.html#_custom_split_policies]).
> {noformat}
> Configuring the Split Policy On a Table Using HBase Shell
> hbase> create 'test', {METHOD => 'table_att', CONFIG => {'SPLIT_POLICY' => 
> 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}},
> {NAME => 'cf1'}
> {noformat}
> But if you run that command, the shell complains 'An argument ignored (unknown 
> or overridden): CONFIG', and the table description has no split policy.
> {noformat}
> hbase(main):067:0* create 'test', {METHOD => 'table_att', CONFIG => 
> {'SPLIT_POLICY' => 
> 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}}, {NAME 
> => 'cf1'}
> An argument ignored (unknown or overridden): CONFIG
> Created table test
> Took 1.2180 seconds
> hbase(main):068:0> describe 'test'
> Table test is ENABLED
> test
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'cf1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> 'FOREVER', MIN_VERSIONS => '0', IN_MEMORY_COMPACTION => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => '
> false', BLOCKCACHE => 'true'}
> 1 row(s)
> Took 0.0200 seconds
> {noformat}





[jira] [Commented] (HBASE-3727) MultiHFileOutputFormat

2016-06-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343259#comment-15343259
 ] 

Ted Yu commented on HBASE-3727:
---

@Yi:
There is no need to attach individual classes as Java files.
The patch would suffice.

> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Attachments: HFileOutputFormat2___3.java, MH2.patch, MH3.patch, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat___3.java, 
> TestMultiHFileOutputFormat.java, TestMultiHFileOutputFormat___3.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an 
> IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on 
> demand that produce HFiles in per-table subdirectories of the configured 
> output path. Does not currently support partitioning for existing tables / 
> incremental update.





[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343257#comment-15343257
 ] 

Ted Yu commented on HBASE-16012:


[~carp84]:
You can commit the patch if you want.

> Major compaction can't work because left scanner read point in RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>Reporter: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16012-v1.patch, HBASE-16012-v2.patch, 
> HBASE-16012-v3.patch, HBASE-16012-v4.patch, HBASE-16012-v5.patch, 
> HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if we get an exception after adding the read point, 
> the read point will stay in the region server, and deletes after this mvcc 
> number will never be compacted.
> Our HBase version is based on 0.94. The master branch has this bug too, if 
> any other exception is thrown when initializing the RegionScanner.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more
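The failure pattern above can be sketched in a few lines: the read point is registered before scanner initialization, so an exception during seek (here, the CallerDisconnectedException) leaves the entry behind and pins the compaction MVCC floor. Below is a minimal sketch of the leak and of the shape of the fix (drop the read point when initialization fails); `ReadPointRegistry`, `openScanner`, and `smallestReadPoint` are hypothetical names, not the actual HBase API.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the scannerReadPoints map in HRegion.
public class ReadPointRegistry {
    private final ConcurrentHashMap<Long, Long> scannerReadPoints = new ConcurrentHashMap<>();

    public long openScanner(long scannerId, long mvccReadPoint, boolean failSeek) {
        // The read point is registered before the scanner is fully initialized.
        scannerReadPoints.put(scannerId, mvccReadPoint);
        try {
            if (failSeek) {
                // Simulates "Could not seek StoreFileScanner" during construction.
                throw new RuntimeException("Could not seek StoreFileScanner");
            }
            return mvccReadPoint;
        } catch (RuntimeException e) {
            // The fix: unregister the read point on failure so the smallest
            // read point can advance and major compaction can drop deletes.
            scannerReadPoints.remove(scannerId);
            throw e;
        }
    }

    // Compaction may only drop deletes older than the smallest read point.
    public long smallestReadPoint(long current) {
        long min = current;
        for (long rp : scannerReadPoints.values()) {
            min = Math.min(min, rp);
        }
        return min;
    }
}
```

With the cleanup in place, a failed openScanner no longer holds the smallest read point down.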



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16062) Improper error handling in WAL Reader/Writer creation

2016-06-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16062:
---
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0

> Improper error handling in WAL Reader/Writer creation
> -
>
> Key: HBASE-16062
> URL: https://issues.apache.org/jira/browse/HBASE-16062
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16062.branch-1.txt, HBASE-16062-v1.patch, 
> HBASE-16062-v2.patch
>
>
> If creation of a WAL reader/writer fails for some reason, the RS may leak a 
> hanging socket in CLOSE_WAIT state. 
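The error-handling shape the fix calls for is the usual one: if writer initialization throws after the underlying stream is already open, close that stream before rethrowing instead of leaking it (a leaked socket-backed stream is what ends up in CLOSE_WAIT). A minimal sketch; `FakeStream` and `newWriter` are illustrative stand-ins, not the actual HBase WAL classes.

```java
import java.io.Closeable;
import java.io.IOException;

public class WalWriterFactory {
    // Stand-in for a socket-backed output stream.
    static class FakeStream implements Closeable {
        boolean closed = false;
        @Override
        public void close() { closed = true; }
    }

    public static FakeStream newWriter(FakeStream stream, boolean failInit) throws IOException {
        try {
            if (failInit) {
                // Simulates writer initialization failing after the stream is open.
                throw new IOException("writer init failed");
            }
            return stream;
        } catch (IOException e) {
            // Release the already-opened stream on failure so no socket leaks.
            stream.close();
            throw e;
        }
    }
}
```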





[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343253#comment-15343253
 ] 

Yu Li commented on HBASE-16012:
---

v5 lgtm, +1






[jira] [Created] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-06-21 Thread Ashu Pachauri (JIRA)
Ashu Pachauri created HBASE-16081:
-

 Summary: Replication remove_peer gets stuck and blocks WAL rolling
 Key: HBASE-16081
 URL: https://issues.apache.org/jira/browse/HBASE-16081
 Project: HBase
  Issue Type: Bug
  Components: regionserver, Replication
Reporter: Ashu Pachauri
Assignee: Ashu Pachauri


We use a blocking take from CompletionService in 
HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
down all threads gracefully. But under a certain race condition, the underlying 
executor gets shut down and CompletionService#take will block forever, which 
means the remove_peer call will never finish gracefully.
Since ReplicationSourceManager#removePeer and 
ReplicationSourceManager#recordLog lock on the same object, we are unable to 
roll WALs in such a situation and end up with gigantic WALs.
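The difference between the blocking call and a bounded one can be shown directly with java.util.concurrent: CompletionService#take blocks until a task completes, while poll(timeout, unit) returns null on timeout, giving the caller a chance to notice that the executor was shut down. A minimal sketch, not the actual endpoint code:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BoundedTake {
    // Submits one task and retrieves its result with a bounded wait.
    public static Integer pollOnce(ExecutorService pool, long timeoutMs) throws Exception {
        CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
        cs.submit(() -> 42);
        // Unlike take(), poll() cannot block forever: it returns null on
        // timeout, so the caller can check for shutdown and bail out.
        Future<Integer> f = cs.poll(timeoutMs, TimeUnit.MILLISECONDS);
        return f == null ? null : f.get();
    }
}
```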





[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343183#comment-15343183
 ] 

Ted Yu commented on HBASE-16012:


Planning to integrate tomorrow morning, if there are no more review comments.






[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343171#comment-15343171
 ] 

Guanghao Zhang commented on HBASE-16012:


Thanks Ted. Attached a v5 patch.






[jira] [Updated] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16012:
---
Attachment: HBASE-16012-v5.patch






[jira] [Commented] (HBASE-16049) TestRowProcessorEndpoint is failing on Apache Builds

2016-06-21 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343169#comment-15343169
 ] 

Guanghao Zhang commented on HBASE-16049:


[~mantonov] I don't see the "Assign to me" link, so could you assign this 
to me? I will attach a small fix for this. 

> TestRowProcessorEndpoint is failing on Apache Builds
> 
>
> Key: HBASE-16049
> URL: https://issues.apache.org/jira/browse/HBASE-16049
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>
> example log 
> https://paste.apache.org/46Uh





[jira] [Updated] (HBASE-16080) Fix flakey tests

2016-06-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-16080:
--
Priority: Critical  (was: Major)

> Fix flakey tests
> 
>
> Key: HBASE-16080
> URL: https://issues.apache.org/jira/browse/HBASE-16080
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Elliott Clark
>Assignee: Joseph
>Priority: Critical
>
> Seems like 
> TestTableBasedReplicationSourceManagerImpl.testCleanupFailoverQueues is a 
> little flakey. We should make that more stable even on the apache test infra.
> https://builds.apache.org/job/HBase-Trunk_matrix/1090/jdk=latest1.7,label=yahoo-not-h2/testReport/junit/org.apache.hadoop.hbase.replication.regionserver/TestTableBasedReplicationSourceManagerImpl/testCleanupFailoverQueues/





[jira] [Created] (HBASE-16080) Fix flakey tests

2016-06-21 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-16080:
-

 Summary: Fix flakey tests
 Key: HBASE-16080
 URL: https://issues.apache.org/jira/browse/HBASE-16080
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark
Assignee: Joseph


Seems like TestTableBasedReplicationSourceManagerImpl.testCleanupFailoverQueues 
is a little flakey. We should make that more stable even on the apache test 
infra.

https://builds.apache.org/job/HBase-Trunk_matrix/1090/jdk=latest1.7,label=yahoo-not-h2/testReport/junit/org.apache.hadoop.hbase.replication.regionserver/TestTableBasedReplicationSourceManagerImpl/testCleanupFailoverQueues/





[jira] [Created] (HBASE-16079) TestFailedAppendAndSync fails

2016-06-21 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-16079:
---

 Summary: TestFailedAppendAndSync fails
 Key: HBASE-16079
 URL: https://issues.apache.org/jira/browse/HBASE-16079
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 1.3.0
Reporter: Mikhail Antonov
 Fix For: 1.3.0


#testLockupAroundBadAssignSync

https://builds.apache.org/view/All/job/HBase-1.3/751/jdk=latest1.8,label=yahoo-not-h2/testReport/junit/org.apache.hadoop.hbase.regionserver/TestFailedAppendAndSync/testLockupAroundBadAssignSync/

Error Message

test timed out after 30 milliseconds
Stacktrace

org.junit.runners.model.TestTimedOutException: test timed out after 30 
milliseconds
at 
org.mockito.internal.debugging.LocationImpl.toString(LocationImpl.java:29)
at java.lang.String.valueOf(String.java:2994)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at 
org.mockito.exceptions.Reporter.wantedButNotInvoked(Reporter.java:320)
at 
org.mockito.internal.verification.checkers.MissingInvocationChecker.check(MissingInvocationChecker.java:42)
at org.mockito.internal.verification.AtLeast.verify(AtLeast.java:38)
at 
org.mockito.internal.verification.MockAwareVerificationMode.verify(MockAwareVerificationMode.java:21)
at 
org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:72)
at 
org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
at 
org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:38)
at 
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:61)
at 
org.apache.hadoop.hbase.Server$$EnhancerByMockitoWithCGLIB$$323c5f5b.abort()
at 
org.apache.hadoop.hbase.regionserver.TestFailedAppendAndSync.testLockupAroundBadAssignSync(TestFailedAppendAndSync.java:239)





[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343101#comment-15343101
 ] 

Hudson commented on HBASE-16032:


FAILURE: Integrated in HBase-1.1-JDK8 #1821 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1821/])
HBASE-16032 Possible memory leak in StoreScanner (liyu: rev 
591f33750630a1586aa724cd961623e0efe905e2)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
HBASE-16032 Possible memory leak in StoreScanner, addendum (liyu: rev 
4e2649e0391258b63f2871c0f31e126d778243a4)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GCs of the RS in our production environment, and 
> after analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers: the map surprisingly contained 75 million 
> objects.
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If there's any Exception thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> like in {{HRegion#get}}
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> leave scanner == null and thus leak memory, so we also need to handle this part.
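The fix shape implied by the description above is register-then-undo: if anything after addChangedReaderObserver throws, the registration must be rolled back before the exception propagates, since the caller's finally block never sees a scanner to close. A minimal sketch of that pattern; `ObserverStore` and `openScanner` are illustrative names, not the actual HBase classes.

```java
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

public class ObserverStore {
    // Stand-in for HStore#changedReaderObservers.
    private final Set<Object> changedReaderObservers = new CopyOnWriteArraySet<>();

    public Object openScanner(boolean failSeek) {
        Object scanner = new Object();
        // Registration happens early in the constructor...
        changedReaderObservers.add(scanner);
        try {
            if (failSeek) {
                // ...but later steps (seekScanners etc.) may still throw.
                throw new RuntimeException("seekScanners failed");
            }
            return scanner;
        } catch (RuntimeException e) {
            // Undo the registration before rethrowing, so nothing leaks.
            changedReaderObservers.remove(scanner);
            throw e;
        }
    }

    public int observerCount() {
        return changedReaderObservers.size();
    }
}
```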





[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343098#comment-15343098
 ] 

Hudson commented on HBASE-16032:


SUCCESS: Integrated in HBase-1.1-JDK7 #1734 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1734/])
HBASE-16032 Possible memory leak in StoreScanner (liyu: rev 
591f33750630a1586aa724cd961623e0efe905e2)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
HBASE-16032 Possible memory leak in StoreScanner, addendum (liyu: rev 
4e2649e0391258b63f2871c0f31e126d778243a4)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java







[jira] [Comment Edited] (HBASE-3727) MultiHFileOutputFormat

2016-06-21 Thread yi liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343089#comment-15343089
 ] 

yi liang edited comment on HBASE-3727 at 6/22/16 12:22 AM:
---

The new files are MH3.patch and all the *___3.java attachments.


was (Author: easyliangjob):
the new file 

> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Attachments: HFileOutputFormat2___3.java, MH2.patch, MH3.patch, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat___3.java, 
> TestMultiHFileOutputFormat.java, TestMultiHFileOutputFormat___3.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an 
> IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on 
> demand that produce HFiles in per-table subdirectories of the configured 
> output path. Does not currently support partitioning for existing tables / 
> incremental update.
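The sub-writer scheme described above (key names a table, writers created on demand, one subdirectory per table) can be sketched as a simple demultiplexer. `MultiTableDemux` and `SubWriter` are illustrative stand-ins, not the actual Hadoop RecordWriter/HFile classes.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class MultiTableDemux {
    // Stand-in for a per-table HFile writer.
    static class SubWriter {
        final String outputDir;
        int records = 0;
        SubWriter(String outputDir) { this.outputDir = outputDir; }
        void write(String value) { records++; }
    }

    private final Map<String, SubWriter> writers = new HashMap<>();
    private final String outputPath;

    public MultiTableDemux(String outputPath) { this.outputPath = outputPath; }

    public void write(String tableName, String value) throws IOException {
        // Create the sub-writer lazily, keyed on the table name, writing
        // into a per-table subdirectory of the configured output path.
        SubWriter w = writers.computeIfAbsent(
            tableName, t -> new SubWriter(outputPath + "/" + t));
        w.write(value);
    }

    public int tableCount() { return writers.size(); }
}
```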





[jira] [Updated] (HBASE-3727) MultiHFileOutputFormat

2016-06-21 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang updated HBASE-3727:

Attachment: TestMultiHFileOutputFormat___3.java
MultiHFileOutputFormat___3.java
HFileOutputFormat2___3.java






[jira] [Updated] (HBASE-3727) MultiHFileOutputFormat

2016-06-21 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang updated HBASE-3727:

Attachment: (was: HFileOutputFormat2.java)

> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Attachments: MH2.patch, MH3.patch, MultiHFileOutputFormat.java, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> TestMultiHFileOutputFormat.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an 
> IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on 
> demand that produce HFiles in per-table subdirectories of the configured 
> output path. Does not currently support partitioning for existing tables / 
> incremental update.





[jira] [Updated] (HBASE-3727) MultiHFileOutputFormat

2016-06-21 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang updated HBASE-3727:

Attachment: (was: TestMultiHFileOutputFormat.java)

> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Attachments: MH2.patch, MH3.patch, MultiHFileOutputFormat.java, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> TestMultiHFileOutputFormat.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an 
> IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on 
> demand that produce HFiles in per-table subdirectories of the configured 
> output path. Does not currently support partitioning for existing tables / 
> incremental update.





[jira] [Updated] (HBASE-3727) MultiHFileOutputFormat

2016-06-21 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang updated HBASE-3727:

Attachment: (was: MultiHFileOutputFormat.java)

> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Attachments: MH2.patch, MH3.patch, MultiHFileOutputFormat.java, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> TestMultiHFileOutputFormat.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an 
> IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on 
> demand that produce HFiles in per-table subdirectories of the configured 
> output path. Does not currently support partitioning for existing tables / 
> incremental update.





[jira] [Updated] (HBASE-3727) MultiHFileOutputFormat

2016-06-21 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang updated HBASE-3727:

Attachment: TestMultiHFileOutputFormat.java
HFileOutputFormat2.java
MultiHFileOutputFormat.java

Attached the new files.

> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Attachments: HFileOutputFormat2.java, MH2.patch, MH3.patch, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> TestMultiHFileOutputFormat.java, TestMultiHFileOutputFormat.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an 
> IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on 
> demand that produce HFiles in per-table subdirectories of the configured 
> output path. Does not currently support partitioning for existing tables / 
> incremental update.





[jira] [Updated] (HBASE-3727) MultiHFileOutputFormat

2016-06-21 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang updated HBASE-3727:

Attachment: MH3.patch

> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Attachments: MH2.patch, MH3.patch, MultiHFileOutputFormat.java, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> TestMultiHFileOutputFormat.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an 
> IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on 
> demand that produce HFiles in per-table subdirectories of the configured 
> output path. Does not currently support partitioning for existing tables / 
> incremental update.





[jira] [Commented] (HBASE-3727) MultiHFileOutputFormat

2016-06-21 Thread yi liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343082#comment-15343082
 ] 

yi liang commented on HBASE-3727:
-

Attached is the revised version, MH3.patch.
To reuse existing code in HFileOutputFormat2.java, I needed to modify the 
original code in HFileOutputFormat2. There is only one major modification: the 
anonymous class returned as the RecordWriter is changed into a named 
HFileRecordWriter class that extends RecordWriter.
This way, I can directly instantiate HFileRecordWriter in my 
MultiHFileOutputFormat.java.
I also put the code on the review board, thanks.
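The refactoring described here can be illustrated with a Hadoop-free sketch. The base class below is a simplified stand-in for Hadoop's `RecordWriter`, and all names are illustrative, not the actual HBase signatures: the point is only that a named writer class can be reused by another output format, which an anonymous class cannot.

```java
// Stand-in for Hadoop's abstract RecordWriter.
abstract class RecordWriterBase<K, V> {
  abstract void write(K key, V value);
  abstract void close();
}

// After the refactor: the writer logic lives in a named class.
class HFileRecordWriterSketch<K, V> extends RecordWriterBase<K, V> {
  private int written = 0;

  @Override
  void write(K key, V value) {
    written++;                       // stand-in for appending a cell to an HFile
  }

  @Override
  void close() {
    // stand-in for finishing the HFile
  }

  int written() {
    return written;
  }
}

class HFileOutputFormatSketch {
  // Previously this returned `new RecordWriterBase<K, V>() { ... }` (anonymous);
  // now it returns the named, reusable class.
  static <K, V> RecordWriterBase<K, V> getRecordWriter() {
    return new HFileRecordWriterSketch<>();
  }
}

class MultiHFileOutputFormatSketch {
  // The multi-table format can now instantiate the writer directly, one per table.
  static <K, V> RecordWriterBase<K, V> newTableWriter() {
    return new HFileRecordWriterSketch<>();
  }
}
```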

> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Attachments: MH2.patch, MH3.patch, MultiHFileOutputFormat.java, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> TestMultiHFileOutputFormat.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. Key is tablename as an 
> IBW. Creates sub-writers (code cut and pasted from HFileOutputFormat) on 
> demand that produce HFiles in per-table subdirectories of the configured 
> output path. Does not currently support partitioning for existing tables / 
> incremental update.





[jira] [Commented] (HBASE-16036) Fix ReplicationTableBase initialization to make it non-blocking

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343038#comment-15343038
 ] 

Hudson commented on HBASE-16036:


FAILURE: Integrated in HBase-Trunk_matrix #1090 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1090/])
HBASE-16036 Made Replication Table creation non-blocking. (eclark: rev 
152594560e29549642587b850320f5d66339b747)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationTableBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerZkImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/TableBasedReplicationQueuesClientImpl.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTableBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationStateHBaseImpl.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesArguments.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesClientArguments.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/TableBasedReplicationQueuesImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestTableBasedReplicationSourceManagerImpl.java


> Fix ReplicationTableBase initialization to make it non-blocking
> ---
>
> Key: HBASE-16036
> URL: https://issues.apache.org/jira/browse/HBASE-16036
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Joseph
> Fix For: 2.0.0
>
> Attachments: HBASE-16036.patch
>
>
> Currently there is a bug in the construction of TableBasedReplicationQueuesImpl 
> that prevents ReplicationServices from starting before Master is initialized. 
> Each RS, including HMaster, with replication enabled will attempt to create 
> the Replication Table on initialization. HMaster's initialization sequence is: 
> serviceThreads.start() -> new TableBasedReplicationQueuesImpl() -> Replication 
> Table creation -> HMaster sets its initialized flags.
> But creating the Replication Table fails because the HMaster.checkInitialized() 
> check fails. This blocks HMaster initialization and results in a deadlock.
> So in this patch, I create the Replication Table in the background of 
> TableBasedReplicationQueuesImpl and only block when we actually call methods 
> that access it.
> This also requires a small refactoring of ReplicationSourceManager.init() so 
> that the abandoned queue adoption runs in the background.
> Review board at: https://reviews.apache.org/r/48763/
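The pattern in this patch — kick off the expensive table creation in the background at construction time and block only in the accessors that need it — can be sketched in plain Java. Class and method names below are illustrative stand-ins, not the actual HBase classes:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of non-blocking initialization: the constructor returns immediately,
// so it cannot deadlock against the caller's own initialization; only the
// methods that actually touch the table wait for creation to finish.
class BackgroundInitQueues {
  private final CompletableFuture<Map<String, String>> table;

  BackgroundInitQueues() {
    // Creation runs on another thread in the background.
    table = CompletableFuture.supplyAsync(() -> {
      Map<String, String> t = new ConcurrentHashMap<>();
      t.put("created", "true");   // stand-in for createReplicationTable()
      return t;
    });
  }

  // Accessors block until the background creation has completed.
  String get(String key) {
    return table.join().get(key);
  }
}
```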





[jira] [Commented] (HBASE-16073) update compatibility_checker for jacc dropping comma sep args

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343037#comment-15343037
 ] 

Hudson commented on HBASE-16073:


FAILURE: Integrated in HBase-Trunk_matrix #1090 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1090/])
HBASE-16073 update compatibility_checker for jacc dropping comma sep (busbey: 
rev ef90ecc00c01cba9b8fbc84368096cdb380a4f7e)
* dev-support/check_compatibility.sh


> update compatibility_checker for jacc dropping comma sep args
> -
>
> Key: HBASE-16073
> URL: https://issues.apache.org/jira/browse/HBASE-16073
> Project: HBase
>  Issue Type: Task
>  Components: build, documentation
>Reporter: Sean Busbey
>Assignee: Dima Spivak
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16073_v1.patch, HBASE-16073_v2.patch
>
>
> the japi-compliance-checker has a change in place (post the 1.7 release) that 
> removes the ability to give a comma separated list of jars on the cli.
> we should switch to generating descriptor xml docs since that will still be 
> supported, or update to use the expanded tooling suggested in the issue:
> https://github.com/lvc/japi-compliance-checker/issues/27





[jira] [Commented] (HBASE-16062) Improper error handling in WAL Reader/Writer creation

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343033#comment-15343033
 ] 

Hadoop QA commented on HBASE-16062:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
12s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 3s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 41s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
13s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812311/16062.branch-1.txt |
| JIRA Issue | HBASE-16062 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool 

[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343005#comment-15343005
 ] 

Hudson commented on HBASE-16032:


FAILURE: Integrated in HBase-1.2 #656 (See 
[https://builds.apache.org/job/HBase-1.2/656/])
HBASE-16032 Possible memory leak in StoreScanner (liyu: rev 
f8762efc0e041ba63c745bfa3ccb9f06d2010699)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GCs of the RS in our production environment. After 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained about 75 million 
> (7500w) objects.
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If there's any Exception thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> like in {{HRegion#get}}
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> leave scanner == null and leak memory as well, so we also need to handle this 
> part.
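A minimal sketch of the fix, in plain Java with illustrative names (not the actual HBase classes): once the scanner registers itself as an observer of the store, any later failure in the constructor must deregister it before rethrowing, otherwise the store retains a reference to a scanner the caller never received and can never close.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for HStore with its changedReaderObservers collection.
class StoreSketch {
  final List<Object> changedReaderObservers = new ArrayList<>();

  void addChangedReaderObserver(Object o) {
    changedReaderObservers.add(o);
  }

  void deleteChangedReaderObserver(Object o) {
    changedReaderObservers.remove(o);
  }
}

// Stand-in for StoreScanner: registration followed by fallible setup.
class ScannerSketch {
  ScannerSketch(StoreSketch store, boolean failAfterRegister) {
    store.addChangedReaderObserver(this);
    try {
      if (failAfterRegister) {
        // stand-in for getScannersNoCompaction()/seekScanners() throwing
        throw new RuntimeException("seek failed");
      }
    } catch (RuntimeException e) {
      store.deleteChangedReaderObserver(this);  // the fix: undo registration
      throw e;
    }
  }
}
```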





[jira] [Commented] (HBASE-15870) Specify columns in REST multi gets

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343004#comment-15343004
 ] 

Hudson commented on HBASE-15870:


FAILURE: Integrated in HBase-1.2 #656 (See 
[https://builds.apache.org/job/HBase-1.2/656/])
Revert "HBASE-15870 Specify columns in REST multi gets (Matt Warhaftig)" 
(jerryjch: rev 65738a8353cd4b65b5204e42048ef484bd7ca8f3)
* 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestMultiRowResource.java
* hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/MultiRowResource.java
* hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java


> Specify columns in REST multi gets
> --
>
> Key: HBASE-15870
> URL: https://issues.apache.org/jira/browse/HBASE-15870
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Dean Gurvitz
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: hbase-15870-v1.patch
>
>
> The REST multi-gets feature currently does not allow specifying only certain 
> columns or column families. Adding support for these should be quite simple 
> and improve the usability of the multi-gets feature.





[jira] [Updated] (HBASE-15347) Update CHANGES.txt for 1.3

2016-06-21 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15347:

Attachment: HBASE-15347-branch-1.3.v1.patch

All right, here are the release notes for 1.3

> Update CHANGES.txt for 1.3
> --
>
> Key: HBASE-15347
> URL: https://issues.apache.org/jira/browse/HBASE-15347
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Attachments: HBASE-15347-branch-1.3.v1.patch
>
>
> Going to post the steps in preparing changes file for 1.3 here.





[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342962#comment-15342962
 ] 

Hudson commented on HBASE-16032:


FAILURE: Integrated in HBase-1.3 #751 (See 
[https://builds.apache.org/job/HBase-1.3/751/])
HBASE-16032 Possible memory leak in StoreScanner (liyu: rev 
0cbad9c890b65b1409be741f55370b7a387f5764)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GCs of the RS in our production environment. After 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained about 75 million 
> (7500w) objects.
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If there's any Exception thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> like in {{HRegion#get}}
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> leave scanner == null and leak memory as well, so we also need to handle this 
> part.





[jira] [Commented] (HBASE-16051) TestScannerHeartbeatMessages fails on some machines

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342961#comment-15342961
 ] 

Hudson commented on HBASE-16051:


FAILURE: Integrated in HBase-1.3 #751 (See 
[https://builds.apache.org/job/HBase-1.3/751/])
HBASE-16051 TestScannerHeartbeatMessages fails on some machines (Phil (tedyu: 
rev c8501d80096a845e13ce06f36ebf536415b3c7c8)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerHeartbeatMessages.java


> TestScannerHeartbeatMessages fails on some machines
> ---
>
> Key: HBASE-16051
> URL: https://issues.apache.org/jira/browse/HBASE-16051
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16051-v1.patch
>
>
> I can see below on my Linux box (reproduces consistently). It passes on OSX 
> laptop.
>  T E S T S
> ---
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> Running org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.907 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> testScannerHeartbeatMessages(org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages)
>   Time elapsed: 12.95 sec  <<< FAILURE!
> java.lang.AssertionError: Heartbeats messages are disabled, an exception 
> should be thrown. If an exception  is not thrown, the test case is not 
> testing the importance of heartbeat messages
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testImportanceOfHeartbeats(TestScannerHeartbeatMessages.java:233)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testScannerHeartbeatMessages(TestScannerHeartbeatMessages.java:204)
> Results :
> Failed tests:
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:204->testImportanceOfHeartbeats:233
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0





[jira] [Commented] (HBASE-16051) TestScannerHeartbeatMessages fails on some machines

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342835#comment-15342835
 ] 

Hudson commented on HBASE-16051:


FAILURE: Integrated in HBase-1.4 #234 (See 
[https://builds.apache.org/job/HBase-1.4/234/])
HBASE-16051 TestScannerHeartbeatMessages fails on some machines (Phil (tedyu: 
rev 8dea578bcff5633895d0b2f573d3ca3eaac4b592)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerHeartbeatMessages.java


> TestScannerHeartbeatMessages fails on some machines
> ---
>
> Key: HBASE-16051
> URL: https://issues.apache.org/jira/browse/HBASE-16051
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16051-v1.patch
>
>
> I can see below on my Linux box (reproduces consistently). It passes on OSX 
> laptop.
>  T E S T S
> ---
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> Running org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.907 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> testScannerHeartbeatMessages(org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages)
>   Time elapsed: 12.95 sec  <<< FAILURE!
> java.lang.AssertionError: Heartbeats messages are disabled, an exception 
> should be thrown. If an exception  is not thrown, the test case is not 
> testing the importance of heartbeat messages
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testImportanceOfHeartbeats(TestScannerHeartbeatMessages.java:233)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testScannerHeartbeatMessages(TestScannerHeartbeatMessages.java:204)
> Results :
> Failed tests:
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:204->testImportanceOfHeartbeats:233
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0





[jira] [Updated] (HBASE-16062) Improper error handling in WAL Reader/Writer creation

2016-06-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16062:
---
Attachment: 16062.branch-1.txt

> Improper error handling in WAL Reader/Writer creation
> -
>
> Key: HBASE-16062
> URL: https://issues.apache.org/jira/browse/HBASE-16062
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: 16062.branch-1.txt, HBASE-16062-v1.patch, 
> HBASE-16062-v2.patch
>
>
> If creation of a WAL reader/writer fails for some reason, the RS may leak a 
> hanging socket in CLOSE_WAIT state. 
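The general fix pattern for this class of leak — close the underlying stream (and thus the socket) in the error path of reader initialization — can be sketched in plain Java. Names are illustrative, not the actual HBase WAL classes:

```java
import java.io.IOException;
import java.io.InputStream;

// If reader initialization throws after the underlying stream (socket) is
// already open, the stream must be closed in the error path, or it lingers
// in CLOSE_WAIT until garbage collection.
class WalReaderSketch implements AutoCloseable {
  private final InputStream stream;

  WalReaderSketch(InputStream stream, boolean failInit) throws IOException {
    boolean ok = false;
    try {
      if (failInit) {
        throw new IOException("bad header");  // stand-in for init failure
      }
      ok = true;
    } finally {
      if (!ok) {
        stream.close();  // the fix: release the socket on failure
      }
    }
    this.stream = stream;
  }

  @Override
  public void close() throws IOException {
    stream.close();
  }
}
```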





[jira] [Updated] (HBASE-16062) Improper error handling in WAL Reader/Writer creation

2016-06-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16062:
---
Attachment: (was: 16062.branch-1.txt)

> Improper error handling in WAL Reader/Writer creation
> -
>
> Key: HBASE-16062
> URL: https://issues.apache.org/jira/browse/HBASE-16062
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: 16062.branch-1.txt, HBASE-16062-v1.patch, 
> HBASE-16062-v2.patch
>
>
> If creation of a WAL reader/writer fails for some reason, the RS may leak a 
> hanging socket in CLOSE_WAIT state. 





[jira] [Commented] (HBASE-16052) Improve HBaseFsck Scalability

2016-06-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342806#comment-15342806
 ] 

Enis Soztutar commented on HBASE-16052:
---

Skimmed the patch, the improvements make sense. [~syuanjiang] want to take a look? 

> Improve HBaseFsck Scalability
> -
>
> Key: HBASE-16052
> URL: https://issues.apache.org/jira/browse/HBASE-16052
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Reporter: Ben Lau
> Attachments: HBASE-16052-master.patch
>
>
> There are some problems with HBaseFsck that make it unnecessarily slow, 
> especially for large tables or clusters with many regions.
> This patch tries to fix the biggest bottlenecks and also includes a couple of 
> bug fixes for race conditions caused by gathering and holding state about a 
> live cluster that is no longer true by the time that state is used in Fsck 
> processing. These race conditions cause Fsck to crash and become unusable on 
> large clusters with lots of region splits/merges.
> Here are some scalability/performance problems in HBaseFsck and the changes 
> the patch makes:
> - Unnecessary I/O and RPCs caused by fetching an array of FileStatuses and 
> then discarding everything but the Paths, then passing the Paths to a 
> PathFilter, and then having the filter look up the (previously discarded) 
> FileStatuses of the paths again.  This is actually worse than double I/O 
> because the first lookup obtains a batch of FileStatuses while all the other 
> lookups are individual RPCs performed sequentially.
> -- Avoid this by adding a FileStatusFilter so that filtering can happen 
> directly on FileStatuses
> -- This performance bug affects more than Fsck, but also to some extent 
> things like snapshots, hfile archival, etc.  I didn't have time to look too 
> deep into other things affected and didn't want to increase the scope of this 
> ticket so I focus mostly on Fsck and make only a few improvements to other 
> codepaths.  The changes in this patch though should make it fairly easy to 
> fix other code paths in later jiras if we feel there are some other features 
> strongly impacted by this problem.  
> - OfflineReferenceFileRepair is the most expensive part of Fsck (often 50% of 
> Fsck runtime) and the running time scales with the number of store files, yet 
> the function is completely serial
> -- Make offlineReferenceFileRepair multithreaded
> - LoadHdfsRegionDirs() uses table-level concurrency, which is a big 
> bottleneck if you have 1 large cluster with 1 very large table that has 
> nearly all the regions
> -- Change loadHdfsRegionDirs() to region-level parallelism instead of 
> table-level parallelism for operations.
> The changes benefit all clusters but are especially noticeable for large 
> clusters with a few very large tables.  On our version of 0.98 with the 
> original patch we had a moderately sized production cluster with 2 (user) 
> tables and ~160k regions where HBaseFsck went from taking 18 minutes to 5 minutes.
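The double-lookup problem described above (filter on Paths, then re-fetch the discarded FileStatuses one RPC at a time) can be sketched with stand-in types. {{FileStatus}} and the two filter interfaces here are simplified stand-ins for Hadoop's classes, not the real API; the point is only that a status-based filter works directly on the batch listing:

```java
import java.util.ArrayList;
import java.util.List;

public class FsckFilterSketch {
    // Simplified stand-in for Hadoop's FileStatus.
    static class FileStatus {
        final String path;
        final boolean isDir;
        FileStatus(String path, boolean isDir) { this.path = path; this.isDir = isDir; }
    }

    // Old style: filter sees only the Path, forcing a second per-path lookup
    // (an individual RPC in HDFS) to recover the discarded FileStatus.
    interface PathFilter { boolean accept(String path); }

    // New style: filter directly on the FileStatus from the batch listing,
    // so no extra round trip is needed.
    interface FileStatusFilter { boolean accept(FileStatus status); }

    static List<FileStatus> filter(FileStatus[] listing, FileStatusFilter f) {
        List<FileStatus> out = new ArrayList<>();
        for (FileStatus s : listing) {
            if (f.accept(s)) out.add(s);   // zero additional lookups
        }
        return out;
    }

    public static void main(String[] args) {
        FileStatus[] listing = {
            new FileStatus("/hbase/data/t1", true),
            new FileStatus("/hbase/data/t1/r1/cf/hfile1", false)
        };
        // Keep only directories; only the first entry survives.
        System.out.println(filter(listing, s -> s.isDir).size());
    }
}
```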





[jira] [Updated] (HBASE-16062) Improper error handling in WAL Reader/Writer creation

2016-06-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16062:
---
Attachment: 16062.branch-1.txt

Patch for branch-1

> Improper error handling in WAL Reader/Writer creation
> -
>
> Key: HBASE-16062
> URL: https://issues.apache.org/jira/browse/HBASE-16062
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: 16062.branch-1.txt, HBASE-16062-v1.patch, 
> HBASE-16062-v2.patch
>
>
> If creation of a WAL reader/writer fails for some reason, the RS may leak a 
> hanging socket in CLOSE_WAIT state. 





[jira] [Updated] (HBASE-16073) update compatibility_checker for jacc dropping comma sep args

2016-06-21 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16073:

   Resolution: Fixed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

thanks Dima!

> update compatibility_checker for jacc dropping comma sep args
> -
>
> Key: HBASE-16073
> URL: https://issues.apache.org/jira/browse/HBASE-16073
> Project: HBase
>  Issue Type: Task
>  Components: build, documentation
>Reporter: Sean Busbey
>Assignee: Dima Spivak
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16073_v1.patch, HBASE-16073_v2.patch
>
>
> the japi-compliance-checker has a change in place (post the 1.7 release) that 
> removes the ability to give a comma separated list of jars on the cli.
> we should switch to generating descriptor xml docs since that will still be 
> supported, or update to use the expanded tooling suggested in the issue:
> https://github.com/lvc/japi-compliance-checker/issues/27





[jira] [Updated] (HBASE-14090) Redo FS layout; let go of tables/regions/stores directory hierarchy in DFS

2016-06-21 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14090:

Priority: Critical  (was: Major)

> Redo FS layout; let go of tables/regions/stores directory hierarchy in DFS
> --
>
> Key: HBASE-14090
> URL: https://issues.apache.org/jira/browse/HBASE-14090
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Critical
>
> Our layout as is won't work with 1M regions; e.g. HDFS will fall over if 
> directories contain hundreds of thousands of files. HBASE-13991 (Humongous Tables) 
> would address this specific directory problem only by adding subdirs under 
> table dir but there are other issues with our current layout:
>  * Our table/regions/column family 'facade' has to be maintained in two 
> locations -- in master memory and in the hdfs directory layout -- and the 
> farce needs to be kept synced or worse, the model management is split between 
> master memory and DFS layout. 'Syncing' in HDFS has us dropping constructs 
> such as 'Reference' and 'HalfHFiles' on split, 'HFileLinks' when archiving, 
> and so on. This 'tie' makes it hard to make changes.
>  * While HDFS has atomic rename, useful for fencing and for having files 
> added atomically, if the model were solely owned by hbase, there are hbase 
> primitives we could make use of -- changes in a row are atomic and 
> coprocessors -- to simplify table transactions and provide more consistent 
> views of our model to clients; file 'moves' could be a memory operation only 
> rather than an HDFS call; sharing files between tables/snapshots and when it 
> is safe to remove them would be simplified if one owner only; and so on.
> This is an umbrella blue-sky issue to discuss what a new layout would look 
> like and how we might get there. I'll follow up with some sketches of what 
> new layout could look like that come of some chats a few of us have been 
> having. We are also under the 'delusion' that a move to a new layout could be 
> done as part of a rolling upgrade and that the amount of work involved is not 
> gargantuan.





[jira] [Updated] (HBASE-14090) Redo FS layout; let go of tables/regions/stores directory hierarchy in DFS

2016-06-21 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14090:

Priority: Major  (was: Critical)

> Redo FS layout; let go of tables/regions/stores directory hierarchy in DFS
> --
>
> Key: HBASE-14090
> URL: https://issues.apache.org/jira/browse/HBASE-14090
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Sean Busbey
>
> Our layout as is won't work with 1M regions; e.g. HDFS will fall over if 
> directories contain hundreds of thousands of files. HBASE-13991 (Humongous Tables) 
> would address this specific directory problem only by adding subdirs under 
> table dir but there are other issues with our current layout:
>  * Our table/regions/column family 'facade' has to be maintained in two 
> locations -- in master memory and in the hdfs directory layout -- and the 
> farce needs to be kept synced or worse, the model management is split between 
> master memory and DFS layout. 'Syncing' in HDFS has us dropping constructs 
> such as 'Reference' and 'HalfHFiles' on split, 'HFileLinks' when archiving, 
> and so on. This 'tie' makes it hard to make changes.
>  * While HDFS has atomic rename, useful for fencing and for having files 
> added atomically, if the model were solely owned by hbase, there are hbase 
> primitives we could make use of -- changes in a row are atomic and 
> coprocessors -- to simplify table transactions and provide more consistent 
> views of our model to clients; file 'moves' could be a memory operation only 
> rather than an HDFS call; sharing files between tables/snapshots and when it 
> is safe to remove them would be simplified if one owner only; and so on.
> This is an umbrella blue-sky issue to discuss what a new layout would look 
> like and how we might get there. I'll follow up with some sketches of what 
> new layout could look like that come of some chats a few of us have been 
> having. We are also under the 'delusion' that a move to a new layout could be 
> done as part of a rolling upgrade and that the amount of work involved is not 
> gargantuan.





[jira] [Assigned] (HBASE-14090) Redo FS layout; let go of tables/regions/stores directory hierarchy in DFS

2016-06-21 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-14090:
---

Assignee: Sean Busbey

> Redo FS layout; let go of tables/regions/stores directory hierarchy in DFS
> --
>
> Key: HBASE-14090
> URL: https://issues.apache.org/jira/browse/HBASE-14090
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Sean Busbey
>
> Our layout as is won't work with 1M regions; e.g. HDFS will fall over if 
> directories contain hundreds of thousands of files. HBASE-13991 (Humongous Tables) 
> would address this specific directory problem only by adding subdirs under 
> table dir but there are other issues with our current layout:
>  * Our table/regions/column family 'facade' has to be maintained in two 
> locations -- in master memory and in the hdfs directory layout -- and the 
> farce needs to be kept synced or worse, the model management is split between 
> master memory and DFS layout. 'Syncing' in HDFS has us dropping constructs 
> such as 'Reference' and 'HalfHFiles' on split, 'HFileLinks' when archiving, 
> and so on. This 'tie' makes it hard to make changes.
>  * While HDFS has atomic rename, useful for fencing and for having files 
> added atomically, if the model were solely owned by hbase, there are hbase 
> primitives we could make use of -- changes in a row are atomic and 
> coprocessors -- to simplify table transactions and provide more consistent 
> views of our model to clients; file 'moves' could be a memory operation only 
> rather than an HDFS call; sharing files between tables/snapshots and when it 
> is safe to remove them would be simplified if one owner only; and so on.
> This is an umbrella blue-sky issue to discuss what a new layout would look 
> like and how we might get there. I'll follow up with some sketches of what 
> new layout could look like that come of some chats a few of us have been 
> having. We are also under the 'delusion' that a move to a new layout could be 
> done as part of a rolling upgrade and that the amount of work involved is not 
> gargantuan.





[jira] [Commented] (HBASE-15870) Specify columns in REST multi gets

2016-06-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342568#comment-15342568
 ] 

Sean Busbey commented on HBASE-15870:
-

{quote}
Will this feature be available on the 1.3 release then?
{quote}

Yep, it's in branch-1.3 now.

> Specify columns in REST multi gets
> --
>
> Key: HBASE-15870
> URL: https://issues.apache.org/jira/browse/HBASE-15870
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Dean Gurvitz
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: hbase-15870-v1.patch
>
>
> The REST multi-gets feature currently does not allow specifying only certain 
> columns or column families. Adding support for these should be quite simple 
> and improve the usability of the multi-gets feature.





[jira] [Updated] (HBASE-16036) Fix ReplicationTableBase initialization to make it non-blocking

2016-06-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-16036:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> Fix ReplicationTableBase initialization to make it non-blocking
> ---
>
> Key: HBASE-16036
> URL: https://issues.apache.org/jira/browse/HBASE-16036
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Joseph
> Fix For: 2.0.0
>
> Attachments: HBASE-16036.patch
>
>
> Currently there is a bug inside of TableBasedReplicationQueuesImpl 
> construction that prevents ReplicationServices from starting before Master is 
> initialized. So currently each of the RS, including HMaster, with Replication 
> enabled will attempt to create the ReplicationTable on initialization. 
> Currently HMaster's initialization: serviceThreads.start() -> new 
> TableBasedReplicationQueuesImpl() -> Replication Table Creation -> HMaster 
> sets initialized flags.
> But this fails when we try to create the Replication Table as the 
> HMaster.checkInitialized() flag fails. This ends up blocking HMaster 
> initialization and results in a deadlock.
> So in this patch, I will create the Replication Table in the background of 
> TableBasedReplicationQueuesImpl and only block when we actually call methods 
> that access it.
> This also requires a small refactoring of ReplicationSourceManager.init() so 
> that we run the abandoned queue adoption in the background.
> Review board at: https://reviews.apache.org/r/48763/
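The block-only-on-access pattern described above can be sketched with java.util.concurrent. This is an illustrative toy, not the patch itself: the class and method names are stand-ins for {{TableBasedReplicationQueuesImpl}}'s real API, and the string stands in for the slow Replication Table creation:

```java
import java.util.concurrent.CompletableFuture;

public class BackgroundInitSketch {
    // Kick off table creation asynchronously in the constructor instead of
    // blocking start-up on it (which is what caused the HMaster deadlock).
    private final CompletableFuture<String> replicationTable;

    BackgroundInitSketch() {
        this.replicationTable = CompletableFuture.supplyAsync(() -> {
            // Stands in for the (possibly slow) Replication Table creation,
            // which has to wait until the master is initialized.
            return "replication-table";
        });
    }

    // Only methods that actually touch the table block on initialization.
    String getQueue(String queueId) {
        String table = replicationTable.join(); // waits here if not yet created
        return table + "/" + queueId;
    }

    public static void main(String[] args) {
        BackgroundInitSketch impl = new BackgroundInitSketch(); // returns immediately
        System.out.println(impl.getQueue("peer-1"));
    }
}
```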





[jira] [Commented] (HBASE-16036) Fix ReplicationTableBase initialization to make it non-blocking

2016-06-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342535#comment-15342535
 ] 

Elliott Clark commented on HBASE-16036:
---

+1

> Fix ReplicationTableBase initialization to make it non-blocking
> ---
>
> Key: HBASE-16036
> URL: https://issues.apache.org/jira/browse/HBASE-16036
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Joseph
> Attachments: HBASE-16036.patch
>
>
> Currently there is a bug inside of TableBasedReplicationQueuesImpl 
> construction that prevents ReplicationServices from starting before Master is 
> initialized. So currently each of the RS, including HMaster, with Replication 
> enabled will attempt to create the ReplicationTable on initialization. 
> Currently HMaster's initialization: serviceThreads.start() -> new 
> TableBasedReplicationQueuesImpl() -> Replication Table Creation -> HMaster 
> sets initialized flags.
> But this fails when we try to create the Replication Table as the 
> HMaster.checkInitialized() flag fails. This ends up blocking HMaster 
> initialization and results in a deadlock.
> So in this patch, I will create the Replication Table in the background of 
> TableBasedReplicationQueuesImpl and only block when we actually call methods 
> that access it.
> This also requires a small refactoring of ReplicationSourceManager.init() so 
> that we run the abandoned queue adoption in the background.
> Review board at: https://reviews.apache.org/r/48763/





[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342522#comment-15342522
 ] 

Hudson commented on HBASE-16032:


FAILURE: Integrated in HBase-1.2-IT #537 (See 
[https://builds.apache.org/job/HBase-1.2-IT/537/])
HBASE-16032 Possible memory leak in StoreScanner (liyu: rev 
f8762efc0e041ba63c745bfa3ccb9f06d2010699)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GCs of RS in our production environment, and after 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 7500w (75 
> million) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If any Exception is thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> as in {{HRegion#get}}:
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> cause scanner==null and then a memory leak, so we also need to handle this part.
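The register-then-clean-up shape of the fix can be sketched with toy classes. These are deliberately minimal stand-ins, not HBase's real Store/StoreScanner; the point is only that a constructor which registers itself as an observer must deregister before propagating any exception:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class ObserverLeakSketch {
    // Toy model of HStore#changedReaderObservers.
    static class Store {
        final Set<Object> changedReaderObservers = new HashSet<>();
    }

    static class StoreScanner {
        StoreScanner(Store store, boolean failAfterRegister) throws IOException {
            store.changedReaderObservers.add(this); // register first, as in the real code
            try {
                if (failAfterRegister) {
                    // Models getScannersNoCompaction()/seekScanners() throwing
                    // after the observer was already registered.
                    throw new IOException("seek failed");
                }
            } catch (IOException e) {
                // The clean-up: deregister before propagating, so a failed
                // construction leaves no observer behind to pin memory.
                store.changedReaderObservers.remove(this);
                throw e;
            }
        }
    }

    public static void main(String[] args) {
        Store store = new Store();
        try {
            new StoreScanner(store, true);
        } catch (IOException expected) {
            // construction failed, but nothing leaked
        }
        System.out.println(store.changedReaderObservers.size()); // 0
    }
}
```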





[jira] [Commented] (HBASE-16051) TestScannerHeartbeatMessages fails on some machines

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342517#comment-15342517
 ] 

Hudson commented on HBASE-16051:


FAILURE: Integrated in HBase-Trunk_matrix #1089 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1089/])
HBASE-16051 TestScannerHeartbeatMessages fails on some machines (Phil (tedyu: 
rev b006e41a37de7f87b91d73407978f6b09b12a15b)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerHeartbeatMessages.java


> TestScannerHeartbeatMessages fails on some machines
> ---
>
> Key: HBASE-16051
> URL: https://issues.apache.org/jira/browse/HBASE-16051
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16051-v1.patch
>
>
> I can see below on my Linux box (reproduces consistently). It passes on OSX 
> laptop.
>  T E S T S
> ---
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> Running org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.907 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> testScannerHeartbeatMessages(org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages)
>   Time elapsed: 12.95 sec  <<< FAILURE!
> java.lang.AssertionError: Heartbeats messages are disabled, an exception 
> should be thrown. If an exception  is not thrown, the test case is not 
> testing the importance of heartbeat messages
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testImportanceOfHeartbeats(TestScannerHeartbeatMessages.java:233)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testScannerHeartbeatMessages(TestScannerHeartbeatMessages.java:204)
> Results :
> Failed tests:
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:204->testImportanceOfHeartbeats:233
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0





[jira] [Commented] (HBASE-15870) Specify columns in REST multi gets

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342521#comment-15342521
 ] 

Hudson commented on HBASE-15870:


FAILURE: Integrated in HBase-1.2-IT #537 (See 
[https://builds.apache.org/job/HBase-1.2-IT/537/])
Revert "HBASE-15870 Specify columns in REST multi gets (Matt Warhaftig)" 
(jerryjch: rev 65738a8353cd4b65b5204e42048ef484bd7ca8f3)
* hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/MultiRowResource.java
* 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestMultiRowResource.java
* hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java


> Specify columns in REST multi gets
> --
>
> Key: HBASE-15870
> URL: https://issues.apache.org/jira/browse/HBASE-15870
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Dean Gurvitz
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: hbase-15870-v1.patch
>
>
> The REST multi-gets feature currently does not allow specifying only certain 
> columns or column families. Adding support for these should be quite simple 
> and improve the usability of the multi-gets feature.





[jira] [Commented] (HBASE-15870) Specify columns in REST multi gets

2016-06-21 Thread Dean Gurvitz (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342493#comment-15342493
 ] 

Dean Gurvitz commented on HBASE-15870:
--

Will this feature be available on the 1.3 release then?

> Specify columns in REST multi gets
> --
>
> Key: HBASE-15870
> URL: https://issues.apache.org/jira/browse/HBASE-15870
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Dean Gurvitz
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: hbase-15870-v1.patch
>
>
> The REST multi-gets feature currently does not allow specifying only certain 
> columns or column families. Adding support for these should be quite simple 
> and improve the usability of the multi-gets feature.





[jira] [Commented] (HBASE-16051) TestScannerHeartbeatMessages fails on some machines

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342455#comment-15342455
 ] 

Hudson commented on HBASE-16051:


SUCCESS: Integrated in HBase-1.3-IT #723 (See 
[https://builds.apache.org/job/HBase-1.3-IT/723/])
HBASE-16051 TestScannerHeartbeatMessages fails on some machines (Phil (tedyu: 
rev c8501d80096a845e13ce06f36ebf536415b3c7c8)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerHeartbeatMessages.java


> TestScannerHeartbeatMessages fails on some machines
> ---
>
> Key: HBASE-16051
> URL: https://issues.apache.org/jira/browse/HBASE-16051
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16051-v1.patch
>
>
> I can see below on my Linux box (reproduces consistently). It passes on OSX 
> laptop.
>  T E S T S
> ---
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> Running org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.907 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> testScannerHeartbeatMessages(org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages)
>   Time elapsed: 12.95 sec  <<< FAILURE!
> java.lang.AssertionError: Heartbeats messages are disabled, an exception 
> should be thrown. If an exception  is not thrown, the test case is not 
> testing the importance of heartbeat messages
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testImportanceOfHeartbeats(TestScannerHeartbeatMessages.java:233)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testScannerHeartbeatMessages(TestScannerHeartbeatMessages.java:204)
> Results :
> Failed tests:
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:204->testImportanceOfHeartbeats:233
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0





[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342456#comment-15342456
 ] 

Hudson commented on HBASE-16032:


SUCCESS: Integrated in HBase-1.3-IT #723 (See 
[https://builds.apache.org/job/HBase-1.3-IT/723/])
HBASE-16032 Possible memory leak in StoreScanner (liyu: rev 
0cbad9c890b65b1409be741f55370b7a387f5764)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GCs of RS in our production environment, and after 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 7500w (75 
> million) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If any Exception is thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> as in {{HRegion#get}}:
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> cause scanner==null and then a memory leak, so we also need to handle this part.





[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tine families

2016-06-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342442#comment-15342442
 ] 

Elliott Clark commented on HBASE-16074:
---

Current thought is that there's something weird going on in the scan code. Any 
help from people who have recently touched scanners would be appreciated.

> ITBLL fails, reports lost big or tine families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue, or an env setup 
> issue, but we need to figure it out. Opening this to raise awareness and see if 
> someone has seen this recently.





[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tine families

2016-06-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342438#comment-15342438
 ] 

Elliott Clark commented on HBASE-16074:
---

So we had a run like this:

{code}
REFERENCED  0   1,800,000,000   1,800,000,000
UNREFERENCED0   76  76
{code}


That is the correct number of referenced rows, but there shouldn't be any 
unreferenced ones. So we went into the logs and found:

{code}
2016-06-21 04:28:43,314 WARN [main] 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Verify: Prev is not 
set for: Y\xD3\x16t\xC5\x9D1@
{code}

That row key looks really weird. It's less than the length we would expect.

However it is the split point for a region:
{code}
IntegrationTestBigLinkedList.11,Y\xD3\x16t\xC5\x9D1@,1466506812220.15898a252e1b54728dd44a2b13fca290.

{code}

Going into the shell, that row does not exist:

{code}
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.3.0-fb10-SNAPSHOT, rd8d63d67152af8eed48f8863a0e13d3e71fc097c, Fri Jun 
10 16:59:00 PDT 2016

hbase(main):001:0> get 'IntegrationTestBigLinkedList.11', "Y\xD3\x16t\xC5\x9D1@"
COLUMN  
 CELL
0 row(s) in 0.3390 seconds
{code}

That got us very worried about data loss, so we re-ran the verify step. After 
stopping the chaos monkey and letting everything settle, we got a clean verify 
step.

{code}
REFERENCED  0   1,800,000,000   1,800,000,000
{code}


> ITBLL fails, reports lost big or tine families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but we need to figure it out. Opening this to raise awareness and see if 
> someone saw that recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342421#comment-15342421
 ] 

Yu Li commented on HBASE-16012:
---

Yes, this needs to be rebased since HBASE-16032 is already in. JFYI [~zghaobac]

> Major compaction can't work because left scanner read point in RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>Reporter: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16012-v1.patch, HBASE-16012-v2.patch, 
> HBASE-16012-v3.patch, HBASE-16012-v4.patch, HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if we get an exception after the read point is added, 
> the read point will stay in the region server and deletes after this mvcc 
> number will never be compacted.
> Our HBase version is based on 0.94. If any other exception is thrown while 
> initializing the RegionScanner, the master branch has this bug, too.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342420#comment-15342420
 ] 

Yu Li commented on HBASE-16032:
---

Checking the failed case:
{noformat}
org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testScannerHeartbeatMessages
{noformat}
and confirmed it's not related to the change here; it passes locally with 
HBASE-16051 applied.

> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GC of RS in our production environment, and after 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 75 million 
> (7500w) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If there's any Exception thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> like in {{HRegion#get}}
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> cause scanner==null and then a memory leak, so we also need to handle this part.
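The essence of the leak is an observer registered in a constructor that can still fail, leaving no reference for any caller to close. A self-contained toy model of the defensive pattern (hypothetical names, not the committed HBase patch):

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

// Toy model of the store's observer set; in HBase this is
// HStore#changedReaderObservers, which grew without bound.
class ToyStore {
    final Set<Object> changedReaderObservers = new HashSet<>();
}

// The scanner registers itself as an observer in its constructor. If
// construction then fails, no caller ever gets a reference to close(),
// so the registration must be undone before the exception propagates.
class LeakFreeScanner {
    LeakFreeScanner(ToyStore store, boolean failSeek) throws IOException {
        store.changedReaderObservers.add(this);
        try {
            if (failSeek) throw new IOException("Could not seek StoreFileScanner");
        } catch (IOException e) {
            store.changedReaderObservers.remove(this); // undo registration before rethrow
            throw e;
        }
    }
}

public class ObserverLeakDemo {
    public static void main(String[] args) {
        ToyStore store = new ToyStore();
        try { new LeakFreeScanner(store, true); } catch (IOException ignored) {}
        System.out.println(store.changedReaderObservers.size()); // 0, nothing leaked
    }
}
```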



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-16032:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

No problem, thanks for the confirmation [~enis]

Pushed change into all branches (master, branch-1, branch-1.1, branch-1.2, 
branch-1.3 and 0.98). Closing issue.

> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GC of RS in our production environment, and after 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 75 million 
> (7500w) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If there's any Exception thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> like in {{HRegion#get}}
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> cause scanner==null and then a memory leak, so we also need to handle this part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342365#comment-15342365
 ] 

Yu Li commented on HBASE-16032:
---

Checked the failed cases:
{noformat}
Test Result (Fail)
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient.testCloneOnMissingNamespace
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient.testRestoreSnapshot
org.apache.hadoop.hbase.client.TestReplicationShell.testRunShellTests
{noformat}
and confirmed none are related to the change here. All pass in a local run.

> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GC of RS in our production environment, and after 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 75 million 
> (7500w) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If there's any Exception thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> like in {{HRegion#get}}
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> cause scanner==null and then a memory leak, so we also need to handle this part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342337#comment-15342337
 ] 

Enis Soztutar commented on HBASE-16032:
---

Belated +1. 

> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GC of RS in our production environment, and after 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 75 million 
> (7500w) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If there's any Exception thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> like in {{HRegion#get}}
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> cause scanner==null and then a memory leak, so we also need to handle this part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13701) Consolidate SecureBulkLoadEndpoint into HBase core as default for bulk load

2016-06-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342327#comment-15342327
 ] 

Enis Soztutar commented on HBASE-13701:
---

bq. both non-secure and secure protobuf messages.
We can add a new parameter to the bulkload protobuf message to indicate that 
newer clients know about the new data flow. 

> Consolidate SecureBulkLoadEndpoint into HBase core as default for bulk load
> ---
>
> Key: HBASE-13701
> URL: https://issues.apache.org/jira/browse/HBASE-13701
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
>
> HBASE-12052 makes SecureBulkLoadEndpoint work in a non-secure env to solve 
> HDFS permission issues.
> We have encountered some of these permission issues and had to use 
> SecureBulkLoadEndpoint to work around them.
> We should probably consolidate SecureBulkLoadEndpoint into HBase core as the 
> default for bulk load, since it is able to handle both the secure Kerberos and 
> non-secure cases.
> Maintaining two versions of the bulk load implementation is also a cause of 
> confusion, and having to explicitly set it is also inconvenient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13701) Consolidate SecureBulkLoadEndpoint into HBase core as default for bulk load

2016-06-21 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-13701:
--
Fix Version/s: 2.0.0

> Consolidate SecureBulkLoadEndpoint into HBase core as default for bulk load
> ---
>
> Key: HBASE-13701
> URL: https://issues.apache.org/jira/browse/HBASE-13701
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
>
> HBASE-12052 makes SecureBulkLoadEndpoint work in a non-secure env to solve 
> HDFS permission issues.
> We have encountered some of these permission issues and had to use 
> SecureBulkLoadEndpoint to work around them.
> We should probably consolidate SecureBulkLoadEndpoint into HBase core as the 
> default for bulk load, since it is able to handle both the secure Kerberos and 
> non-secure cases.
> Maintaining two versions of the bulk load implementation is also a cause of 
> confusion, and having to explicitly set it is also inconvenient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342321#comment-15342321
 ] 

Hudson commented on HBASE-16032:


FAILURE: Integrated in HBase-1.4 #233 (See 
[https://builds.apache.org/job/HBase-1.4/233/])
HBASE-16032 Possible memory leak in StoreScanner (liyu: rev 
c6b8c9bb02da8fb0a4b6ae4d610b2c11c4ce1655)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GC of RS in our production environment, and after 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 75 million 
> (7500w) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If there's any Exception thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> like in {{HRegion#get}}
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> cause scanner==null and then a memory leak, so we also need to handle this part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15870) Specify columns in REST multi gets

2016-06-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342294#comment-15342294
 ] 

Sean Busbey commented on HBASE-15870:
-

Thanks Jerry!

> Specify columns in REST multi gets
> --
>
> Key: HBASE-15870
> URL: https://issues.apache.org/jira/browse/HBASE-15870
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Dean Gurvitz
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: hbase-15870-v1.patch
>
>
> The REST multi-gets feature currently does not allow specifying only certain 
> columns or column families. Adding support for these should be quite simple 
> and improve the usability of the multi-gets feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16052) Improve HBaseFsck Scalability

2016-06-21 Thread Ben Lau (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342282#comment-15342282
 ] 

Ben Lau commented on HBASE-16052:
-

[~jmhsieh] and [~jxiang] Any comments on this patch?  Just checking since you 
guys are listed as the owners for the HBaseFsck component on 
https://issues.apache.org/jira/browse/HBASE/?selectedTab=com.atlassian.jira.jira-projects-plugin:components-panel.

> Improve HBaseFsck Scalability
> -
>
> Key: HBASE-16052
> URL: https://issues.apache.org/jira/browse/HBASE-16052
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Reporter: Ben Lau
> Attachments: HBASE-16052-master.patch
>
>
> There are some problems with HBaseFsck that make it unnecessarily slow, 
> especially for large tables or clusters with many regions.  
> This patch tries to fix the biggest bottlenecks and also include a couple of 
> bug fixes for some of the race conditions caused by gathering and holding 
> state about a live cluster that is no longer true by the time you use that 
> state in Fsck processing.  These race conditions cause Fsck to crash and 
> become unusable on large clusters with lots of region splits/merges.
> Here are some scalability/performance problems in HBaseFsck and the changes 
> the patch makes:
> - Unnecessary I/O and RPCs caused by fetching an array of FileStatuses and 
> then discarding everything but the Paths, then passing the Paths to a 
> PathFilter, and then having the filter look up the (previously discarded) 
> FileStatuses of the paths again.  This is actually worse than double I/O 
> because the first lookup obtains a batch of FileStatuses while all the other 
> lookups are individual RPCs performed sequentially.
> -- Avoid this by adding a FileStatusFilter so that filtering can happen 
> directly on FileStatuses
> -- This performance bug affects more than Fsck, but also to some extent 
> things like snapshots, hfile archival, etc.  I didn't have time to look too 
> deep into other things affected and didn't want to increase the scope of this 
> ticket so I focus mostly on Fsck and make only a few improvements to other 
> codepaths.  The changes in this patch though should make it fairly easy to 
> fix other code paths in later jiras if we feel there are some other features 
> strongly impacted by this problem.  
> - OfflineReferenceFileRepair is the most expensive part of Fsck (often 50% of 
> Fsck runtime) and the running time scales with the number of store files, yet 
> the function is completely serial
> -- Make offlineReferenceFileRepair multithreaded
> - LoadHdfsRegionDirs() uses table-level concurrency, which is a big 
> bottleneck if you have 1 large cluster with 1 very large table that has 
> nearly all the regions
> -- Change loadHdfsRegionDirs() to region-level parallelism instead of 
> table-level parallelism for operations.
> The changes benefit all clusters but are especially noticeable for large 
> clusters with a few very large tables.  On our version of 0.98 with the 
> original patch we had a moderately sized production cluster with 2 (user) 
> tables and ~160k regions where HBaseFsck went from taking 18 min to 5 minutes.
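The first bullet, filtering on already-fetched FileStatuses instead of re-stat-ing each Path, can be sketched as follows (hypothetical shapes, not the actual Hadoop/HBase API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the FileStatusFilter idea: filter directly on the FileStatus
// objects returned by one batched listing, instead of discarding them and
// re-fetching a FileStatus per Path inside a PathFilter (one sequential
// RPC per path).
public class FileStatusFilterDemo {
    static class FileStatus {
        final String path;
        final boolean isDirectory;
        FileStatus(String path, boolean isDirectory) {
            this.path = path;
            this.isDirectory = isDirectory;
        }
    }

    interface FileStatusFilter {
        boolean accept(FileStatus status); // no extra RPC: the status is already in hand
    }

    static List<FileStatus> listStatus(List<FileStatus> batch, FileStatusFilter filter) {
        List<FileStatus> out = new ArrayList<>();
        for (FileStatus fs : batch) {
            if (filter.accept(fs)) {
                out.add(fs); // one pass over the already-fetched batch
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<FileStatus> batch = Arrays.asList(
            new FileStatus("/hbase/data/t1/region1", true),
            new FileStatus("/hbase/data/t1/.tmp", true),
            new FileStatus("/hbase/data/t1/file1", false));
        // Keep region directories only, without a per-path getFileStatus() call.
        List<FileStatus> dirs =
            listStatus(batch, fs -> fs.isDirectory && !fs.path.endsWith(".tmp"));
        System.out.println(dirs.size()); // 1
    }
}
```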



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-06-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342284#comment-15342284
 ] 

Elliott Clark commented on HBASE-16074:
---

A little more information on this: it looks worse than previously thought. 
We've had tests on one cluster come up with unreferenced rows as well.

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but we need to figure it out. Opening this to raise awareness and see if 
> someone saw that recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14921) Memory optimizations

2016-06-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342183#comment-15342183
 ] 

ramkrishna.s.vasudevan commented on HBASE-14921:


[~anastas] 
Have you tried with a pure write workload? And how many threads did you use in 
your evaluation?

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, HBASE-14921-V04-CA.patch, 
> InitialCellArrayMapEvaluation.pdf, IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16051) TestScannerHeartbeatMessages fails on some machines

2016-06-21 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342129#comment-15342129
 ] 

Mikhail Antonov commented on HBASE-16051:
-

Thanks [~yangzhe1991] and [~tedyu] for quick fix and review!

> TestScannerHeartbeatMessages fails on some machines
> ---
>
> Key: HBASE-16051
> URL: https://issues.apache.org/jira/browse/HBASE-16051
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16051-v1.patch
>
>
> I can see the failure below on my Linux box (it reproduces consistently). It 
> passes on an OSX laptop.
>  T E S T S
> ---
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> Running org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.907 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> testScannerHeartbeatMessages(org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages)
>   Time elapsed: 12.95 sec  <<< FAILURE!
> java.lang.AssertionError: Heartbeats messages are disabled, an exception 
> should be thrown. If an exception  is not thrown, the test case is not 
> testing the importance of heartbeat messages
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testImportanceOfHeartbeats(TestScannerHeartbeatMessages.java:233)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testScannerHeartbeatMessages(TestScannerHeartbeatMessages.java:204)
> Results :
> Failed tests:
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:204->testImportanceOfHeartbeats:233
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15870) Specify columns in REST multi gets

2016-06-21 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342130#comment-15342130
 ] 

Jerry He commented on HBASE-15870:
--

Ok. Reverted it from 1.2

> Specify columns in REST multi gets
> --
>
> Key: HBASE-15870
> URL: https://issues.apache.org/jira/browse/HBASE-15870
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Dean Gurvitz
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: hbase-15870-v1.patch
>
>
> The REST multi-gets feature currently does not allow specifying only certain 
> columns or column families. Adding support for these should be quite simple 
> and improve the usability of the multi-gets feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15870) Specify columns in REST multi gets

2016-06-21 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15870:
-
Fix Version/s: (was: 1.2.2)

> Specify columns in REST multi gets
> --
>
> Key: HBASE-15870
> URL: https://issues.apache.org/jira/browse/HBASE-15870
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Dean Gurvitz
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: hbase-15870-v1.patch
>
>
> The REST multi-gets feature currently does not allow specifying only certain 
> columns or column families. Adding support for these should be quite simple 
> and improve the usability of the multi-gets feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16051) TestScannerHeartbeatMessages fails on some machines

2016-06-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16051:
---
Issue Type: Test  (was: Bug)

> TestScannerHeartbeatMessages fails on some machines
> ---
>
> Key: HBASE-16051
> URL: https://issues.apache.org/jira/browse/HBASE-16051
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16051-v1.patch
>
>
> I can see below on my Linux box (reproduces consistently). It passes on OSX 
> laptop.
>  T E S T S
> ---
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> Running org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.907 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> testScannerHeartbeatMessages(org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages)
>   Time elapsed: 12.95 sec  <<< FAILURE!
> java.lang.AssertionError: Heartbeats messages are disabled, an exception 
> should be thrown. If an exception  is not thrown, the test case is not 
> testing the importance of heartbeat messages
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testImportanceOfHeartbeats(TestScannerHeartbeatMessages.java:233)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testScannerHeartbeatMessages(TestScannerHeartbeatMessages.java:204)
> Results :
> Failed tests:
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:204->testImportanceOfHeartbeats:233
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0





[jira] [Updated] (HBASE-16051) TestScannerHeartbeatMessages fails on some machines

2016-06-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16051:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Phil.

> TestScannerHeartbeatMessages fails on some machines
> ---
>
> Key: HBASE-16051
> URL: https://issues.apache.org/jira/browse/HBASE-16051
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16051-v1.patch
>
>
> I can see below on my Linux box (reproduces consistently). It passes on OSX 
> laptop.
>  T E S T S
> ---
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> Running org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.907 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> testScannerHeartbeatMessages(org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages)
>   Time elapsed: 12.95 sec  <<< FAILURE!
> java.lang.AssertionError: Heartbeats messages are disabled, an exception 
> should be thrown. If an exception  is not thrown, the test case is not 
> testing the importance of heartbeat messages
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testImportanceOfHeartbeats(TestScannerHeartbeatMessages.java:233)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testScannerHeartbeatMessages(TestScannerHeartbeatMessages.java:204)
> Results :
> Failed tests:
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:204->testImportanceOfHeartbeats:233
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0





[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341961#comment-15341961
 ] 

Hudson commented on HBASE-16032:


FAILURE: Integrated in HBase-Trunk_matrix #1088 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1088/])
HBASE-16032 Possible memory leak in StoreScanner (liyu: rev 
471f942ec8a11bb5de022f8a05af93e9c0082457)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch, HBASE-16032_v4.patch
>
>
> We observed frequent full GCs of RegionServers in our production environment, 
> and after analyzing the heap dump we found large memory occupancy by 
> HStore#changedReaderObservers: the set surprisingly contained 75 million 
> objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet<byte[]> columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List<KeyValueScanner> scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If any exception is thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner will be 
> null and there is no chance to remove the scanner from 
> changedReaderObservers, as in {{HRegion#get}}:
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path leaves 
> scanner==null and therefore leaks memory, so we also need to handle that part.
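
The register-then-fail pattern described above can be sketched in miniature. The classes below are simplified stand-ins, not the real HBase types; they only illustrate why deregistering the observer in a catch block before rethrowing fixes the leak:

```java
import java.util.HashSet;
import java.util.Set;

// Hedged sketch: Store and FailingScanner are tiny stand-ins for HBase's
// HStore and StoreScanner, showing the register-then-fail pattern and its fix.
public class ObserverLeakSketch {

    static class Store {
        final Set<Object> changedReaderObservers = new HashSet<>();
        void addChangedReaderObserver(Object o) { changedReaderObservers.add(o); }
        void deleteChangedReaderObserver(Object o) { changedReaderObservers.remove(o); }
    }

    static class FailingScanner {
        FailingScanner(Store store) {
            store.addChangedReaderObserver(this);  // registered first...
            try {
                // ...then later constructor work (e.g. seeking scanners) fails
                throw new RuntimeException("Could not seek StoreFileScanner");
            } catch (RuntimeException e) {
                // Fix: deregister before propagating, so the half-built
                // scanner is not referenced by the store forever.
                store.deleteChangedReaderObserver(this);
                throw e;
            }
        }
    }

    // Returns how many observers are left behind after a failed construction.
    static int leftoverObservers() {
        Store store = new Store();
        try {
            new FailingScanner(store);
        } catch (RuntimeException expected) {
            // constructor failed as simulated
        }
        return store.changedReaderObservers.size();
    }

    public static void main(String[] args) {
        System.out.println(leftoverObservers()); // 0 with the fix; 1 without it
    }
}
```

Without the catch-and-deregister step, the observer set grows by one entry per failed scanner open, which matches the unbounded growth seen in the heap dump.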





[jira] [Commented] (HBASE-16051) TestScannerHeartbeatMessages fails on some machines

2016-06-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341876#comment-15341876
 ] 

Ted Yu commented on HBASE-16051:


+1

> TestScannerHeartbeatMessages fails on some machines
> ---
>
> Key: HBASE-16051
> URL: https://issues.apache.org/jira/browse/HBASE-16051
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Phil Yang
> Attachments: HBASE-16051-v1.patch
>
>
> I can see below on my Linux box (reproduces consistently). It passes on OSX 
> laptop.
>  T E S T S
> ---
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> Running org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.907 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages
> testScannerHeartbeatMessages(org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages)
>   Time elapsed: 12.95 sec  <<< FAILURE!
> java.lang.AssertionError: Heartbeats messages are disabled, an exception 
> should be thrown. If an exception  is not thrown, the test case is not 
> testing the importance of heartbeat messages
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testImportanceOfHeartbeats(TestScannerHeartbeatMessages.java:233)
> at 
> org.apache.hadoop.hbase.regionserver.TestScannerHeartbeatMessages.testScannerHeartbeatMessages(TestScannerHeartbeatMessages.java:204)
> Results :
> Failed tests:
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:204->testImportanceOfHeartbeats:233
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0





[jira] [Commented] (HBASE-16051) TestScannerHeartbeatMessages fails on some machines

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341851#comment-15341851
 ] 

Hadoop QA commented on HBASE-16051:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 44s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 75m 43s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 120m 38s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812164/HBASE-16051-v1.patch |
| JIRA Issue | HBASE-16051 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 471f942 |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 |
| findbugs | v3.0.0 |
|  Test Results | 

[jira] [Commented] (HBASE-15783) AccessControlConstants#OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST not used any more.

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341815#comment-15341815
 ] 

Hudson commented on HBASE-15783:


FAILURE: Integrated in HBase-1.4 #232 (See 
[https://builds.apache.org/job/HBase-1.4/232/])
HBASE-15783 AccessControlConstants#OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST 
(ramkrishna: rev f06945ae6c4bfbfed31dd552a24024b90865c1fb)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlConstants.java


> AccessControlConstants#OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST not used any more.
> --
>
> Key: HBASE-15783
> URL: https://issues.apache.org/jira/browse/HBASE-15783
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15783.patch, HBASE-15783_branch_1.patch
>
>
> This is based on a mail on the user list. 
> OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST in AccessControlConstants is no longer 
> used in the code, and AccessControlConstants is Public. We need to either 
> bring in this feature or remove the constant from the public API, where it 
> may be misleading. 





[jira] [Created] (HBASE-16078) Create java cli tool for managing balancer states for scripts usage.

2016-06-21 Thread Samir Ahmic (JIRA)
Samir Ahmic created HBASE-16078:
---

 Summary: Create java cli tool for managing balancer states for 
scripts usage.
 Key: HBASE-16078
 URL: https://issues.apache.org/jira/browse/HBASE-16078
 Project: HBase
  Issue Type: Improvement
  Components: scripts, util
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
 Fix For: 2.0.0


This ticket is the result of a discussion in 
[HBASE-16044|https://issues.apache.org/jira/browse/HBASE-16044], to avoid 
"hbase shell" output-parsing hacks. 





[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341778#comment-15341778
 ] 

Ted Yu commented on HBASE-16012:


Please rebase the patch.

> Major compaction can't work because left scanner read point in RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>Reporter: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16012-v1.patch, HBASE-16012-v2.patch, 
> HBASE-16012-v3.patch, HBASE-16012-v4.patch, HBASE-16012.patch
>
>
> When a new RegionScanner is created, a scanner read point is added to 
> scannerReadPoints. But if an exception is thrown after the read point is 
> added, the read point stays on the region server, and deletes newer than 
> this mvcc number will never be compacted away.
> Our HBase version is based on 0.94, but since other exceptions can be thrown 
> while initializing a RegionScanner, the master branch has this bug, too.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more
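
The fix idea for the leftover read point can be sketched as follows. The map and method below are simplified stand-ins for HRegion's scannerReadPoints bookkeeping, not the actual HBase code:

```java
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of the leftover-read-point bug and its fix; names and values
// here are illustrative stand-ins for HRegion's real bookkeeping.
public class ReadPointLeakSketch {

    static final ConcurrentHashMap<Long, Long> scannerReadPoints = new ConcurrentHashMap<>();
    static long nextScannerId = 0;

    static long openScanner(boolean failSeek) throws IOException {
        long scannerId = ++nextScannerId;
        // A read point is recorded as soon as the scanner is created; while it
        // remains, compaction cannot collect deletes newer than that mvcc number.
        scannerReadPoints.put(scannerId, 42L);
        try {
            if (failSeek) {
                // Simulates seekScanners() failing during scanner initialization.
                throw new IOException("Could not seek StoreFileScanner");
            }
            return scannerId;
        } catch (IOException e) {
            // Fix: remove the read point when initialization fails, so a
            // failed openScanner cannot pin major compaction forever.
            scannerReadPoints.remove(scannerId);
            throw e;
        }
    }

    // Returns how many read points are left after a failed openScanner call.
    static int leftoverReadPoints() {
        try {
            openScanner(true);
        } catch (IOException expected) {
            // openScanner failed as simulated
        }
        return scannerReadPoints.size();
    }

    public static void main(String[] args) {
        System.out.println(leftoverReadPoints()); // 0 with the fix in place
    }
}
```

Without the catch block's cleanup, every failed openScanner (for example, a caller disconnecting mid-seek, as in the stack trace above) leaves a read point behind, pinning the region's smallest read point indefinitely.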




