[jira] [Updated] (HBASE-15908) Checksum verification is broken

2016-05-27 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15908:

Priority: Critical  (was: Major)

> Checksum verification is broken
> ---
>
> Key: HBASE-15908
> URL: https://issues.apache.org/jira/browse/HBASE-15908
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Critical
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum 
> verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7).
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading HFile Trailer from file 
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1135)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be 
> direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native 
> Method)
>   at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:151)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:78)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
>   ... 16 more
> Prior to this change we didn't use native crc32 checksum verification, because 
> in Hadoop's DataChecksum#verifyChunkedSums we would take this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   // non-native checksum
> }
> So we were fine. However, now we drop below that branch and try to use a 
> slightly different variant of native crc32 (if one is available) that takes a 
> ByteBuffer instead of byte[], and it expects a DirectByteBuffer, not a heap BB. 
> I think the easiest fix, working on all Hadoop versions, would be to remove the 
> asReadOnlyBuffer() conversion here:
> !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) 
> {
> I don't see why we need it. Let me test.
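For context, a minimal standalone Java sketch (not from the HBase patch itself) of why the asReadOnlyBuffer() call is the culprit: a read-only view of a heap ByteBuffer hides its backing array, so Hadoop's hasArray() fast path is skipped, while duplicate() keeps the array visible:

```java
import java.nio.ByteBuffer;

public class ReadOnlyBufferDemo {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(64);          // heap buffer, backed by byte[]
        System.out.println(heap.hasArray());                // true

        // A read-only view hides the backing array...
        ByteBuffer readOnly = heap.asReadOnlyBuffer();
        System.out.println(readOnly.hasArray());            // false
        // ...but it is still not a direct buffer:
        System.out.println(readOnly.isDirect());            // false
        // So code that checks hasArray() falls through to the
        // ByteBuffer-based native CRC32 path, which rejects any
        // non-direct buffer with IllegalArgumentException.

        // duplicate() shares content but keeps the array visible,
        // so the hasArray() fast path would stay reachable:
        System.out.println(heap.duplicate().hasArray());    // true
    }
}
```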



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Assigned] (HBASE-15908) Checksum verification is broken

2016-05-27 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov reassigned HBASE-15908:
---

Assignee: Mikhail Antonov

> Checksum verification is broken
> ---
>
> Key: HBASE-15908
> URL: https://issues.apache.org/jira/browse/HBASE-15908
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>
> It looks like HBASE-11625 has broken checksum verification.
> Prior to this change we didn't use native crc32 checksum verification, because 
> in DataChecksum#verifyChunkedSums we would take this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   non-native checksum
> }
> So we were fine. However, now we drop below that branch and try to use native 
> crc32 if one is available (and I think it's not in the tests), which expects a 
> DirectByteBuffer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15908) Checksum verification is broken

2016-05-27 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-15908:
---

 Summary: Checksum verification is broken
 Key: HBASE-15908
 URL: https://issues.apache.org/jira/browse/HBASE-15908
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 1.3.0
Reporter: Mikhail Antonov


It looks like HBASE-11625 has broken checksum verification.


Prior to this change we wouldn't use native crc32 checksum verification, as in 

DataChecksum#verifyChunkedSums we would take this codepath:

if (data.hasArray() && checksums.hasArray()) {
  // non-native checksum
}

So we were fine. However, now we drop below that and try to use native crc32 
if one is available (and I think it's not in the tests), which expects a 
DirectByteBuffer.
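
The heap-vs-direct distinction behind this failure can be sketched in plain Java. This is a hedged illustration, not the actual Hadoop code: the hypothetical checksumPath() mimics the dispatch described above, where heap buffers (hasArray() == true) take the pure-Java path and everything else falls to the native verifier, which rejects non-direct buffers.

```java
import java.nio.ByteBuffer;

public class BufferKindDemo {
    // Simplified stand-in for the dispatch in DataChecksum#verifyChunkedSums.
    // Heap buffers expose a backing array, so the pure-Java path applies;
    // otherwise the native CRC32 path is taken, which insists on direct buffers.
    public static String checksumPath(ByteBuffer data, ByteBuffer checksums) {
        if (data.hasArray() && checksums.hasArray()) {
            return "java";    // non-native checksum over the backing arrays
        }
        if (!data.isDirect() || !checksums.isDirect()) {
            // Mirrors the IllegalArgumentException seen in the stack trace above.
            throw new IllegalArgumentException("input ByteBuffers must be direct buffers");
        }
        return "native";
    }

    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(64);          // hasArray() == true
        ByteBuffer direct = ByteBuffer.allocateDirect(64);  // isDirect() == true
        System.out.println(checksumPath(heap, heap));       // java path
        System.out.println(checksumPath(direct, direct));   // native path
    }
}
```

The bug fits this shape: HFile blocks backed by heap buffers used to hit the first branch, and after HBASE-11625 they fall through to the native path instead.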





[jira] [Commented] (HBASE-15893) Get object

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305165#comment-15305165
 ] 

stack commented on HBASE-15893:
---

Hey [~sudeeps] Mind putting the next patch, after addressing the above, up on 
review board? It's a biggie, and putting it up on review board helps with 
reviewing the big ones. Use the new submit-patch.py and it does most of this 
for you (don't worry about messing up... we'll help you through):

{code}
$ ./dev-support/submit-patch.py --help
usage: submit-patch.py [-h] [-b BRANCH] [-jid JIRA_ID] [-srb]
   [--reviewers REVIEWERS] [--patch-dir PATCH_DIR]
   [--rb-repo RB_REPO]

optional arguments:
  -h, --help            show this help message and exit
  -b BRANCH, --branch BRANCH
Branch to use for generating diff. If not specified, 
tracking branch is used. If there is no tracking branch, error will be thrown.
  -jid JIRA_ID, --jira-id JIRA_ID
Jira id of the issue. If set, we deduce next patch 
version from attachments in the jira and also upload the new patch. Script will 
ask for jira username/password for authentication. If not set, patch is named 
_v0.patch.
  -srb, --skip-review-board
Don't create/update the review board.
  --reviewers REVIEWERS
Comma separated list of users to add as reviewers.
  --patch-dir PATCH_DIR
Directory to store patch files. If it doesn't exist, it 
will be created. Default: ~/patches
  --rb-repo RB_REPO Review board repository. Default: hbase-git

To avoid having to enter jira/review board username/password every time, set up 
an encrypted ~/.apache-creds file as follows:
1) Create a file with the following single line:
{"jira_username" : "appy", "jira_password":"123", "rb_username":"appy", 
"rb_password" : "@#$"}
2) Encrypt it with openssl.
openssl enc -aes-256-cbc -in  -out ~/.apache-creds
3) Delete the original file.
From now on, you'll need to enter this encryption key only once per run. If you 
forget the key, simply regenerate the ~/.apache-creds file.
{code}

> Get object
> --
>
> Key: HBASE-15893
> URL: https://issues.apache.org/jira/browse/HBASE-15893
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
> Attachments: HBASE-15893.HBASE-14850.v1.patch
>
>
> Patch for creating Get objects.  Get objects can be passed to the Table 
> implementation to fetch results for a given row. 





[jira] [Commented] (HBASE-15803) ZooKeeperWatcher's constructor can leak a ZooKeeper instance with throwing ZooKeeperConnectionException when canCreateBaseZNode is true

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305161#comment-15305161
 ] 

stack commented on HBASE-15803:
---

What you think of the patch [~ikeda]-san?

> ZooKeeperWatcher's constructor can leak a ZooKeeper instance with throwing 
> ZooKeeperConnectionException when canCreateBaseZNode is true
> ---
>
> Key: HBASE-15803
> URL: https://issues.apache.org/jira/browse/HBASE-15803
> Project: HBase
>  Issue Type: Bug
>Reporter: Hiroshi Ikeda
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 15803.v1.txt
>
>
> {code}
>   public ZooKeeperWatcher(Configuration conf, String identifier,
>   Abortable abortable, boolean canCreateBaseZNode)
>   throws IOException, ZooKeeperConnectionException {
> ...skip...
> this.recoverableZooKeeper = ZKUtil.connect(...
> ...skip...
> if (canCreateBaseZNode) {
>   createBaseZNodes();
> }
>   }
>   private void createBaseZNodes() throws ZooKeeperConnectionException {
> {code}
> The registered watcher doesn't seem to close the ZooKeeper instance via watch 
> events, and the instance stays alive when createBaseZNodes fails.





[jira] [Commented] (HBASE-15610) Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305160#comment-15305160
 ] 

stack commented on HBASE-15610:
---

[~jurmous] See above sir.

> Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0
> -
>
> Key: HBASE-15610
> URL: https://issues.apache.org/jira/browse/HBASE-15610
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: Jurriaan Mous
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15610.patch, HBASE-15610.patch, 
> HBASE-15610.v1.patch
>
>
> This is sub-task for HBASE-15174.





[jira] [Commented] (HBASE-15892) submit-patch.py: Single command line to make patch, upload it to jira, and update review board

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305159#comment-15305159
 ] 

stack commented on HBASE-15892:
---

What do you want me to apply [~appy]? The patch w/ branch-1 in it? (Hard to 
distinguish what to do going by names alone -- I am too lazy to looksee.)

> submit-patch.py: Single command line to make patch, upload it to jira, and 
> update review board
> --
>
> Key: HBASE-15892
> URL: https://issues.apache.org/jira/browse/HBASE-15892
> Project: HBase
>  Issue Type: New Feature
>Reporter: Appy
>Assignee: Appy
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-15892_branch-1_v1.patch, 
> HBASE-15892_master_v1.patch, HBASE-15892_master_v2.patch, 
> HBASE-15892_master_v3.patch
>
>
> Adds dev-support/submit-patch.py
> The script builds a new patch (using specified branch/tracking branch as base 
> branch), uploads it to jira, and updates diff of the review on ReviewBoard.
> Remote links in the jira are used to figure out if a review request already 
> exists. If no review request is present, then creates a new one and populates 
> all required fields using jira summary, patch description, etc.
> *Authentication*
> Since attaching patches & changes links on JIRA and creating/changing review 
> request on ReviewBoard requires a logged in user, the script will prompt you 
> for username and password. To avoid the hassle every time, I'd suggest 
> setting up ~/.apache-creds with the login details and encrypt it as explained 
> in scripts help message footer.
> *Python dependencies*
> To install required python dependencies, execute {{pip install -r 
> dev-support/python-requirements.txt}}





[jira] [Commented] (HBASE-15890) Allow thrift to set/unset "cacheBlocks" for Scans

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305158#comment-15305158
 ] 

stack commented on HBASE-15890:
---

I pushed the branch-1 and branch-1.3 patches. Thanks for the contrib 
[~ashu210890]

> Allow thrift to set/unset "cacheBlocks" for Scans
> -
>
> Key: HBASE-15890
> URL: https://issues.apache.org/jira/browse/HBASE-15890
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15890-V0-branch-1.3.patch, 
> HBASE-15890-V0-branch-1.patch, HBASE-15890-V0.patch
>
>
> Long-running scans going through thrift cache everything into the block cache. 
> We need the ability to disable caching for scans going through thrift.





[jira] [Commented] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305157#comment-15305157
 ] 

Hudson commented on HBASE-15896:


FAILURE: Integrated in HBase-Trunk_matrix #954 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/954/])
HBASE-15896 ADDENDUM Add timeout tests to flaky list from (busbey: rev 
36bd7d03fc1d457c21733f625a4fae45de90d4d6)
* dev-support/report-flakies.py


> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_addendum_v1.patch, 
> HBASE-15896_master_v1.patch, HBASE-15896_master_v2.patch, 
> HBASE-15896_master_v3.patch, HBASE-15896_master_v4.patch
>
>
> - Adds timed-out tests to flaky list. Dumps two new files for reference, 
> 'timeout' and 'failed' for corresponding list of bad tests.
> - Set --max-builds for different urls separately. This is needed so that we 
> can turn the knobs for post-commit job and flaky-tests job separately.





[jira] [Commented] (HBASE-15890) Allow thrift to set/unset "cacheBlocks" for Scans

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305155#comment-15305155
 ] 

Hudson commented on HBASE-15890:


FAILURE: Integrated in HBase-Trunk_matrix #954 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/954/])
HBASE-15890 Allow setting cacheBlocks for TScan (stack: rev 
7d9d3ea38a327505a7ac1b65fd4e26072aacb08f)
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAuthorization.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
* hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TResult.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TAppend.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TServerName.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDelete.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TRowMutations.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAppend.java


> Allow thrift to set/unset "cacheBlocks" for Scans
> -
>
> Key: HBASE-15890
> URL: https://issues.apache.org/jira/browse/HBASE-15890
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15890-V0-branch-1.3.patch, 
> HBASE-15890-V0-branch-1.patch, HBASE-15890-V0.patch
>
>
> Long-running scans going through thrift cache everything into the block cache. 
> We need the ability to disable caching for scans going through thrift.





[jira] [Commented] (HBASE-15895) remove unmaintained jenkins build analysis tool.

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305156#comment-15305156
 ] 

Hudson commented on HBASE-15895:


FAILURE: Integrated in HBase-Trunk_matrix #954 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/954/])
HBASE-15895 Remove unmaintained jenkins build analysis tool. (busbey: rev 
d50cf9972d0bd3e610370669345617c87416e3f4)
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestCaseResult.java
* dev-support/jenkins-tools/pom.xml
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/BuildResultWithTestCaseDetails.java
* dev-support/jenkins-tools/README.md
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/HistoryReport.java
* dev-support/jenkins-tools/buildstats/pom.xml
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestResultHistory.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestSuite.java


> remove unmaintained jenkins build analysis tool.
> 
>
> Key: HBASE-15895
> URL: https://issues.apache.org/jira/browse/HBASE-15895
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15895.1.patch
>
>
> See HBASE-15889. We don't actually maintain the "buildstats" module any more.





[jira] [Commented] (HBASE-15296) Break out writer and reader from StoreFile

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305154#comment-15305154
 ] 

stack commented on HBASE-15296:
---

Where is 'void foo(A);' in the refactor? (I am looking at your notes above, not 
at the patch; I am wondering if there is a misedit in your explanation?)


> Break out writer and reader from StoreFile
> --
>
> Key: HBASE-15296
> URL: https://issues.apache.org/jira/browse/HBASE-15296
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15296-branch-1-v1.patch, 
> HBASE-15296-branch-1.1.patch, HBASE-15296-branch-1.2.patch, 
> HBASE-15296-branch-1.patch, HBASE-15296-branch-1.patch, 
> HBASE-15296-master-v2.patch, HBASE-15296-master-v3.patch, 
> HBASE-15296-master-v4.patch, HBASE-15296-master-v5.patch, 
> HBASE-15296-master.patch
>
>
> StoreFile.java is trending to become a monolithic class; it's ~1800 lines. 
> Would it make sense to break out the reader and writer (~500 lines each) into 
> separate files?
> We are doing so many different things in a single class: comparators, reader, 
> writer, other stuff; and it hurts readability a lot, to the point that just 
> reading through a piece of code requires scrolling up and down to see which 
> level (reader/writer/base class level) it belongs to. These small things 
> really don't help while trying to understand the code. There are good reasons 
> we don't do such refactorings often (affects existing patches, needs to be 
> done for all branches, etc.), but this and a few other classes can really use 
> a single iteration of refactoring to make things a lot better.





[jira] [Commented] (HBASE-15895) remove unmaintained jenkins build analysis tool.

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305144#comment-15305144
 ] 

Hudson commented on HBASE-15895:


SUCCESS: Integrated in HBase-1.2-IT #520 (See 
[https://builds.apache.org/job/HBase-1.2-IT/520/])
HBASE-15895 Remove unmaintained jenkins build analysis tool. (busbey: rev 
fe78e9d8edd3332b90795d19a9857255d7af72e2)
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestSuite.java
* dev-support/jenkins-tools/README.md
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/HistoryReport.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestCaseResult.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestResultHistory.java
* dev-support/jenkins-tools/buildstats/pom.xml
* dev-support/jenkins-tools/pom.xml
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/BuildResultWithTestCaseDetails.java


> remove unmaintained jenkins build analysis tool.
> 
>
> Key: HBASE-15895
> URL: https://issues.apache.org/jira/browse/HBASE-15895
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15895.1.patch
>
>
> See HBASE-15889. We don't actually maintain the "buildstats" module any more.





[jira] [Commented] (HBASE-15296) Break out writer and reader from StoreFile

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305141#comment-15305141
 ] 

Hadoop QA commented on HBASE-15296:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 28 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
34s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 51s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 10s 
{color} | {color:red} hbase-server generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 34s 
{color} | {color:red} hbase-server-jdk1.8.0 with JDK v1.8.0 generated 2 new + 3 
unchanged - 0 fixed = 5 total (was 3) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 8s 
{color} | {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 2 
new + 3 unchanged - 0 fixed = 5 total (was 3) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 77m 14s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 103m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Should org.apache.hadoop.hbase.regionserver.StoreFile$Reader be a 
_static_ inner class?  At StoreFile.java:inner class?  At StoreFile.java:[lines 
770-777] |
|  |  Should org.apache.hadoop.hbase.regionserver.StoreFile$Writer be a 
_static_ inner class?  At StoreFile.java:inner class?  At StoreFile.java:[lines 
758-760] |
\\
\\
|| Subsystem 

[jira] [Commented] (HBASE-15889) String case conversions are locale-sensitive, used without locale

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305133#comment-15305133
 ] 

Hadoop QA commented on HBASE-15889:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 20s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
37s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s 
{color} | {color:red} hbase-rest in master has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 59s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 39s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hbase-annotations in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 48s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 85m 58s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s 
{color} | {color:green} hbase-thrift in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | 

[jira] [Commented] (HBASE-15895) remove unmaintained jenkins build analysis tool.

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305132#comment-15305132
 ] 

Hudson commented on HBASE-15895:


FAILURE: Integrated in HBase-1.3 #718 (See 
[https://builds.apache.org/job/HBase-1.3/718/])
HBASE-15895 Remove unmaintained jenkins build analysis tool. (busbey: rev 
64f9c40c07f68fd9fc9da725ffbbb8c5a4912627)
* dev-support/jenkins-tools/pom.xml
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestCaseResult.java
* dev-support/jenkins-tools/buildstats/pom.xml
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/HistoryReport.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestSuite.java
* dev-support/jenkins-tools/README.md
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/BuildResultWithTestCaseDetails.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestResultHistory.java


> remove unmaintained jenkins build analysis tool.
> 
>
> Key: HBASE-15895
> URL: https://issues.apache.org/jira/browse/HBASE-15895
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15895.1.patch
>
>
> See HBASE-15889. We don't actually maintain the "buildstats" module any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15895) remove unmaintained jenkins build analysis tool.

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305125#comment-15305125
 ] 

Hudson commented on HBASE-15895:


SUCCESS: Integrated in HBase-1.3-IT #683 (See 
[https://builds.apache.org/job/HBase-1.3-IT/683/])
HBASE-15895 Remove unmaintained jenkins build analysis tool. (busbey: rev 
64f9c40c07f68fd9fc9da725ffbbb8c5a4912627)
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/HistoryReport.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestResultHistory.java
* dev-support/jenkins-tools/README.md
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestSuite.java
* dev-support/jenkins-tools/buildstats/pom.xml
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestCaseResult.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/BuildResultWithTestCaseDetails.java
* dev-support/jenkins-tools/pom.xml


> remove unmaintained jenkins build analysis tool.
> 
>
> Key: HBASE-15895
> URL: https://issues.apache.org/jira/browse/HBASE-15895
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15895.1.patch
>
>
> See HBASE-15889. We don't actually maintain the "buildstats" module any more.





[jira] [Commented] (HBASE-15895) remove unmaintained jenkins build analysis tool.

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305121#comment-15305121
 ] 

Hudson commented on HBASE-15895:


SUCCESS: Integrated in HBase-1.2 #638 (See 
[https://builds.apache.org/job/HBase-1.2/638/])
HBASE-15895 Remove unmaintained jenkins build analysis tool. (busbey: rev 
fe78e9d8edd3332b90795d19a9857255d7af72e2)
* dev-support/jenkins-tools/README.md
* dev-support/jenkins-tools/buildstats/pom.xml
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/HistoryReport.java
* dev-support/jenkins-tools/pom.xml
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestResultHistory.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestSuite.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestCaseResult.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/BuildResultWithTestCaseDetails.java


> remove unmaintained jenkins build analysis tool.
> 
>
> Key: HBASE-15895
> URL: https://issues.apache.org/jira/browse/HBASE-15895
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15895.1.patch
>
>
> See HBASE-15889. We don't actually maintain the "buildstats" module any more.





[jira] [Created] (HBASE-15907) Missing documentation of create table split options

2016-05-27 Thread ronan stokes (JIRA)
ronan stokes created HBASE-15907:


 Summary: Missing documentation of create table split options
 Key: HBASE-15907
 URL: https://issues.apache.org/jira/browse/HBASE-15907
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: ronan stokes


Earlier versions of the online documentation seemed to have more material on
the split options available in the HBase shell, but these seem to have been
omitted in the process of various updates.

Searching for "presplitting" yields minimal matches and only brings up
references to presplitting from code.

However, there are a number of options relating to the creation of splits in
tables available in the HBase shell.

For example:

- create a table with a set of split literals
- create a table specifying the number of splits and a split algorithm
- create a table specifying a split file (not personally verified)
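A sketch of the shell forms referred to above (option names follow the HBase shell help text; treat the exact syntax as something to verify against your release):

```ruby
# Create a table with an explicit set of split points.
create 't1', 'f1', SPLITS => ['10', '20', '30', '40']

# Create a table pre-split into N regions using a split algorithm.
create 't2', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}

# Create a table reading split points from a file (one per line).
create 't3', 'f1', SPLITS_FILE => 'splits.txt'
```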






[jira] [Commented] (HBASE-15895) remove unmaintained jenkins build analysis tool.

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305109#comment-15305109
 ] 

Hudson commented on HBASE-15895:


FAILURE: Integrated in HBase-1.4 #182 (See 
[https://builds.apache.org/job/HBase-1.4/182/])
HBASE-15895 Remove unmaintained jenkins build analysis tool. (busbey: rev 
77abf73549b0b6c4e63c84351a7933066feb6628)
* dev-support/jenkins-tools/README.md
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestSuite.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/BuildResultWithTestCaseDetails.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestCaseResult.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/HistoryReport.java
* 
dev-support/jenkins-tools/buildstats/src/main/java/org/apache/hadoop/hbase/devtools/buildstats/TestResultHistory.java
* dev-support/jenkins-tools/buildstats/pom.xml
* dev-support/jenkins-tools/pom.xml


> remove unmaintained jenkins build analysis tool.
> 
>
> Key: HBASE-15895
> URL: https://issues.apache.org/jira/browse/HBASE-15895
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15895.1.patch
>
>
> See HBASE-15889. We don't actually maintain the "buildstats" module any more.





[jira] [Updated] (HBASE-15296) Break out writer and reader from StoreFile

2016-05-27 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15296:
-
Attachment: HBASE-15296-branch-1-v1.patch

> Break out writer and reader from StoreFile
> --
>
> Key: HBASE-15296
> URL: https://issues.apache.org/jira/browse/HBASE-15296
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15296-branch-1-v1.patch, 
> HBASE-15296-branch-1.1.patch, HBASE-15296-branch-1.2.patch, 
> HBASE-15296-branch-1.patch, HBASE-15296-branch-1.patch, 
> HBASE-15296-master-v2.patch, HBASE-15296-master-v3.patch, 
> HBASE-15296-master-v4.patch, HBASE-15296-master-v5.patch, 
> HBASE-15296-master.patch
>
>
> StoreFile.java is trending toward becoming a monolithic class; it's ~1800 lines.
> Would it make sense to break out the reader and writer (~500 lines each) into
> separate files?
> We are doing so many different things in a single class: comparators, reader,
> writer, other stuff; and it hurts readability a lot, to the point that just
> reading through a piece of code requires scrolling up and down to see which
> level (reader/writer/base-class level) it belongs to. These small
> things really don't help while trying to understand the code. There are
> good reasons we don't do this often (it affects existing patches, needs to be
> done for all branches, etc.), but this and a few other classes could really use
> a single iteration of refactoring to make things a lot better.





[jira] [Updated] (HBASE-15296) Break out writer and reader from StoreFile

2016-05-27 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15296:
-
Attachment: (was: HBASE-15296_branch-1_v1.patch)

> Break out writer and reader from StoreFile
> --
>
> Key: HBASE-15296
> URL: https://issues.apache.org/jira/browse/HBASE-15296
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15296-branch-1.1.patch, 
> HBASE-15296-branch-1.2.patch, HBASE-15296-branch-1.patch, 
> HBASE-15296-branch-1.patch, HBASE-15296-master-v2.patch, 
> HBASE-15296-master-v3.patch, HBASE-15296-master-v4.patch, 
> HBASE-15296-master-v5.patch, HBASE-15296-master.patch
>
>
> StoreFile.java is trending toward becoming a monolithic class; it's ~1800 lines.
> Would it make sense to break out the reader and writer (~500 lines each) into
> separate files?
> We are doing so many different things in a single class: comparators, reader,
> writer, other stuff; and it hurts readability a lot, to the point that just
> reading through a piece of code requires scrolling up and down to see which
> level (reader/writer/base-class level) it belongs to. These small
> things really don't help while trying to understand the code. There are
> good reasons we don't do this often (it affects existing patches, needs to be
> done for all branches, etc.), but this and a few other classes could really use
> a single iteration of refactoring to make things a lot better.





[jira] [Commented] (HBASE-15296) Break out writer and reader from StoreFile

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305080#comment-15305080
 ] 

Hadoop QA commented on HBASE-15296:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HBASE-15296 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806777/HBASE-15296_branch-1_v1.patch
 |
| JIRA Issue | HBASE-15296 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2042/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Break out writer and reader from StoreFile
> --
>
> Key: HBASE-15296
> URL: https://issues.apache.org/jira/browse/HBASE-15296
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15296-branch-1.1.patch, 
> HBASE-15296-branch-1.2.patch, HBASE-15296-branch-1.patch, 
> HBASE-15296-branch-1.patch, HBASE-15296-master-v2.patch, 
> HBASE-15296-master-v3.patch, HBASE-15296-master-v4.patch, 
> HBASE-15296-master-v5.patch, HBASE-15296-master.patch, 
> HBASE-15296_branch-1_v1.patch
>
>
> StoreFile.java is trending toward becoming a monolithic class; it's ~1800 lines.
> Would it make sense to break out the reader and writer (~500 lines each) into
> separate files?
> We are doing so many different things in a single class: comparators, reader,
> writer, other stuff; and it hurts readability a lot, to the point that just
> reading through a piece of code requires scrolling up and down to see which
> level (reader/writer/base-class level) it belongs to. These small
> things really don't help while trying to understand the code. There are
> good reasons we don't do this often (it affects existing patches, needs to be
> done for all branches, etc.), but this and a few other classes could really use
> a single iteration of refactoring to make things a lot better.





[jira] [Updated] (HBASE-15296) Break out writer and reader from StoreFile

2016-05-27 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15296:
-
Attachment: HBASE-15296_branch-1_v1.patch

> Break out writer and reader from StoreFile
> --
>
> Key: HBASE-15296
> URL: https://issues.apache.org/jira/browse/HBASE-15296
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15296-branch-1.1.patch, 
> HBASE-15296-branch-1.2.patch, HBASE-15296-branch-1.patch, 
> HBASE-15296-branch-1.patch, HBASE-15296-master-v2.patch, 
> HBASE-15296-master-v3.patch, HBASE-15296-master-v4.patch, 
> HBASE-15296-master-v5.patch, HBASE-15296-master.patch, 
> HBASE-15296_branch-1_v1.patch
>
>
> StoreFile.java is trending toward becoming a monolithic class; it's ~1800 lines.
> Would it make sense to break out the reader and writer (~500 lines each) into
> separate files?
> We are doing so many different things in a single class: comparators, reader,
> writer, other stuff; and it hurts readability a lot, to the point that just
> reading through a piece of code requires scrolling up and down to see which
> level (reader/writer/base-class level) it belongs to. These small
> things really don't help while trying to understand the code. There are
> good reasons we don't do this often (it affects existing patches, needs to be
> done for all branches, etc.), but this and a few other classes could really use
> a single iteration of refactoring to make things a lot better.





[jira] [Updated] (HBASE-14818) user_permission does not list namespace permissions

2016-05-27 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14818:
---
Fix Version/s: 0.98.20

> user_permission does not list namespace permissions
> ---
>
> Key: HBASE-14818
> URL: https://issues.apache.org/jira/browse/HBASE-14818
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.2.0
>Reporter: Steven Hancz
>Assignee: li xiang
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 0.98.20
>
> Attachments: HBASE-14818-1.2-v4.patch, HBASE-14818-master-v3.patch, 
> HBASE-14818-master-v4.patch, HBASE-14818-v0.patch, HBASE-14818-v1.patch, 
> HBASE-14818-v2.patch
>
>
> The user_permission command does not list namespace permissions:
> For example: if I create a new namespace or use an existing namespace and 
> grant a user privileges to that namespace, the command user_permission does 
> not list it. The permission is visible in the acl table.
> Example:
> hbase(main):005:0>  create_namespace 'ns3'
> 0 row(s) in 0.1640 seconds
> hbase(main):007:0> grant 'test_user','RWXAC','@ns3'
> 0 row(s) in 0.5680 seconds
> hbase(main):008:0> user_permission '.*'
> User   
> Namespace,Table,Family,Qualifier:Permission   
>  
>  sh82993   finance,finance:emp,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN]  
>  @hbaseglobaldba   hbase,hbase:acl,,: [Permission: 
> actions=EXEC,CREATE,ADMIN] 
>  @hbaseglobaloper  hbase,hbase:acl,,: [Permission: 
> actions=EXEC,ADMIN]
>  hdfs  hbase,hbase:acl,,: [Permission: 
> actions=READ,WRITE,CREATE,ADMIN,EXEC]  
>  sh82993   ns1,ns1:tbl1,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
>  ns1admin  ns1,ns1:tbl2,,: [Permission: 
> actions=EXEC,CREATE,ADMIN]
>  @hbaseappltest_ns1funct   ns1,ns1:tbl2,,: [Permission: 
> actions=READ,WRITE,EXEC]  
>  ns1funct  ns1,ns1:tbl2,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
>  hbase ns2,ns2:tbl1,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
> 9 row(s) in 1.8090 seconds
> As you can see user test_user does not appear in the output, but we can see 
> the permission in the ACL table. 
> hbase(main):001:0>  scan 'hbase:acl'
> ROWCOLUMN+CELL
> 
>  @finance  column=l:sh82993, timestamp=105519510, 
> value=RWXCA 
>  @gcbcppdn column=l:hdfs, timestamp=1446141119602, 
> value=RWCXA
>  @hbasecolumn=l:hdfs, timestamp=1446141485136, 
> value=RWCAX
>  @ns1  column=l:@hbaseappltest_ns1admin, 
> timestamp=1447437007467, value=RWXCA 
>  @ns1  column=l:@hbaseappltest_ns1funct, 
> timestamp=1447427366835, value=RWX   
>  @ns2  column=l:@hbaseappltest_ns2admin, 
> timestamp=1446674470456, value=XCA   
>  @ns2  column=l:test_user, 
> timestamp=1447692840030, value=RWAC   
>  
>  @ns3  column=l:test_user, 
> timestamp=1447692860434, value=RWXAC  
>  
>  finance:emp   column=l:sh82993, timestamp=107723316, 
> value=RWXCA 
>  hbase:acl column=l:@hbaseglobaldba, 
> timestamp=1446590375370, value=XCA   
>  hbase:acl column=l:@hbaseglobaloper, 
> timestamp=1446590387965, value=XA   
>  hbase:acl column=l:hdfs, timestamp=1446141737213, 
> value=RWCAX
>  ns1:tbl1  column=l:sh82993, timestamp=1446674153058, 
> value=RWXCA 
>  ns1:tbl2  

[jira] [Updated] (HBASE-15465) userPermission returned by getUserPermission() for the selected namespace does not have namespace set

2016-05-27 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15465:
---
Fix Version/s: 0.98.20

> userPermission returned by getUserPermission() for the selected namespace 
> does not have namespace set
> -
>
> Key: HBASE-15465
> URL: https://issues.apache.org/jira/browse/HBASE-15465
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 1.2.0
>Reporter: li xiang
>Assignee: li xiang
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 0.98.20
>
> Attachments: HBASE-15465-master-v2.patch, HBASE-15465.patch.v0, 
> HBASE-15465.patch.v1
>
>
> The request sent is with type = Namespace, but the response returned contains 
> Global permissions (that is, the field of namespace is not set)
> It is in
> hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java,
> starting at line 2380; I have added some comments to it
> {code}
> /**
>* A utility used to get permissions for selected namespace.
>* 
>* It's also called by the shell, in case you want to find references.
>*
>* @param protocol the AccessControlService protocol proxy
>* @param namespace name of the namespace
>* @throws ServiceException
>*/
>   public static List<UserPermission> getUserPermissions(
>   AccessControlService.BlockingInterface protocol,
>   byte[] namespace) throws ServiceException {
> AccessControlProtos.GetUserPermissionsRequest.Builder builder =
>   AccessControlProtos.GetUserPermissionsRequest.newBuilder();
> if (namespace != null) {
>   builder.setNamespaceName(ByteStringer.wrap(namespace)); 
> }
> builder.setType(AccessControlProtos.Permission.Type.Namespace);  
> //builder is set with type = Namespace
> AccessControlProtos.GetUserPermissionsRequest request = builder.build();  
> //I printed the request, its type is Namespace, which is correct.
> AccessControlProtos.GetUserPermissionsResponse response =  
>protocol.getUserPermissions(null, request);
> /* I printed the response, it contains Global permissions, as below, not a 
> Namespace permission.
> user_permission {
>   user: "a1"
>   permission {
> type: Global
> global_permission {
>   action: READ
>   action: WRITE
>   action: ADMIN
>   action: EXEC
>   action: CREATE
> }
>   }
> }
> AccessControlProtos.GetUserPermissionsRequest has a member called type_ to 
> store the type, but AccessControlProtos.GetUserPermissionsResponse does not.
> */
>  
> List<UserPermission> perms = new
> ArrayList<UserPermission>(response.getUserPermissionCount());
> for (AccessControlProtos.UserPermission perm: 
> response.getUserPermissionList()) {
>   perms.add(ProtobufUtil.toUserPermission(perm));  // (1)
> }
> return perms;
>   }
> {code}
> Would it be more reasonable to return user permissions with the namespace set
> in getUserPermission() for the selected namespace?





[jira] [Commented] (HBASE-15890) Allow thrift to set/unset "cacheBlocks" for Scans

2016-05-27 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305074#comment-15305074
 ] 

Ashu Pachauri commented on HBASE-15890:
---

[~stack] [~tedyu] No problem. I understand the importance of a comprehensive 
test coverage, just thought that it was redundant. I have also uploaded the 
patches for branch-1 and branch-1.3.

> Allow thrift to set/unset "cacheBlocks" for Scans
> -
>
> Key: HBASE-15890
> URL: https://issues.apache.org/jira/browse/HBASE-15890
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15890-V0-branch-1.3.patch, 
> HBASE-15890-V0-branch-1.patch, HBASE-15890-V0.patch
>
>
> Long-running scans going through Thrift cache everything in the block cache.
> We need the ability to disable caching for scans going through Thrift.





[jira] [Updated] (HBASE-15890) Allow thrift to set/unset "cacheBlocks" for Scans

2016-05-27 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-15890:
--
Attachment: HBASE-15890-V0-branch-1.patch
HBASE-15890-V0-branch-1.3.patch

> Allow thrift to set/unset "cacheBlocks" for Scans
> -
>
> Key: HBASE-15890
> URL: https://issues.apache.org/jira/browse/HBASE-15890
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15890-V0-branch-1.3.patch, 
> HBASE-15890-V0-branch-1.patch, HBASE-15890-V0.patch
>
>
> Long-running scans going through Thrift cache everything in the block cache.
> We need the ability to disable caching for scans going through Thrift.





[jira] [Updated] (HBASE-15906) Compaction reporting is still wrong

2016-05-27 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15906:
-
Attachment: Untitled.png

> Compaction reporting is still wrong
> ---
>
> Key: HBASE-15906
> URL: https://issues.apache.org/jira/browse/HBASE-15906
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.1.2
>Reporter: Nick Dimiduk
>Priority: Minor
> Attachments: Untitled.png
>
>
> The RS webUI is reporting my compaction at 140% complete. Looks like there's 
> more to it than HBASE-11979. This is seen on HDP, version 1.1.2.2.3.2.0-2950.





[jira] [Created] (HBASE-15906) Compaction reporting is still wrong

2016-05-27 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-15906:


 Summary: Compaction reporting is still wrong
 Key: HBASE-15906
 URL: https://issues.apache.org/jira/browse/HBASE-15906
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 1.1.2
Reporter: Nick Dimiduk
Priority: Minor
 Attachments: Untitled.png

The RS webUI is reporting my compaction at 140% complete. Looks like there's 
more to it than HBASE-11979. This is seen on HDP, version 1.1.2.2.3.2.0-2950.





[jira] [Commented] (HBASE-15892) submit-patch.py: Single command line to make patch, upload it to jira, and update review board

2016-05-27 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305060#comment-15305060
 ] 

Appy commented on HBASE-15892:
--

[~stack] I think we should backport this to other branches too. It works with 
all branches.

> submit-patch.py: Single command line to make patch, upload it to jira, and 
> update review board
> --
>
> Key: HBASE-15892
> URL: https://issues.apache.org/jira/browse/HBASE-15892
> Project: HBase
>  Issue Type: New Feature
>Reporter: Appy
>Assignee: Appy
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-15892_branch-1_v1.patch, 
> HBASE-15892_master_v1.patch, HBASE-15892_master_v2.patch, 
> HBASE-15892_master_v3.patch
>
>
> Adds dev-support/submit-patch.py
> The script builds a new patch (using specified branch/tracking branch as base 
> branch), uploads it to jira, and updates diff of the review on ReviewBoard.
> Remote links in the jira are used to figure out if a review request already 
> exists. If no review request is present, then creates a new one and populates 
> all required fields using jira summary, patch description, etc.
> *Authentication*
> Since attaching patches & changing links on JIRA and creating/updating a review
> request on ReviewBoard requires a logged-in user, the script will prompt you
> for a username and password. To avoid the hassle every time, I'd suggest
> setting up ~/.apache-creds with the login details and encrypting it as explained
> in the script's help message footer.
> *Python dependencies*
> To install required python dependencies, execute {{pip install -r 
> dev-support/python-requirements.txt}}





[jira] [Updated] (HBASE-15698) Increment TimeRange not serialized to server

2016-05-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15698:
---
Attachment: 15698.v3.txt

In patch v3, I used two TimeRanges.

range10 is for an Increment of 10, performed through Table.increment().
range2 is for an Increment of 2, performed through Table.batch().

This way, both code paths are covered.

> Increment TimeRange not serialized to server
> 
>
> Key: HBASE-15698
> URL: https://issues.apache.org/jira/browse/HBASE-15698
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0
>Reporter: James Taylor
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: phoenix
> Fix For: 1.3.0, 1.0.4, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: 15698-suggest.txt, 15698.v1.txt, 15698.v2.txt, 
> 15698.v3.txt, HBASE-15698.1.patch
>
>
> Before HBase-1.2, the Increment TimeRange set on the client was serialized 
> over to the server. As of HBase 1.2, this appears to no longer be true, as my 
> preIncrement coprocessor always gets HConstants.LATEST_TIMESTAMP as the value 
> of increment.getTimeRange().getMax() regardless of what the client has 
> specified.





[jira] [Commented] (HBASE-15610) Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305049#comment-15305049
 ] 

stack commented on HBASE-15610:
---

This is the only dodgy looking thing that I see:

{code}
protected AdminProtos.AdminService.BlockingInterface getAdmin(final ServerName serverName,
    final boolean master)
{code}
We go from public to protected. It is in a Deprecated class but you seem to 
move the difference between HConnection and ClusterConnection into CC elsewhere 
except here. Maybe you have reasoning? Might be less risky leaving this public 
but deprecating it? (It looks like it is useless/unused so could do the 
above...)

Otherwise, +1.





> Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0
> -
>
> Key: HBASE-15610
> URL: https://issues.apache.org/jira/browse/HBASE-15610
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: Jurriaan Mous
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15610.patch, HBASE-15610.patch, 
> HBASE-15610.v1.patch
>
>
> This is sub-task for HBASE-15174.





[jira] [Commented] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305038#comment-15305038
 ] 

stack commented on HBASE-15896:
---

[~busbey] Thanks Sean.

> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_addendum_v1.patch, 
> HBASE-15896_master_v1.patch, HBASE-15896_master_v2.patch, 
> HBASE-15896_master_v3.patch, HBASE-15896_master_v4.patch
>
>
> - Adds timed-out tests to the flaky list. Dumps two new files, 'timeout' and
> 'failed', listing the corresponding bad tests for reference.
> - Sets --max-builds for different URLs separately. This is needed so that we
> can turn the knobs for the post-commit job and the flaky-tests job separately.





[jira] [Commented] (HBASE-15889) String case conversions are locale-sensitive, used without locale

2016-05-27 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305024#comment-15305024
 ] 

Sean Mackrory commented on HBASE-15889:
---

Sure - just attached the rebased patch

> String case conversions are locale-sensitive, used without locale
> -
>
> Key: HBASE-15889
> URL: https://issues.apache.org/jira/browse/HBASE-15889
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HBASE-15889-v1.patch, HBASE-15891-v2.patch
>
>
> Static code analysis is flagging cases of String.toLowerCase and 
> String.toUpperCase being used without Locale. From the API reference:
> {quote}
> Note: This method is locale sensitive, and may produce unexpected results if 
> used for strings that are intended to be interpreted locale independently. 
> Examples are programming language identifiers, protocol keys, and HTML tags. 
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character. To obtain 
> correct results for locale insensitive strings, use toLowerCase(Locale.ROOT).
> {quote}
> Many uses of these functions do appear to be looking up classes, etc. and not 
> dealing with stored data, so I'd think there aren't significant compatibility 
> problems here and specifying the locale is indeed the safer way to go.





[jira] [Updated] (HBASE-15889) String case conversions are locale-sensitive, used without locale

2016-05-27 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HBASE-15889:
--
Attachment: HBASE-15891-v2.patch

> String case conversions are locale-sensitive, used without locale
> -
>
> Key: HBASE-15889
> URL: https://issues.apache.org/jira/browse/HBASE-15889
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HBASE-15889-v1.patch, HBASE-15891-v2.patch
>
>





[jira] [Commented] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305011#comment-15305011
 ] 

Appy commented on HBASE-15896:
--

Yeah, Sean just committed the addendum.

> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_addendum_v1.patch, 
> HBASE-15896_master_v1.patch, HBASE-15896_master_v2.patch, 
> HBASE-15896_master_v3.patch, HBASE-15896_master_v4.patch
>
>
> - Adds timed-out tests to flaky list. Dumps two new files for reference, 
> 'timeout' and 'failed' for corresponding list of bad tests.
> - Set --max-builds for different urls separately. This is needed so that we 
> can turn the knobs for post-commit job and flaky-tests job separately.





[jira] [Commented] (HBASE-15610) Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305005#comment-15305005
 ] 

Hadoop QA commented on HBASE-15610:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 29 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 55s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
13s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s 
{color} | {color:red} hbase-rsgroup in master has 6 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 1m 
17s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 35s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 41s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | 

[jira] [Commented] (HBASE-15889) String case conversions are locale-sensitive, used without locale

2016-05-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305004#comment-15305004
 ] 

Sean Busbey commented on HBASE-15889:
-

Would you mind updating your patch now that we've removed the jenkins-tools 
module?

> String case conversions are locale-sensitive, used without locale
> -
>
> Key: HBASE-15889
> URL: https://issues.apache.org/jira/browse/HBASE-15889
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HBASE-15889-v1.patch
>
>





[jira] [Commented] (HBASE-15889) String case conversions are locale-sensitive, used without locale

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305003#comment-15305003
 ] 

Hadoop QA commented on HBASE-15889:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HBASE-15889 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806204/HBASE-15889-v1.patch |
| JIRA Issue | HBASE-15889 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2040/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> String case conversions are locale-sensitive, used without locale
> -
>
> Key: HBASE-15889
> URL: https://issues.apache.org/jira/browse/HBASE-15889
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HBASE-15889-v1.patch
>
>





[jira] [Commented] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305002#comment-15305002
 ] 

stack commented on HBASE-15896:
---

You sure @appy? The addendum seems to be there.

> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_addendum_v1.patch, 
> HBASE-15896_master_v1.patch, HBASE-15896_master_v2.patch, 
> HBASE-15896_master_v3.patch, HBASE-15896_master_v4.patch
>
>





[jira] [Updated] (HBASE-15895) remove unmaintained jenkins build analysis tool.

2016-05-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15895:

   Resolution: Fixed
Fix Version/s: 1.1.6
   0.98.20
   1.2.2
   1.4.0
   1.0.4
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Confirmed the -1s in precommit were all spurious.

Pushed to all branches that still had it. Thanks, folks!

> remove unmaintained jenkins build analysis tool.
> 
>
> Key: HBASE-15895
> URL: https://issues.apache.org/jira/browse/HBASE-15895
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15895.1.patch
>
>
> See HBASE-15889. We don't actually maintain the "buildstats" module any more.





[jira] [Updated] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15896:

Component/s: test

> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_addendum_v1.patch, 
> HBASE-15896_master_v1.patch, HBASE-15896_master_v2.patch, 
> HBASE-15896_master_v3.patch, HBASE-15896_master_v4.patch
>
>





[jira] [Commented] (HBASE-15887) Report Log Additions and Removals in Builds

2016-05-27 Thread Matthew Byng-Maddick (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304981#comment-15304981
 ] 

Matthew Byng-Maddick commented on HBASE-15887:
--

So cool!!! :-)

> Report Log Additions and Removals in Builds
> ---
>
> Key: HBASE-15887
> URL: https://issues.apache.org/jira/browse/HBASE-15887
> Project: HBase
>  Issue Type: New Feature
>  Components: build
>Reporter: Clay B.
>Priority: Trivial
> Attachments: HBASE-15887-v1.txt
>
>
> It would be very nice for the Apache Yetus verification of HBase patches to 
> report log item additions and deletions.
> This is not my idea: [~mbm] asked if we could modify the personality to 
> report log additions and removals yesterday at an [HBase meetup at Splice 
> Machine|http://www.meetup.com/hbaseusergroup/events/230547750/] as [~aw] 
> presented Apache Yetus for building HBase.





[jira] [Commented] (HBASE-15698) Increment TimeRange not serialized to server

2016-05-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304976#comment-15304976
 ] 

Ted Yu commented on HBASE-15698:


Patch v2 addresses Anoop's comments by dropping the unneeded import and Observer.

bq. add assert for increment when we have Increment.setTimerange.

v1 already asserted that the TimeRange captured by MyObserver has the same 
bounds as the one we set on the Increment.

> Increment TimeRange not serialized to server
> 
>
> Key: HBASE-15698
> URL: https://issues.apache.org/jira/browse/HBASE-15698
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0
>Reporter: James Taylor
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: phoenix
> Fix For: 1.3.0, 1.0.4, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: 15698-suggest.txt, 15698.v1.txt, 15698.v2.txt, 
> HBASE-15698.1.patch
>
>
> Before HBase 1.2, the Increment TimeRange set on the client was serialized 
> over to the server. As of HBase 1.2, this appears to no longer be true: my 
> preIncrement coprocessor always gets HConstants.LATEST_TIMESTAMP as the value 
> of increment.getTimeRange().getMax(), regardless of what the client has 
> specified.
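For context, the client-side API in question looks roughly like this (a sketch 
against the HBase 1.x client API; cluster setup and error handling are omitted, 
and `table` and `someUpperBound` are assumed names, not from this issue):

```java
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: restrict the increment's read window with a TimeRange.
// Before HBase 1.2 this range reached the server; the bug reported here is
// that it arrives as the default range instead, so preIncrement coprocessors
// observe HConstants.LATEST_TIMESTAMP as the max.
Increment inc = new Increment(Bytes.toBytes("row1"));
inc.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("counter"), 1L);
inc.setTimeRange(0L, someUpperBound);  // caller-chosen max timestamp
table.increment(inc);                  // table: an open org.apache.hadoop.hbase.client.Table
```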





[jira] [Updated] (HBASE-15698) Increment TimeRange not serialized to server

2016-05-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15698:
---
Attachment: 15698.v2.txt

> Increment TimeRange not serialized to server
> 
>
> Key: HBASE-15698
> URL: https://issues.apache.org/jira/browse/HBASE-15698
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0
>Reporter: James Taylor
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: phoenix
> Fix For: 1.3.0, 1.0.4, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: 15698-suggest.txt, 15698.v1.txt, 15698.v2.txt, 
> HBASE-15698.1.patch
>
>





[jira] [Commented] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304968#comment-15304968
 ] 

Hadoop QA commented on HBASE-15896:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} pylint {color} | {color:blue} 0m 22s 
{color} | {color:blue} Pylint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
38m 43s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806752/HBASE-15896_master_addendum_v1.patch
 |
| JIRA Issue | HBASE-15896 |
| Optional Tests |  asflicense  pylint  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / aa016c7 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2039/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_addendum_v1.patch, 
> HBASE-15896_master_v1.patch, HBASE-15896_master_v2.patch, 
> HBASE-15896_master_v3.patch, HBASE-15896_master_v4.patch
>
>





[jira] [Updated] (HBASE-15861) Add support for table sets in restore operation

2016-05-27 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15861:
--
Attachment: HBASE-15861-v2.patch

v2

> Add support for table sets in restore operation
> ---
>
> Key: HBASE-15861
> URL: https://issues.apache.org/jira/browse/HBASE-15861
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-15861-v1.patch, HBASE-15861-v2.patch
>
>
> We support the backup operation for table sets, but there is no support yet 
> for the restore operation on table sets.





[jira] [Commented] (HBASE-15698) Increment TimeRange not serialized to server

2016-05-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304957#comment-15304957
 ] 

Anoop Sam John commented on HBASE-15698:


Code change looks good.
bq. import org.mortbay.log.Log;
Unused import. Please remove.

Regarding the new test, do we need the CP? Using it for the assert is fine, but 
there is an unused Dummy CP; please remove it. Besides asserting that the 
TimeRange reached the CP, please add an assert on the increment result when we 
have Increment.setTimeRange.

Just checked all the other mutations. It seems only Increment has this kind of 
extra attribute that we missed serializing to protobuf when converting with no 
cell data.

> Increment TimeRange not serialized to server
> 
>
> Key: HBASE-15698
> URL: https://issues.apache.org/jira/browse/HBASE-15698
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0
>Reporter: James Taylor
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: phoenix
> Fix For: 1.3.0, 1.0.4, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: 15698-suggest.txt, 15698.v1.txt, HBASE-15698.1.patch
>
>





[jira] [Commented] (HBASE-15839) Track our flaky tests and use them to improve our build environment

2016-05-27 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304950#comment-15304950
 ] 

Appy commented on HBASE-15839:
--

After HBASE-15896, the flaky-tests list will also contain timed-out (hanging) 
tests. The next few steps are:
- Hanging tests make the builds run longer. I'll analyze the hanging tests 
and assign more meaningful timeouts to them so that they fail fast. This should 
reduce the runtime of HBase-Flaky-Tests and will allow us to run it more 
frequently (it currently runs every hour).
- Build a dashboard to see flaky tests and various stats.
- Run all flaky tests in pre-commit, but don't fail the build if these tests 
fail (if Yetus allows setting it up).

> Track our flaky tests and use them to improve our build environment
> ---
>
> Key: HBASE-15839
> URL: https://issues.apache.org/jira/browse/HBASE-15839
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: Screen Shot 2016-05-16 at 4.02.46 PM.png
>
>






[jira] [Commented] (HBASE-15610) Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304943#comment-15304943
 ] 

Hadoop QA commented on HBASE-15610:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 27 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 1m 
12s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 11s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 23s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 35s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 48s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 1s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 13s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 27s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 41s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} 

[jira] [Updated] (HBASE-15610) Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0

2016-05-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15610:
--
Hadoop Flags: Incompatible change, Reviewed

> Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0
> -
>
> Key: HBASE-15610
> URL: https://issues.apache.org/jira/browse/HBASE-15610
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: Jurriaan Mous
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15610.patch, HBASE-15610.patch, 
> HBASE-15610.v1.patch
>
>
> This is sub-task for HBASE-15174.





[jira] [Updated] (HBASE-15890) Allow thrift to set/unset "cacheBlocks" for Scans

2016-05-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15890:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: Adds cacheBlocks to Scan
  Status: Resolved  (was: Patch Available)

Thanks for the patch, Ashu. I pushed to master and am resolving. Put up patches 
for branch-1 and branch-1.3 and I'll commit them. I don't have a Thrift gen 
environment handy; I figure you probably do? Thanks.

> Allow thrift to set/unset "cacheBlocks" for Scans
> -
>
> Key: HBASE-15890
> URL: https://issues.apache.org/jira/browse/HBASE-15890
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15890-V0.patch
>
>
> Long running scans going through thrift cache everything to the block cache. 
> We need the ability to disable caching for scans going through thrift.
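On the Java client side the equivalent knob already exists on Scan; the Thrift 
change presumably surfaces the same attribute. A minimal sketch of the Java-side 
setting (values are illustrative):

```java
import org.apache.hadoop.hbase.client.Scan;

// Long-running scans should not churn the block cache:
Scan scan = new Scan();
scan.setCaching(1000);        // rows fetched per RPC
scan.setCacheBlocks(false);   // don't evict hot data with blocks read by this scan
```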





[jira] [Reopened] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy reopened HBASE-15896:
--

I think v3 got committed instead of v4. Adding the diff between the two as an 
addendum.

> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_addendum_v1.patch, 
> HBASE-15896_master_v1.patch, HBASE-15896_master_v2.patch, 
> HBASE-15896_master_v3.patch, HBASE-15896_master_v4.patch
>
>
> - Adds timed-out tests to the flaky list. Dumps two new files for reference, 
> 'timeout' and 'failed', containing the corresponding lists of bad tests.
> - Set --max-builds for different URLs separately. This is needed so that we 
> can turn the knobs for the post-commit job and the flaky-tests job separately.





[jira] [Updated] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15896:
-
Attachment: HBASE-15896_master_addendum_v1.patch

> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_addendum_v1.patch, 
> HBASE-15896_master_v1.patch, HBASE-15896_master_v2.patch, 
> HBASE-15896_master_v3.patch, HBASE-15896_master_v4.patch
>
>
> - Adds timed-out tests to the flaky list. Dumps two new files for reference, 
> 'timeout' and 'failed', containing the corresponding lists of bad tests.
> - Set --max-builds for different URLs separately. This is needed so that we 
> can turn the knobs for the post-commit job and the flaky-tests job separately.





[jira] [Updated] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15896:
-
Status: Patch Available  (was: Reopened)

> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_addendum_v1.patch, 
> HBASE-15896_master_v1.patch, HBASE-15896_master_v2.patch, 
> HBASE-15896_master_v3.patch, HBASE-15896_master_v4.patch
>
>
> - Adds timed-out tests to the flaky list. Dumps two new files for reference, 
> 'timeout' and 'failed', containing the corresponding lists of bad tests.
> - Set --max-builds for different URLs separately. This is needed so that we 
> can turn the knobs for the post-commit job and the flaky-tests job separately.





[jira] [Commented] (HBASE-15890) Allow thrift to set/unset "cacheBlocks" for Scans

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304919#comment-15304919
 ] 

stack commented on HBASE-15890:
---

[~ashu210890] Let me commit this patch as is. It is fine. You can add a test 
next time you do a thrift test. The handoff is direct.

I think [~ted_yu] is a little sensitive at the moment because we have an issue 
where a TimeRange set on the client side was not carried through when it got to 
the server side (on Increment), which is a good deal more fragile than the direct 
invocation that is going on here.

> Allow thrift to set/unset "cacheBlocks" for Scans
> -
>
> Key: HBASE-15890
> URL: https://issues.apache.org/jira/browse/HBASE-15890
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15890-V0.patch
>
>
> Long-running scans going through thrift cache everything into the block cache. 
> We need the ability to disable caching for scans going through thrift.





[jira] [Commented] (HBASE-15890) Allow thrift to set/unset "cacheBlocks" for Scans

2016-05-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304823#comment-15304823
 ] 

Ted Yu commented on HBASE-15890:


bq. add a test for whether the property is successfully passed on from TScan to 
Scan

Please do the above.

> Allow thrift to set/unset "cacheBlocks" for Scans
> -
>
> Key: HBASE-15890
> URL: https://issues.apache.org/jira/browse/HBASE-15890
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15890-V0.patch
>
>
> Long-running scans going through thrift cache everything into the block cache. 
> We need the ability to disable caching for scans going through thrift.





[jira] [Commented] (HBASE-15890) Allow thrift to set/unset "cacheBlocks" for Scans

2016-05-27 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304806#comment-15304806
 ] 

Ashu Pachauri commented on HBASE-15890:
---

Yes, it is. But a test for setting "cacheBlocks" for scans already exists in 
org.apache.hadoop.hbase.regionserver.TestBlocksRead#testBlocksStoredWhenCachingDisabled.
 I don't want to add another redundant end-to-end test. I can add a test for 
whether the property is successfully passed on from TScan to Scan, but I'm not 
sure how much value that adds given how straightforward the change is.
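The TScan-to-Scan pass-through under discussion can be sketched with plain stand-in classes. TScanLike and ScanLike below are illustrative only, not the real generated TScan or org.apache.hadoop.hbase.client.Scan:

```java
public class CacheBlocksDemo {
    static class TScanLike {            // stand-in for the generated Thrift TScan
        Boolean cacheBlocks;            // null means "not set", like an optional Thrift field
    }

    static class ScanLike {             // stand-in for org.apache.hadoop.hbase.client.Scan
        boolean cacheBlocks = true;     // Scan caches blocks by default
        void setCacheBlocks(boolean b) { cacheBlocks = b; }
    }

    // The essence of the change: only override the client-side default
    // when the Thrift-side field was explicitly set.
    static ScanLike toScan(TScanLike t) {
        ScanLike s = new ScanLike();
        if (t.cacheBlocks != null) {
            s.setCacheBlocks(t.cacheBlocks);
        }
        return s;
    }

    public static void main(String[] args) {
        TScanLike t = new TScanLike();
        t.cacheBlocks = false;          // long-running scan: skip the block cache
        System.out.println(toScan(t).cacheBlocks);               // false
        System.out.println(toScan(new TScanLike()).cacheBlocks); // true (default kept)
    }
}
```

A unit test of this shape checks exactly the property Ashu describes: that the flag set on the Thrift object survives the conversion.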

> Allow thrift to set/unset "cacheBlocks" for Scans
> -
>
> Key: HBASE-15890
> URL: https://issues.apache.org/jira/browse/HBASE-15890
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15890-V0.patch
>
>
> Long-running scans going through thrift cache everything into the block cache. 
> We need the ability to disable caching for scans going through thrift.





[jira] [Commented] (HBASE-15610) Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0

2016-05-27 Thread Jurriaan Mous (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304787#comment-15304787
 ] 

Jurriaan Mous commented on HBASE-15610:
---

Yes, all publicly exposed Protos interfaces are removed with the removal of 
HConnection.

> Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0
> -
>
> Key: HBASE-15610
> URL: https://issues.apache.org/jira/browse/HBASE-15610
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: Jurriaan Mous
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15610.patch, HBASE-15610.patch, 
> HBASE-15610.v1.patch
>
>
> This is a sub-task for HBASE-15174.





[jira] [Updated] (HBASE-15610) Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0

2016-05-27 Thread Jurriaan Mous (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jurriaan Mous updated HBASE-15610:
--
Attachment: HBASE-15610.v1.patch

Fix hadoopcheck failures.

> Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0
> -
>
> Key: HBASE-15610
> URL: https://issues.apache.org/jira/browse/HBASE-15610
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: Jurriaan Mous
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15610.patch, HBASE-15610.patch, 
> HBASE-15610.v1.patch
>
>
> This is a sub-task for HBASE-15174.





[jira] [Created] (HBASE-15905) Makefile build env incorrectly links in tests

2016-05-27 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15905:
-

 Summary: Makefile build env incorrectly links in tests
 Key: HBASE-15905
 URL: https://issues.apache.org/jira/browse/HBASE-15905
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark


Right now the makefile build system doesn't seem to do so well:
* Tests are included in the lib.
* Documentation includes the protobuf dir.
* Just running make on a clean checkout fails.





[jira] [Updated] (HBASE-15610) Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0

2016-05-27 Thread Jurriaan Mous (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jurriaan Mous updated HBASE-15610:
--
Attachment: HBASE-15610.patch

> Remove deprecated HConnection for 2.0 thus removing all PB references for 2.0
> -
>
> Key: HBASE-15610
> URL: https://issues.apache.org/jira/browse/HBASE-15610
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: Jurriaan Mous
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15610.patch, HBASE-15610.patch
>
>
> This is a sub-task for HBASE-15174.





[jira] [Commented] (HBASE-15803) ZooKeeperWatcher's constructor can leak a ZooKeeper instance with throwing ZooKeeperConnectionException when canCreateBaseZNode is true

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304656#comment-15304656
 ] 

Hadoop QA commented on HBASE-15803:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 7s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
9s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806688/15803.v1.txt |
| JIRA Issue | HBASE-15803 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-15896) Add timeout tests to flaky list from report-flakies.py

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304635#comment-15304635
 ] 

Hudson commented on HBASE-15896:


SUCCESS: Integrated in HBase-Trunk_matrix #953 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/953/])
HBASE-15896 Add timeout tests to flaky list from report-flakies.py - (stack: 
rev aa016c78a72515a0ebed5adb38cc34207c7d8013)
* dev-support/findHangingTests.py
* dev-support/report-flakies.py


> Add timeout tests to flaky list from report-flakies.py
> --
>
> Key: HBASE-15896
> URL: https://issues.apache.org/jira/browse/HBASE-15896
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15896_master_v1.patch, 
> HBASE-15896_master_v2.patch, HBASE-15896_master_v3.patch, 
> HBASE-15896_master_v4.patch
>
>
> - Adds timed-out tests to the flaky list. Dumps two new files for reference, 
> 'timeout' and 'failed', containing the corresponding lists of bad tests.
> - Set --max-builds for different URLs separately. This is needed so that we 
> can turn the knobs for the post-commit job and the flaky-tests job separately.





[jira] [Updated] (HBASE-15830) Sasl encryption doesn't work with AsyncRpcChannelImpl

2016-05-27 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-15830:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Sasl encryption doesn't work with AsyncRpcChannelImpl
> -
>
> Key: HBASE-15830
> URL: https://issues.apache.org/jira/browse/HBASE-15830
> Project: HBase
>  Issue Type: Bug
>Reporter: Colin Ma
>Assignee: Colin Ma
> Fix For: 2.0.0
>
> Attachments: HBASE-15830.001.patch, HBASE-15830.002.patch, 
> HBASE-15830.003.patch, HBASE-15830.004.patch
>
>
> Currently, sasl encryption doesn't work with AsyncRpcChannelImpl; there are 3 
> problems:
> 1. 
> [sourcecode|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslClientHandler.java#L308]
>  will throw the following exception:
> java.lang.UnsupportedOperationException: direct buffer
>   at 
> io.netty.buffer.UnpooledUnsafeDirectByteBuf.array(UnpooledUnsafeDirectByteBuf.java:199)
>   at 
> org.apache.hadoop.hbase.security.SaslClientHandler.write(SaslClientHandler.java:308)
> 2. 
> [sourcecode|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcChannelImpl.java#L212]
>  has deadlocks problem.
> 3. TestAsyncSecureIPC doesn't cover the sasl encryption test case.
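The first problem, array() throwing on a direct buffer, also reproduces with plain java.nio: a direct ByteBuffer has no backing array. The usual guard is to check hasArray() and copy to the heap otherwise. This is a generic sketch of that guard, not the committed SaslClientHandler fix:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class DirectBufferDemo {
    // Extract the readable bytes of a buffer without assuming it is
    // array-backed. Calling array() on a direct buffer throws
    // UnsupportedOperationException, which is the failure mode reported
    // in SaslClientHandler.write.
    static byte[] toBytes(ByteBuffer buf) {
        if (buf.hasArray()) {
            int start = buf.arrayOffset() + buf.position();
            return Arrays.copyOfRange(buf.array(), start, start + buf.remaining());
        }
        byte[] out = new byte[buf.remaining()];
        buf.duplicate().get(out);   // copy without disturbing the buffer's position
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(4);
        direct.put(new byte[] {1, 2, 3, 4});
        direct.flip();
        System.out.println(Arrays.toString(toBytes(direct))); // [1, 2, 3, 4]
    }
}
```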





[jira] [Updated] (HBASE-15803) ZooKeeperWatcher's constructor can leak a ZooKeeper instance with throwing ZooKeeperConnectionException when canCreateBaseZNode is true

2016-05-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15803:
---
Assignee: Ted Yu
  Status: Patch Available  (was: Open)

> ZooKeeperWatcher's constructor can leak a ZooKeeper instance with throwing 
> ZooKeeperConnectionException when canCreateBaseZNode is true
> ---
>
> Key: HBASE-15803
> URL: https://issues.apache.org/jira/browse/HBASE-15803
> Project: HBase
>  Issue Type: Bug
>Reporter: Hiroshi Ikeda
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 15803.v1.txt
>
>
> {code}
>   public ZooKeeperWatcher(Configuration conf, String identifier,
>   Abortable abortable, boolean canCreateBaseZNode)
>   throws IOException, ZooKeeperConnectionException {
> ...skip...
> this.recoverableZooKeeper = ZKUtil.connect(...
> ...skip...
> if (canCreateBaseZNode) {
>   createBaseZNodes();
> }
>   }
>   private void createBaseZNodes() throws ZooKeeperConnectionException {
> {code}
> The registered watcher doesn't seem to close the ZooKeeper instance in response 
> to watch events, and the instance stays alive when createBaseZNodes fails.
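The leak and one possible shape of a fix can be shown with a self-contained analog: a constructor that acquires a connection and then runs a step that may throw should close the connection before rethrowing. FakeZk below stands in for RecoverableZooKeeper; none of these names are HBase APIs.

```java
public class ZkWatcherSketch {
    static class FakeZk {
        boolean closed = false;
        void close() { closed = true; }
    }

    static FakeZk lastConnected;   // kept only so the demo can observe the handle

    final FakeZk zk;

    ZkWatcherSketch(boolean canCreateBaseZNode) throws Exception {
        zk = lastConnected = new FakeZk();   // analogous to ZKUtil.connect(...)
        if (canCreateBaseZNode) {
            try {
                createBaseZNodes();
            } catch (Exception e) {
                zk.close();                  // the fix: don't leak a live handle
                throw e;
            }
        }
    }

    void createBaseZNodes() throws Exception {
        throw new Exception("simulated ZooKeeperConnectionException");
    }

    public static void main(String[] args) {
        try {
            new ZkWatcherSketch(true);
        } catch (Exception expected) {
            // constructor failed, but the connection was released
        }
        System.out.println(lastConnected.closed);   // true
    }
}
```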





[jira] [Updated] (HBASE-15803) ZooKeeperWatcher's constructor can leak a ZooKeeper instance with throwing ZooKeeperConnectionException when canCreateBaseZNode is true

2016-05-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15803:
---
Attachment: 15803.v1.txt

> ZooKeeperWatcher's constructor can leak a ZooKeeper instance with throwing 
> ZooKeeperConnectionException when canCreateBaseZNode is true
> ---
>
> Key: HBASE-15803
> URL: https://issues.apache.org/jira/browse/HBASE-15803
> Project: HBase
>  Issue Type: Bug
>Reporter: Hiroshi Ikeda
>Priority: Minor
> Attachments: 15803.v1.txt
>
>
> {code}
>   public ZooKeeperWatcher(Configuration conf, String identifier,
>   Abortable abortable, boolean canCreateBaseZNode)
>   throws IOException, ZooKeeperConnectionException {
> ...skip...
> this.recoverableZooKeeper = ZKUtil.connect(...
> ...skip...
> if (canCreateBaseZNode) {
>   createBaseZNodes();
> }
>   }
>   private void createBaseZNodes() throws ZooKeeperConnectionException {
> {code}
> The registered watcher doesn't seem to close the ZooKeeper instance in response 
> to watch events, and the instance stays alive when createBaseZNodes fails.





[jira] [Commented] (HBASE-15898) Document G1GC Recommendations

2016-05-27 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304568#comment-15304568
 ] 

Elliott Clark commented on HBASE-15898:
---

Most of these recommendations apply when you have a very large heap. We need to 
call that out.

> Document G1GC Recommendations
> -
>
> Key: HBASE-15898
> URL: https://issues.apache.org/jira/browse/HBASE-15898
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, java
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-15898.patch
>
>
> Document G1GC recommendations for HBase





[jira] [Updated] (HBASE-15904) Use comma as separator for list of tables in BackupInfo

2016-05-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15904:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the reviews, Vlad and Enis.

> Use comma as separator for list of tables in BackupInfo
> ---
>
> Key: HBASE-15904
> URL: https://issues.apache.org/jira/browse/HBASE-15904
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 15904.v1.txt
>
>
> Currently a semicolon is used to separate tables in BackupInfo.java:
> {code}
>   public String getTableListAsString() {
> return StringUtils.join(backupStatusMap.keySet(), ";");
> {code}
> 'hbase restore' accepts a comma-separated list of tables.
> [~cartershanklin] made the following request:
> The semicolon should be changed to a comma so that the user can copy-paste the 
> table list.





[jira] [Commented] (HBASE-15904) Use comma as separator for list of tables in BackupInfo

2016-05-27 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304542#comment-15304542
 ] 

Vladimir Rodionov commented on HBASE-15904:
---

+1

> Use comma as separator for list of tables in BackupInfo
> ---
>
> Key: HBASE-15904
> URL: https://issues.apache.org/jira/browse/HBASE-15904
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 15904.v1.txt
>
>
> Currently a semicolon is used to separate tables in BackupInfo.java:
> {code}
>   public String getTableListAsString() {
> return StringUtils.join(backupStatusMap.keySet(), ";");
> {code}
> 'hbase restore' accepts a comma-separated list of tables.
> [~cartershanklin] made the following request:
> The semicolon should be changed to a comma so that the user can copy-paste the 
> table list.





[jira] [Commented] (HBASE-15893) Get object

2016-05-27 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304537#comment-15304537
 ] 

Elliott Clark commented on HBASE-15893:
---

-1

* Auto generated everywhere. If code isn't needed and already used, it shouldn't 
be here. Remove everything that's not 100% needed.
* Makefile overcommit.
* There's no need to have a byte comparable when string already has all that.
* Don't need cell and key value. There's no off heap. We've made no promises 
about kv's always being in the same contiguous memory, so there's no need to have 
the distinction.
* hconstants is an abomination and should never ever be repeated.
* Don't create a new object and then add it into a unique pointer. Use 
make_unique.
* Prefer the protobuf whenever its API is palatable.
* B after H is always capital.
* Utils is an awful class name.
* Right now I don't even think that we want to try implementing kv/cell. The 
protobufs have been doing very well in perf tests.
* Tests. These all need tests. If things aren't covered by tests I'm not ok 
with committing them.

> Get object
> --
>
> Key: HBASE-15893
> URL: https://issues.apache.org/jira/browse/HBASE-15893
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
> Attachments: HBASE-15893.HBASE-14850.v1.patch
>
>
> Patch for creating Get objects.  Get objects can be passed to the Table 
> implementation to fetch results for a given row. 





[jira] [Resolved] (HBASE-15851) Makefile update for build env

2016-05-27 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-15851.
---
Resolution: Fixed

> Makefile update for build env
> -
>
> Key: HBASE-15851
> URL: https://issues.apache.org/jira/browse/HBASE-15851
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-15851.HBASE-14850.v2.patch, 
> hbase-native-client-with-makefile.patch
>
>
> 1) Makefile: edited to compile using g++. A shared object is created with the 
> name libHbaseClient.so.
> 2) core/meta-utils.h: edited to fix an incorrect header file reference 
> (connection/Request.h -> connection/request.h)





[jira] [Commented] (HBASE-15851) Makefile update for build env

2016-05-27 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304515#comment-15304515
 ] 

Elliott Clark commented on HBASE-15851:
---

+1

> Makefile update for build env
> -
>
> Key: HBASE-15851
> URL: https://issues.apache.org/jira/browse/HBASE-15851
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-15851.HBASE-14850.v2.patch, 
> hbase-native-client-with-makefile.patch
>
>
> 1) Makefile: edited to compile using g++. A shared object is created with the 
> name libHbaseClient.so.
> 2) core/meta-utils.h: edited to fix an incorrect header file reference 
> (connection/Request.h -> connection/request.h)





[jira] [Commented] (HBASE-15904) Use comma as separator for list of tables in BackupInfo

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304494#comment-15304494
 ] 

Hadoop QA commented on HBASE-15904:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-15904 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806679/15904.v1.txt |
| JIRA Issue | HBASE-15904 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2035/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Use comma as separator for list of tables in BackupInfo
> ---
>
> Key: HBASE-15904
> URL: https://issues.apache.org/jira/browse/HBASE-15904
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 15904.v1.txt
>
>
> Currently a semicolon is used to separate tables in BackupInfo.java:
> {code}
>   public String getTableListAsString() {
> return StringUtils.join(backupStatusMap.keySet(), ";");
> {code}
> 'hbase restore' accepts a comma-separated list of tables.
> [~cartershanklin] made the following request:
> The semicolon should be changed to a comma so that the user can copy-paste the 
> table list.





[jira] [Commented] (HBASE-15904) Use comma as separator for list of tables in BackupInfo

2016-05-27 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304479#comment-15304479
 ] 

Enis Soztutar commented on HBASE-15904:
---

+1. 

> Use comma as separator for list of tables in BackupInfo
> ---
>
> Key: HBASE-15904
> URL: https://issues.apache.org/jira/browse/HBASE-15904
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 15904.v1.txt
>
>
> Currently a semicolon is used to separate tables in BackupInfo.java:
> {code}
>   public String getTableListAsString() {
> return StringUtils.join(backupStatusMap.keySet(), ";");
> {code}
> 'hbase restore' accepts a comma-separated list of tables.
> [~cartershanklin] made the following request:
> The semicolon should be changed to a comma so that the user can copy-paste the 
> table list.





[jira] [Updated] (HBASE-15904) Use comma as separator for list of tables in BackupInfo

2016-05-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15904:
---
Status: Patch Available  (was: Open)

> Use comma as separator for list of tables in BackupInfo
> ---
>
> Key: HBASE-15904
> URL: https://issues.apache.org/jira/browse/HBASE-15904
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 15904.v1.txt
>
>
> Currently a semicolon is used to separate tables in BackupInfo.java:
> {code}
>   public String getTableListAsString() {
> return StringUtils.join(backupStatusMap.keySet(), ";");
> {code}
> 'hbase restore' accepts a comma-separated list of tables.
> [~cartershanklin] made the following request:
> The semicolon should be changed to a comma so that the user can copy-paste the 
> table list.





[jira] [Updated] (HBASE-15904) Use comma as separator for list of tables in BackupInfo

2016-05-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15904:
---
Attachment: 15904.v1.txt

> Use comma as separator for list of tables in BackupInfo
> ---
>
> Key: HBASE-15904
> URL: https://issues.apache.org/jira/browse/HBASE-15904
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 15904.v1.txt
>
>
> Currently a semicolon is used to separate tables in BackupInfo.java:
> {code}
>   public String getTableListAsString() {
> return StringUtils.join(backupStatusMap.keySet(), ";");
> {code}
> 'hbase restore' accepts a comma-separated list of tables.
> [~cartershanklin] made the following request:
> The semicolon should be changed to a comma so that the user can copy-paste the 
> table list.





[jira] [Commented] (HBASE-15898) Document G1GC Recommendations

2016-05-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304475#comment-15304475
 ] 

Andrew Purtell commented on HBASE-15898:


Have you thought about incorporating some of the information from 
https://blogs.apache.org/hbase/entry/tuning_g1gc_for_your_hbase ?

> Document G1GC Recommendations
> -
>
> Key: HBASE-15898
> URL: https://issues.apache.org/jira/browse/HBASE-15898
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, java
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-15898.patch
>
>
> Document G1GC recommendations for HBase



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15904) Use comma as separator for list of tables in BackupInfo

2016-05-27 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15904:
--

 Summary: Use comma as separator for list of tables in BackupInfo
 Key: HBASE-15904
 URL: https://issues.apache.org/jira/browse/HBASE-15904
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor


Currently a semicolon is used to separate tables in BackupInfo.java:
{code}
  public String getTableListAsString() {
return StringUtils.join(backupStatusMap.keySet(), ";");
{code}
'hbase restore' accepts a comma-separated list of tables.

[~cartershanklin] made the following request:

The semicolon should be changed to a comma so that users can copy-paste the 
table list.
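
A minimal sketch of the proposed change, using plain `String.join` in place of
the commons-lang `StringUtils.join` call and a hypothetical stand-in table set
(the real method reads `backupStatusMap.keySet()`):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Join table names with a comma instead of a semicolon so the output can be
// pasted straight into 'hbase restore'. Table names here are illustrative.
public class BackupInfoSketch {
    static String getTableListAsString(Set<String> tables) {
        return String.join(",", tables);  // was: StringUtils.join(tables, ";")
    }

    public static void main(String[] args) {
        Set<String> tables = new LinkedHashSet<>();
        tables.add("ns:t1");
        tables.add("ns:t2");
        System.out.println(getTableListAsString(tables));  // ns:t1,ns:t2
    }
}
```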



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15830) Sasl encryption doesn't work with AsyncRpcChannelImpl

2016-05-27 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304473#comment-15304473
 ] 

Gary Helmling commented on HBASE-15830:
---

[~colinma], this is now applied to master so will show up in 2.0.  It looks 
like the async RPC code in branch-1 will have similar problems with qop != 
auth, but the code there is substantially different as well -- AsyncRpcChannel 
vs. AsyncRpcChannelImpl, and no TestAsyncSecureIPC class at all.  Were you 
looking to get this change into branch-1 as well?  If so, maybe we should open 
a separate issue for the backport.

> Sasl encryption doesn't work with AsyncRpcChannelImpl
> -
>
> Key: HBASE-15830
> URL: https://issues.apache.org/jira/browse/HBASE-15830
> Project: HBase
>  Issue Type: Bug
>Reporter: Colin Ma
>Assignee: Colin Ma
> Fix For: 2.0.0
>
> Attachments: HBASE-15830.001.patch, HBASE-15830.002.patch, 
> HBASE-15830.003.patch, HBASE-15830.004.patch
>
>
> Currently, SASL encryption doesn't work with AsyncRpcChannelImpl; there are 
> three problems:
> 1. 
> [sourcecode|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslClientHandler.java#L308]
>  will throw the following exception:
> java.lang.UnsupportedOperationException: direct buffer
>   at 
> io.netty.buffer.UnpooledUnsafeDirectByteBuf.array(UnpooledUnsafeDirectByteBuf.java:199)
>   at 
> org.apache.hadoop.hbase.security.SaslClientHandler.write(SaslClientHandler.java:308)
> 2. 
> [sourcecode|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcChannelImpl.java#L212]
>  has deadlocks problem.
> 3. TestAsyncSecureIPC doesn't cover the sasl encryption test case.
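
The exception in problem 1 can be illustrated with plain `java.nio.ByteBuffer`,
which has the same contract as Netty here: `array()` is only valid for heap
buffers, and a direct buffer requires an explicit copy. This is a sketch of the
safe pattern, not the actual SaslClientHandler fix:

```java
import java.nio.ByteBuffer;

// array() throws UnsupportedOperationException on a direct buffer (Netty's
// UnpooledUnsafeDirectByteBuf.array() behaves the same way), so check
// hasArray() and copy the bytes out when there is no backing array.
public class DirectBufferCopy {
    static byte[] toBytes(ByteBuffer buf) {
        byte[] a = new byte[buf.remaining()];
        if (buf.hasArray()) {
            // Heap buffer: read straight from the backing array.
            System.arraycopy(buf.array(), buf.arrayOffset() + buf.position(),
                a, 0, a.length);
        } else {
            // Direct buffer: duplicate so the caller's position is untouched.
            buf.duplicate().get(a);
        }
        return a;
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(4);
        direct.put(new byte[] {1, 2, 3, 4});
        direct.flip();
        System.out.println(toBytes(direct).length);  // 4
    }
}
```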



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14140) HBase Backup/Restore Phase 3: Enhance HBaseAdmin API to include backup/restore - related API

2016-05-27 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-14140:
--
Fix Version/s: HBASE-7912

> HBase Backup/Restore Phase 3: Enhance HBaseAdmin API to include 
> backup/restore - related API
> 
>
> Key: HBASE-14140
> URL: https://issues.apache.org/jira/browse/HBASE-14140
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
> Attachments: HBASE-14140-v1.patch, HBASE-14140-v10.patch, 
> HBASE-14140-v11.patch, HBASE-14140-v12.patch, HBASE-14140-v13.patch, 
> HBASE-14140-v14.patch, HBASE-14140-v4.patch, HBASE-14140-v7.patch, 
> HBASE-14140-v9.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15727) Canary Tool for Zookeeper

2016-05-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304460#comment-15304460
 ] 

Ted Yu commented on HBASE-15727:


TestCanaryTool.java has been added to the repo by HBASE-15617.

Mind rebasing the patch?

> Canary Tool for Zookeeper
> -
>
> Key: HBASE-15727
> URL: https://issues.apache.org/jira/browse/HBASE-15727
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HBASE-15727-v1.patch, HBASE-15727-v2.patch, 
> HBASE-15727-v3.patch, HBASE-15727.patch
>
>
> It would be nice to have the canary tool also monitor ZooKeeper, with something 
> simple like doing a getData() call on zookeeper.znode.parent.
> It would also be nice to create clients for every instance in the quorum so 
> that you could monitor overloaded or poorly behaving instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15292) Refined ZooKeeperWatcher to prevent ZooKeeper's callback while construction

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304437#comment-15304437
 ] 

Hudson commented on HBASE-15292:


FAILURE: Integrated in HBase-0.98-matrix #348 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/348/])
HBASE-15292 Refined ZooKeeperWatcher to prevent ZooKeeper's callback (apurtell: 
rev c7445edd627bee2ad822691de9475b0ce0264a87)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/InstancePending.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/PendingWatcher.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/zookeeper/TestInstancePending.java


> Refined ZooKeeperWatcher to prevent ZooKeeper's callback while construction
> ---
>
> Key: HBASE-15292
> URL: https://issues.apache.org/jira/browse/HBASE-15292
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15292-V2.patch, HBASE-15292-V3.patch, 
> HBASE-15292-V4.patch, HBASE-15292-V5.patch, HBASE-15292.patch
>
>
> The existing code is not just messy but also contains a subtle visibility bug 
> due to missing synchronization between threads.
> The root of the evil is that ZooKeeper uses an anti-pattern, starting a thread 
> within its own constructor, so in practice developers cannot use ZooKeeper 
> correctly without tedious code.
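
The shape of the fix (per the patch's InstancePending/PendingWatcher classes)
can be sketched with simplified stand-in types: hand ZooKeeper a placeholder
watcher, buffer any events that arrive before construction finishes, then swap
in the real watcher. The `Watcher` interface below is illustrative, not
ZooKeeper's:

```java
import java.util.ArrayList;
import java.util.List;

// Placeholder watcher that queues events until the real watcher is installed,
// so nothing ever observes a half-constructed object.
public class PendingWatcherSketch {
    interface Watcher { void process(String event); }

    static class PendingWatcher implements Watcher {
        private Watcher delegate;
        private final List<String> pending = new ArrayList<>();

        public synchronized void process(String event) {
            if (delegate == null) {
                pending.add(event);      // construction still running: buffer
            } else {
                delegate.process(event);
            }
        }

        // Called once the real watcher is fully constructed.
        synchronized void prepare(Watcher real) {
            delegate = real;
            for (String e : pending) real.process(e);
            pending.clear();
        }
    }

    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        PendingWatcher pw = new PendingWatcher();
        pw.process("SyncConnected");     // arrives "during construction"
        pw.prepare(seen::add);           // real watcher installed afterwards
        pw.process("NodeDataChanged");
        System.out.println(seen);        // [SyncConnected, NodeDataChanged]
    }
}
```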



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15854) Log the cause of SASL connection failures

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304432#comment-15304432
 ] 

Hudson commented on HBASE-15854:


FAILURE: Integrated in HBase-0.98-matrix #348 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/348/])
HBASE-15854 Log the cause of SASL connection failures (Robert Yokota) 
(apurtell: rev ffa899f2dd0363389d9d7465c0cdfd132c7e2eaf)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java


> Log the cause of SASL connection failures
> -
>
> Key: HBASE-15854
> URL: https://issues.apache.org/jira/browse/HBASE-15854
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.2.1
>Reporter: Robert Yokota
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15854.1.patch
>
>
> This is the same fix as for HADOOP-11291 to add more info during logging of 
> SASL connection failures.
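
A minimal sketch of the logging improvement, with an illustrative message
builder rather than RpcClient's actual code: include the underlying cause in
the failure message instead of only the generic top-level text.

```java
// Append the root cause to the SASL failure message so operators can tell
// a Kerberos credential problem from, say, a clock-skew problem.
public class SaslLogSketch {
    static String describe(Exception e) {
        Throwable cause = e.getCause();
        return "SASL authentication failed."
            + " The most likely cause is missing or invalid credentials."
            + (cause != null ? " Underlying cause: " + cause : "");
    }

    public static void main(String[] args) {
        Exception e = new RuntimeException("GSS initiate failed",
            new IllegalStateException("No valid credentials provided"));
        System.out.println(describe(e));
    }
}
```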



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15842) SnapshotInfo should display ownership information

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304435#comment-15304435
 ] 

Hudson commented on HBASE-15842:


FAILURE: Integrated in HBase-0.98-matrix #348 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/348/])
HBASE-15842 SnapshotInfo should display ownership information (apurtell: rev 
124dd53e11b94b01f74ae36203f752aed8ab97bc)
* hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java
Revert "HBASE-15842 SnapshotInfo should display ownership information" 
(apurtell: rev cfc0eec98b993d97b230ea92ca95116933a990fe)
* hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java


> SnapshotInfo should display ownership information
> -
>
> Key: HBASE-15842
> URL: https://issues.apache.org/jira/browse/HBASE-15842
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 15842.v1.txt, 15842.v1.txt
>
>
> Currently SnapshotInfo doesn't show the snapshot owner:
> {code}
> hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -snapshot 
> snapshot_table_qm0uxvk19x -stats -schema
> ...
> Snapshot Info
> 
>Name: snapshot_table_qm0uxvk19x
>Type: FLUSH
>   Table: table_qm0uxvk19x
>  Format: 2
> Created: 2016-05-16T20:54:08
> ...
> {code}
> This JIRA is to add ownership information to the display.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15740) Replication source.shippedKBs metric is undercounting because it is in KB

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304434#comment-15304434
 ] 

Hudson commented on HBASE-15740:


FAILURE: Integrated in HBase-0.98-matrix #348 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/348/])
HBASE-15740 Replication source.shippedKBs metric is undercounting (apurtell: 
rev 1822bd3ec57d40a2cb682500a8a5cff5df9c9e08)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsSource.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSource.java


> Replication source.shippedKBs metric is undercounting because it is in KB
> -
>
> Key: HBASE-15740
> URL: https://issues.apache.org/jira/browse/HBASE-15740
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15740-0.98.patch, hbase-15740_v1.patch, 
> hbase-15740_v2.patch
>
>
> In a cluster where there is replication going on, I've noticed that this is 
> always 0:
> {code}
> "source.shippedKBs" : 0,
> {code}
> Looking at the source reveals why:
> {code}
>   metrics.shipBatch(currentNbOperations, currentSize / 1024, 
> currentNbHFiles);
> {code}
> It is always undercounting because we discard the remaining bytes after each 
> KB boundary. This is especially a problem when we are always shipping small 
> batches of <1 KB.  
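
The truncation is easy to demonstrate: `currentSize / 1024` rounds each batch
down to whole KBs, so many sub-KB batches report zero forever. Accumulating raw
bytes and converting only when the metric is read avoids the loss. Method names
below are illustrative, not the actual MetricsSource API:

```java
import java.util.Arrays;

// Per-batch integer division (the bug) vs. byte accumulation (the fix).
public class ShippedBytesSketch {
    static long shippedKBsTruncating(long[] batchSizes) {
        long kb = 0;
        for (long size : batchSizes) kb += size / 1024;  // drops the remainder
        return kb;
    }

    static long shippedKBsFromBytes(long[] batchSizes) {
        long bytes = 0;
        for (long size : batchSizes) bytes += size;      // keep every byte
        return bytes / 1024;                             // convert at read time
    }

    public static void main(String[] args) {
        long[] batches = new long[2048];                 // 2048 batches of 600 B
        Arrays.fill(batches, 600L);
        System.out.println(shippedKBsTruncating(batches)); // 0
        System.out.println(shippedKBsFromBytes(batches));  // 1200
    }
}
```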



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15837) Memstore size accounting is wrong if postBatchMutate() throws exception

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304436#comment-15304436
 ] 

Hudson commented on HBASE-15837:


FAILURE: Integrated in HBase-0.98-matrix #348 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/348/])
HBASE-15837 Memstore size accounting is wrong if postBatchMutate() (enis: rev 
695843bcdabcf1030638f2ccc3a522184e162431)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Memstore size accounting is wrong if postBatchMutate() throws exception
> ---
>
> Key: HBASE-15837
> URL: https://issues.apache.org/jira/browse/HBASE-15837
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.3.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15837.001.patch, hbase-15837-v1.patch, 
> hbase-15837.branch-1.patch, hbase-memstore-size-accounting.patch
>
>
> Over in PHOENIX-2883, I've been trying to figure out how to track down the 
> root cause of an issue we were seeing where a negative memstoreSize was 
> ultimately causing an RS to abort. The tl;dr version is
> * Something causes memstoreSize to be negative (not sure what is doing this 
> yet)
> * All subsequent flushes short-circuit and don't run because they think there 
> is no data to flush
> * The region is eventually closed (commonly, for a move).
> * A final flush is attempted on each store before closing (which also 
> short-circuit for the same reason), leaving unflushed data in each store.
> * The sanity check that each store's size is zero fails and the RS aborts.
> I have a little patch which I think should improve our failure handling around 
> this, safely preventing the RS abort (by forcing a flush when memstoreSize is 
> negative) and logging a call trace when an update to memstoreSize makes it 
> negative (to find culprits in the future).
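
One possible shape of that defensive accounting, as a simplified stand-in for
HRegion's actual field rather than the real patch: flag any delta that would
drive the tracked size negative (where the real patch logs a call trace and
forces a flush) and clamp it so subsequent flushes are not skipped.

```java
import java.util.concurrent.atomic.AtomicLong;

// Detect and neutralize a negative memstore size instead of letting it
// short-circuit every later flush and eventually abort the RS.
public class MemstoreSizeSketch {
    private final AtomicLong memstoreSize = new AtomicLong();
    boolean sawNegative;

    long addAndGetMemstoreSize(long delta) {
        long updated = memstoreSize.addAndGet(delta);
        if (updated < 0) {
            sawNegative = true;      // real patch logs a call trace here
            memstoreSize.set(0);     // and forces a flush rather than aborting
            return 0;
        }
        return updated;
    }

    public static void main(String[] args) {
        MemstoreSizeSketch m = new MemstoreSizeSketch();
        m.addAndGetMemstoreSize(100);
        System.out.println(m.addAndGetMemstoreSize(-250)); // 0
        System.out.println(m.sawNegative);                 // true
    }
}
```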



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15847) VerifyReplication prefix filtering

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304439#comment-15304439
 ] 

Hudson commented on HBASE-15847:


FAILURE: Integrated in HBase-0.98-matrix #348 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/348/])
HBASE-15847 VerifyReplication prefix filtering (apurtell: rev 
734c50bc616cc3595f45dd212a5ff0200a5c9ea5)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java


> VerifyReplication prefix filtering
> --
>
> Key: HBASE-15847
> URL: https://issues.apache.org/jira/browse/HBASE-15847
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0, 0.98.19, 1.1.5
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
> Fix For: 2.0.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15847-0.98.patch, HBASE-15847-branch-1.patch, 
> HBASE-15847.patch, HBASE-15847.v1.patch
>
>
> VerifyReplication currently lets a user verify that data within a time range 
> has been replicated to a particular peer. It can be useful to verify only data 
> that starts with particular prefixes. (An example would be an unsalted 
> multi-tenant Phoenix table where you wish to only verify data for particular 
> tenants.)
> Add a new option to the VerifyReplication job to allow for a list of prefixes 
> to be given. 
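
The selection logic can be sketched with plain strings standing in for row
keys (the real job would translate the prefix list into scan filters; names
below are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Keep only rows whose key starts with one of the requested prefixes,
// e.g. tenant IDs in an unsalted multi-tenant Phoenix table.
public class PrefixSelectSketch {
    static List<String> keepWithPrefixes(List<String> rowKeys, List<String> prefixes) {
        return rowKeys.stream()
            .filter(k -> prefixes.stream().anyMatch(k::startsWith))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("tenantA#r1", "tenantB#r1", "tenantC#r9");
        System.out.println(keepWithPrefixes(rows, Arrays.asList("tenantA", "tenantC")));
        // [tenantA#r1, tenantC#r9]
    }
}
```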



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15808) Reduce potential bulk load intermediate space usage and waste

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304433#comment-15304433
 ] 

Hudson commented on HBASE-15808:


FAILURE: Integrated in HBase-0.98-matrix #348 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/348/])
HBASE-15808 Reduce potential bulk load intermediate space usage and (apurtell: 
rev a38b633a4bdf171d9a12a600a9b1a22b1f09dec9)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


> Reduce potential bulk load intermediate space usage and waste
> -
>
> Key: HBASE-15808
> URL: https://issues.apache.org/jira/browse/HBASE-15808
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 0.98.20
>
> Attachments: HBASE-15808-v2.patch, HBASE-15808-v3.patch, 
> HBASE-15808.patch
>
>
> If the bulk load input files do not match the existing region boundaries, the 
> files will be split.
> In the unfortunate cases where the files need to be split multiple times,
> the process can consume unnecessary space and can even run out of space.
> Here is an over-simplified example.
> Original size of input files:  
>   consumed space: size --> 300GB
> After a round of splits: 
>   consumed space: size + tmpspace1 --> 300GB + 300GB
> After another round of splits: 
>   consumed space:  size + tmpspace1 + tmpspace2 --> 300GB + 300GB + 300GB
> ..
> Currently we don't do any cleanup in the process. At least all the 
> intermediate tmp space (all but the last round's) can be deleted in the process.
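
The arithmetic of the example above can be made concrete. Without cleanup,
every split round leaves a full extra copy behind, so peak usage grows
linearly with the number of rounds; deleting each intermediate once the next
round has consumed it caps peak usage at roughly three copies. Figures are
illustrative GB, and the cleanup model is an assumption, not the patch itself:

```java
// Peak space consumed by repeated bulk-load splits, with and without
// deleting intermediate tmp directories along the way.
public class BulkLoadSpaceSketch {
    static long peakWithoutCleanup(long inputGb, int rounds) {
        return inputGb * (1 + rounds);       // original plus every tmp copy
    }

    static long peakWithCleanup(long inputGb, int rounds) {
        if (rounds == 0) return inputGb;     // nothing split
        if (rounds == 1) return inputGb * 2; // original + tmp1
        return inputGb * 3;                  // original + previous tmp + current tmp
    }

    public static void main(String[] args) {
        System.out.println(peakWithoutCleanup(300, 3)); // 1200
        System.out.println(peakWithCleanup(300, 3));    // 900
    }
}
```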



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304431#comment-15304431
 ] 

Hudson commented on HBASE-15841:


FAILURE: Integrated in HBase-0.98-matrix #348 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/348/])
HBASE-15841 Performance Evaluation tool total rows may not be set (apurtell: 
rev 30cf572836aeae318f8fddf5fb35ac27367edb2f)
* hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 0.98.20
>
> Attachments: HBASE-15841-branch-1-v2.patch, 
> HBASE-15841-branch-1.patch, HBASE-15841-master-v2.patch, 
> HBASE-15841-master.patch
>
>
> Carried my comment over from HBASE-15403:
> Recently when I ran PerformanceEvaluation, I noticed a problem with the number 
> of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 20
> All produced a similar number of rows, and on the file system they look 
> similar in size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15815) Region mover script sometimes reports stuck region where only one server was involved

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304438#comment-15304438
 ] 

Hudson commented on HBASE-15815:


FAILURE: Integrated in HBase-0.98-matrix #348 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/348/])
HBASE-15815 Region mover script sometimes reports stuck region where (apurtell: 
rev 8b4a2dcaf7f97015fdfdf0757f571cdd139f455f)
* bin/region_mover.rb


> Region mover script sometimes reports stuck region where only one server was 
> involved
> -
>
> Key: HBASE-15815
> URL: https://issues.apache.org/jira/browse/HBASE-15815
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 1.4.0, 0.98.20
>
> Attachments: 15815-branch-1.v1.txt, 15815-branch-1.v2.txt, 
> HBASE-15815.branch-1.02.txt, HBASE-15815.branch-1.v2.txt
>
>
> Sometimes we saw the following in the output from the region mover script:
> {code}
> 2016-05-11 01:38:21,187||INFO|3969|140086696048384|MainThread|2016-05-11 
> 01:38:21,186 INFO  [RubyThread-7: 
> /.../current/hbase-client/bin/thread-pool.rb:28-EventThread] 
> zookeeper.ClientCnxn: EventThread shut down
> 2016-05-11 01:38:21,299||INFO|3969|140086696048384|MainThread|RuntimeError: 
> Region stuck on hbase-5-2.osl,16020,1462930100540,, 
> newserver=hbase-5-2.osl,16020,1462930100540
> {code}
> There was only one server involved.
> Since the name of the region was not printed, debugging is hard.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15842) SnapshotInfo should display ownership information

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304385#comment-15304385
 ] 

Hudson commented on HBASE-15842:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1220 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1220/])
HBASE-15842 SnapshotInfo should display ownership information (apurtell: rev 
124dd53e11b94b01f74ae36203f752aed8ab97bc)
* hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java
Revert "HBASE-15842 SnapshotInfo should display ownership information" 
(apurtell: rev cfc0eec98b993d97b230ea92ca95116933a990fe)
* hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java


> SnapshotInfo should display ownership information
> -
>
> Key: HBASE-15842
> URL: https://issues.apache.org/jira/browse/HBASE-15842
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 15842.v1.txt, 15842.v1.txt
>
>
> Currently SnapshotInfo doesn't show the snapshot owner:
> {code}
> hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -snapshot 
> snapshot_table_qm0uxvk19x -stats -schema
> ...
> Snapshot Info
> 
>Name: snapshot_table_qm0uxvk19x
>Type: FLUSH
>   Table: table_qm0uxvk19x
>  Format: 2
> Created: 2016-05-16T20:54:08
> ...
> {code}
> This JIRA is to add ownership information to the display.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15740) Replication source.shippedKBs metric is undercounting because it is in KB

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304384#comment-15304384
 ] 

Hudson commented on HBASE-15740:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1220 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1220/])
HBASE-15740 Replication source.shippedKBs metric is undercounting (apurtell: 
rev 1822bd3ec57d40a2cb682500a8a5cff5df9c9e08)
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsSource.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSource.java
* 
hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java


> Replication source.shippedKBs metric is undercounting because it is in KB
> -
>
> Key: HBASE-15740
> URL: https://issues.apache.org/jira/browse/HBASE-15740
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15740-0.98.patch, hbase-15740_v1.patch, 
> hbase-15740_v2.patch
>
>
> In a cluster where there is replication going on, I've noticed that this is 
> always 0:
> {code}
> "source.shippedKBs" : 0,
> {code}
> Looking at the source reveals why:
> {code}
>   metrics.shipBatch(currentNbOperations, currentSize / 1024, 
> currentNbHFiles);
> {code}
> It is always undercounting because we discard the remaining bytes after each 
> KB boundary. This is especially a problem when we are always shipping small 
> batches of <1 KB.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304381#comment-15304381
 ] 

Hudson commented on HBASE-15841:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1220 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1220/])
HBASE-15841 Performance Evaluation tool total rows may not be set (apurtell: 
rev 30cf572836aeae318f8fddf5fb35ac27367edb2f)
* hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 0.98.20
>
> Attachments: HBASE-15841-branch-1-v2.patch, 
> HBASE-15841-branch-1.patch, HBASE-15841-master-v2.patch, 
> HBASE-15841-master.patch
>
>
> Carried my comment over from HBASE-15403:
> Recently when I ran PerformanceEvaluation, I noticed a problem with the number 
> of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 20
> All produced a similar number of rows, and on the file system they look 
> similar in size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

