[jira] [Commented] (HBASE-16567) Upgrade to protobuf3

2016-09-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469637#comment-15469637
 ] 

ramkrishna.s.vasudevan commented on HBASE-16567:


Ya. Just after adding that comment, I saw on their page that it got released.

> Upgrade to protobuf3
> 
>
> Key: HBASE-16567
> URL: https://issues.apache.org/jira/browse/HBASE-16567
> Project: HBase
>  Issue Type: Task
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16567.master.001.patch
>
>
> Move master branch on to protobuf3. See 
> https://github.com/google/protobuf/releases We'd do it because pb3 saves some
> byte copies and can work with offheap buffers -- needed for the off-heap write
> path project -- though read-time is still a TODO (this means pb3 is not 
> enough; we'll have to patch it -- or patch pb2.5).
> HBASE-15638 has us first shading protobufs before upgrading. Let us list here
> the issues with just going to pb3 without shading, if only for completeness'
> sake; i.e. do we have to shade?
>  * pb3 is by default wire compatible with pb2.
>  * protoc3 run against our .protos works fine except pb3 breaks our 
> HBaseZeroCopyLiteralByteString hack so this has to be removed (possibly 
> recast using new pb3 types)
>  * Starting up a cluster that is all pb3 seems to work fine.
>  * A pb2 branch-1 can read and write against the pb3 master cluster.
> What will break if we just upgrade to pb3?
>  * We should be able to write HDFS messages on our AsyncWAL using pb3; the 
> pb2 HDFS should be able to read them (not tested). Or maybe not. See the policy
> here: https://github.com/google/protobuf/issues/1852, which seems to indicate
> pb3 will not be able to write compatible pb2 Messages. TODO.
>  * Core Coprocessor Endpoints such as AccessControl seem to just work (their 
> protos will have been protoc3'd). I did simple test with a server from master 
> branch up on pb3 and then going against it with a branch-1 client on pb2. I 
> was able to add grants.
>  * For non-core CPEPs where the protos are still pb2, it might just work. To
> test. It would not be the end of the world if they did not.
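The wire-compatibility bullet above can be made concrete without any protobuf dependency: proto2 and proto3 share the same tag/varint wire encoding (the classic docs example, field 1 = 150, encodes as 08 96 01 under either syntax). A minimal hand-rolled encoder, purely illustrative:

```java
import java.io.ByteArrayOutputStream;

public class WireDemo {
    // Encode an unsigned value as a protobuf base-128 varint.
    static void writeVarint(ByteArrayOutputStream out, long v) {
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7F) | 0x80)); // low 7 bits, continuation bit set
            v >>>= 7;
        }
        out.write((int) v);
    }

    // Encode "field N = value" with wire type 0 (varint), identical in pb2 and pb3.
    static byte[] encodeVarintField(int fieldNumber, long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint(out, (long) (fieldNumber << 3)); // tag = (field << 3) | wiretype 0
        writeVarint(out, value);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        for (byte b : encodeVarintField(1, 150)) {
            System.out.printf("%02x ", b); // 08 96 01
        }
        System.out.println();
    }
}
```

Because only the .proto syntax and the generated-code API differ, a pb2 client and a pb3 server exchange identical bytes for the same message, which is what the cluster tests above observed.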



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16530) Reduce DBE code duplication

2016-09-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469593#comment-15469593
 ] 

ramkrishna.s.vasudevan commented on HBASE-16530:


I think this unification is great. +1 on V4. Will commit this later today
unless there are objections.

> Reduce DBE code duplication
> ---
>
> Key: HBASE-16530
> URL: https://issues.apache.org/jira/browse/HBASE-16530
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16530-master_V1.patch, 
> HBASE-16530-master_V2.patch, HBASE-16530-master_V3.patch, 
> HBASE-16530-master_V4.patch
>
>






[jira] [Commented] (HBASE-16567) Upgrade to protobuf3

2016-09-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469588#comment-15469588
 ] 

ramkrishna.s.vasudevan commented on HBASE-16567:


One question on the pom upgrade.
When I did this some time back I ran into the following; just adding the note I
had stored from that upgrade:
{code}
the actual protoc.exe version shows 3.0.0 but the protobuf-java jars are of
3.0.0-beta2, so things do not work.
When we configure 3.0.0-beta2 in the hbase pom, protoc says it needs
3.0.0; when we configure 3.0.0 in the hbase pom,
it says it cannot find the 3.0.0 protobuf jar. Hence renamed the 3.0.0-beta2 jar
to a 3.0.0 protobuf jar.
{code}
Also how were you able to compile the .proto files under src/test folder?
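For what it's worth, mismatches like the one in the note above usually come from the protoc binary and the protobuf-java jar being pinned separately; one hedged way to keep them in lockstep is a single version property (a sketch only, not necessarily HBase's actual pom layout):

```xml
<!-- Sketch: drive both the protobuf-java jar and the protoc invocation
     from one property so they cannot drift apart.
     Property and plugin wiring are illustrative. -->
<properties>
  <protobuf.version>3.0.0</protobuf.version>
</properties>
<dependencies>
  <dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>${protobuf.version}</version>
  </dependency>
</dependencies>
```

Whatever plugin runs protoc would then reference the same `${protobuf.version}` property.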


> Upgrade to protobuf3
> 
>
> Key: HBASE-16567
> URL: https://issues.apache.org/jira/browse/HBASE-16567
> Project: HBase
>  Issue Type: Task
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16567.master.001.patch
>
>
> Move master branch on to protobuf3. See 
> https://github.com/google/protobuf/releases We'd do it because pb3 saves some
> byte copies and can work with offheap buffers -- needed for the off-heap write
> path project -- though read-time is still a TODO.
> HBASE-15638 has us first shading protobufs before upgrading. Let us list here
> the issues with just going to pb3 without shading, if only for completeness'
> sake; i.e. do we have to shade?
>  * pb3 is by default wire compatible with pb2.
>  * protoc3 run against our .protos works fine except pb3 breaks our 
> HBaseZeroCopyLiteralByteString hack.
>  * Starting up a cluster that is all pb3'd seems to work fine.
>  * A pb2 branch-1 can read and write against the pb3 master cluster.
> What will break if we just upgrade to pb3?
>  * We should be able to write HDFS messages on our AsyncWAL using pb3; the 
> pb2 HDFS should be able to read them (not tested). Or maybe not. See policy
> here: https://github.com/google/protobuf/issues/1852
>  * Core Coprocessor Endpoints such as AccessControl seem to just work (their 
> protos will have been protoc3'd). I did simple test with a server from master 
> branch up on pb3 and then going against it with a branch-1 client on pb2. I 
> was able to add grants.
>  * For non-core CPEPs where the protos are still pb2, it might just work. To
> test. It would not be the end of the world if they did not.





[jira] [Updated] (HBASE-15638) Shade protobuf

2016-09-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15638:
--
Description: 
We need to change our protobuf. Currently it is pb2.5.0. As is, protobufs 
expect all buffers to be on-heap byte arrays. It does not have facility for 
dealing in ByteBuffers and off-heap ByteBuffers in particular. This fact 
frustrates the off-heaping-of-the-write-path project as 
marshalling/unmarshalling of protobufs involves a copy on-heap first.

So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
ensure we pick up the patched protobuf always, we need to relocate/shade our 
protobuf and adjust all protobuf references accordingly.

Given as we have protobufs in our public facing API, Coprocessor Endpoints -- 
which use protobuf Service to describe new API -- a blind relocation/shading of 
com.google.protobuf.* will break our API for CoProcessor EndPoints (CPEP) in 
particular. For example, in the Table Interface, to invoke a method on a 
registered CPEP, we have:

{code}
<T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
    Class<T> service, byte[] startKey, byte[] endKey,
    org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
  throws com.google.protobuf.ServiceException, Throwable{code}

This issue is how we intend to shade protobuf for hbase-2.0.0 while preserving 
our API as is so CPEPs continue to work on the new hbase.
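Relocation of this kind is typically done with the maven-shade-plugin; a minimal hedged sketch (the shaded target package shown here is illustrative, not necessarily the one HBase ends up choosing):

```xml
<!-- Sketch: rewrite com.google.protobuf references in the shaded jar
     to a relocated package so the patched protobuf cannot collide with
     a downstream pb2.5/pb3 on the classpath. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.google.protobuf</pattern>
        <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.protobuf</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

The catch described above is exactly that a blind relocation also rewrites the `com.google.protobuf.*` types that leak through the public CPEP-facing API.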

  was:
We need to change our protobuf. Currently it is pb2.5.0. As is, protobufs 
expect all buffers to be on-heap byte arrays. It does not have facility for 
dealing in ByteBuffers and off-heap ByteBuffers in particular. This fact 
frustrates the off-heaping-of-the-write-path project as 
marshalling/unmarshalling of protobufs involves a copy on-heap first.

So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
ensure we pick up the patched protobuf always, we need to relocate/shade our 
protobuf and adjust all protobuf references accordingly.

Given as we have protobufs in our public facing API, Coprocessor Endpoints -- 
which use protobuf Service to describe new API -- a blind relocation/shading of 
com.google.protobuf.* will break our API for CoProcessor EndPoints (CPEP) in 
particular. For example, in the Table Interface, to invoke a method on a 
registered CPEP, we have:

{{<T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
    Class<T> service, byte[] startKey, byte[] endKey,
    org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
  throws com.google.protobuf.ServiceException, Throwable}}

This issue is how we intend to shade protobuf for hbase-2.0.0 while preserving 
our API as is so CPEPs continue to work on the new hbase.


> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 15638v2.patch, HBASE-15638.master.001.patch, 
> HBASE-15638.master.002.patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003 (1).patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003.patch, HBASE-15638.master.003.patch, 
> HBASE-15638.master.004.patch, HBASE-15638.master.005.patch, 
> HBASE-15638.master.006.patch, HBASE-15638.master.007.patch, 
> HBASE-15638.master.007.patch, HBASE-15638.master.008.patch, 
> HBASE-15638.master.009.patch, as.far.as.server.patch
>
>
> We need to change our protobuf. Currently it is pb2.5.0. As is, protobufs 
> expect all buffers to be on-heap byte arrays. It does not have facility for 
> dealing in ByteBuffers and off-heap ByteBuffers in particular. This fact 
> frustrates the off-heaping-of-the-write-path project as 
> marshalling/unmarshalling of protobufs involves a copy on-heap first.
> So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
> ensure we pick up the patched protobuf always, we need to relocate/shade our 
> protobuf and adjust all protobuf references accordingly.
> Given as we have protobufs in our public facing API, Coprocessor Endpoints -- 
> which use protobuf Service to describe new API -- a blind relocation/shading 
> of com.google.protobuf.* will break our API for CoProcessor EndPoints (CPEP) 
> in particular. For example, in the Table Interface, to invoke a method on a 
> registered CPEP, we have:
> {code}
> <T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
>     Class<T> service, byte[] startKey, byte[] endKey,
>     org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
>   throws com.google.protobuf.ServiceException, Throwable{code}
> This issue is how we intend to shade protobuf for hbase-2.0.0 while 
> preserving our API as is so CPEPs continue to work on the new hbase.

[jira] [Updated] (HBASE-15638) Shade protobuf

2016-09-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15638:
--
Description: 
We need to change our protobuf. Currently it is pb2.5.0. As is, protobufs 
expect all buffers to be on-heap byte arrays. It does not have facility for 
dealing in ByteBuffers and off-heap ByteBuffers in particular. This fact 
frustrates the off-heaping-of-the-write-path project as 
marshalling/unmarshalling of protobufs involves a copy on-heap first.

So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
ensure we pick up the patched protobuf always, we need to relocate/shade our 
protobuf and adjust all protobuf references accordingly.

Given as we have protobufs in our public facing API, Coprocessor Endpoints -- 
which use protobuf Service to describe new API -- a blind relocation/shading of 
com.google.protobuf.* will break our API for CoProcessor EndPoints (CPEP) in 
particular. For example, in the Table Interface, to invoke a method on a 
registered CPEP, we have:

{{<T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
    Class<T> service, byte[] startKey, byte[] endKey,
    org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
  throws com.google.protobuf.ServiceException, Throwable}}

This issue is how we intend to shade protobuf for hbase-2.0.0 while preserving 
our API as is so CPEPs continue to work on the new hbase.

  was:
We need to change our protobuf. Currently it is pb2.5.0. As is, protobufs 
expect all buffers to be on-heap byte arrays. It does not have facility for 
dealing in ByteBuffers and off-heap ByteBuffers in particular. This fact 
frustrates the off-heaping-of-the-write-path project as 
marshalling/unmarshalling of protobufs involves a copy on-heap first.

So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
ensure we pick up the patched protobuf always, we need to relocate/shade our 
protobuf and adjust all protobuf references accordingly.

Given as we have protobufs in our public facing API, Coprocessor Endpoints -- 
which use protobuf Service to describe new API -- a blind relocation/shading of 
com.google.protobuf.* will break our API for CoProcessor EndPoints (CPEP) in 
particular. For example, in the Table Interface, to invoke a method on a 
registered CPEP, we have:

<T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
    Class<T> service, byte[] startKey, byte[] endKey,
    org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
  throws com.google.protobuf.ServiceException, Throwable

This issue is how we intend to shade protobuf for hbase-2.0.0 while preserving 
our API as is so CPEPs continue to work on the new hbase.


> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 15638v2.patch, HBASE-15638.master.001.patch, 
> HBASE-15638.master.002.patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003 (1).patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003.patch, HBASE-15638.master.003.patch, 
> HBASE-15638.master.004.patch, HBASE-15638.master.005.patch, 
> HBASE-15638.master.006.patch, HBASE-15638.master.007.patch, 
> HBASE-15638.master.007.patch, HBASE-15638.master.008.patch, 
> HBASE-15638.master.009.patch, as.far.as.server.patch
>
>
> We need to change our protobuf. Currently it is pb2.5.0. As is, protobufs 
> expect all buffers to be on-heap byte arrays. It does not have facility for 
> dealing in ByteBuffers and off-heap ByteBuffers in particular. This fact 
> frustrates the off-heaping-of-the-write-path project as 
> marshalling/unmarshalling of protobufs involves a copy on-heap first.
> So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
> ensure we pick up the patched protobuf always, we need to relocate/shade our 
> protobuf and adjust all protobuf references accordingly.
> Given as we have protobufs in our public facing API, Coprocessor Endpoints -- 
> which use protobuf Service to describe new API -- a blind relocation/shading 
> of com.google.protobuf.* will break our API for CoProcessor EndPoints (CPEP) 
> in particular. For example, in the Table Interface, to invoke a method on a 
> registered CPEP, we have:
> {{<T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
>     Class<T> service, byte[] startKey, byte[] endKey,
>     org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
>   throws com.google.protobuf.ServiceException, Throwable}}
> This issue is how we intend to shade protobuf for hbase-2.0.0 while 
> preserving our API as is so CPEPs continue to work on the new hbase.




[jira] [Updated] (HBASE-15638) Shade protobuf

2016-09-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15638:
--
Description: 
We need to change our protobuf. Currently it is pb2.5.0. As is, protobufs 
expect all buffers to be on-heap byte arrays. It does not have facility for 
dealing in ByteBuffers and off-heap ByteBuffers in particular. This fact 
frustrates the off-heaping-of-the-write-path project as 
marshalling/unmarshalling of protobufs involves a copy on-heap first.

So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
ensure we pick up the patched protobuf always, we need to relocate/shade our 
protobuf and adjust all protobuf references accordingly.

Given as we have protobufs in our public facing API, Coprocessor Endpoints -- 
which use protobuf Service to describe new API -- a blind relocation/shading of 
com.google.protobuf.* will break our API for CoProcessor EndPoints (CPEP) in 
particular. For example, in the Table Interface, to invoke a method on a 
registered CPEP, we have:

<T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
    Class<T> service, byte[] startKey, byte[] endKey,
    org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
  throws com.google.protobuf.ServiceException, Throwable

This issue is how we intend to shade protobuf for hbase-2.0.0 while preserving 
our API as is so CPEPs continue to work on the new hbase.

  was:
Shade protobufs so we can move to a different version without breaking the 
world. We want to get up on pb3 because it has unsafe methods that allow us 
save on copies; it also has some means of dealing with BBs so we can pass it 
offheap DBBs. We'll probably want to change PB3 to open it up some more too so 
we can stay offheap as we traverse PB. This issue comes of [~anoop.hbase] and 
[~ram_krish]'s offheaping of the readpath work.

This change is mostly straight-forward but there are some tricky bits:

 # How to interface with HDFS? It wants its ByteStrings. Here in particular in 
FanOutOneBlockAsyncDFSOutputSaslHelper:

{code}
  if (payload != null) {
builder.setPayload(ByteString.copyFrom(payload));
  }
{code}

 # [~busbey] also points out that we need to take care of endpoints done as pb. 
Test at least.

Let me raise this one on the dev list too.
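The ByteString.copyFrom call in the snippet above is itself an on-heap copy, i.e. the very cost this project wants to avoid, whereas pb3-style unsafe wrapping shares the backing array. A dependency-free Java sketch of the copy-vs-wrap distinction (illustrative only; not the protobuf API itself):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class CopyVsWrap {
    // copyFrom-style: allocates and copies, like ByteString.copyFrom(payload).
    static byte[] copyOf(byte[] payload) {
        return Arrays.copyOf(payload, payload.length);
    }

    // wrap-style: shares the backing array (no copy), analogous to the
    // zero-copy wrapping the off-heaping work wants from the pb library.
    static ByteBuffer wrap(byte[] payload) {
        return ByteBuffer.wrap(payload);
    }

    public static void main(String[] args) {
        byte[] payload = {1, 2, 3};
        byte[] copied = copyOf(payload);
        ByteBuffer wrapped = wrap(payload);
        payload[0] = 9;                      // mutate after handing out
        System.out.println(copied[0]);       // 1: the copy is unaffected
        System.out.println(wrapped.get(0));  // 9: the wrapper sees the mutation
    }
}
```

The wrap variant is why the caller must promise not to mutate the array afterwards; that contract is the price of skipping the copy.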



> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 15638v2.patch, HBASE-15638.master.001.patch, 
> HBASE-15638.master.002.patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003 (1).patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003.patch, HBASE-15638.master.003.patch, 
> HBASE-15638.master.004.patch, HBASE-15638.master.005.patch, 
> HBASE-15638.master.006.patch, HBASE-15638.master.007.patch, 
> HBASE-15638.master.007.patch, HBASE-15638.master.008.patch, 
> HBASE-15638.master.009.patch, as.far.as.server.patch
>
>
> We need to change our protobuf. Currently it is pb2.5.0. As is, protobufs 
> expect all buffers to be on-heap byte arrays. It does not have facility for 
> dealing in ByteBuffers and off-heap ByteBuffers in particular. This fact 
> frustrates the off-heaping-of-the-write-path project as 
> marshalling/unmarshalling of protobufs involves a copy on-heap first.
> So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
> ensure we pick up the patched protobuf always, we need to relocate/shade our 
> protobuf and adjust all protobuf references accordingly.
> Given as we have protobufs in our public facing API, Coprocessor Endpoints -- 
> which use protobuf Service to describe new API -- a blind relocation/shading 
> of com.google.protobuf.* will break our API for CoProcessor EndPoints (CPEP) 
> in particular. For example, in the Table Interface, to invoke a method on a 
> registered CPEP, we have:
> <T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
>     Class<T> service, byte[] startKey, byte[] endKey,
>     org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
>   throws com.google.protobuf.ServiceException, Throwable
> This issue is how we intend to shade protobuf for hbase-2.0.0 while 
> preserving our API as is so CPEPs continue to work on the new hbase.





[jira] [Updated] (HBASE-16264) Figure how to deal with endpoints and shaded pb

2016-09-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16264:
--
Release Note: Shade/relocate the protobuf hbase uses internally. Even 
though our API references protobufs -- to support Coprocessor Endpoints -- 
Coprocessor Endpoints should still work (it is a bug if they do not).  (was: 
Shade/relocate the protobuf hbase uses internally. Even though our API 
references protobufs -- to support Coprocessor Endpoints -- Coprocessor 
Endpoints should still work (it is a bug if they do not).

Downside for developers is that you will have to add a built version of the 
hbase-protocol jar -- the module that includes the shaded protobuf -- to your 
IDE build path so the relocated protobufs can be resolved.)

> Figure how to deal with endpoints and shaded pb
> ---
>
> Key: HBASE-16264
> URL: https://issues.apache.org/jira/browse/HBASE-16264
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, Protobufs
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 16264.tactic2.patch, HBASE-16264.master.001.patch, 
> HBASE-16264.master.002.patch, HBASE-16264.master.003.patch, 
> HBASE-16264.master.004.patch, HBASE-16264.master.005.patch, 
> HBASE-16264.master.006.patch, HBASE-16264.master.007.patch
>
>
> Come up w/ a migration plan for coprocessor endpoints for when our pb is shaded.
> Would be sweet if we could make it so it all just worked. At worst, come up w/ a
> prescription for how to migrate existing CPs.





[jira] [Updated] (HBASE-16562) ITBLL should fail to start if misconfigured

2016-09-06 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16562:
--
Fix Version/s: 1.2.4
   0.98.23
   1.1.7
   1.3.1
   1.4.0
   1.0.4
   2.0.0

> ITBLL should fail to start if misconfigured
> ---
>
> Key: HBASE-16562
> URL: https://issues.apache.org/jira/browse/HBASE-16562
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Andrew Purtell
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.3.1, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-16562.patch, HBASE-16562.v1.patch
>
>
> The number of nodes in ITBLL must be a multiple of width*wrap (defaults to 25M,
> but can be configured by adding two more args to the test invocation) or else
> verification will fail. This can be very expensive in terms of time or hourly
> billing for on-demand test resources. Check the sanity of test parameters
> before launching any MR jobs and fail fast if invariants aren't met, with an
> indication of what parameter(s) need fixing.
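The fail-fast check the description asks for amounts to a divisibility test done before any job launch; a hedged sketch (method and parameter names are illustrative, not ITBLL's actual ones):

```java
public class ItbllSanity {
    // Illustrative pre-flight check: total node count must be a positive
    // multiple of width * wrap, or verification will fail only after
    // expensive MR jobs have already run.
    static void checkNodeCount(long numNodes, long width, long wrap) {
        long segment = width * wrap;
        if (segment <= 0 || numNodes % segment != 0) {
            throw new IllegalArgumentException("numNodes=" + numNodes
                + " must be a positive multiple of width*wrap=" + segment
                + "; adjust numNodes, width, or wrap");
        }
    }

    public static void main(String[] args) {
        checkNodeCount(25_000_000L, 1_000_000L, 25L); // ok: exact multiple
        try {
            checkNodeCount(25_000_001L, 1_000_000L, 25L);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Throwing from the argument-parsing step keeps the failure cheap: no cluster resources are consumed before the invariant is verified.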





[jira] [Updated] (HBASE-16562) ITBLL should fail to start if misconfigured

2016-09-06 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16562:
--
Attachment: HBASE-16562.v1.patch

Addressed [~apurtell]'s comments; will commit if no other objections.

> ITBLL should fail to start if misconfigured
> ---
>
> Key: HBASE-16562
> URL: https://issues.apache.org/jira/browse/HBASE-16562
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Andrew Purtell
>Assignee: Heng Chen
> Attachments: HBASE-16562.patch, HBASE-16562.v1.patch
>
>
> The number of nodes in ITBLL must be a multiple of width*wrap (defaults to 25M,
> but can be configured by adding two more args to the test invocation) or else
> verification will fail. This can be very expensive in terms of time or hourly
> billing for on-demand test resources. Check the sanity of test parameters
> before launching any MR jobs and fail fast if invariants aren't met, with an
> indication of what parameter(s) need fixing.





[jira] [Commented] (HBASE-16447) Replication by namespaces config in peer

2016-09-06 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469382#comment-15469382
 ] 

Guanghao Zhang commented on HBASE-16447:


[~enis] [~tedyu] Any ideas about v3 patch?

> Replication by namespaces config in peer
> 
>
> Key: HBASE-16447
> URL: https://issues.apache.org/jira/browse/HBASE-16447
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-16447-v1.patch, HBASE-16447-v2.patch, 
> HBASE-16447-v3.patch
>
>
> Now we only config table cfs in a peer. But in our production cluster, there
> are a dozen namespaces and every namespace has dozens of tables. It is
> complicated to config all table cfs in the peer. Some namespaces need to
> replicate all their tables to the slave cluster. It will be easy to config if
> we support replication by namespace. Suggestions and discussions are welcomed.
> Review board: https://reviews.apache.org/r/51521/





[jira] [Commented] (HBASE-16148) Hybrid Logical Clocks(placeholder for running tests)

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469375#comment-15469375
 ] 

Hadoop QA commented on HBASE-16148:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 28 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
34s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 35s {color} 
| {color:red} hbase-server-jdk1.7.0_111 with JDK v1.7.0_111 generated 2 new + 4 
unchanged - 2 fixed = 6 total (was 6) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 8s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 19s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 48s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 100m 54s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense 

[jira] [Commented] (HBASE-14882) Provide a Put API that adds the provided family, qualifier, value without copying

2016-09-06 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469330#comment-15469330
 ] 

Xiang Li commented on HBASE-14882:
--

[~anoop.hbase], what about "ImmutableKeyFieldCell" or 
"CellWithImmutableKeyFields" or "SeparateKeyFieldCell" ?

> Provide a Put API that adds the provided family, qualifier, value without 
> copying
> -
>
> Key: HBASE-14882
> URL: https://issues.apache.org/jira/browse/HBASE-14882
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: Xiang Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14882.master.000.patch, 
> HBASE-14882.master.001.patch
>
>
> In the Put API, we have addImmutable()
> {code}
>  /**
>* See {@link #addColumn(byte[], byte[], byte[])}. This version expects
>* that the underlying arrays won't change. It's intended
>* for internal usage by HBase and for advanced client applications.
>*/
>   public Put addImmutable(byte [] family, byte [] qualifier, byte [] value)
> {code}
> But in the implementation, the family, qualifier and value are still being 
> copied locally to create kv.
> Hopefully we should provide an API that truly uses immutable family, 
> qualifier and value.
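The gap the description points at is copy-on-add versus reference-keeping; a tiny illustrative sketch (not HBase's actual KeyValue/Put internals): a truly "immutable" add would retain the caller's arrays rather than copy them.

```java
public class RefCellDemo {
    // Illustrative holder that keeps references instead of copying.
    // The caller promises not to mutate the arrays afterwards -- the same
    // contract the addImmutable javadoc describes.
    static final class RefCell {
        final byte[] family;
        final byte[] qualifier;
        final byte[] value;

        RefCell(byte[] family, byte[] qualifier, byte[] value) {
            this.family = family;       // no Arrays.copyOf here
            this.qualifier = qualifier;
            this.value = value;
        }
    }

    public static void main(String[] args) {
        byte[] fam = {'f'};
        RefCell cell = new RefCell(fam, new byte[]{'q'}, new byte[]{'v'});
        System.out.println(cell.family == fam); // true: same array, no copy made
    }
}
```

The current implementation fails this reference-identity test because it copies into a new kv, which is exactly what the issue asks to avoid.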





[jira] [Commented] (HBASE-16505) Add AsyncRegion interface to pass deadline and support async operations

2016-09-06 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469288#comment-15469288
 ] 

Yu Li commented on HBASE-16505:
---

patch v6 lgtm, +1

> Add AsyncRegion interface to pass deadline and support async operations
> ---
>
> Key: HBASE-16505
> URL: https://issues.apache.org/jira/browse/HBASE-16505
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16505-v1.patch, HBASE-16505-v2.patch, 
> HBASE-16505-v3.patch, HBASE-16505-v4.patch, HBASE-16505-v5.patch, 
> HBASE-16505-v6.patch
>
>
> If we want to know the correct timeout setting in the read/write path, we need 
> to add a new parameter to the operation methods of Region.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16530) Reduce DBE code duplication

2016-09-06 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469256#comment-15469256
 ] 

binlijin commented on HBASE-16530:
--

How about now?

> Reduce DBE code duplication
> ---
>
> Key: HBASE-16530
> URL: https://issues.apache.org/jira/browse/HBASE-16530
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16530-master_V1.patch, 
> HBASE-16530-master_V2.patch, HBASE-16530-master_V3.patch, 
> HBASE-16530-master_V4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16530) Reduce DBE code duplication

2016-09-06 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16530:
-
Attachment: HBASE-16530-master_V4.patch

> Reduce DBE code duplication
> ---
>
> Key: HBASE-16530
> URL: https://issues.apache.org/jira/browse/HBASE-16530
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16530-master_V1.patch, 
> HBASE-16530-master_V2.patch, HBASE-16530-master_V3.patch, 
> HBASE-16530-master_V4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15968) MVCC-sensitive semantics of versions

2016-09-06 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469234#comment-15469234
 ] 

Duo Zhang commented on HBASE-15968:
---

How do you deal with visibility labels? There is a special delete tracker for 
them, and I haven't seen related code in the {{MvccSensitiveTracker}}.

Thanks.

> MVCC-sensitive semantics of versions
> 
>
> Key: HBASE-15968
> URL: https://issues.apache.org/jira/browse/HBASE-15968
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15968-v1.patch
>
>
> In HBase book, we have a section in Versions called "Current Limitations" see 
> http://hbase.apache.org/book.html#_current_limitations
> {quote}
> 28.3. Current Limitations
> 28.3.1. Deletes mask Puts
> Deletes mask puts, even puts that happened after the delete was entered. See 
> HBASE-2256. Remember that a delete writes a tombstone, which only disappears 
> after the next major compaction has run. Suppose you do a delete of 
> everything ⇐ T. After this you do a new put with a timestamp ⇐ T. This put, 
> even if it happened after the delete, will be masked by the delete tombstone. 
> Performing the put will not fail, but when you do a get you will notice the 
> put had no effect. It will start working again after the major 
> compaction has run. These issues should not be a problem if you use 
> always-increasing versions for new puts to a row. But they can occur even if 
> you do not care about time: just do delete and put immediately after each 
> other, and there is some chance they happen within the same millisecond.
> 28.3.2. Major compactions change query results
> …​create three cell versions at t1, t2 and t3, with a maximum-versions 
> setting of 2. So when getting all versions, only the values at t2 and t3 will 
> be returned. But if you delete the version at t2 or t3, the one at t1 will 
> appear again. Obviously, once a major compaction has run, such behavior will 
> not be the case anymore…​ (See Garbage Collection in Bending time in HBase.)
> {quote}
> These limitations result from the current implementation of multi-versions: 
> we only consider the timestamp, no matter when the write arrives, and we do 
> not remove an old version immediately even when there are enough newer versions. 
> So we can get a stronger semantics of versions by two guarantees:
> 1. A Delete will not mask a Put that comes after it.
> 2. If a version is masked by enough higher versions (VERSIONS in the 
> CF's conf), it will never be seen any more.
> Some examples for understanding:
> (delete t<=3 means use Delete.addColumns to delete all versions whose ts is 
> not greater than 3, and delete t3 means use Delete.addColumn to delete the 
> version whose ts=3)
> case 1: put t2 -> put t3 -> delete t<=3 -> put t1, and we will get t1 because 
> the put is after delete.
> case 2: maxversion=2, put t1 -> put t2 -> put t3 -> delete t3, and we will 
> always get t2 no matter if there is a major compaction, because t1 is masked 
> when we put t3 so t1 will never be seen.
> case 3: maxversion=2, put t1 -> put t2 -> put t3 -> delete t2 -> delete t3, 
> and we will get nothing.
> case 4: maxversion=3, put t1 -> put t2 -> put t3 -> delete t2 -> delete t3, 
> and we will get t1 because it is not masked.
> case 5: maxversion=2, put t1 -> put t2 -> put t3 -> delete t3 -> put t1, and 
> we can get t2+t1 because when we put t1 the second time it is the 2nd latest 
> version and it can be read.
> case 6: maxversion=2, put t3 -> put t2 -> put t1, and we will get t3+t2 just 
> like what we can get now; ts is still the key of versions.
> Different VERSIONS settings may produce different results even if the size of 
> the result is smaller than VERSIONS (see cases 3 and 4). So 
> Get/Scan.setMaxVersions will be handled at the end, after we read the correct 
> data according to the CF's VERSIONS setting.
> The semantics differ from current HBase, and we may need more logic to 
> support them, so the feature is configurable and disabled by default.
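The two guarantees can be checked against a toy model that reproduces cases 1-4 and 6 above. This is an illustrative sketch only, not HBase code: operations are applied in arrival (mvcc) order, a version pushed beyond VERSIONS by newer puts is dropped permanently, and deletes only affect versions already present.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the proposed MVCC-sensitive version semantics (not HBase code).
public class MvccVersionsModel {
    private final int maxVersions;
    private final List<Long> live = new ArrayList<>(); // visible timestamps, newest first

    MvccVersionsModel(int maxVersions) { this.maxVersions = maxVersions; }

    void put(long ts) {
        int i = 0;
        while (i < live.size() && live.get(i) > ts) i++;
        if (!live.contains(ts)) live.add(i, ts);
        // Guarantee 2: a version masked by VERSIONS newer ones is gone forever.
        while (live.size() > maxVersions) live.remove(live.size() - 1);
    }

    void deleteVersion(long ts) { live.remove(Long.valueOf(ts)); } // Delete.addColumn
    void deleteUpTo(long ts) { live.removeIf(t -> t <= ts); }      // Delete.addColumns

    List<Long> get() { return new ArrayList<>(live); }

    public static void main(String[] args) {
        // case 1: put t2 -> put t3 -> delete t<=3 -> put t1 => t1 survives,
        // because the put comes after the delete (guarantee 1).
        MvccVersionsModel c1 = new MvccVersionsModel(2);
        c1.put(2); c1.put(3); c1.deleteUpTo(3); c1.put(1);
        System.out.println(c1.get()); // [1]
        // case 2: maxversion=2, put t1,t2,t3 -> delete t3 => always t2
        // (t1 was permanently masked when t3 arrived).
        MvccVersionsModel c2 = new MvccVersionsModel(2);
        c2.put(1); c2.put(2); c2.put(3); c2.deleteVersion(3);
        System.out.println(c2.get()); // [2]
        // cases 3 and 4: same ops, but VERSIONS=2 vs VERSIONS=3 differ.
        MvccVersionsModel c3 = new MvccVersionsModel(2);
        c3.put(1); c3.put(2); c3.put(3); c3.deleteVersion(2); c3.deleteVersion(3);
        System.out.println(c3.get()); // []
        MvccVersionsModel c4 = new MvccVersionsModel(3);
        c4.put(1); c4.put(2); c4.put(3); c4.deleteVersion(2); c4.deleteVersion(3);
        System.out.println(c4.get()); // [1]
        // case 6: maxversion=2, put t3 -> put t2 -> put t1 => t3+t2,
        // ts is still the key of versions.
        MvccVersionsModel c6 = new MvccVersionsModel(2);
        c6.put(3); c6.put(2); c6.put(1);
        System.out.println(c6.get()); // [3, 2]
    }
}
```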



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16414) Improve performance for RPC encryption with Apache Common Crypto

2016-09-06 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HBASE-16414:
-
Attachment: HBASE-16414.003.patch

Fixed the license problem and the unit test problems.

> Improve performance for RPC encryption with Apache Common Crypto
> 
>
> Key: HBASE-16414
> URL: https://issues.apache.org/jira/browse/HBASE-16414
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HBASE-16414.001.patch, HBASE-16414.002.patch, 
> HBASE-16414.003.patch, HbaseRpcEncryptionWithCrypoto.docx
>
>
> HBase RPC encryption is enabled by setting “hbase.rpc.protection” to 
> "privacy". With token authentication, it uses the DIGEST-MD5 mechanism for 
> secure authentication and data protection. DIGEST-MD5 uses DES, 3DES or RC4 
> to do encryption, which is very slow, especially for Scan. This becomes the 
> bottleneck of RPC throughput.
> Apache Commons Crypto is a cryptographic library optimized with AES-NI. It 
> provides a Java API at both the cipher level and the Java stream level. 
> Developers can use it to implement high-performance AES 
> encryption/decryption with minimal code and effort. Compared with the 
> current implementation of org.apache.hadoop.hbase.io.crypto.aes.AES, Crypto 
> supports both the JCE Cipher and the OpenSSL Cipher, which performs better 
> than the JCE Cipher. Users can configure the cipher type; the default is the 
> JCE Cipher.
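As a concrete illustration of cipher-level AES, here is a minimal encrypt/decrypt round trip. This is a sketch only: it uses the stock JCE `javax.crypto` API rather than Commons Crypto, whose cipher-level API is analogous but can be backed by OpenSSL/AES-NI for the throughput gains described above. The fixed key and IV are demo values; a real RPC connection would negotiate them.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Cipher-level AES/CTR round trip (JCE shown for illustration; Commons
// Crypto offers a similar cipher-level API with an OpenSSL backend).
public class AesCtrSketch {
    static byte[] roundTrip(byte[] plain) throws Exception {
        byte[] key = new byte[16]; // demo key/IV; negotiated per connection in practice
        byte[] iv = new byte[16];
        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] wire = enc.doFinal(plain); // what would go on the wire
        Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return dec.doFinal(wire);
    }

    public static void main(String[] args) throws Exception {
        byte[] plain = "rpc payload".getBytes(StandardCharsets.UTF_8);
        System.out.println(Arrays.equals(plain, roundTrip(plain))); // true
    }
}
```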



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16567) Upgrade to protobuf3

2016-09-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469159#comment-15469159
 ] 

stack commented on HBASE-16567:
---

Just a note to say that upgrading our pb to pb3 w/o rerunning protoc3, we do 
pretty well but fail reading trailers on hfiles: we can't find CellComparator. 
TODO: check why.

{code}
2016-09-06 18:06:42,856 ERROR [RS_OPEN_META-localhost:55129-0] 
handler.OpenRegionHandler: Failed open of region=hbase:meta,,1.1588230740, 
starting to roll back the global memstore size.
java.io.IOException: java.io.IOException: 
org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile 
Trailer from file 
file:/var/folders/cj/jgfy62h13vz019xgz681df_rgp/T/hbase-stack/hbase/data/hbase/meta/1588230740/info/47a6d1f4adb24f6e92de8dbe9e9c144f
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:831)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:732)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:705)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4750)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4721)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4693)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4649)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4600)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:276)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:103)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: 
org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile 
Trailer from file 
file:/var/folders/cj/jgfy62h13vz019xgz681df_rgp/T/hbase-stack/hbase/data/hbase/meta/1588230740/info/47a6d1f4adb24f6e92de8dbe9e9c144f
at 
org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:544)
at 
org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:499)
at org.apache.hadoop.hbase.regionserver.HStore.(HStore.java:267)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3652)
at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:805)
at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:802)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
reading HFile Trailer from file 
file:/var/folders/cj/jgfy62h13vz019xgz681df_rgp/T/hbase-stack/hbase/data/hbase/meta/1588230740/info/47a6d1f4adb24f6e92de8dbe9e9c144f
at 
org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:477)
at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:505)
at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1033)
at 
org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:241)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:365)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:462)
at 
org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:629)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:123)
at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
... 6 more
Caused by: java.io.IOException: java.lang.ClassNotFoundException: 
org.apache.hadoop.hbase.CellComparator$MetaCellComparator
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:581)
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserializeFromPB(FixedFileTrailer.java:300)
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserialize(FixedFileTrailer.java:242)
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:407)
at 
org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:462)
... 15 more
Caused by: java.lang.ClassNotFoundException: 

[jira] [Commented] (HBASE-16465) Disable region splits and merges, balancer during full backup

2016-09-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469132#comment-15469132
 ] 

Enis Soztutar commented on HBASE-16465:
---

bq. Who would be restoring the balancer setting ?
We can use the new maintenance mode introduced in HBASE-16008 for this. We can 
create an ephemeral znode while the backup process is running, so that region 
operations like split / merge and balancing are disabled. See 
HBaseFSCK#setMasterInMaintenanceMode(). [~syuanjiang] what do you think? 

> Disable region splits and merges, balancer during full backup
> -
>
> Key: HBASE-16465
> URL: https://issues.apache.org/jira/browse/HBASE-16465
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Attachments: HBASE-16465-v1.patch, HBASE-16465-v2.patch, 
> HBASE-16465-v3.patch, HBASE-16465-v4.patch, HBASE-16465-v5.patch
>
>
> Incorporate HBASE-15128
> Balancer, catalog janitor and region normalizer should be disabled as well 
> during full backup



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16568) Remove Cygwin-oriented instructions (for installing HBase in Windows OS) from official reference materials

2016-09-06 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469118#comment-15469118
 ] 

Dima Spivak commented on HBASE-16568:
-

Sounds good to me, [~daniel_vimont]. Get a patch up here and I'll be happy to 
commit it for you. 

> Remove Cygwin-oriented instructions (for installing HBase in Windows OS) from 
> official reference materials
> --
>
> Key: HBASE-16568
> URL: https://issues.apache.org/jira/browse/HBASE-16568
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
>
> Cygwin-oriented instructions in the official reference materials (for 
> installing HBase in a Windows environment) seem to be out of date and 
> incorrect; a number of unresolved/unresolvable requests for help have been 
> posted to d...@hbase.org and u...@hbase.org mailing lists.
> Discussions on d...@apache.org and HBase Slack channel resulted in (1) no 
> volunteers to update/maintain the Cygwin-oriented instructions, and (2) 
> several "+" votes and no "-" votes on the suggestion of removing 
> Cygwin-oriented instructions from the official reference materials.
> FUTURE POSSIBLE FOLLOW-UP: For the sake of setting up a 
> development/testing/sandbox environment in Windows, it might be helpful to 
> recommend the installation of a virtual machine environment (e.g. VirtualBox) 
> in a Windows OS, followed by the installation of an appropriate flavor of 
> Linux (e.g., Ubuntu) in the VM. After this, all the standard HBase 
> installation/config/usage instructions can be followed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15984) Given failure to parse a given WAL that was closed cleanly, replay the WAL.

2016-09-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469107#comment-15469107
 ] 

Sean Busbey commented on HBASE-15984:
-

bump?

> Given failure to parse a given WAL that was closed cleanly, replay the WAL.
> ---
>
> Key: HBASE-15984
> URL: https://issues.apache.org/jira/browse/HBASE-15984
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.3.1, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-15984.1.patch, HBASE-15984.2.patch
>
>
> Subtask for a general workaround for "underlying reader failed / is in a bad 
> state", just for the case where a WAL 1) was closed cleanly and 2) we can tell 
> that our current offset ought not to be the end of parseable entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16568) Remove Cygwin-oriented instructions (for installing HBase in Windows OS) from official reference materials

2016-09-06 Thread Daniel Vimont (JIRA)
Daniel Vimont created HBASE-16568:
-

 Summary: Remove Cygwin-oriented instructions (for installing HBase 
in Windows OS) from official reference materials
 Key: HBASE-16568
 URL: https://issues.apache.org/jira/browse/HBASE-16568
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.0.0
Reporter: Daniel Vimont
Assignee: Daniel Vimont
Priority: Minor


Cygwin-oriented instructions in the official reference materials (for 
installing HBase in a Windows environment) seem to be out of date and 
incorrect; a number of unresolved/unresolvable requests for help have been 
posted to d...@hbase.org and u...@hbase.org mailing lists.

Discussions on d...@apache.org and HBase Slack channel resulted in (1) no 
volunteers to update/maintain the Cygwin-oriented instructions, and (2) several 
"+" votes and no "-" votes on the suggestion of removing Cygwin-oriented 
instructions from the official reference materials.

FUTURE POSSIBLE FOLLOW-UP: For the sake of setting up a 
development/testing/sandbox environment in Windows, it might be helpful to 
recommend the installation of a virtual machine environment (e.g. VirtualBox) 
in a Windows OS, followed by the installation of an appropriate flavor of Linux 
(e.g., Ubuntu) in the VM. After this, all the standard HBase 
installation/config/usage instructions can be followed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16445) Refactor and reimplement RpcClient

2016-09-06 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469055#comment-15469055
 ] 

Duo Zhang commented on HBASE-16445:
---

The {{TestScannerHeartbeatMessages}} test is not stable; it has too many magic 
sleeps. This is a known problem. Will file another jira to rewrite it; maybe we 
could use {{EnvironmentEdgeManager}} to make it more stable.

Thanks.

> Refactor and reimplement RpcClient
> --
>
> Key: HBASE-16445
> URL: https://issues.apache.org/jira/browse/HBASE-16445
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16445-v1.patch, HBASE-16445-v2.patch, 
> HBASE-16445-v3.patch, HBASE-16445-v4.patch, HBASE-16445-v5.patch, 
> HBASE-16445-v6.patch, HBASE-16445.patch
>
>
> There is a lot of common logic between RpcClientImpl and AsyncRpcClient. We 
> should end up with much less code compared to the current implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16148) Hybrid Logical Clocks(placeholder for running tests)

2016-09-06 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16148:

Attachment: HLC.10.5.patch

Bugs found in TTL, timetopurge removed. 
Parameterized other failed tests, changed to include HLC clock, System 
Monotonic. 

> Hybrid Logical Clocks(placeholder for running tests)
> 
>
> Key: HBASE-16148
> URL: https://issues.apache.org/jira/browse/HBASE-16148
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: test-patch
> Attachments: HBASE-16148.master.001.patch, 
> HBASE-16148.master.002.patch, HBASE-16148.master.003.patch, 
> HBASE-16148.master.004.patch, HBASE-16148.master.6.patch, 
> HBASE-16148.master.test.1.patch, HBASE-16148.master.test.2.patch, 
> HBASE-16148.master.test.3.patch, HBASE-16148.master.test.4.patch, 
> HBASE-16148.master.test.5.patch, HLC.1.patch, HLC.10.1.patch, HLC.10.2.patch, 
> HLC.10.3.patch, HLC.10.4.patch, HLC.10.5.patch, HLC.10.patch, HLC.2.patch, 
> HLC.3.patch, HLC.4.patch, HLC.5.patch, HLC.6.patch, HLC.8.patch, HLC.9.patch, 
> HLC.patch
>
>
> This JIRA is just a placeholder to test Hybrid Logical Clocks code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16567) Upgrade to protobuf3

2016-09-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16567:
--
Status: Patch Available  (was: Open)

> Upgrade to protobuf3
> 
>
> Key: HBASE-16567
> URL: https://issues.apache.org/jira/browse/HBASE-16567
> Project: HBase
>  Issue Type: Task
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16567.master.001.patch
>
>
> Move the master branch on to protobuf3. See 
> https://github.com/google/protobuf/releases. We'd do it because pb3 saves some 
> byte copies and can work with offheap buffers -- needed for the off-heap write 
> path project -- though read-time is still a TODO.
> HBASE-15638 has us first shading protobufs before upgrading. Let us list here 
> issues just going to pb3 without shading if only for completeness sake; i.e. 
> do we have to shade?
>  * pb3 is by default wire compatible with pb2.
>  * protoc3 run against our .protos works fine except pb3 breaks our 
> HBaseZeroCopyLiteralByteString hack.
>  * Starting up a cluster that is all pb3'd seems to work fine.
>  * A pb2 branch-1 can read and write against the pb3 master cluster.
> What will break if we just upgrade to pb3?
>  * We should be able to write HDFS messages on our AsyncWAL using pb3; the 
> pb2 HDFS should be able to read them (not tested). Or maybe not. See policy 
> here: https://github.com/google/protobuf/issues/1852
>  * Core Coprocessor Endpoints such as AccessControl seem to just work (their 
> protos will have been protoc3'd). I did simple test with a server from master 
> branch up on pb3 and then going against it with a branch-1 client on pb2. I 
> was able to add grants.
>  * For non-core CPEPs where the protos are pb2 still, it might just work. To 
> test. It would not be the end-of-the-world if they did not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16567) Upgrade to protobuf3

2016-09-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468971#comment-15468971
 ] 

stack commented on HBASE-16567:
---

Attached patch is pretty much the same as a patch that [~ram_krish] and 
[~anoop.hbase] have been slinging around for a while, only in this case 
ByteStringer and HBaseZeroCopyByteString are removed. Posting to see what fails 
in our test run.

> Upgrade to protobuf3
> 
>
> Key: HBASE-16567
> URL: https://issues.apache.org/jira/browse/HBASE-16567
> Project: HBase
>  Issue Type: Task
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16567.master.001.patch
>
>
> Move the master branch on to protobuf3. See 
> https://github.com/google/protobuf/releases. We'd do it because pb3 saves some 
> byte copies and can work with offheap buffers -- needed for the off-heap write 
> path project -- though read-time is still a TODO.
> HBASE-15638 has us first shading protobufs before upgrading. Let us list here 
> issues just going to pb3 without shading if only for completeness sake; i.e. 
> do we have to shade?
>  * pb3 is by default wire compatible with pb2.
>  * protoc3 run against our .protos works fine except pb3 breaks our 
> HBaseZeroCopyLiteralByteString hack.
>  * Starting up a cluster that is all pb3'd seems to work fine.
>  * A pb2 branch-1 can read and write against the pb3 master cluster.
> What will break if we just upgrade to pb3?
>  * We should be able to write HDFS messages on our AsyncWAL using pb3; the 
> pb2 HDFS should be able to read them (not tested). Or maybe not. See policy 
> here: https://github.com/google/protobuf/issues/1852
>  * Core Coprocessor Endpoints such as AccessControl seem to just work (their 
> protos will have been protoc3'd). I did simple test with a server from master 
> branch up on pb3 and then going against it with a branch-1 client on pb2. I 
> was able to add grants.
>  * For non-core CPEPs where the protos are pb2 still, it might just work. To 
> test. It would not be the end-of-the-world if they did not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16567) Upgrade to protobuf3

2016-09-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16567:
--
Attachment: HBASE-16567.master.001.patch

> Upgrade to protobuf3
> 
>
> Key: HBASE-16567
> URL: https://issues.apache.org/jira/browse/HBASE-16567
> Project: HBase
>  Issue Type: Task
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16567.master.001.patch
>
>
> Move the master branch on to protobuf3. See 
> https://github.com/google/protobuf/releases. We'd do it because pb3 saves some 
> byte copies and can work with offheap buffers -- needed for the off-heap write 
> path project -- though read-time is still a TODO.
> HBASE-15638 has us first shading protobufs before upgrading. Let us list here 
> issues just going to pb3 without shading if only for completeness sake; i.e. 
> do we have to shade?
>  * pb3 is by default wire compatible with pb2.
>  * protoc3 run against our .protos works fine except pb3 breaks our 
> HBaseZeroCopyLiteralByteString hack.
>  * Starting up a cluster that is all pb3'd seems to work fine.
>  * A pb2 branch-1 can read and write against the pb3 master cluster.
> What will break if we just upgrade to pb3?
>  * We should be able to write HDFS messages on our AsyncWAL using pb3; the 
> pb2 HDFS should be able to read them (not tested). Or maybe not. See policy 
> here: https://github.com/google/protobuf/issues/1852
>  * Core Coprocessor Endpoints such as AccessControl seem to just work (their 
> protos will have been protoc3'd). I did simple test with a server from master 
> branch up on pb3 and then going against it with a branch-1 client on pb2. I 
> was able to add grants.
>  * For non-core CPEPs where the protos are pb2 still, it might just work. To 
> test. It would not be the end-of-the-world if they did not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16524) Clean procedure wal periodically instead of on every sync

2016-09-06 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-16524:
-
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-14350

> Clean procedure wal periodically instead of on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-16524.master.001.patch, flame1.svg
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16567) Upgrade to protobuf3

2016-09-06 Thread stack (JIRA)
stack created HBASE-16567:
-

 Summary: Upgrade to protobuf3
 Key: HBASE-16567
 URL: https://issues.apache.org/jira/browse/HBASE-16567
 Project: HBase
  Issue Type: Task
  Components: Protobufs
Affects Versions: 2.0.0
Reporter: stack
Assignee: stack
Priority: Critical


Move the master branch on to protobuf3. See 
https://github.com/google/protobuf/releases. We'd do it because pb3 saves some 
byte copies and can work with offheap buffers -- needed for the off-heap write 
path project -- though read-time is still a TODO.

HBASE-15638 has us first shading protobufs before upgrading. Let us list here 
issues just going to pb3 without shading if only for completeness sake; i.e. do 
we have to shade?

 * pb3 is by default wire compatible with pb2.
 * protoc3 run against our .protos works fine except pb3 breaks our 
HBaseZeroCopyLiteralByteString hack.
 * Starting up a cluster that is all pb3'd seems to work fine.
 * A pb2 branch-1 can read and write against the pb3 master cluster.

What will break if we just upgrade to pb3?

 * We should be able to write HDFS messages on our AsyncWAL using pb3; the pb2 
HDFS should be able to read them (not tested). Or maybe not. See policy here: 
https://github.com/google/protobuf/issues/1852
 * Core Coprocessor Endpoints such as AccessControl seem to just work (their 
protos will have been protoc3'd). I did simple test with a server from master 
branch up on pb3 and then going against it with a branch-1 client on pb2. I was 
able to add grants.
 * For non-core CPEPs where the protos are pb2 still, it might just work. To 
test. It would not be the end-of-the-world if they did not.
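The "wire compatible" bullet can be made concrete with a hypothetical schema fragment (not an actual HBase .proto): the wire encoding is determined by field numbers and scalar types, so flipping the syntax line leaves the bytes on the wire unchanged for fields like these, which is why a pb2 branch-1 client can talk to a pb3 master.

```proto
// Hypothetical illustration only. Same field numbers + same scalar types
// => same wire encoding under proto2 and proto3.
syntax = "proto3";           // was: syntax = "proto2";

message GetRequest {
  bytes row = 1;             // proto2: optional bytes row = 1;
  uint32 max_versions = 2;   // proto2: optional uint32 max_versions = 2;
}
```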







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16524) Clean procedure wal periodically instead of on every sync

2016-09-06 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468894#comment-15468894
 ] 

Matteo Bertozzi commented on HBASE-16524:
-

Instead of scanning all the wals every time, we can rely on the 
insert/update/delete events we have. And since we want to delete the wals in 
order, we can keep track of what is "holding" each wal, and only take the hit 
of scanning all the trackers when we remove the first log in the queue. 

e.g.
WAL-1 [1, 2] 
WAL-2 [1] -> "[2] is holding WAL-1"
WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"
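Assuming each WAL records the set of procedure ids it updated, the keep/delete decision discussed on this issue can be modeled in a few lines. This is an illustrative sketch, not ProcedureStore code: walk logs newest-to-oldest and keep a log only if it holds the newest update of some procedure the tracker copy still considers live.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy sketch (not ProcedureStore code) of the WAL-retention logic: a log is
// "held" by the procs whose newest update it contains.
public class ProcWalCleanerSketch {
    static List<Set<Long>> keptLogs(List<Set<Long>> logsNewestFirst, Set<Long> liveProcs) {
        Set<Long> t = new HashSet<>(liveProcs); // copy of the global tracker
        List<Set<Long>> keep = new ArrayList<>();
        for (Set<Long> log : logsNewestFirst) { // iterate newest to oldest
            boolean holds = false;
            for (long p : log) {
                // Remove P from T so older logs aren't held because of it.
                if (t.remove(p)) holds = true;
            }
            if (holds) keep.add(log);
        }
        return keep;
    }

    public static void main(String[] args) {
        // Matches the example above: WAL-3 updates proc 2, WAL-2 updates
        // proc 1, WAL-1 (oldest) updates procs 1 and 2.
        List<Set<Long>> logs = Arrays.asList(
            new HashSet<>(Arrays.asList(2L)),       // WAL-3 (newest)
            new HashSet<>(Arrays.asList(1L)),       // WAL-2
            new HashSet<>(Arrays.asList(1L, 2L)));  // WAL-1 (oldest)
        // With both procs live, WAL-3 and WAL-2 are kept; WAL-1 can be removed.
        System.out.println(keptLogs(logs, new HashSet<>(Arrays.asList(1L, 2L))).size()); // 2
    }
}
```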

> Clean procedure wal periodically instead of on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-16524.master.001.patch, flame1.svg
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16524) Clean procedure wal periodically instead of on every sync

2016-09-06 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468754#comment-15468754
 ] 

Appy edited comment on HBASE-16524 at 9/6/16 10:09 PM:
---

Thinking about it, that should be easy to do. 
In our current logic, we basically do this:
1. make a copy of current global tracker, say T.
2. Iterate over log files:
- keep the file if it contains an update to a proc P that is not marked 
deleted in T.
- Delete P from T so we don't hold another, older log file because of it.

The new steps would be:
1.  make a copy of current global tracker, say T.
2. *Mark all recently updated procs as deleted in T (so we don't hold logs 
because of these procs)*
3. (step 2 above in old logic) Iterate over log files:
- keep the file if it contains an update to a proc P that is not marked 
deleted in T.
- Delete P from T so we don't hold another, older log file because of it.
Sounds good?
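The steps above can be sketched with plain sets (an illustrative standalone version; the real tracker is a bitmap, and the class and parameter names here are made up):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative standalone version of the cleanup steps described above.
public class WalCleaner {
  /**
   * @param files           newest-first: the set of proc ids updated in each wal
   * @param live            procs not marked deleted in the global tracker
   * @param recentlyUpdated procs updated recently; marked deleted in the copy T
   *                        up front so they don't pin old logs (new step 2)
   * @return indexes of the files that must be kept
   */
  public static List<Integer> filesToKeep(List<Set<Long>> files,
      Set<Long> live, Set<Long> recentlyUpdated) {
    Set<Long> t = new HashSet<>(live);     // step 1: copy the global tracker
    t.removeAll(recentlyUpdated);          // step 2: mark recent procs deleted in T
    List<Integer> keep = new ArrayList<>();
    for (int i = 0; i < files.size(); i++) {  // step 3: scan log files
      Set<Long> procs = files.get(i);
      boolean needed = false;
      for (long p : procs) {
        if (t.contains(p)) {               // contains an update to a live proc P
          needed = true;
          break;
        }
      }
      if (needed) {
        keep.add(i);
      }
      t.removeAll(procs);  // delete P from T so it can't pin an older log too
    }
    return keep;
  }
}
```

With the new step 2, a proc that was updated recently no longer pins the older logs that also mention it.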


was (Author: appy):
Thinking about it, that should be easy to do. 
In our current logic, we basically do this:
1. Make a copy of the current global tracker, say T.
2. Iterate over the log files:
- keep the file if it contains an update to a proc P which is not marked 
deleted in T.
- delete P from T so we don't hold another older log file because of it.

We can simply add step 0, which marks all recently updated procs as deleted in 
T.
Sounds good?

> Clean procedure wal periodically instead of on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-16524.master.001.patch, flame1.svg
>
>






[jira] [Commented] (HBASE-16524) Clean procedure wal periodically instead of on every sync

2016-09-06 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468754#comment-15468754
 ] 

Appy commented on HBASE-16524:
--

Thinking about it, that should be easy to do. 
In our current logic, we basically do this:
1. Make a copy of the current global tracker, say T.
2. Iterate over the log files:
- keep the file if it contains an update to a proc P which is not marked 
deleted in T.
- delete P from T so we don't hold another older log file because of it.

We can simply add step 0, which marks all recently updated procs as deleted in 
T.
Sounds good?

> Clean procedure wal periodically instead of on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-16524.master.001.patch, flame1.svg
>
>






[jira] [Commented] (HBASE-16538) Version mismatch in HBaseConfiguration.checkDefaultsVersion

2016-09-06 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468574#comment-15468574
 ] 

Mikhail Antonov commented on HBASE-16538:
-

[~appy] thanks for the ping! Let me look and come back in a bit.

> Version mismatch in HBaseConfiguration.checkDefaultsVersion
> ---
>
> Key: HBASE-16538
> URL: https://issues.apache.org/jira/browse/HBASE-16538
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>  Labels: configuration, test-failure
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.2.3, 0.98.22, 1.1.7
>
> Attachments: HBASE-16538-addendum.patch, 
> HBASE-16538.master.001.patch, HBASE-16538.master.002.patch
>
>
> {noformat}
> org.apache.hadoop.hbase.procedure2.TestYieldProcedures
> testYieldEachExecutionStep(org.apache.hadoop.hbase.procedure2.TestYieldProcedures)
>   Time elapsed: 0.255 sec  <<< ERROR!
> java.lang.RuntimeException: hbase-default.xml file seems to be for an older 
> version of HBase (2.0.0-SNAPSHOT), this version is Unknown
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:73)
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:83)
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:98)
>   at 
> org.apache.hadoop.hbase.HBaseCommonTestingUtility.(HBaseCommonTestingUtility.java:46)
>   at 
> org.apache.hadoop.hbase.procedure2.TestYieldProcedures.setUp(TestYieldProcedures.java:63)
> {noformat}
> (Exact test is not important)
> Reference run:
> https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=JDK%201.8%20(latest),label=yahoo-not-h2/1515/console





[jira] [Commented] (HBASE-16538) Version mismatch in HBaseConfiguration.checkDefaultsVersion

2016-09-06 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468569#comment-15468569
 ] 

Appy commented on HBASE-16538:
--

Assigned to me.
[~mantonov] any verdict? Should I wait or is it good to go in branch-1.3?

> Version mismatch in HBaseConfiguration.checkDefaultsVersion
> ---
>
> Key: HBASE-16538
> URL: https://issues.apache.org/jira/browse/HBASE-16538
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>  Labels: configuration, test-failure
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.2.3, 0.98.22, 1.1.7
>
> Attachments: HBASE-16538-addendum.patch, 
> HBASE-16538.master.001.patch, HBASE-16538.master.002.patch
>
>
> {noformat}
> org.apache.hadoop.hbase.procedure2.TestYieldProcedures
> testYieldEachExecutionStep(org.apache.hadoop.hbase.procedure2.TestYieldProcedures)
>   Time elapsed: 0.255 sec  <<< ERROR!
> java.lang.RuntimeException: hbase-default.xml file seems to be for an older 
> version of HBase (2.0.0-SNAPSHOT), this version is Unknown
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:73)
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:83)
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:98)
>   at 
> org.apache.hadoop.hbase.HBaseCommonTestingUtility.(HBaseCommonTestingUtility.java:46)
>   at 
> org.apache.hadoop.hbase.procedure2.TestYieldProcedures.setUp(TestYieldProcedures.java:63)
> {noformat}
> (Exact test is not important)
> Reference run:
> https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=JDK%201.8%20(latest),label=yahoo-not-h2/1515/console





[jira] [Created] (HBASE-16566) Add nonce support to TableBackupProcedure

2016-09-06 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16566:
--

 Summary: Add nonce support to TableBackupProcedure
 Key: HBASE-16566
 URL: https://issues.apache.org/jira/browse/HBASE-16566
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu


We should pass in a nonce to avoid duplicate table backup RPCs (the same RPC 
sent to the server multiple times).

Duplicate table backup RPCs may happen due to master failover.
If there is no nonce, the same procedure may be executed more than once.
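A toy sketch of the nonce idea (names are made up; this is not HBase's actual nonce manager): a retried RPC carrying the same nonce maps back to the already-started procedure instead of starting a second one.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of nonce-based de-duplication. A duplicate submission
// (e.g. a client retry after master failover) reuses the existing procedure.
public class NonceRegistry {
  private final Map<Long, Long> nonceToProcId = new HashMap<>();
  private long nextProcId = 1;

  /** Returns the procedure id for this nonce, reusing it for duplicates. */
  public synchronized long submit(long nonce) {
    Long existing = nonceToProcId.get(nonce);
    if (existing != null) {
      return existing;   // duplicate RPC: do not start a second backup procedure
    }
    long procId = nextProcId++;
    nonceToProcId.put(nonce, procId);
    return procId;
  }
}
```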





[jira] [Updated] (HBASE-15565) Rewrite restore with Procedure V2

2016-09-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15565:
---
Attachment: 15565.v15.txt

> Rewrite restore with Procedure V2
> -
>
> Key: HBASE-15565
> URL: https://issues.apache.org/jira/browse/HBASE-15565
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15565-v1.txt, 15565.v10.txt, 15565.v11.txt, 
> 15565.v12.txt, 15565.v13.txt, 15565.v14.txt, 15565.v15.txt, 15565.v5.txt, 
> 15565.v8.txt, 15565.v9.txt
>
>
> Currently restore is driven by RestoreClientImpl#restore().
> This issue rewrites the flow using Procedure V2.
> RestoreTablesProcedure would replace RestoreClientImpl.
> Main logic would be driven by executeFromState() method.





[jira] [Commented] (HBASE-16465) Disable region splits and merges, balancer during full backup

2016-09-06 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468382#comment-15468382
 ] 

Vladimir Rodionov commented on HBASE-16465:
---

{quote}
The above check is in HBaseAdmin. What if the admin disconnects from cluster in 
the middle of backup ?
Who would be restoring the balancer setting 
{quote}

The admin can do this manually, for example.

> Disable region splits and merges, balancer during full backup
> -
>
> Key: HBASE-16465
> URL: https://issues.apache.org/jira/browse/HBASE-16465
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Attachments: HBASE-16465-v1.patch, HBASE-16465-v2.patch, 
> HBASE-16465-v3.patch, HBASE-16465-v4.patch, HBASE-16465-v5.patch
>
>
> Incorporate HBASE-15128
> Balancer, catalog janitor and region normalizer should be disabled as well 
> during full backup





[jira] [Updated] (HBASE-16554) Procedure V2 - handle corruption of WAL trailer

2016-09-06 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-16554:
-
Description: 
If the last wal was closed cleanly, the global tracker will be the last wal 
tracker (no rebuild needed)
if the last wal does not have a tracker (corrupted/master-killed). on load() we 
will rebuild the global tracker.

To compute quickly which files should be deleted, we also want the tracker of 
each file.
if the wal was closed properly and has a tracker we are good, if not we need to 
rebuild the tracker for that file.
each file tracker contains a bitmap about what is in the wal (the updated 
bitmap), which is easy to compute just by reading each entry of the wal. and a 
bitmap that keeps track of the "running procedures" up to that wal (the deleted 
bitmap). The delete bitmap requires a bit of post read-all-wals work. and it 
will basically require to AND the deleted bitmap of wal\(i\) and wal(i-1)

  was:
If the last wal was closed cleanly, the global tracker will be the last wal 
tracker (no rebuild needed)
if the last wal does not have a tracker (corrupted/master-killed). on load() we 
will rebuild the global tracker.

To compute quickly which files should be deleted, we also want the tracker of 
each file.
if the wal was closed properly and has a tracker we are good, if not we need to 
rebuild the tracker for that file.
each file tracker contains a bitmap about what is in the wal (the updated 
bitmap), which is easy to compute just by reading each entry of the wal. and a 
bitmap that keeps track of the "running procedures" up to that wal (the deleted 
bitmap). The delete bitmap requires a bit of post read-all-wals work. and it 
will basically require to AND the deleted bitmap of wal(i) and wal(i-1)
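The post read-all-wals step described above can be sketched with java.util.BitSet (an illustrative standalone version, not the patch's code): a procedure only counts as deleted at wal(i) if it is also deleted in every older wal.

```java
import java.util.BitSet;

// Illustrative version of the deleted-bitmap merge: AND the deleted bitmap of
// wal(i) with that of wal(i-1), applied in order from oldest to newest.
public class DeletedBitmapMerge {
  /** deleted[i] is the per-wal deleted bitmap, oldest first; merged in place. */
  public static void merge(BitSet[] deleted) {
    for (int i = 1; i < deleted.length; i++) {
      deleted[i].and(deleted[i - 1]);
    }
  }
}
```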


> Procedure V2 - handle corruption of WAL trailer
> ---
>
> Key: HBASE-16554
> URL: https://issues.apache.org/jira/browse/HBASE-16554
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Attachments: tracker-rebuild.patch
>
>
> If the last wal was closed cleanly, the global tracker will be the last wal 
> tracker (no rebuild needed)
> if the last wal does not have a tracker (corrupted/master-killed). on load() 
> we will rebuild the global tracker.
> To compute quickly which files should be deleted, we also want the tracker of 
> each file.
> if the wal was closed properly and has a tracker we are good, if not we need 
> to rebuild the tracker for that file.
> each file tracker contains a bitmap about what is in the wal (the updated 
> bitmap), which is easy to compute just by reading each entry of the wal. and 
> a bitmap that keeps track of the "running procedures" up to that wal (the 
> deleted bitmap). The delete bitmap requires a bit of post read-all-wals work. 
> and it will basically require to AND the deleted bitmap of wal\(i\) and 
> wal(i-1)





[jira] [Created] (HBASE-16565) Add metrics for backup / restore

2016-09-06 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16565:
--

 Summary: Add metrics for backup / restore
 Key: HBASE-16565
 URL: https://issues.apache.org/jira/browse/HBASE-16565
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu


Exposing metrics for backup / restore would give admins insight into the 
overall operations.

The metrics should include (but are not limited to):

* number of backups performed (full / incremental)
* number of restores performed (full / incremental)
* number of aborted backups
* number of aborted restores





[jira] [Commented] (HBASE-16465) Disable region splits and merges, balancer during full backup

2016-09-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468350#comment-15468350
 ] 

Ted Yu commented on HBASE-16465:


{code}
+
.setTargetRootDir(args[2]).setWorkers(workers).setBandwidth(bandwidth).setSplitsDisabled(splitsDisabled);
{code}
Wrap long line.
{code}
+  if (isFullBackup && balancerEnabled) {
{code}
The above check is in HBaseAdmin. What if the admin disconnects from cluster in 
the middle of backup ?
Who would be restoring the balancer setting ?



> Disable region splits and merges, balancer during full backup
> -
>
> Key: HBASE-16465
> URL: https://issues.apache.org/jira/browse/HBASE-16465
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Attachments: HBASE-16465-v1.patch, HBASE-16465-v2.patch, 
> HBASE-16465-v3.patch, HBASE-16465-v4.patch, HBASE-16465-v5.patch
>
>
> Incorporate HBASE-15128
> Balancer, catalog janitor and region normalizer should be disabled as well 
> during full backup





[jira] [Assigned] (HBASE-16538) Version mismatch in HBaseConfiguration.checkDefaultsVersion

2016-09-06 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy reassigned HBASE-16538:


Assignee: Appy

> Version mismatch in HBaseConfiguration.checkDefaultsVersion
> ---
>
> Key: HBASE-16538
> URL: https://issues.apache.org/jira/browse/HBASE-16538
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>  Labels: configuration, test-failure
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.2.3, 0.98.22, 1.1.7
>
> Attachments: HBASE-16538-addendum.patch, 
> HBASE-16538.master.001.patch, HBASE-16538.master.002.patch
>
>
> {noformat}
> org.apache.hadoop.hbase.procedure2.TestYieldProcedures
> testYieldEachExecutionStep(org.apache.hadoop.hbase.procedure2.TestYieldProcedures)
>   Time elapsed: 0.255 sec  <<< ERROR!
> java.lang.RuntimeException: hbase-default.xml file seems to be for an older 
> version of HBase (2.0.0-SNAPSHOT), this version is Unknown
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:73)
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:83)
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:98)
>   at 
> org.apache.hadoop.hbase.HBaseCommonTestingUtility.(HBaseCommonTestingUtility.java:46)
>   at 
> org.apache.hadoop.hbase.procedure2.TestYieldProcedures.setUp(TestYieldProcedures.java:63)
> {noformat}
> (Exact test is not important)
> Reference run:
> https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=JDK%201.8%20(latest),label=yahoo-not-h2/1515/console





[jira] [Commented] (HBASE-16465) Disable region splits and merges, balancer during full backup

2016-09-06 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468246#comment-15468246
 ] 

Vladimir Rodionov commented on HBASE-16465:
---

[~tedyu], can you take a look at the patch? 

> Disable region splits and merges, balancer during full backup
> -
>
> Key: HBASE-16465
> URL: https://issues.apache.org/jira/browse/HBASE-16465
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Attachments: HBASE-16465-v1.patch, HBASE-16465-v2.patch, 
> HBASE-16465-v3.patch, HBASE-16465-v4.patch, HBASE-16465-v5.patch
>
>
> Incorporate HBASE-15128
> Balancer, catalog janitor and region normalizer should be disabled as well 
> during full backup





[jira] [Updated] (HBASE-15565) Rewrite restore with Procedure V2

2016-09-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15565:
---
Attachment: 15565.v14.txt

> Rewrite restore with Procedure V2
> -
>
> Key: HBASE-15565
> URL: https://issues.apache.org/jira/browse/HBASE-15565
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15565-v1.txt, 15565.v10.txt, 15565.v11.txt, 
> 15565.v12.txt, 15565.v13.txt, 15565.v14.txt, 15565.v5.txt, 15565.v8.txt, 
> 15565.v9.txt
>
>
> Currently restore is driven by RestoreClientImpl#restore().
> This issue rewrites the flow using Procedure V2.
> RestoreTablesProcedure would replace RestoreClientImpl.
> Main logic would be driven by executeFromState() method.





[jira] [Commented] (HBASE-16562) ITBLL should fail to start if misconfigured

2016-09-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467936#comment-15467936
 ] 

Andrew Purtell commented on HBASE-16562:


Please add an extra help message here explaining why the parameter check failed
{code}
+
+long wrap = (long)width*wrapMultiplier;
+if (wrap < numNodes && numNodes % wrap != 0) {
> Help message
+  System.err.println(USAGE);
+  return 1;
+}
{code}

and then this is good to go, thanks [~chenheng]! 
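A sketch of the requested check (a hypothetical helper, not the actual patch): compute the invariant and return a human-readable reason when it fails, so the caller can print it before the USAGE text.

```java
// Hypothetical helper: validate the ITBLL invariant from the patch's condition
// (wrap < numNodes && numNodes % wrap != 0) and explain a failure.
public class ItbllArgCheck {
  /** Returns null when the parameters are sane, otherwise a failure reason. */
  public static String validate(long numNodes, long width, long wrapMultiplier) {
    long wrap = width * wrapMultiplier;
    if (wrap < numNodes && numNodes % wrap != 0) {
      return "numNodes (" + numNodes + ") must be a multiple of width * "
          + "wrapMultiplier (" + wrap + ")";
    }
    return null;   // invariant holds
  }
}
```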

> ITBLL should fail to start if misconfigured
> ---
>
> Key: HBASE-16562
> URL: https://issues.apache.org/jira/browse/HBASE-16562
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Andrew Purtell
>Assignee: Heng Chen
> Attachments: HBASE-16562.patch
>
>
> The number of nodes in ITBLL must be a multiple of width*wrap (defaults to 
> 25M, but can be configured by adding two more args to the test invocation) or 
> else verification will fail. This can be very expensive in terms of time or 
> hourly billing for on-demand test resources. Check the sanity of the test 
> parameters before launching any MR jobs, and fail fast if the invariants 
> aren't met, with an indication of what parameter(s) need fixing. 





[jira] [Updated] (HBASE-16562) ITBLL should fail to start if misconfigured

2016-09-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-16562:
---
Assignee: Heng Chen

> ITBLL should fail to start if misconfigured
> ---
>
> Key: HBASE-16562
> URL: https://issues.apache.org/jira/browse/HBASE-16562
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Andrew Purtell
>Assignee: Heng Chen
> Attachments: HBASE-16562.patch
>
>
> The number of nodes in ITBLL must be a multiple of width*wrap (defaults to 
> 25M, but can be configured by adding two more args to the test invocation) or 
> else verification will fail. This can be very expensive in terms of time or 
> hourly billing for on-demand test resources. Check the sanity of the test 
> parameters before launching any MR jobs, and fail fast if the invariants 
> aren't met, with an indication of what parameter(s) need fixing. 





[jira] [Commented] (HBASE-16345) RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer Exceptions

2016-09-06 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467863#comment-15467863
 ] 

huaxiang sun commented on HBASE-16345:
--

Hi [~enis], [~tedyu], any comments for version 3 patch? Thanks.

> RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer 
> Exceptions
> --
>
> Key: HBASE-16345
> URL: https://issues.apache.org/jira/browse/HBASE-16345
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-16345-v001.patch, HBASE-16345.master.001.patch, 
> HBASE-16345.master.002.patch, HBASE-16345.master.003.patch
>
>
> Update for the description. Debugged more on this front based on the 
> comments from Enis. 
> The cause is that for the primary replica, if its retry is exhausted too 
> fast, f.get() [1] returns an ExecutionException. This exception needs to be 
> ignored so that we continue with the replicas.
> The other issue is that after adding calls for the replicas, if the first 
> completed task gets an ExecutionException (due to retry exhaustion), it 
> throws the exception to the client [2].
> In this case, it needs to loop through these tasks, waiting for a successful 
> one. If none succeeds, throw the exception.
> Similar for the scan as well.
> [1] 
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L197
> [2] 
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L219
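The fix described above — wait for the first *successful* task rather than the first *completed* one — can be sketched with a CompletionService (an illustrative standalone version; the real change is inside RpcRetryingCallerWithReadReplicas):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;

// Illustrative sketch: ignore ExecutionExceptions until some replica call
// succeeds; only rethrow when every task has failed.
public class FirstSuccess {
  public static <T> T firstSuccessful(ExecutorService pool, List<Callable<T>> calls)
      throws Exception {
    CompletionService<T> cs = new ExecutorCompletionService<>(pool);
    for (Callable<T> c : calls) {
      cs.submit(c);
    }
    Exception last = null;
    for (int i = 0; i < calls.size(); i++) {
      try {
        return cs.take().get();   // first *successful* completion wins
      } catch (ExecutionException e) {
        last = e;                 // e.g. primary's retries exhausted: keep waiting
      }
    }
    throw last;                   // no replica succeeded
  }
}
```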





[jira] [Commented] (HBASE-15968) MVCC-sensitive semantics of versions

2016-09-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467816#comment-15467816
 ] 

Ted Yu commented on HBASE-15968:


{code}
+  public IncludeAllCompactionQueryMatcher(ScanInfo scanInfo,
{code}
Looks like compaction can be dropped from the class name.
{code}
+   * Note maxVersion and minVersion must set accourding to cf's conf, not 
user's scan parameter.
{code}
accourding -> according
{code}
+   * @param minVersionThe minimum number of versions to keep(used when 
TTL is set).
{code}
Add javadoc for the other parameters.

There are 3 NavigableMaps in MvccSensitiveTracker. Can any of them be 
dropped? There seems to be a correlation between delColMap and delFamMap.

TestMvccSensitiveSemanticsFromClientSide needs test category.

Please put the next version of patch on review board.

> MVCC-sensitive semantics of versions
> 
>
> Key: HBASE-15968
> URL: https://issues.apache.org/jira/browse/HBASE-15968
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15968-v1.patch
>
>
> In HBase book, we have a section in Versions called "Current Limitations" see 
> http://hbase.apache.org/book.html#_current_limitations
> {quote}
> 28.3. Current Limitations
> 28.3.1. Deletes mask Puts
> Deletes mask puts, even puts that happened after the delete was entered. See 
> HBASE-2256. Remember that a delete writes a tombstone, which only disappears 
> after then next major compaction has run. Suppose you do a delete of 
> everything ⇐ T. After this you do a new put with a timestamp ⇐ T. This put, 
> even if it happened after the delete, will be masked by the delete tombstone. 
> Performing the put will not fail, but when you do a get you will notice the 
> put did have no effect. It will start working again after the major 
> compaction has run. These issues should not be a problem if you use 
> always-increasing versions for new puts to a row. But they can occur even if 
> you do not care about time: just do delete and put immediately after each 
> other, and there is some chance they happen within the same millisecond.
> 28.3.2. Major compactions change query results
> …​create three cell versions at t1, t2 and t3, with a maximum-versions 
> setting of 2. So when getting all versions, only the values at t2 and t3 will 
> be returned. But if you delete the version at t2 or t3, the one at t1 will 
> appear again. Obviously, once a major compaction has run, such behavior will 
> not be the case anymore…​ (See Garbage Collection in Bending time in HBase.)
> {quote}
> These limitations result from the current implementation on multi-versions: 
> we only consider timestamp, no matter when it comes; we will not remove old 
> version immediately if there are enough number of new versions. 
> So we can get a stronger semantics of versions by two guarantees:
> 1, Delete will not mask Put that comes after it.
> 2, If a version is masked by enough number of higher versions (VERSIONS in 
> cf's conf), it will never be seen any more.
> Some examples for understanding:
> (delete t<=3 means use Delete.addColumns to delete all versions whose ts is 
> not greater than 3, and delete t3 means use Delete.addColumn to delete the 
> version whose ts=3)
> case 1: put t2 -> put t3 -> delete t<=3 -> put t1, and we will get t1 because 
> the put is after delete.
> case 2: maxversion=2, put t1 -> put t2 -> put t3 -> delete t3, and we will 
> always get t2 no matter if there is a major compaction, because t1 is masked 
> when we put t3 so t1 will never be seen.
> case 3: maxversion=2, put t1 -> put t2 -> put t3 -> delete t2 -> delete t3, 
> and we will get nothing.
> case 4: maxversion=3, put t1 -> put t2 -> put t3 -> delete t2 -> delete t3, 
> and we will get t1 because it is not masked.
> case 5: maxversion=2, put t1 -> put t2 -> put t3 -> delete t3 -> put t1, and 
> we can get t3+t1 because when we put t1 at second time it is the 2nd latest 
> version and it can be read.
> case 6:maxversion=2, put t3->put t2->put t1, and we will get t3+t2 just like 
> what we can get now, ts is still the key of versions.
> Different VERSIONS settings may produce different results even when the 
> result is smaller than VERSIONS (see cases 3 and 4). So Get/Scan.setMaxVersions 
> will be handled at the end, after we read the correct data according to the 
> CF's VERSIONS setting.
> These semantics differ from current HBase, and we may need more logic to 
> support them, so the feature is configurable and disabled by default.
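A toy model of the proposed semantics (illustrative only, covering cases 2-4; Delete.addColumns and real MVCC numbers are not modeled): a version is masked permanently at put time once VERSIONS newer puts exist, so a later delete cannot resurrect it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Toy model (not HBase code) of the MVCC-sensitive version semantics above.
public class MvccVersions {
  private final int maxVersions;
  private final TreeSet<Long> visible = new TreeSet<>();  // live timestamps

  public MvccVersions(int maxVersions) {
    this.maxVersions = maxVersions;
  }

  public void put(long ts) {
    visible.add(ts);
    while (visible.size() > maxVersions) {
      visible.pollFirst();   // oldest version is masked forever, at put time
    }
  }

  /** Delete.addColumn: remove one specific version. */
  public void deleteColumn(long ts) {
    visible.remove(ts);
  }

  /** Get all versions, newest first. */
  public List<Long> get() {
    return new ArrayList<>(visible.descendingSet());
  }
}
```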





[jira] [Resolved] (HBASE-15449) HBase Backup Phase 3: Support physical table layout change

2016-09-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-15449.

  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks for the review, Vlad.

> HBase Backup Phase 3: Support physical table layout change 
> ---
>
> Key: HBASE-15449
> URL: https://issues.apache.org/jira/browse/HBASE-15449
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15449.v1.txt, 15449.v10.txt, 15449.v11.txt, 
> 15449.v12.txt, 15449.v2.txt, 15449.v4.txt, 15449.v5.txt, 15449.v7.txt, 
> 15449.v8.txt
>
>
> Table operations such as add column family, delete column family, truncate, 
> or delete table may result in subsequent backup / restore failure.





[jira] [Commented] (HBASE-16445) Refactor and reimplement RpcClient

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467712#comment-15467712
 ] 

Hadoop QA commented on HBASE-16445:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 57s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 56s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
49s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 150m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestScannerHeartbeatMessages |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 

[jira] [Updated] (HBASE-15449) HBase Backup Phase 3: Support physical table layout change

2016-09-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15449:
---
Attachment: 15449.v12.txt

Patch v12 is rebased.

> HBase Backup Phase 3: Support physical table layout change 
> ---
>
> Key: HBASE-15449
> URL: https://issues.apache.org/jira/browse/HBASE-15449
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15449.v1.txt, 15449.v10.txt, 15449.v11.txt, 
> 15449.v12.txt, 15449.v2.txt, 15449.v4.txt, 15449.v5.txt, 15449.v7.txt, 
> 15449.v8.txt
>
>
> Table operations such as add column family, delete column family, truncate, 
> or delete table may result in subsequent backup restore failure.
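The failure mode above can be sketched as a pre-restore compatibility check: if the column families recorded at backup time no longer match the live table, a naive restore is inconsistent. The class, method, and the set-equality rule here are hypothetical illustration, not the patch's actual API:

```java
import java.util.Set;

/**
 * Hypothetical pre-restore check: a restore is only safe when the column
 * families captured at backup time still match the live table's layout.
 */
public class LayoutCheckSketch {

    /** True when restoring the backed-up families onto the table can proceed. */
    public static boolean layoutCompatible(Set<String> backedUpFamilies,
                                           Set<String> currentFamilies) {
        // A family present in the backup but dropped from the table (or a
        // table recreated by truncate/delete) makes the restore inconsistent.
        return currentFamilies.equals(backedUpFamilies);
    }

    public static void main(String[] args) {
        Set<String> atBackup = Set.of("cf1", "cf2");
        System.out.println(layoutCompatible(atBackup, Set.of("cf1", "cf2"))); // true
        System.out.println(layoutCompatible(atBackup, Set.of("cf1")));        // false: cf2 dropped
    }
}
```

A real implementation would compare full table descriptors (families plus their settings), but set equality is enough to show why the operations listed above break restore.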



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-16545) Add backup test where data is ingested during backup procedure

2016-09-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-16545.

  Resolution: Fixed
Assignee: Ted Yu
Hadoop Flags: Reviewed

Thanks for the review, Vlad.

> Add backup test where data is ingested during backup procedure
> --
>
> Key: HBASE-16545
> URL: https://issues.apache.org/jira/browse/HBASE-16545
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 16545.v1.txt, 16545.v2.txt
>
>
> Currently the backup / restore tests do the following:
> * ingest data
> * perform full backup
> * ingest more data
> Data ingestion in step 3 above happens after the backup completes.
> This issue is to add concurrent data ingestion in the presence of an ongoing 
> backup.
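A minimal sketch of the intended test shape, with a thread-safe list standing in for a table and a point-in-time copy standing in for the backup; none of the names here are HBase test APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/**
 * Sketch: take a "backup" (a snapshot copy) while an ingestion thread is
 * still writing, then check the snapshot kept all pre-backup data.
 */
public class ConcurrentBackupSketch {

    public static List<String> backupWhileIngesting(int preRows, int concurrentRows) {
        List<String> table = new CopyOnWriteArrayList<>();
        for (int i = 0; i < preRows; i++) {
            table.add("pre-" + i);                    // step 1: ingest data
        }
        Thread ingester = new Thread(() -> {          // ingestion now runs DURING the backup
            for (int i = 0; i < concurrentRows; i++) {
                table.add("concurrent-" + i);
            }
        });
        ingester.start();
        List<String> backup = new ArrayList<>(table); // step 2: point-in-time "full backup"
        try {
            ingester.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return backup;
    }

    public static void main(String[] args) {
        List<String> backup = backupWhileIngesting(100, 100);
        // The backup must contain every row written before it started...
        for (int i = 0; i < 100; i++) {
            if (!backup.contains("pre-" + i)) throw new AssertionError("lost pre-backup row " + i);
        }
        // ...and at most the concurrently written rows on top of that.
        if (backup.size() < 100 || backup.size() > 200) throw new AssertionError();
    }
}
```

The real test would ingest into an HBase table and run the backup procedure, but the invariant checked is the same: concurrent writes must never make pre-backup data disappear from the backup.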





[jira] [Commented] (HBASE-16530) Reduce DBE code duplication

2016-09-06 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467506#comment-15467506
 ] 

binlijin commented on HBASE-16530:
--

OK, let me try it.

> Reduce DBE code duplication
> ---
>
> Key: HBASE-16530
> URL: https://issues.apache.org/jira/browse/HBASE-16530
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16530-master_V1.patch, 
> HBASE-16530-master_V2.patch, HBASE-16530-master_V3.patch
>
>






[jira] [Updated] (HBASE-16445) Refactor and reimplement RpcClient

2016-09-06 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16445:
--
Attachment: HBASE-16445-v6.patch

> Refactor and reimplement RpcClient
> --
>
> Key: HBASE-16445
> URL: https://issues.apache.org/jira/browse/HBASE-16445
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16445-v1.patch, HBASE-16445-v2.patch, 
> HBASE-16445-v3.patch, HBASE-16445-v4.patch, HBASE-16445-v5.patch, 
> HBASE-16445-v6.patch, HBASE-16445.patch
>
>
> There is a lot of common logic between RpcClientImpl and AsyncRpcClient. We 
> should end up with much less code compared to the current implementations.





[jira] [Work started] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh

2016-09-06 Thread Loknath Priyatham Teja Singamsetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-15924 started by Loknath Priyatham Teja Singamsetty .
---
> Enhance hbase services autorestart capability to hbase-daemon.sh 
> -
>
> Key: HBASE-15924
> URL: https://issues.apache.org/jira/browse/HBASE-15924
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.19
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
> Fix For: 0.98.23
>
>
> As part of HBASE-5939, the autorestart for hbase services has been added to 
> deal with scenarios where hbase services (master/regionserver/master-backup) 
> get killed or go down, leading to unplanned outages. The changes were made 
> to hbase-daemon.sh to support the autorestart option. 
> However, the autorestart implementation doesn't work in standalone mode and, 
> beyond that, has a few gaps relative to the release notes of HBASE-5939. Here 
> is an attempt to redesign and fix the functionality, considering all possible 
> use cases of hbase service operations.
> Release Notes of HBASE-5939:
> --
> When launched with autorestart, HBase processes will automatically restart if 
> they are not properly terminated, either by a "stop" command or by a cluster 
> stop. To ensure that it does not overload the system when the server itself 
> is corrupted and the process cannot be restarted, the server sleeps for 5 
> minutes before restarting if it was last started less than 5 minutes earlier. 
> To use it, launch the process with "bin/start-hbase autorestart". This option 
> is not fully compatible with the existing "restart" command: if you ask for a 
> restart on a server launched with autorestart, the server will restart but 
> the next server instance won't be automatically restarted.
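The back-off policy in those release notes can be sketched as follows. This is a pure-Java illustration of the stated 5-minute rule, not hbase-daemon.sh itself, and the class and method names are invented:

```java
/**
 * Sketch of the autorestart back-off: restart immediately unless the previous
 * start was less than the threshold ago, in which case sleep a full threshold
 * first so a crash-looping server does not overload the host.
 */
public class AutorestartBackoff {

    private final long thresholdMillis;
    private boolean started = false;
    private long lastStartMillis;

    public AutorestartBackoff(long thresholdMillis) {
        this.thresholdMillis = thresholdMillis;
    }

    /** Returns how long to sleep before the next (re)start. */
    public long delayBeforeRestart(long nowMillis) {
        long delay = 0;
        if (started && nowMillis - lastStartMillis < thresholdMillis) {
            delay = thresholdMillis;              // full back-off, per the release notes
        }
        started = true;
        lastStartMillis = nowMillis + delay;      // the moment the restart actually happens
        return delay;
    }

    public static void main(String[] args) {
        AutorestartBackoff b = new AutorestartBackoff(300_000); // 5 minutes
        System.out.println(b.delayBeforeRestart(0));            // 0: first start
        System.out.println(b.delayBeforeRestart(60_000));       // 300000: crashed after 1 min
        System.out.println(b.delayBeforeRestart(1_000_000));    // 0: ran long enough
    }
}
```

Keeping the policy in one small, testable unit like this is also one way to address the standalone-mode gaps: the script's restart loop and the back-off decision can then be fixed independently.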





[jira] [Commented] (HBASE-16505) Add AsyncRegion interface to pass deadline and support async operations

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467260#comment-15467260
 ] 

Hadoop QA commented on HBASE-16505:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 25s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 93m 50s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
51s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 148m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-06 |
| JIRA Patch URL | 

[jira] [Commented] (HBASE-15968) MVCC-sensitive semantics of versions

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467256#comment-15467256
 ] 

Hadoop QA commented on HBASE-15968:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 39s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 15s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 31s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 45s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Switch statement found in 
org.apache.hadoop.hbase.regionserver.querymatcher.MvccSensitiveTracker.add(Cell)
 where default case is missing  At MvccSensitiveTracker.java:where default case 
is missing  At 

[jira] [Comment Edited] (HBASE-15968) MVCC-sensitive semantics of versions

2016-09-06 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467105#comment-15467105
 ] 

Phil Yang edited comment on HBASE-15968 at 9/6/16 10:52 AM:


First, I changed the name to mvcc-sensitive; I think it reads better than 
before :)

The logic is different from the initial design doc, so I removed the link. The 
logic is now much simpler. I use a MvccSensitiveTracker, implementing both 
ColumnTracker and DeleteTracker, to track deletes and versions. In the tracker 
we judge whether a Put is deleted by a delete marker with a higher mvcc (and, 
of course, a same-or-higher timestamp), or "masked" (equivalent to deleted) by 
a sufficient number of Puts with higher timestamps. The logic of 
ScanQueryMatcher is unchanged except for minor compaction: in a minor 
compaction we cannot drop anything, because we only see a subset of the cells.

Also, we cannot set mvcc to 0 while compacting.

Users can set MVCC_SENSITIVE to "true" in the CF's configuration to enable 
this logic, and REPLICATION_SCOPE must be set to 2 if the data needs to be 
pushed to a slave peer with this feature on (see HBASE-9465), because the 
order of writes is meaningful.

Any comments are welcome, thanks!


was (Author: yangzhe1991):
First I change the name to mvcc-sensitive, I think it may be better than before 
:)

The logic is different from the initial design doc, so I removed the link. Now 
the logic is much simpler. I use a MvccSensitiveTracker implementing both 
ColumnTracker and DeleteTracker to track delete and versions. In the tracker, 
we should judge if a Put is deleted by delete-marker with higher mvcc(and of 
course, same or higher timestamp) , or "masked"(same as deleted) by enough 
number of Put with higher timestamp. The logic of ScanQueryMatcher is not 
changed except minor compaction. In minor compaction we can not drop anything 
because we only see partial cells.

And we can not set mvcc to 0 while compacting.

Users can set MVCC_SENSITIVE to "true" in CF's configuration to enable this 
logic, and REPLICATION_SCOPE must be set to 2 if enable(See HBASE-9465), 
because the order of write is meaningful.

Any comments are welcomed, thanks!

> MVCC-sensitive semantics of versions
> 
>
> Key: HBASE-15968
> URL: https://issues.apache.org/jira/browse/HBASE-15968
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15968-v1.patch
>
>

[jira] [Commented] (HBASE-15968) MVCC-sensitive semantics of versions

2016-09-06 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467105#comment-15467105
 ] 

Phil Yang commented on HBASE-15968:
---

First, I changed the name to mvcc-sensitive; I think it reads better than 
before :)

The logic is different from the initial design doc, so I removed the link. The 
logic is now much simpler. I use a MvccSensitiveTracker, implementing both 
ColumnTracker and DeleteTracker, to track deletes and versions. In the tracker 
we judge whether a Put is deleted by a delete marker with a higher mvcc (and, 
of course, a same-or-higher timestamp), or "masked" (equivalent to deleted) by 
a sufficient number of Puts with higher timestamps. The logic of 
ScanQueryMatcher is unchanged except for minor compaction: in a minor 
compaction we cannot drop anything, because we only see a subset of the cells.

Also, we cannot set mvcc to 0 while compacting.

Users can set MVCC_SENSITIVE to "true" in the CF's configuration to enable 
this logic, and REPLICATION_SCOPE must be set to 2 if the feature is enabled 
(see HBASE-9465), because the order of writes is meaningful.

Any comments are welcome, thanks!
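The delete-marker rule described in this comment can be sketched as a toy model: a Put is masked only by a marker with a strictly higher mvcc and a same-or-higher timestamp. Marker and isDeleted are hypothetical names for illustration, not the actual MvccSensitiveTracker code:

```java
import java.util.Arrays;
import java.util.List;

/**
 * Toy model of the mvcc-sensitive delete check: a delete marker only masks
 * Puts that were written before it (lower mvcc) at a covered timestamp.
 */
public class MvccDeleteCheck {

    public static final class Marker {
        final long ts;
        final long mvcc;
        public Marker(long ts, long mvcc) { this.ts = ts; this.mvcc = mvcc; }
    }

    /** True when the put at (putTs, putMvcc) is masked by any marker. */
    public static boolean isDeleted(long putTs, long putMvcc, List<Marker> markers) {
        for (Marker m : markers) {
            if (m.mvcc > putMvcc && m.ts >= putTs) {
                return true; // marker written after the put, covering its timestamp
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // case 1 from the issue: put t2 (mvcc 1) -> put t3 (mvcc 2)
        //                        -> delete t<=3 (mvcc 3) -> put t1 (mvcc 4)
        List<Marker> markers = Arrays.asList(new Marker(3, 3));
        System.out.println(isDeleted(2, 1, markers)); // true: delete was written later
        System.out.println(isDeleted(3, 2, markers)); // true
        System.out.println(isDeleted(1, 4, markers)); // false: the put came after the delete
    }
}
```

This is exactly why the write order matters for replication (hence REPLICATION_SCOPE = 2): replaying the same cells in a different order changes their relative mvcc and therefore the masking outcome.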

> MVCC-sensitive semantics of versions
> 
>
> Key: HBASE-15968
> URL: https://issues.apache.org/jira/browse/HBASE-15968
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15968-v1.patch
>
>
> In HBase book, we have a section in Versions called "Current Limitations" see 
> http://hbase.apache.org/book.html#_current_limitations
> {quote}
> 28.3. Current Limitations
> 28.3.1. Deletes mask Puts
> Deletes mask puts, even puts that happened after the delete was entered. See 
> HBASE-2256. Remember that a delete writes a tombstone, which only disappears 
> after the next major compaction has run. Suppose you do a delete of 
> everything ⇐ T. After this you do a new put with a timestamp ⇐ T. This put, 
> even if it happened after the delete, will be masked by the delete tombstone. 
> Performing the put will not fail, but when you do a get you will notice the 
> put had no effect. It will start working again after the major 
> compaction has run. These issues should not be a problem if you use 
> always-increasing versions for new puts to a row. But they can occur even if 
> you do not care about time: just do delete and put immediately after each 
> other, and there is some chance they happen within the same millisecond.
> 28.3.2. Major compactions change query results
> …​create three cell versions at t1, t2 and t3, with a maximum-versions 
> setting of 2. So when getting all versions, only the values at t2 and t3 will 
> be returned. But if you delete the version at t2 or t3, the one at t1 will 
> appear again. Obviously, once a major compaction has run, such behavior will 
> not be the case anymore…​ (See Garbage Collection in Bending time in HBase.)
> {quote}
> These limitations result from the current implementation of multi-versioning: 
> we only consider the timestamp, no matter when the write arrives, and we do 
> not remove an old version immediately even when there are enough newer 
> versions. 
> So we can get stronger version semantics with two guarantees:
> 1. A Delete will not mask a Put that comes after it.
> 2. If a version is masked by a sufficient number of higher versions (VERSIONS 
> in the CF's conf), it will never be seen again.
> Some examples for understanding:
> (delete t<=3 means use Delete.addColumns to delete all versions whose ts is 
> not greater than 3, and delete t3 means use Delete.addColumn to delete the 
> version whose ts=3)
> case 1: put t2 -> put t3 -> delete t<=3 -> put t1, and we will get t1 because 
> the put is after delete.
> case 2: maxversion=2, put t1 -> put t2 -> put t3 -> delete t3, and we will 
> always get t2 no matter if there is a major compaction, because t1 is masked 
> when we put t3 so t1 will never be seen.
> case 3: maxversion=2, put t1 -> put t2 -> put t3 -> delete t2 -> delete t3, 
> and we will get nothing.
> case 4: maxversion=3, put t1 -> put t2 -> put t3 -> delete t2 -> delete t3, 
> and we will get t1 because it is not masked.
> case 5: maxversion=2, put t1 -> put t2 -> put t3 -> delete t3 -> put t1, and 
> we can get t3+t1 because when we put t1 the second time it is the 2nd-latest 
> version and can be read.
> case 6: maxversion=2, put t3 -> put t2 -> put t1, and we will get t3+t2, just 
> as we do now; ts is still the key that orders versions.
> Different VERSIONS settings may produce different results even when the result 
> size is smaller than VERSIONS (see cases 3 and 4). So Get/Scan.setMaxVersions 
> will be applied at the end, after we read the correct data according to the 
> CF's VERSIONS setting.
> The semantics differ from current HBase and may need more supporting logic, so 
> the feature is configurable and disabled by default.
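The second guarantee, "masked by enough newer versions" (ignoring deletes), can be sketched as: once a version has VERSIONS or more puts with higher timestamps, it can never be seen again. VersionMasking and visible are invented names illustrating the stated semantics, not HBase code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Toy model of the masking guarantee: with no deletes involved, only the
 * maxVersions highest timestamps survive; everything older is permanently
 * masked, so dropping it never changes query results.
 */
public class VersionMasking {

    /** Timestamps still visible after applying the masking rule, newest first. */
    public static List<Long> visible(List<Long> putTimestamps, int maxVersions) {
        List<Long> sorted = new ArrayList<>(putTimestamps);
        sorted.sort(Collections.reverseOrder());
        // Everything past the first maxVersions entries is permanently masked.
        return sorted.subList(0, Math.min(maxVersions, sorted.size()));
    }

    public static void main(String[] args) {
        // As in case 2 (before its delete): maxversion=2, put t1 -> put t2 -> put t3.
        // t1 is masked once t2 and t3 exist, so only t3 and t2 survive.
        System.out.println(visible(List.of(1L, 2L, 3L), 2)); // [3, 2]
    }
}
```

Because masking is permanent under this rule, a major compaction that physically removes masked versions cannot change results, which is what eliminates the "major compactions change query results" limitation quoted above.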




[jira] [Updated] (HBASE-15968) MVCC-sensitive semantics of versions

2016-09-06 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15968:
--
Status: Patch Available  (was: Open)

> MVCC-sensitive semantics of versions
> 
>
> Key: HBASE-15968
> URL: https://issues.apache.org/jira/browse/HBASE-15968
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15968-v1.patch
>
>





[jira] [Updated] (HBASE-15968) MVCC-sensitive semantics of versions

2016-09-06 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15968:
--
Attachment: HBASE-15968-v1.patch

The main work is done. Let's see the QA result first.

> MVCC-sensitive semantics of versions
> 
>
> Key: HBASE-15968
> URL: https://issues.apache.org/jira/browse/HBASE-15968
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15968-v1.patch
>
>
> In HBase book, we have a section in Versions called "Current Limitations" see 
> http://hbase.apache.org/book.html#_current_limitations
> {quote}
> 28.3. Current Limitations
> 28.3.1. Deletes mask Puts
> Deletes mask puts, even puts that happened after the delete was entered. See 
> HBASE-2256. Remember that a delete writes a tombstone, which only disappears 
> after then next major compaction has run. Suppose you do a delete of 
> everything ⇐ T. After this you do a new put with a timestamp ⇐ T. This put, 
> even if it happened after the delete, will be masked by the delete tombstone. 
> Performing the put will not fail, but when you do a get you will notice the 
> put did have no effect. It will start working again after the major 
> compaction has run. These issues should not be a problem if you use 
> always-increasing versions for new puts to a row. But they can occur even if 
> you do not care about time: just do delete and put immediately after each 
> other, and there is some chance they happen within the same millisecond.
> 28.3.2. Major compactions change query results
> …​create three cell versions at t1, t2 and t3, with a maximum-versions 
> setting of 2. So when getting all versions, only the values at t2 and t3 will 
> be returned. But if you delete the version at t2 or t3, the one at t1 will 
> appear again. Obviously, once a major compaction has run, such behavior will 
> not be the case anymore…​ (See Garbage Collection in Bending time in HBase.)
> {quote}
> These limitations result from the current implementation of multi-versions: 
> we only consider the timestamp, no matter when a cell arrives, and we do not 
> remove an old version immediately even when there are enough newer versions. 
> So we can get stronger version semantics with two guarantees:
> 1, A Delete will not mask a Put that comes after it.
> 2, If a version is masked by a sufficient number of higher versions (VERSIONS 
> in the CF's configuration), it will never be seen again.
> Some examples for understanding:
> (delete t<=3 means use Delete.addColumns to delete all versions whose ts is 
> not greater than 3, and delete t3 means use Delete.addColumn to delete the 
> version whose ts=3)
> case 1: put t2 -> put t3 -> delete t<=3 -> put t1, and we will get t1 because 
> the put is after delete.
> case 2: maxversion=2, put t1 -> put t2 -> put t3 -> delete t3, and we will 
> always get t2 no matter if there is a major compaction, because t1 is masked 
> when we put t3 so t1 will never be seen.
> case 3: maxversion=2, put t1 -> put t2 -> put t3 -> delete t2 -> delete t3, 
> and we will get nothing.
> case 4: maxversion=3, put t1 -> put t2 -> put t3 -> delete t2 -> delete t3, 
> and we will get t1 because it is not masked.
> case 5: maxversion=2, put t1 -> put t2 -> put t3 -> delete t3 -> put t1, and 
> we can get t3+t1 because when we put t1 at second time it is the 2nd latest 
> version and it can be read.
> case 6: maxversion=2, put t3 -> put t2 -> put t1, and we will get t3+t2 just 
> like what we get now; ts is still the key of versions.
> Different VERSIONS settings may produce different results even when the 
> result is smaller than VERSIONS (see cases 3 and 4). So Get/Scan.setMaxVersions 
> will be handled at the end, after we read the correct data according to the 
> CF's VERSIONS setting.
> These semantics differ from current HBase, and we may need more logic to 
> support them, so the feature is configurable and disabled by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16445) Refactor and reimplement RpcClient

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466990#comment-15466990
 ] 

Hadoop QA commented on HBASE-16445:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 7s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
52s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} hbase-client-jdk1.8.0_101 with JDK v1.8.0_101 generated 1 
new + 14 unchanged - 0 fixed = 15 total (was 14) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hbase-client-jdk1.7.0_111 with JDK v1.7.0_111 generated 1 
new + 14 unchanged - 0 fixed = 15 total (was 14) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 27s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
50s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 156m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Updated] (HBASE-16505) Add AsyncRegion interface to pass deadline and support async operations

2016-09-06 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16505:
--
Attachment: HBASE-16505-v6.patch

Fix findbugs warning and add a test

> Add AsyncRegion interface to pass deadline and support async operations
> ---
>
> Key: HBASE-16505
> URL: https://issues.apache.org/jira/browse/HBASE-16505
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16505-v1.patch, HBASE-16505-v2.patch, 
> HBASE-16505-v3.patch, HBASE-16505-v4.patch, HBASE-16505-v5.patch, 
> HBASE-16505-v6.patch
>
>
> If we want to know the correct setting of timeout in read/write path, we need 
> add a new parameter in operation-methods of Region.





[jira] [Resolved] (HBASE-16564) ITBLL run failed with hadoop 2.7.2 on branch 0.98

2016-09-06 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen resolved HBASE-16564.
---
Resolution: Invalid

As [~Apache9] said, the best solution is to upgrade the hadoop client, so 
closing this issue as invalid.

> ITBLL run failed with hadoop 2.7.2 on branch 0.98
> -
>
> Key: HBASE-16564
> URL: https://issues.apache.org/jira/browse/HBASE-16564
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Priority: Minor
>
> 0.98 is compiled with hadoop 2.2.0, so it has some compatibility issues with 
> hadoop 2.7.2 (it seems 2.5.0+ has the same issue): some counters have been 
> removed.
> IMO we should catch the exception so our ITBLL run can go on.
> {code}
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Added new HBaseAdmin
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Restoring cluster - done
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Stopping mini 
> mapreduce cluster...
> 16/09/06 15:39:33 INFO Configuration.deprecation: mapred.job.tracker is 
> deprecated. Instead, use mapreduce.jobtracker.address
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Mini mapreduce 
> cluster stopped
> 16/09/06 15:39:33 ERROR util.AbstractHBaseTool: Error running command-line 
> tool
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
>   at 
> org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
>   at 
> org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
>   at 
> org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
>   at 
> org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>   at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753)
>   at 
> org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1361)
>   at 
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.jobCompletion(IntegrationTestBigLinkedList.java:543)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.runRandomInputGenerator(IntegrationTestBigLinkedList.java:505)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.run(IntegrationTestBigLinkedList.java:553)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.runGenerator(IntegrationTestBigLinkedList.java:842)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.run(IntegrationTestBigLinkedList.java:892)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.runTestFromCommandLine(IntegrationTestBigLinkedList.java:1237)
>   at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:115)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.main(IntegrationTestBigLinkedList.java:1272)
> {code}
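The mitigation suggested above, catching the exception so the run can continue, reduces to tolerating counter names the old client's enum does not know. A self-contained sketch of the pattern (the enum and method names are illustrative, not the real Hadoop classes):

```java
class CounterLookup {
    // Stand-in for an old client's JobCounter enum that lacks
    // MB_MILLIS_MAPS (which was added in hadoop-2.3).
    enum OldJobCounter { TOTAL_LAUNCHED_MAPS, TOTAL_LAUNCHED_REDUCES }

    // Enum.valueOf throws IllegalArgumentException for unknown names --
    // the exact failure in the stack trace. Catch it and skip the counter
    // instead of letting the whole integration test die.
    static OldJobCounter findCounterSafely(String name) {
        try {
            return OldJobCounter.valueOf(name);
        } catch (IllegalArgumentException e) {
            return null; // counter reported by a newer server: ignore it
        }
    }
}
```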





[jira] [Comment Edited] (HBASE-16463) Improve transparent table/CF encryption with Commons Crypto

2016-09-06 Thread Dapeng Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466862#comment-15466862
 ] 

Dapeng Sun edited comment on HBASE-16463 at 9/6/16 8:52 AM:


Just checked the error log of {{hbaseprotoc}}; I think the error is caused by 
the Linux image not having the protobuf library installed. The command {{mvn 
-DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
-DHBasePatchProcess}} passes in my local environment.
{noformat}
 [ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:2.7.1:protoc (compile-protoc) on project 
hbase-protocol: org.apache.maven.plugin.MojoExecutionException: 'protoc 
--version' did not return a version
{noformat}

Are there any comments?
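The failing check boils down to whether {{protoc}} is on the PATH and answers {{--version}}. A generic preflight sketch (not part of the HBase build scripts; the function name is invented):

```shell
# Verify a tool exists and responds to --version, as the Maven
# compile-protoc goal requires of protoc.
check_tool() {
  command -v "$1" >/dev/null 2>&1 || { echo "$1 is not installed" >&2; return 1; }
  "$1" --version >/dev/null 2>&1 || { echo "$1 --version failed" >&2; return 1; }
  echo "$1 OK"
}

# Example usage: check_tool protoc
```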


was (Author: dapengsun):
Just checked the error log of {{hbaseprotoc}}, I think the error is caused by 
the linux image didn't install protobuf library, and the command {{ mvn 
-DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
-DHBasePatchProcess}} is passed at my local env.
{noformat}
 [ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:2.7.1:protoc (compile-protoc) on project 
hbase-protocol: org.apache.maven.plugin.MojoExecutionException: 'protoc 
--version' did not return a version
{noformat}

Is there any comments?

> Improve transparent table/CF encryption with Commons Crypto
> ---
>
> Key: HBASE-16463
> URL: https://issues.apache.org/jira/browse/HBASE-16463
> Project: HBase
>  Issue Type: New Feature
>  Components: encryption
>Affects Versions: 2.0.0
>Reporter: Dapeng Sun
> Attachments: HBASE-16463.001.patch, HBASE-16463.002.patch, 
> HBASE-16463.003.patch
>
>
> Apache Commons Crypto 
> (https://commons.apache.org/proper/commons-crypto/index.html) is a 
> cryptographic library optimized with AES-NI.
> HBASE-7544 introduced a framework for transparent encryption to protect 
> HFile and WAL data at rest. Currently the JCE cipher is used by default; 
> this improvement will use Commons Crypto to accelerate HBase's transparent 
> encryption. A new crypto provider backed by Commons Crypto will be provided 
> for transparent encryption.





[jira] [Comment Edited] (HBASE-16463) Improve transparent table/CF encryption with Commons Crypto

2016-09-06 Thread Dapeng Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466862#comment-15466862
 ] 

Dapeng Sun edited comment on HBASE-16463 at 9/6/16 8:53 AM:


Just checked the error log of {{hbaseprotoc}}; I think the error is caused by 
the Linux image not having the protobuf library installed. The command {{mvn 
-DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
-DHBasePatchProcess}} passes in my local environment.
{noformat}
 [ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:2.7.1:protoc (compile-protoc) on project 
hbase-protocol: org.apache.maven.plugin.MojoExecutionException: 'protoc 
--version' did not return a version
{noformat}

Are there any comments about the latest patch?


was (Author: dapengsun):
Just checked the error log of {{hbaseprotoc}}, I think the error is caused by 
the linux image didn't install protobuf library, and the command {{mvn 
-DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
-DHBasePatchProcess}} is passed at my local env.
{noformat}
 [ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:2.7.1:protoc (compile-protoc) on project 
hbase-protocol: org.apache.maven.plugin.MojoExecutionException: 'protoc 
--version' did not return a version
{noformat}

Is there any comments?

> Improve transparent table/CF encryption with Commons Crypto
> ---
>
> Key: HBASE-16463
> URL: https://issues.apache.org/jira/browse/HBASE-16463
> Project: HBase
>  Issue Type: New Feature
>  Components: encryption
>Affects Versions: 2.0.0
>Reporter: Dapeng Sun
> Attachments: HBASE-16463.001.patch, HBASE-16463.002.patch, 
> HBASE-16463.003.patch
>
>
> Apache Commons Crypto 
> (https://commons.apache.org/proper/commons-crypto/index.html) is a 
> cryptographic library optimized with AES-NI.
> HBASE-7544 introduced a framework for transparent encryption to protect 
> HFile and WAL data at rest. Currently the JCE cipher is used by default; 
> this improvement will use Commons Crypto to accelerate HBase's transparent 
> encryption. A new crypto provider backed by Commons Crypto will be provided 
> for transparent encryption.





[jira] [Commented] (HBASE-16463) Improve transparent table/CF encryption with Commons Crypto

2016-09-06 Thread Dapeng Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466862#comment-15466862
 ] 

Dapeng Sun commented on HBASE-16463:


Just checked the error log of {{hbaseprotoc}}; I think the error is caused by 
the Linux image not having the protobuf library installed. The command {{mvn 
-DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
-DHBasePatchProcess}} passes in my local environment.
{noformat}
 [ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:2.7.1:protoc (compile-protoc) on project 
hbase-protocol: org.apache.maven.plugin.MojoExecutionException: 'protoc 
--version' did not return a version
{noformat}

Are there any comments?

> Improve transparent table/CF encryption with Commons Crypto
> ---
>
> Key: HBASE-16463
> URL: https://issues.apache.org/jira/browse/HBASE-16463
> Project: HBase
>  Issue Type: New Feature
>  Components: encryption
>Affects Versions: 2.0.0
>Reporter: Dapeng Sun
> Attachments: HBASE-16463.001.patch, HBASE-16463.002.patch, 
> HBASE-16463.003.patch
>
>
> Apache Commons Crypto 
> (https://commons.apache.org/proper/commons-crypto/index.html) is a 
> cryptographic library optimized with AES-NI.
> HBASE-7544 introduced a framework for transparent encryption to protect 
> HFile and WAL data at rest. Currently the JCE cipher is used by default; 
> this improvement will use Commons Crypto to accelerate HBase's transparent 
> encryption. A new crypto provider backed by Commons Crypto will be provided 
> for transparent encryption.





[jira] [Commented] (HBASE-16564) ITBLL run failed with hadoop 2.7.2 on branch 0.98

2016-09-06 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466839#comment-15466839
 ] 

Heng Chen commented on HBASE-16564:
---

So your suggestion is to upgrade the client or downgrade YARN?

> ITBLL run failed with hadoop 2.7.2 on branch 0.98
> -
>
> Key: HBASE-16564
> URL: https://issues.apache.org/jira/browse/HBASE-16564
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Priority: Minor
>
> 0.98 is compiled with hadoop 2.2.0, so it has some compatibility issues with 
> hadoop 2.7.2 (it seems 2.5.0+ has the same issue): some counters have been 
> removed.
> IMO we should catch the exception so our ITBLL run can go on.
> {code}
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Added new HBaseAdmin
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Restoring cluster - done
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Stopping mini 
> mapreduce cluster...
> 16/09/06 15:39:33 INFO Configuration.deprecation: mapred.job.tracker is 
> deprecated. Instead, use mapreduce.jobtracker.address
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Mini mapreduce 
> cluster stopped
> 16/09/06 15:39:33 ERROR util.AbstractHBaseTool: Error running command-line 
> tool
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
>   at 
> org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
>   at 
> org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
>   at 
> org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
>   at 
> org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>   at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753)
>   at 
> org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1361)
>   at 
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.jobCompletion(IntegrationTestBigLinkedList.java:543)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.runRandomInputGenerator(IntegrationTestBigLinkedList.java:505)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.run(IntegrationTestBigLinkedList.java:553)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.runGenerator(IntegrationTestBigLinkedList.java:842)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.run(IntegrationTestBigLinkedList.java:892)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.runTestFromCommandLine(IntegrationTestBigLinkedList.java:1237)
>   at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:115)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.main(IntegrationTestBigLinkedList.java:1272)
> {code}





[jira] [Commented] (HBASE-16564) ITBLL run failed with hadoop 2.7.2 on branch 0.98

2016-09-06 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466824#comment-15466824
 ] 

Duo Zhang commented on HBASE-16564:
---

I do not think this is a "miss". The counter was added in hadoop-2.3. It does 
not make sense to use 2.7.2 for other modules but 2.2.0 for 
hadoop-mapreduce-client-core...

> ITBLL run failed with hadoop 2.7.2 on branch 0.98
> -
>
> Key: HBASE-16564
> URL: https://issues.apache.org/jira/browse/HBASE-16564
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Priority: Minor
>
> 0.98 is compiled with hadoop 2.2.0, so it has some compatibility issues with 
> hadoop 2.7.2 (it seems 2.5.0+ has the same issue): some counters have been 
> removed.
> IMO we should catch the exception so our ITBLL run can go on.
> {code}
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Added new HBaseAdmin
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Restoring cluster - done
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Stopping mini 
> mapreduce cluster...
> 16/09/06 15:39:33 INFO Configuration.deprecation: mapred.job.tracker is 
> deprecated. Instead, use mapreduce.jobtracker.address
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Mini mapreduce 
> cluster stopped
> 16/09/06 15:39:33 ERROR util.AbstractHBaseTool: Error running command-line 
> tool
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
>   at 
> org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
>   at 
> org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
>   at 
> org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
>   at 
> org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>   at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753)
>   at 
> org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1361)
>   at 
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.jobCompletion(IntegrationTestBigLinkedList.java:543)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.runRandomInputGenerator(IntegrationTestBigLinkedList.java:505)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.run(IntegrationTestBigLinkedList.java:553)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.runGenerator(IntegrationTestBigLinkedList.java:842)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.run(IntegrationTestBigLinkedList.java:892)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.runTestFromCommandLine(IntegrationTestBigLinkedList.java:1237)
>   at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:115)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.main(IntegrationTestBigLinkedList.java:1272)
> {code}





[jira] [Commented] (HBASE-16460) Can't rebuild the BucketAllocator's data structures when BucketCache uses FileIOEngine

2016-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466826#comment-15466826
 ] 

Hudson commented on HBASE-16460:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK7 #1780 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1780/])
HBASE-16460 Can't rebuild the BucketAllocator's data structures when (tedyu: 
rev d91a28a450fc0f697bf78aab07543cd48f7dedfc)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java


> Can't rebuild the BucketAllocator's data structures when BucketCache uses 
> FileIOEngine
> --
>
> Key: HBASE-16460
> URL: https://issues.apache.org/jira/browse/HBASE-16460
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3, 0.98.22
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
> Attachments: 16460.v6.patch, 16460.v6.patch, 
> HBASE-16460-branch-1-v6.patch, HBASE-16460-v1.patch, HBASE-16460-v2.patch, 
> HBASE-16460-v2.patch, HBASE-16460-v3.patch, HBASE-16460-v4.patch, 
> HBASE-16460-v5.patch, HBASE-16460-v5.patch, HBASE-16460.patch
>
>
> When the bucket cache uses FileIOEngine, it rebuilds the bucket allocator's 
> data structures from a persisted map. So it should first read the map from 
> the persistence file and then use that map to construct a new 
> BucketAllocator. But the code currently has these statements in the wrong 
> order in the retrieveFromFile() method of BucketCache.java.
> {code}
>   BucketAllocator allocator = new BucketAllocator(cacheCapacity, 
> bucketSizes, backingMap, realCacheSize);
>   backingMap = (ConcurrentHashMap) 
> ois.readObject();
> {code}
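The order matters because the allocator rebuilds its state from the map it is handed at construction time; constructing it before the map has been deserialized rebuilds from an empty map. A toy illustration of the two sequences (stand-in types, not the HBase classes):

```java
import java.util.HashMap;
import java.util.Map;

class RebuildOrder {
    // Stand-in for BucketAllocator: derives its state from the map
    // contents visible at construction time.
    static class Allocator {
        final int entriesSeen;
        Allocator(Map<String, Integer> backingMap) {
            this.entriesSeen = backingMap.size();
        }
    }

    // Stand-in for ois.readObject() on the persistence file.
    static Map<String, Integer> loadPersistedMap() {
        Map<String, Integer> m = new HashMap<>();
        m.put("block-1", 64);
        m.put("block-2", 128);
        return m;
    }

    // Buggy sequence: the allocator is built while the map is still empty,
    // so the persisted entries are never seen.
    static int rebuildWrong() {
        Map<String, Integer> backingMap = new HashMap<>();
        Allocator allocator = new Allocator(backingMap); // map still empty
        backingMap.putAll(loadPersistedMap());           // too late
        return allocator.entriesSeen;
    }

    // Corrected sequence: read the persisted map first, then build.
    static int rebuildRight() {
        Map<String, Integer> backingMap = loadPersistedMap();
        Allocator allocator = new Allocator(backingMap);
        return allocator.entriesSeen;
    }
}
```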





[jira] [Commented] (HBASE-16564) ITBLL run failed with hadoop 2.7.2 on branch 0.98

2016-09-06 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466819#comment-15466819
 ] 

Heng Chen commented on HBASE-16564:
---

It seems the hadoop 2.2.0 client is missing the counter?

> ITBLL run failed with hadoop 2.7.2 on branch 0.98
> -
>
> Key: HBASE-16564
> URL: https://issues.apache.org/jira/browse/HBASE-16564
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Priority: Minor
>
> 0.98 is compiled with hadoop 2.2.0, so it has some compatibility issues with 
> hadoop 2.7.2 (it seems 2.5.0+ has the same issue): some counters have been 
> removed.
> IMO we should catch the exception so our ITBLL run can go on.
> {code}
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Added new HBaseAdmin
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Restoring cluster - done
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Stopping mini 
> mapreduce cluster...
> 16/09/06 15:39:33 INFO Configuration.deprecation: mapred.job.tracker is 
> deprecated. Instead, use mapreduce.jobtracker.address
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Mini mapreduce 
> cluster stopped
> 16/09/06 15:39:33 ERROR util.AbstractHBaseTool: Error running command-line 
> tool
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
>   at 
> org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
>   at 
> org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
>   at 
> org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
>   at 
> org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>   at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753)
>   at 
> org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1361)
>   at 
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.jobCompletion(IntegrationTestBigLinkedList.java:543)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.runRandomInputGenerator(IntegrationTestBigLinkedList.java:505)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.run(IntegrationTestBigLinkedList.java:553)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.runGenerator(IntegrationTestBigLinkedList.java:842)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.run(IntegrationTestBigLinkedList.java:892)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.runTestFromCommandLine(IntegrationTestBigLinkedList.java:1237)
>   at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:115)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.main(IntegrationTestBigLinkedList.java:1272)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16564) ITBLL run failed with hadoop 2.7.2 on branch 0.98

2016-09-06 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466813#comment-15466813
 ] 

Duo Zhang commented on HBASE-16564:
---

But JobCounter.MB_MILLIS_MAPS does exist in hadoop-2.7.2:

https://github.com/apache/hadoop/blob/branch-2.7.2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobCounter.java

Could you please check the version of your hadoop-mapreduce-client-core?

> ITBLL run failed with hadoop 2.7.2 on branch 0.98
> -
>
> Key: HBASE-16564
> URL: https://issues.apache.org/jira/browse/HBASE-16564
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Priority: Minor
>
> 0.98 is compiled against hadoop 2.2.0, so it has some compatibility issues with 
> hadoop 2.7.2 (2.5.0+ seems to have the same issue): some counters have been 
> removed.
> IMO we should catch the exception so our ITBLL can go on.
> {code}
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Added new HBaseAdmin
> 16/09/06 15:39:33 INFO hbase.HBaseCluster: Restoring cluster - done
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Stopping mini 
> mapreduce cluster...
> 16/09/06 15:39:33 INFO Configuration.deprecation: mapred.job.tracker is 
> deprecated. Instead, use mapreduce.jobtracker.address
> 16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Mini mapreduce 
> cluster stopped
> 16/09/06 15:39:33 ERROR util.AbstractHBaseTool: Error running command-line 
> tool
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
>   at 
> org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
>   at 
> org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
>   at 
> org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
>   at 
> org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
>   at 
> org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756)
>   at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>   at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753)
>   at 
> org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1361)
>   at 
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.jobCompletion(IntegrationTestBigLinkedList.java:543)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.runRandomInputGenerator(IntegrationTestBigLinkedList.java:505)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.run(IntegrationTestBigLinkedList.java:553)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.runGenerator(IntegrationTestBigLinkedList.java:842)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.run(IntegrationTestBigLinkedList.java:892)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.runTestFromCommandLine(IntegrationTestBigLinkedList.java:1237)
>   at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:115)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.main(IntegrationTestBigLinkedList.java:1272)
> {code}
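The description above proposes catching the exception so ITBLL can continue. A minimal, self-contained sketch of that pattern, where `OldJobCounter` and `findCounter` are hypothetical stand-ins for hadoop's `JobCounter` enum and `FrameworkCounterGroup.findCounter`, not actual Hadoop or HBase code: `Enum.valueOf` throws `IllegalArgumentException` for a counter name the client's compiled-in enum does not know, exactly as in the trace, and the wrapper tolerates it.

```java
public class CounterMismatchDemo {
    // Stand-in for a 2.2.0-era JobCounter enum that predates MB_MILLIS_MAPS.
    enum OldJobCounter { MILLIS_MAPS, MILLIS_REDUCES }

    // Returns null instead of propagating the mismatch, so the caller
    // (e.g. job-completion reporting) can proceed without that counter.
    static OldJobCounter findCounter(String name) {
        try {
            return OldJobCounter.valueOf(name);
        } catch (IllegalArgumentException e) {
            // A newer server reported a counter unknown to this client: skip it.
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(findCounter("MILLIS_MAPS"));    // present in both versions
        System.out.println(findCounter("MB_MILLIS_MAPS")); // unknown here -> null
    }
}
```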





[jira] [Updated] (HBASE-16564) ITBLL run failed with hadoop 2.7.2 on branch 0.98

2016-09-06 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16564:
--
Summary: ITBLL run failed with hadoop 2.7.2 on branch 0.98  (was: ITBLL run 
failed with hdfs 2.7.2 on branch 0.98)

> ITBLL run failed with hadoop 2.7.2 on branch 0.98
> -
>
> Key: HBASE-16564
> URL: https://issues.apache.org/jira/browse/HBASE-16564
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Priority: Minor
>
> 0.98 compiled with hdfs 2.2.0,   so it has some compatibility issues with 
> hdfs 2.7.2 (it seems 2.5.0+ has the same issue),  some counter has been 
> removed.  
> IMO we should catch the exception so our ITBLL could go on.





[jira] [Commented] (HBASE-16463) Improve transparent table/CF encryption with Commons Crypto

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466799#comment-15466799
 ] 

Hadoop QA commented on HBASE-16463:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 1s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 30s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
38s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 12s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 42s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 11s 
{color} | {color:red} root in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 48s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 92m 15s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 113m 52s 
{color} | {color:green} root in the patch passed. 

[jira] [Updated] (HBASE-16564) ITBLL run failed with hadoop 2.7.2 on branch 0.98

2016-09-06 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16564:
--
Description: 
0.98 is compiled against hadoop 2.2.0, so it has some compatibility issues with 
hadoop 2.7.2 (2.5.0+ seems to have the same issue): some counters have been 
removed.

IMO we should catch the exception so our ITBLL can go on.

{code}
16/09/06 15:39:33 INFO hbase.HBaseCluster: Added new HBaseAdmin
16/09/06 15:39:33 INFO hbase.HBaseCluster: Restoring cluster - done
16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Stopping mini mapreduce 
cluster...
16/09/06 15:39:33 INFO Configuration.deprecation: mapred.job.tracker is 
deprecated. Instead, use mapreduce.jobtracker.address
16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Mini mapreduce cluster 
stopped
16/09/06 15:39:33 ERROR util.AbstractHBaseTool: Error running command-line tool
java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
at 
org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
at 
org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
at 
org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
at 
org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
at 
org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511)
at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756)
at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1361)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.jobCompletion(IntegrationTestBigLinkedList.java:543)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.runRandomInputGenerator(IntegrationTestBigLinkedList.java:505)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.run(IntegrationTestBigLinkedList.java:553)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.runGenerator(IntegrationTestBigLinkedList.java:842)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.run(IntegrationTestBigLinkedList.java:892)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.runTestFromCommandLine(IntegrationTestBigLinkedList.java:1237)
at 
org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:115)
at 
org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.main(IntegrationTestBigLinkedList.java:1272)
{code}

  was:
0.98 compiled with hdfs 2.2.0,   so it has some compatibility issues with hdfs 
2.7.2 (it seems 2.5.0+ has the same issue),  some counter has been removed.  

IMO we should catch the exception so our ITBLL could go on.

{code}
16/09/06 15:39:33 INFO hbase.HBaseCluster: Added new HBaseAdmin
16/09/06 15:39:33 INFO hbase.HBaseCluster: Restoring cluster - done
16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Stopping mini mapreduce 
cluster...
16/09/06 15:39:33 INFO Configuration.deprecation: mapred.job.tracker is 
deprecated. Instead, use mapreduce.jobtracker.address
16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Mini mapreduce cluster 
stopped
16/09/06 15:39:33 ERROR util.AbstractHBaseTool: Error running command-line tool
java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
at 
org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
at 
org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
at 
org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
at 
org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
at 

[jira] [Commented] (HBASE-16564) ITBLL run failed with hdfs 2.7.2 on branch 0.98

2016-09-06 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466797#comment-15466797
 ] 

Heng Chen commented on HBASE-16564:
---

2.7.2

> ITBLL run failed with hdfs 2.7.2 on branch 0.98
> ---
>
> Key: HBASE-16564
> URL: https://issues.apache.org/jira/browse/HBASE-16564
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Priority: Minor
>
> 0.98 compiled with hdfs 2.2.0,   so it has some compatibility issues with 
> hdfs 2.7.2 (it seems 2.5.0+ has the same issue),  some counter has been 
> removed.  
> IMO we should catch the exception so our ITBLL could go on.





[jira] [Commented] (HBASE-16564) ITBLL run failed with hdfs 2.7.2 on branch 0.98

2016-09-06 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466789#comment-15466789
 ] 

Duo Zhang commented on HBASE-16564:
---

What's the version of your yarn?

> ITBLL run failed with hdfs 2.7.2 on branch 0.98
> ---
>
> Key: HBASE-16564
> URL: https://issues.apache.org/jira/browse/HBASE-16564
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Priority: Minor
>
> 0.98 compiled with hdfs 2.2.0,   so it has some compatibility issues with 
> hdfs 2.7.2 (it seems 2.5.0+ has the same issue),  some counter has been 
> removed.  
> IMO we should catch the exception so our ITBLL could go on.





[jira] [Commented] (HBASE-16460) Can't rebuild the BucketAllocator's data structures when BucketCache uses FileIOEngine

2016-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466784#comment-15466784
 ] 

Hudson commented on HBASE-16460:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK8 #1865 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1865/])
HBASE-16460 Can't rebuild the BucketAllocator's data structures when (tedyu: 
rev d91a28a450fc0f697bf78aab07543cd48f7dedfc)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java


> Can't rebuild the BucketAllocator's data structures when BucketCache uses 
> FileIOEngine
> --
>
> Key: HBASE-16460
> URL: https://issues.apache.org/jira/browse/HBASE-16460
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3, 0.98.22
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
> Attachments: 16460.v6.patch, 16460.v6.patch, 
> HBASE-16460-branch-1-v6.patch, HBASE-16460-v1.patch, HBASE-16460-v2.patch, 
> HBASE-16460-v2.patch, HBASE-16460-v3.patch, HBASE-16460-v4.patch, 
> HBASE-16460-v5.patch, HBASE-16460-v5.patch, HBASE-16460.patch
>
>
> When the bucket cache uses FileIOEngine, it rebuilds the bucket allocator's 
> data structures from a persisted map. So it should first read the map from the 
> persistence file and then use that map to construct a BucketAllocator. But the 
> current code has the wrong sequence in the retrieveFromFile() method of 
> BucketCache.java.
> {code}
>   BucketAllocator allocator = new BucketAllocator(cacheCapacity, 
> bucketSizes, backingMap, realCacheSize);
>   backingMap = (ConcurrentHashMap) 
> ois.readObject();
> {code}
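The fix is to swap the two statements so the persisted map is deserialized before the allocator is constructed from it. A self-contained sketch of the corrected ordering, where the simplified `Allocator` and `retrieveFromFile` shapes are illustrative stand-ins rather than HBase's actual BucketCache API:

```java
import java.util.HashMap;
import java.util.Map;

public class RetrieveOrderDemo {
    static Map<String, Integer> backingMap = new HashMap<>();

    // Stand-in for BucketAllocator: records how many entries it was rebuilt from.
    static class Allocator {
        final int rebuiltEntries;
        Allocator(Map<String, Integer> map) { rebuiltEntries = map.size(); }
    }

    static Allocator retrieveFromFile(Map<String, Integer> persisted) {
        // Correct sequence: read the persisted map first...
        backingMap = persisted;
        // ...then build the allocator from it. In the buggy order the
        // allocator would see the old (empty) backingMap.
        return new Allocator(backingMap);
    }

    public static void main(String[] args) {
        Map<String, Integer> persisted = new HashMap<>();
        persisted.put("block-1", 4096);
        System.out.println(retrieveFromFile(persisted).rebuiltEntries); // 1
    }
}
```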





[jira] [Created] (HBASE-16564) ITBLL run failed with hdfs 2.7.2 on branch 0.98

2016-09-06 Thread Heng Chen (JIRA)
Heng Chen created HBASE-16564:
-

 Summary: ITBLL run failed with hdfs 2.7.2 on branch 0.98
 Key: HBASE-16564
 URL: https://issues.apache.org/jira/browse/HBASE-16564
 Project: HBase
  Issue Type: Bug
Reporter: Heng Chen
Priority: Minor


0.98 compiled with hdfs 2.2.0,   so it has some compatibility issues with hdfs 
2.7.2 (it seems 2.5.0+ has the same issue),  some counter has been removed.  

IMO we should catch the exception so our ITBLL could go on.

{code}
16/09/06 15:39:33 INFO hbase.HBaseCluster: Added new HBaseAdmin
16/09/06 15:39:33 INFO hbase.HBaseCluster: Restoring cluster - done
16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Stopping mini mapreduce 
cluster...
16/09/06 15:39:33 INFO Configuration.deprecation: mapred.job.tracker is 
deprecated. Instead, use mapreduce.jobtracker.address
16/09/06 15:39:33 INFO hbase.HBaseCommonTestingUtility: Mini mapreduce cluster 
stopped
16/09/06 15:39:33 ERROR util.AbstractHBaseTool: Error running command-line tool
java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
at 
org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
at 
org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
at 
org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
at 
org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
at 
org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511)
at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756)
at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1361)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.jobCompletion(IntegrationTestBigLinkedList.java:543)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.runRandomInputGenerator(IntegrationTestBigLinkedList.java:505)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator.run(IntegrationTestBigLinkedList.java:553)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.runGenerator(IntegrationTestBigLinkedList.java:842)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Loop.run(IntegrationTestBigLinkedList.java:892)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.runTestFromCommandLine(IntegrationTestBigLinkedList.java:1237)
at 
org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:115)
at 
org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList.main(IntegrationTestBigLinkedList.java:1272)
{code}





[jira] [Commented] (HBASE-16445) Refactor and reimplement RpcClient

2016-09-06 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466722#comment-15466722
 ] 

Duo Zhang commented on HBASE-16445:
---

In the new patch, I renamed RpcClientImpl to BlockingRpcClient and 
AsyncRpcClient to NettyRpcClient to better describe the actual implementations, 
and added a deprecated name mapping in RpcClientFactory that maps the old names 
to the new ones.
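The rename plus deprecated-name mapping can be sketched as follows. This is a minimal standalone sketch, not the real RpcClientFactory; the class names mirror the ones mentioned above, and the package name and helper method are assumptions:

```java
import java.util.Map;

// Hypothetical sketch: keep old configured class names working by
// mapping deprecated names to the renamed implementations.
public class RpcClientFactorySketch {

  // Deprecated old fully-qualified names -> new implementation names.
  private static final Map<String, String> DEPRECATED_NAME_MAPPING = Map.of(
      "org.apache.hadoop.hbase.ipc.RpcClientImpl",
      "org.apache.hadoop.hbase.ipc.BlockingRpcClient",
      "org.apache.hadoop.hbase.ipc.AsyncRpcClient",
      "org.apache.hadoop.hbase.ipc.NettyRpcClient");

  // Resolve a configured class name, translating deprecated names;
  // unknown names pass through unchanged.
  static String resolve(String configured) {
    return DEPRECATED_NAME_MAPPING.getOrDefault(configured, configured);
  }

  public static void main(String[] args) {
    // An old config value still resolves to the renamed class.
    System.out.println(resolve("org.apache.hadoop.hbase.ipc.AsyncRpcClient"));
    // prints "org.apache.hadoop.hbase.ipc.NettyRpcClient"
  }
}
```

This way existing hbase-site.xml configurations that name the old classes keep working while emitting-side code can move to the new names.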

> Refactor and reimplement RpcClient
> --
>
> Key: HBASE-16445
> URL: https://issues.apache.org/jira/browse/HBASE-16445
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16445-v1.patch, HBASE-16445-v2.patch, 
> HBASE-16445-v3.patch, HBASE-16445-v4.patch, HBASE-16445-v5.patch, 
> HBASE-16445.patch
>
>
> There is a lot of common logic between RpcClientImpl and AsyncRpcClient. We 
> should end up with much less code compared to the current implementations.



--


[jira] [Updated] (HBASE-16445) Refactor and reimplement RpcClient

2016-09-06 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16445:
--
Attachment: HBASE-16445-v5.patch

Modified according to the comments on RB.

> Refactor and reimplement RpcClient
> --
>
> Key: HBASE-16445
> URL: https://issues.apache.org/jira/browse/HBASE-16445
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16445-v1.patch, HBASE-16445-v2.patch, 
> HBASE-16445-v3.patch, HBASE-16445-v4.patch, HBASE-16445-v5.patch, 
> HBASE-16445.patch
>
>
> There is a lot of common logic between RpcClientImpl and AsyncRpcClient. We 
> should end up with much less code compared to the current implementations.



--


[jira] [Commented] (HBASE-16445) Refactor and reimplement RpcClient

2016-09-06 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466612#comment-15466612
 ] 

Yu Li commented on HBASE-16445:
---

I see, then there is no further issue on this. Thanks for the clarification.

> Refactor and reimplement RpcClient
> --
>
> Key: HBASE-16445
> URL: https://issues.apache.org/jira/browse/HBASE-16445
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16445-v1.patch, HBASE-16445-v2.patch, 
> HBASE-16445-v3.patch, HBASE-16445-v4.patch, HBASE-16445.patch
>
>
> There is a lot of common logic between RpcClientImpl and AsyncRpcClient. We 
> should end up with much less code compared to the current implementations.



--


[jira] [Commented] (HBASE-16563) hbase-assembly can only deal with the first license of dependency

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466589#comment-15466589
 ] 

Hadoop QA commented on HBASE-16563:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 3s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hbase-resource-bundle in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
9s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 5s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-06 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827124/HBASE-16563.001.patch 
|
| JIRA Issue | HBASE-16563 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 0f4c6e79ac41 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b6ba13c |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3424/testReport/ |
| modules | C: hbase-resource-bundle U: hbase-resource-bundle |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3424/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> hbase-assembly can only deal with the first license of dependency
> -
>
> Key: HBASE-16563
> URL: https://issues.apache.org/jira/browse/HBASE-16563
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HBASE-16563.001.patch
>
>
> Currently, only the first <license> entry in <licenses> is validated in 
> LICENSE.vm. hbase-assembly will fail to validate the following 
> information, because Apache License v2.0 is not the first one:
> {code}
> <licenses>
>   <license>
>     <name>LGPL, version 2.1</name>
>     <url>http://www.gnu.org/licenses/licenses.html</url>
>     <distribution>repo</distribution>
>   </license>
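The fix the issue describes amounts to checking every declared license of a dependency rather than only the first one. A stdlib-only sketch of that "accept if any license is ASL2" check (the class and method names are hypothetical, not the actual hbase-resource-bundle/LICENSE.vm code, which is a Velocity template):

```java
import java.util.List;

// Hypothetical sketch: a dependency with multiple declared licenses should
// pass validation if ANY of them is the Apache License 2.0, not just the first.
public class LicenseCheckSketch {

  // Returns true when at least one license name looks like ASL2.
  static boolean hasApache2(List<String> licenseNames) {
    return licenseNames.stream().anyMatch(n ->
        n.toLowerCase().contains("apache") && n.contains("2.0"));
  }

  public static void main(String[] args) {
    // LGPL listed first, ASL2 second: a "first entry only" check would
    // wrongly reject this dependency; checking all entries accepts it.
    List<String> licenses =
        List.of("LGPL, version 2.1", "Apache License, Version 2.0");
    System.out.println(hasApache2(licenses)); // prints "true"
  }
}
```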

[jira] [Commented] (HBASE-16414) Improve performance for RPC encryption with Apache Common Crypto

2016-09-06 Thread Colin Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466535#comment-15466535
 ] 

Colin Ma commented on HBASE-16414:
--

Blocked by license problem, created HBASE-16563 to fix it.

> Improve performance for RPC encryption with Apache Common Crypto
> 
>
> Key: HBASE-16414
> URL: https://issues.apache.org/jira/browse/HBASE-16414
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HBASE-16414.001.patch, HBASE-16414.002.patch, 
> HbaseRpcEncryptionWithCrypoto.docx
>
>
> HBase RPC encryption is enabled by setting “hbase.rpc.protection” to 
> "privacy". With token authentication, it uses the DIGEST-MD5 mechanism for 
> secure authentication and data protection. DIGEST-MD5 uses DES, 3DES or RC4 
> for encryption, all of which are very slow, especially for Scan, and this 
> becomes the bottleneck of RPC throughput.
> Apache Commons Crypto is a cryptographic library optimized with AES-NI. It 
> provides a Java API at both the cipher level and the Java stream level, so 
> developers can implement high-performance AES encryption/decryption with 
> minimal code and effort. Compared with the current implementation, 
> org.apache.hadoop.hbase.io.crypto.aes.AES, Crypto supports both the JCE 
> cipher and the OpenSSL cipher, which performs better than the JCE cipher. 
> Users can configure the cipher type; the default is the JCE cipher.
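For context, the JCE cipher path described above looks roughly like this stdlib-only sketch using javax.crypto with AES/CTR (the class name, throwaway all-zero key, and fixed IV are demo assumptions; Commons Crypto exposes a similar cipher-level API that can be backed by OpenSSL/AES-NI instead):

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class AesCtrSketch {

  // Run AES/CTR in the given mode (ENCRYPT_MODE or DECRYPT_MODE).
  static byte[] aesCtr(int mode, byte[] key, byte[] iv, byte[] data)
      throws Exception {
    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
    return cipher.doFinal(data);
  }

  public static void main(String[] args) throws Exception {
    byte[] key = new byte[16]; // demo-only all-zero key; never use in production
    byte[] iv = new byte[16];  // demo-only fixed IV; must be unique per message
    byte[] plain = "scan result payload".getBytes(StandardCharsets.UTF_8);

    byte[] ct = aesCtr(Cipher.ENCRYPT_MODE, key, iv, plain);
    byte[] pt = aesCtr(Cipher.DECRYPT_MODE, key, iv, ct);
    System.out.println(new String(pt, StandardCharsets.UTF_8));
    // prints "scan result payload"
  }
}
```

The proposal is essentially to route this cipher work through an AES-NI-accelerated implementation rather than the pure-JCE one.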



--