[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647685#comment-14647685
 ] 

Lars Hofhansl commented on HBASE-14154:
---

Nice. Looks good. Another minor nit: I think there should be a check in 
setDFSReplication on whether the passed value is valid.

Good for some temporary tables that we create in our setups.
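The suggested check could look roughly like the following. This is a minimal standalone sketch, not the actual HColumnDescriptor code: the class name, the DEFAULT_DFS_REPLICATION sentinel (0 meaning "defer to the HDFS default"), and the exception choice are all assumptions for illustration.

```java
public class ReplicationCheckSketch {
    // Assumption: 0 means "use the underlying DFS default replication".
    static final short DEFAULT_DFS_REPLICATION = 0;

    private short dfsReplication = DEFAULT_DFS_REPLICATION;

    // Hypothetical validity check: reject negative values before storing.
    public void setDFSReplication(short replication) {
        if (replication < DEFAULT_DFS_REPLICATION) {
            throw new IllegalArgumentException(
                "DFS replication factor cannot be less than "
                + DEFAULT_DFS_REPLICATION + ", but was " + replication);
        }
        this.dfsReplication = replication;
    }

    public short getDFSReplication() {
        return dfsReplication;
    }
}
```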


 DFS Replication should be configurable at column family level
 -

 Key: HBASE-14154
 URL: https://issues.apache.org/jira/browse/HBASE-14154
 Project: HBase
  Issue Type: New Feature
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: HBASE-14154-0.98-v1.patch, HBASE-14154-0.98.patch, 
 HBASE-14154-branch-1-v1.patch, HBASE-14154-branch-1.patch, 
 HBASE-14154-v1.patch, HBASE-14154.patch


 There are cases where a user wants to have control over the number of HFile 
 copies he/she can have in the cluster.
 For example, for a test table a user would like to have only one copy instead 
 of three (the default).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647681#comment-14647681
 ] 

Ted Yu commented on HBASE-14168:


Test failure in TestTableInputFormat is related to the patch.

 Avoid useless retry as exception implies in TableRecordReaderImpl
 -

 Key: HBASE-14168
 URL: https://issues.apache.org/jira/browse/HBASE-14168
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: zhouyingchao
Assignee: zhouyingchao
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14168-001.patch


 In TableRecordReaderImpl, even if the scan's next() throws a 
 DoNotRetryIOException, it would still be retried. This does not make sense 
 and should be avoided.
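The fix pattern is straightforward to sketch in isolation. In this self-contained illustration the Scanner interface and the nested exception class are stand-ins for the real HBase types (org.apache.hadoop.hbase.DoNotRetryIOException and the scanner used by TableRecordReaderImpl), not the actual patch:

```java
import java.io.IOException;

public class RetrySketch {
    // Stand-in for org.apache.hadoop.hbase.DoNotRetryIOException.
    static class DoNotRetryIOException extends IOException {
        DoNotRetryIOException(String msg) { super(msg); }
    }

    interface Scanner {
        String next() throws IOException;
    }

    // Retry a transient IOException once, but rethrow a DoNotRetryIOException
    // immediately: by contract, retrying it cannot succeed.
    static String nextWithRetry(Scanner scanner) throws IOException {
        try {
            return scanner.next();
        } catch (DoNotRetryIOException e) {
            throw e;               // useless to retry, surface the failure now
        } catch (IOException e) {
            return scanner.next(); // one restart attempt for transient errors
        }
    }
}
```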



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14173) includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() represents no-op

2015-07-30 Thread Ted Yu (JIRA)
Ted Yu created HBASE-14173:
--

 Summary: includeMVCCReadpoint parameter in 
DefaultCompactor#createTmpWriter() represents no-op
 Key: HBASE-14173
 URL: https://issues.apache.org/jira/browse/HBASE-14173
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0


Around line 160:
{code}
return store.createWriterInTmp(fd.maxKeyCount, this.compactionCompression,
    true, fd.maxMVCCReadpoint >= 0, fd.maxTagsLength > 0);
{code}
The condition, fd.maxMVCCReadpoint >= 0, represents a no-op.

The correct condition should be fd.maxMVCCReadpoint > 0
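Why this is a no-op: fd (the compaction FileDetails) tracks a maximum MVCC readpoint that is never negative, so a >= 0 test is constant true and the includeMVCCReadpoint argument is effectively hard-coded. A standalone illustration (method names are mine, not HBase's):

```java
public class MvccFlagDemo {
    // maxMVCCReadpoint is never negative, so ">= 0" is always true:
    static boolean includeMvccBuggy(long maxMVCCReadpoint) {
        return maxMVCCReadpoint >= 0; // no-op: constant true for valid inputs
    }

    // The intended check: only include readpoints when one was actually seen.
    static boolean includeMvccFixed(long maxMVCCReadpoint) {
        return maxMVCCReadpoint > 0;
    }
}
```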




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14173) includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() represents no-op

2015-07-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14173:
---
Status: Patch Available  (was: Open)

 includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() 
 represents no-op
 -

 Key: HBASE-14173
 URL: https://issues.apache.org/jira/browse/HBASE-14173
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0

 Attachments: 14173-v1.txt


 Around line 160:
 {code}
 return store.createWriterInTmp(fd.maxKeyCount, this.compactionCompression,
 true, fd.maxMVCCReadpoint >= 0, fd.maxTagsLength > 0);
 {code}
 The condition, fd.maxMVCCReadpoint >= 0, represents a no-op.
 The correct condition should be fd.maxMVCCReadpoint > 0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12853) distributed write pattern to replace ad hoc 'salting'

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648174#comment-14648174
 ] 

Andrew Purtell commented on HBASE-12853:


bq.  Either you find value to the suggestion or not. That is your call. But 
please note that Andrew P. worked on 
https://issues.apache.org/jira/browse/HBASE-13044. (Also relatively trivial)

Not sure I understand the relevance. For the record, I filed that issue after a 
brief encounter with Jim Scott of MapR over on the OpenTSDB list. He spoke of 
customers implementing coprocessors that exist solely to prevent loading of any 
other coprocessors, so I thought we could do something simple to make that 
unnecessary and volunteered time to do it. Strictly speaking, I didn't have to 
but the conversation was respectful and interesting and I felt like 
volunteering some of my evening that evening rather than spend it with family.

The committer role at Apache is not about requiring individuals to implement 
unfunded mandates from random folks. On the other hand, we are expected to try 
and assess all contributions in the form of a patch in the most impartial 
manner possible. If for whatever reason you are not in a position to provide a 
patch, that's fine, but understand you are speaking to a community of 
volunteers who have work and personal lives and are already being super 
generous just for showing up here from time to time. You'll have to find a way 
to convince them they should volunteer their time to help you. Sometimes under 
the best of circumstances that just won't happen. An abrasive communication 
style - for example, repeated comments about "lack[ing] the patience to suffer 
fools" - dooms you to failure out of the gate. Don't be surprised at your lack 
of results.

 distributed write pattern to replace ad hoc 'salting'
 -

 Key: HBASE-12853
 URL: https://issues.apache.org/jira/browse/HBASE-12853
 Project: HBase
  Issue Type: New Feature
Reporter: Michael Segel 
 Fix For: 2.0.0


 In reviewing HBASE-11682 (Description of Hot Spotting), one of the issues is 
 that while 'salting' alleviated regional hot spotting, it increased the 
 complexity required to utilize the data.  
 Through the use of coprocessors, it should be possible to offer a method 
 which distributes the data on write across the cluster and then manages 
 reading the data returning a sort ordered result set, abstracting the 
 underlying process. 
 On table creation, a flag is set to indicate that this is a parallel table. 
 On insert into the table, if the flag is set to true then a prefix is added 
 to the key, e.g. <region server #>- or <region server #>|| where the region 
 server # is an integer between 1 and the number of region servers defined.  
 On read (scan), for each region server defined, a separate scan is created 
 adding the prefix. Since each scan will be in sort order, it's possible to 
 strip the prefix and return the lowest value key from each of the subsets. 
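The read-side merge outlined above can be sketched with a priority queue over the per-server scans. This toy version uses in-memory iterators of salted keys in "N-key" form; the class, the salt format, and the merge helper are all hypothetical illustrations, not code from the proposal:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class PrefixMergeSketch {
    // Drop the "N-" salt prefix a parallel write would have added.
    static String strip(String salted) {
        return salted.substring(salted.indexOf('-') + 1);
    }

    // K-way merge: each per-region-server scan yields keys in sorted order,
    // and the salt is constant within a scan, so stripped keys stay sorted.
    // A heap holding one candidate per scan emits a globally sorted stream.
    static List<String> merge(List<Iterator<String>> scans) {
        PriorityQueue<String[]> heap =
            new PriorityQueue<>(Comparator.comparing((String[] e) -> e[0]));
        for (int i = 0; i < scans.size(); i++) {
            if (scans.get(i).hasNext()) {
                heap.add(new String[] { strip(scans.get(i).next()),
                                        Integer.toString(i) });
            }
        }
        List<String> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            String[] top = heap.poll();       // lowest key across all subsets
            out.add(top[0]);
            int i = Integer.parseInt(top[1]);
            if (scans.get(i).hasNext()) {     // refill from the same scan
                heap.add(new String[] { strip(scans.get(i).next()), top[1] });
            }
        }
        return out;
    }
}
```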



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648186#comment-14648186
 ] 

Steve Loughran commented on HBASE-13992:


Coverage is an odd metric anyway, because as well as code there's state 
coverage: IPv6, Windows, timezone=GMT0, locale=Turkish, which can break things 
even in code which nominally had 100%. Having tests which generate failure 
conditions (done here) with test setups that explore the configuration space 
is about the best you can get.

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.10.patch, HBASE-13992.11.patch, 
 HBASE-13992.12.patch, HBASE-13992.5.patch, HBASE-13992.6.patch, 
 HBASE-13992.7.patch, HBASE-13992.8.patch, HBASE-13992.9.patch, 
 HBASE-13992.patch, HBASE-13992.patch.3, HBASE-13992.patch.4, 
 HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home inside HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in production is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648211#comment-14648211
 ] 

Andrew Purtell commented on HBASE-14085:


bq. looks like JRuby 9k might be an option if it isn't fine since the Ruby 
files at that point are available under BSD-2-clause [...]. It's an 
incompatible change (since it'll be Ruby 2.2), but that's presumably better 
than the alternative.

Agreed, thanks for looking into that, glad we have that as a fallback.

 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE files on jars in maven say "HBase - ${module}" rather than 
 "Apache HBase - ${module}" as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13140) Version documentation to match software

2015-07-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648183#comment-14648183
 ] 

Sean Busbey commented on HBASE-13140:
-

What do folks think about just publishing the most recent patch release in a 
given minor line, i.e. 1.1.1 for the 1.1 line? The public API shouldn't have 
changed. The devapidocs may have, but for the most part folks who need the 
earlier version should know how to get it (e.g. maven javadoc jars, the 
release, etc)

What about for 0.98 where more can change in each 0.98.y release?

 Version documentation to match software
 ---

 Key: HBASE-13140
 URL: https://issues.apache.org/jira/browse/HBASE-13140
 Project: HBase
  Issue Type: Improvement
  Components: documentation, site
Reporter: stack

 It's probably about time that we add in a 0.98 and 1.0 version of the 
 documentation.  Currently, the doc shows as 2.0.0-SNAPSHOT which is a little 
 disorienting. A user on the mailing list just had trouble because the doc 
 talked of configs that are in hbase 1.0 but he was on 0.98.
 We have a manually made 0.94 version. It is time to do the work to make it so 
 we can gen doc by version for the site and it is also similarly time to let 
 doc versions diverge.
 (I think we've only talked of doing this in past but have not as yet filed 
 issue... close if I have this wrong)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14163) hbase master stop loops both processes forever

2015-07-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reassigned HBASE-14163:
--

Assignee: Andrew Purtell

I have a similar dev environment available, let me try to repro

 hbase master stop loops both processes forever
 --

 Key: HBASE-14163
 URL: https://issues.apache.org/jira/browse/HBASE-14163
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 2.0.0
Reporter: Allen Wittenauer
Assignee: Andrew Purtell

 It would appear that there is an infinite loop in the zk client connection 
 code when performing a master stop when no external zk servers are configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14175) Adopt releasedocmaker for better generated release notes

2015-07-30 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-14175:
--

 Summary: Adopt releasedocmaker for better generated release notes
 Key: HBASE-14175
 URL: https://issues.apache.org/jira/browse/HBASE-14175
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Fix For: 2.0.0


We should consider adopting Hadoop's releasedocmaker for better generated 
release notes. Could hook it into the site build. A convenient part of Yetus to 
get up and running with. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14176) Add missing headers to META-INF files

2015-07-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14176:
---
Summary: Add missing headers to META-INF files  (was: Add ASF missing 
headers to META-INF files)

 Add missing headers to META-INF files
 -

 Key: HBASE-14176
 URL: https://issues.apache.org/jira/browse/HBASE-14176
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Trivial
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2


 Weird that most META-INF files are missing ASF copyright header, but a 
 handful have it.
 Missed by HBASE-14087.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14176) Add ASF missing headers to META-INF files

2015-07-30 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-14176:
--

 Summary: Add ASF missing headers to META-INF files
 Key: HBASE-14176
 URL: https://issues.apache.org/jira/browse/HBASE-14176
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Trivial


Weird that most META-INF files are missing ASF copyright header, but a handful 
have it.

Missed by HBASE-14087.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648554#comment-14648554
 ] 

Hadoop QA commented on HBASE-14154:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748081/HBASE-14154-v2.patch
  against master branch at commit f1f0d99662559a2090fc56c001f33ae1ace90686.
  ATTACHMENT ID: 12748081

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 20 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s):   
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:288)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testConsecutiveExports(TestExportSnapshot.java:213)
at 
org.apache.hadoop.hbase.TestZooKeeper.testLogSplittingAfterMasterRecoveryDueToZKExpiry(TestZooKeeper.java:626)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:288)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testConsecutiveExports(TestExportSnapshot.java:213)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14937//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14937//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14937//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14937//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14937//console

This message is automatically generated.

 DFS Replication should be configurable at column family level
 -

 Key: HBASE-14154
 URL: https://issues.apache.org/jira/browse/HBASE-14154
 Project: HBase
  Issue Type: New Feature
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: HBASE-14154-0.98-v1.patch, HBASE-14154-0.98.patch, 
 HBASE-14154-branch-1-v1.patch, HBASE-14154-branch-1.patch, 
 HBASE-14154-v1.patch, HBASE-14154-v2.patch, HBASE-14154.patch


 There are cases where a user wants to have control over the number of HFile 
 copies he/she can have in the cluster.
 For example, for a test table a user would like to have only one copy instead 
 of three (the default).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14175) Adopt releasedocmaker for better generated release notes

2015-07-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14175:
---
Description: We should consider adopting Hadoop's releasedocmaker for 
better release notes. This would pull out text from the JIRA 'release notes' 
field with clean presentation and is vastly superior to our current notes, 
which are simply JIRA's list of issues by fix version. Could hook it into the 
site build. A convenient part of Yetus to get up and running with.   (was: We 
should consider adopting Hadoop's releasedocmaker for better generated release 
notes. Could hook it into the site build. A convenient part of Yetus to get up 
and running with. )

 Adopt releasedocmaker for better generated release notes
 

 Key: HBASE-14175
 URL: https://issues.apache.org/jira/browse/HBASE-14175
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Fix For: 2.0.0


 We should consider adopting Hadoop's releasedocmaker for better release 
 notes. This would pull out text from the JIRA 'release notes' field with 
 clean presentation and is vastly superior to our current notes, which are 
 simply JIRA's list of issues by fix version. Could hook it into the site 
 build. A convenient part of Yetus to get up and running with. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648557#comment-14648557
 ] 

Andrew Purtell commented on HBASE-14154:


I'll run the unit test suite locally to check if there's something to those 
zombies

 DFS Replication should be configurable at column family level
 -

 Key: HBASE-14154
 URL: https://issues.apache.org/jira/browse/HBASE-14154
 Project: HBase
  Issue Type: New Feature
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: HBASE-14154-0.98-v1.patch, HBASE-14154-0.98.patch, 
 HBASE-14154-branch-1-v1.patch, HBASE-14154-branch-1.patch, 
 HBASE-14154-v1.patch, HBASE-14154-v2.patch, HBASE-14154.patch


 There are cases where a user wants to have control over the number of HFile 
 copies he/she can have in the cluster.
 For example, for a test table a user would like to have only one copy instead 
 of three (the default).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14176) Add missing headers to META-INF files

2015-07-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14176:
---
Attachment: HBASE-14176-0.98.patch

0.98 also has the hbase-hadoop1-compat module

 Add missing headers to META-INF files
 -

 Key: HBASE-14176
 URL: https://issues.apache.org/jira/browse/HBASE-14176
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Trivial
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14176-0.98.patch, HBASE-14176.patch


 Weird that most META-INF files are missing ASF copyright header, but a 
 handful have it.
 Missed by HBASE-14087.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14176) Add missing headers to META-INF files

2015-07-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14176:
---
Attachment: HBASE-14176.patch

 Add missing headers to META-INF files
 -

 Key: HBASE-14176
 URL: https://issues.apache.org/jira/browse/HBASE-14176
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Trivial
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14176.patch


 Weird that most META-INF files are missing ASF copyright header, but a 
 handful have it.
 Missed by HBASE-14087.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14175) Adopt releasedocmaker for better generated release notes

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648455#comment-14648455
 ] 

Andrew Purtell commented on HBASE-14175:


According to Allen, we can generate release notes for all releases in a range 
to bootstrap:

{noformat}
releasedocmaker.py --license --projecttitle "Apache HBase" --index --project 
HBASE \
  --range --version 0.98 --version 2.0.0
{noformat}

Could do this by hand, update one of the site sources to point to where the 
generated relnotes land, then commit the results. 

In order to make this repeatable for RCs, we'll need to have a Yetus release to 
consume or temporarily take releasedocmaker.py into our dev-support/ (the 
latter being a poor option), then add the right magic to make_rc.sh and release 
HOWTO.

 Adopt releasedocmaker for better generated release notes
 

 Key: HBASE-14175
 URL: https://issues.apache.org/jira/browse/HBASE-14175
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Fix For: 2.0.0


 We should consider adopting Hadoop's releasedocmaker for better release 
 notes. This would pull out text from the JIRA 'release notes' field with 
 clean presentation and is vastly superior to our current notes, which are 
 simply JIRA's list of issues by fix version. Could hook it into the site 
 build. A convenient part of Yetus to get up and running with. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14175) Adopt releasedocmaker for better generated release notes

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648455#comment-14648455
 ] 

Andrew Purtell edited comment on HBASE-14175 at 7/30/15 11:09 PM:
--

According to Allen, we can generate release notes for all releases in a range 
to bootstrap:

{noformat}
releasedocmaker.py --license --projecttitle "Apache HBase" --index --project 
HBASE \
  --range --version 0.98.0 --version 2.0.0
{noformat}

Could do this by hand, update one of the site sources to point to where the 
generated relnotes land, then commit the results. 

In order to make this repeatable for RCs, we'll need to have a Yetus release to 
consume or temporarily take releasedocmaker.py into our dev-support/ (the 
latter being a poor option), then add the right magic to make_rc.sh and release 
HOWTO.


was (Author: apurtell):
According to Allen, we can generate release notes for all releases in a range 
to bootstrap:

{noformat}
releasedocmaker.py --license --projecttitle "Apache HBase" --index --project 
HBASE \
  --range --version 0.98 --version 2.0.0
{noformat}

Could do this by hand, update one of the site sources to point to where the 
generated relnotes land, then commit the results. 

In order to make this repeatable for RCs, we'll need to have a Yetus release to 
consume or temporarily take releasedocmaker.py into our dev-support/ (the 
latter being a poor option), then add the right magic to make_rc.sh and release 
HOWTO.

 Adopt releasedocmaker for better generated release notes
 

 Key: HBASE-14175
 URL: https://issues.apache.org/jira/browse/HBASE-14175
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Fix For: 2.0.0


 We should consider adopting Hadoop's releasedocmaker for better release 
 notes. This would pull out text from the JIRA 'release notes' field with 
 clean presentation and is vastly superior to our current notes, which are 
 simply JIRA's list of issues by fix version. Could hook it into the site 
 build. A convenient part of Yetus to get up and running with. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14175) Adopt releasedocmaker for better generated release notes

2015-07-30 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648524#comment-14648524
 ] 

Allen Wittenauer commented on HBASE-14175:
--

BTW, documentation is currently sitting in HADOOP-12228. If someone from 
Yetus could +1 it, I'll commit it *hint hint*

bq. so not sure 0.98.0 will work - will be fun to test.

I did a run a while back. ( 
https://github.com/aw-altiscale/eco-release-metadata/tree/master/HBASE  )  It 
works as well as expected.  A few people that have played with the output have 
taken the opportunity to clean things up since it tends to highlight things 
like bogus release notes. The lint mode tries to help with some of those 
things, but it's tuned pretty closely to Hadoop's needs.  

I suspect HBase is going to be in better shape due to building release notes 
from JIRA anyway.

 Adopt releasedocmaker for better generated release notes
 

 Key: HBASE-14175
 URL: https://issues.apache.org/jira/browse/HBASE-14175
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Fix For: 2.0.0


 We should consider adopting Hadoop's releasedocmaker for better release 
 notes. This would pull out text from the JIRA 'release notes' field with 
 clean presentation and is vastly superior to our current notes, which are 
 simply JIRA's list of issues by fix version. Could hook it into the site 
 build. A convenient part of Yetus to get up and running with. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-30 Thread zhouyingchao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648599#comment-14648599
 ] 

zhouyingchao commented on HBASE-14168:
--

Ted, thank you.  I'll investigate the issue ASAP.

 Avoid useless retry as exception implies in TableRecordReaderImpl
 -

 Key: HBASE-14168
 URL: https://issues.apache.org/jira/browse/HBASE-14168
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: zhouyingchao
Assignee: zhouyingchao
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14168-001.patch


 In TableRecordReaderImpl, even if the scan's next() throws a 
 DoNotRetryIOException, it would still be retried. This does not make sense 
 and should be avoided.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14154:
---
Attachment: HBASE-14154-v2.patch

bq. Another minor nit: I think there should be a check in setDFSReplication on 
whether the passed value is valid.

Attached an updated patch for master including this check. If this is ok 
[~ashish singhi] we can commit it and the committer can port the change back, 
that looks easy enough.

 DFS Replication should be configurable at column family level
 -

 Key: HBASE-14154
 URL: https://issues.apache.org/jira/browse/HBASE-14154
 Project: HBase
  Issue Type: New Feature
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: HBASE-14154-0.98-v1.patch, HBASE-14154-0.98.patch, 
 HBASE-14154-branch-1-v1.patch, HBASE-14154-branch-1.patch, 
 HBASE-14154-v1.patch, HBASE-14154-v2.patch, HBASE-14154.patch


 There are cases where a user wants to have control over the number of HFile 
 copies he/she can have in the cluster.
 For example, for a test table a user would like to have only one copy instead 
 of three (the default).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14175) Adopt releasedocmaker for better generated release notes

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648450#comment-14648450
 ] 

Andrew Purtell commented on HBASE-14175:


bq. time to see if we can cut a release of the yetus artifacts via the Hadoop 
PMC?
Maybe? There's an interested downstream user here.

 Adopt releasedocmaker for better generated release notes
 

 Key: HBASE-14175
 URL: https://issues.apache.org/jira/browse/HBASE-14175
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Fix For: 2.0.0


 We should consider adopting Hadoop's releasedocmaker for better release 
 notes. This would pull out text from the JIRA 'release notes' field with 
 clean presentation and is vastly superior to our current notes, which are 
 simply JIRA's list of issues by fix version. Could hook it into the site 
 build. A convenient part of Yetus to get up and running with. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14175) Adopt releasedocmaker for better generated release notes

2015-07-30 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648539#comment-14648539
 ] 

Allen Wittenauer commented on HBASE-14175:
--

OK, docs are now viewable online here: 
https://github.com/apache/hadoop/blob/HADOOP-12111/dev-support/docs/releasedocmaker.md
 :D

 Adopt releasedocmaker for better generated release notes
 

 Key: HBASE-14175
 URL: https://issues.apache.org/jira/browse/HBASE-14175
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Fix For: 2.0.0


 We should consider adopting Hadoop's releasedocmaker for better release 
 notes. This would pull out text from the JIRA 'release notes' field with 
 clean presentation and is vastly superior to our current notes, which are 
 simply JIRA's list of issues by fix version. Could hook it into the site 
 build. A convenient part of Yetus to get up and running with. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14087) ensure correct ASF policy compliant headers on source/docs

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648542#comment-14648542
 ] 

Andrew Purtell commented on HBASE-14087:


Going to commit backports to all branches shortly. Let me know if you want to 
take a look at the branch patches beforehand.

 ensure correct ASF policy compliant headers on source/docs
 --

 Key: HBASE-14087
 URL: https://issues.apache.org/jira/browse/HBASE-14087
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Attachments: HBASE-14087.1.patch, HBASE-14087.2.patch, 
 HBASE-14087.2.patch


 * we have a couple of files that are missing their headers.
 * we have one file using old-style ASF copyrights



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14122) Client API for determining if server side supports cell level security

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648596#comment-14648596
 ] 

Andrew Purtell commented on HBASE-14122:


Yes, this is legit. New patches forthcoming shortly.

 Client API for determining if server side supports cell level security
 --

 Key: HBASE-14122
 URL: https://issues.apache.org/jira/browse/HBASE-14122
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0

 Attachments: HBASE-14122-v2-0.98.patch, 
 HBASE-14122-v2-branch-1.patch, HBASE-14122-v2.patch, HBASE-14122.patch


 Add a client API for determining if the server side supports cell level 
 security. 
 Ask the master, assuming as we do in many other instances that the master and 
 regionservers all have a consistent view of site configuration.
 Return {{true}} if all features required for cell level security are present, 
 {{false}} otherwise, or throw {{UnsupportedOperationException}} if the master 
 does not have support for the RPC call.
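The contract described above can be modeled in isolation as follows. This is illustrative only; the enum values and method name are invented for the sketch and are not the HBase client API.

```java
import java.util.EnumSet;
import java.util.Set;

class CellSecurityCheckSketch {
    // Hypothetical feature set required for cell level security (sketch only).
    enum Feature { CELL_VISIBILITY, CELL_AUTHORIZATION }

    // true if every required feature is present, false otherwise; throws when
    // the master does not support the capability RPC at all.
    static boolean supportsCellSecurity(Set<Feature> serverFeatures,
                                        boolean rpcSupported) {
        if (!rpcSupported) {
            throw new UnsupportedOperationException(
                "master does not support the capability RPC");
        }
        return serverFeatures.containsAll(EnumSet.allOf(Feature.class));
    }
}
```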



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14169) API to refreshSuperUserGroupsConfiguration

2015-07-30 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648498#comment-14648498
 ] 

Srikanth Srungarapu commented on HBASE-14169:
-

Agree with Matteo's point. Also, allowing a global admin to refresh super 
user groups can introduce a vulnerability, as it opens the possibility for a 
global admin to gain super user privileges.

{code}
requirePermission("refreshSuperUserGroupsConf", Action.ADMIN);
{code}

Also, we have a new class {{SuperUsers}} which encapsulates the code related to 
super user configurations. You might want to move some changes over there.

 API to refreshSuperUserGroupsConfiguration
 --

 Key: HBASE-14169
 URL: https://issues.apache.org/jira/browse/HBASE-14169
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: HBASE-14169.patch


 For deployments that use security. User impersonation (AKA doAs()) is needed 
 for some services (ie Stargate, thriftserver, Oozie, etc). Impersonation 
 definitions are defined in a xml config file and read and cached by the 
 ProxyUsers class. Calling this api will refresh cached information, 
 eliminating the need to restart the master/regionserver whenever the 
 configuration is changed. 
 Implementation just adds another method to AccessControlService.
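The refresh pattern being described, re-reading a cached configuration on demand instead of restarting the process, can be sketched as below. Names here are illustrative, not the actual ProxyUsers or AccessControlService API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Configuration is read once and cached; refresh() re-reads it on demand,
// which is what the new RPC method would ultimately trigger on each server.
class RefreshableConfSketch {
    private final Supplier<Map<String, String>> loader;
    private volatile Map<String, String> cached;

    RefreshableConfSketch(Supplier<Map<String, String>> loader) {
        this.loader = loader;
        this.cached = loader.get();
    }

    String get(String key) {
        return cached.get(key);
    }

    void refresh() {
        cached = loader.get();
    }
}
```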



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14087) ensure correct ASF policy compliant headers on source/docs

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648549#comment-14648549
 ] 

Andrew Purtell commented on HBASE-14087:


Filed HBASE-14176 for a nit I noticed while poking around. 

 ensure correct ASF policy compliant headers on source/docs
 --

 Key: HBASE-14087
 URL: https://issues.apache.org/jira/browse/HBASE-14087
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Attachments: HBASE-14087.1.patch, HBASE-14087.2.patch, 
 HBASE-14087.2.patch


 * we have a couple of files that are missing their headers.
 * we have one file using old-style ASF copyrights



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14176) Add missing headers to META-INF files

2015-07-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14176:
---
Status: Patch Available  (was: Open)

 Add missing headers to META-INF files
 -

 Key: HBASE-14176
 URL: https://issues.apache.org/jira/browse/HBASE-14176
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Trivial
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14176.patch


 Weird that most META-INF files are missing ASF copyright header, but a 
 handful have it.
 Missed by HBASE-14087.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14093) deduplicate copies of bootstrap files

2015-07-30 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648592#comment-14648592
 ] 

Gabor Liptak commented on HBASE-14093:
--

[~busbey] Could you offer pointers? Thanks


 deduplicate copies of bootstrap files
 -

 Key: HBASE-14093
 URL: https://issues.apache.org/jira/browse/HBASE-14093
 Project: HBase
  Issue Type: Improvement
  Components: build
Reporter: Sean Busbey
  Labels: beginner
 Fix For: 2.0.0


 right now we have a couple of different copies of the bootstrap js and css 
 files. It'll be easier to maintain them later if we can centralize.
 Move them to a common location and use maven to populate them as needed in 
 various component build directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14173) includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() represents no-op

2015-07-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14173:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the review, Anoop.

 includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() 
 represents no-op
 -

 Key: HBASE-14173
 URL: https://issues.apache.org/jira/browse/HBASE-14173
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0

 Attachments: 14173-v1.txt


 Around line 160:
 {code}
 return store.createWriterInTmp(fd.maxKeyCount, this.compactionCompression,
 true, fd.maxMVCCReadpoint >= 0, fd.maxTagsLength > 0);
 {code}
 The condition, fd.maxMVCCReadpoint >= 0, represents a no-op (it is always true).
 The correct condition should be fd.maxMVCCReadpoint > 0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14176) Add missing headers to META-INF files

2015-07-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648701#comment-14648701
 ] 

Hadoop QA commented on HBASE-14176:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12748097/HBASE-14176-0.98.patch
  against 0.98 branch at commit f1f0d99662559a2090fc56c001f33ae1ace90686.
  ATTACHMENT ID: 12748097

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
22 warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14938//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14938//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14938//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14938//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14938//console

This message is automatically generated.

 Add missing headers to META-INF files
 -

 Key: HBASE-14176
 URL: https://issues.apache.org/jira/browse/HBASE-14176
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Trivial
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14176-0.98.patch, HBASE-14176.patch


 Weird that most META-INF files are missing ASF copyright header, but a 
 handful have it.
 Missed by HBASE-14087.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14174) hbase 0.98.12 api documentation

2015-07-30 Thread nirav patel (JIRA)
nirav patel created HBASE-14174:
---

 Summary: hbase 0.98.12 api documentation
 Key: HBASE-14174
 URL: https://issues.apache.org/jira/browse/HBASE-14174
 Project: HBase
  Issue Type: Wish
  Components: API
Affects Versions: 0.98.12.1
Reporter: nirav patel


Where is the hbase 0.98.12 API documentation?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14098) Allow dropping caches behind compactions

2015-07-30 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647263#comment-14647263
 ] 

ramkrishna.s.vasudevan commented on HBASE-14098:


bq. if (this.conf.getBoolean("hbase.regionserver.compaction.private.readers", 
true))
So with this change, compactions will always use new readers, so that the OS 
pages of those files are not cached, based on the new setting that says 
dropBehind on compaction?

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098-v3.patch, HBASE-14098-v4.patch, HBASE-14098-v5.patch, 
 HBASE-14098.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.2 compiler.

2015-07-30 Thread Srikanth Srungarapu (JIRA)
Srikanth Srungarapu created HBASE-14172:
---

 Summary: Upgrade existing thrift binding using thrift 0.9.2 
compiler.
 Key: HBASE-14172
 URL: https://issues.apache.org/jira/browse/HBASE-14172
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Updated] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-30 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14154:
--
Attachment: HBASE-14154-branch-1-v1.patch

 DFS Replication should be configurable at column family level
 -

 Key: HBASE-14154
 URL: https://issues.apache.org/jira/browse/HBASE-14154
 Project: HBase
  Issue Type: New Feature
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: HBASE-14154-0.98.patch, HBASE-14154-branch-1-v1.patch, 
 HBASE-14154-branch-1.patch, HBASE-14154-v1.patch, HBASE-14154.patch


 There are cases where a user wants control over the number of hfile 
 copies he/she can have in the cluster.
 For example, for a test table a user would like to have only one copy instead 
 of three (the default).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14163) hbase master stop loops both processes forever

2015-07-30 Thread Samir Ahmic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647317#comment-14647317
 ] 

Samir Ahmic commented on HBASE-14163:
-

bq. How long did it take for your hbase master to shutdown?
In my case 10-15s.
Here is my output from $hbase master stop:
{code}
2015-07-30 09:22:20,826 INFO  [main] zookeeper.ZooKeeper: Initiating client 
connection, connectString=localhost:2181 sessionTimeout=9 
watcher=hconnection-0x344ec9c40x0, quorum=localhost:2181, baseZNode=/hbase
2015-07-30 09:22:20,856 INFO  [main-SendThread(localhost.localdomain:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using 
SASL (unknown error)
2015-07-30 09:22:20,861 INFO  [main-SendThread(localhost.localdomain:2181)] 
zookeeper.ClientCnxn: Socket connection established to 
localhost.localdomain/127.0.0.1:2181, initiating session
2015-07-30 09:22:20,870 INFO  [main-SendThread(localhost.localdomain:2181)] 
zookeeper.ClientCnxn: Session establishment complete on server 
localhost.localdomain/127.0.0.1:2181, sessionid = 0x14eddd660110008, negotiated 
timeout = 4
2015-07-30 09:22:21,686 INFO  [main] client.ConnectionImplementation: Closing 
master protocol: MasterService
2015-07-30 09:22:21,686 INFO  [main] client.ConnectionImplementation: Closing 
zookeeper sessionid=0x14eddd660110008
2015-07-30 09:22:21,687 INFO  [main] zookeeper.ZooKeeper: Session: 
0x14eddd660110008 closed
2015-07-30 09:22:21,687 INFO  [main-EventThread] zookeeper.ClientCnxn: 
EventThread shut down
{code}
I have seen this strange behavior on OS X before; in most cases it was network 
related, or something to do with the fact that in the default configuration we 
write data in the /tmp dir.
 

 hbase master stop loops both processes forever
 --

 Key: HBASE-14163
 URL: https://issues.apache.org/jira/browse/HBASE-14163
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 2.0.0
Reporter: Allen Wittenauer

 It would appear that there is an infinite loop in the zk client connection 
 code when performing a master stop when no external zk servers are configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-30 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647318#comment-14647318
 ] 

Ashish Singhi commented on HBASE-14154:
---

Attached a patch addressing the comment. Thanks for being so kind.
bq. Instead of returning null if not set, return 0?
I changed it to return 0. Now if a user sets DFS_REPLICATION to 0 we will skip 
validating the value in the sanity check and use the default replication set in 
the file system for these hfile(s) (this makes sense to me).

 DFS Replication should be configurable at column family level
 -

 Key: HBASE-14154
 URL: https://issues.apache.org/jira/browse/HBASE-14154
 Project: HBase
  Issue Type: New Feature
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: HBASE-14154-0.98-v1.patch, HBASE-14154-0.98.patch, 
 HBASE-14154-branch-1-v1.patch, HBASE-14154-branch-1.patch, 
 HBASE-14154-v1.patch, HBASE-14154.patch


 There are cases where a user wants control over the number of hfile 
 copies he/she can have in the cluster.
 For example, for a test table a user would like to have only one copy instead 
 of three (the default).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14063) Use BufferBackedCell in read path after HBASE-12213 and HBASE-12295

2015-07-30 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647324#comment-14647324
 ] 

ramkrishna.s.vasudevan commented on HBASE-14063:


Yes Stack. This is only for offheap. If the BB is on heap we will create a 
KeyValue using buf.array().
{code}
assert buf.isDirect();
{code}

 Use BufferBackedCell in read path after HBASE-12213 and HBASE-12295
 ---

 Key: HBASE-14063
 URL: https://issues.apache.org/jira/browse/HBASE-14063
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14063.patch, HBASE-14063_1.patch, 
 HBASE-14063_3.patch, HBASE-14063_4.patch, HBASE-14063_final.patch


 Subtask to ensure that the BytebufferBackedCell gets used in the read path 
 after HBASE-12213 and HBASE-12295 goes in.  This would help to clearly change 
 the required places and makes the review easier. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-07-30 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647297#comment-14647297
 ] 

Eshcar Hillel commented on HBASE-13408:
---

Thank you [~Apache9] and [~anoop.hbase] for your comments.

There is a question of when to push the active set into the pipeline, and which 
threshold to use. This should be some configurable parameter. But please let’s 
put this aside for a minute.
The problem I meant to handle with the WAL truncation mechanism is orthogonal 
to this decision. Consider a region with one compacting store. Assume we add 
the following key-value-ts tuples to the memstore:
(A,1,1) (A,4,4) (A,7,7)
(B,2,2) (B,5,5) (B,8,8)
(C,3,3) (C,6,6) (C,9,9)
All these items will have edits in the WAL. After compaction, what is left 
in-memory is
(A,7,7) (B,8,8) (C,9,9)
however, these edits are not removed from the WAL since no flushing occurs.
This can go on and on without ever flushing data to disk and without removing 
WAL edits.
The solution we suggested earlier is to have a small map that would help 
determine that, after the compaction in the example above, we can remove all WAL 
entries that correspond to ts equal to or lower than 6. And this happens outside 
the scope of a flush, as compaction is a background process. 
If we don’t change WAL truncation in this way, the WAL can grow without limit.
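The truncation arithmetic in the example above can be sketched in a few lines: keep only the newest edit per row, and then everything strictly older than the smallest retained timestamp is safe to drop from the WAL. This is a self-contained illustration of the math only, not the proposed implementation.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class WalTruncationSketch {
    // edits: (row, ts) pairs in arrival order; returns the highest ts whose
    // WAL entries are all removable after the in-memory compaction.
    static long truncatableUpTo(List<Map.Entry<String, Long>> edits) {
        Map<String, Long> newestPerRow = new HashMap<>();
        for (Map.Entry<String, Long> e : edits) {
            // compaction keeps only the newest version of each row
            newestPerRow.merge(e.getKey(), e.getValue(), Math::max);
        }
        // everything strictly below the smallest retained ts is removable
        return Collections.min(newestPerRow.values()) - 1;
    }
}
```

For the (A,1)(A,4)(A,7), (B,2)(B,5)(B,8), (C,3)(C,6)(C,9) example, the retained timestamps are 7, 8, 9, so WAL entries with ts up to and including 6 can be removed, matching the text above.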

Supporting a more compacted format in the compaction pipeline was discussed 
when we just started this JIRA. The design we suggested enables plugging-in any 
data structure: it can be the CellBlocks by [~anoop.hbase], it can be a b-tree, 
or any alternative that is suggested in HBASE-3993. It only needs to support 
the API defined by the CellSkipListSet wrapper class (in our patch we changed 
its name to CellSet to indicate the implementation is not restricted to a 
skip-list).
Having said that, we would like to keep the initial solution simple. The 
plug-in infrastructure is in; experimenting with different data structures can 
be handled as a separate task.

Coming back to the timing of the in-memory flush, since this action mandates 
the same synchronization as in a flush to disk (to block the updaters while 
allocating a new active set) it seems appropriate to apply it upon a disk 
flush. 
Moreover, if we don’t change the flush semantics a compacting memstore can be 
forced to flush to disk when it reaches 16M (I can show an example) which would 
countervail the benefits of this feature.

 HBase In-Memory Memstore Compaction
 ---

 Key: HBASE-13408
 URL: https://issues.apache.org/jira/browse/HBASE-13408
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: 
 HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
 HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
 InMemoryMemstoreCompactionEvaluationResults.pdf


 A store unit holds a column family in a region, where the memstore is its 
 in-memory component. The memstore absorbs all updates to the store; from time 
 to time these updates are flushed to a file on disk, where they are 
 compacted. Unlike disk components, the memstore is not compacted until it is 
 written to the filesystem and optionally to block-cache. This may result in 
 underutilization of the memory due to duplicate entries per row, for example, 
 when hot data is continuously updated. 
 Generally, the faster the data is accumulated in memory, the more flushes are 
 triggered, and the data sinks to disk more frequently, slowing down retrieval of 
 data, even if very recent.
 In high-churn workloads, compacting the memstore can help maintain the data 
 in memory, and thereby speed up data retrieval. 
 We suggest a new compacted memstore with the following principles:
 1.The data is kept in memory for as long as possible
 2.Memstore data is either compacted or in process of being compacted 
 3.Allow a panic mode, which may interrupt an in-progress compaction and 
 force a flush of part of the memstore.
 We suggest applying this optimization only to in-memory column families.
 A design document is attached.
 This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.2 compiler.

2015-07-30 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14172:

Attachment: HBASE-14172.patch

 Upgrade existing thrift binding using thrift 0.9.2 compiler.
 

 Key: HBASE-14172
 URL: https://issues.apache.org/jira/browse/HBASE-14172
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-14172.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.2 compiler.

2015-07-30 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14172:

Status: Patch Available  (was: Open)

 Upgrade existing thrift binding using thrift 0.9.2 compiler.
 

 Key: HBASE-14172
 URL: https://issues.apache.org/jira/browse/HBASE-14172
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-14172-branch-1.patch, HBASE-14172.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.2 compiler.

2015-07-30 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14172:

Attachment: HBASE-14172-branch-1.patch

 Upgrade existing thrift binding using thrift 0.9.2 compiler.
 

 Key: HBASE-14172
 URL: https://issues.apache.org/jira/browse/HBASE-14172
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-14172-branch-1.patch, HBASE-14172.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14063) Use BufferBackedCell in read path after HBASE-12213 and HBASE-12295

2015-07-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647311#comment-14647311
 ] 

stack commented on HBASE-14063:
---

bq. OffheapKV extends ByteBufferedCell, that is the main difference between 
this and the normal KV. So this KV will have the new APIs in ByteBufferedCell 
for referring to the BB underlying these cells.

Naming a class 'Offheap' when backed by a BB which could be on or off heap 
seems incorrect. The implementation is offheap only?

 Use BufferBackedCell in read path after HBASE-12213 and HBASE-12295
 ---

 Key: HBASE-14063
 URL: https://issues.apache.org/jira/browse/HBASE-14063
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14063.patch, HBASE-14063_1.patch, 
 HBASE-14063_3.patch, HBASE-14063_4.patch, HBASE-14063_final.patch


 Subtask to ensure that the BytebufferBackedCell gets used in the read path 
 after HBASE-12213 and HBASE-12295 goes in.  This would help to clearly change 
 the required places and makes the review easier. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14169) API to refreshSuperUserGroupsConfiguration

2015-07-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647333#comment-14647333
 ] 

Hadoop QA commented on HBASE-14169:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747922/HBASE-14169.patch
  against master branch at commit 5f1129c799e9c273dfd58a7fc87d5e654061607b.
  ATTACHMENT ID: 12747922

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1867 checkstyle errors (more than the master's current 1864 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   * <code>rpc 
RefreshSuperUserGroupsConf(.hbase.pb.RefreshSuperUserGroupsConfRequest) returns 
(.hbase.pb.RefreshSuperUserGroupsConfResponse);</code>
+ * <code>rpc 
RefreshSuperUserGroupsConf(.hbase.pb.RefreshSuperUserGroupsConfRequest) returns 
(.hbase.pb.RefreshSuperUserGroupsConfResponse);</code>
+public void refreshSuperUserGroupsConf(RpcController controller, 
RefreshSuperUserGroupsConfRequest request, 
RpcCallback<RefreshSuperUserGroupsConfResponse> done) {

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestFastFail

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14931//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14931//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14931//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14931//console

This message is automatically generated.

 API to refreshSuperUserGroupsConfiguration
 --

 Key: HBASE-14169
 URL: https://issues.apache.org/jira/browse/HBASE-14169
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: HBASE-14169.patch


 For deployments that use security. User impersonation (AKA doAs()) is needed 
 for some services (ie Stargate, thriftserver, Oozie, etc). Impersonation 
 definitions are defined in a xml config file and read and cached by the 
 ProxyUsers class. Calling this api will refresh cached information, 
 eliminating the need to restart the master/regionserver whenever the 
 configuration is changed. 
 Implementation just adds another method to AccessControlService.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-07-30 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647348#comment-14647348
 ] 

Duo Zhang commented on HBASE-13408:
---

OK, I get your point. After a memstore compaction we may drop some old cells, so 
setting a new value of the {{oldestUnflushedSeqId}} in the WAL is reasonable. And 
yes, this can avoid the WAL triggering a flush for log truncation in your cases.

But I still think you can find a way to set it without changing the semantics 
of flush... Flush is a very critical operation in HBase so you should keep away 
from it as much as possible unless you have to...

Or, a more difficult way: remove the old flush operation and introduce some new 
operations such as "reduce your memory usage" and "persist old cells" and so 
on. You can put your compaction logic in the "reduce your memory usage" 
operation.

Thanks.

 HBase In-Memory Memstore Compaction
 ---

 Key: HBASE-13408
 URL: https://issues.apache.org/jira/browse/HBASE-13408
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: 
 HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
 HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
 InMemoryMemstoreCompactionEvaluationResults.pdf


 A store unit holds a column family in a region, where the memstore is its 
 in-memory component. The memstore absorbs all updates to the store; from time 
 to time these updates are flushed to a file on disk, where they are 
 compacted. Unlike disk components, the memstore is not compacted until it is 
 written to the filesystem and optionally to block-cache. This may result in 
 underutilization of the memory due to duplicate entries per row, for example, 
 when hot data is continuously updated. 
 Generally, the faster the data accumulates in memory, the more flushes are 
 triggered and the more frequently the data sinks to disk, slowing down 
 retrieval of data, even if it is very recent.
 In high-churn workloads, compacting the memstore can help maintain the data 
 in memory, and thereby speed up data retrieval. 
 We suggest a new compacted memstore with the following principles:
 1. The data is kept in memory for as long as possible
 2. Memstore data is either compacted or in the process of being compacted 
 3. Allow a panic mode, which may interrupt an in-progress compaction and 
 force a flush of part of the memstore.
 We suggest applying this optimization only to in-memory column families.
 A design document is attached.
 This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14169) API to refreshSuperUserGroupsConfiguration

2015-07-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647849#comment-14647849
 ] 

Ted Yu commented on HBASE-14169:


Normally there is a space between double slash and the comment itself.
{code}
953   //refresh proxy users on startup, this is for
{code}
The method is called refreshSuperUserGroupsConfiguration() while the comment 
mentions 'proxy users'. Better to use the same term.

{code}
954   //when backup master become an active master
{code}
'become' -> 'becomes'

 API to refreshSuperUserGroupsConfiguration
 --

 Key: HBASE-14169
 URL: https://issues.apache.org/jira/browse/HBASE-14169
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: HBASE-14169.patch


 For deployments that use security, user impersonation (AKA doAs()) is needed 
 for some services (i.e. Stargate, thriftserver, Oozie, etc.). Impersonation 
 definitions are defined in an XML config file and read and cached by the 
 ProxyUsers class. Calling this api will refresh cached information, 
 eliminating the need to restart the master/regionserver whenever the 
 configuration is changed. 
 Implementation just adds another method to AccessControlService.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13497) Remove MVCC stamps from HFile when that is safe

2015-07-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647903#comment-14647903
 ] 

Anoop Sam John commented on HBASE-13497:


[~larsh]
{quote}
{quote}
// When all MVCC readpoints are 0, don't write them.
...
fd.maxMVCCReadpoint >= 0,
{quote}
Do we need >= or > only?
{quote}
Seems this comment was not addressed while commit. :-)   Pls see HBASE-14173.  
Let us commit the change in HBASE-14173.


 Remove MVCC stamps from HFile when that is safe
 ---

 Key: HBASE-13497
 URL: https://issues.apache.org/jira/browse/HBASE-13497
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
  Labels: performance
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: 13497.txt


 See discussion in HBASE-13389.
 The optimization was initially put in with HBASE-8166; HBASE-12600 undid it; 
 this partially restores it.
 Instead of checking the MVCC readpoints against the oldest current scanner, 
 we check that all are 0; if so, we do not need to write them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-07-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647921#comment-14647921
 ] 

Andrew Purtell commented on HBASE-14085:


Well I hope the answer to that question is yes; otherwise installation 
instructions for all future releases of HBase, except maybe 2.0 and up where we 
nuke the shell and replace it with ???, will carry the instruction: "to get a 
functional shell, download this outdated jruby jar if you can find it".

So to be clear, this is the last remaining issue that we've found? And we are 
able to conclude this review and resume releases after the resolution of that 
question? 

 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven says "HBase - ${module}" rather than 
 "Apache HBase - ${module}" as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13497) Remove MVCC stamps from HFile when that is safe

2015-07-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647942#comment-14647942
 ] 

Lars Hofhansl commented on HBASE-13497:
---

Same in branch-1.0... So all good (I cherry-picked it all the way down to 1.0)

 Remove MVCC stamps from HFile when that is safe
 ---

 Key: HBASE-13497
 URL: https://issues.apache.org/jira/browse/HBASE-13497
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
  Labels: performance
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: 13497.txt


 See discussion in HBASE-13389.
 The optimization was initially put in with HBASE-8166; HBASE-12600 undid it; 
 this partially restores it.
 Instead of checking the MVCC readpoints against the oldest current scanner, 
 we check that all are 0; if so, we do not need to write them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13497) Remove MVCC stamps from HFile when that is safe

2015-07-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647979#comment-14647979
 ] 

Anoop Sam John commented on HBASE-13497:


I see..  Thanks for checking and confirming Lars.

 Remove MVCC stamps from HFile when that is safe
 ---

 Key: HBASE-13497
 URL: https://issues.apache.org/jira/browse/HBASE-13497
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
  Labels: performance
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: 13497.txt


 See discussion in HBASE-13389.
 The optimization was initially put in with HBASE-8166; HBASE-12600 undid it; 
 this partially restores it.
 Instead of checking the MVCC readpoints against the oldest current scanner, 
 we check that all are 0; if so, we do not need to write them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-07-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647956#comment-14647956
 ] 

Sean Busbey commented on HBASE-14085:
-

yes. I'm just verifying the generated binary tarball now.

 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven says "HBase - ${module}" rather than 
 "Apache HBase - ${module}" as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12853) distributed write pattern to replace ad hoc 'salting'

2015-07-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647980#comment-14647980
 ] 

Lars Hofhansl commented on HBASE-12853:
---

bq. As I have stated repeatedly, I am unable to contribute to certain Apache 
projects unless Apache is willing to indemnify me. (Which they are not.) 

Don't be ridiculous. It is always your task to clear with all possible IP 
owners before you contribute anything under any license.
If you have something to contribute, show us the code or even just a spec; 
otherwise it's just useless noise. If not, just leave it instead of blaming 
the committers with specious excuses for why you can't do it.

I'm going to close this.

 distributed write pattern to replace ad hoc 'salting'
 -

 Key: HBASE-12853
 URL: https://issues.apache.org/jira/browse/HBASE-12853
 Project: HBase
  Issue Type: New Feature
Reporter: Michael Segel 
 Fix For: 2.0.0


 In reviewing HBASE-11682 (Description of Hot Spotting), one of the issues is 
 that while 'salting' alleviated regional hot spotting, it increased the 
 complexity required to utilize the data.  
 Through the use of coprocessors, it should be possible to offer a method 
 which distributes the data on write across the cluster and then manages 
 reading the data, returning a sort-ordered result set and abstracting the 
 underlying process. 
 On table creation, a flag is set to indicate that this is a parallel table. 
 On insert into the table, if the flag is set to true then a prefix is added 
 to the key, e.g. region server#- or region server #||, where the region 
 server # is an integer between 1 and the number of region servers defined.  
 On read (scan), for each region server defined, a separate scan is created 
 adding the prefix. Since each scan will be in sort order, it's possible to 
 strip the prefix and return the lowest value key from each of the subsets. 
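
The proposal above can be sketched in plain Java. This is only an illustration of the write-spread/merge-read idea under the stated assumptions (a fixed bucket count standing in for region servers, TreeMap standing in for a sorted region), not an HBase coprocessor; all class and method names are made up for the sketch.

```java
// Toy illustration of the proposed pattern, not an HBase coprocessor: writes
// are spread over N prefixes ("salt buckets"), and a read runs one sorted
// sub-scan per prefix, strips the prefix, and restores global sort order.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.TreeMap;

public class SaltedScanSketch {
    final int servers;                                   // stands in for the region server count
    final List<TreeMap<String, String>> shards = new ArrayList<>();

    SaltedScanSketch(int servers) {
        this.servers = servers;
        for (int i = 0; i < servers; i++) shards.add(new TreeMap<>());
    }

    void put(String rowKey, String value) {
        int s = Math.floorMod(rowKey.hashCode(), servers);  // pick a bucket
        shards.get(s).put(s + "|" + rowKey, value);         // prefixed key, e.g. "2|user17"
    }

    List<String> scanRowKeys() {
        List<String> merged = new ArrayList<>();
        for (TreeMap<String, String> shard : shards) {      // one sub-scan per prefix
            for (String salted : shard.keySet()) {
                merged.add(salted.substring(salted.indexOf('|') + 1)); // strip "s|"
            }
        }
        Collections.sort(merged);        // a real implementation would interleave the
        return merged;                   // already-sorted sub-scans instead of re-sorting
    }

    public static void main(String[] args) {
        SaltedScanSketch table = new SaltedScanSketch(3);
        table.put("row-b", "v1");
        table.put("row-a", "v2");
        table.put("row-c", "v3");
        System.out.println(table.scanRowKeys()); // globally ordered despite salted storage
    }
}
```

The merge step is where the complexity that plain salting pushes onto every caller would be absorbed by the abstraction.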



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14173) includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() represents no-op

2015-07-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648065#comment-14648065
 ] 

Hadoop QA commented on HBASE-14173:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748005/14173-v1.txt
  against master branch at commit 5f1129c799e9c273dfd58a7fc87d5e654061607b.
  ATTACHMENT ID: 12748005

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14936//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14936//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14936//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14936//console

This message is automatically generated.

 includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() 
 represents no-op
 -

 Key: HBASE-14173
 URL: https://issues.apache.org/jira/browse/HBASE-14173
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0

 Attachments: 14173-v1.txt


 Around line 160:
 {code}
 return store.createWriterInTmp(fd.maxKeyCount, this.compactionCompression,
 true, fd.maxMVCCReadpoint >= 0, fd.maxTagsLength > 0);
 {code}
 The condition, fd.maxMVCCReadpoint >= 0, represents a no-op.
 The correct condition should be fd.maxMVCCReadpoint > 0.
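
A minimal standalone sketch (plain Java, not the HBase source; names are illustrative) of why the comparison against 0 matters: the readpoint is a non-negative long, so a `>=` check is constantly true and the writer always includes MVCC readpoints, while `>` lets the writer omit them when every cell's readpoint is 0.

```java
// Sketch of the bug: ">= 0" on a non-negative readpoint is a constant-true
// no-op, whereas "> 0" only includes MVCC readpoints when one is non-zero.
public class MvccFlagSketch {
    // The no-op condition: always true for any valid (non-negative) readpoint.
    static boolean includeMvccBuggy(long maxMVCCReadpoint) {
        return maxMVCCReadpoint >= 0;
    }

    // The intended condition: false when all readpoints are 0, so they
    // can be left out of the written file.
    static boolean includeMvccFixed(long maxMVCCReadpoint) {
        return maxMVCCReadpoint > 0;
    }

    public static void main(String[] args) {
        // With every readpoint at 0, only the fixed check skips writing them.
        System.out.println(includeMvccBuggy(0) + " " + includeMvccFixed(0));
    }
}
```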



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648085#comment-14648085
 ] 

Ted Yu commented on HBASE-14168:


TestTableInputFormat#testTableRecordReaderScannerTimeout uses a scanner which 
throws UnknownScannerException.
The test retries until result is returned.

 Avoid useless retry as exception implies in TableRecordReaderImpl
 -

 Key: HBASE-14168
 URL: https://issues.apache.org/jira/browse/HBASE-14168
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: zhouyingchao
Assignee: zhouyingchao
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14168-001.patch


 In TableRecordReaderImpl, even if next() on the scanner throws 
 DoNotRetryIOException, it is still retried. This does not make sense 
 and should be avoided.
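
The fix being discussed can be sketched as follows. This is not the actual TableRecordReaderImpl code; DoNotRetryIOException is stubbed here as a stand-in for org.apache.hadoop.hbase.DoNotRetryIOException, and the scanner restart is elided.

```java
// Sketch: a retry loop that fails fast on DoNotRetryIOException instead of
// uselessly restarting the scanner, since such failures cannot succeed on retry.
import java.io.IOException;

public class RetrySketch {
    static class DoNotRetryIOException extends IOException {}  // stand-in type

    interface RowScanner {
        String next() throws IOException;
    }

    static String nextWithRetry(RowScanner scanner, int maxRetries) throws IOException {
        for (int attempt = 0; ; attempt++) {
            try {
                return scanner.next();
            } catch (DoNotRetryIOException e) {
                throw e;                   // retrying is useless; propagate immediately
            } catch (IOException e) {
                if (attempt >= maxRetries) {
                    throw e;               // retry budget exhausted
                }
                // here the real code would restart the scanner before retrying
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // A scanner that fails once with a retryable error, then succeeds.
        final int[] calls = {0};
        RowScanner flaky = () -> {
            if (calls[0]++ == 0) throw new IOException("transient");
            return "row1";
        };
        System.out.println(nextWithRetry(flaky, 3)); // prints "row1"
    }
}
```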



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14168:
---
Status: Open  (was: Patch Available)

 Avoid useless retry as exception implies in TableRecordReaderImpl
 -

 Key: HBASE-14168
 URL: https://issues.apache.org/jira/browse/HBASE-14168
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: zhouyingchao
Assignee: zhouyingchao
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14168-001.patch


 In TableRecordReaderImpl, even if next() on the scanner throws 
 DoNotRetryIOException, it is still retried. This does not make sense 
 and should be avoided.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647249#comment-14647249
 ] 

Hadoop QA commented on HBASE-14168:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747917/HBASE-14168-001.patch
  against master branch at commit 5f1129c799e9c273dfd58a7fc87d5e654061607b.
  ATTACHMENT ID: 12747917

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestTableInputFormat
  org.apache.hadoop.hbase.mapred.TestTableInputFormat

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.camel.processor.async.AsyncEndpointCustomRoutePolicyTest.testAsyncEndpoint(AsyncEndpointCustomRoutePolicyTest.java:69)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14930//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14930//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14930//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14930//console

This message is automatically generated.

 Avoid useless retry as exception implies in TableRecordReaderImpl
 -

 Key: HBASE-14168
 URL: https://issues.apache.org/jira/browse/HBASE-14168
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: zhouyingchao
Assignee: zhouyingchao
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14168-001.patch


 In TableRecordReaderImpl, even if next() on the scanner throws 
 DoNotRetryIOException, it is still retried. This does not make sense 
 and should be avoided.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14170) [HBase Rest] RESTServer is not shutting down if hbase.rest.port Address already in use.

2015-07-30 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created HBASE-14170:
-

 Summary: [HBase Rest] RESTServer is not shutting down if 
hbase.rest.port Address already in use.
 Key: HBASE-14170
 URL: https://issues.apache.org/jira/browse/HBASE-14170
 Project: HBase
  Issue Type: Bug
  Components: REST
Reporter: Y. SREENIVASULU REDDY
 Fix For: 2.0.0, 1.0.2, 1.2.0


[HBase Rest] RESTServer is not shutting down if the hbase.rest.port address is 
already in use.

 If the hbase.rest.port address is already in use, RESTServer should shut down;

without this hbase.rest.port we can't perform any operations on RESTServer, 
so there is no use in running the RESTServer process.

{code}
2015-07-30 11:49:48,273 WARN  [main] mortbay.log: failed 
SelectChannelConnector@0.0.0.0:8080: java.net.BindException: Address already in 
use
2015-07-30 11:49:48,274 WARN  [main] mortbay.log: failed Server@563f38c4: 
java.net.BindException: Address already in use
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-30 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14154:
--
Attachment: HBASE-14154-v1.patch

 DFS Replication should be configurable at column family level
 -

 Key: HBASE-14154
 URL: https://issues.apache.org/jira/browse/HBASE-14154
 Project: HBase
  Issue Type: New Feature
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: HBASE-14154-0.98.patch, HBASE-14154-branch-1.patch, 
 HBASE-14154-v1.patch, HBASE-14154.patch


 There are cases where a user wants to have a control on the number of hfile 
 copies he/she can have in the cluster.
 For eg: For a test table user would like to have only one copy instead of 
 three(default).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14171) [HBase Thrift] ThriftServer is not shutting down if hbase.regionserver.thrift.port Address already in use.

2015-07-30 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created HBASE-14171:
-

 Summary: [HBase Thrift] ThriftServer is not shutting down if 
hbase.regionserver.thrift.port Address already in use.
 Key: HBASE-14171
 URL: https://issues.apache.org/jira/browse/HBASE-14171
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Y. SREENIVASULU REDDY
 Fix For: 2.0.0, 1.0.2, 1.2.0


If the hbase.regionserver.thrift.port address is already in use, ThriftServer 
should shut down;

without this hbase.regionserver.thrift.port we can't perform any operations 
on ThriftServer, so there is no use in running the ThriftServer process.
If the port is already in use, no exception or message like "address already 
in use" is thrown.
Only the info port information is shown.
{code}
2015-07-30 12:20:56,186 INFO  [main] http.HttpServer: Jetty bound to port 9095
2015-07-30 12:20:56,186 INFO  [main] mortbay.log: jetty-6.1.26
2015-07-30 12:20:56,227 WARN  [main] mortbay.log: Can't reuse 
/tmp/Jetty_0_0_0_0_9095_thrift.vqpz9l, using 
/tmp/Jetty_0_0_0_0_9095_thrift.vqpz9l_4913486964252131199
2015-07-30 12:20:56,553 INFO  [main] mortbay.log: Started 
HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:9095 {code}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14162) Fixing maven target for regenerating thrift classes fails against 0.9.2

2015-07-30 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14162:

Attachment: HBASE-14162_v2.patch

Attaching the patch fixing only the maven target. Will create a separate jira 
for updating the thrift bindings.

 Fixing maven target for regenerating thrift classes fails against 0.9.2
 ---

 Key: HBASE-14162
 URL: https://issues.apache.org/jira/browse/HBASE-14162
 Project: HBase
  Issue Type: Bug
  Components: build, Thrift
Affects Versions: 2.0.0, 1.3.0
Reporter: Sean Busbey
Assignee: Srikanth Srungarapu
Priority: Blocker
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14162-branch-1.patch, HBASE-14162.patch, 
 HBASE-14162_v2.patch


 HBASE-14045 updated the thrift version, but our enforcer rule is still 
 checking 0.9.0.
 {code}
 $ git checkout master
 Switched to branch 'master'
 Your branch is up-to-date with 'origin/master'.
 $ mvn compile -Pcompile-thrift -DskipTests
 [INFO] Scanning for projects...
 ... SNIP ...
 [INFO] 
 
 [INFO] Building HBase - Thrift 2.0.0-SNAPSHOT
 [INFO] 
 
 [INFO] 
 [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce) @ hbase-thrift ---
 [INFO] 
 [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-thrift-version) @ 
 hbase-thrift ---
 [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
 with message:
 -
 -
 [FATAL] 
 ==
 [FATAL] HBase Thrift requires the thrift generator version 0.9.0.
 [FATAL] Setting it to something else needs to be reviewed for wire and 
 behavior compatibility.
 [FATAL] 
 ==
 -
 -
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO] 
 [INFO] HBase .. SUCCESS [  2.897 
 s]
 [INFO] HBase - Checkstyle . SUCCESS [  0.554 
 s]
 [INFO] HBase - Annotations  SUCCESS [  0.940 
 s]
 [INFO] HBase - Protocol ... SUCCESS [ 15.454 
 s]
 [INFO] HBase - Common . SUCCESS [  8.984 
 s]
 [INFO] HBase - Procedure .. SUCCESS [  1.982 
 s]
 [INFO] HBase - Client . SUCCESS [  6.805 
 s]
 [INFO] HBase - Hadoop Compatibility ... SUCCESS [  0.202 
 s]
 [INFO] HBase - Hadoop Two Compatibility ... SUCCESS [  1.393 
 s]
 [INFO] HBase - Prefix Tree  SUCCESS [  1.233 
 s]
 [INFO] HBase - Server . SUCCESS [ 13.841 
 s]
 [INFO] HBase - Testing Util ... SUCCESS [  2.979 
 s]
 [INFO] HBase - Thrift . FAILURE [  0.234 
 s]
 [INFO] HBase - Shell .. SKIPPED
 [INFO] HBase - Integration Tests .. SKIPPED
 [INFO] HBase - Examples ... SKIPPED
 [INFO] HBase - Rest ... SKIPPED
 [INFO] HBase - Assembly ... SKIPPED
 [INFO] HBase - Shaded . SKIPPED
 [INFO] HBase - Shaded - Client  SKIPPED
 [INFO] HBase - Shaded - Server  SKIPPED
 [INFO] Apache HBase - Spark ... SKIPPED
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 01:00 min
 [INFO] Finished at: 2015-07-28T12:36:15-05:00
 [INFO] Final Memory: 84M/1038M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce 
 (enforce-thrift-version) on project hbase-thrift: Some Enforcer rules have 
 failed. Look above for specific messages explaining why the rule failed. - 
 [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] 

[jira] [Commented] (HBASE-14169) API to refreshSuperUserGroupsConfiguration

2015-07-30 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647867#comment-14647867
 ] 

Matteo Bertozzi commented on HBASE-14169:
-

in general we send an RPC to the server side, and the server side deals with 
propagating that request to other machines. This is probably the first case (at 
least in the ACL) where the client asks each machine to execute something, 
and this will probably prevent us from reimplementing a proper refresh 
propagation in a compatible way. 

I think we should follow the same pattern as grant/revoke. The client goes to 
the ACL endpoint, and the ACL endpoint propagates the request. At least we can 
then change the server-side intercommunication at any point without having to 
worry about client compatibility.

 API to refreshSuperUserGroupsConfiguration
 --

 Key: HBASE-14169
 URL: https://issues.apache.org/jira/browse/HBASE-14169
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: HBASE-14169.patch


 For deployments that use security, user impersonation (AKA doAs()) is needed 
 for some services (i.e. Stargate, thriftserver, Oozie, etc.). Impersonation 
 definitions are defined in an XML config file and read and cached by the 
 ProxyUsers class. Calling this api will refresh cached information, 
 eliminating the need to restart the master/regionserver whenever the 
 configuration is changed. 
 Implementation just adds another method to AccessControlService.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-13497) Remove MVCC stamps from HFile when that is safe

2015-07-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647903#comment-14647903
 ] 

Anoop Sam John edited comment on HBASE-13497 at 7/30/15 4:37 PM:
-

[~larsh]
{quote}
// When all MVCC readpoints are 0, don't write them.
...
fd.maxMVCCReadpoint >= 0,
Do we need >= or > only?
{quote}
Seems this comment was not addressed while commit. :-)   Pls see HBASE-14173.  
Let us commit the change in HBASE-14173.



was (Author: anoop.hbase):
[~larsh]
{quote}
{quote}
// When all MVCC readpoints are 0, don't write them.
...
fd.maxMVCCReadpoint >= 0,
{quote}
Do we need >= or > only?
{quote}
Seems this comment was not addressed at commit time. :-) Pls see HBASE-14173.
Let us commit the change in HBASE-14173.


 Remove MVCC stamps from HFile when that is safe
 ---

 Key: HBASE-13497
 URL: https://issues.apache.org/jira/browse/HBASE-13497
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
  Labels: performance
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: 13497.txt


 See discussion in HBASE-13389.
 The optimization was initially put in with HBASE-8166; HBASE-12600 undoes it, 
 and this will partially restore it.
 Instead of checking the MVCC readpoints against the oldest current scanner, 
 we check whether all are 0; if so, we do not need to write them.





[jira] [Commented] (HBASE-14173) includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() represents no-op

2015-07-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647960#comment-14647960
 ] 

Lars Hofhansl commented on HBASE-14173:
---

Looks like the merge of hbase-11339 reintroduced this. See commit: 
09a00efc0b37d6a43f684ee30f8b266d27c58c1c

 includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() 
 represents no-op
 -

 Key: HBASE-14173
 URL: https://issues.apache.org/jira/browse/HBASE-14173
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0

 Attachments: 14173-v1.txt


 Around line 160:
 {code}
 return store.createWriterInTmp(fd.maxKeyCount, this.compactionCompression,
 true, fd.maxMVCCReadpoint >= 0, fd.maxTagsLength > 0);
 {code}
 The condition, fd.maxMVCCReadpoint >= 0, represents a no-op.
 The correct condition should be fd.maxMVCCReadpoint > 0
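The bug is easy to see in isolation: maxMVCCReadpoint is never negative, so the >= 0 test is constant-true and the writer always includes MVCC stamps, even when they could be dropped. A standalone sketch (class and method names are illustrative, not HBase code):

```java
// Illustrative stand-alone demo of the no-op comparison; not HBase code.
public class MvccConditionDemo {
    // Always true for a non-negative readpoint: MVCC stamps get written even
    // when every readpoint is 0 and they could safely be omitted.
    static boolean includeMvccBuggy(long maxMVCCReadpoint) {
        return maxMVCCReadpoint >= 0;
    }

    // Intended check: only include stamps when some readpoint is non-zero.
    static boolean includeMvccFixed(long maxMVCCReadpoint) {
        return maxMVCCReadpoint > 0;
    }

    public static void main(String[] args) {
        System.out.println(includeMvccBuggy(0));  // true  (stamps written anyway)
        System.out.println(includeMvccFixed(0));  // false (stamps omitted)
    }
}
```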





[jira] [Commented] (HBASE-14173) includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() represents no-op

2015-07-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647961#comment-14647961
 ] 

Lars Hofhansl commented on HBASE-14173:
---

[~jmhsieh]

 includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() 
 represents no-op
 -

 Key: HBASE-14173
 URL: https://issues.apache.org/jira/browse/HBASE-14173
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0

 Attachments: 14173-v1.txt


 Around line 160:
 {code}
 return store.createWriterInTmp(fd.maxKeyCount, this.compactionCompression,
 true, fd.maxMVCCReadpoint >= 0, fd.maxTagsLength > 0);
 {code}
 The condition, fd.maxMVCCReadpoint >= 0, represents a no-op.
 The correct condition should be fd.maxMVCCReadpoint > 0





[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-07-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647858#comment-14647858
 ] 

Sean Busbey commented on HBASE-14085:
-

JRuby includes a bunch of stuff under the Ruby License, which isn't 
categorized. I filed LEGAL-222 to get guidance on redistributing the 
jruby-complete jar.

 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven says "HBase - ${module}" rather than 
 "Apache HBase - ${module}" as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple





[jira] [Commented] (HBASE-14173) includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() represents no-op

2015-07-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647950#comment-14647950
 ] 

Lars Hofhansl commented on HBASE-14173:
---

HBASE-13497 has it correctly with > 0 according to git. See {{git show 
aabf6ea2f692fc04f02f9a9ce74677bda203647e}}
I'll check what happened here.

 includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() 
 represents no-op
 -

 Key: HBASE-14173
 URL: https://issues.apache.org/jira/browse/HBASE-14173
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0

 Attachments: 14173-v1.txt


 Around line 160:
 {code}
 return store.createWriterInTmp(fd.maxKeyCount, this.compactionCompression,
 true, fd.maxMVCCReadpoint >= 0, fd.maxTagsLength > 0);
 {code}
 The condition, fd.maxMVCCReadpoint >= 0, represents a no-op.
 The correct condition should be fd.maxMVCCReadpoint > 0





[jira] [Commented] (HBASE-14173) includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() represents no-op

2015-07-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647905#comment-14647905
 ] 

Anoop Sam John commented on HBASE-14173:


+1

 includeMVCCReadpoint parameter in DefaultCompactor#createTmpWriter() 
 represents no-op
 -

 Key: HBASE-14173
 URL: https://issues.apache.org/jira/browse/HBASE-14173
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0

 Attachments: 14173-v1.txt


 Around line 160:
 {code}
 return store.createWriterInTmp(fd.maxKeyCount, this.compactionCompression,
 true, fd.maxMVCCReadpoint >= 0, fd.maxTagsLength > 0);
 {code}
 The condition, fd.maxMVCCReadpoint >= 0, represents a no-op.
 The correct condition should be fd.maxMVCCReadpoint > 0





[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-07-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647959#comment-14647959
 ] 

Sean Busbey commented on HBASE-14085:
-

also, it looks like JRuby 9k might be an option if the Ruby License isn't fine, 
since the Ruby files at that point are available under BSD-2-clause (once they 
[correct their licensing info|https://github.com/jruby/jruby/issues/3198]). It's an 
incompatible change (since it'll be Ruby 2.2), but that's presumably better 
than the alternative.

 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven says "HBase - ${module}" rather than 
 "Apache HBase - ${module}" as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple





[jira] [Commented] (HBASE-14163) hbase master stop loops both processes forever

2015-07-30 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647987#comment-14647987
 ] 

Allen Wittenauer commented on HBASE-14163:
--

I wonder if this is a race condition.

 hbase master stop loops both processes forever
 --

 Key: HBASE-14163
 URL: https://issues.apache.org/jira/browse/HBASE-14163
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 2.0.0
Reporter: Allen Wittenauer

 It would appear that there is an infinite loop in the zk client connection 
 code when performing a master stop when no external zk servers are configured.





[jira] [Commented] (HBASE-14086) remove unused bundled dependencies

2015-07-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647993#comment-14647993
 ] 

Lars Hofhansl commented on HBASE-14086:
---

Let's leave 0.94 be. :)
I might be doing one more release and then declare it done (unless somebody else 
has a need and steps up, that is).


 remove unused bundled dependencies
 --

 Key: HBASE-14086
 URL: https://issues.apache.org/jira/browse/HBASE-14086
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14086.1.patch


 We have some files with compatible non-ASL licenses that don't appear to be 
 used, so remove them.





[jira] [Comment Edited] (HBASE-14086) remove unused bundled dependencies

2015-07-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14648010#comment-14648010
 ] 

Sean Busbey edited comment on HBASE-14086 at 7/30/15 5:39 PM:
--

Okay, we'll just need to make sure we update LICENSE files for HBASE-14085 on 
0.94.


was (Author: busbey):
Okay, we'll just need to make sure we update LICENSE files for HBASE-14087 on 
0.94.

 remove unused bundled dependencies
 --

 Key: HBASE-14086
 URL: https://issues.apache.org/jira/browse/HBASE-14086
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14086.1.patch


 We have some files with compatible non-ASL licenses that don't appear to be 
 used, so remove them.





[jira] [Resolved] (HBASE-14086) remove unused bundled dependencies

2015-07-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-14086.
-
Resolution: Fixed

Okay, we'll just need to make sure we update LICENSE files for HBASE-14087 on 
0.94.

 remove unused bundled dependencies
 --

 Key: HBASE-14086
 URL: https://issues.apache.org/jira/browse/HBASE-14086
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14086.1.patch


 We have some files with compatible non-ASL licenses that don't appear to be 
 used, so remove them.





[jira] [Comment Edited] (HBASE-14098) Allow dropping caches behind compactions

2015-07-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647988#comment-14647988
 ] 

Lars Hofhansl edited comment on HBASE-14098 at 7/30/15 5:27 PM:


Was just going to +1... Missed the private readers bit.
I added the private.readers feature in order to decouple compaction readers 
from operational readers (user scans, gets, etc).
Defensively I defaulted this to false, but I think we can safely enable that 
always... But if we wanted to stay defensive we could default to the value of 
drop caches behind compactions.

Actually, in either case +1 from me.


was (Author: lhofhansl):
Was just going to +1... Missed the private readers bit.
I added the private.readers features and order to decouple compactions readers 
from operational readers (user scans, gets, etc).
Defensively I defaulted this to false, but I think we can safely enable that 
always... But if we wanted to stay defensively we could default to the value of 
drop caches behind compactions.

Actually, in either case +1 from me.

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098-v3.patch, HBASE-14098-v4.patch, HBASE-14098-v5.patch, 
 HBASE-14098.patch








[jira] [Commented] (HBASE-14098) Allow dropping caches behind compactions

2015-07-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647988#comment-14647988
 ] 

Lars Hofhansl commented on HBASE-14098:
---

Was just going to +1... Missed the private readers bit.
I added the private.readers feature in order to decouple compaction readers 
from operational readers (user scans, gets, etc).
Defensively I defaulted this to false, but I think we can safely enable that 
always... But if we wanted to stay defensive we could default to the value of 
drop caches behind compactions.

Actually, in either case +1 from me.

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098-v3.patch, HBASE-14098-v4.patch, HBASE-14098-v5.patch, 
 HBASE-14098.patch








[jira] [Commented] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.2 compiler.

2015-07-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647455#comment-14647455
 ] 

Hadoop QA commented on HBASE-14172:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12747939/HBASE-14172-branch-1.patch
  against branch-1 branch at commit 5f1129c799e9c273dfd58a7fc87d5e654061607b.
  ATTACHMENT ID: 12747939

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+lastComparison = 
Boolean.valueOf(isSetBloomFilterType()).compareTo(other.isSetBloomFilterType());
+lastComparison = 
Boolean.valueOf(isSetBloomFilterVectorSize()).compareTo(other.isSetBloomFilterVectorSize());
+lastComparison = 
Boolean.valueOf(isSetBloomFilterNbHashes()).compareTo(other.isSetBloomFilterNbHashes());
+lastComparison = 
Boolean.valueOf(isSetBlockCacheEnabled()).compareTo(other.isSetBlockCacheEnabled());
+  iface.mutateRowTs(args.tableName, args.row, args.mutations, 
args.timestamp, args.attributes);
+  result.success = iface.scannerOpen(args.tableName, args.startRow, 
args.columns, args.attributes);
+  result.success = iface.scannerOpenWithStop(args.tableName, 
args.startRow, args.stopRow, args.columns, args.attributes);
+  result.success = iface.scannerOpenWithPrefix(args.tableName, 
args.startAndPrefix, args.columns, args.attributes);
+  result.success = iface.scannerOpenTs(args.tableName, args.startRow, 
args.columns, args.timestamp, args.attributes);
+  result.success = iface.scannerOpenWithStopTs(args.tableName, 
args.startRow, args.stopRow, args.columns, args.timestamp, args.attributes);

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.replication.TestReplicationSmallTests

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14934//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14934//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14934//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14934//console

This message is automatically generated.

 Upgrade existing thrift binding using thrift 0.9.2 compiler.
 

 Key: HBASE-14172
 URL: https://issues.apache.org/jira/browse/HBASE-14172
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-14172-branch-1.patch, HBASE-14172.patch








[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647475#comment-14647475
 ] 

Hadoop QA commented on HBASE-14154:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12747943/HBASE-14154-0.98-v1.patch
  against 0.98 branch at commit 5f1129c799e9c273dfd58a7fc87d5e654061607b.
  ATTACHMENT ID: 12747943

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 16 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
21 warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14935//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14935//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14935//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14935//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14935//console

This message is automatically generated.

 DFS Replication should be configurable at column family level
 -

 Key: HBASE-14154
 URL: https://issues.apache.org/jira/browse/HBASE-14154
 Project: HBase
  Issue Type: New Feature
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: HBASE-14154-0.98-v1.patch, HBASE-14154-0.98.patch, 
 HBASE-14154-branch-1-v1.patch, HBASE-14154-branch-1.patch, 
 HBASE-14154-v1.patch, HBASE-14154.patch


 There are cases where a user wants control over the number of hfile 
 copies he/she can have in the cluster.
 For e.g.: for a test table the user would like to have only one copy instead of 
 three (the default).
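A per-family replication setting of this kind can be sketched as below. The FamilyDescriptor class, its method names, and the validation rule are all hypothetical stand-ins for illustration, not the actual HColumnDescriptor API from the patch:

```java
// Hypothetical stand-in for a column-family descriptor; illustrates a
// per-family DFS replication setting with basic input validation.
public class FamilyDescriptor {
    // 0 means "defer to the HDFS default replication factor".
    private short dfsReplication = 0;

    public FamilyDescriptor setDFSReplication(short replication) {
        // Reject nonsensical values instead of passing them through to HDFS.
        if (replication < 0) {
            throw new IllegalArgumentException(
                "replication must be >= 0 (0 = HDFS default), got " + replication);
        }
        this.dfsReplication = replication;
        return this;
    }

    public short getDFSReplication() {
        return dfsReplication;
    }

    public static void main(String[] args) {
        // A test table keeps a single HDFS copy instead of the usual three.
        FamilyDescriptor cf = new FamilyDescriptor().setDFSReplication((short) 1);
        System.out.println(cf.getDFSReplication()); // 1
    }
}
```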





[jira] [Commented] (HBASE-14162) Fixing maven target for regenerating thrift classes fails against 0.9.2

2015-07-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647419#comment-14647419
 ] 

Hadoop QA commented on HBASE-14162:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747936/HBASE-14162_v2.patch
  against master branch at commit 5f1129c799e9c273dfd58a7fc87d5e654061607b.
  ATTACHMENT ID: 12747936

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev-support patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestJMXListener

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s):   
at 
org.apache.hadoop.hbase.wal.TestWALSplit.testTrailingGarbageCorruptionLogFileSkipErrorsFalseThrows(TestWALSplit.java:582)
at 
org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScanBase.testScan(TestTableInputFormatScanBase.java:243)
at 
org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan2.testScanOPPToEmpty(TestTableInputFormatScan2.java:71)
at 
org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testCombiner(TestTableMapReduceBase.java:106)
at 
org.apache.hadoop.hbase.wal.TestWALSplit.testMiddleGarbageCorruptionSkipErrorsReadsHalfOfFile(TestWALSplit.java:509)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14933//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14933//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14933//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14933//console

This message is automatically generated.

 Fixing maven target for regenerating thrift classes fails against 0.9.2
 ---

 Key: HBASE-14162
 URL: https://issues.apache.org/jira/browse/HBASE-14162
 Project: HBase
  Issue Type: Bug
  Components: build, Thrift
Affects Versions: 2.0.0, 1.3.0
Reporter: Sean Busbey
Assignee: Srikanth Srungarapu
Priority: Blocker
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14162-branch-1.patch, HBASE-14162.patch, 
 HBASE-14162_v2.patch


 HBASE-14045 updated the thrift version, but our enforcer rule is still 
 checking 0.9.0.
 {code}
 $ git checkout master
 Switched to branch 'master'
 Your branch is up-to-date with 'origin/master'.
 $ mvn compile -Pcompile-thrift -DskipTests
 [INFO] Scanning for projects...
 ... SNIP ...
 [INFO] 
 
 [INFO] Building HBase - Thrift 2.0.0-SNAPSHOT
 [INFO] 
 
 [INFO] 
 [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce) @ hbase-thrift ---
 [INFO] 
 [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-thrift-version) @ 
 hbase-thrift ---
 [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
 with message:
 -
 -
 [FATAL] 
 ==
 [FATAL] HBase Thrift requires the thrift generator version 0.9.0.
 [FATAL] Setting it to something else needs to be reviewed for wire and 
 behavior compatibility.
 [FATAL] 
 ==
 -
 -
 [INFO] 
 
 [INFO] Reactor 

[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647391#comment-14647391
 ] 

Hadoop QA commented on HBASE-14154:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747932/HBASE-14154-v1.patch
  against master branch at commit 5f1129c799e9c273dfd58a7fc87d5e654061607b.
  ATTACHMENT ID: 12747932

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 20 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot.testExportFileSystemState(TestMobExportSnapshot.java:285)
at 
org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot.testExportFileSystemState(TestMobExportSnapshot.java:259)
at 
org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot.testExportWithTargetName(TestMobExportSnapshot.java:217)
at 
org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot.testExportFileSystemState(TestMobExportSnapshot.java:270)
at 
org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot.testExportFileSystemState(TestMobExportSnapshot.java:259)
at 
org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot.testExportWithTargetName(TestMobExportSnapshot.java:217)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14932//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14932//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14932//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14932//console

This message is automatically generated.

 DFS Replication should be configurable at column family level
 -

 Key: HBASE-14154
 URL: https://issues.apache.org/jira/browse/HBASE-14154
 Project: HBase
  Issue Type: New Feature
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: HBASE-14154-0.98-v1.patch, HBASE-14154-0.98.patch, 
 HBASE-14154-branch-1-v1.patch, HBASE-14154-branch-1.patch, 
 HBASE-14154-v1.patch, HBASE-14154.patch


 There are cases where a user wants control over the number of hfile 
 copies he/she can have in the cluster.
 For e.g.: for a test table the user would like to have only one copy instead of 
 three (the default).


