[jira] [Created] (HBASE-14870) Backport namespace permissions to 0.98

2015-11-23 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-14870:
--

 Summary: Backport namespace permissions to 0.98
 Key: HBASE-14870
 URL: https://issues.apache.org/jira/browse/HBASE-14870
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Fix For: 0.98.17


Backport namespace permissions to 0.98. The new permission checks will be 
disabled by default for behavioral compatibility with previous releases, as 
we did when we introduced enforcement of the EXEC permission. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14870) Backport namespace permissions to 0.98

2015-11-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reassigned HBASE-14870:
--

Assignee: Andrew Purtell

> Backport namespace permissions to 0.98
> --
>
> Key: HBASE-14870
> URL: https://issues.apache.org/jira/browse/HBASE-14870
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.98.17
>
>
> Backport namespace permissions to 0.98. The new permission checks will be 
> disabled by default for behavioral compatibility with previous releases, as 
> we did when we introduced enforcement of the EXEC permission. 





[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15022862#comment-15022862
 ] 

Hudson commented on HBASE-14843:


SUCCESS: Integrated in HBase-1.2-IT #301 (See 
[https://builds.apache.org/job/HBase-1.2-IT/301/])
HBASE-14843 TestWALProcedureStore.testLoad is flakey (matteo.bertozzi: rev 
44b3e4af9adbaffe938afc02ebda367435a0e3a3)
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java


> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I've seen it twice recently; see:
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update:
> It failed once again today: 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/





[jira] [Updated] (HBASE-14793) Allow limiting size of block into L1 block cache.

2015-11-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14793:
---
Fix Version/s: 0.98.17

> Allow limiting size of block into L1 block cache.
> -
>
> Key: HBASE-14793
> URL: https://issues.apache.org/jira/browse/HBASE-14793
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14793-v1.patch, HBASE-14793-v3.patch, 
> HBASE-14793-v4.patch, HBASE-14793-v5.patch, HBASE-14793-v6.patch, 
> HBASE-14793-v7.patch, HBASE-14793.patch
>
>
> G1GC does really badly with long-lived large objects. Let's allow limiting the 
> size of a block that can be kept in the block cache.
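The idea in the quoted description can be sketched roughly as follows. This is an illustration only, under the assumption of a simple size-cap check on insert; `SizeCappedL1Cache` and its fields are hypothetical names, not the actual HBASE-14793 patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch (hypothetical class, not the HBASE-14793 patch): an
// on-heap L1 cache that declines to cache any block larger than a configured
// threshold, so G1GC never has to manage long-lived humongous objects.
public class SizeCappedL1Cache {
  private final long maxBlockSize;
  private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

  public SizeCappedL1Cache(long maxBlockSize) {
    this.maxBlockSize = maxBlockSize;
  }

  /** Returns false (block skipped) when the block exceeds the size cap. */
  public boolean cacheBlock(String key, byte[] block) {
    if (block.length > maxBlockSize) {
      return false; // too large for L1; serve it from L2/disk instead
    }
    cache.put(key, block);
    return true;
  }

  public byte[] getBlock(String key) {
    return cache.get(key);
  }
}
```

The point of checking at insert time is that oversized blocks never become long-lived heap objects in the first place.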





[jira] [Commented] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15022613#comment-15022613
 ] 

Lars Hofhansl commented on HBASE-14777:
---

I guess I messed HBASE-12988 up. Sorry about that :(
I'll do an extra review of this patch/addendum now.


> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777-addendum.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> It's happening due to incorrect removal of entries from the replication 
> entries list. 
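The stack trace in the quoted description is the classic stale-index removal bug on an `ArrayList`. A hypothetical illustration (not the actual HBASE-14777 code; the class and method names here are made up for the demo):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustration only: removing ArrayList entries by indices computed before an
// earlier removal shifts later elements, so a stale index can land past the
// new size and throw the IndexOutOfBoundsException seen above.
public class RemoveByIndexPitfall {

  // Returns true when removal by a stale index throws.
  static boolean staleIndexRemovalThrows() {
    List<String> entries = new ArrayList<>(List.of("e0", "e1"));
    try {
      entries.remove(0); // size is now 1
      entries.remove(1); // stale index: out of bounds
      return false;
    } catch (IndexOutOfBoundsException e) {
      return true;
    }
  }

  // Safe variant: remove through the iterator, which stays consistent as
  // elements are removed.
  static List<String> removeAllSafely(List<String> entries) {
    for (Iterator<String> it = entries.iterator(); it.hasNext(); ) {
      it.next();
      it.remove();
    }
    return entries;
  }

  public static void main(String[] args) {
    System.out.println(staleIndexRemovalThrows()); // true
    System.out.println(removeAllSafely(new ArrayList<>(List.of("a", "b"))));
  }
}
```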





[jira] [Commented] (HBASE-14703) not collect stats when call HTable.mutateRow

2015-11-23 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15022668#comment-15022668
 ] 

Jesse Yates commented on HBASE-14703:
-

{quote}
optional bool enabled = 4 [default = false];
Excuse me, what's the purpose for this option.
{quote}

It's just some cruft left in my example addendum from when I was trying to 
figure out how to better implement stats. Unless you use it somewhere - I didn't 
see it - let's pull it out.

bq. But not remove RegionLoadStats in ResultOrException for PB parser not 
crashed when user upgrade cluster.

What do you mean? It's an optional field, so if it's not there on the wire, PB 
will just ignore it (which is why PB generates the #hasLoadStats() method).

bq. Maybe we should add process flag back into MultiResponse. wdyt?

ClientProtos.MultiResponse (the PB) or MultiResponse (o.a.h.hbase.client 
class)? I think by marking it as an EMPTY_RESULT we are implicitly saying 
'processed'. The only other option for that value is as an exception, right?
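The #hasLoadStats() behavior described above can be mimicked with a toy sketch. This is not the real protobuf-generated ClientProtos class; the field type here is a placeholder, and the class name is illustrative:

```java
// Sketch of protobuf's hasX() accessor for an optional field: when the field
// is absent on the wire (e.g. a response from an older server), parsing does
// not crash; hasLoadStats() simply returns false.
public class ResultOrExceptionSketch {
  private Object loadStats; // null when absent on the wire

  public boolean hasLoadStats() { return loadStats != null; }
  public Object getLoadStats() { return loadStats; }
  public void setLoadStats(Object stats) { this.loadStats = stats; }

  public static void main(String[] args) {
    ResultOrExceptionSketch fromOldServer = new ResultOrExceptionSketch();
    System.out.println(fromOldServer.hasLoadStats()); // false, and no crash

    ResultOrExceptionSketch fromNewServer = new ResultOrExceptionSketch();
    fromNewServer.setLoadStats(new Object());
    System.out.println(fromNewServer.hasLoadStats()); // true
  }
}
```

This is why keeping the optional field during a rolling upgrade is safe: old parsers skip unknown fields, and new parsers tolerate absent ones.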

> not collect stats when call HTable.mutateRow 
> -
>
> Key: HBASE-14703
> URL: https://issues.apache.org/jira/browse/HBASE-14703
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Assignee: Heng Chen
> Fix For: 2.0.0
>
> Attachments: HBASE-14702_v5.2_addendum-addendum.patch, 
> HBASE-14703-5.2-addendum.patch, HBASE-14703-async.patch, 
> HBASE-14703-start.patch, HBASE-14703-v4.1.patch, HBASE-14703-v4.patch, 
> HBASE-14703.patch, HBASE-14703_v1.patch, HBASE-14703_v2.patch, 
> HBASE-14703_v3.patch, HBASE-14703_v5.1.patch, HBASE-14703_v5.2.patch, 
> HBASE-14703_v5.patch, HBASE-14703_v6.patch
>
>
> In {{AsyncProcess.SingleServerRequestRunnable}}, it seems we update 
> serverStatistics twice.
> The first is that we wrap {{RetryingCallable}} in 
> {{StatsTrackingRpcRetryingCaller}}, and update serverStatistics when we 
> call {{callWithRetries}} and {{callWithoutRetries}}. Related code below:
> {code}
>   @Override
>   public T callWithRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
>   @Override
>   public T callWithoutRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
> {code}
> The second is after we get the response: in {{receiveMultiAction}}, we 
> update again. 
> {code}
> // update the stats about the region, if its a user table. We don't want to 
> slow down
> // updates to meta tables, especially from internal updates (master, etc).
> if (AsyncProcess.this.connection.getStatisticsTracker() != null) {
>   result = ResultStatsUtil.updateStats(result,
>   AsyncProcess.this.connection.getStatisticsTracker(), server, regionName);
> }
> {code}
> It seems that {{StatsTrackingRpcRetryingCaller}} is NOT necessary; should we remove it?





[jira] [Commented] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15022950#comment-15022950
 ] 

Hadoop QA commented on HBASE-14719:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773863/HBASE-14719.3.patch
  against master branch at commit 55087ce8887b5be38b0fda0dda3fbf2f92c13778.
  ATTACHMENT ID: 12773863

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16641//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16641//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16641//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16641//console

This message is automatically generated.

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.1.patch, HBASE-14719.2.patch, 
> HBASE-14719.3.patch, HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.





[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Status: Open  (was: Patch Available)

Toggling patch submission

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a namespace can consume.
> ["an" should be "and"]
> ...but can be conjured on the fly while the table is up an running.
> [Malformed link (text appears as follows when rendered in a browser):]
> Puts are executed via Table.put (writeBuffer) or 
> link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch(java.util.List,
>  java.lang.Object[])[Table.batch] (non-writeBuffer).
> ["regions" should appear 

[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Release Note: 
Corrections to content of "book.html", which is pulled from various *.adoc 
files and *.xml files.
-- corrects typos/misspellings
-- corrects incorrectly formatted links

New patch (v4) contains additional commit to fix "long lines" problems present 
in the original patch.

  was:
Corrections to content of "book.html", which is pulled from various *.adoc 
files and *.xml files.
-- corrects typos/misspellings
-- corrects incorrectly formatted links

  Status: Patch Available  (was: Open)

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>

[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Attachment: HBASE-14825-v4.patch

Okay -- this patch contains an additional commit which takes care of the long 
lines (greater than 100 characters) that were among the lines modified in the 
first commit.

This *should* take care of the first problem noted in my last patch submission.

However, it does not directly deal with the second problem, output by 
Jenkins as follows:
-1 core tests. The patch failed these unit tests:
org.apache.hadoop.hbase.procedure2.store.wal.TestWALProcedureStore

We'll see if this problem persists in this revised patch submission (v4).

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>

[jira] [Commented] (HBASE-14865) Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection

2015-11-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021940#comment-15021940
 ] 

Hadoop QA commented on HBASE-14865:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12773783/HBASE-14865-master.patch
  against master branch at commit 55087ce8887b5be38b0fda0dda3fbf2f92c13778.
  ATTACHMENT ID: 12773783

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 27 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
18689 checkstyle errors (more than the master's current 18686 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.security.TestAsyncSecureIPC

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16638//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16638//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16638//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16638//console

This message is automatically generated.

> Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection
> ---
>
> Key: HBASE-14865
> URL: https://issues.apache.org/jira/browse/HBASE-14865
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14865-master.patch
>
>
> Currently, we can set the value of hbase.rpc.protection to one of 
> authentication/integrity/privacy. It is then used to set 
> {{javax.security.sasl.qop}} in SaslUtil.java.
> The problem is, if a cluster wants to switch from one QOP to another, it'll 
> have to take downtime. A rolling upgrade will create a situation where some 
> nodes have the old value and some the new, which will prevent any 
> communication between them. There will be a similar issue when clients try 
> to connect.
> {{javax.security.sasl.qop}} can take a list of QOPs in preference order, 
> so a transition from qop1 to qop2 can be done like this:
> "qop1" --> "qop2,qop1" --> rolling restart --> "qop2" --> rolling restart
> We need to change hbase.rpc.protection to accept a list too.
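The mapping described can be sketched as a small helper. This is a sketch under the assumption that the usual SaslUtil mapping applies (authentication → auth, integrity → auth-int, privacy → auth-conf); `QopList` and `toSaslQop` are illustrative names, not the patch's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: translate a comma-separated hbase.rpc.protection value into the
// comma-joined preference list that javax.security.sasl.qop accepts.
public class QopList {
  static String toSaslQop(String protection) {
    StringBuilder sb = new StringBuilder();
    for (String p : protection.split(",")) {
      if (sb.length() > 0) sb.append(',');
      switch (p.trim()) {
        case "authentication": sb.append("auth"); break;
        case "integrity":      sb.append("auth-int"); break;
        case "privacy":        sb.append("auth-conf"); break;
        default: throw new IllegalArgumentException("Unknown QOP: " + p);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    Map<String, String> saslProps = new HashMap<>();
    // During a transition, list the new QOP first so it is preferred.
    saslProps.put("javax.security.sasl.qop", toSaslQop("privacy,authentication"));
    System.out.println(saslProps.get("javax.security.sasl.qop")); // auth-conf,auth
  }
}
```

With both QOPs advertised, upgraded and not-yet-upgraded peers can still negotiate a common level mid-rolling-restart.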





[jira] [Commented] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-11-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021943#comment-15021943
 ] 

ramkrishna.s.vasudevan commented on HBASE-13082:


bq. Do it up here where compactedfiles is introduced.
Ya, okay. I will add it in StoreFileManager, where we introduce these 
compactedfiles. Also removed getCompactedfiles from Store.java.
bq.ImmutableList is a guava-ism? 
Removed the usage of guava API.
bq.Yeah, below has a side-effect. Instead return what was sorted and have the 
caller assign:
Ya done.
bq.Why every '2' minutes? Any reason? 5 minutes?
The TTLCleaner works every 5 mins for the secondary region replica. So before 
that we need to move the compacted files to the archive. Hence it is 2 mins.
bq.You are looking at an enum? Why not just look at the refcount? Have a method 
isReferenced? Or isAliveStill? Or isInScan?
I have changed these things now. The logic is pretty simple: every time we 
get the files for scans, increment the refCount (we do that only on the 
active Store files). Scan completion decrements the count. Once compaction is 
done, mark the boolean as compactedAway. The compaction cleaner chore then 
just checks that the count is zero and the boolean is true; if so, it moves 
the file to the archive.
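The refcount-plus-compactedAway scheme described above can be sketched as follows (illustrative names, not the actual HBase fields or classes):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the scheme: scans increment the refcount when they pick up a
// store file and decrement it on completion; compaction marks the file
// compactedAway; the cleaner chore archives a file only when both hold.
public class StoreFileRef {
  private final AtomicInteger refCount = new AtomicInteger();
  private volatile boolean compactedAway;

  public void beginScan() { refCount.incrementAndGet(); }
  public void endScan()   { refCount.decrementAndGet(); }
  public void markCompactedAway() { compactedAway = true; }

  public boolean isSafeToArchive() {
    return compactedAway && refCount.get() == 0;
  }

  public static void main(String[] args) {
    StoreFileRef f = new StoreFileRef();
    f.beginScan();
    f.markCompactedAway();
    System.out.println(f.isSafeToArchive()); // false: a scan still holds it
    f.endScan();
    System.out.println(f.isSafeToArchive()); // true
  }
}
```

The cleaner chore never races with in-flight scans because a file that is still referenced simply fails the `isSafeToArchive` check until the last scan completes.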
bq.So, we need Store Interface to have public Collection 
getCompactedfiles() { 
Not needed any more. Will use the StorefileManager API. I think configuring the 
Cleaner per region makes sense because the region can collectively issue the 
cleaner across all its stores. 
bq.Is this added in this patch? If so, closeAfterCompaction? or 
isCloseAfterCompaction?
closeAfterCompaction() will be the name.
bq.We make filesToRemove even if we may not use it it? i.e. compacted is false. 
We create this array to hold one file only? Then clear it?
Changed this. Will directly collect the compactedfiles and just remove them.
bq.Who calls this closeAndArchiveCompactedFiles ? The chore thread?
Yes.
bq.We do a copy here and operate on the copy?
The original can change if, just after we get the compacted files, another 
compaction completes. As we do in other areas, obtaining a read lock before 
getting the files for scanners, I have added the read lock here just for 
copying the compacted files.
bq..down in archiveAndRemoveCompactedFiles? There are no references to the 
file, right?
Currently the storefiles are updated under the write lock. Similarly the 
compacted files are also updated with the same write lock. So the removal of 
the compacted files is also done under the write lock. This is basically to 
make the updates to the compactedfiles list atomic. 
bq. { ACTIVE, DISCARDED; }
Now no more enums.
bq.doing it in StoreFile is not right.. .this is meta info on the StoreFile 
instances. StoreFileManager?
Agree that StoreFile is not right and StoreFileInfo is better. I think we can 
do this in a separate JIRA. But if we do it in StoreFileManager, which is 
currently an interface, every impl of the SFManager would have to take care of 
this state and ref counting (like the default and StripeSFM).
bq.Should return Collection and internally you do the Immutable stuff 
(good practice)
Okie. I just followed what clearStoreFiles() does.
bq.In StoreFileScanner, when would scanner be null?
Changed this now. Considering that we have two separate lists for the current 
storefiles and the compacted files, this may no longer be needed.
bq.Maybe a timer on scans? If goes on longer than a minute have it return and 
then clean up compacted files New issue.
Ya true. New Issue.
bq.Is checkFlushed the right name for the method?
Changed to checkResetHeap.
bq.Why is it a CompactedHFilesCleaner cleaner and not a HFileCleaner?
Changed to CompactedHFilesDischarger.  Does that sound good?
bq.Yeah, why call checkFlushed in shipped?
I rechecked this flow fully. Calling this in shipped() or close() may not be 
needed, because after the shipped() call we will anyway be calling one of the 
scan APIs like next(), reseek(), peek(), etc.
But regarding not calling checkResetHeap in reseek(), peek(), seek(), etc., I 
think it's okay. The reason is that every next() may call reseek() or seek() 
internally, and if at that time we can reset the heap on flush, we ensure that 
we don't hold up the snapshot for a long time. One downside could be that if 
there is no flush during a scan and there are a lot of reseeks(), we end up 
checking the volatile every time. But I think that is acceptable? 



> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 

[jira] [Updated] (HBASE-14826) Small improvement in KVHeap seek() API

2015-11-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14826:
---
Attachment: HBASE-14826_1.patch

This is what I will commit. Should this be in trunk alone? [~lhofhansl] - What 
do you think?

> Small improvement in KVHeap seek() API
> --
>
> Key: HBASE-14826
> URL: https://issues.apache.org/jira/browse/HBASE-14826
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-14826.patch, HBASE-14826_1.patch
>
>
> Currently in the seek/reseek() APIs we tend to do a lot of priority-queue 
> related operations. We initially add the current scanner to the heap, then 
> poll and again add the scanner back if the seekKey is greater than the top 
> key in that scanner. Since the KVs are always going to be in increasing 
> order, and in the ideal scan flow every seek/reseek is followed by a next() 
> call, it should be ok if we start with checking the current scanner and then 
> do a poll to get the next scanner. Just avoid the initial PQ.add(current) 
> call. This could save some comparisons. 
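The idea above can be sketched in a self-contained toy model (simplified to integer keys; the real KeyValueHeap is more involved, and this assumes the other heap scanners are already positioned at or beyond the seek key — all names here are illustrative):

```java
import java.util.PriorityQueue;

// Simplified model of the proposed seek path: check the current
// scanner's top key first and touch the priority queue only when the
// current scanner cannot satisfy the seek.
class MiniScanner {
    private final int[] keys;
    private int pos = 0;
    MiniScanner(int... keys) { this.keys = keys; }
    Integer peek() { return pos < keys.length ? keys[pos] : null; }
    // Advance to the first key >= seekKey; true if one exists.
    boolean seek(int seekKey) {
        while (pos < keys.length && keys[pos] < seekKey) pos++;
        return pos < keys.length;
    }
}

class MiniHeap {
    MiniScanner current;
    final PriorityQueue<MiniScanner> heap =
        new PriorityQueue<>((a, b) -> Integer.compare(a.peek(), b.peek()));

    // Returns the smallest available key >= seekKey, or null if exhausted.
    Integer seek(int seekKey) {
        // Fast path: the current scanner's top already covers the seek
        // key, so we skip the PQ.add(current)/poll pair entirely.
        if (current != null && current.peek() != null && current.peek() >= seekKey) {
            return current.peek();
        }
        // Slow path: seek the current scanner, re-add it if still live,
        // then take the scanner with the smallest top from the heap.
        if (current != null && current.seek(seekKey)) {
            heap.add(current);
        }
        current = heap.poll();
        return current == null ? null : current.peek();
    }
}
```

When consecutive seeks stay within the current scanner (the common case in a sorted scan), no heap operations happen at all, which is where the saved comparisons come from.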





[jira] [Updated] (HBASE-14826) Small improvement in KVHeap seek() API

2015-11-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14826:
---
Status: Open  (was: Patch Available)

> Small improvement in KVHeap seek() API
> --
>
> Key: HBASE-14826
> URL: https://issues.apache.org/jira/browse/HBASE-14826
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-14826.patch
>
>
> Currently in the seek/reseek() APIs we tend to do a lot of priority-queue 
> related operations. We initially add the current scanner to the heap, then 
> poll and again add the scanner back if the seekKey is greater than the top 
> key in that scanner. Since the KVs are always going to be in increasing 
> order, and in the ideal scan flow every seek/reseek is followed by a next() 
> call, it should be ok if we start with checking the current scanner and then 
> do a poll to get the next scanner. Just avoid the initial PQ.add(current) 
> call. This could save some comparisons. 





[jira] [Updated] (HBASE-14826) Small improvement in KVHeap seek() API

2015-11-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14826:
---
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> Small improvement in KVHeap seek() API
> --
>
> Key: HBASE-14826
> URL: https://issues.apache.org/jira/browse/HBASE-14826
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-14826.patch, HBASE-14826_1.patch
>
>
> Currently in the seek/reseek() APIs we tend to do a lot of priority-queue 
> related operations. We initially add the current scanner to the heap, then 
> poll and again add the scanner back if the seekKey is greater than the top 
> key in that scanner. Since the KVs are always going to be in increasing 
> order, and in the ideal scan flow every seek/reseek is followed by a next() 
> call, it should be ok if we start with checking the current scanner and then 
> do a poll to get the next scanner. Just avoid the initial PQ.add(current) 
> call. This could save some comparisons. 





[jira] [Updated] (HBASE-14865) Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection

2015-11-23 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14865:
-
Attachment: HBASE-14865-master.patch

> Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection
> ---
>
> Key: HBASE-14865
> URL: https://issues.apache.org/jira/browse/HBASE-14865
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Appy
> Attachments: HBASE-14865-master.patch
>
>
> Currently, we can set the value of hbase.rpc.protection to one of 
> authentication/integrity/privacy. It is then used to set 
> {{javax.security.sasl.qop}} in SaslUtil.java.
> The problem is, if a cluster wants to switch from one qop to another, it'll 
> have to take a downtime. A rolling upgrade will create a situation where 
> some nodes have the old value and some have the new, which will prevent any 
> communication between them. There will be a similar issue when clients try 
> to connect.
> {{javax.security.sasl.qop}} can take in a list of QOPs in preference order. 
> So a transition from qop1 to qop2 can be easily done like this:
> "qop1" --> "qop2,qop1" --> rolling restart --> "qop2" --> rolling restart
> We need to change hbase.rpc.protection to accept a list too.
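As an illustrative sketch (a hypothetical helper, not the actual SaslUtil code), the list form would translate each comma-separated entry to its standard SASL QOP token while preserving preference order:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of what the proposed change implies: translate a
// comma-separated hbase.rpc.protection value into the corresponding
// javax.security.sasl.qop list, in the same preference order, so e.g.
// "privacy,authentication" becomes "auth-conf,auth" during a rolling
// transition.
class QopListSketch {
    static String toSaslQop(String rpcProtection) {
        List<String> out = new ArrayList<>();
        for (String p : rpcProtection.split(",")) {
            switch (p.trim().toLowerCase()) {
                case "authentication": out.add("auth");      break;
                case "integrity":      out.add("auth-int");  break;
                case "privacy":        out.add("auth-conf"); break;
                default: throw new IllegalArgumentException("Unknown protection: " + p);
            }
        }
        return String.join(",", out);
    }
}
```

Since SASL negotiates the first mutually supported QOP in the list, old and new nodes can interoperate throughout the two rolling restarts.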





[jira] [Commented] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021763#comment-15021763
 ] 

Hadoop QA commented on HBASE-14825:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773780/HBASE-14825-v4.patch
  against master branch at commit 55087ce8887b5be38b0fda0dda3fbf2f92c13778.
  ATTACHMENT ID: 12773780

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 22 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16637//console

This message is automatically generated.

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be 

[jira] [Commented] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2015-11-23 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021855#comment-15021855
 ] 

Heng Chen commented on HBASE-11393:
---

[~ashish singhi] Thanks for your review. I will fix what you mentioned in RB 
in the next version.
Let's see what [~enis] will say. 


> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v11.patch, HBASE-11393_v2.patch, 
> HBASE-11393_v3.patch, HBASE-11393_v4.patch, HBASE-11393_v5.patch, 
> HBASE-11393_v6.patch, HBASE-11393_v7.patch, HBASE-11393_v8.patch, 
> HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in the format 
> "table1:cf1,cf2;table2:cfA,cfB" in zookeeper for the table-cf to replication 
> peer mapping. 
> This results in ugly parsing code. We should make this a PB object. 
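For context, the string-based parsing being replaced looks roughly like this (an illustrative parser, not HBase's actual code; assuming a table with no listed families means "all families"):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative parser for the "table1:cf1,cf2;table2:cfA,cfB" format
// described above; a null value means "replicate all families".
class TableCfsParser {
    static Map<String, List<String>> parse(String spec) {
        Map<String, List<String>> result = new HashMap<>();
        for (String entry : spec.split(";")) {
            entry = entry.trim();
            if (entry.isEmpty()) continue;
            String[] parts = entry.split(":");
            List<String> cfs = null;
            if (parts.length > 1 && !parts[1].trim().isEmpty()) {
                cfs = Arrays.asList(parts[1].trim().split(","));
            }
            result.put(parts[0].trim(), cfs);
        }
        return result;
    }
}
```

A PB message with repeated table/family fields removes this hand-rolled splitting along with its edge cases (empty entries, missing family lists).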





[jira] [Updated] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-11-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13082:
---
Attachment: HBASE-13082_16.patch

Updated patch for QA.

> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082.pdf, HBASE-13082_1.pdf, 
> HBASE-13082_12.patch, HBASE-13082_13.patch, HBASE-13082_14.patch, 
> HBASE-13082_15.patch, HBASE-13082_16.patch, HBASE-13082_1_WIP.patch, 
> HBASE-13082_2.pdf, HBASE-13082_2_WIP.patch, HBASE-13082_3.patch, 
> HBASE-13082_4.patch, HBASE-13082_9.patch, HBASE-13082_9.patch, 
> HBASE-13082_withoutpatch.jpg, HBASE-13082_withpatch.jpg, 
> LockVsSynchronized.java, gc.png, gc.png, gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example Phoenix does not lock RegionScanner.nextRaw() as 
> required in the documentation (not picking on Phoenix; this one is my fault, 
> as I told them it's OK)
> * possible starving of flushes and compactions under heavy read load. 
> RegionScanner operations would keep getting the locks and the 
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.





[jira] [Commented] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-11-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15022032#comment-15022032
 ] 

Hadoop QA commented on HBASE-14866:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773792/HBASE-14866_v1.patch
  against master branch at commit 55087ce8887b5be38b0fda0dda3fbf2f92c13778.
  ATTACHMENT ID: 12773792

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16639//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16639//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16639//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16639//console

This message is automatically generated.

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866.patch, HBASE-14866_v1.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.
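The intent of the fix can be sketched with a simple map-based model (illustrative only, not the actual HBaseConfiguration/ReplicationPeerConfig API): overlay every peer-level property on top of the base configuration instead of copying only the ZooKeeper quorum.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative merge: start from the job's base configuration, then
// apply every property carried by the peer config so none are dropped
// when building the peer connection's configuration.
class PeerConfMergeSketch {
    static Map<String, String> merge(Map<String, String> base,
                                     Map<String, String> peerProps) {
        Map<String, String> merged = new HashMap<>(base);
        merged.putAll(peerProps); // peer-specific values take precedence
        return merged;
    }
}
```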





[jira] [Commented] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021836#comment-15021836
 ] 

Daniel Vimont commented on HBASE-14825:
---

Alright, the latest Jenkins feedback is "The patch command could not apply the 
patch.".

The patch (containing two commits) applies without a problem in my Ubuntu 
environment (following a [git pull --rebase] just to be sure I'm applying the 
patch to the latest version of the master branch).

The next thing I would normally do in situations like this (where things are 
simply not working but I have no diagnostic feedback to guide me) is to go back 
to the beginning and start from scratch: make a single modification to a single 
typo in one of the adoc files, and try submitting it through the process to see 
if I can get it to be accepted by the Jenkins system.

I will do this tomorrow, unless somebody could advise me of a potentially 
better path to take.

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this 

[jira] [Commented] (HBASE-14189) CF Level BC setting should override global one.

2015-11-23 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021852#comment-15021852
 ] 

Heng Chen commented on HBASE-14189:
---

ping [~anoop.hbase]  :)

> CF Level BC setting should override global one.
> ---
>
> Key: HBASE-14189
> URL: https://issues.apache.org/jira/browse/HBASE-14189
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-14189.patch, HBASE-14189_v1.patch, 
> HBASE-14189_v2.patch, HBASE-14189_v3.patch, HBASE-14189_v4.patch, 
> HBASE-14189_v5.patch, HBASE-14189_v6.patch
>
>
> The original description is ambiguous; I think I will rewrite it.
> Let's look at the {{CacheConfig}} constructor first
> {code}
>   public CacheConfig(Configuration conf, HColumnDescriptor family) {
> this(CacheConfig.instantiateBlockCache(conf),
> family.isBlockCacheEnabled(),
> family.isInMemory(),
> // For the following flags we enable them regardless of per-schema 
> settings
> // if they are enabled in the global configuration.
> conf.getBoolean(CACHE_BLOCKS_ON_WRITE_KEY,
> DEFAULT_CACHE_DATA_ON_WRITE) || family.isCacheDataOnWrite(),
> conf.getBoolean(CACHE_INDEX_BLOCKS_ON_WRITE_KEY,
> DEFAULT_CACHE_INDEXES_ON_WRITE) || family.isCacheIndexesOnWrite(),
> conf.getBoolean(CACHE_BLOOM_BLOCKS_ON_WRITE_KEY,
> DEFAULT_CACHE_BLOOMS_ON_WRITE) || family.isCacheBloomsOnWrite(),
> conf.getBoolean(EVICT_BLOCKS_ON_CLOSE_KEY,
> DEFAULT_EVICT_ON_CLOSE) || family.isEvictBlocksOnClose(),
> conf.getBoolean(CACHE_DATA_BLOCKS_COMPRESSED_KEY, 
> DEFAULT_CACHE_DATA_COMPRESSED),
> conf.getBoolean(PREFETCH_BLOCKS_ON_OPEN_KEY,
> DEFAULT_PREFETCH_ON_OPEN) || family.isPrefetchBlocksOnOpen(),
> conf.getBoolean(HColumnDescriptor.CACHE_DATA_IN_L1,
> HColumnDescriptor.DEFAULT_CACHE_DATA_IN_L1) || 
> family.isCacheDataInL1(),
> 
> conf.getBoolean(DROP_BEHIND_CACHE_COMPACTION_KEY,DROP_BEHIND_CACHE_COMPACTION_DEFAULT)
>  );
>   }
> {code}
> If we dig into it, we will see that {{CacheConfig.cacheDataOnRead}} is used 
> to accept {{family.isBlockCacheEnabled()}}.
> I think this is confusing, given the comment about {{cacheDataOnRead}}:
> {code}
>   /**
>* Whether blocks should be cached on read (default is on if there is a
>* cache but this can be turned off on a per-family or per-request basis).
>* If off we will STILL cache meta blocks; i.e. INDEX and BLOOM types.
>* This cannot be disabled.
>*/
>   private boolean cacheDataOnRead;
> {code}
> So I think we should use another variable to represent 
> {{family.isBlockCacheEnabled()}}.
> The second point is that we use 'or' to decide whether {{cacheDataOnWrite}} 
> is on or off when both the CF and global levels have this setting.
> {code}
> conf.getBoolean(CACHE_BLOCKS_ON_WRITE_KEY,
> DEFAULT_CACHE_DATA_ON_WRITE) || family.isCacheDataOnWrite()
> {code}
> IMO we should use CF Level setting to override global setting. 
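The proposed override semantics could be sketched like this (a hypothetical tri-state helper, not the actual HColumnDescriptor API): an explicitly set CF-level value wins; otherwise fall back to the global setting, instead of OR-ing the two flags.

```java
// Illustrative resolution of a cache flag: null means "not set at the
// CF level", in which case the global configuration value applies.
// With the OR approach, a CF-level "false" could never disable a
// globally enabled flag; this resolution makes that possible.
class CacheFlagSketch {
    static boolean resolve(Boolean cfLevel, boolean globalDefault) {
        return cfLevel != null ? cfLevel : globalDefault;
    }
}
```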





[jira] [Commented] (HBASE-13153) Bulk Loaded HFile Replication

2015-11-23 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021743#comment-15021743
 ] 

Ashish Singhi commented on HBASE-13153:
---

[~jerryhe], thanks for the comments.

bq. Another region server RPC handler --> holds region write lock --> transfers 
files to be bulk loaded into the region from remote cluster synchronously
Not the remote cluster; it will be local at this point, as all the files are 
copied first from the source to the peer cluster, and only then is the bulk 
load initiated, which will just rename these files.

bq. Multiple handlers on the peer cluster can potentially be blocked
Yes, agreed. This point was earlier raised by [~devaraj] as well; I have noted 
it down and plan to perhaps add another QoS for bulk load as part of another 
jira, as it will also help in the normal bulk load case.

bq. Now that the peer cluster 'server id' needs to read files directly from 
source cluster hbase.root directory. In a secure cluster, I recall that the 
hbase.root has been changed to be only accessible by the current 'server id'. 
Now they need to match
I did not get what you mean. But we have done internal testing for this by 
giving the peer cluster user read permission on the source cluster FS, as 
mentioned in the design doc.

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-13153-v1.patch, HBASE-13153-v10.patch, 
> HBASE-13153-v11.patch, HBASE-13153-v12.patch, HBASE-13153-v13.patch, 
> HBASE-13153-v14.patch, HBASE-13153-v15.patch, HBASE-13153-v16.patch, 
> HBASE-13153-v17.patch, HBASE-13153-v18.patch, HBASE-13153-v2.patch, 
> HBASE-13153-v3.patch, HBASE-13153-v4.patch, HBASE-13153-v5.patch, 
> HBASE-13153-v6.patch, HBASE-13153-v7.patch, HBASE-13153-v8.patch, 
> HBASE-13153-v9.patch, HBASE-13153.patch, HBase Bulk Load 
> Replication-v1-1.pdf, HBase Bulk Load Replication-v2.pdf, HBase Bulk Load 
> Replication-v3.pdf, HBase Bulk Load Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase Replication feature to deal with a 
> disaster tolerance scenario. But we encounter an issue: we use bulkload very 
> frequently, and because bulkload bypasses the write path it will not 
> generate WAL, so the data will not be replicated to the backup cluster. It's 
> inappropriate to bulkload twice, on both the active cluster and the backup 
> cluster. So I advise making some modifications to the bulkload feature to 
> enable bulkload to both the active cluster and the backup cluster.





[jira] [Updated] (HBASE-14865) Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection

2015-11-23 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14865:
-
Assignee: Appy
  Status: Patch Available  (was: Open)

> Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection
> ---
>
> Key: HBASE-14865
> URL: https://issues.apache.org/jira/browse/HBASE-14865
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14865-master.patch
>
>
> Currently, we can set the value of hbase.rpc.protection to one of 
> authentication/integrity/privacy. It is then used to set 
> {{javax.security.sasl.qop}} in SaslUtil.java.
> The problem is, if a cluster wants to switch from one qop to another, it'll 
> have to take a downtime. A rolling upgrade will create a situation where 
> some nodes have the old value and some have the new, which will prevent any 
> communication between them. There will be a similar issue when clients try 
> to connect.
> {{javax.security.sasl.qop}} can take in a list of QOPs in preference order. 
> So a transition from qop1 to qop2 can be easily done like this:
> "qop1" --> "qop2,qop1" --> rolling restart --> "qop2" --> rolling restart
> We need to change hbase.rpc.protection to accept a list too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-11-23 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14866:
--
Attachment: HBASE-14866_v1.patch

Currently, hbase-server has no dependency on gson, so let's change it to Jackson.

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866.patch, HBASE-14866_v1.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.
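The merge the description calls for can be sketched as follows. This is an illustrative helper, not the actual HBase API: it copies every peer-side property over a copy of the base configuration, instead of carrying over only the ZooKeeper quorum address.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: merge all properties from the replication peer's
// configuration over a copy of the base configuration, so peer-specific
// settings (security, ZK quorum, etc.) take effect in the peer connection.
public class PeerConfMerge {
    static Map<String, String> mergePeerConf(Map<String, String> base,
                                             Map<String, String> peer) {
        Map<String, String> merged = new HashMap<>(base);
        merged.putAll(peer); // peer-side properties override the base ones
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> base = new HashMap<>();
        base.put("hbase.zookeeper.quorum", "local-zk:2181");
        base.put("hbase.rpc.protection", "authentication");

        Map<String, String> peer = new HashMap<>();
        peer.put("hbase.zookeeper.quorum", "peer-zk:2181");
        peer.put("hbase.security.authentication", "kerberos");

        Map<String, String> merged = mergePeerConf(base, peer);
        System.out.println(merged.get("hbase.zookeeper.quorum"));      // peer-zk:2181
        System.out.println(merged.get("hbase.rpc.protection"));        // authentication
        System.out.println(merged.get("hbase.security.authentication")); // kerberos
    }
}
```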



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021813#comment-15021813
 ] 

Hadoop QA commented on HBASE-14843:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773771/HBASE-14843-v0.patch
  against master branch at commit 55087ce8887b5be38b0fda0dda3fbf2f92c13778.
  ATTACHMENT ID: 12773771

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16636//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16636//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16636//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16636//console

This message is automatically generated.

> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I have seen it twice recently; see:
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14689) Addendum and unit test for HBASE-13471

2015-11-23 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-14689:
--
Attachment: hbase-14689-after-revert.patch

> Addendum and unit test for HBASE-13471
> --
>
> Key: HBASE-14689
> URL: https://issues.apache.org/jira/browse/HBASE-14689
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14689-after-revert.patch, 
> hbase-14689-after-revert.patch, hbase-14689_v1-branch-1.1.patch, 
> hbase-14689_v1-branch-1.1.patch, hbase-14689_v1.patch
>
>
> One of our customers ran into HBASE-13471, which resulted in all the handlers 
> getting blocked and various other issues. While backporting the issue, I 
> noticed that there is one more case where we might go into an infinite loop: 
> if a row lock cannot be acquired (due to a previous leak, for example, which 
> we have seen in Phoenix before), it will cause a similar infinite loop. 
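One way to keep a leaked row lock from blocking a handler forever is to bound the acquisition with a deadline. The sketch below is illustrative only (it uses a plain ReentrantLock, not HBase's actual row-lock implementation) and shows the failure surfacing as a false return instead of an unbounded wait.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: acquire a row lock with a deadline so a previously
// leaked lock surfaces as a fast failure rather than an infinite loop.
public class BoundedRowLock {
    static boolean tryAcquire(ReentrantLock rowLock, long timeoutMs)
            throws InterruptedException {
        // tryLock with a timeout returns false when the deadline passes.
        return rowLock.tryLock(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ReentrantLock rowLock = new ReentrantLock();
        CountDownLatch held = new CountDownLatch(1);
        Thread leaker = new Thread(() -> {
            rowLock.lock();            // simulate a leaked row lock
            held.countDown();
            try { Thread.sleep(10_000); } catch (InterruptedException ignored) { }
        });
        leaker.setDaemon(true);
        leaker.start();
        held.await();                  // wait until the lock is actually held
        // The caller fails fast instead of spinning on the leaked lock.
        System.out.println(tryAcquire(rowLock, 50)); // false: lock is leaked
    }
}
```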



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14689) Addendum and unit test for HBASE-13471

2015-11-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023232#comment-15023232
 ] 

Enis Soztutar commented on HBASE-14689:
---

I have checked TestReplicationEndpointWithMultipleWAL. Does not seem related 
though. [~busbey] any idea? 

See: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16580/artifact/hbase/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleWAL-output.txt

Let me re-attach. 

> Addendum and unit test for HBASE-13471
> --
>
> Key: HBASE-14689
> URL: https://issues.apache.org/jira/browse/HBASE-14689
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14689-after-revert.patch, 
> hbase-14689-after-revert.patch, hbase-14689_v1-branch-1.1.patch, 
> hbase-14689_v1-branch-1.1.patch, hbase-14689_v1.patch
>
>
> One of our customers ran into HBASE-13471, which resulted in all the handlers 
> getting blocked and various other issues. While backporting the issue, I 
> noticed that there is one more case where we might go into an infinite loop: 
> if a row lock cannot be acquired (due to a previous leak, for example, which 
> we have seen in Phoenix before), it will cause a similar infinite loop. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14223) Meta WALs are not cleared if meta region was closed and RS aborts

2015-11-23 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023280#comment-15023280
 ] 

Devaraj Das commented on HBASE-14223:
-

Looks fine to me. One question I have: why is a null WAL passed to the method 
HRegion.warmupHRegion?

> Meta WALs are not cleared if meta region was closed and RS aborts
> -
>
> Key: HBASE-14223
> URL: https://issues.apache.org/jira/browse/HBASE-14223
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4
>
> Attachments: HBASE-14223logs, hbase-14223_v0.patch, 
> hbase-14223_v1-branch-1.patch, hbase-14223_v2-branch-1.patch, 
> hbase-14223_v3-branch-1.patch, hbase-14223_v3-branch-1.patch
>
>
> When an RS opens meta, and later closes it, the WAL (FSHLog) is not closed. 
> The last WAL file just sits there in the RS WAL directory. If the RS stops 
> gracefully, the WAL file for meta is deleted. Otherwise, if the RS aborts, the 
> WAL for meta is not cleaned up. It is also not split (which is correct) since the 
> master determines that the RS no longer hosts meta at the time of the RS abort. 
> From a cluster after running ITBLL with CM, I see a lot of {{-splitting}} 
> directories left uncleaned: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs
> Found 31 items
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 01:14 
> /apps/hbase/data/WALs/hregion-58203265
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 07:54 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433489308745-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 09:28 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433494382959-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 10:01 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433498252205-splitting
> ...
> {code}
> The directories contain WALs from meta: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting
> Found 2 items
> -rw-r--r--   3 hbase hadoop 201608 2015-06-05 03:15 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
> -rw-r--r--   3 hbase hadoop  44420 2015-06-05 04:36 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> The RS hosted the meta region for some time: 
> {code}
> 2015-06-05 03:14:28,692 INFO  [PostOpenDeployTasks:1588230740] 
> zookeeper.MetaTableLocator: Setting hbase:meta region location in ZooKeeper 
> as os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285
> ...
> 2015-06-05 03:15:17,302 INFO  
> [RS_CLOSE_META-os-enis-dal-test-jun-4-5:16020-0] regionserver.HRegion: Closed 
> hbase:meta,,1.1588230740
> {code}
> In between, a WAL is created: 
> {code}
> 2015-06-05 03:15:11,707 INFO  
> [RS_OPEN_META-os-enis-dal-test-jun-4-5:16020-0-MetaLogRoller] wal.FSHLog: 
> Rolled WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
>  with entries=385, filesize=196.88 KB; new WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> When CM killed the region server later master did not see these WAL files: 
> {code}
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:46,075 
> INFO  [MASTER_SERVER_OPERATIONS-os-enis-dal-test-jun-4-3:16000-0] 
> master.SplitLogManager: started splitting 2 logs in 
> [hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting]
>  for [os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285]
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:47,300 
> INFO  [main-EventThread] wal.WALSplitter: Archived processed log 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
>  to 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/oldWALs/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:50,497 
> INFO  [main-EventThread] 

[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023180#comment-15023180
 ] 

Hudson commented on HBASE-14843:


SUCCESS: Integrated in HBase-1.2 #393 (See 
[https://builds.apache.org/job/HBase-1.2/393/])
HBASE-14843 TestWALProcedureStore.testLoad is flakey (matteo.bertozzi: rev 
44b3e4af9adbaffe938afc02ebda367435a0e3a3)
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java


> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I have seen it twice recently; see:
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023313#comment-15023313
 ] 

Hudson commented on HBASE-14843:


FAILURE: Integrated in HBase-1.3-IT #331 (See 
[https://builds.apache.org/job/HBase-1.3-IT/331/])
HBASE-14843 TestWALProcedureStore.testLoad is flakey (matteo.bertozzi: rev 
221ae58555f34bd8a1de8757b15cb7d3301a90ff)
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java


> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I have seen it twice recently; see:
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.3 compiler.

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023312#comment-15023312
 ] 

Hudson commented on HBASE-14172:


FAILURE: Integrated in HBase-1.3-IT #331 (See 
[https://builds.apache.org/job/HBase-1.3-IT/331/])
HBASE-14172 Upgrade existing thrift binding using thrift 0.9.3 compiler. (enis: 
rev 4ff5a46439588df2fb72470624a63aef19a7dc5d)
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDelete.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TResult.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDurability.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAuthorization.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TRowMutations.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAppend.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDeleteType.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TAppend.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* pom.xml
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TServerName.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TMutation.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java
* hbase-thrift/pom.xml


> Upgrade existing thrift binding using thrift 0.9.3 compiler.
> 
>
> Key: HBASE-14172
> URL: https://issues.apache.org/jira/browse/HBASE-14172
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14172-branch-1.001.patch, 
> HBASE-14172-branch-1.2.001.patch, HBASE-14172-branch-1.patch, 
> HBASE-14172.001.patch, HBASE-14172.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023227#comment-15023227
 ] 

Hudson commented on HBASE-14843:


SUCCESS: Integrated in HBase-1.3 #390 (See 
[https://builds.apache.org/job/HBase-1.3/390/])
HBASE-14843 TestWALProcedureStore.testLoad is flakey (matteo.bertozzi: rev 
221ae58555f34bd8a1de8757b15cb7d3301a90ff)
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java


> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I have seen it twice recently; see:
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.3 compiler.

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023226#comment-15023226
 ] 

Hudson commented on HBASE-14172:


SUCCESS: Integrated in HBase-1.3 #390 (See 
[https://builds.apache.org/job/HBase-1.3/390/])
HBASE-14172 Upgrade existing thrift binding using thrift 0.9.3 compiler. (enis: 
rev 4ff5a46439588df2fb72470624a63aef19a7dc5d)
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDelete.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDurability.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TServerName.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
* pom.xml
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TAppend.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAppend.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TMutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAuthorization.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TResult.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
* hbase-thrift/pom.xml
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDeleteType.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TRowMutations.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java


> Upgrade existing thrift binding using thrift 0.9.3 compiler.
> 
>
> Key: HBASE-14172
> URL: https://issues.apache.org/jira/browse/HBASE-14172
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14172-branch-1.001.patch, 
> HBASE-14172-branch-1.2.001.patch, HBASE-14172-branch-1.patch, 
> HBASE-14172.001.patch, HBASE-14172.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Status: Open  (was: Patch Available)

About to submit a new test patch with a single commit which changes a single 
78-character line in architecture.adoc (removing an extraneous "the").

To get to this point, I completely deleted all HBase artifacts from my 
Ubuntu-based NetBeans environment and did a fresh anonymous clone of HBase, 
checked out a new branch, made the single-line change, committed the change, 
and generated the patch. We'll see if this new test patch works or not.

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a 

[jira] [Commented] (HBASE-14355) Scan different TimeRange for each column family

2015-11-23 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023747#comment-15023747
 ] 

Anoop Sam John commented on HBASE-14355:


Change looks fine.  Attach it as an addendum patch.  This was committed 10 days 
back and is still open.  Do we need a new JIRA, [~saint@gmail.com]?

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-v1.patch, HBASE-14355-v10.patch, 
> HBASE-14355-v11.patch, HBASE-14355-v12.patch, HBASE-14355-v2.patch, 
> HBASE-14355-v3.patch, HBASE-14355-v4.patch, HBASE-14355-v5.patch, 
> HBASE-14355-v6.patch, HBASE-14355-v7.patch, HBASE-14355-v8.patch, 
> HBASE-14355-v9.patch, HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte family, long minTime, long maxTime), 
> then store it in a Map.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 
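The per-family fallback proposed above can be sketched in isolation. The class and method names below ({{TimeRange}}, {{setTimeRange}}, {{rangeFor}}) are illustrative placeholders following the proposal's wording, not the committed HBase API, and a String stands in for the byte[] family key:

```java
import java.util.HashMap;
import java.util.Map;

public class FamilyTimeRanges {
    // Minimal stand-in for HBase's TimeRange: an inclusive [min, max) window.
    static class TimeRange {
        final long min, max;
        TimeRange(long min, long max) { this.min = min; this.max = max; }
    }

    // Per-family overrides, consulted before the table-level default.
    private final Map<String, TimeRange> familyRanges = new HashMap<>();
    private TimeRange tableRange = new TimeRange(0, Long.MAX_VALUE);

    // Existing table-level API: one range for the whole scan.
    void setTimeRange(long min, long max) {
        tableRange = new TimeRange(min, max);
    }

    // Proposed family-level API: override the range for one family.
    void setTimeRange(String family, long min, long max) {
        familyRanges.put(family, new TimeRange(min, max));
    }

    // Resolution rule from the proposal: use the family's range if one
    // was set, otherwise fall back to the table-level range.
    TimeRange rangeFor(String family) {
        TimeRange r = familyRanges.get(family);
        return r != null ? r : tableRange;
    }
}
```

Old servers would simply never consult {{familyRanges}}, which matches the compatibility note above: new clients against old servers lose only the per-family filtering.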



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14737) Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, value) is called

2015-11-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14737:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   1.2.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, value) is 
> called
> --
>
> Key: HBASE-14737
> URL: https://issues.apache.org/jira/browse/HBASE-14737
> Project: HBase
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Pankaj Kumar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14737.patch
>
>
> HColumnDescriptor caches the value of VERSIONS in a cachedMaxVersions member 
> variable. This member variable should be reset or cleared when 
> setValue(HConstants.VERSIONS, value) is called, like this:
> {code}
>   static final byte[] VERSIONS_BYTES = Bytes.toBytes(HConstants.VERSIONS);
>   public HColumnDescriptor setValue(byte[] key, byte[] value) {
>     if (Bytes.compareTo(VERSIONS_BYTES, key) == 0) {
>       cachedMaxVersions = UNINITIALIZED;
>     }
>     values.put(new ImmutableBytesWritable(key),
>         new ImmutableBytesWritable(value));
>     return this;
>   }
> {code}
> Otherwise, you continue getting back cachedMaxVersions instead of the updated 
> value.
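The symptom can be reproduced with a stripped-down stand-in for the descriptor. This toy class is not HBase code; it only illustrates the caching pattern the report describes, where a cached parse of a settable value must be invalidated when that value is overwritten:

```java
import java.util.HashMap;
import java.util.Map;

public class CachedVersions {
    // Toy stand-in for HColumnDescriptor's cachedMaxVersions behavior.
    private static final int UNINITIALIZED = -1;
    private final Map<String, String> values = new HashMap<>();
    private int cachedMaxVersions = UNINITIALIZED;

    CachedVersions setValue(String key, String value) {
        if ("VERSIONS".equals(key)) {
            // The proposed fix: drop the cached parse so the next read
            // re-derives it from the freshly stored value.
            cachedMaxVersions = UNINITIALIZED;
        }
        values.put(key, value);
        return this;
    }

    int getMaxVersions() {
        // Without the reset above, this would keep returning the value
        // parsed on the first call, ignoring later setValue() updates.
        if (cachedMaxVersions == UNINITIALIZED) {
            cachedMaxVersions = Integer.parseInt(values.getOrDefault("VERSIONS", "1"));
        }
        return cachedMaxVersions;
    }
}
```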



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Attachment: HBASE-14825-v5-test.patch

Attaching a test patch containing a single commit, which changes a single short 
line in architecture.doc (removing an extraneous "the"). If this patch 
succeeds, then I will repeatedly (a) make a few more changes and (b) generate 
and submit a new test patch, until I either (1) successfully get all the 
changes past Jenkins, OR (2) zero in on where the problem was that was 
resulting in the "patch failed to apply" situation in the "v4" patch submission.

NOTE: I am now fully aware that, when making a correction to a line that is 
currently greater than 100 characters in length, I must (in addition to making 
the correction) separate that single (too-long) line into separate, shorter 
lines (none of which may end with whitespace). :)

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825-v5-test.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does 

[jira] [Commented] (HBASE-14865) Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection

2015-11-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023755#comment-15023755
 ] 

Hadoop QA commented on HBASE-14865:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12773950/HBASE-14865-master-v2.patch
  against master branch at commit 6b11adbfa4aa565eff1bb141170c8e183aed3e4b.
  ATTACHMENT ID: 12773950

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 27 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
18690 checkstyle errors (more than the master's current 18686 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16643//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16643//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16643//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16643//console

This message is automatically generated.

> Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection
> ---
>
> Key: HBASE-14865
> URL: https://issues.apache.org/jira/browse/HBASE-14865
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14865-master-v2.patch, HBASE-14865-master.patch
>
>
> Currently, we can set the value of hbase.rpc.protection to one of 
> authentication/integrity/privacy. It is then used to set 
> {{javax.security.sasl.qop}} in SaslUtil.java.
> The problem is, if a cluster wants to switch from one qop to another, it'll 
> have to take a downtime. Rolling upgrade will create a situation where some 
> nodes have old value and some have new, which'll prevent any communication 
> between them. There will be similar issue when clients will try to connect.
> {{javax.security.sasl.qop}} can take in a list of QOP in preferences order. 
> So a transition from qop1 to qop2 can be easily done like this
> "qop1" --> "qop2,qop1" --> rolling restart --> "qop2" --> rolling restart
> Need to change hbase.rpc.protection to accept a list too.
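The JDK's {{javax.security.sasl}} API already accepts a comma-separated, preference-ordered QOP list under {{Sasl.QOP}}, so the transition above is expressible at the SASL layer today. The protection-name mapping in this sketch mirrors what SaslUtil is described as doing (authentication/integrity/privacy to auth/auth-int/auth-conf) but is an assumption for illustration, not the HBase code:

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.sasl.Sasl;

public class QopProps {
    // Assumed mapping from hbase.rpc.protection names to SASL QOP tokens.
    static String toSaslQop(String protection) {
        switch (protection) {
            case "authentication": return "auth";
            case "integrity":      return "auth-int";
            case "privacy":        return "auth-conf";
            default: throw new IllegalArgumentException(protection);
        }
    }

    // Build SASL client/server properties from a comma-separated list of
    // protection levels, preserving the caller's preference order as the
    // SASL API allows.
    static Map<String, String> buildProps(String protectionList) {
        StringBuilder qop = new StringBuilder();
        for (String p : protectionList.split(",")) {
            if (qop.length() > 0) qop.append(',');
            qop.append(toSaslQop(p.trim()));
        }
        Map<String, String> props = new HashMap<>();
        props.put(Sasl.QOP, qop.toString());
        return props;
    }

    public static void main(String[] args) {
        // Mid-transition from privacy to authentication: list both,
        // preferred QOP first. Prints: auth-conf,auth
        System.out.println(buildProps("privacy,authentication").get(Sasl.QOP));
    }
}
```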



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14355) Scan different TimeRange for each column family

2015-11-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023774#comment-15023774
 ] 

stack commented on HBASE-14355:
---

What [~anoop.hbase] says, [~churromorales]. Addendums go in soon after the 
original... just do a new patch... Relate it to this one. Thanks for reporting 
the issue.

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-v1.patch, HBASE-14355-v10.patch, 
> HBASE-14355-v11.patch, HBASE-14355-v12.patch, HBASE-14355-v2.patch, 
> HBASE-14355-v3.patch, HBASE-14355-v4.patch, HBASE-14355-v5.patch, 
> HBASE-14355-v6.patch, HBASE-14355-v7.patch, HBASE-14355-v8.patch, 
> HBASE-14355-v9.patch, HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte family, long minTime, long maxTime), 
> then store it in a Map.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023776#comment-15023776
 ] 

Misty Stanley-Jones commented on HBASE-14825:
-

You are nearly there with your v4 patch. It applies cleanly. The only problem 
is that it represents two commits. The HBase team decided to try to enforce 
one-issue, one-commit. So you need to squash your two changes together. On your 
branch, issue the following command:
{code}
git rebase -i origin/master
{code}

This will bring up an editor window with two commits, each line starting with 
"pick". Change the second one to "squash", save your changes, and you will be 
given a chance to edit the commit messages. By default, both commit messages 
are shown. Save your changes again, and you should have only a single commit. 
At that point, make your patch again.
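For illustration, the todo list that {{git rebase -i}} opens typically looks like the following (the hashes and commit messages here are placeholders, not the actual HBASE-14825 commits):

{code}
pick a1b2c3d HBASE-14825 Correct typos in ref guide
squash e4f5a6b HBASE-14825 More ref guide typo corrections
{code}

Changing the second line's {{pick}} to {{squash}} melds that commit into the one above it, which is what produces the single commit the patch should contain.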

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825-v5-test.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will 

[jira] [Updated] (HBASE-14355) Scan different TimeRange for each column family

2015-11-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14355:
---
Attachment: (was: HBASE-14355-v12.patch)

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-v1.patch, HBASE-14355-v10.patch, 
> HBASE-14355-v11.patch, HBASE-14355-v2.patch, HBASE-14355-v3.patch, 
> HBASE-14355-v4.patch, HBASE-14355-v5.patch, HBASE-14355-v6.patch, 
> HBASE-14355-v7.patch, HBASE-14355-v8.patch, HBASE-14355-v9.patch, 
> HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte family, long minTime, long maxTime), 
> then store it in a Map.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-11-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023787#comment-15023787
 ] 

ramkrishna.s.vasudevan commented on HBASE-13082:


bq.Do you have to have 'Compacted' in there since files are discharged by the 
compactor always?
I first thought of it as a cleaner because this chore moves only the compacted 
files to the archive dir. It does not deal with any other hfile. That is why I 
added 'Compacted'.
You can review this patch except for the name of the Chore service.

> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082.pdf, HBASE-13082_1.pdf, 
> HBASE-13082_12.patch, HBASE-13082_13.patch, HBASE-13082_14.patch, 
> HBASE-13082_15.patch, HBASE-13082_16.patch, HBASE-13082_1_WIP.patch, 
> HBASE-13082_2.pdf, HBASE-13082_2_WIP.patch, HBASE-13082_3.patch, 
> HBASE-13082_4.patch, HBASE-13082_9.patch, HBASE-13082_9.patch, 
> HBASE-13082_withoutpatch.jpg, HBASE-13082_withpatch.jpg, 
> LockVsSynchronized.java, gc.png, gc.png, gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left of.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to be remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example Phoenix does not lock RegionScanner.nextRaw() and 
> required in the documentation (not picking on Phoenix, this one is my fault 
> as I told them it's OK)
> * possible starving of flushes and compaction with heavy read load. 
> RegionScanner operations would keep getting the locks and the 
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.
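A minimal sketch (assumed structure, not HBase code) of the coarsened-lock shape described above: the store-level scanner does no locking or memory fencing of its own and relies entirely on the caller holding the region scanner's monitor, which is exactly the coprocessor contract the drawbacks list warns about:

```java
public class CoarsenedLocking {
    static class StoreScanner {
        private long readPoint;
        // Deliberately unsynchronized: the locking contract says the
        // caller must already hold the RegionScanner lock.
        long next() { return readPoint++; }
    }

    static class RegionScanner {
        private final StoreScanner store = new StoreScanner();
        // All entry points remain synchronized, so StoreScanner state is
        // only ever touched under this single coarse lock. A coprocessor
        // calling nextRaw() outside synchronization would break this.
        synchronized long nextRaw() { return store.next(); }
    }

    public static void main(String[] args) {
        RegionScanner rs = new RegionScanner();
        System.out.println(rs.nextRaw()); // prints 0
    }
}
```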



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14355) Scan different TimeRange for each column family

2015-11-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14355:
---
Status: Open  (was: Patch Available)

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-v1.patch, HBASE-14355-v10.patch, 
> HBASE-14355-v11.patch, HBASE-14355-v2.patch, HBASE-14355-v3.patch, 
> HBASE-14355-v4.patch, HBASE-14355-v5.patch, HBASE-14355-v6.patch, 
> HBASE-14355-v7.patch, HBASE-14355-v8.patch, HBASE-14355-v9.patch, 
> HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte family, long minTime, long maxTime), 
> then store it in a Map.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14355) Scan different TimeRange for each column family

2015-11-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14355:
---
Status: Patch Available  (was: Open)

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-addendum.patch, HBASE-14355-v1.patch, 
> HBASE-14355-v10.patch, HBASE-14355-v11.patch, HBASE-14355-v2.patch, 
> HBASE-14355-v3.patch, HBASE-14355-v4.patch, HBASE-14355-v5.patch, 
> HBASE-14355-v6.patch, HBASE-14355-v7.patch, HBASE-14355-v8.patch, 
> HBASE-14355-v9.patch, HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte family, long minTime, long maxTime), 
> then store it in a Map.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14355) Scan different TimeRange for each column family

2015-11-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14355:
---
Attachment: HBASE-14355-addendum.patch

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-addendum.patch, HBASE-14355-v1.patch, 
> HBASE-14355-v10.patch, HBASE-14355-v11.patch, HBASE-14355-v2.patch, 
> HBASE-14355-v3.patch, HBASE-14355-v4.patch, HBASE-14355-v5.patch, 
> HBASE-14355-v6.patch, HBASE-14355-v7.patch, HBASE-14355-v8.patch, 
> HBASE-14355-v9.patch, HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte family, long minTime, long maxTime), 
> then store it in a Map.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023796#comment-15023796
 ] 

Daniel Vimont commented on HBASE-14825:
---

Thanks for the good info, Misty. I will step through this now...

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825-v5-test.patch, HBASE-14825.patch, 
> HBASE-14825_misty_example.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a namespace can consume.
> ["an" should be "and"]
> ...but can be conjured on the fly while the table is up an running.
> [Malformed link (text appears as follows when rendered in a browser):]
> Puts are executed via Table.put (writeBuffer) or 
> 

[jira] [Updated] (HBASE-14355) Scan different TimeRange for each column family

2015-11-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14355:
---
Attachment: HBASE-14355-v12.patch

Oops noticed a bug, this logic didn't propagate to the memstore.  Can't believe 
I missed this.  [~stack], [~anoop.hbase] I've attached a patch for trunk and 
will run the tests.  

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-v1.patch, HBASE-14355-v10.patch, 
> HBASE-14355-v11.patch, HBASE-14355-v12.patch, HBASE-14355-v2.patch, 
> HBASE-14355-v3.patch, HBASE-14355-v4.patch, HBASE-14355-v5.patch, 
> HBASE-14355-v6.patch, HBASE-14355-v7.patch, HBASE-14355-v8.patch, 
> HBASE-14355-v9.patch, HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map<byte[], TimeRange>.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 





[jira] [Updated] (HBASE-14355) Scan different TimeRange for each column family

2015-11-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14355:
---
Status: Open  (was: Patch Available)

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-v1.patch, HBASE-14355-v10.patch, 
> HBASE-14355-v11.patch, HBASE-14355-v12.patch, HBASE-14355-v2.patch, 
> HBASE-14355-v3.patch, HBASE-14355-v4.patch, HBASE-14355-v5.patch, 
> HBASE-14355-v6.patch, HBASE-14355-v7.patch, HBASE-14355-v8.patch, 
> HBASE-14355-v9.patch, HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map<byte[], TimeRange>.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 





[jira] [Updated] (HBASE-14737) Clear cachedMaxVersions when setValue(VERSIONS, value) is called

2015-11-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14737:
---
Summary: Clear cachedMaxVersions when setValue(VERSIONS, value) is called  
(was: Clear cachedMaxVersions when setValue(VERSIONS, value) called)

> Clear cachedMaxVersions when setValue(VERSIONS, value) is called
> 
>
> Key: HBASE-14737
> URL: https://issues.apache.org/jira/browse/HBASE-14737
> Project: HBase
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Pankaj Kumar
> Attachments: HBASE-14737.patch
>
>
> HColumnDescriptor caches the value of VERSIONS in a cachedMaxVersions member 
> variable. This member variable should be reset or cleared when 
> setValue(HConstants.VERSIONS, value) is called, like this:
> {code}
>   static final byte[] VERSIONS_BYTES = Bytes.toBytes(HConstants.VERSIONS);
>   public HColumnDescriptor setValue(byte[] key, byte[] value) {
>     if (Bytes.compareTo(VERSIONS_BYTES, key) == 0) {
>       cachedMaxVersions = UNINITIALIZED;
>     }
>     values.put(new ImmutableBytesWritable(key),
>       new ImmutableBytesWritable(value));
>     return this;
>   }
> {code}
> Otherwise, you continue getting back cachedMaxVersions instead of the updated 
> value.
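The stale-cache behavior being fixed can be reproduced in miniature. This is a hypothetical standalone class (not HColumnDescriptor itself), using String keys instead of byte[] to keep the sketch short:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the caching bug: a cached derived value
// must be invalidated when the backing entry is overwritten, otherwise
// readers keep seeing the old value.
class CachedMaxVersions {
    private static final int UNINITIALIZED = -1;
    private static final String VERSIONS = "VERSIONS";

    private final Map<String, String> values = new HashMap<>();
    private int cachedMaxVersions = UNINITIALIZED;

    CachedMaxVersions setValue(String key, String value) {
        if (VERSIONS.equals(key)) {
            cachedMaxVersions = UNINITIALIZED;  // the fix: drop the stale cache
        }
        values.put(key, value);
        return this;
    }

    int getMaxVersions() {
        if (cachedMaxVersions == UNINITIALIZED) {
            // Recompute lazily from the backing map on first access.
            cachedMaxVersions = Integer.parseInt(values.getOrDefault(VERSIONS, "1"));
        }
        return cachedMaxVersions;
    }
}
```

Without the invalidation line in setValue, a second setValue(VERSIONS, ...) call would leave getMaxVersions returning the first value indefinitely.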





[jira] [Updated] (HBASE-14737) Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, value) is called

2015-11-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14737:
---
Summary: Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, 
value) is called  (was: Clear cachedMaxVersions when setValue(VERSIONS, value) 
is called)

> Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, value) is 
> called
> --
>
> Key: HBASE-14737
> URL: https://issues.apache.org/jira/browse/HBASE-14737
> Project: HBase
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Pankaj Kumar
> Attachments: HBASE-14737.patch
>
>
> HColumnDescriptor caches the value of VERSIONS in a cachedMaxVersions member 
> variable. This member variable should be reset or cleared when 
> setValue(HConstants.VERSIONS, value) is called, like this:
> {code}
>   static final byte[] VERSIONS_BYTES = Bytes.toBytes(HConstants.VERSIONS);
>   public HColumnDescriptor setValue(byte[] key, byte[] value) {
>     if (Bytes.compareTo(VERSIONS_BYTES, key) == 0) {
>       cachedMaxVersions = UNINITIALIZED;
>     }
>     values.put(new ImmutableBytesWritable(key),
>       new ImmutableBytesWritable(value));
>     return this;
>   }
> {code}
> Otherwise, you continue getting back cachedMaxVersions instead of the updated 
> value.





[jira] [Updated] (HBASE-14355) Scan different TimeRange for each column family

2015-11-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14355:
---
Status: Patch Available  (was: Open)

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-v1.patch, HBASE-14355-v10.patch, 
> HBASE-14355-v11.patch, HBASE-14355-v12.patch, HBASE-14355-v2.patch, 
> HBASE-14355-v3.patch, HBASE-14355-v4.patch, HBASE-14355-v5.patch, 
> HBASE-14355-v6.patch, HBASE-14355-v7.patch, HBASE-14355-v8.patch, 
> HBASE-14355-v9.patch, HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map<byte[], TimeRange>.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 





[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023740#comment-15023740
 ] 

Heng Chen commented on HBASE-14843:
---

[~mbertozzi] 
Could we move ProcedureStore.start into ProcedureExecutor.start, after 
ProcedureStore.load, just like this in ProcedureExecutor.start:
{code}
store.recoverLease();
load(abortOnCorruption);
store.start(numThreads);
{code}

So we could ensure store.start runs after load.  
And there is no need to call WALProcedureStore.start explicitly in HMaster.  
wdyt?

> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I see it twice recently, 
> see.
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/





[jira] [Updated] (HBASE-13347) Deprecate FirstKeyValueMatchingQualifiersFilter

2015-11-23 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13347:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks Abhishek for the patch.
Pushed to branch-1 and master.

> Deprecate FirstKeyValueMatchingQualifiersFilter
> ---
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13347-branch-1.patch, HBASE-13347-master-v2.patch, 
> HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package uses 
> {{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
> However we do not need that since we match columns in the scan before we 
> filter.
> Deprecate the filter in 2.0 and remove in 3.0.
> Do cleanup of RowCounter that tries to use this filter but actually doesn't.





[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Release Note: 
SUBMITTING TEST PATCH as part of the following...
Corrections to content of "book.html", which is pulled from various *.adoc 
files and *.xml files.
-- corrects typos/misspellings
-- corrects incorrectly formatted links

New patch (v4) contains additional commit to fix "long lines" problems present 
in the original patch.

  was:
Corrections to content of "book.html", which is pulled from various *.adoc 
files and *.xml files.
-- corrects typos/misspellings
-- corrects incorrectly formatted links

New patch (v4) contains additional commit to fix "long lines" problems present 
in the original patch.

  Status: Patch Available  (was: Open)

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825-v5-test.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should 

[jira] [Commented] (HBASE-14826) Small improvement in KVHeap seek() API

2015-11-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023771#comment-15023771
 ] 

ramkrishna.s.vasudevan commented on HBASE-14826:


Just now pushed to master. Will keep it open if it needs to be in other 
branches. If not will resolve it. 

> Small improvement in KVHeap seek() API
> --
>
> Key: HBASE-14826
> URL: https://issues.apache.org/jira/browse/HBASE-14826
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-14826.patch, HBASE-14826_1.patch
>
>
> Currently in seek/reseek() APIs we tend to do lot of priorityqueue related 
> operations. We initially add the current scanner to the heap, then poll and 
> again add the scanner back if the seekKey is greater than the topkey in that 
> scanner. Since the KVs are always going to be in increasing order and in 
> ideal scan flow every seek/reseek is followed by a next() call it should be 
> ok if we start with checking the current scanner and then do a poll to get 
> the next scanner. Just avoid the initial PQ.add(current) call. This could 
> save some comparisons. 
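The optimization described above — checking the current scanner before doing any priority-queue work — can be sketched with a toy heap of sorted-key scanners. This is an illustrative model, not the real KeyValueHeap code; keys are plain longs instead of KeyValues:

```java
import java.util.PriorityQueue;

// Illustrative sketch of a seek that starts from the current scanner
// instead of re-adding it to the heap and polling unconditionally.
class HeapSeekSketch {
    static class Scanner {
        final long[] keys;   // sorted ascending, like KVs in a store
        int pos;
        Scanner(long... keys) { this.keys = keys; }
        Long peek() { return pos < keys.length ? keys[pos] : null; }
        // Advance until the top key is >= the seek key.
        void seek(long key) { while (pos < keys.length && keys[pos] < key) pos++; }
    }

    Scanner current;
    // Scanners are only mutated while they are out of the heap, so the
    // comparator's view of peek() stays consistent with heap order.
    final PriorityQueue<Scanner> heap =
        new PriorityQueue<>((a, b) -> Long.compare(a.peek(), b.peek()));

    HeapSeekSketch(Scanner... scanners) {
        for (Scanner s : scanners) {
            if (s.peek() != null) heap.add(s);
        }
        current = heap.poll();
    }

    Long seek(long key) {
        Scanner scanner = current;  // start with current: no initial heap.add(current)
        current = null;
        while (scanner != null) {
            Long top = scanner.peek();
            if (top != null && top >= key) {
                Scanner next = heap.peek();
                if (next == null || top <= next.peek()) {
                    current = scanner;  // fast path: no heap reshuffle at all
                    return top;
                }
                heap.add(scanner);      // someone smaller may still satisfy the seek
            } else {
                scanner.seek(key);      // lazily advance a lagging scanner
                if (scanner.peek() != null) heap.add(scanner);
            }
            scanner = heap.poll();
        }
        return null;  // all scanners exhausted
    }
}
```

When a seek is immediately followed by another seek or next() at a nearby key — the common scan pattern — the current scanner usually already covers it, so the fast path skips both the add and the poll.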





[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-14825:

Attachment: HBASE-14825_misty_example.patch

Attaching the patch I made after I squashed your changes together and made the 
commit message a bit more standard. Let me know if you don't get to the same 
place.

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825-v5-test.patch, HBASE-14825.patch, 
> HBASE-14825_misty_example.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a namespace can consume.
> ["an" should be "and"]
> ...but can be conjured on the fly while the table is up an running.
> [Malformed link (text appears as follows when rendered in a browser):]
> Puts are 

[jira] [Commented] (HBASE-14871) Allow specifying the base branch for make_patch

2015-11-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023780#comment-15023780
 ] 

stack commented on HBASE-14871:
---

LGTM. What about a bit of doc in the script on this new option?

> Allow specifying the base branch for make_patch
> ---
>
> Key: HBASE-14871
> URL: https://issues.apache.org/jira/browse/HBASE-14871
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14871-v2.patch, HBASE-14871.patch
>
>
> Not all branches will be based off of origin/*. Lets allow the user to 
> specify which branch to base the patch off of.





[jira] [Commented] (HBASE-14420) Zombie Stomping Session

2015-11-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023783#comment-15023783
 ] 

stack commented on HBASE-14420:
---

Just saw this over in branch-1.3, a hang on these:

kalashnikov:hbase.git stack$ python ./dev-support/findHangingTests.py 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.3/391/jdk=latest1.8,label=Hadoop/consoleText
Fetching the console output from the URL
Printing hanging tests
Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport
Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportTsv
Printing Failing tests
Failing test : org.apache.hadoop.hbase.regionserver.TestAtomicOperation
Failing test : org.apache.hadoop.hbase.mapreduce.TestRowCounter



> Zombie Stomping Session
> ---
>
> Key: HBASE-14420
> URL: https://issues.apache.org/jira/browse/HBASE-14420
> Project: HBase
>  Issue Type: Umbrella
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: hangers.txt, none_fix (1).txt, none_fix.txt, 
> none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, 
> none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, 
> none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, 
> none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, 
> none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, 
> none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt
>
>
> Patch build are now failing most of the time because we are dropping zombies. 
> I confirm we are doing this on non-apache build boxes too.
> Left-over zombies consume resources on build boxes (OOME cannot create native 
> threads). Having to do multiple test runs in the hope that we can get a 
> non-zombie-making build or making (arbitrary) rulings that the zombies are 
> 'not related' is a productivity sink. And so on...
> This is an umbrella issue for a zombie stomping session that started earlier 
> this week. Will hang sub-issues of this one. Am running builds back-to-back 
> on little cluster to turn out the monsters.





[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023793#comment-15023793
 ] 

Matteo Bertozzi commented on HBASE-14843:
-

No, that's by design.
Currently we have some limitations, like the synchronous load, due to time 
limitations on my side. But I think I left a comment in the load code saying 
that some procedures may start early, without having to wait until we have read 
all the logs. In that case you need the store and the executor running before 
the load is complete.

> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I have seen it twice recently; see:
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14872) Scan different timeRange per column family doesn't percolate down to the memstore

2015-11-23 Thread churro morales (JIRA)
churro morales created HBASE-14872:
--

 Summary: Scan different timeRange per column family doesn't 
percolate down to the memstore 
 Key: HBASE-14872
 URL: https://issues.apache.org/jira/browse/HBASE-14872
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver, Scanners
Affects Versions: 2.0.0, 1.3.0
Reporter: churro morales
Assignee: churro morales
 Fix For: 2.0.0, 1.3.0, 0.98.17


In HBASE-14355, the scan-different-time-range-per-column-family feature was 
applied only to the store files, not to the memstore. This breaks the contract.
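As a toy illustration of the contract break (plain Python, not HBase code; the `scan` function and its tuple-shaped cells are invented for this sketch), here is what happens when a per-family time range filters the store files but not the memstore:

```python
# Toy model of a store with a memstore and flushed store files.
# Each cell is (row, family, timestamp, value). A per-family time range
# should filter cells from BOTH sources; the bug is that it was applied
# only to the store files.

def scan(memstore, storefiles, cf_time_range, apply_to_memstore):
    lo, hi = cf_time_range
    results = []
    for cell in storefiles:
        if lo <= cell[2] < hi:            # store files honor the range
            results.append(cell)
    for cell in memstore:
        # buggy path: memstore cells bypass the time-range check
        if not apply_to_memstore or lo <= cell[2] < hi:
            results.append(cell)
    return results

memstore = [("r1", "cf", 50, "new")]      # recent write, still in memory
storefiles = [("r1", "cf", 5, "old")]     # flushed long ago

# Ask only for cells with timestamp in [0, 10): the memstore cell must
# be excluded, but the buggy path returns it anyway.
buggy = scan(memstore, storefiles, (0, 10), apply_to_memstore=False)
fixed = scan(memstore, storefiles, (0, 10), apply_to_memstore=True)
assert [c[3] for c in buggy] == ["old", "new"]   # contract broken
assert [c[3] for c in fixed] == ["old"]          # contract honored
```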



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14872) Scan different timeRange per column family doesn't percolate down to the memstore

2015-11-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14872:
---
Attachment: HBASE-14872.patch

> Scan different timeRange per column family doesn't percolate down to the 
> memstore 
> --
>
> Key: HBASE-14872
> URL: https://issues.apache.org/jira/browse/HBASE-14872
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver, Scanners
>Affects Versions: 2.0.0, 1.3.0
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14872.patch
>
>
> In HBASE-14355, the scan-different-time-range-per-column-family feature was 
> applied only to the store files, not to the memstore. This breaks the 
> contract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14872) Scan different timeRange per column family doesn't percolate down to the memstore

2015-11-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14872:
---
Status: Patch Available  (was: Open)

> Scan different timeRange per column family doesn't percolate down to the 
> memstore 
> --
>
> Key: HBASE-14872
> URL: https://issues.apache.org/jira/browse/HBASE-14872
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver, Scanners
>Affects Versions: 2.0.0, 1.3.0
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14872.patch
>
>
> In HBASE-14355, the scan-different-time-range-per-column-family feature was 
> applied only to the store files, not to the memstore. This breaks the 
> contract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13153) Bulk Loaded HFile Replication

2015-11-23 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023827#comment-15023827
 ] 

Ashish Singhi commented on HBASE-13153:
---

bq. This HBASE-13780 restricts the access to the hbase.root. Would it impact 
this patch?
We have tested in secure mode, but with the same user in the active and peer 
clusters, so we were not required to give any extra permissions to the user in 
the peer cluster to access the active cluster FS. But if the users are 
different, then the operator has to provide read permission to the peer cluster 
user on the source cluster file system; we have mentioned this in the document 
as well, in sections 3 & 6. With these things ensured, it will not impact this 
patch.

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-13153-v1.patch, HBASE-13153-v10.patch, 
> HBASE-13153-v11.patch, HBASE-13153-v12.patch, HBASE-13153-v13.patch, 
> HBASE-13153-v14.patch, HBASE-13153-v15.patch, HBASE-13153-v16.patch, 
> HBASE-13153-v17.patch, HBASE-13153-v18.patch, HBASE-13153-v2.patch, 
> HBASE-13153-v3.patch, HBASE-13153-v4.patch, HBASE-13153-v5.patch, 
> HBASE-13153-v6.patch, HBASE-13153-v7.patch, HBASE-13153-v8.patch, 
> HBASE-13153-v9.patch, HBASE-13153.patch, HBase Bulk Load 
> Replication-v1-1.pdf, HBase Bulk Load Replication-v2.pdf, HBase Bulk Load 
> Replication-v3.pdf, HBase Bulk Load Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase Replication feature to deal with a 
> disaster tolerance scenario. But we encounter an issue: we use bulkload very 
> frequently, and because bulkload bypasses the write path it does not generate 
> WAL entries, so the data will not be replicated to the backup cluster. It's 
> inappropriate to bulkload twice, on both the active cluster and the backup 
> cluster. So I advise making some modifications to the bulkload feature to 
> enable bulkloading to both the active cluster and the backup cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14862) Add support for reporting p90 for histogram metrics

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023829#comment-15023829
 ] 

Hudson commented on HBASE-14862:


SUCCESS: Integrated in HBase-1.2 #395 (See 
[https://builds.apache.org/job/HBase-1.2/395/])
HBASE-14862 Add support for reporting p90 for histogram metrics (apurtell: rev 
4916e3805cfc4c4373ae554aee1ecb50f17797da)
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/MetricHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/DynamicMetricsRegistry.java


> Add support for reporting p90 for histogram metrics
> ---
>
> Key: HBASE-14862
> URL: https://issues.apache.org/jira/browse/HBASE-14862
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Sanjeev Lakshmanan
>Assignee: Sanjeev Lakshmanan
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14862-0.98.patch, HBASE-14862.patch
>
>
> Currently there is support for reporting p75, p95, and p99 for histogram 
> metrics. This JIRA is to add support for reporting p90.
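For context, p90 is just another quantile over the histogram's sample snapshot. A minimal nearest-rank sketch (an illustrative stand-in, not the HBase/metrics implementation; `nearest_rank` is an invented name):

```python
import math

def nearest_rank(samples, p):
    """Nearest-rank percentile, p in (0, 1], over a sorted copy."""
    s = sorted(samples)
    k = math.ceil(p * len(s)) - 1   # 1-based rank ceil(p*n), 0-indexed
    return s[k]

samples = list(range(1, 101))       # 1..100
assert nearest_rank(samples, 0.90) == 90
assert nearest_rank(samples, 0.75) == 75
assert nearest_rank(samples, 0.99) == 99
```

Adding p90 to the reporting set is then just emitting one more such quantile from the same snapshot.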



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023831#comment-15023831
 ] 

Heng Chen commented on HBASE-14843:
---

Thanks [~mbertozzi] for your explanation.
{quote}
But I think I left some comment in the load saying that some procedure may 
start early without having to wait until we have read all the logs. In that 
case you need store and executor running before load is complete
{quote}
Is there currently any ProcedureStore case that should start without waiting to 
read all the logs? I can't understand in which scenario that would happen. 

> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I have seen it twice recently; see:
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-11-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023832#comment-15023832
 ] 

stack commented on HBASE-13082:
---

Nice explanation on compactedfiles

bq.// Sorting may not be really needed here for the compacted files?

Yeah. They are just going to be deleted, right... No harm sorting though, I 
suppose... you'll delete oldest to newest?

Keep the CompactedHFilesDischarger name. It ties this chore to the stuff going 
on in the StoreManager.


Could also have the method internally do these checks for null and check for 
zero count?

2351if (copyCompactedfiles != null && copyCompactedfiles.size() != 0) {
2352  removeCompactedFiles(copyCompactedfiles);
2353}

Then all in one place.

closeAfterCompaction seems misnamed. The return is whether there are references 
outstanding and whether the file can be safely removed/closed? 

470   /**
471* Closes and archives the compacted files under this store
472*/
473   void closeAndArchiveCompactedFiles() throws IOException;

We'll only close and archive if no references and if it is marked as compacted, 
right? Otherwise, we'll do it at a later place?

So, we are going to change this in another issue?

  compactedFile.markCompacted();

i.e. marking a file as compacted rather than telling StoreFileManager it is 
compacted?

So, the StoreFile has new attributes: the current refcount and whether it has 
been compacted away. StoreFileManager has the list of compacted files. 
StoreFileManager is in charge of the list of StoreFiles in a Store. It makes 
the ruling on what to include in a Scan. It does clearFiles... and compaction. 
Shouldn't it be in charge of knowing when files are not to be included in a 
Scan/can be removed? When we mark a file compacted, should we do it on 
StoreFileManager? Can it do the refcounting? When a Scan is done, tell the 
StoreFileManager so it can do the refcounting?

From your earlier comment:

bq. We cannot have this in StorefileInfo because we only cache the Storefile 
(in the StorefileManager) and not the StorefileInfo. StoreFileInfos are created 
every time from the hfile path.

Can StoreFileManager then do refcounting and knowing what files are compacted? 
Would that be doable and put these attributes in one location?

This should be happening internal to StoreFileManager rather than out here in 
HStore?

853   Collection compactedfiles =
854   storeEngine.getStoreFileManager().clearCompactedFiles();
855   // clear the compacted files
856   if (compactedfiles != null && !compactedfiles.isEmpty()) {
857 removeCompactedFiles(compactedfiles);
858   }

It gets complicated here at the end, in closeAndArchiveCompactedFiles, where a 
lock on the Store is used to close out storefiles. And you are just following 
on from what is in here already. Ugh.

In StoreFile you have

  public boolean isCompacted() {

and then later you have on the Reader isCompactedAway. These methods should be 
named the same (And see above where hopefully, we don't have to do this on the 
StoreFile itself at all). Ditto for getRefCount (Does StoreFileManager know 
refcount?)

Looking at the additions to StoreFileManager, if compaction was kept internal 
to StoreFileManager, would you have to add these new methods?

Does StoreFileScanner have reference to StoreFileManager?

On how often to call checkResetHeap, we have to be prompt because we are 
carrying a snapshot until we do reset?


> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082.pdf, HBASE-13082_1.pdf, 
> HBASE-13082_12.patch, HBASE-13082_13.patch, HBASE-13082_14.patch, 
> HBASE-13082_15.patch, HBASE-13082_16.patch, HBASE-13082_1_WIP.patch, 
> HBASE-13082_2.pdf, HBASE-13082_2_WIP.patch, HBASE-13082_3.patch, 
> HBASE-13082_4.patch, HBASE-13082_9.patch, HBASE-13082_9.patch, 
> HBASE-13082_withoutpatch.jpg, HBASE-13082_withpatch.jpg, 
> LockVsSynchronized.java, gc.png, gc.png, gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left of.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to be remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example Phoenix does not 

[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023841#comment-15023841
 ] 

Matteo Bertozzi commented on HBASE-14843:
-

As I said, the current load() code is synchronous, so you need to wait for 
every log to be read; but that's just because I didn't have time to finish that 
code.

But we don't need to read all the logs to be able to start procedures. Think 
about this case: procedure-1 is in wal1, then we roll, and procedure-2 is in 
wal2. Both procedures can be started before reading the other wal, and we 
already know that when we read the log.

The code on the read side is almost there, but there are other things in the 
wal and the executor that need to be done. See 
https://github.com/apache/hbase/blob/master/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java#L159
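The scenario above can be modeled in a few lines. This is a toy sketch, not the proc-v2 code (the `load` function and its WAL-as-list representation are invented for illustration): reading WALs newest-first, a procedure whose most recent state lives in an already-read WAL can be handed to the executor before the older WALs are read.

```python
# Toy model of loading procedures from a set of rolled WALs, newest
# first. A procedure found in a newer WAL supersedes any older state,
# so it can become runnable while older WALs are still unread.

def load(wals_newest_first):
    """Yield (procedure, wals_still_unread) as each WAL is read."""
    seen = set()
    for i, wal in enumerate(wals_newest_first):
        for proc in wal:
            if proc not in seen:          # newest state wins
                seen.add(proc)
                yield proc, len(wals_newest_first) - i - 1

# procedure-2 was written after the roll, so it is in wal2 (newest);
# procedure-1 is only in wal1 (oldest).
wal2 = ["procedure-2"]
wal1 = ["procedure-1"]

order = list(load([wal2, wal1]))
# procedure-2 becomes runnable while one WAL is still unread:
assert order[0] == ("procedure-2", 1)
assert order[1] == ("procedure-1", 0)
```

This is why the store and the executor must both be running before load completes: the loader may start handing out procedures mid-read.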

> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I have seen it twice recently; see:
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023851#comment-15023851
 ] 

Daniel Vimont commented on HBASE-14825:
---

Yes, everything looks good now. Will resubmit my new patch as "...-v6".

Thanks again!

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825-v5-test.patch, HBASE-14825.patch, 
> HBASE-14825_misty_example.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a namespace can consume.
> ["an" should be "and"]
> ...but can be conjured on the fly while the table is up an running.
> [Malformed link (text appears as follows when rendered in a browser):]
> Puts are executed via Table.put (writeBuffer) or 
> 

[jira] [Commented] (HBASE-14735) Region may grow too big and can not be split

2015-11-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023860#comment-15023860
 ] 

stack commented on HBASE-14735:
---

[~zhoushuaifeng2] So, IIRC, if there are too many storefiles, we intentionally 
prevented split... While we might split once if there are lots of files in a 
Store, the way split works, if there are any reference files in a Store, then 
we'd not be able to split until the references had been cleaned up (compactions 
clean up references). So, if you had a region that is filling with storefiles, 
while you might be able to split once, you'd not be able to split a second time 
until after all the references had been cleaned out... and to do that, we 
needed to compact as fast as we could to remove any and all references; at the 
extreme we would hold up flushing new storefiles. That's sort of how it 
worked/works and explains some of the comments you are seeing in the code 
referenced by [~anoop.hbase]. So, now, after [~anoop.hbase]'s questions, I'm 
wary of this patch. I don't think it will really get you what you want... you 
might get one split, but then you'll run into a wall because your store will 
have reference files and can't be split till after all have been removed; i.e. 
recursive compacting... to get us back under the blocking file count.

What was going on in your cluster, do you know? Were compactions not able to 
keep up? Would splitting have made it more likely that they could keep up? 400G 
and 100+ files is not good either.
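The rule described above — a store with outstanding reference files cannot split again until compaction rewrites them — can be sketched with a toy model (the `Region` class and its fields are invented for illustration, not HBase code):

```python
# Toy model: a split seeds both daughters with reference files pointing
# at the parent's store files, and a region cannot split again while
# any references remain. Compaction rewrites references into real files.

class Region:
    def __init__(self, files, references=0):
        self.files, self.references = files, references

    def can_split(self):
        return self.references == 0

    def split(self):
        assert self.can_split()
        # each daughter starts with references to the parent's files
        return (Region(0, self.files), Region(0, self.files))

    def compact(self):
        # compaction rewrites references into real store files
        self.files += self.references
        self.references = 0

parent = Region(files=10)
d1, d2 = parent.split()
assert not d1.can_split()      # blocked: reference files present
d1.compact()
assert d1.can_split()          # only after references are cleaned up
```

Under sustained write load, each daughter must therefore fully compact before it can split again, which is the "recursive compacting" wall described above.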

> Region may grow too big and can not be split
> 
>
> Key: HBASE-14735
> URL: https://issues.apache.org/jira/browse/HBASE-14735
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, regionserver
>Affects Versions: 1.1.2, 0.98.15
>Reporter: Shuaifeng Zhou
>Assignee: Shuaifeng Zhou
> Attachments: 14735-0.98.patch, 14735-branch-1.1.patch, 
> 14735-branch-1.2.patch, 14735-branch-1.2.patch, 14735-master (2).patch, 
> 14735-master.patch, 14735-master.patch
>
>
> When a compaction completes, there may still be many storefiles in the store; 
> if CompactPriority < 0, compactSplitThread will then do a "Recursive enqueue" 
> compaction request instead of requesting a split:
> {code:title=CompactSplitThread.java|borderStyle=solid}
> if (completed) {
>   // degenerate case: blocked regions require recursive enqueues
>   if (store.getCompactPriority() <= 0) {
> requestSystemCompaction(region, store, "Recursive enqueue");
>   } else {
> // see if the compaction has caused us to exceed max region size
> requestSplit(region);
>   }
> {code}
> But in some situations, the "recursive enqueue" request may return null and 
> not build up a new compaction runner. For example, another compaction of the 
> same region may be running, and compaction selection will exclude all files 
> older than the newest files currently compacting; this may mean not enough 
> files can be selected by the "recursive enqueue" request. When this happens, 
> a split will not be triggered. If the input load is high enough, compactions 
> are always running on the region, and a split will never be triggered.
> In our cluster this situation happened, and a huge region of more than 400GB 
> with 100+ storefiles appeared. The version is 0.98.10, and trunk also has the 
> problem.
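The decision quoted in the description can be reduced to a toy model (not HBase code; `after_compaction` and its arguments are invented for illustration) that shows how the split request can be starved:

```python
# Toy model of the CompactSplitThread decision: after a compaction
# completes, a blocked store (priority <= 0) re-enqueues another
# compaction instead of requesting a split. If file selection then
# returns nothing (e.g. the candidates overlap a running compaction),
# no new runner is built and the split opportunity is silently lost.

def after_compaction(compact_priority, select_files):
    if compact_priority <= 0:
        files = select_files()
        return "compact" if files else "dropped"   # no runner, no split
    return "split"

# Healthy store: priority > 0, split is requested.
assert after_compaction(1, lambda: ["f1"]) == "split"
# Blocked store with eligible files: recursive enqueue.
assert after_compaction(-2, lambda: ["f1", "f2"]) == "compact"
# Blocked store whose candidates are all excluded: the request
# vanishes, so the region can keep growing without ever splitting.
assert after_compaction(-2, lambda: []) == "dropped"
```

Under high write load the store may sit in the last two branches forever, which matches the 400GB / 100+ storefile region reported above.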



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Attachment: HBASE-14825-v6.patch

Attempt #6 at submitting patch, after following Misty's instructions regarding 
need to squash commits into a single commit. (Note that the developer 
instructions say [ambiguously, to me] "If necessary, squash local commits...", 
whereas it would be clearer to say "It is always necessary to squash multiple 
commits into a single commit...".)

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825-v5-test.patch, HBASE-14825-v6.patch, 
> HBASE-14825.patch, HBASE-14825_misty_example.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a namespace can consume.
> 

[jira] [Commented] (HBASE-14862) Add support for reporting p90 for histogram metrics

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023863#comment-15023863
 ] 

Hudson commented on HBASE-14862:


SUCCESS: Integrated in HBase-1.3 #392 (See 
[https://builds.apache.org/job/HBase-1.3/392/])
HBASE-14862 Add support for reporting p90 for histogram metrics (apurtell: rev 
2ce27951b025d3380a987fbf3c28dcd340c42e59)
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/DynamicMetricsRegistry.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/MetricHistogram.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java


> Add support for reporting p90 for histogram metrics
> ---
>
> Key: HBASE-14862
> URL: https://issues.apache.org/jira/browse/HBASE-14862
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Sanjeev Lakshmanan
>Assignee: Sanjeev Lakshmanan
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14862-0.98.patch, HBASE-14862.patch
>
>
> Currently there is support for reporting p75, p95, and p99 for histogram 
> metrics. This JIRA is to add support for reporting p90.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Status: Open  (was: Patch Available)

Toggling patch submission

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825-v5-test.patch, HBASE-14825-v6.patch, 
> HBASE-14825.patch, HBASE-14825_misty_example.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a namespace can consume.
> ["an" should be "and"]
> ...but can be conjured on the fly while the table is up an running.
> [Malformed link (text appears as follows when rendered in a browser):]
> Puts are executed via Table.put (writeBuffer) or 
> 

[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-23 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Release Note: 
Corrections to content of "book.html", which is pulled from various *.adoc 
files and *.xml files.
-- corrects typos/misspellings
-- corrects incorrectly formatted links

  was:
SUBMITTING TEST PATCH as part of the following...
Corrections to content of "book.html", which is pulled from various *.adoc 
files and *.xml files.
-- corrects typos/misspellings
-- corrects incorrectly formatted links

New patch (v4) contains additional commit to fix "long lines" problems present 
in the original patch.

  Status: Patch Available  (was: Open)

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825-v3.patch, 
> HBASE-14825-v4.patch, HBASE-14825-v5-test.patch, HBASE-14825-v6.patch, 
> HBASE-14825.patch, HBASE-14825_misty_example.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the 

[jira] [Commented] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-11-23 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023874#comment-15023874
 ] 

Gary Helmling commented on HBASE-14866:
---

{code}
+import org.apache.htrace.fasterxml.jackson.databind.ObjectMapper;
{code}
It looks like the latest patch is using a shaded class from an HTrace 
dependency.  I don't think we should be using a shaded class.  I am also not a 
fan of serializing multiple configuration properties into a single 
configuration value.

I was planning to instead prefix the {{ReplicationPeerConfig}} configuration 
properties with a set prefix (say "hbase.verifyrep.peer."), then use the 
{{HBaseConfiguration.subset()}} method being added in HBASE-14821 to extract 
the prefixed values in the map method and merge them back into the peer 
configuration created.
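The prefix-and-merge idea can be sketched in plain Java. This is a standalone illustration, not HBase code; in the real plan, {{HBaseConfiguration.subset()}} from HBASE-14821 plays the role of the `subset()` helper below, and the "hbase.verifyrep.peer." prefix comes from the comment above:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of extracting prefixed override properties and exposing
// them under their unprefixed names, ready to merge into a peer configuration.
public class PrefixSubset {
    static Map<String, String> subset(Map<String, String> conf, String prefix) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                // Strip the prefix so the remainder is a normal property name.
                out.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> jobConf = new HashMap<>();
        jobConf.put("hbase.verifyrep.peer.hbase.zookeeper.quorum", "peer-zk:2181");
        jobConf.put("mapreduce.job.name", "verifyrep");
        Map<String, String> peerOverrides = subset(jobConf, "hbase.verifyrep.peer.");
        System.out.println(peerOverrides); // prints {hbase.zookeeper.quorum=peer-zk:2181}
    }
}
```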

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866.patch, HBASE-14866_v1.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023878#comment-15023878
 ] 

Lars Hofhansl commented on HBASE-14777:
---

In fact, the first version of the patch used a CompletionService to allow 
removing the futures in the order in which they finished, so that we would not 
retry parts that did not need it. I later decided to simplify it.
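The CompletionService approach mentioned here can be shown with a minimal standalone sketch: futures are taken in the order tasks finish, not the order they were submitted, so the caller never blocks on a slow task while faster ones are already done.

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch of completion-order retrieval with ExecutorCompletionService.
public class CompletionOrder {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        cs.submit(() -> { Thread.sleep(200); return "slow"; });
        cs.submit(() -> "fast");
        // take() returns the first *completed* future: "fast" arrives first
        // even though it was submitted second.
        System.out.println(cs.take().get()); // prints fast
        System.out.println(cs.take().get()); // prints slow
        pool.shutdown();
    }
}
```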

> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777-addendum.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> It's happening due to incorrect removal of entries from the replication 
> entries list. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-11-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023885#comment-15023885
 ] 

stack commented on HBASE-13408:
---

bq. You used to be passionate about this feature

The passion has not gone away... This could be an important enabling feature, 
especially if we can get to in-memory hfiles and smaller memstores.

bq. Specifically, we removed (undo) some of the changes to the HRegion and 
FlushPolicy classes. We moved the code for triggering in memory flush into the 
compacting memstore implementation.

Good.

bq. we did not remove the snapshot 

You mean, the comment that snapshot is an implementation detail of the default 
implementation that should not be exposed outside of DefaultMemStore? If so, 
that is fine. Yes, a follow-on issue.

bq. we did not remove the StoreSegmentScanner tier from the KeyValueScanner 
hierarchy...

Not sure what this refers to... I've been away from the patch too long... let 
me look at the patch.

> HBase In-Memory Memstore Compaction
> ---
>
> Key: HBASE-13408
> URL: https://issues.apache.org/jira/browse/HBASE-13408
> Project: HBase
>  Issue Type: New Feature
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Fix For: 2.0.0
>
> Attachments: HBASE-13408-trunk-v01.patch, 
> HBASE-13408-trunk-v02.patch, HBASE-13408-trunk-v03.patch, 
> HBASE-13408-trunk-v04.patch, HBASE-13408-trunk-v05.patch, 
> HBASE-13408-trunk-v06.patch, HBASE-13408-trunk-v07.patch, 
> HBASE-13408-trunk-v08.patch, HBASE-13408-trunk-v09.patch, 
> HBASE-13408-trunk-v10.patch, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument-ver03.pdf, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
> InMemoryMemstoreCompactionEvaluationResults.pdf, 
> InMemoryMemstoreCompactionMasterEvaluationResults.pdf, 
> InMemoryMemstoreCompactionScansEvaluationResults.pdf, 
> StoreSegmentandStoreSegmentScannerClassHierarchies.pdf
>
>
> A store unit holds a column family in a region, where the memstore is its 
> in-memory component. The memstore absorbs all updates to the store; from time 
> to time these updates are flushed to a file on disk, where they are 
> compacted. Unlike disk components, the memstore is not compacted until it is 
> written to the filesystem and optionally to block-cache. This may result in 
> underutilization of the memory due to duplicate entries per row, for example, 
> when hot data is continuously updated. 
> Generally, the faster the data accumulates in memory, the more flushes are 
> triggered and the more frequently the data sinks to disk, slowing down 
> retrieval of data, even if very recent.
> In high-churn workloads, compacting the memstore can help maintain the data 
> in memory, and thereby speed up data retrieval. 
> We suggest a new compacted memstore with the following principles:
> 1. The data is kept in memory for as long as possible
> 2. Memstore data is either compacted or in process of being compacted 
> 3. Allow a panic mode, which may interrupt an in-progress compaction and 
> force a flush of part of the memstore.
> We suggest applying this optimization only to in-memory column families.
> A design document is attached.
> This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-11-23 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023889#comment-15023889
 ] 

Heng Chen commented on HBASE-14866:
---

Oh, I see.
This issue is part of HBASE-14821. Please go ahead!
Sorry for disturbing you. :)

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866.patch, HBASE-14866_v1.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-11-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023894#comment-15023894
 ] 

stack commented on HBASE-13408:
---

Did the design doc get updated with justifications for this feature? In 
particular, principles like 'The data is kept in memory for as long as possible' 
or statements like this: "...may help in some scenarios, however it might also 
add unnecessary overhead in other scenarios without any performance gains, like 
when there are no in-memory duplicate records most of the time." Do we still 
think this last statement is true? If this feature is only of use when there are 
in-memory duplicate records -- a relatively rare instance -- then there is a lot 
of code being added for this case. Can you go bigger? Can you come up with 
arguments that this feature is advantageous 90% of the time? Above I talk of 
better perf because we'll be able to have the in-memory data in a more compact, 
performant (read-only) format than having it in ConcurrentSkipList. Flushes 
could be faster if the format in memory is an hfile (especially if the hfile 
were offheap, as came up in a recent offlist chat w/ [~anoop.hbase]). Can we 
come up with other reasons why this is the bees knees? ([~anoop.hbase], do you 
have input here, boss?) Thanks. Let me look at the patch.

> HBase In-Memory Memstore Compaction
> ---
>
> Key: HBASE-13408
> URL: https://issues.apache.org/jira/browse/HBASE-13408
> Project: HBase
>  Issue Type: New Feature
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Fix For: 2.0.0
>
> Attachments: HBASE-13408-trunk-v01.patch, 
> HBASE-13408-trunk-v02.patch, HBASE-13408-trunk-v03.patch, 
> HBASE-13408-trunk-v04.patch, HBASE-13408-trunk-v05.patch, 
> HBASE-13408-trunk-v06.patch, HBASE-13408-trunk-v07.patch, 
> HBASE-13408-trunk-v08.patch, HBASE-13408-trunk-v09.patch, 
> HBASE-13408-trunk-v10.patch, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument-ver03.pdf, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
> InMemoryMemstoreCompactionEvaluationResults.pdf, 
> InMemoryMemstoreCompactionMasterEvaluationResults.pdf, 
> InMemoryMemstoreCompactionScansEvaluationResults.pdf, 
> StoreSegmentandStoreSegmentScannerClassHierarchies.pdf
>
>
> A store unit holds a column family in a region, where the memstore is its 
> in-memory component. The memstore absorbs all updates to the store; from time 
> to time these updates are flushed to a file on disk, where they are 
> compacted. Unlike disk components, the memstore is not compacted until it is 
> written to the filesystem and optionally to block-cache. This may result in 
> underutilization of the memory due to duplicate entries per row, for example, 
> when hot data is continuously updated. 
> Generally, the faster the data accumulates in memory, the more flushes are 
> triggered and the more frequently the data sinks to disk, slowing down 
> retrieval of data, even if very recent.
> In high-churn workloads, compacting the memstore can help maintain the data 
> in memory, and thereby speed up data retrieval. 
> We suggest a new compacted memstore with the following principles:
> 1. The data is kept in memory for as long as possible
> 2. Memstore data is either compacted or in process of being compacted 
> 3. Allow a panic mode, which may interrupt an in-progress compaction and 
> force a flush of part of the memstore.
> We suggest applying this optimization only to in-memory column families.
> A design document is attached.
> This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13347) Deprecate FirstKeyValueMatchingQualifiersFilter

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023899#comment-15023899
 ] 

Hudson commented on HBASE-13347:


FAILURE: Integrated in HBase-Trunk_matrix #494 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/494/])
HBASE-13347 Deprecate FirstKeyValueMatchingQualifiersFilter. (Abhishek) 
(anoopsamjohn: rev daba867734b6ce09e61193e90fae2a97755fdb53)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFirstKeyValueMatchingQualifiersFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterSerialization.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyValueMatchingQualifiersFilter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java


> Deprecate FirstKeyValueMatchingQualifiersFilter
> ---
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13347-branch-1.patch, HBASE-13347-master-v2.patch, 
> HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package uses 
> {{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
> However we do not need that since we match columns in the scan before we 
> filter.
> Deprecate the filter in 2.0 and remove in 3.0.
> Do cleanup of RowCounter that tries to use this filter but actually doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13347) Deprecate FirstKeyValueMatchingQualifiersFilter

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023897#comment-15023897
 ] 

Hudson commented on HBASE-13347:


FAILURE: Integrated in HBase-1.3-IT #333 (See 
[https://builds.apache.org/job/HBase-1.3-IT/333/])
HBASE-13347 Deprecate FirstKeyValueMatchingQualifiersFilter. (Abhishek) 
(anoopsamjohn: rev 81e7eb2805656021a016431d3470e6685d863b6b)
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java


> Deprecate FirstKeyValueMatchingQualifiersFilter
> ---
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13347-branch-1.patch, HBASE-13347-master-v2.patch, 
> HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package uses 
> {{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
> However we do not need that since we match columns in the scan before we 
> filter.
> Deprecate the filter in 2.0 and remove in 3.0.
> Do cleanup of RowCounter that tries to use this filter but actually doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14737) Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, value) is called

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023896#comment-15023896
 ] 

Hudson commented on HBASE-14737:


FAILURE: Integrated in HBase-1.3-IT #333 (See 
[https://builds.apache.org/job/HBase-1.3-IT/333/])
HBASE-14737 Clear cachedMaxVersions when (tedyu: rev 
447a0e7b7eb2d073ebfd83af6f70d94ebec074e3)
* hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptorDefaultVersions.java


> Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, value) is 
> called
> --
>
> Key: HBASE-14737
> URL: https://issues.apache.org/jira/browse/HBASE-14737
> Project: HBase
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Pankaj Kumar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14737.patch
>
>
> HColumnDescriptor caches the value of VERSIONS in a cachedMaxVersions member 
> variable. This member variable should be reset or cleared when 
> setValue(HConstants.VERSIONS, value) is called, like this:
> {code}
>   static final byte[] VERSIONS_BYTES = Bytes.toBytes(HConstants.VERSIONS);
>   public HColumnDescriptor setValue(byte[] key, byte[] value) {
>     if (Bytes.compareTo(VERSIONS_BYTES, key) == 0) {
>       cachedMaxVersions = UNINITIALIZED;
>     }
>     values.put(new ImmutableBytesWritable(key),
>       new ImmutableBytesWritable(value));
>     return this;
>   }
> {code}
> Otherwise, you continue getting back cachedMaxVersions instead of the updated 
> value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14821) CopyTable should allow overriding more config properties for peer cluster

2015-11-23 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023893#comment-15023893
 ] 

Heng Chen commented on HBASE-14821:
---

LGTM.  
+1

> CopyTable should allow overriding more config properties for peer cluster
> -
>
> Key: HBASE-14821
> URL: https://issues.apache.org/jira/browse/HBASE-14821
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14821.patch
>
>
> When using CopyTable across two separate clusters, you can specify the ZK 
> quorum for the destination cluster, but not much else in configuration 
> overrides.  This can be a problem when the cluster configurations differ, 
> such as when using security with different configurations for server 
> principals.
> We should provide a general way to override configuration properties for the 
> peer / destination cluster.  One option would be to allow use of a prefix for 
> command line properties ("peer.property.").  Properties matching this prefix 
> will be stripped and merged to the peer configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14737) Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, value) is called

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023898#comment-15023898
 ] 

Hudson commented on HBASE-14737:


FAILURE: Integrated in HBase-Trunk_matrix #494 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/494/])
HBASE-14737 Clear cachedMaxVersions when (tedyu: rev 
9a91f5ac818b078526962e5c64183a2541a064b4)
* hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptorDefaultVersions.java


> Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, value) is 
> called
> --
>
> Key: HBASE-14737
> URL: https://issues.apache.org/jira/browse/HBASE-14737
> Project: HBase
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Pankaj Kumar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14737.patch
>
>
> HColumnDescriptor caches the value of VERSIONS in a cachedMaxVersions member 
> variable. This member variable should be reset or cleared when 
> setValue(HConstants.VERSIONS, value) is called, like this:
> {code}
>   static final byte[] VERSIONS_BYTES = Bytes.toBytes(HConstants.VERSIONS);
>   public HColumnDescriptor setValue(byte[] key, byte[] value) {
>     if (Bytes.compareTo(VERSIONS_BYTES, key) == 0) {
>       cachedMaxVersions = UNINITIALIZED;
>     }
>     values.put(new ImmutableBytesWritable(key),
>       new ImmutableBytesWritable(value));
>     return this;
>   }
> {code}
> Otherwise, you continue getting back cachedMaxVersions instead of the updated 
> value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14826) Small improvement in KVHeap seek() API

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023901#comment-15023901
 ] 

Hudson commented on HBASE-14826:


FAILURE: Integrated in HBase-Trunk_matrix #494 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/494/])
HBASE-14826 Small improvement in KVHeap seek() API (Ram) (ramkrishna: rev 
afc5439be59c1ee74df8a6965cc2c4aad408ee3f)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueHeap.java


> Small improvement in KVHeap seek() API
> --
>
> Key: HBASE-14826
> URL: https://issues.apache.org/jira/browse/HBASE-14826
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-14826.patch, HBASE-14826_1.patch
>
>
> Currently in the seek/reseek() APIs we tend to do a lot of priority-queue 
> operations. We initially add the current scanner to the heap, then poll and 
> again add the scanner back if the seekKey is greater than the topkey in that 
> scanner. Since the KVs are always going to be in increasing order, and in the 
> ideal scan flow every seek/reseek is followed by a next() call, it should be 
> ok if we start with checking the current scanner and then do a poll to get 
> the next scanner. Just avoid the initial PQ.add(current) call. This could 
> save some comparisons. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14799) Commons-collections object deserialization remote command execution vulnerability

2015-11-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-14799.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.0.4
   1.1.3
   1.3.0
   1.2.0
   2.0.0

> Commons-collections object deserialization remote command execution 
> vulnerability 
> --
>
> Key: HBASE-14799
> URL: https://issues.apache.org/jira/browse/HBASE-14799
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 2.0.0, 0.94.28, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, 
> HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, 
> HBASE-14799-0.98.patch, HBASE-14799-0.98.patch, HBASE-14799-0.98.patch, 
> HBASE-14799.patch, HBASE-14799.patch
>
>
> Read: 
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
> TL;DR: If you have commons-collections on your classpath and accept and 
> process Java object serialization data, then you probably have an exploitable 
> remote command execution vulnerability. 
> 0.94 and earlier HBase releases are vulnerable because we might read in and 
> rehydrate serialized Java objects out of RPC packet data in 
> HbaseObjectWritable using ObjectInputStream#readObject (see 
> https://hbase.apache.org/0.94/xref/org/apache/hadoop/hbase/io/HbaseObjectWritable.html#714)
>  and we have commons-collections on the classpath on the server.
> 0.98 also carries some limited exposure to this problem through inclusion of 
> backwards compatible deserialization code in 
> HbaseObjectWritableFor96Migration. This is used by the 0.94-to-0.98 migration 
> utility, and by the AccessController when reading permissions from the ACL 
> table serialized in legacy format by 0.94. Unprivileged users cannot run the 
> tool nor access the ACL table.
> Unprivileged users can however attack a 0.94 installation. An attacker might 
> be able to use the method discussed on that blog post to capture valid HBase 
> RPC payloads for 0.94 and prior versions, rewrite them to embed an exploit, 
> and replay them to trigger a remote command execution with the privileges of 
> the account under which the HBase RegionServer daemon is running.
> We need to make a patch release of 0.94 that changes HbaseObjectWritable to 
> disallow processing of random Java object serializations. This will be a 
> compatibility break that might affect old style coprocessors, which quite 
> possibly may rely on this catch-all in HbaseObjectWritable for custom object 
> (de)serialization. We can introduce a new configuration setting, 
> "hbase.allow.legacy.object.serialization", defaulting to false.
> To be thorough, we can also use the new configuration setting  
> "hbase.allow.legacy.object.serialization" (defaulting to false) in 0.98 to 
> prevent the AccessController from falling back to the vulnerable legacy code. 
> This turns out to not affect the ability to migrate permissions because 
> TablePermission implements Writable, which is safe, not Serializable. 
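The proposed guard could look roughly like the sketch below. The flag name comes from the issue text; the surrounding class and method are hypothetical, not the actual HBase patch.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

// Hypothetical guard around legacy Java deserialization: fail fast unless
// the operator explicitly opts in. The flag name is taken from the issue
// text; the class and method here are illustrative, not the actual patch.
public class LegacySerializationGuard {
  public static final String LEGACY_KEY = "hbase.allow.legacy.object.serialization";

  // allowLegacy stands in for reading the flag from HBase's Configuration.
  public static Object readLegacyObject(byte[] rpcBytes, boolean allowLegacy)
      throws IOException, ClassNotFoundException {
    if (!allowLegacy) {
      // Default: refuse to call ObjectInputStream#readObject on
      // attacker-controllable RPC payload data.
      throw new IOException("Legacy Java object serialization is disabled; set "
          + LEGACY_KEY + "=true only if old-style coprocessors require it");
    }
    try (ObjectInputStream in =
        new ObjectInputStream(new ByteArrayInputStream(rpcBytes))) {
      return in.readObject();
    }
  }
}
```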





[jira] [Updated] (HBASE-14799) Commons-collections object deserialization remote command execution vulnerability

2015-11-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14799:
---
Attachment: HBASE-14799.patch
HBASE-14799-0.98.patch
HBASE-14799-0.94.patch

Attaching what I committed

> Commons-collections object deserialization remote command execution 
> vulnerability 
> --
>
> Key: HBASE-14799
> URL: https://issues.apache.org/jira/browse/HBASE-14799
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 2.0.0, 0.94.28, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, 
> HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, 
> HBASE-14799-0.98.patch, HBASE-14799-0.98.patch, HBASE-14799-0.98.patch, 
> HBASE-14799.patch, HBASE-14799.patch
>
>





[jira] [Updated] (HBASE-14799) Commons-collections object deserialization remote command execution vulnerability

2015-11-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14799:
---
Release Note: This issue resolves a potential security vulnerability. For 
all versions we update our commons-collections dependency to the release that 
fixes the reported vulnerability in that library. In 0.98 we additionally 
disable by default an unneeded backwards-compatibility code path carried over 
from 0.94. 

> Commons-collections object deserialization remote command execution 
> vulnerability 
> --
>
> Key: HBASE-14799
> URL: https://issues.apache.org/jira/browse/HBASE-14799
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 2.0.0, 0.94.28, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, 
> HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, 
> HBASE-14799-0.98.patch, HBASE-14799-0.98.patch, HBASE-14799-0.98.patch, 
> HBASE-14799.patch, HBASE-14799.patch
>
>





[jira] [Commented] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.3 compiler.

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023405#comment-15023405
 ] 

Hudson commented on HBASE-14172:


SUCCESS: Integrated in HBase-1.2-IT #302 (See 
[https://builds.apache.org/job/HBase-1.2-IT/302/])
HBASE-14172 Upgrade existing thrift binding using thrift 0.9.3 (enis: rev 
aeff2341be8742c09d111fd24fee9f9b77e7f226)
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TServerName.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TRowMutations.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDeleteType.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAuthorization.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TAppend.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAppend.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDelete.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java
* pom.xml
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TMutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TResult.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDurability.java
* hbase-thrift/pom.xml
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java


> Upgrade existing thrift binding using thrift 0.9.3 compiler.
> 
>
> Key: HBASE-14172
> URL: https://issues.apache.org/jira/browse/HBASE-14172
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14172-branch-1.001.patch, 
> HBASE-14172-branch-1.2.001.patch, HBASE-14172-branch-1.patch, 
> HBASE-14172.001.patch, HBASE-14172.patch
>
>






[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023422#comment-15023422
 ] 

Hudson commented on HBASE-14843:


FAILURE: Integrated in HBase-Trunk_matrix #491 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/491/])
HBASE-14843 TestWALProcedureStore.testLoad is flakey (matteo.bertozzi: rev 
0f3e2e0bfafddb8ada0e42f7062a7f31db241882)
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java


> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>
> I see it twice recently, 
> see.
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/





[jira] [Updated] (HBASE-14862) Add support for reporting p90 for histogram metrics

2015-11-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14862:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 0.98.17
   1.3.0
   1.2.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to 0.98 and 1.2+. Thanks for the patch [~sanjeevln]

> Add support for reporting p90 for histogram metrics
> ---
>
> Key: HBASE-14862
> URL: https://issues.apache.org/jira/browse/HBASE-14862
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Sanjeev Lakshmanan
>Assignee: Sanjeev Lakshmanan
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14862-0.98.patch, HBASE-14862.patch
>
>
> Currently there is support for reporting p75, p95, and p99 for histogram 
> metrics. This JIRA is to add support for reporting p90.
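For illustration, a nearest-rank percentile over a snapshot of recorded values looks like the sketch below; the real HBase metrics histograms are reservoir-based, so this helper only sketches the computation that reporting p90 adds alongside p75/p95/p99.

```java
import java.util.Arrays;

// Nearest-rank percentile over a snapshot of recorded values. The real
// metrics histograms are reservoir-based; this is only a sketch of the
// computation that reporting p90 adds alongside p75/p95/p99.
public class PercentileSketch {
  // quantile in (0, 1], e.g. 0.90 for p90; values need not be pre-sorted.
  public static long percentile(long[] values, double quantile) {
    if (values.length == 0) {
      throw new IllegalArgumentException("empty snapshot");
    }
    long[] sorted = values.clone();
    Arrays.sort(sorted);
    // Smallest value whose rank covers at least the requested fraction.
    int rank = (int) Math.ceil(quantile * sorted.length);
    return sorted[rank - 1];
  }
}
```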





[jira] [Commented] (HBASE-14871) Allow specifying the base branch for make_patch

2015-11-23 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023498#comment-15023498
 ] 

Elliott Clark commented on HBASE-14871:
---

{code}
16:49:42 elliott@elliott-mbp hbase HBASE-14871 ./dev-support/make_patch.sh -b 
master
git_dirty is 0
Patch directory not specified. Falling back to ~/patches/.
Creating patch /Users/elliott/patches/HBASE-14871.patch using git format-patch
{code}

> Allow specifying the base branch for make_patch
> ---
>
> Key: HBASE-14871
> URL: https://issues.apache.org/jira/browse/HBASE-14871
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14871.patch
>
>






[jira] [Updated] (HBASE-14871) Allow specifying the base branch for make_patch

2015-11-23 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14871:
--
Attachment: HBASE-14871.patch

> Allow specifying the base branch for make_patch
> ---
>
> Key: HBASE-14871
> URL: https://issues.apache.org/jira/browse/HBASE-14871
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14871.patch
>
>






[jira] [Updated] (HBASE-14871) Allow specifying the base branch for make_patch

2015-11-23 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14871:
--
Attachment: HBASE-14871-v2.patch

> Allow specifying the base branch for make_patch
> ---
>
> Key: HBASE-14871
> URL: https://issues.apache.org/jira/browse/HBASE-14871
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14871-v2.patch, HBASE-14871.patch
>
>
> Not all branches will be based off of origin/*. Lets allow the user to 
> specify which branch to base the patch off of.





[jira] [Updated] (HBASE-14858) Clean up so core is ready for development on a recent version of c++

2015-11-23 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14858:
--
Attachment: 0001-HBASE-14858-Clean-up-so-core-is-ready-for-developmen.patch

> Clean up so core is ready for development on a recent version of c++
> 
>
> Key: HBASE-14858
> URL: https://issues.apache.org/jira/browse/HBASE-14858
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
> Attachments: 
> 0001-HBASE-14858-Clean-up-so-core-is-ready-for-developmen.patch
>
>






[jira] [Updated] (HBASE-14865) Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection

2015-11-23 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14865:
-
Attachment: HBASE-14865-master-v2.patch

Fixes AsyncSecureIPC test.
The test was timing out because the async client had a bug that made it retry 
infinitely when Kerberos is enabled, while the test was configured to time out 
after 5 seconds. So even though the expected exception was thrown, the client 
kept retrying until the test timed out. :-/

> Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection
> ---
>
> Key: HBASE-14865
> URL: https://issues.apache.org/jira/browse/HBASE-14865
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14865-master-v2.patch, HBASE-14865-master.patch
>
>
> Currently, we can set the value of hbase.rpc.protection to one of 
> authentication/integrity/privacy. It is then used to set 
> {{javax.security.sasl.qop}} in SaslUtil.java.
> The problem is that if a cluster wants to switch from one QOP to another, it 
> has to take downtime: a rolling upgrade would create a situation where some 
> nodes have the old value and some have the new one, preventing any 
> communication between them. There is a similar issue when clients try to connect.
> {{javax.security.sasl.qop}} can take a list of QOPs in preference order, 
> so a transition from qop1 to qop2 can easily be done like this:
> "qop1" --> "qop2,qop1" --> rolling restart --> "qop2" --> rolling restart
> We need to change hbase.rpc.protection to accept a list too.
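Translating a comma-separated hbase.rpc.protection value into the preference-ordered javax.security.sasl.qop string could be sketched as below. The QOP tokens (auth, auth-int, auth-conf) are the standard SASL ones, but the helper itself is an illustrative assumption, not the actual SaslUtil change.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper mapping a comma-separated hbase.rpc.protection value
// to the preference-ordered javax.security.sasl.qop string. The QOP tokens
// are the standard SASL ones; the helper is not the actual SaslUtil change.
public class QopList {
  private static final Map<String, String> QOP = new HashMap<>();
  static {
    QOP.put("authentication", "auth");
    QOP.put("integrity", "auth-int");
    QOP.put("privacy", "auth-conf");
  }

  // "privacy,authentication" -> "auth-conf,auth" (preference order preserved)
  public static String toSaslQop(String rpcProtection) {
    StringBuilder sb = new StringBuilder();
    for (String part : rpcProtection.split(",")) {
      String qop = QOP.get(part.trim().toLowerCase());
      if (qop == null) {
        throw new IllegalArgumentException("Unknown protection level: " + part);
      }
      if (sb.length() > 0) {
        sb.append(',');
      }
      sb.append(qop);
    }
    return sb.toString();
  }
}
```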





[jira] [Commented] (HBASE-14477) Compaction improvements: Date tiered compaction policy

2015-11-23 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023541#comment-15023541
 ] 

Dave Latham commented on HBASE-14477:
-

Vladimir, this looks great - would love to be able to have it. Do you intend 
to backport it to branch-1 or 0.98?

> Compaction improvements: Date tiered compaction policy
> --
>
> Key: HBASE-14477
> URL: https://issues.apache.org/jira/browse/HBASE-14477
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
>
> For immutable and mostly immutable data the current SizeTiered-based 
> compaction policy is not efficient. 
> # There is no need to compact all files into one, because the data is (mostly) 
> immutable and we do not need to collect garbage (the performance reasons will be 
> discussed later).
> # Size-tiered compaction is not suitable for applications where the most recent 
> data is the most important, and it prevents efficient caching of this data. 
> The idea  is pretty similar to DateTieredCompaction in Cassandra:
> http://www.datastax.com/dev/blog/datetieredcompactionstrategy
> http://www.datastax.com/dev/blog/dtcs-notes-from-the-field
> From Cassandra own blog:
> {quote}
> Since DTCS can be used with any table, it is important to know when it is a 
> good idea, and when it is not. I’ll try to explain the spectrum and 
> trade-offs here:
> 1. Perfect Fit: Time Series Fact Data, Deletes by Default TTL: When you 
> ingest fact data that is ordered in time, with no deletes or overwrites. This 
> is the standard “time series” use case.
> 2. OK Fit: Time-Ordered, with limited updates across whole data set, or only 
> updates to recent data: When you ingest data that is (mostly) ordered in 
> time, but revise or delete a very small proportion of the overall data across 
> the whole timeline.
> 3. Not a Good Fit: many partial row updates or deletions over time: When you 
> need to partially revise or delete fields for rows that you read together. 
> Also, when you revise or delete rows within clustered reads.
> {quote}
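As a rough illustration of the date-tiered idea, files can be assigned to tiers whose time windows grow as data ages, so only files in the same window are compacted together. The tier function below, with windows that double per tier, is an assumption for illustration, not the actual Cassandra or proposed HBase policy.

```java
// Illustrative tier assignment for date-tiered compaction: window sizes
// double as data ages, so only files whose data falls in the same window
// are compacted together. The doubling scheme is an assumption for
// illustration, not the actual Cassandra or proposed HBase policy.
public class DateTieredSketch {
  // Returns the tier index for a cell timestamp: tier 0 covers the most
  // recent baseWindowMillis, tier 1 the next 2x window, tier 2 the next 4x...
  public static int tierOf(long now, long ts, long baseWindowMillis) {
    long age = now - ts;
    int tier = 0;
    long windowEnd = baseWindowMillis;
    while (age >= windowEnd) {
      tier++;
      windowEnd += baseWindowMillis << tier;  // each tier's window doubles
    }
    return tier;
  }
}
```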





[jira] [Created] (HBASE-14871) Allow specifying the base branch for make_patch

2015-11-23 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14871:
-

 Summary: Allow specifying the base branch for make_patch
 Key: HBASE-14871
 URL: https://issues.apache.org/jira/browse/HBASE-14871
 Project: HBase
  Issue Type: Improvement
Reporter: Elliott Clark
Assignee: Elliott Clark








[jira] [Commented] (HBASE-13153) Bulk Loaded HFile Replication

2015-11-23 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023516#comment-15023516
 ] 

Jerry He commented on HBASE-13153:
--

bq. Not remote cluster, it will be local at this point as all the files are 
copied first from source to peer cluster

Okay

bq. I did not get what you mean.

Let me find the old JIRA.

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-13153-v1.patch, HBASE-13153-v10.patch, 
> HBASE-13153-v11.patch, HBASE-13153-v12.patch, HBASE-13153-v13.patch, 
> HBASE-13153-v14.patch, HBASE-13153-v15.patch, HBASE-13153-v16.patch, 
> HBASE-13153-v17.patch, HBASE-13153-v18.patch, HBASE-13153-v2.patch, 
> HBASE-13153-v3.patch, HBASE-13153-v4.patch, HBASE-13153-v5.patch, 
> HBASE-13153-v6.patch, HBASE-13153-v7.patch, HBASE-13153-v8.patch, 
> HBASE-13153-v9.patch, HBASE-13153.patch, HBase Bulk Load 
> Replication-v1-1.pdf, HBase Bulk Load Replication-v2.pdf, HBase Bulk Load 
> Replication-v3.pdf, HBase Bulk Load Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase Replication feature to deal with a disaster 
> tolerance scenario, but we encounter an issue: we use bulkload very frequently, 
> and because bulkload bypasses the write path it does not generate WAL entries, so 
> the data will not be replicated to the backup cluster. It is inappropriate to 
> bulkload twice, on both the active cluster and the backup cluster. So I advise making 
> some modifications to the bulkload feature to enable bulkloading to both the active 
> cluster and the backup cluster.





[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023377#comment-15023377
 ] 

Hudson commented on HBASE-14843:


FAILURE: Integrated in HBase-1.1-JDK7 #1603 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1603/])
HBASE-14843 TestWALProcedureStore.testLoad is flakey (matteo.bertozzi: rev 
58870c30f151c4c79cbdf1019ab0e2c3d971941c)
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
HBASE-14843 TestWALProcedureStore.testLoad is flakey (addendum) 
(matteo.bertozzi: rev 33a035d90fe19c4efe0b3b3ac1da45042584a823)
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java


> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Heng Chen
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14843-v0.patch
>
>




