[jira] [Commented] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2015-11-19 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013323#comment-15013323
 ] 

Heng Chen commented on HBASE-11393:
---

Uploaded a new patch. 
Changes:
 *  Modified some code per @enis's suggestions on Review Board.
 *  Added namespace support per [~ashishujjain]'s suggestion.

Review Board updated too.


> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v2.patch, HBASE-11393_v3.patch, 
> HBASE-11393_v4.patch, HBASE-11393_v5.patch, HBASE-11393_v6.patch, 
> HBASE-11393_v7.patch, HBASE-11393_v8.patch, HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in the format 
> "table1:cf1,cf2;table2:cfA,cfB" in ZooKeeper for the table-cf to replication peer 
> mapping. 
> This results in ugly parsing code. We should make this a PB object. 
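> For illustration, a minimal sketch (a hypothetical helper, not code from the patch) of 
> the kind of parsing the current string format forces on callers:
> {code}
> Map<String, List<String>> tableCfs = new HashMap<String, List<String>>();
> for (String tableEntry : "table1:cf1,cf2;table2:cfA,cfB".split(";")) {
>   String[] parts = tableEntry.split(":");
>   // no column families listed means "replicate all CFs of this table"
>   tableCfs.put(parts[0], parts.length > 1
>       ? Arrays.asList(parts[1].split(",")) : Collections.<String>emptyList());
> }
> {code}
> A PB message carries the same table-to-CFs mapping in its schema, so this hand-rolled 
> parsing disappears.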



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2015-11-19 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-11393:
--
Attachment: HBASE-11393_v10.patch

> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v2.patch, HBASE-11393_v3.patch, 
> HBASE-11393_v4.patch, HBASE-11393_v5.patch, HBASE-11393_v6.patch, 
> HBASE-11393_v7.patch, HBASE-11393_v8.patch, HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in the format 
> "table1:cf1,cf2;table2:cfA,cfB" in ZooKeeper for the table-cf to replication peer 
> mapping. 
> This results in ugly parsing code. We should make this a PB object. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14623) Implement dedicated WAL for system tables

2015-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013109#comment-15013109
 ] 

Hadoop QA commented on HBASE-14623:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773171/14623-v2.txt
  against master branch at commit cf81b45f3771002146d6e8c4d995b12963aa685a.
  ATTACHMENT ID: 12773171

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16585//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16585//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16585//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16585//console

This message is automatically generated.

> Implement dedicated WAL for system tables
> -
>
> Key: HBASE-14623
> URL: https://issues.apache.org/jira/browse/HBASE-14623
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 14623-v1.txt, 14623-v2.txt
>
>
> As Stephen suggested in parent JIRA, dedicating separate WAL for system 
> tables (other than hbase:meta) should be done in new JIRA.
> This task is to fulfill the system WAL separation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013118#comment-15013118
 ] 

Hudson commented on HBASE-14468:


FAILURE: Integrated in HBase-Trunk_matrix #480 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/480/])
HBASE-14468 Compaction improvements: FIFO compaction policy (Vladimir Rodionov) 
(enis: rev cf81b45f3771002146d6e8c4d995b12963aa685a)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/TimeOffsetEnvironmentEdge.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestFIFOCompactionPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/FIFOCompactionPolicy.java


> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only those files in which all cells have expired. The 
> column family MUST have a non-default TTL. 
> Essentially, the FIFO compactor does only one job: it collects expired store files. 
> I see many applications for this policy:
> # use it for very high volume raw data which has a low TTL and which is the 
> source of other data (after additional processing). Example: raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them in a CF with the FIFO compaction policy; 
> periodically we run a task which creates the rollup aggregates and compacts the 
> time-series, and the original raw data can be discarded after that.
> # use it for data which can be kept entirely in the block cache (RAM/SSD). 
> Say we have a local SSD (1 TB) which we can use as a block cache. No need to 
> compact the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network), and we do not evict hot data from the block cache. The result: improved 
> throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Make sure that the table has region splits disabled (either by explicitly setting 
> DisabledRegionSplitPolicy or by setting ConstantSizeRegionSplitPolicy with a very 
> large max region size). You will also have to increase the store's blocking file 
> number, *hbase.hstore.blockingStoreFiles*, to a very large value.
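> A minimal sketch of that setup (assuming the standard descriptor APIs; illustrative, 
> not from the patch):
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> // disable splits and make the blocking store file count effectively unreachable
> desc.setRegionSplitPolicyClassName(DisabledRegionSplitPolicy.class.getName());
> desc.setConfiguration("hbase.hstore.blockingStoreFiles", "1000");
> {code}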
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSION > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.3 compiler.

2015-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013117#comment-15013117
 ] 

Hudson commented on HBASE-14172:


FAILURE: Integrated in HBase-Trunk_matrix #480 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/480/])
HBASE-14172 Upgrade existing thrift binding using thrift 0.9.2 (enis: rev 
3aa3fae1383d7dde1bb0ce8b69357fbad3863127)
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
* pom.xml
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAppend.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAuthorization.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDurability.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDeleteType.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
* hbase-thrift/pom.xml
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TAppend.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TServerName.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TColumn.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDelete.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TRowMutations.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TResult.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TMutation.java
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java


> Upgrade existing thrift binding using thrift 0.9.3 compiler.
> 
>
> Key: HBASE-14172
> URL: https://issues.apache.org/jira/browse/HBASE-14172
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14172-branch-1.patch, HBASE-14172.001.patch, 
> HBASE-14172.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-11-19 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013119#comment-15013119
 ] 

Edward Bortnikov commented on HBASE-13408:
--

Sorry about the persistence ... [~tedyu], [~stack], [~anoop.hbase] - please 
weigh in. You used to be passionate about this feature :)

> HBase In-Memory Memstore Compaction
> ---
>
> Key: HBASE-13408
> URL: https://issues.apache.org/jira/browse/HBASE-13408
> Project: HBase
>  Issue Type: New Feature
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Fix For: 2.0.0
>
> Attachments: HBASE-13408-trunk-v01.patch, 
> HBASE-13408-trunk-v02.patch, HBASE-13408-trunk-v03.patch, 
> HBASE-13408-trunk-v04.patch, HBASE-13408-trunk-v05.patch, 
> HBASE-13408-trunk-v06.patch, HBASE-13408-trunk-v07.patch, 
> HBASE-13408-trunk-v08.patch, HBASE-13408-trunk-v09.patch, 
> HBASE-13408-trunk-v10.patch, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument-ver03.pdf, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
> InMemoryMemstoreCompactionEvaluationResults.pdf, 
> InMemoryMemstoreCompactionMasterEvaluationResults.pdf, 
> InMemoryMemstoreCompactionScansEvaluationResults.pdf, 
> StoreSegmentandStoreSegmentScannerClassHierarchies.pdf
>
>
> A store unit holds a column family in a region, where the memstore is its 
> in-memory component. The memstore absorbs all updates to the store; from time 
> to time these updates are flushed to a file on disk, where they are 
> compacted. Unlike disk components, the memstore is not compacted until it is 
> written to the filesystem and optionally to block-cache. This may result in 
> underutilization of the memory due to duplicate entries per row, for example, 
> when hot data is continuously updated. 
> Generally, the faster data accumulates in memory, the more flushes are 
> triggered and the more frequently data sinks to disk, slowing down retrieval of 
> data, even very recent data.
> In high-churn workloads, compacting the memstore can help maintain the data 
> in memory, and thereby speed up data retrieval. 
> We suggest a new compacted memstore with the following principles:
> 1. The data is kept in memory for as long as possible.
> 2. Memstore data is either compacted or in the process of being compacted.
> 3. Allow a panic mode, which may interrupt an in-progress compaction and 
> force a flush of part of the memstore.
> A design document is attached.
> This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-19 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Attachment: HBASE-14825-v2.patch

Corrections made to the patch following Misty's advice.
Note that this patch produces a warning, "1 line adds whitespace errors", when 
applied. I don't know whether this is a show-stopper or not.

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a namespace can consume.
> ["an" should be "and"]
> ...but can be conjured on the fly while the table is up an running.
> [Malformed link (text appears as follows when rendered in a browser):]
> Puts are executed via Table.put (writeBuffer) or 
> 

[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-19 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Status: Patch Available  (was: Open)

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a namespace can consume.
> ["an" should be "and"]
> ...but can be conjured on the fly while the table is up an running.
> [Malformed link (text appears as follows when rendered in a browser):]
> Puts are executed via Table.put (writeBuffer) or 
> link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch(java.util.List,
>  java.lang.Object[])[Table.batch] (non-writeBuffer).
> ["regions" should appear only once:]
> Thus, the middle regions regions will never be used.
> 

[jira] [Updated] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-19 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14825:
--
Status: Open  (was: Patch Available)

Toggling patch submission

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell in non-interactive mode (see 
> hbasee.shell.noninteractive)...
> ["ie" should be "i.e."]
> Restrict the amount of resources (ie regions, tables) a namespace can consume.
> ["an" should be "and"]
> ...but can be conjured on the fly while the table is up an running.
> [Malformed link (text appears as follows when rendered in a browser):]
> Puts are executed via Table.put (writeBuffer) or 
> link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch(java.util.List,
>  java.lang.Object[])[Table.batch] (non-writeBuffer).
> ["regions" should appear only once:]
> Thus, the middle regions regions 

[jira] [Updated] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-19 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14840:
--
Summary: Sink cluster reports data replication request as success though 
the data is not replicated  (was: Sink cluster RS reports data replication as 
success though the data it is not replicated)

> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observance:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14708) Use copy on write Map for region location cache

2015-11-19 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-14708:
--
Attachment: anotherbench3.zip

I created a revised hybrid implementation and attached it along with its benchmark. I 
hope there is no race condition this time and that it clears up my earlier mistake.

I changed the benchmark code to use System.nanoTime() instead of 
System.currentTimeMillis(), because the resolution of the latter seems to be 15 ms 
in my environment.
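
A minimal sketch of the timing change (doWork() is a hypothetical stand-in for the 
benchmarked map operation, not the benchmark's actual code):
{code}
long start = System.nanoTime();
doWork(); // the operation under measurement
// nanoTime() gives sub-microsecond resolution; currentTimeMillis() can be
// quantized to ~15 ms on some platforms, swamping short measurements
long elapsedMicros = (System.nanoTime() - start) / 1000L;
{code}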

I measured the benchmark with 10k initial entries. For adding/removing elements, 
the hybrid implementation still has 10% overhead compared to 
ConcurrentSkipListMap, but the copy-on-write array implementation is 30 times 
slower than the hybrid implementation. For reading elements, the hybrid 
implementation seems almost always a bit faster than the copy-on-write array 
implementation. I think that is because the hybrid implementation doesn't 
create an entry object per search.

FYI

> Use copy on write Map for region location cache
> ---
>
> Key: HBASE-14708
> URL: https://issues.apache.org/jira/browse/HBASE-14708
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 1.1.2
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14708-v10.patch, HBASE-14708-v11.patch, 
> HBASE-14708-v12.patch, HBASE-14708-v13.patch, HBASE-14708-v15.patch, 
> HBASE-14708-v16.patch, HBASE-14708-v17.patch, HBASE-14708-v2.patch, 
> HBASE-14708-v3.patch, HBASE-14708-v4.patch, HBASE-14708-v5.patch, 
> HBASE-14708-v6.patch, HBASE-14708-v7.patch, HBASE-14708-v8.patch, 
> HBASE-14708-v9.patch, HBASE-14708.patch, anotherbench.zip, anotherbench2.zip, 
> anotherbench3.zip, location_cache_times.pdf, result.csv
>
>
> Internally a co-worker profiled their application that was talking to HBase. 
> > 60% of the time was spent in locating a region. This was while the cluster 
> was stable and no regions were moving.
> To figure out if there was a faster way to cache region location I wrote up a 
> benchmark here: https://github.com/elliottneilclark/benchmark-hbase-cache
> This tries to simulate a heavy load on the location cache. 
> * 24 different threads.
> * 2 Deleting location data
> * 2 Adding location data
> * Using floor to get the result.
> To repeat my work just run ./run.sh and it should produce a result.csv
> Results:
> ConcurrentSkipListMap is a good middle ground. It's got equal speed for 
> reading and writing.
> However most operations will not need to remove or add a region location. 
> There will be potentially several orders of magnitude more reads for cached 
> locations than there will be on clearing the cache.
> So I propose a copy on write tree map.
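> A rough sketch of the copy-on-write idea (illustrative only, not the attached 
> implementation):
> {code}
> private volatile NavigableMap<byte[], HRegionLocation> cache =
>     new TreeMap<byte[], HRegionLocation>(Bytes.BYTES_COMPARATOR);
>
> synchronized void add(byte[] startKey, HRegionLocation loc) {
>   // writers copy the whole map, mutate the copy, then publish it
>   NavigableMap<byte[], HRegionLocation> copy =
>       new TreeMap<byte[], HRegionLocation>(cache);
>   copy.put(startKey, loc);
>   cache = copy; // readers keep using the old snapshot, lock-free
> }
>
> HRegionLocation locate(byte[] row) {
>   // "using floor to get the result", as in the benchmark description
>   Map.Entry<byte[], HRegionLocation> e = cache.floorEntry(row);
>   return e == null ? null : e.getValue();
> }
> {code}
> Writes pay for a full copy, but reads never contend, which matches the 
> read-dominated workload described above.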



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14840) Sink cluster RS reports data replication as success though the data it is not replicated

2015-11-19 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created HBASE-14840:
-

 Summary: Sink cluster RS reports data replication as success 
though the data it is not replicated
 Key: HBASE-14840
 URL: https://issues.apache.org/jira/browse/HBASE-14840
 Project: HBase
  Issue Type: Bug
Reporter: Y. SREENIVASULU REDDY


*Scenario:*
Sink cluster is down
Create a table and enable table replication
Put some data
Now restart the sink cluster

*Observance:*
Data is not replicated in sink cluster but still source cluster updates the WAL 
log position in ZK, resulting in data loss in sink cluster.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14761) Deletes with and without visibility expression do not delete the matching mutation

2015-11-19 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14761:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to 0.98 and above. Thanks for the review, [~anoopsamjohn], and for 
finding the issue, [~anoopsharma].

> Deletes with and without visibility expression do not delete the matching 
> mutation
> --
>
> Key: HBASE-14761
> URL: https://issues.apache.org/jira/browse/HBASE-14761
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.0.1, 1.1.0, 1.0.2, 1.1.2, 0.98.15
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14761.patch
>
>
> This is from the user list as reported by Anoop Sharma
> {code}
>  running into an issue related to visibility expressions and delete.
> Example run from hbase shell is listed below.
> Will appreciate any help on this issue.
> thanks.
> In the example below, user running queries has ‘MANAGER’ authorization.
> *First example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by passing in visibility of ‘MANAGER’
>   This works and scan doesn’t return anything.
> *Second example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by not passing in any visibility.
>   This doesn’t delete the column.
>   Scan doesn’t return the row but RAW scan shows the column
>   marked as deleteColumn.
>   Now if delete is done again with visibility of ‘MANAGER’,
>   it still doesn’t delete it and scan returns the original column.
> *Example 1:*
> hbase(main):096:0> create 'HBT1', 'cf'
> hbase(main):098:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> hbase(main):099:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446154722055,
> value=CA
> 1 row(s) in 0.0030 seconds
> hbase(main):100:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):101:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
> 0 row(s) in 0.0030 seconds
> *Example 2:*
> hbase(main):010:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0040 seconds
> hbase(main):011:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0060 seconds
> hbase(main):012:0> *delete 'HBT1', 'John', 'cf:a'*
> 0 row(s) in 0.0090 seconds
> hbase(main):013:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0050 seconds
> hbase(main):014:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346519,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> hbase(main):015:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):016:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0040 seconds
> hbase(main):017:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346601,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14782) FuzzyRowFilter skips valid rows

2015-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013116#comment-15013116
 ] 

Hudson commented on HBASE-14782:


FAILURE: Integrated in HBase-Trunk_matrix #480 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/480/])
HBASE-14782 FuzzyRowFilter skips valid rows (Vladimir Rodionov) (chenheng: rev 
ebe5801e0081dcce7f3eb918b39161fcd2298087)
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilterEndToEnd.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilter.java


> FuzzyRowFilter skips valid rows
> ---
>
> Key: HBASE-14782
> URL: https://issues.apache.org/jira/browse/HBASE-14782
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 0.98.17
>
> Attachments: HBASE-14782-0.98-v4.patch, HBASE-14782-v3.patch, 
> HBASE-14782-v4.patch, HBASE-14782.patch, HBASE-14782.patch
>
>
> The issue may affect not only master branch, but previous releases as well.
> This is from one of our customers:
> {quote}
> We are experiencing a problem with the FuzzyRowFilter for HBase scan. We 
> think that it is a bug. 
> Fuzzy filter should pick a row if it matches filter criteria irrespective of 
> other rows present in table but filter is dropping a row depending on some 
> other row present in table. 
> Details/Step to reproduce/Sample outputs below: 
> Missing row key: \x9C\x00\x044\x00\x00\x00\x00 
> Causing row key: \x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX 
> Prerequisites 
> 1. Create a test table. HBase shell command -- create 'fuzzytest','d' 
> 2. Insert some test data. HBase shell commands: 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x01\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x01\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x01",'d:a','junk' 
> • put 'fuzzytest',"\x9B\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> • put 'fuzzytest',"\x9D\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> Now when you run the code, you will find \x9C\x00\x044\x00\x00\x00\x00 in 
> output because it matches filter criteria. (Refer how to run code below) 
> Insert the row key causing bug: 
> HBase shell command: put 
> 'fuzzytest',"\x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX",'d:a','junk' 
> Now when you run the code, you will not find \x9C\x00\x044\x00\x00\x00\x00 in 
> output even though it still matches filter criteria. 
> {quote}
> Verified the issue on master.
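> For reference, a scan with the filter might look like this (a hedged sketch of the 
> reproduction, not the customer's actual code):
> {code}
> // fuzzy info: 0 = byte at this position is fixed, 1 = byte may vary
> byte[] key  = new byte[] { (byte) 0x9C, 0x00, 0x04, '4', 0x00, 0x00, 0x00, 0x00 };
> byte[] mask = new byte[] { 0, 0, 0, 0, 1, 1, 1, 1 };
> Scan scan = new Scan();
> scan.setFilter(new FuzzyRowFilter(
>     Collections.singletonList(new Pair<byte[], byte[]>(key, mask))));
> {code}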



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14782) FuzzyRowFilter skips valid rows

2015-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013134#comment-15013134
 ] 

Hudson commented on HBASE-14782:


FAILURE: Integrated in HBase-1.1-JDK7 #1599 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1599/])
HBASE-14782 FuzzyRowFilter skips valid rows (Vladimir Rodionov) (chenheng: rev 
cf23a83f3836f32e33e310c0c0e67c160e195c1f)
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilterEndToEnd.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilter.java


> FuzzyRowFilter skips valid rows
> ---
>
> Key: HBASE-14782
> URL: https://issues.apache.org/jira/browse/HBASE-14782
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 0.98.17
>
> Attachments: HBASE-14782-0.98-v4.patch, HBASE-14782-v3.patch, 
> HBASE-14782-v4.patch, HBASE-14782.patch, HBASE-14782.patch
>
>
> The issue may affect not only master branch, but previous releases as well.
> This is from one of our customers:
> {quote}
> We are experiencing a problem with the FuzzyRowFilter for HBase scan. We 
> think that it is a bug. 
> Fuzzy filter should pick a row if it matches filter criteria irrespective of 
> other rows present in table but filter is dropping a row depending on some 
> other row present in table. 
> Details/Step to reproduce/Sample outputs below: 
> Missing row key: \x9C\x00\x044\x00\x00\x00\x00 
> Causing row key: \x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX 
> Prerequisites 
> 1. Create a test table. HBase shell command -- create 'fuzzytest','d' 
> 2. Insert some test data. HBase shell commands: 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x01\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x01\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x01",'d:a','junk' 
> • put 'fuzzytest',"\x9B\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> • put 'fuzzytest',"\x9D\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> Now when you run the code, you will find \x9C\x00\x044\x00\x00\x00\x00 in 
> output because it matches filter criteria. (Refer how to run code below) 
> Insert the row key causing bug: 
> HBase shell command: put 
> 'fuzzytest',"\x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX",'d:a','junk' 
> Now when you run the code, you will not find \x9C\x00\x044\x00\x00\x00\x00 in 
> output even though it still matches filter criteria. 
> {quote}
> Verified the issue on master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14761) Deletes with and without visibility expression do not delete the matching mutation

2015-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013135#comment-15013135
 ] 

Hudson commented on HBASE-14761:


FAILURE: Integrated in HBase-1.1-JDK7 #1599 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1599/])
HBASE-14761 Deletes with and without visibility expression do not delete 
(ramkrishna: rev bb5fe0456e63b0077d8904dd40ee05befe959276)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDeletes.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityScanDeleteTracker.java


> Deletes with and without visibility expression do not delete the matching 
> mutation
> --
>
> Key: HBASE-14761
> URL: https://issues.apache.org/jira/browse/HBASE-14761
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.0.1, 1.1.0, 1.0.2, 1.1.2, 0.98.15
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14761.patch
>
>
> This is from the user list as reported by Anoop Sharma
> {code}
>  running into an issue related to visibility expressions and delete.
> Example run from hbase shell is listed below.
> Will appreciate any help on this issue.
> thanks.
> In the example below, user running queries has ‘MANAGER’ authorization.
> *First example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by passing in visibility of ‘MANAGER’
>   This works and scan doesn’t return anything.
> *Second example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by not passing in any visibility.
>   This doesn’t delete the column.
>   Scan doesn’t return the row but RAW scan shows the column
>   marked as deleteColumn.
>   Now if delete is done again with visibility of ‘MANAGER’,
>   it still doesn’t delete it and scan returns the original column.
> *Example 1:*
> hbase(main):096:0> create 'HBT1', 'cf'
> hbase(main):098:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> hbase(main):099:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446154722055,
> value=CA
> 1 row(s) in 0.0030 seconds
> hbase(main):100:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):101:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
> 0 row(s) in 0.0030 seconds
> *Example 2:*
> hbase(main):010:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0040 seconds
> hbase(main):011:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0060 seconds
> hbase(main):012:0> *delete 'HBT1', 'John', 'cf:a'*
> 0 row(s) in 0.0090 seconds
> hbase(main):013:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0050 seconds
> hbase(main):014:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346519,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> hbase(main):015:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):016:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0040 seconds
> hbase(main):017:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346601,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14840) Sink cluster RS reports data replication as success though the data it is not replicated

2015-11-19 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi reassigned HBASE-14840:
-

Assignee: Ashish Singhi

> Sink cluster RS reports data replication as success though the data it is not 
> replicated
> 
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observance:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster RS reports data replication as success though the data it is not replicated

2015-11-19 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013155#comment-15013155
 ] 

Ashish Singhi commented on HBASE-14840:
---

*Analysis:*
I found that if {{HRegionServer#replicationSinkHandler}} is not yet initialized 
when a replication request comes in, we simply return success to the source 
cluster. The source cluster assumes the call succeeded, as there was no exception, 
and hence goes ahead and updates the WAL position in the ZK node.
{code}
public ReplicateWALEntryResponse replicateWALEntry(final RpcController controller,
    final ReplicateWALEntryRequest request) throws ServiceException {
  try {
    if (regionServer.replicationSinkHandler != null) {
      checkOpen();
      requestCount.increment();
      List<WALEntry> entries = request.getEntryList();
      CellScanner cellScanner = ((PayloadCarryingRpcController) controller).cellScanner();
      regionServer.getRegionServerCoprocessorHost().preReplicateLogEntries(entries, cellScanner);
      regionServer.replicationSinkHandler.replicateLogEntries(entries, cellScanner);
      regionServer.getRegionServerCoprocessorHost().postReplicateLogEntries(entries, cellScanner);
    }
    return ReplicateWALEntryResponse.newBuilder().build();
  } catch (IOException ie) {
    throw new ServiceException(ie);
  }
}
{code}

Will provide a patch fixing this in some time.
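
One possible shape of the fix (a sketch based on the analysis above, not the actual 
patch): fail the RPC while the sink is not ready, so the source retries instead of 
advancing the WAL position:
{code}
// reject replication requests until the sink handler is initialized;
// the resulting ServiceException makes the source cluster retry
if (regionServer.replicationSinkHandler == null) {
  throw new ServiceException("Replication services are not yet initialized");
}
{code}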

> Sink cluster RS reports data replication as success though the data it is not 
> replicated
> 
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observance:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14840) Sink cluster RS reports data replication as success though the data it is not replicated

2015-11-19 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14840:
--
Affects Version/s: 1.1.2
   1.0.3
   0.98.16

> Sink cluster RS reports data replication as success though the data it is not 
> replicated
> 
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observance:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14825) HBase Ref Guide corrections of typos/misspellings

2015-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013186#comment-15013186
 ] 

Hadoop QA commented on HBASE-14825:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773209/HBASE-14825-v2.patch
  against master branch at commit c92737c0e912563aeba2112ab8df74af976e720a.
  ATTACHMENT ID: 12773209

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 22 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16588//console

This message is automatically generated.

> HBase Ref Guide corrections of typos/misspellings
> -
>
> Key: HBASE-14825
> URL: https://issues.apache.org/jira/browse/HBASE-14825
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14825-v2.patch, HBASE-14825.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Found the following list of typos/misspellings on the book.html page, and 
> thought I would make corrections to the appropriate src/main/asciidoc files 
> in which they are located. (This is just a good opportunity for me to become 
> familiar with submission of fixes/patches as a prelude to beginning to make 
> some coding contributions. This is also my first submission to the JIRA 
> system, so corrections to content/conventions are welcome!)
> [Note: I see that [~misty]  may be in the midst of a reformatting task -- 
> HBASE-14823 --  that might involve these same asciidoc files. Please advise 
> if I should wait on this task to avoid a possibly cumbersome Git 
> reconciliation mess. (?)]
> Here is the list of typos/misspellings. The format of each item is (a) the 
> problem is presented in brackets on the first line, and (b) the phrase (as it 
> currently appears in the text) is on the second line.
> ===
> ["you" should be "your", and "Kimballs'" should be "Kimball's" (move the 
> apostrophe) in the following:]
> A useful read setting config on you hadoop cluster is Aaron Kimballs' 
> Configuration Parameters: What can you just ignore?
> [Period needed after "a"]
> a.k.a pseudo-distributed
> ["empty" is misspelled]
> The default value in this configuration has been intentionally left emtpy in 
> order to honor the old hbase.regionserver.global.memstore.upperLimit property 
> if present.
> [All occurrences of "a HBase" should be changed to "an HBase" -- 15 
> occurrences found]
> ["file path are" should be "file paths are"]
> By default, all of HBase's ZooKeeper file path are configured with a relative 
> path, so they will all go under this directory unless changed.
> ["times" -- plural required]
> How many time to retry attempting to write a version file before just 
> aborting. 
> ["separated" is misspelled]
> Each attempt is seperated by the hbase.server.thread.wakefrequency 
> milliseconds.
> [space needed after quotation mark (include"limit)]
> Because this limit represents the "automatic include"limit...
> [space needed ("ashbase:metadata" should be "as hbase:metadata")]
> This helps to keep compaction of lean tables (such ashbase:meta) fast.
> [Acronym "ide" should be capitalized for clarity: IDE]
> Setting this to true can be useful in contexts other than the other side of a 
> maven generation; i.e. running in an ide. 
> [RuntimeException missing an "e"]
> You'll want to set this boolean to true to avoid seeing the RuntimException 
> complaint:
> [Space missing after "secure"]
> FS Permissions for the root directory in a secure(kerberos) setup.
> ["mutations" misspelled]
> ...will be created which will tail the logs and replicate the mutatations to 
> region replicas for tables that have region replication > 1.
> ["it such that" should be "is such that"]
> If your working set it such that block cache does you no good...
> ["an" should be "and"]
> See the Deveraj Das an Nicolas Liochon blog post...
> [Tag "" should be ""]
> hbase.coprocessor.master.classes
> [Misspelling of "implementations"]
> Those consumers are coprocessors, phoenix, replication endpoint 
> implemnetations or similar.
> [Misspelling of "cluster"]
> On upgrade, before running a rolling restart over the cluser...
> ["effect" should be "affect"]
> If NOT using BucketCache, this change does not effect you.
> [Need space after "throw"]
> This will throw`java.lang.NoSuchMethodError...
> ["habasee" should be "hbase"]
> You can pass commands to the HBase Shell 

[jira] [Commented] (HBASE-14829) Add more checkstyles

2015-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013253#comment-15013253
 ] 

Hadoop QA commented on HBASE-14829:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12773187/HBASE-14829-master-v2.patch
  against master branch at commit c92737c0e912563aeba2112ab8df74af976e720a.
  ATTACHMENT ID: 12773187

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
18690 checkstyle errors (more than the master's current 1727 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16587//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16587//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16587//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16587//console

This message is automatically generated.

> Add more checkstyles
> 
>
> Key: HBASE-14829
> URL: https://issues.apache.org/jira/browse/HBASE-14829
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14829-master-v2.patch, 
> HBASE-14829-master-v2.patch, HBASE-14829-master.patch
>
>
> This jira will add following checkstyles:
> [ImportOrder|http://checkstyle.sourceforge.net/config_imports.html#ImportOrder]
>  : keep imports in sorted order
> [LeftCurly|http://checkstyle.sourceforge.net/config_blocks.html#LeftCurly] : 
> Placement of the left curly brace. Does 'eol' sound like the right setting?
> [NeedBraces|http://checkstyle.sourceforge.net/config_blocks.html#NeedBraces] 
> : braces around code blocks
> [JavadocTagContinuationIndentation|http://checkstyle.sourceforge.net/config_javadoc.html#JavadocTagContinuationIndentation]
>  : Avoid weird indentations in javadocs
> [NonEmptyAtclauseDescription|http://checkstyle.sourceforge.net/config_javadoc.html#NonEmptyAtclauseDescription]
>  : We have so many empty javadoc @ clauses. This'll take care of it.
>  
> [Indentation|http://checkstyle.sourceforge.net/config_misc.html#Indentation] 
> : Bad indentation hurts code readability. We have indentation guidelines, 
> should be fine enforcing them.
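
For reference, a minimal sketch of how the checks listed above might be 
declared in a checkstyle configuration. The module names come from the 
checkstyle docs linked above; the property choices (e.g. 'eol' for LeftCurly) 
are assumptions, not necessarily what the patch sets:

{code}
<module name="Checker">
  <module name="TreeWalker">
    <!-- keep imports in sorted order -->
    <module name="ImportOrder"/>
    <!-- left curly brace at end of line -->
    <module name="LeftCurly">
      <property name="option" value="eol"/>
    </module>
    <!-- braces around code blocks -->
    <module name="NeedBraces"/>
    <!-- avoid weird indentation in javadoc continuations -->
    <module name="JavadocTagContinuationIndentation"/>
    <!-- no empty javadoc @ clauses -->
    <module name="NonEmptyAtclauseDescription"/>
    <!-- enforce indentation guidelines -->
    <module name="Indentation"/>
  </module>
</module>
{code}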



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14623) Implement dedicated WAL for system tables

2015-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013188#comment-15013188
 ] 

Hadoop QA commented on HBASE-14623:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773174/14623-v2.txt
  against master branch at commit cf81b45f3771002146d6e8c4d995b12963aa685a.
  ATTACHMENT ID: 12773174

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.replication.TestReplicationEndpoint
  
org.apache.hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleWAL

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16586//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16586//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16586//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16586//console

This message is automatically generated.

> Implement dedicated WAL for system tables
> -
>
> Key: HBASE-14623
> URL: https://issues.apache.org/jira/browse/HBASE-14623
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 14623-v1.txt, 14623-v2.txt
>
>
> As Stephen suggested in parent JIRA, dedicating separate WAL for system 
> tables (other than hbase:meta) should be done in new JIRA.
> This task is to fulfill the system WAL separation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14838) SimpleRegionNormalizer does not merge empty region of a table

2015-11-19 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013177#comment-15013177
 ] 

Mikhail Antonov commented on HBASE-14838:
-

correction - "didn't think" in first sensence above

> SimpleRegionNormalizer does not merge empty region of a table
> -
>
> Key: HBASE-14838
> URL: https://issues.apache.org/jira/browse/HBASE-14838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Romil Choksi
>
> SimpleRegionNormalizer does not merge empty region of a table
> Steps to repro:
> - Create an empty table with few, say 5-6 regions without any data in any of 
> them
> - Verify hbase:meta table to verify the regions for the table or check 
> HMaster UI
> - Enable normalizer switch and normalization for this table
> - Run normalizer, by 'normalize' command from hbase shell
> - Verify the regions for table by scanning hbase:meta table or checking 
> HMaster web UI
> The empty regions are not merged on running the region normalizer. This seems 
> to be an edge case with completely empty regions since the Normalizer checks 
> for: smallestRegion (in this case 0 size) + smallestNeighborOfSmallestRegion 
> (in this case 0 size) > avg region size (in this case 0 size)
> thanks to [~elserj] for verifying this from the source code side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14838) SimpleRegionNormalizer does not merge empty region of a table

2015-11-19 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013175#comment-15013175
 ] 

Mikhail Antonov commented on HBASE-14838:
-

Interesting, to be honest I did think about this scenario :)

I think the original incentive was to provide a tool to 1) automatically fix 
skews in the data distribution across regions (the result of a suboptimal 
choice of pre-chosen split points or something) and 2) merge up small regions 
(either ones which shrunk after a major compaction, or old small regions after 
migration to a new version with a bigger "standard" region size).

If you have 5-6 empty regions and no data in there, do you want the normalizer 
to merge them together? I would assume (if I saw this scenario) that someone 
has just pre-split the table and that it should be left as is until some data 
comes in and skews in the distribution start to show up, at which point the 
normalizer would kick in. Am I missing something?

> SimpleRegionNormalizer does not merge empty region of a table
> -
>
> Key: HBASE-14838
> URL: https://issues.apache.org/jira/browse/HBASE-14838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Romil Choksi
>
> SimpleRegionNormalizer does not merge empty region of a table
> Steps to repro:
> - Create an empty table with few, say 5-6 regions without any data in any of 
> them
> - Verify hbase:meta table to verify the regions for the table or check 
> HMaster UI
> - Enable normalizer switch and normalization for this table
> - Run normalizer, by 'normalize' command from hbase shell
> - Verify the regions for table by scanning hbase:meta table or checking 
> HMaster web UI
> The empty regions are not merged on running the region normalizer. This seems 
> to be an edge case with completely empty regions since the Normalizer checks 
> for: smallestRegion (in this case 0 size) + smallestNeighborOfSmallestRegion 
> (in this case 0 size) > avg region size (in this case 0 size)
> thanks to [~elserj] for verifying this from the source code side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14761) Deletes with and without visibility expression do not delete the matching mutation

2015-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013270#comment-15013270
 ] 

Hudson commented on HBASE-14761:


SUCCESS: Integrated in HBase-1.2 #385 (See 
[https://builds.apache.org/job/HBase-1.2/385/])
HBASE-14761 Deletes with and without visibility expression do not delete 
(ramkrishna: rev c8a715d85dbe461fa9233a99658b1f6bb50ab9f1)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityScanDeleteTracker.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDeletes.java


> Deletes with and without visibility expression do not delete the matching 
> mutation
> --
>
> Key: HBASE-14761
> URL: https://issues.apache.org/jira/browse/HBASE-14761
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.0.1, 1.1.0, 1.0.2, 1.1.2, 0.98.15
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14761.patch
>
>
> This is from the user list as reported by Anoop Sharma
> {code}
>  running into an issue related to visibility expressions and delete.
> Example run from hbase shell is listed below.
> Will appreciate any help on this issue.
> thanks.
> In the example below, user running queries has ‘MANAGER’ authorization.
> *First example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by passing in visibility of ‘MANAGER’
>   This works and scan doesn’t return anything.
> *Second example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by not passing in any visibility.
>   This doesn’t delete the column.
>   Scan doesn’t return the row but RAW scan shows the column
>   marked as deleteColumn.
>   Now if delete is done again with visibility of ‘MANAGER’,
>   it still doesn’t delete it and scan returns the original column.
> *Example 1:*
> hbase(main):096:0> create 'HBT1', 'cf'
> hbase(main):098:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> hbase(main):099:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446154722055,
> value=CA
> 1 row(s) in 0.0030 seconds
> hbase(main):100:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):101:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
> 0 row(s) in 0.0030 seconds
> *Example 2:*
> hbase(main):010:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0040 seconds
> hbase(main):011:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0060 seconds
> hbase(main):012:0> *delete 'HBT1', 'John', 'cf:a'*
> 0 row(s) in 0.0090 seconds
> hbase(main):013:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0050 seconds
> hbase(main):014:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346519,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> hbase(main):015:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):016:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0040 seconds
> hbase(main):017:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346601,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> {code}
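
For reference, a minimal client-side sketch (an assumption based on the 
HBase 1.0+ visibility API, not code from the patch; the table, family and 
values mirror the shell examples) of the two delete paths above:

{code}
// Example 2: no visibility expression on the Delete -- per this issue the
// 'MANAGER' cell is not masked.
Delete d = new Delete(Bytes.toBytes("John"));
d.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"));
table.delete(d);

// Example 1: a matching visibility expression -- the cell is masked.
Delete d2 = new Delete(Bytes.toBytes("John"));
d2.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"));
d2.setCellVisibility(new CellVisibility("MANAGER"));
table.delete(d2);
{code}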



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-11-19 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14468:
--
Description: 
h2. FIFO Compaction
h3. Introduction
FIFO compaction policy selects only files which have all cells expired. The 
column family MUST have non-default TTL. 
Essentially, FIFO compactor does only one job: collects expired store files. I 
see many applications for this policy:
# Use it for very high volume raw data which has a low TTL and which is the 
source of other, derived data (after additional processing). Example: Raw 
time-series vs. time-based rollup aggregates and compacted time-series. We 
collect raw time-series and store them into a CF with the FIFO compaction 
policy; periodically we run a task which creates the rollup aggregates and 
compacts the time-series, and the original raw data can be discarded after 
that.
# Use it for data which can be kept entirely in a block cache (RAM/SSD). Say 
we have a local SSD (1TB) which we can use as a block cache. No need for 
compaction of the raw data at all.

Because we do not do any real compaction, we do not use CPU and IO (disk and 
network), and we do not evict hot data from the block cache. The result: 
improved throughput and latency for both writes and reads.
See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style

h3. To enable FIFO compaction policy
For table:
{code}
HTableDescriptor desc = new HTableDescriptor(tableName);

desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
  FIFOCompactionPolicy.class.getName());
{code} 

For CF:
{code}
HColumnDescriptor desc = new HColumnDescriptor(family);

desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
  FIFOCompactionPolicy.class.getName());
{code}

Although region splitting is supported, for optimal performance it should be 
disabled, either by explicitly setting DisabledRegionSplitPolicy or by setting 
ConstantSizeRegionSplitPolicy with a very large max region size. You will also 
have to increase the store's blocking file count, 
*hbase.hstore.blockingStoreFiles*, to a very large number.
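
For illustration, a minimal sketch (assuming the stock HBase client API; this 
is not part of the patch) of disabling splits and raising the blocking store 
file count on such a table:

{code}
HTableDescriptor desc = new HTableDescriptor(tableName);
// Disable splits with the stock DisabledRegionSplitPolicy.
desc.setRegionSplitPolicyClassName(DisabledRegionSplitPolicy.class.getName());
// Raise the blocking store file count so writes are not throttled while
// expired store files accumulate between FIFO collections.
desc.setConfiguration("hbase.hstore.blockingStoreFiles", "1000");
{code}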
 
h3. Limitations
Do not use FIFO compaction if:
* Table/CF has MIN_VERSION > 0
* Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



  was:
h2. FIFO Compaction
h3. Introduction
FIFO compaction policy selects only files which have all cells expired. The 
column family MUST have non-default TTL. 
Essentially, FIFO compactor does only one job: collects expired store files. I 
see many applications for this policy:
# use it for very high volume raw data which has a low TTL and which is the 
source of other, derived data (after additional processing). Example: Raw 
time-series vs. time-based rollup aggregates and compacted time-series. We 
collect raw time-series and store them into a CF with the FIFO compaction 
policy; periodically we run a task which creates the rollup aggregates and 
compacts the time-series, and the original raw data can be discarded after 
that.
# use it for data which can be kept entirely in a block cache (RAM/SSD). Say 
we have a local SSD (1TB) which we can use as a block cache. No need for 
compaction of the raw data at all.

Because we do not do any real compaction, we do not use CPU and IO (disk and 
network), and we do not evict hot data from the block cache. The result: 
improved throughput and latency for both writes and reads.
See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style

h3. To enable FIFO compaction policy
For table:
{code}
HTableDescriptor desc = new HTableDescriptor(tableName);

desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
  FIFOCompactionPolicy.class.getName());
{code} 

For CF:
{code}
HColumnDescriptor desc = new HColumnDescriptor(family);

desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
  FIFOCompactionPolicy.class.getName());
{code}

Make sure that the table has region splits disabled (either by explicitly 
setting DisabledRegionSplitPolicy or by setting ConstantSizeRegionSplitPolicy 
with a very large max region size). You will also have to increase the store's 
blocking file count, *hbase.hstore.blockingStoreFiles*, to a very large number.
 
h3. Limitations
Do not use FIFO compaction if:
* Table/CF has MIN_VERSION > 0
* Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)




> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, 

[jira] [Updated] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-11-19 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14468:
--
Description: 
h2. FIFO Compaction
h3. Introduction
FIFO compaction policy selects only files which have all cells expired. The 
column family MUST have non-default TTL. 
Essentially, FIFO compactor does only one job: collects expired store files. 
These are some applications which could benefit the most:
# Use it for very high volume raw data which has a low TTL and which is the 
source of other, derived data (after additional processing). Example: Raw 
time-series vs. time-based rollup aggregates and compacted time-series. We 
collect raw time-series and store them into a CF with the FIFO compaction 
policy; periodically we run a task which creates the rollup aggregates and 
compacts the time-series, and the original raw data can be discarded after 
that.
# Use it for data which can be kept entirely in a block cache (RAM/SSD). Say 
we have a local SSD (1TB) which we can use as a block cache. No need for 
compaction of the raw data at all.

Because we do not do any real compaction, we do not use CPU and IO (disk and 
network), and we do not evict hot data from the block cache. The result: 
improved throughput and latency for both writes and reads.
See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style

h3. To enable FIFO compaction policy
For table:
{code}
HTableDescriptor desc = new HTableDescriptor(tableName);

desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
  FIFOCompactionPolicy.class.getName());
{code} 

For CF:
{code}
HColumnDescriptor desc = new HColumnDescriptor(family);

desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
  FIFOCompactionPolicy.class.getName());
{code}

Although region splitting is supported, for optimal performance it should be 
disabled, either by explicitly setting DisabledRegionSplitPolicy or by setting 
ConstantSizeRegionSplitPolicy with a very large max region size. You will also 
have to increase the store's blocking file count, 
*hbase.hstore.blockingStoreFiles*, to a very large number.
 
h3. Limitations
Do not use FIFO compaction if:
* Table/CF has MIN_VERSION > 0
* Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



  was:
h2. FIFO Compaction
h3. Introduction
FIFO compaction policy selects only files which have all cells expired. The 
column family MUST have non-default TTL. 
Essentially, FIFO compactor does only one job: collects expired store files. I 
see many applications for this policy:
# Use it for very high volume raw data which has a low TTL and which is the 
source of other, derived data (after additional processing). Example: Raw 
time-series vs. time-based rollup aggregates and compacted time-series. We 
collect raw time-series and store them into a CF with the FIFO compaction 
policy; periodically we run a task which creates the rollup aggregates and 
compacts the time-series, and the original raw data can be discarded after 
that.
# Use it for data which can be kept entirely in a block cache (RAM/SSD). Say 
we have a local SSD (1TB) which we can use as a block cache. No need for 
compaction of the raw data at all.

Because we do not do any real compaction, we do not use CPU and IO (disk and 
network), and we do not evict hot data from the block cache. The result: 
improved throughput and latency for both writes and reads.
See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style

h3. To enable FIFO compaction policy
For table:
{code}
HTableDescriptor desc = new HTableDescriptor(tableName);

desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
  FIFOCompactionPolicy.class.getName());
{code} 

For CF:
{code}
HColumnDescriptor desc = new HColumnDescriptor(family);

desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
  FIFOCompactionPolicy.class.getName());
{code}

Although region splitting is supported, for optimal performance it should be 
disabled, either by explicitly setting DisabledRegionSplitPolicy or by setting 
ConstantSizeRegionSplitPolicy with a very large max region size. You will also 
have to increase the store's blocking file count, 
*hbase.hstore.blockingStoreFiles*, to a very large number.
 
h3. Limitations
Do not use FIFO compaction if :
* Table/CF has MIN_VERSION > 0
* Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)




> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, 

[jira] [Updated] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-11-19 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14468:
--
Release Note: 
FIFO compaction policy selects only files which have all cells expired. The 
column family MUST have non-default TTL. 
Essentially, FIFO compactor does only one job: collects expired store files. 

Because we do not do any real compaction, we do not use CPU and IO (disk and 
network), and we do not evict hot data from the block cache. The result: 
improved throughput and latency for both writes and reads.
See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style

> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other, derived data (after additional processing). Example: Raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with the FIFO compaction 
> policy; periodically we run a task which creates the rollup aggregates and 
> compacts the time-series, and the original raw data can be discarded after 
> that.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network), and we do not evict hot data from the block cache. The result: 
> improved throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file count, 
> *hbase.hstore.blockingStoreFiles*, to a very large number.
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSION > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14847) Add FIFO compaction section to HBase book

2015-11-19 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-14847:
-

 Summary: Add FIFO compaction section to HBase book
 Key: HBASE-14847
 URL: https://issues.apache.org/jira/browse/HBASE-14847
 Project: HBase
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
 Fix For: 2.0.0


HBASE-14468 introduced new compaction policy. Book needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14807) TestWALLockup is flakey

2015-11-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14807:
--
Attachment: 14807.second.attempt.txt

See below for a description of the change in the patch. Let me test this on a 
cluster too, to be sure, since it makes a small change in a critical part.

[~eclark] I think you know this section best. You good with this patch sir?


The patch includes a loosening of the test so we continue when threads
run in an unexpected order. It also includes minor clean-ups in
FSHLog -- a formatting change, removal of unused trace logging,
and a check so we don't create a new exception when not needed --
but it also includes a subtle change so that we check whether we need
to get to the safe point EVEN IF there is an outstanding exception.
Previously we could by-pass the safe point check. This should make us
even more robust against lockup (though this is a change that comes of
code reading, not of any issue seen in test).

Here is some detail on how I loosened the test:

The test can run in an unexpected order. Attempts at dictating the
order in which threads fire only had me deadlocking one latch
against another (the test latch vs the WAL zigzag latch), so I
gave up trying. Instead, if we happen to go the unusual route of
rolling WALs and failing the flush before the scheduled log roll
latch goes into place, we just time out the run after a few seconds
and exit the test (but do not fail it); we just log a WARN.

This is less than ideal but allows us to keep some coverage of the
tricky scenario that was bringing on deadlock (a broken WAL that
is throwing exceptions getting stuck waiting on a sync to clear
out the ring buffer, overshadowed by a subsequent append added
in by a concurrent flush).
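
A minimal sketch, not the actual TestWALLockup code, of the time-out-and-skip 
pattern described above (the latch name, wait time, and logger are 
illustrative assumptions):

{code}
// Wait a bounded time for the expected thread ordering; if threads ran in
// an unexpected order, log a WARN and exit the test without failing it.
if (!zigzagLatch.await(5, TimeUnit.SECONDS)) {
  LOG.warn("Threads ran in an unexpected order; skipping the deadlock scenario");
  return; // exit the test, but do not fail it
}
{code}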

> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch, 14807.second.attempt.txt
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   

[jira] [Updated] (HBASE-14847) Add FIFO compaction section to HBase book

2015-11-19 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14847:
--
Fix Version/s: 1.3.0
   1.2.0

> Add FIFO compaction section to HBase book
> -
>
> Key: HBASE-14847
> URL: https://issues.apache.org/jira/browse/HBASE-14847
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
>
> HBASE-14468 introduced new compaction policy. Book needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-11-19 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014251#comment-15014251
 ] 

Vladimir Rodionov commented on HBASE-14468:
---

Created separate documentation JIRA: HBASE-14847.

> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other, derived data (after additional processing). Example: Raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with the FIFO compaction 
> policy; periodically we run a task which creates the rollup aggregates and 
> compacts the time-series, and the original raw data can be discarded after 
> that.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network), and we do not evict hot data from the block cache. The result: 
> improved throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file count, 
> *hbase.hstore.blockingStoreFiles*, to a very large number.
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSION > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14838) SimpleRegionNormalizer does not merge empty region of a table

2015-11-19 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014035#comment-15014035
 ] 

Josh Elser commented on HBASE-14838:


bq. If you have 5-6 empty regions and no data in there, do you want normalizer 
to merge them together?

I think that's the ultimate question to answer. Given how the code reads, it 
just looked like this was something that wasn't considered. Specifically, in 
working with Romil, we were trying to investigate why the normalizer wasn't 
doing anything with these regions as we expected it to (in a contrived 
environment).

bq. If you have 5-6 empty regions and no data in there, do you want normalizer 
to merge them together?

In this contrived case, we had only written a small amount of data to one of 
the regions (<1MB). I've yet to investigate why the greater-than-zero amount of 
data in one region was ultimately treated as no data (confirmed via a remote 
debugger attached to the master). Because of this, the average size was 
reported as zero (even for a region with a small amount of data in it). Purely 
background information at this point -- I need to look into this again.

bq. As Mikhail Antonov says, if there's no data, we have no way to guess at a 
reasonable distribution of split points.

At first glance, my reaction was that the "reasonable distribution of split 
points" for no data in a table is having no split points. Same goes for small 
amounts of data. I hadn't considered the side-effect of the normalizer undoing 
a pre-split table (dev goes to get lunch before starting ingest), which would be 
a confusing story to tell. Perhaps some comments on the class would be 
sufficient to record that "hey, this won't do anything to empty tables". What 
do you guys think?
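
For illustration, a minimal sketch (not the actual SimpleRegionNormalizer 
code) of the merge condition quoted in the description and why all-empty 
regions never satisfy it:

{code}
// All sizes are 0 for a completely empty table, so the condition below
// (merge only when the two smallest neighbors together stay under the
// average region size) reads 0 + 0 < 0 and is always false.
long smallestRegion = 0;
long smallestNeighborOfSmallestRegion = 0;
long avgRegionSize = 0;
boolean merge = smallestRegion + smallestNeighborOfSmallestRegion < avgRegionSize;
{code}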

> SimpleRegionNormalizer does not merge empty region of a table
> -
>
> Key: HBASE-14838
> URL: https://issues.apache.org/jira/browse/HBASE-14838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Romil Choksi
>
> SimpleRegionNormalizer does not merge empty region of a table
> Steps to repro:
> - Create an empty table with few, say 5-6 regions without any data in any of 
> them
> - Verify hbase:meta table to verify the regions for the table or check 
> HMaster UI
> - Enable normalizer switch and normalization for this table
> - Run normalizer, by 'normalize' command from hbase shell
> - Verify the regions for table by scanning hbase:meta table or checking 
> HMaster web UI
> The empty regions are not merged on running the region normalizer. This seems 
> to be an edge case with completely empty regions since the Normalizer checks 
> for: smallestRegion (in this case 0 size) + smallestNeighborOfSmallestRegion 
> (in this case 0 size) > avg region size (in this case 0 size)
> thanks to [~elserj] for verifying this from the source code side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14819) hbase-it tests failing with OOME

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014199#comment-15014199
 ] 

stack commented on HBASE-14819:
---

Here is what I was thinking of adding:

{code}
diff --git a/hbase-it/pom.xml b/hbase-it/pom.xml
index f62a57d..6d57e55 100644
--- a/hbase-it/pom.xml
+++ b/hbase-it/pom.xml
@@ -94,6 +94,7 @@
         <redirectTestOutputToFile>${test.output.tofile}</redirectTestOutputToFile>
+        <argLine>-Xmx4G</argLine>
         <environmentVariables>
           <LD_LIBRARY_PATH>${env.LD_LIBRARY_PATH}:${project.build.directory}/nativelib</LD_LIBRARY_PATH>
           <DYLD_LIBRARY_PATH>${env.DYLD_LIBRARY_PATH}:${project.build.directory}/nativelib</DYLD_LIBRARY_PATH>
{code}

> hbase-it tests failing with OOME
> 
>
> Key: HBASE-14819
> URL: https://issues.apache.org/jira/browse/HBASE-14819
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
> Attachments: Screen Shot 2015-11-16 at 11.37.41 PM.png, itag.txt
>
>
> Let me up the heap used when failsafe forks.
> Here is example OOME doing ITBLL:
> {code}
> 2015-11-16 03:09:15,073 INFO  [Thread-694] actions.BatchRestartRsAction(69): 
> Starting region server:asf905.gq1.ygridcore.net
> 2015-11-16 03:09:15,099 INFO  [Thread-694] client.ConnectionUtils(104): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0 server-side HConnection 
> retries=350
> 2015-11-16 03:09:15,099 INFO  [Thread-694] ipc.SimpleRpcScheduler(128): Using 
> deadline as user call queue, count=1
> 2015-11-16 03:09:15,101 INFO  [Thread-694] ipc.RpcServer$Listener(607): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0: started 3 reader(s) 
> listening on port=36114
> 2015-11-16 03:09:15,103 INFO  [Thread-694] fs.HFileSystem(252): Added 
> intercepting call to namenode#getBlockLocations so can do block reordering 
> using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2015-11-16 03:09:15,104 INFO  [Thread-694] 
> zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:36114 
> connecting to ZooKeeper ensemble=localhost:50139
> 2015-11-16 03:09:15,117 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(554): regionserver:361140x0, 
> quorum=localhost:50139, baseZNode=/hbase Received ZooKeeper Event, type=None, 
> state=SyncConnected, path=null
> 2015-11-16 03:09:15,118 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/master
> 2015-11-16 03:09:15,119 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/running
> 2015-11-16 03:09:15,119 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(638): regionserver:36114-0x1510e2c6f1d0029 
> connected
> 2015-11-16 03:09:15,120 INFO  [RpcServer.responder] 
> ipc.RpcServer$Responder(926): RpcServer.responder: starting
> 2015-11-16 03:09:15,121 INFO  [RpcServer.listener,port=36114] 
> ipc.RpcServer$Listener(738): RpcServer.listener,port=36114: starting
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=3 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=1 queue=1
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=3 queue=1
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,761 DEBUG [RS:0;asf905:36114] 
> client.ConnectionManager$HConnectionImplementation(715): connection 
> construction failed
> java.io.IOException: java.lang.OutOfMemoryError: PermGen space
>   at 
> org.apache.hadoop.hbase.client.RegistryFactory.getRegistry(RegistryFactory.java:43)
>   at 
> 

[jira] [Comment Edited] (HBASE-14822) Renewing leases of scanners doesn't work

2015-11-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014187#comment-15014187
 ] 

Lars Hofhansl edited comment on HBASE-14822 at 11/19/15 7:25 PM:
-

This patch should keep the behaviour the way it is.
Can you try with this one [~samarthjain]?


was (Author: lhofhansl):
This patch should keep the behaviour the way it is.
Can you try with this one [~samarthjain]

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14822) Renewing leases of scanners doesn't work

2015-11-19 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14822:
--
Status: Open  (was: Patch Available)

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14791) Batch Deletes in MapReduce jobs (0.98)

2015-11-19 Thread Alex Araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014141#comment-15014141
 ] 

Alex Araujo commented on HBASE-14791:
-

[~apurtell], [~vik.karma] ran a CopyTable with this patch and the job finished 
significantly faster: 7.5 minutes vs 8.5 hours without the patch.

> Batch Deletes in MapReduce jobs (0.98)
> --
>
> Key: HBASE-14791
> URL: https://issues.apache.org/jira/browse/HBASE-14791
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.16
>Reporter: Lars Hofhansl
>Assignee: Alex Araujo
>  Labels: mapreduce
> Fix For: 0.98.17
>
> Attachments: HBASE-14791-0.98-v1.patch, HBASE-14791-0.98-v2.patch, 
> HBASE-14791-0.98.patch
>
>
> We found that some of our copy table jobs run for many hours, even when there 
> isn't that much data to copy.
> [~vik.karma] did his magic and found that the issue is with copying delete 
> markers (we use raw mode to also move deletes across).
> Looking at the code in 0.98 it's immediately obvious that deletes (unlike 
> puts) are not batched and hence sent to the other side one by one, causing a 
> network RTT for each delete marker.
> Looks like in trunk it's doing the right thing (using BufferedMutators for 
> all mutations in TableOutputFormat). So likely only a 0.98 (and 1.0, 1.1, 
> 1.2?) issue.
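
For reference, a minimal sketch (assuming the 1.x client API the description 
refers to; connection, tableName and row are illustrative) of the batched 
write path that trunk's TableOutputFormat uses:

{code}
// Mutations handed to a BufferedMutator are buffered client-side and sent
// in batches, instead of paying one network round trip per delete marker.
try (BufferedMutator mutator = connection.getBufferedMutator(tableName)) {
  mutator.mutate(new Delete(row)); // buffered, no immediate RPC
  // ... more deletes ...
}                                  // close() flushes the remaining mutations
{code}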



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14822) Renewing leases of scanners doesn't work

2015-11-19 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14822:
--
Status: Patch Available  (was: Open)

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14838) SimpleRegionNormalizer does not merge empty region of a table

2015-11-19 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-14838:
---
Affects Version/s: (was: 1.1.2)

> SimpleRegionNormalizer does not merge empty region of a table
> -
>
> Key: HBASE-14838
> URL: https://issues.apache.org/jira/browse/HBASE-14838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Romil Choksi
>
> SimpleRegionNormalizer does not merge empty region of a table
> Steps to repro:
> - Create an empty table with few, say 5-6 regions without any data in any of 
> them
> - Verify hbase:meta table to verify the regions for the table or check 
> HMaster UI
> - Enable normalizer switch and normalization for this table
> - Run normalizer, by 'normalize' command from hbase shell
> - Verify the regions for table by scanning hbase:meta table or checking 
> HMaster web UI
> The empty regions are not merged on running the region normalizer. This seems 
> to be an edge case with completely empty regions since the Normalizer checks 
> for: smallestRegion (in this case 0 size) + smallestNeighborOfSmallestRegion 
> (in this case 0 size) > avg region size (in this case 0 size)
> thanks to [~elserj] for verifying this from the source code side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14839) [branch-1] Backport test categories so that patch backport is easier

2015-11-19 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-14839:
--
Status: Patch Available  (was: Open)

> [branch-1] Backport test categories so that patch backport is easier
> 
>
> Key: HBASE-14839
> URL: https://issues.apache.org/jira/browse/HBASE-14839
> Project: HBase
>  Issue Type: Test
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: hbase-14839-branch-1.patch
>
>
> Test categories are in master and new unit tests are sometimes marked with 
> that particular interface ( {{RPCTests.class}} ). 
> Since we don't have the specific annotation classes in branch-1, backports 
> usually fail. We can just commit those classes to all applicable branches so 
> that committing patches is less work. 
> We can also backport the full patch for running the specific tests from maven 
> as a further issue. Feel free to take it up, if you are interested. 
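
A minimal sketch (assuming the standard JUnit category mechanism; 
{{RPCTests.class}} is the example interface named above, and the test class 
name is hypothetical) of how a test gets marked:

{code}
// Category interfaces are empty marker types. Backporting them to branch-1
// lets master patches that reference them apply cleanly.
@Category({ MediumTests.class, RPCTests.class })
public class TestSomeRpcFeature {
  // ...
}
{code}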



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14791) Batch Deletes in MapReduce jobs (0.98)

2015-11-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014169#comment-15014169
 ] 

Lars Hofhansl commented on HBASE-14791:
---

Yeah! :)

> Batch Deletes in MapReduce jobs (0.98)
> --
>
> Key: HBASE-14791
> URL: https://issues.apache.org/jira/browse/HBASE-14791
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.16
>Reporter: Lars Hofhansl
>Assignee: Alex Araujo
>  Labels: mapreduce
> Fix For: 0.98.17
>
> Attachments: HBASE-14791-0.98-v1.patch, HBASE-14791-0.98-v2.patch, 
> HBASE-14791-0.98.patch
>
>
> We found that some of our copy table jobs run for many hours, even when there 
> isn't that much data to copy.
> [~vik.karma] did his magic and found that the issue is with copying delete 
> markers (we use raw mode to also move deletes across).
> Looking at the code in 0.98 it's immediately obvious that deletes (unlike 
> puts) are not batched and hence sent to the other side one by one, causing a 
> network RTT for each delete marker.
> Looks like in trunk it's doing the right thing (using BufferedMutators for 
> all mutations in TableOutputFormat). So likely only a 0.98 (and 1.0, 1.1, 
> 1.2?) issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14822) Renewing leases of scanners doesn't work

2015-11-19 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14822:
--
Attachment: 14822-0.98-v3.txt

This patch should keep the behaviour the way it is.
Can you try with this one [~samarthjain]

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14844) backport jdk.tools exclusion to 1.0 and 1.1

2015-11-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014080#comment-15014080
 ] 

Andrew Purtell commented on HBASE-14844:


+1


> backport jdk.tools exclusion to 1.0 and 1.1
> ---
>
> Key: HBASE-14844
> URL: https://issues.apache.org/jira/browse/HBASE-14844
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.1.2, 1.0.3
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 1.1.3, 1.0.4
>
>
> per [~apurtell]'s comment when backporting HBASE-13963 to 0.98, we should 
> probably consider the leaking of jdk.tools in 1.0 and 1.1 a bug as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-11-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014105#comment-15014105
 ] 

Lars Hofhansl commented on HBASE-14822:
---

I removed your patches [~samarthjain] to avoid confusion.

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-11-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014102#comment-15014102
 ] 

Lars Hofhansl commented on HBASE-14822:
---

[~samarthjain] and I took a look. It turns out that Phoenix does some funky 
requests where we have a scan with a filter that indicates "done" _and_ that 
has caching set to 0 - so this is essentially a useless request.
Be that as it may, for this use case this patch changes the behavior. As it 
happens, there is a good fix for this. Will upload a patch soon.

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14822) Renewing leases of scanners doesn't work

2015-11-19 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14822:
--
Attachment: (was: 14822-0.98-v4.txt)

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14822) Renewing leases of scanners doesn't work

2015-11-19 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14822:
--
Attachment: (was: 14822-0.98-v3.txt)

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14846) IT test suite fails when run standalone: "mvn -Dtest=NoUnitTests package verify"

2015-11-19 Thread stack (JIRA)
stack created HBASE-14846:
-

 Summary: IT test suite fails when run standalone: "mvn 
-Dtest=NoUnitTests package verify"
 Key: HBASE-14846
 URL: https://issues.apache.org/jira/browse/HBASE-14846
 Project: HBase
  Issue Type: Umbrella
Reporter: stack


Seeing how IT tests fail sometimes up on Apache builds, I tried running them 
locally.

Most fail. See below. Some fail because they use too many resources. See 
HBASE-14819, where a test was using > 2k threads when run standalone (Mac OS X 
has a limit of 2031 according to 
https://plumbr.eu/outofmemoryerror/unable-to-create-new-native-thread). Up on 
the Apache builds, some are OOME'ing.

{code}
---
 T E S T S
---
Running org.apache.hadoop.hbase.IntegrationTestAcidGuarantees
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 559.189 sec - 
in org.apache.hadoop.hbase.IntegrationTestAcidGuarantees
Running org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover
Running org.apache.hadoop.hbase.IntegrationTestIngest
Running org.apache.hadoop.hbase.IntegrationTestIngestStripeCompactions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 677.357 sec - 
in org.apache.hadoop.hbase.IntegrationTestIngestStripeCompactions
Running org.apache.hadoop.hbase.IntegrationTestIngestWithACL
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 220.775 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.IntegrationTestIngestWithACL
testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time elapsed: 
220.26 sec  <<< ERROR!
java.io.IOException: Shutting down
at 
org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:225)
at 
org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:448)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1077)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1036)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:908)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:890)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:877)
at 
org.apache.hadoop.hbase.IntegrationTestingUtility.initializeCluster(IntegrationTestingUtility.java:78)
at 
org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:84)
at 
org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:64)

Running org.apache.hadoop.hbase.IntegrationTestIngestWithEncryption
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 720.259 sec - 
in org.apache.hadoop.hbase.IntegrationTestIngestWithEncryption
Running org.apache.hadoop.hbase.IntegrationTestIngestWithMOB
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1,206.761 sec 
<<< FAILURE! - in org.apache.hadoop.hbase.IntegrationTestIngestWithMOB
testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithMOB)  Time elapsed: 
1,206.212 sec  <<< ERROR!
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=36, exceptions:
Wed Nov 18 19:29:47 PST 2015, RpcRetryingCaller{globalStartTime=1447903770109, 
pause=100, maxAttempts=36}, org.apache.hadoop.hbase.MasterNotRunningException: 
org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to 
ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
Wed Nov 18 19:30:04 PST 2015, RpcRetryingCaller{globalStartTime=1447903770109, 
pause=100, maxAttempts=36}, org.apache.hadoop.hbase.MasterNotRunningException: 
org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to 
ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
Wed Nov 18 19:30:21 PST 2015, RpcRetryingCaller{globalStartTime=1447903770109, 
pause=100, maxAttempts=36}, org.apache.hadoop.hbase.MasterNotRunningException: 
org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to 
ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
Wed Nov 18 19:30:39 PST 2015, RpcRetryingCaller{globalStartTime=1447903770109, 
pause=100, maxAttempts=36}, org.apache.hadoop.hbase.MasterNotRunningException: 
org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to 
ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
Wed Nov 18 19:30:56 PST 2015, RpcRetryingCaller{globalStartTime=1447903770109, 
pause=100, maxAttempts=36}, org.apache.hadoop.hbase.MasterNotRunningException: 
org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to 
ZooKeeper: 

[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-11-19 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014275#comment-15014275
 ] 

Samarth Jain commented on HBASE-14822:
--

The latest patch looks good, [~lhofhansl]. I no longer see 
UnknownScannerException in the logs.

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14623) Implement dedicated WAL for system tables

2015-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014316#comment-15014316
 ] 

Hadoop QA commented on HBASE-14623:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773291/14623-v2.txt
  against master branch at commit c92737c0e912563aeba2112ab8df74af976e720a.
  ATTACHMENT ID: 12773291

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleWAL

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16593//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16593//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16593//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16593//console

This message is automatically generated.

> Implement dedicated WAL for system tables
> -
>
> Key: HBASE-14623
> URL: https://issues.apache.org/jira/browse/HBASE-14623
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 14623-v1.txt, 14623-v2.txt, 14623-v2.txt
>
>
> As Stephen suggested in parent JIRA, dedicating separate WAL for system 
> tables (other than hbase:meta) should be done in new JIRA.
> This task is to fulfill the system WAL separation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14852) Update build env

2015-11-19 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14852:
--
Attachment: HBASE-14852.patch

> Update build env
> 
>
> Key: HBASE-14852
> URL: https://issues.apache.org/jira/browse/HBASE-14852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14852.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14490) [RpcServer] reuse request read buffer

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014899#comment-15014899
 ] 

stack commented on HBASE-14490:
---

Any progress on this patch, lads?

> [RpcServer] reuse request read buffer
> -
>
> Key: HBASE-14490
> URL: https://issues.apache.org/jira/browse/HBASE-14490
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.0, 1.0.2
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>  Labels: performance
> Fix For: 2.0.0, 1.0.2
>
> Attachments: ByteBufferPool.java, HBASE-14490-v1.patch, 
> HBASE-14490-v10.patch, HBASE-14490-v11.patch, HBASE-14490-v12.patch, 
> HBASE-14490-v2.patch, HBASE-14490-v3.patch, HBASE-14490-v4.patch, 
> HBASE-14490-v5.patch, HBASE-14490-v6.patch, HBASE-14490-v7.patch, 
> HBASE-14490-v8.patch, HBASE-14490-v9.patch, test-v12-patch
>
>
> Reuse the buffer when reading requests. It's not necessary for every request 
> to free its buffer. The idea of the optimization is to reduce the number of 
> ByteBuffer allocations.
> *Modification*
> 1. {{saslReadAndProcess}}, {{processOneRpc}}, {{processUnwrappedData}}, and 
> {{processConnectionHeader}} accept a ByteBuffer instead of byte[]. They can 
> move {{ByteBuffer.position}} correctly once the data has been read.
> 2. {{processUnwrappedData}} no longer uses any extra memory.
> 3. Maintain a buffer pool in each {{Connection}}.
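
As an illustration of point 3, a toy buffer pool (this is not the attached
ByteBufferPool.java, just a sketch of the technique):

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy per-connection pool: hand back a previously allocated buffer when one
// is free, otherwise allocate; callers return buffers instead of dropping
// them for the GC to collect.
public class SimpleByteBufferPool {
  private final ConcurrentLinkedQueue<ByteBuffer> pool = new ConcurrentLinkedQueue<ByteBuffer>();
  private final int bufferSize;

  public SimpleByteBufferPool(int bufferSize) {
    this.bufferSize = bufferSize;
  }

  public ByteBuffer acquire(int needed) {
    if (needed > bufferSize) {
      return ByteBuffer.allocate(needed);  // oversize requests bypass the pool
    }
    ByteBuffer b = pool.poll();
    if (b == null) {
      b = ByteBuffer.allocate(bufferSize);
    }
    b.clear();
    b.limit(needed);  // caller sees exactly the requested size
    return b;
  }

  public void release(ByteBuffer b) {
    if (b.capacity() == bufferSize) {
      pool.offer(b);  // unbounded here; a real pool would cap retained buffers
    }
  }
}
{code}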



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14851) Add test showing how to use TTL from thrift

2015-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014955#comment-15014955
 ] 

Hadoop QA commented on HBASE-14851:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773352/HBASE-14851-v1.patch
  against master branch at commit f0dc556b7174c18f3174c24364cc80e32195f715.
  ATTACHMENT ID: 12773352

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16598//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16598//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16598//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16598//console

This message is automatically generated.

> Add test showing how to use TTL from thrift
> ---
>
> Key: HBASE-14851
> URL: https://issues.apache.org/jira/browse/HBASE-14851
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0
>
> Attachments: HBASE-14851-v1.patch, HBASE-14851.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2015-11-19 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Status: Patch Available  (was: Open)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v2.patch, HBASE-14030-v3.patch, 
> HBASE-14030-v4.patch, HBASE-14030-v5.patch, HBASE-14030-v6.patch, 
> HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14829) Add more checkstyles

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014978#comment-15014978
 ] 

stack commented on HBASE-14829:
---

Shows this as a hanging test:


Printing hanging tests
Hanging test : org.apache.hadoop.hbase.regionserver.wal.TestSecureWALReplay
Printing Failing tests


Not related.

Let me apply to trunk.

> Add more checkstyles
> 
>
> Key: HBASE-14829
> URL: https://issues.apache.org/jira/browse/HBASE-14829
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14829-master-v2.patch, 
> HBASE-14829-master-v2.patch, HBASE-14829-master.patch
>
>
> This jira will add following checkstyles:
> [ImportOrder|http://checkstyle.sourceforge.net/config_imports.html#ImportOrder]
>  : keep imports in sorted order
> [LeftCurly|http://checkstyle.sourceforge.net/config_blocks.html#LeftCurly] : 
> Placement of left curly brace. Does 'eol' sounds right setting?
> [NeedBraces|http://checkstyle.sourceforge.net/config_blocks.html#NeedBraces] 
> : braces around code blocks
> [JavadocTagContinuationIndentation|http://checkstyle.sourceforge.net/config_javadoc.html#JavadocTagContinuationIndentation]
>  : Avoid weird indentations in javadocs
> [NonEmptyAtclauseDescription|http://checkstyle.sourceforge.net/config_javadoc.html#NonEmptyAtclauseDescription]
>  : We have so many empty javadoc @ clauses. This'll take care of it.
>  
> [Indentation|http://checkstyle.sourceforge.net/config_misc.html#Indentation] 
> : Bad indentation hurts code readability. We have indentation guidelines, 
> should be fine enforcing them.
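
A hand-written illustration of the two brace checks (not taken from the patch):

{code}
public class BraceStyleExample {
  private boolean ready = true;

  void run() {
    // NeedBraces: a bare "if (ready) doWork();" would be flagged.
    if (ready) {  // LeftCurly 'eol': the '{' stays on the same line
      doWork();
    }
  }

  void doWork() {
  }
}
{code}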



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14829) Add more checkstyles

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014981#comment-15014981
 ] 

stack commented on HBASE-14829:
---

But then, checkstyle will be red for every build until someone fixes the 16k or 
so complaints. We should up the checkstyle allowed count, I'd say.

You have any script that will fix some of these for a follow-on, [~appy]?

> Add more checkstyles
> 
>
> Key: HBASE-14829
> URL: https://issues.apache.org/jira/browse/HBASE-14829
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14829-master-v2.patch, 
> HBASE-14829-master-v2.patch, HBASE-14829-master.patch
>
>
> This jira will add following checkstyles:
> [ImportOrder|http://checkstyle.sourceforge.net/config_imports.html#ImportOrder]
>  : keep imports in sorted order
> [LeftCurly|http://checkstyle.sourceforge.net/config_blocks.html#LeftCurly] : 
> Placement of left curly brace. Does 'eol' sounds right setting?
> [NeedBraces|http://checkstyle.sourceforge.net/config_blocks.html#NeedBraces] 
> : braces around code blocks
> [JavadocTagContinuationIndentation|http://checkstyle.sourceforge.net/config_javadoc.html#JavadocTagContinuationIndentation]
>  : Avoid weird indentations in javadocs
> [NonEmptyAtclauseDescription|http://checkstyle.sourceforge.net/config_javadoc.html#NonEmptyAtclauseDescription]
>  : We have so many empty javadoc @ clauses. This'll take care of it.
>  
> [Indentation|http://checkstyle.sourceforge.net/config_misc.html#Indentation] 
> : Bad indentation hurts code readability. We have indentation guidelines, 
> should be fine enforcing them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14623) Implement dedicated WAL for system tables

2015-11-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14623:
---
Attachment: 14623-v2.txt

> Implement dedicated WAL for system tables
> -
>
> Key: HBASE-14623
> URL: https://issues.apache.org/jira/browse/HBASE-14623
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 14623-v1.txt, 14623-v2.txt, 14623-v2.txt, 14623-v2.txt
>
>
> As Stephen suggested in parent JIRA, dedicating separate WAL for system 
> tables (other than hbase:meta) should be done in new JIRA.
> This task is to fulfill the system WAL separation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14838) SimpleRegionNormalizer does not merge empty region of a table

2015-11-19 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014829#comment-15014829
 ] 

Mikhail Antonov commented on HBASE-14838:
-

bq. In this contrived case, we had only written a small amount of data to one 
of the regions (<1MB). I've yet to investigate why the greater-than-zero amount 
of data in one region was ultimately treated as no data (confirmed via a remote 
debugger attached to the master). 

Because the code which calculates region size in the region normalizer uses 
metrics (ServerLoad/RegionLoad based), where region size (aggregated store file 
size) is represented in MB and is floored (truncated) down. If you've got, say, 
80 KB worth of data, the normalizer thinks it's zero. That's the reason why the 
minicluster tests for this feature generate more than 1 MB of data per region. 
I remember looking for some convenient method which would report the exact size 
(like, hm, Region#size()), but haven't found anything suitable.
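
A small worked example of the flooring and the merge check it feeds (constants 
and names invented for illustration):

{code}
public class NormalizerSizeExample {
  public static void main(String[] args) {
    long storefileBytes = 80 * 1024L;  // a region holding 80 KB of data
    long regionSizeMb = storefileBytes / (1024 * 1024);  // integer division floors to 0

    // With every region reporting 0 MB, the merge condition
    //   smallestRegion + smallestNeighbor > avgRegionSize
    // evaluates 0 + 0 > 0, which is false, so no merge plan is emitted.
    long smallest = regionSizeMb, neighbor = 0, avg = 0;
    System.out.println("merge? " + (smallest + neighbor > avg));  // prints: merge? false
  }
}
{code}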

> SimpleRegionNormalizer does not merge empty region of a table
> -
>
> Key: HBASE-14838
> URL: https://issues.apache.org/jira/browse/HBASE-14838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Romil Choksi
>
> SimpleRegionNormalizer does not merge empty regions of a table
> Steps to repro:
> - Create an empty table with few, say 5-6 regions without any data in any of 
> them
> - Verify hbase:meta table to verify the regions for the table or check 
> HMaster UI
> - Enable normalizer switch and normalization for this table
> - Run normalizer, by 'normalize' command from hbase shell
> - Verify the regions for table by scanning hbase:meta table or checking 
> HMaster web UI
> The empty regions are not merged on running the region normalizer. This seems 
> to be an edge case with completely empty regions since the Normalizer checks 
> for: smallestRegion (in this case 0 size) + smallestNeighborOfSmallestRegion 
> (in this case 0 size) > avg region size (in this case 0 size)
> thanks to [~elserj] for verifying this from the source code side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14819) hbase-it tests failing with OOME; permgen

2015-11-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14819:
--
Attachment: 14819.addendum.patch

Addendum that removes a flag that does not help.

> hbase-it tests failing with OOME; permgen
> -
>
> Key: HBASE-14819
> URL: https://issues.apache.org/jira/browse/HBASE-14819
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14819.addendum.patch, 14819v2.txt, Screen Shot 
> 2015-11-16 at 11.37.41 PM.png, itag.txt
>
>
> Let me up the heap used when failsafe forks.
> Here is example OOME doing ITBLL:
> {code}
> 2015-11-16 03:09:15,073 INFO  [Thread-694] actions.BatchRestartRsAction(69): 
> Starting region server:asf905.gq1.ygridcore.net
> 2015-11-16 03:09:15,099 INFO  [Thread-694] client.ConnectionUtils(104): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0 server-side HConnection 
> retries=350
> 2015-11-16 03:09:15,099 INFO  [Thread-694] ipc.SimpleRpcScheduler(128): Using 
> deadline as user call queue, count=1
> 2015-11-16 03:09:15,101 INFO  [Thread-694] ipc.RpcServer$Listener(607): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0: started 3 reader(s) 
> listening on port=36114
> 2015-11-16 03:09:15,103 INFO  [Thread-694] fs.HFileSystem(252): Added 
> intercepting call to namenode#getBlockLocations so can do block reordering 
> using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2015-11-16 03:09:15,104 INFO  [Thread-694] 
> zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:36114 
> connecting to ZooKeeper ensemble=localhost:50139
> 2015-11-16 03:09:15,117 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(554): regionserver:361140x0, 
> quorum=localhost:50139, baseZNode=/hbase Received ZooKeeper Event, type=None, 
> state=SyncConnected, path=null
> 2015-11-16 03:09:15,118 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/master
> 2015-11-16 03:09:15,119 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/running
> 2015-11-16 03:09:15,119 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(638): regionserver:36114-0x1510e2c6f1d0029 
> connected
> 2015-11-16 03:09:15,120 INFO  [RpcServer.responder] 
> ipc.RpcServer$Responder(926): RpcServer.responder: starting
> 2015-11-16 03:09:15,121 INFO  [RpcServer.listener,port=36114] 
> ipc.RpcServer$Listener(738): RpcServer.listener,port=36114: starting
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=3 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=1 queue=1
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=3 queue=1
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,761 DEBUG [RS:0;asf905:36114] 
> client.ConnectionManager$HConnectionImplementation(715): connection 
> construction failed
> java.io.IOException: java.lang.OutOfMemoryError: PermGen space
>   at 
> org.apache.hadoop.hbase.client.RegistryFactory.getRegistry(RegistryFactory.java:43)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.setupRegistry(ConnectionManager.java:886)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:692)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$2.(ConnectionUtils.java:154)
>   at 
> 

[jira] [Commented] (HBASE-14838) SimpleRegionNormalizer does not merge empty region of a table

2015-11-19 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014971#comment-15014971
 ] 

Josh Elser commented on HBASE-14838:


bq. Because the code which calculates region size in the region normalizer uses 
metrics (ServerLoad/RegionLoad based), where region size (aggregated store file 
size) is represented in MB and is floored (truncated) down

You're the best. Saved me some digging :)

bq. So how should we proceed here on this jira? Add a javadoc comment to 
specify that pre-split tables are not touched if they are empty?

I think that would be a good addition. I can add something to the class-level 
javadocs.
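
A rough sketch of the kind of class-level note being proposed (wording invented 
here, not committed text):

{code}
/**
 * Hypothetical wording for the proposed note:
 *
 * Region sizes are taken from RegionLoad metrics and floored to whole
 * megabytes, so regions holding less than 1 MB -- including every region of
 * an empty, pre-split table -- all report 0 MB. The normalizer deliberately
 * leaves such tables alone rather than undoing their pre-splitting.
 */
public class SimpleRegionNormalizerJavadocSketch {
}
{code}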

> SimpleRegionNormalizer does not merge empty region of a table
> -
>
> Key: HBASE-14838
> URL: https://issues.apache.org/jira/browse/HBASE-14838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Romil Choksi
>
> SimpleRegionNormalizer does not merge empty regions of a table
> Steps to repro:
> - Create an empty table with few, say 5-6 regions without any data in any of 
> them
> - Verify hbase:meta table to verify the regions for the table or check 
> HMaster UI
> - Enable normalizer switch and normalization for this table
> - Run normalizer, by 'normalize' command from hbase shell
> - Verify the regions for table by scanning hbase:meta table or checking 
> HMaster web UI
> The empty regions are not merged on running the region normalizer. This seems 
> to be an edge case with completely empty regions since the Normalizer checks 
> for: smallestRegion (in this case 0 size) + smallestNeighborOfSmallestRegion 
> (in this case 0 size) > avg region size (in this case 0 size)
> thanks to [~elserj] for verifying this from the source code side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14851) Add test showing how to use TTL from thrift

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014974#comment-15014974
 ] 

stack commented on HBASE-14851:
---

Patch looks fine. The below is a bit obnoxious (the comment says 30 seconds but 
the sleep is ttlTimeMs * 15 -- fix on commit):

// Sleep 30 seconds just to make 100% sure that the key value should be expired.
Thread.sleep(ttlTimeMs * 15);


I'm only reviewing your patches because I need you to review mine, HBASE-14807.

> Add test showing how to use TTL from thrift
> ---
>
> Key: HBASE-14851
> URL: https://issues.apache.org/jira/browse/HBASE-14851
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0
>
> Attachments: HBASE-14851-v1.patch, HBASE-14851.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14819) hbase-it tests failing with OOME; permgen

2015-11-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14819:
--
Summary: hbase-it tests failing with OOME; permgen  (was: hbase-it tests 
failing with OOME)

> hbase-it tests failing with OOME; permgen
> -
>
> Key: HBASE-14819
> URL: https://issues.apache.org/jira/browse/HBASE-14819
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14819v2.txt, Screen Shot 2015-11-16 at 11.37.41 PM.png, 
> itag.txt
>
>
> Let me up the heap used when failsafe forks.
> Here is example OOME doing ITBLL:
> {code}
> 2015-11-16 03:09:15,073 INFO  [Thread-694] actions.BatchRestartRsAction(69): 
> Starting region server:asf905.gq1.ygridcore.net
> 2015-11-16 03:09:15,099 INFO  [Thread-694] client.ConnectionUtils(104): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0 server-side HConnection 
> retries=350
> 2015-11-16 03:09:15,099 INFO  [Thread-694] ipc.SimpleRpcScheduler(128): Using 
> deadline as user call queue, count=1
> 2015-11-16 03:09:15,101 INFO  [Thread-694] ipc.RpcServer$Listener(607): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0: started 3 reader(s) 
> listening on port=36114
> 2015-11-16 03:09:15,103 INFO  [Thread-694] fs.HFileSystem(252): Added 
> intercepting call to namenode#getBlockLocations so can do block reordering 
> using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2015-11-16 03:09:15,104 INFO  [Thread-694] 
> zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:36114 
> connecting to ZooKeeper ensemble=localhost:50139
> 2015-11-16 03:09:15,117 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(554): regionserver:361140x0, 
> quorum=localhost:50139, baseZNode=/hbase Received ZooKeeper Event, type=None, 
> state=SyncConnected, path=null
> 2015-11-16 03:09:15,118 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/master
> 2015-11-16 03:09:15,119 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/running
> 2015-11-16 03:09:15,119 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(638): regionserver:36114-0x1510e2c6f1d0029 
> connected
> 2015-11-16 03:09:15,120 INFO  [RpcServer.responder] 
> ipc.RpcServer$Responder(926): RpcServer.responder: starting
> 2015-11-16 03:09:15,121 INFO  [RpcServer.listener,port=36114] 
> ipc.RpcServer$Listener(738): RpcServer.listener,port=36114: starting
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=3 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=1 queue=1
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=3 queue=1
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,761 DEBUG [RS:0;asf905:36114] 
> client.ConnectionManager$HConnectionImplementation(715): connection 
> construction failed
> java.io.IOException: java.lang.OutOfMemoryError: PermGen space
>   at 
> org.apache.hadoop.hbase.client.RegistryFactory.getRegistry(RegistryFactory.java:43)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.setupRegistry(ConnectionManager.java:886)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:692)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$2.(ConnectionUtils.java:154)
>   at 
> 

[jira] [Commented] (HBASE-14807) TestWALLockup is flakey

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014877#comment-15014877
 ] 

stack commented on HBASE-14807:
---

Here is what failed

Results :

Failed tests: 
  
TestReplicationKillSlaveRS.killOneSlaveRS:34->TestReplicationKillRS.loadTableAndKillRS:88
 Waited too much time for queueFailover replication. Waited 18444ms.

Tests run: 1721, Failures: 1, Errors: 0, Skipped: 19

Says:

---
Test set: org.apache.hadoop.hbase.replication.TestReplicationKillSlaveRS
---
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.524 sec - 
in org.apache.hadoop.hbase.replication.TestReplicationKillSlaveRS

Looks like this suite has two tests. The second one timed out then. Says a 
network error...


2015-11-19 23:15:27,701 WARN  
[RS:1;asf906:60179.replicationSource.asf906.gq1.ygridcore.net%2C60179%2C1447974900067,2]
 regionserver.HBaseInterClusterReplicationEndpoint(257): Can't replicate 
because of a local or network error: 
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: 
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 
asf906.gq1.ygridcore.net,41751,1447974908266 stopping

That is as far as I got. Seems unrelated. Retrying.


> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch, 14807.second.attempt.txt
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> 

[jira] [Updated] (HBASE-14807) TestWALLockup is flakey

2015-11-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14807:
--
Attachment: 14807.second.attempt.txt

> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch, 14807.second.attempt.txt, 
> 14807.second.attempt.txt
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> ... then times out after being locked up for 30 seconds.  Writes 50+MB of 
> logs while spinning.
> Reported as this:
> {code}
> ---
> Test set: org.apache.hadoop.hbase.regionserver.TestWALLockup
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 198.23 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.TestWALLockup
> testLockupWhenSyncInMiddleOfZigZagSetup(org.apache.hadoop.hbase.regionserver.TestWALLockup)
>   Time elapsed: 0.049 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 3 
> milliseconds
>   at org.apache.log4j.Category.callAppenders(Category.java:205)
>   

[jira] [Commented] (HBASE-14030) HBase Backup/Restore Phase 1

2015-11-19 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014962#comment-15014962
 ] 

Vladimir Rodionov commented on HBASE-14030:
---

[~devaraj], it was a glitch in the build system, I presume.


> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v2.patch, HBASE-14030-v3.patch, 
> HBASE-14030-v4.patch, HBASE-14030-v5.patch, HBASE-14030-v6.patch, 
> HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14839) [branch-1] Backport test categories so that patch backport is easier

2015-11-19 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014775#comment-15014775
 ] 

Enis Soztutar commented on HBASE-14839:
---

Thanks, [~ndimiduk]. 
bq. Any plan to retroactively add groups to existing tests as well? Separate 
patch?
Yep, we can do that. However, we should also backport the Maven changes so that 
the test categories are meaningful.
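
For context, a rough sketch of what is being backported: an empty marker 
interface plus JUnit's @Category annotation (the interfaces are stubbed locally 
here for illustration; in HBase they live in the testclassification package):

{code}
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interfaces carry no code; they exist only so @Category can
// reference them and the build can include/exclude tests by group.
interface SmallTests {}
interface RPCTests {}

@Category({ SmallTests.class, RPCTests.class })
public class TestSomeRpcFeature {
  @Test
  public void testRoundTrip() {
    // test body elided
  }
}
{code}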

> [branch-1] Backport test categories so that patch backport is easier
> 
>
> Key: HBASE-14839
> URL: https://issues.apache.org/jira/browse/HBASE-14839
> Project: HBase
>  Issue Type: Test
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: hbase-14839-branch-1.patch
>
>
> Test categories are in master and new unit tests are sometimes marked with 
> that particular interface ( {{RPCTests.class}} ). 
> Since we don't have the specific annotation classes in branch-1, backports 
> usually fail. We can just commit those classes to all applicable branches so 
> that committing patches is less work. 
> We can also backport the full patch for running the specific tests from maven 
> as a further issue. Feel free to take it up, if you are interested. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14838) SimpleRegionNormalizer does not merge empty region of a table

2015-11-19 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014774#comment-15014774
 ] 

Enis Soztutar commented on HBASE-14838:
---

bq. At first glance, my reaction was that the "reasonable distribution of split 
points" for no data in a table is having no split points. Same goes for small 
amounts of data. I hadn't considered the side-effect of the normalizer undo-ing 
a pre-split table
This is a good point. Undoing pre-split tables will be very bad. 

> SimpleRegionNormalizer does not merge empty region of a table
> -
>
> Key: HBASE-14838
> URL: https://issues.apache.org/jira/browse/HBASE-14838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Romil Choksi
>
> SimpleRegionNormalizer does not merge empty regions of a table
> Steps to repro:
> - Create an empty table with few, say 5-6 regions without any data in any of 
> them
> - Verify hbase:meta table to verify the regions for the table or check 
> HMaster UI
> - Enable normalizer switch and normalization for this table
> - Run normalizer, by 'normalize' command from hbase shell
> - Verify the regions for table by scanning hbase:meta table or checking 
> HMaster web UI
> The empty regions are not merged on running the region normalizer. This seems 
> to be an edge case with completely empty regions since the Normalizer checks 
> for: smallestRegion (in this case 0 size) + smallestNeighborOfSmallestRegion 
> (in this case 0 size) > avg region size (in this case 0 size)
> thanks to [~elserj] for verifying this from the source code side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14839) [branch-1] Backport test categories so that patch backport is easier

2015-11-19 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-14839:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've pushed this. Thanks for looking. 

> [branch-1] Backport test categories so that patch backport is easier
> 
>
> Key: HBASE-14839
> URL: https://issues.apache.org/jira/browse/HBASE-14839
> Project: HBase
>  Issue Type: Test
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: hbase-14839-branch-1.patch
>
>
> Test categories are in master and new unit tests are sometimes marked with 
> that particular interface ( {{RPCTests.class}} ). 
> Since we don't have the specific annotation classes in branch-1, backports 
> usually fail. We can just commit those classes to all applicable branches so 
> that committing patches is less work. 
> We can also backport the full patch for running the specific tests from maven 
> as a further issue. Feel free to take it up, if you are interested. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14807) TestWALLockup is flakey

2015-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014833#comment-15014833
 ] 

Hadoop QA commented on HBASE-14807:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12773313/14807.second.attempt.txt
  against master branch at commit f0dc556b7174c18f3174c24364cc80e32195f715.
  ATTACHMENT ID: 12773313

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16596//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16596//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16596//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16596//console

This message is automatically generated.

> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch, 14807.second.attempt.txt
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): 

[jira] [Commented] (HBASE-14838) SimpleRegionNormalizer does not merge empty region of a table

2015-11-19 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014834#comment-15014834
 ] 

Mikhail Antonov commented on HBASE-14838:
-

Yeah, merging together regions in a pre-split table sounds like a bad idea to 
me too. 

So how should we proceed here on this jira? Add a javadoc comment to specify 
that pre-split tables are not touched if they are empty?

> SimpleRegionNormalizer does not merge empty region of a table
> -
>
> Key: HBASE-14838
> URL: https://issues.apache.org/jira/browse/HBASE-14838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Romil Choksi
>
> SimpleRegionNormalizer does not merge empty regions of a table
> Steps to repro:
> - Create an empty table with few, say 5-6 regions without any data in any of 
> them
> - Verify hbase:meta table to verify the regions for the table or check 
> HMaster UI
> - Enable normalizer switch and normalization for this table
> - Run normalizer, by 'normalize' command from hbase shell
> - Verify the regions for table by scanning hbase:meta table or checking 
> HMaster web UI
> The empty regions are not merged on running the region normalizer. This seems 
> to be an edge case with completely empty regions since the Normalizer checks 
> for: smallestRegion (in this case 0 size) + smallestNeighborOfSmallestRegion 
> (in this case 0 size) > avg region size (in this case 0 size)
> thanks to [~elserj] for verifying this from the source code side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-11843) MapReduce classes shouldn't be in hbase-server

2015-11-19 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel reassigned HBASE-11843:
-

Assignee: Nate Edel

Picking this up at the suggestion of [~eclark].

> MapReduce classes shouldn't be in hbase-server
> --
>
> Key: HBASE-11843
> URL: https://issues.apache.org/jira/browse/HBASE-11843
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Keegan Witt
>Assignee: Nate Edel
>Priority: Critical
> Fix For: 2.0.0
>
>
> I'm opening this to continue the discussion I started a while back: 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201405.mbox/%3ccamuu0w_xooxeg779rrfjturau+uxeavunzxkw9dxfo-gh5y...@mail.gmail.com%3E.
> To summarize, I think the MapReduce classes used by clients (like 
> TableMapper, TableReducer, etc) don't belong in hbase-server.  This forces 
> the user to pull in a rather large artifact for a relatively small number of 
> classes.  These should either be put in hbase-client, or possibly an artifact 
> of their own (like the hbase-mapreduce idea Enis suggested).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14819) hbase-it tests failing with OOME; permgen

2015-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014886#comment-15014886
 ] 

Hudson commented on HBASE-14819:


FAILURE: Integrated in HBase-1.2-IT #293 (See 
[https://builds.apache.org/job/HBase-1.2-IT/293/])
HBASE-14819 hbase-it tests failing with OOME: permgen (stack: rev 
b7f30c11e5e9a6883f4380d13b034e4bd08138a9)
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestAcidGuarantees.java
* hbase-it/pom.xml


> hbase-it tests failing with OOME; permgen
> -
>
> Key: HBASE-14819
> URL: https://issues.apache.org/jira/browse/HBASE-14819
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14819v2.txt, Screen Shot 2015-11-16 at 11.37.41 PM.png, 
> itag.txt
>
>
> Let me up the heap used when failsafe forks.
> Here is example OOME doing ITBLL:
> {code}
> 2015-11-16 03:09:15,073 INFO  [Thread-694] actions.BatchRestartRsAction(69): 
> Starting region server:asf905.gq1.ygridcore.net
> 2015-11-16 03:09:15,099 INFO  [Thread-694] client.ConnectionUtils(104): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0 server-side HConnection 
> retries=350
> 2015-11-16 03:09:15,099 INFO  [Thread-694] ipc.SimpleRpcScheduler(128): Using 
> deadline as user call queue, count=1
> 2015-11-16 03:09:15,101 INFO  [Thread-694] ipc.RpcServer$Listener(607): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0: started 3 reader(s) 
> listening on port=36114
> 2015-11-16 03:09:15,103 INFO  [Thread-694] fs.HFileSystem(252): Added 
> intercepting call to namenode#getBlockLocations so can do block reordering 
> using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2015-11-16 03:09:15,104 INFO  [Thread-694] 
> zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:36114 
> connecting to ZooKeeper ensemble=localhost:50139
> 2015-11-16 03:09:15,117 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(554): regionserver:361140x0, 
> quorum=localhost:50139, baseZNode=/hbase Received ZooKeeper Event, type=None, 
> state=SyncConnected, path=null
> 2015-11-16 03:09:15,118 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/master
> 2015-11-16 03:09:15,119 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/running
> 2015-11-16 03:09:15,119 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(638): regionserver:36114-0x1510e2c6f1d0029 
> connected
> 2015-11-16 03:09:15,120 INFO  [RpcServer.responder] 
> ipc.RpcServer$Responder(926): RpcServer.responder: starting
> 2015-11-16 03:09:15,121 INFO  [RpcServer.listener,port=36114] 
> ipc.RpcServer$Listener(738): RpcServer.listener,port=36114: starting
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=3 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=1 queue=1
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=3 queue=1
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,761 DEBUG [RS:0;asf905:36114] 
> client.ConnectionManager$HConnectionImplementation(715): connection 
> construction failed
> java.io.IOException: java.lang.OutOfMemoryError: PermGen space
>   at 
> org.apache.hadoop.hbase.client.RegistryFactory.getRegistry(RegistryFactory.java:43)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.setupRegistry(ConnectionManager.java:886)
>   at 
> 

[jira] [Commented] (HBASE-14819) hbase-it tests failing with OOME; permgen

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014929#comment-15014929
 ] 

stack commented on HBASE-14819:
---

Hmm... adding the permgen flag doesn't seem to have helped. The hbase-it 1.2 
build of ITBLL just did the below, after passing the last bunch of times. Let me 
back out the -XX:+CMSClassUnloadingEnabled flag.

2015-11-20 00:36:05,795 ERROR [B.defaultRpcServer.handler=1,queue=0,port=46374] 
ipc.RpcServer(2212): Unexpected throwable object 
java.lang.OutOfMemoryError: PermGen space
at 
org.codehaus.jackson.map.introspect.AnnotatedClass.construct(AnnotatedClass.java:132)
at 
org.codehaus.jackson.map.introspect.BasicClassIntrospector.classWithCreators(BasicClassIntrospector.java:184)
at 
org.codehaus.jackson.map.introspect.BasicClassIntrospector.collectProperties(BasicClassIntrospector.java:157)
at 
org.codehaus.jackson.map.introspect.BasicClassIntrospector.forSerialization(BasicClassIntrospector.java:96)
at 
org.codehaus.jackson.map.introspect.BasicClassIntrospector.forSerialization(BasicClassIntrospector.java:16)
at 
org.codehaus.jackson.map.SerializationConfig.introspect(SerializationConfig.java:973)
at 
org.codehaus.jackson.map.ser.BeanSerializerFactory.createSerializer(BeanSerializerFactory.java:251)
at 
org.codehaus.jackson.map.ser.StdSerializerProvider._createUntypedSerializer(StdSerializerProvider.java:782)
at 
org.codehaus.jackson.map.ser.StdSerializerProvider._createAndCacheUntypedSerializer(StdSerializerProvider.java:735)
at 
org.codehaus.jackson.map.ser.StdSerializerProvider.findValueSerializer(StdSerializerProvider.java:344)
at 
org.codehaus.jackson.map.ser.StdSerializerProvider.findTypedValueSerializer(StdSerializerProvider.java:420)
at 
org.codehaus.jackson.map.ser.StdSerializerProvider._serializeValue(StdSerializerProvider.java:601)
at 
org.codehaus.jackson.map.ser.StdSerializerProvider.serializeValue(StdSerializerProvider.java:256)
at 
org.codehaus.jackson.map.ObjectMapper._configAndWriteValue(ObjectMapper.java:2575)
at 
org.codehaus.jackson.map.ObjectMapper.writeValueAsString(ObjectMapper.java:2097)
at 
org.apache.hadoop.hbase.ipc.RpcServer.logResponse(RpcServer.java:2268)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2194)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)



> hbase-it tests failing with OOME; permgen
> -
>
> Key: HBASE-14819
> URL: https://issues.apache.org/jira/browse/HBASE-14819
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14819v2.txt, Screen Shot 2015-11-16 at 11.37.41 PM.png, 
> itag.txt
>
>
> Let me up the heap used when failsafe forks.
> Here is example OOME doing ITBLL:
> {code}
> 2015-11-16 03:09:15,073 INFO  [Thread-694] actions.BatchRestartRsAction(69): 
> Starting region server:asf905.gq1.ygridcore.net
> 2015-11-16 03:09:15,099 INFO  [Thread-694] client.ConnectionUtils(104): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0 server-side HConnection 
> retries=350
> 2015-11-16 03:09:15,099 INFO  [Thread-694] ipc.SimpleRpcScheduler(128): Using 
> deadline as user call queue, count=1
> 2015-11-16 03:09:15,101 INFO  [Thread-694] ipc.RpcServer$Listener(607): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0: started 3 reader(s) 
> listening on port=36114
> 2015-11-16 03:09:15,103 INFO  [Thread-694] fs.HFileSystem(252): Added 
> intercepting call to namenode#getBlockLocations so can do block reordering 
> using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2015-11-16 03:09:15,104 INFO  [Thread-694] 
> zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:36114 
> connecting to ZooKeeper ensemble=localhost:50139
> 2015-11-16 03:09:15,117 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(554): regionserver:361140x0, 
> quorum=localhost:50139, baseZNode=/hbase Received ZooKeeper Event, type=None, 
> state=SyncConnected, path=null
> 2015-11-16 03:09:15,118 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/master
> 2015-11-16 03:09:15,119 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/running
> 2015-11-16 03:09:15,119 DEBUG [Thread-694-EventThread] 
> 

[jira] [Issue Comment Deleted] (HBASE-14819) hbase-it tests failing with OOME; permgen

2015-11-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14819:
--
Comment: was deleted

(was: Resolving because enough was done on this issue. In the end just the 
change for ITTestAcidGuarantees made it in.)

> hbase-it tests failing with OOME; permgen
> -
>
> Key: HBASE-14819
> URL: https://issues.apache.org/jira/browse/HBASE-14819
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14819.addendum.patch, 14819v2.txt, Screen Shot 
> 2015-11-16 at 11.37.41 PM.png, itag.txt
>
>
> Let me up the heap used when failsafe forks.
> Here is example OOME doing ITBLL:
> {code}
> 2015-11-16 03:09:15,073 INFO  [Thread-694] actions.BatchRestartRsAction(69): 
> Starting region server:asf905.gq1.ygridcore.net
> 2015-11-16 03:09:15,099 INFO  [Thread-694] client.ConnectionUtils(104): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0 server-side HConnection 
> retries=350
> 2015-11-16 03:09:15,099 INFO  [Thread-694] ipc.SimpleRpcScheduler(128): Using 
> deadline as user call queue, count=1
> 2015-11-16 03:09:15,101 INFO  [Thread-694] ipc.RpcServer$Listener(607): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0: started 3 reader(s) 
> listening on port=36114
> 2015-11-16 03:09:15,103 INFO  [Thread-694] fs.HFileSystem(252): Added 
> intercepting call to namenode#getBlockLocations so can do block reordering 
> using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2015-11-16 03:09:15,104 INFO  [Thread-694] 
> zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:36114 
> connecting to ZooKeeper ensemble=localhost:50139
> 2015-11-16 03:09:15,117 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(554): regionserver:361140x0, 
> quorum=localhost:50139, baseZNode=/hbase Received ZooKeeper Event, type=None, 
> state=SyncConnected, path=null
> 2015-11-16 03:09:15,118 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/master
> 2015-11-16 03:09:15,119 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/running
> 2015-11-16 03:09:15,119 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(638): regionserver:36114-0x1510e2c6f1d0029 
> connected
> 2015-11-16 03:09:15,120 INFO  [RpcServer.responder] 
> ipc.RpcServer$Responder(926): RpcServer.responder: starting
> 2015-11-16 03:09:15,121 INFO  [RpcServer.listener,port=36114] 
> ipc.RpcServer$Listener(738): RpcServer.listener,port=36114: starting
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=3 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=1 queue=1
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=3 queue=1
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,761 DEBUG [RS:0;asf905:36114] 
> client.ConnectionManager$HConnectionImplementation(715): connection 
> construction failed
> java.io.IOException: java.lang.OutOfMemoryError: PermGen space
>   at 
> org.apache.hadoop.hbase.client.RegistryFactory.getRegistry(RegistryFactory.java:43)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.setupRegistry(ConnectionManager.java:886)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:692)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$2.(ConnectionUtils.java:154)

[jira] [Updated] (HBASE-14819) hbase-it tests failing with OOME; permgen

2015-11-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14819:
--
Status: Patch Available  (was: Open)

Resolving because enough was done on this issue. In the end just the change for 
ITTestAcidGuarantees made it in.

> hbase-it tests failing with OOME; permgen
> -
>
> Key: HBASE-14819
> URL: https://issues.apache.org/jira/browse/HBASE-14819
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14819.addendum.patch, 14819v2.txt, Screen Shot 
> 2015-11-16 at 11.37.41 PM.png, itag.txt
>
>
> Let me up the heap used when failsafe forks.
> Here is example OOME doing ITBLL:
> {code}
> 2015-11-16 03:09:15,073 INFO  [Thread-694] actions.BatchRestartRsAction(69): 
> Starting region server:asf905.gq1.ygridcore.net
> 2015-11-16 03:09:15,099 INFO  [Thread-694] client.ConnectionUtils(104): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0 server-side HConnection 
> retries=350
> 2015-11-16 03:09:15,099 INFO  [Thread-694] ipc.SimpleRpcScheduler(128): Using 
> deadline as user call queue, count=1
> 2015-11-16 03:09:15,101 INFO  [Thread-694] ipc.RpcServer$Listener(607): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0: started 3 reader(s) 
> listening on port=36114
> 2015-11-16 03:09:15,103 INFO  [Thread-694] fs.HFileSystem(252): Added 
> intercepting call to namenode#getBlockLocations so can do block reordering 
> using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2015-11-16 03:09:15,104 INFO  [Thread-694] 
> zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:36114 
> connecting to ZooKeeper ensemble=localhost:50139
> 2015-11-16 03:09:15,117 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(554): regionserver:361140x0, 
> quorum=localhost:50139, baseZNode=/hbase Received ZooKeeper Event, type=None, 
> state=SyncConnected, path=null
> 2015-11-16 03:09:15,118 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/master
> 2015-11-16 03:09:15,119 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/running
> 2015-11-16 03:09:15,119 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(638): regionserver:36114-0x1510e2c6f1d0029 
> connected
> 2015-11-16 03:09:15,120 INFO  [RpcServer.responder] 
> ipc.RpcServer$Responder(926): RpcServer.responder: starting
> 2015-11-16 03:09:15,121 INFO  [RpcServer.listener,port=36114] 
> ipc.RpcServer$Listener(738): RpcServer.listener,port=36114: starting
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=3 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=1 queue=1
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=3 queue=1
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,761 DEBUG [RS:0;asf905:36114] 
> client.ConnectionManager$HConnectionImplementation(715): connection 
> construction failed
> java.io.IOException: java.lang.OutOfMemoryError: PermGen space
>   at 
> org.apache.hadoop.hbase.client.RegistryFactory.getRegistry(RegistryFactory.java:43)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.setupRegistry(ConnectionManager.java:886)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:692)
>   at 
> 

[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2015-11-19 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Attachment: HBASE-14030-v17.patch

V17. Removed several accidentally added files. 

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v2.patch, HBASE-14030-v3.patch, 
> HBASE-14030-v4.patch, HBASE-14030-v5.patch, HBASE-14030-v6.patch, 
> HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14819) hbase-it tests failing with OOME; permgen

2015-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014980#comment-15014980
 ] 

Hadoop QA commented on HBASE-14819:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773411/14819.addendum.patch
  against master branch at commit ea48ef86512addc3dc9bcde4b7433a3ac5881424.
  ATTACHMENT ID: 12773411

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev-support patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16600//console

This message is automatically generated.

> hbase-it tests failing with OOME; permgen
> -
>
> Key: HBASE-14819
> URL: https://issues.apache.org/jira/browse/HBASE-14819
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14819.addendum.patch, 14819v2.txt, Screen Shot 
> 2015-11-16 at 11.37.41 PM.png, itag.txt
>
>
> Let me up the heap used when failsafe forks.
> Here is example OOME doing ITBLL:
> {code}
> 2015-11-16 03:09:15,073 INFO  [Thread-694] actions.BatchRestartRsAction(69): 
> Starting region server:asf905.gq1.ygridcore.net
> 2015-11-16 03:09:15,099 INFO  [Thread-694] client.ConnectionUtils(104): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0 server-side HConnection 
> retries=350
> 2015-11-16 03:09:15,099 INFO  [Thread-694] ipc.SimpleRpcScheduler(128): Using 
> deadline as user call queue, count=1
> 2015-11-16 03:09:15,101 INFO  [Thread-694] ipc.RpcServer$Listener(607): 
> regionserver/asf905.gq1.ygridcore.net/67.195.81.149:0: started 3 reader(s) 
> listening on port=36114
> 2015-11-16 03:09:15,103 INFO  [Thread-694] fs.HFileSystem(252): Added 
> intercepting call to namenode#getBlockLocations so can do block reordering 
> using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2015-11-16 03:09:15,104 INFO  [Thread-694] 
> zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:36114 
> connecting to ZooKeeper ensemble=localhost:50139
> 2015-11-16 03:09:15,117 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(554): regionserver:361140x0, 
> quorum=localhost:50139, baseZNode=/hbase Received ZooKeeper Event, type=None, 
> state=SyncConnected, path=null
> 2015-11-16 03:09:15,118 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/master
> 2015-11-16 03:09:15,119 DEBUG [Thread-694] zookeeper.ZKUtil(492): 
> regionserver:361140x0, quorum=localhost:50139, baseZNode=/hbase Set watcher 
> on existing znode=/hbase/running
> 2015-11-16 03:09:15,119 DEBUG [Thread-694-EventThread] 
> zookeeper.ZooKeeperWatcher(638): regionserver:36114-0x1510e2c6f1d0029 
> connected
> 2015-11-16 03:09:15,120 INFO  [RpcServer.responder] 
> ipc.RpcServer$Responder(926): RpcServer.responder: starting
> 2015-11-16 03:09:15,121 INFO  [RpcServer.listener,port=36114] 
> ipc.RpcServer$Listener(738): RpcServer.listener,port=36114: starting
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,121 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=3 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): B.default 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,122 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=1 queue=1
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=2 queue=0
> 2015-11-16 03:09:15,123 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=3 queue=1
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Priority 
> Start Handler index=4 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=0 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=1 queue=0
> 2015-11-16 03:09:15,124 DEBUG [Thread-694] ipc.RpcExecutor(115): Replication 
> Start Handler index=2 queue=0
> 2015-11-16 

[jira] [Commented] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014322#comment-15014322
 ] 

stack commented on HBASE-14777:
---

This history is a bit unreliable but it does 'show' that the two times this 
test ran, it failed:

https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.3/jdk=latest1.7,label=Hadoop/lastCompletedBuild/testReport/TEST-org.apache.hadoop.hbase.replication.TestReplicationEndpoint/xml/_init_/history/

> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> It's happening due to the incorrect removal of entries from the replication 
> entries list. 
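
For anyone skimming, the failure mode in the description is the classic 
remove-while-indexing pattern. A hedged, simplified sketch (not the actual 
replicate() code) that reproduces the exact "Index: 1, Size: 1" error:

{code}
import java.util.ArrayList;
import java.util.List;

public class RemoveWhileIndexing {
  public static void main(String[] args) {
    List<String> entries = new ArrayList<String>();
    entries.add("edit-1");
    entries.add("edit-2");

    // Removing an entry shifts the remaining elements left, so an index
    // computed before the removal can point past the new end of the list.
    entries.remove(0); // size is now 1
    entries.remove(1); // throws IndexOutOfBoundsException: Index: 1, Size: 1
  }
}
{code}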



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14848) some hbase-* module don't have test/resources/log4j and test logs are empty

2015-11-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014365#comment-15014365
 ] 

Andrew Purtell commented on HBASE-14848:


+1

> some hbase-* module don't have test/resources/log4j and test logs are empty
> ---
>
> Key: HBASE-14848
> URL: https://issues.apache.org/jira/browse/HBASE-14848
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.3.0
>Reporter: Matteo Bertozzi
> Attachments: hbase-procedure-resources.patch
>
>
> some of the hbase sub modules (e.g. hbase-procedure, hbase-prefix-tree, ...) 
> don't have the test/resources/log4j.properties file, which results in unit 
> tests not printing any information.
> adding the log4j seems to work, but in the past the debug output was visible 
> even without the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14623) Implement dedicated WAL for system tables

2015-11-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014325#comment-15014325
 ] 

Ted Yu commented on HBASE-14623:


The failed test might be related to HBASE-14777

> Implement dedicated WAL for system tables
> -
>
> Key: HBASE-14623
> URL: https://issues.apache.org/jira/browse/HBASE-14623
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 14623-v1.txt, 14623-v2.txt, 14623-v2.txt
>
>
> As Stephen suggested in the parent JIRA, dedicating a separate WAL to system 
> tables (other than hbase:meta) should be done in a new JIRA.
> This task is to implement the system WAL separation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14795) Enhance the spark-hbase scan operations

2015-11-19 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014342#comment-15014342
 ] 

Ted Malaska commented on HBASE-14795:
-

Hey Zhan,

What is your ETA on this JIRA?  I just opened HBASE-14849 and I wanted to know 
if I should do that now or wait until this jira is done, or if you want to 
include HBASE-14849 in this jira.

Let me know.

> Enhance the spark-hbase scan operations
> ---
>
> Key: HBASE-14795
> URL: https://issues.apache.org/jira/browse/HBASE-14795
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
>
> This is a sub-jira of HBASE-14789.  This jira focuses on replacing 
> TableInputFormat with a more custom scan implementation that will make the 
> following use case more effective.
> Use case:
> You have multiple scan ranges on a single table within a single query.  
> TableInputFormat will scan the outer range of the scan start and end range, 
> where this implementation can be more pointed.
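
To make the use case concrete, here is a hedged sketch of the difference; row 
ranges and the helper class are illustrative only, not code from the patch:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

final class MultiRangeScanSketch {
  static List<Scan> build() {
    // What TableInputFormat effectively does today: one envelope scan over
    // the outer range, which also reads everything between the ranges.
    Scan envelope = new Scan(Bytes.toBytes("a"), Bytes.toBytes("z"));

    // What this jira is after: one pointed scan per requested range.
    byte[][] starts = { Bytes.toBytes("a"), Bytes.toBytes("m"), Bytes.toBytes("x") };
    byte[][] stops  = { Bytes.toBytes("c"), Bytes.toBytes("n"), Bytes.toBytes("z") };
    List<Scan> pointed = new ArrayList<Scan>();
    for (int i = 0; i < starts.length; i++) {
      pointed.add(new Scan(starts[i], stops[i]));
    }
    return pointed;
  }
}
{code}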



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-19 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-14777:
--
Attachment: HBASE-14777-addendum.patch

Addendum: patch so that the test goes directly through HRegion#put instead of 
calling HTable#put.

> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777-addendum.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> It's happening due to the incorrect removal of entries from the replication 
> entries list. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14807) TestWALLockup is flakey

2015-11-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14807:
--
Status: Patch Available  (was: Reopened)

> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch, 14807.second.attempt.txt
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> ... then times out after being locked up for 30 seconds.  Writes 50+MB of 
> logs while spinning.
> Reported as this:
> {code}
> ---
> Test set: org.apache.hadoop.hbase.regionserver.TestWALLockup
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 198.23 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.TestWALLockup
> testLockupWhenSyncInMiddleOfZigZagSetup(org.apache.hadoop.hbase.regionserver.TestWALLockup)
>   Time elapsed: 0.049 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 3 
> milliseconds
>   at org.apache.log4j.Category.callAppenders(Category.java:205)
>   at 

[jira] [Commented] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2015-11-19 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014046#comment-15014046
 ] 

Ashish Singhi commented on HBASE-11393:
---

[~chenheng],
{code}
  /**
   *  Convert TableCFs Object to String.
   *  Output String Format: ns1.table1:cf1,cf2;ns2.table2:cfA,cfB;table3
   * */
{code}
The tableCfs string format which you have considered will go wrong when the 
table name itself has a '.' in it.
I will resume my review once that is addressed.

Thanks.
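
To illustrate the ambiguity (this is a hedged sketch, not the patch code): with 
'.' as the namespace separator, a table legally named "my.table" in the default 
namespace parses the same as table "table" in namespace "my".

{code}
public class TableCfAmbiguity {
  public static void main(String[] args) {
    String tableCf = "my.table:cf1";
    // Naive split on the first '.' and ':' -- both readings look valid.
    String namespace = tableCf.substring(0, tableCf.indexOf('.'));  // "my"?
    String table = tableCf.substring(tableCf.indexOf('.') + 1,
        tableCf.indexOf(':'));                                      // "table"?
    System.out.println(namespace + " / " + table);
    // The string form cannot round-trip safely, which is part of why this
    // issue moves the mapping to a PB object.
  }
}
{code}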

> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v2.patch, HBASE-11393_v3.patch, 
> HBASE-11393_v4.patch, HBASE-11393_v5.patch, HBASE-11393_v6.patch, 
> HBASE-11393_v7.patch, HBASE-11393_v8.patch, HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in the format  
> "table1:cf1,cf2;table2:cfA,cfB" in zookeeper for the table-cf to replication 
> peer mapping. 
> This results in ugly parsing code. We should do this as a PB object. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14832) Ensure write paths work with ByteBufferedCells in case of compaction

2015-11-19 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014070#comment-15014070
 ] 

ramkrishna.s.vasudevan commented on HBASE-14832:


bq.The patch will work though we've not converted PrefixTree? PrefixTree will 
run really slow if it is doing copies all the time which will be a big surprise.
On the read path there is no problem. Only when we write the PrefixTree cell do 
we need to ensure that we handle offheap cells also. Currently, in the 
compaction case, there will be a copy while writing back to the new file.
bq.Ditto for TagCompression. It will work after the patch goes in, it'll just 
be slow because of all the copies?
Again, this is applicable only during writes (during compaction), not during 
reads.
bq.Can you explain this more or point to a place in the code that shows what 
you are seeing?
My point was that when DBEs are enabled, the cell we get to write during 
compaction will be a DBE cell, i.e. the decoded cell based on the DBE algo, and 
it will be written back into the new file after encoding. 
In our current trunk, these DBE decoded cells are of two types (onheap DBE cell 
and offheap DBE cell). In the offheap cell only the value part refers to the 
offheap hfileblock coming out of the bucket cache. All other components are 
onheap byte[] only, since they need to be decoded. 
bq.Was ByteRange an experiment that now should be purged from the code base? I 
see it used by PrefixTree. Is that the only place that uses it? I see support 
in MemStore. Should we undo it? If deprecated, should we tidy it up some
I think PrefixTree needs it when the cells are KV based. Will check once if we 
can purge it.
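
For readers following along, a hedged sketch of the copy being discussed (the 
helper class is illustrative, not the actual writer code):

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;

final class CompactionCopySketch {
  // CellUtil.cloneValue copies the cell value into a fresh onheap byte[].
  // This is the copy compaction pays today when the cell's value lives in
  // an offheap BucketCache block and the write path only accepts byte[].
  // A BBCell-aware write path could hand the backing ByteBuffer straight
  // to the encoder and skip this copy.
  static byte[] copyValueForWrite(Cell cell) {
    return CellUtil.cloneValue(cell);
  }
}
{code}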



> Ensure write paths work with ByteBufferedCells in case of compaction
> 
>
> Key: HBASE-14832
> URL: https://issues.apache.org/jira/browse/HBASE-14832
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14832.patch
>
>
> Currently any cell coming out of the offheap BucketCache during compaction is 
> copied using the getXXXArray() API, since the write path does not work with 
> BBCells. 
> This JIRA is aimed at changing the write path to support BBCells so that this 
> copy is avoided.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14848) some hbase-* module don't have test/resources/log4j and test logs are empty

2015-11-19 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-14848:
---

 Summary: some hbase-* module don't have test/resources/log4j and 
test logs are empty
 Key: HBASE-14848
 URL: https://issues.apache.org/jira/browse/HBASE-14848
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 1.1.2, 2.0.0, 1.2.0, 1.3.0
Reporter: Matteo Bertozzi


some of the hbase sub modules (e.g. hbase-procedure, hbase-prefix-tree, ...) 
don't have the test/resources/log4j.properties file, which results in unit tests 
not printing any information.

adding the log4j seems to work, but in the past the debug output was visible 
even without the file.
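
For illustration, a minimal sketch of the kind of src/test/resources/log4j.properties 
such a patch would add; the appender choice and pattern here are illustrative, 
not the actual patch contents:

{noformat}
log4j.rootLogger=INFO,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
{noformat}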



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14848) some hbase-* module don't have test/resources/log4j and test logs are empty

2015-11-19 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-14848:

Attachment: hbase-procedure-resources.patch

> some hbase-* module don't have test/resources/log4j and test logs are empty
> ---
>
> Key: HBASE-14848
> URL: https://issues.apache.org/jira/browse/HBASE-14848
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.3.0
>Reporter: Matteo Bertozzi
> Attachments: hbase-procedure-resources.patch
>
>
> some of the hbase sub modules (e.g. hbase-procedure, hbase-prefix-tree, ...) 
> don't have the test/resources/log4j.properties file, which results in unit 
> tests not printing any information.
> adding the log4j seems to work, but in the past the debug output was visible 
> even without the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14167) hbase-spark integration tests do not respect -DskipITs

2015-11-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014413#comment-15014413
 ] 

Andrew Purtell commented on HBASE-14167:


This doesn't quite work as described; you have to use 
{{-DskipIntegrationTests}}, not {{-DskipITs}}. Otherwise, lgtm. Since the only 
tests run by hbase-spark are integration tests, {{-DskipITs}} ends up 
equivalent to {{-DskipTests}}.

{noformat}
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ hbase-spark ---
[INFO] Building jar: 
/Users/apurtell/src/hbase/hbase-spark/target/hbase-spark-2.0.0-SNAPSHOT.jar
[INFO] 
[INFO] --- scalatest-maven-plugin:1.0:test (integration-test) @ hbase-spark ---
Discovery starting.
Discovery completed in 1 second, 169 milliseconds.
Run starting. Expected test count is: 46
...
{noformat}
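
For the record, a hedged example invocation based on the flag named above; the 
exact property semantics are assumed from this comment, not verified against 
the pom:

{noformat}
mvn clean verify -DskipIntegrationTests
{noformat}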


> hbase-spark integration tests do not respect -DskipITs
> --
>
> Key: HBASE-14167
> URL: https://issues.apache.org/jira/browse/HBASE-14167
> Project: HBase
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Priority: Minor
> Attachments: HBASE-14167.11.patch
>
>
> When running a build with {{mvn ... -DskipITs}}, the hbase-spark module's 
> integration tests do not respect the flag and run anyway. Fix. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014415#comment-15014415
 ] 

stack commented on HBASE-14777:
---

Thank you for the prompt action, [~ashu210890]; I applied your addendum. Let's 
see how it does on any build posted from now on. Will leave the issue open in 
the meantime. Thanks.

> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777-addendum.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> It's happening due to the incorrect removal of entries from the replication 
> entries list. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reopened HBASE-14777:
---

Since this patch went in, replication-related tests are failing in the 1.7 build:

See https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.3/379/

and 


https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.3/380/jdk=latest1.7,label=Hadoop/testReport/

If you click on the tests, it does not show you anything useful (unfortunately; 
TODO), but if you go to the artifacts and dig down to find these tests you see:


---
Test set: org.apache.hadoop.hbase.replication.TestReplicationEndpoint
---
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 283.994 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.replication.TestReplicationEndpoint
testInterClusterReplication(org.apache.hadoop.hbase.replication.TestReplicationEndpoint)
  Time elapsed: 120.72 sec  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 12 
milliseconds
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1360)
at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService.submit(ResultBoundedCompletionService.java:146)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.addCallsForCurrentReplica(ScannerCallableWithReplicas.java:279)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:166)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at 
org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:212)
at 
org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:186)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1279)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1185)
at 
org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
at 
org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
at 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238)
at 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
at 
org.apache.hadoop.hbase.replication.TestReplicationEndpoint.doPut(TestReplicationEndpoint.java:265)
at 
org.apache.hadoop.hbase.replication.TestReplicationEndpoint.doPut(TestReplicationEndpoint.java:257)
at 
org.apache.hadoop.hbase.replication.TestReplicationEndpoint.testInterClusterReplication(TestReplicationEndpoint.java:202)

Here is output: 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.3/380/jdk=latest1.7,label=Hadoop/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.replication.TestReplicationEndpoint-output.txt

Please take a look. I've started a new build of branch-1.3 in the meantime to 
try and get more data. Resolve if you don't think it is this patch.




> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> 

[jira] [Created] (HBASE-14849) Add option to set block cache to false on SparkSQL executions

2015-11-19 Thread Ted Malaska (JIRA)
Ted Malaska created HBASE-14849:
---

 Summary: Add option to set block cache to false on SparkSQL 
executions
 Key: HBASE-14849
 URL: https://issues.apache.org/jira/browse/HBASE-14849
 Project: HBase
  Issue Type: New Feature
Reporter: Ted Malaska


I was working at a client with a ported-down version of the Spark module for 
HBase and realized we didn't add an option to turn off the block cache for the 
scans.  

This is an easy but very impactful fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-19 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014377#comment-15014377
 ] 

Ashu Pachauri commented on HBASE-14777:
---

It is a side effect of what I am doing. For the test, I am using the doPut call 
that was already there in TestReplicationEndpoint, and using it quite a few 
times in succession. From the logs, it seems that it is getting stuck and is 
very slow because it goes through the generic HTable.put call.

Since, in my test, I have access to the HRegion object, I am going to modify 
the test to directly do a region.put, which will be way faster.
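
A hedged sketch of that test change (family, qualifier and helper names are 
illustrative, not the actual test code):

{code}
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.util.Bytes;

final class FastTestPut {
  // Writing through the HRegion handle skips client-side location lookups,
  // retries and buffered flushes, so it is much faster inside a unit test.
  static void doPut(HRegion region, byte[] row) throws java.io.IOException {
    Put put = new Put(row);
    put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    region.put(put); // instead of table.put(put) via the full client stack
  }
}
{code}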

> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> It's happening due to the incorrect removal of entries from the replication 
> entries list. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14799) Commons-collections object deserialization remote command execution vulnerability

2015-11-19 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014480#comment-15014480
 ] 

Jonathan Hsieh commented on HBASE-14799:


Ugh, checked one more thing -- the Base64 class is public stable.  We should 
deprecate the Base64#decodeToObject(...) and Base64#encodeObject(...) methods 
in the 1.x's instead of removing them.  We didn't have the rules in 0.98 and 
0.94, so I'm ambivalent about what happens there.

> Commons-collections object deserialization remote command execution 
> vulnerability 
> --
>
> Key: HBASE-14799
> URL: https://issues.apache.org/jira/browse/HBASE-14799
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 0.94.28, 0.98.17
>
> Attachments: HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, 
> HBASE-14799-0.94.patch, HBASE-14799-0.94.patch, HBASE-14799-0.98.patch, 
> HBASE-14799-0.98.patch, HBASE-14799.patch
>
>
> Read: 
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
> TL;DR: If you have commons-collections on your classpath and accept and 
> process Java object serialization data, then you probably have an exploitable 
> remote command execution vulnerability. 
> 0.94 and earlier HBase releases are vulnerable because we might read in and 
> rehydrate serialized Java objects out of RPC packet data in 
> HbaseObjectWritable using ObjectInputStream#readObject (see 
> https://hbase.apache.org/0.94/xref/org/apache/hadoop/hbase/io/HbaseObjectWritable.html#714)
>  and we have commons-collections on the classpath on the server.
> 0.98 also carries some limited exposure to this problem through inclusion of 
> backwards compatible deserialization code in 
> HbaseObjectWritableFor96Migration. This is used by the 0.94-to-0.98 migration 
> utility, and by the AccessController when reading permissions from the ACL 
> table serialized in legacy format by 0.94. Unprivileged users cannot run the 
> tool nor access the ACL table.
> Unprivileged users can however attack a 0.94 installation. An attacker might 
> be able to use the method discussed on that blog post to capture valid HBase 
> RPC payloads for 0.94 and prior versions, rewrite them to embed an exploit, 
> and replay them to trigger a remote command execution with the privileges of 
> the account under which the HBase RegionServer daemon is running.
> We need to make a patch release of 0.94 that changes HbaseObjectWritable to 
> disallow processing of random Java object serializations. This will be a 
> compatibility break that might affect old style coprocessors, which quite 
> possibly may rely on this catch-all in HbaseObjectWritable for custom object 
> (de)serialization. We can introduce a new configuration setting, 
> "hbase.allow.legacy.object.serialization", defaulting to false.
> To be thorough, we can also use the new configuration setting  
> "hbase.allow.legacy.object.serialization" (defaulting to false) in 0.98 to 
> prevent the AccessController from falling back to the vulnerable legacy code. 
> This turns out to not affect the ability to migrate permissions because 
> TablePermission implements Writable, which is safe, not Serializable. 
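
As a rough illustration of the proposed guard (a sketch only; the real change 
would land inside HbaseObjectWritable, and the method shown here is 
hypothetical):

{code}
import java.io.IOException;
import java.io.ObjectInputStream;

import org.apache.hadoop.conf.Configuration;

public final class LegacyDeserializationGuard {
  // Refuse to rehydrate arbitrary Java objects unless the operator has
  // explicitly opted in via the proposed configuration setting.
  public static Object readLegacyObject(Configuration conf,
      ObjectInputStream in) throws IOException, ClassNotFoundException {
    if (!conf.getBoolean("hbase.allow.legacy.object.serialization", false)) {
      throw new IOException("Java object deserialization is disabled; set "
          + "hbase.allow.legacy.object.serialization=true only if all "
          + "clients are trusted (see HBASE-14799)");
    }
    return in.readObject();
  }
}
{code}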



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14712) MasterProcWALs never clean up

2015-11-19 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-14712:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> MasterProcWALs never clean up
> -
>
> Key: HBASE-14712
> URL: https://issues.apache.org/jira/browse/HBASE-14712
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4
>
> Attachments: HBASE-14712-v0.patch, HBASE-14712-v1.patch, state.tar.gz
>
>
> The MasterProcWALs directory grows essentially unbounded. Because of that, 
> when a master failover happens the NN is flooded with connections and 
> everything grinds to a halt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14852) Update build env

2015-11-19 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014664#comment-15014664
 ] 

Elliott Clark commented on HBASE-14852:
---

I'm going to be using https://hub.docker.com/r/pjameson/buck-folly-watchman/ as 
the base.

> Update build env
> 
>
> Key: HBASE-14852
> URL: https://issues.apache.org/jira/browse/HBASE-14852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14832) Ensure write paths work with ByteBufferedCells in case of compaction

2015-11-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014434#comment-15014434
 ] 

stack commented on HBASE-14832:
---

bq. Currently in the compaction case there will be a copy happening while 
writing back to the new file.

Can do in a follow up.

bq.  In the offheap cell only the value part refers to the offheap 
hfileblock coming out of the bucket cache. All other components are onheap 
byte[] only since they need to be decoded.

Ok. Seems like related work but can do in follow-on.

Yeah, would be cool if we could purge but you probably can't. It is a 
PrefixTree intrinsic. The best we could do is move it back into the prefixtree 
module I'd say.

> Ensure write paths work with ByteBufferedCells in case of compaction
> 
>
> Key: HBASE-14832
> URL: https://issues.apache.org/jira/browse/HBASE-14832
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14832.patch
>
>
> Currently any cell coming out of the offheap BucketCache during compaction 
> is copied using the getXXXArray() API, since the write path does not work 
> with BBCells. This JIRA is aimed at changing the write path to support 
> BBCells so that this copy is avoided.
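
To make that copy concrete, a hedged sketch of the two paths (the 
ByteBufferedCell accessor name is an assumption from the 2.0.0 development 
line, not taken from the patch):

{code}
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.ByteBufferedCell;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;

public class CompactionValueAccessSketch {
  // Today's write path: materialize the value onheap via the clone/getXXXArray
  // APIs, copying the bytes out of the offheap HFile block.
  static byte[] copyingPath(Cell cell) {
    return CellUtil.cloneValue(cell); // allocates a byte[] and copies
  }

  // With BBCell support the writer could read through the backing ByteBuffer
  // without any copy.
  static ByteBuffer zeroCopyPath(ByteBufferedCell cell) {
    return cell.getValueByteBuffer(); // still backed by the offheap block
  }
}
{code}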



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14030) HBase Backup/Restore Phase 1

2015-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014515#comment-15014515
 ] 

Hadoop QA commented on HBASE-14030:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12772890/HBASE-14030-v16.patch
  against master branch at commit f0dc556b7174c18f3174c24364cc80e32195f715.
  ATTACHMENT ID: 12772890

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 31 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16597//console

This message is automatically generated.

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v16.patch, HBASE-14030-v2.patch, HBASE-14030-v3.patch, 
> HBASE-14030-v4.patch, HBASE-14030-v5.patch, HBASE-14030-v6.patch, 
> HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14850) C++ client implementation

2015-11-19 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14850:
-

 Summary: C++ client implementation
 Key: HBASE-14850
 URL: https://issues.apache.org/jira/browse/HBASE-14850
 Project: HBase
  Issue Type: Task
Reporter: Elliott Clark






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14855) Connect to regionserver

2015-11-19 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14855:
-

 Summary: Connect to regionserver
 Key: HBASE-14855
 URL: https://issues.apache.org/jira/browse/HBASE-14855
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14854) Read meta location from zk

2015-11-19 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14854:
-

 Summary: Read meta location from zk
 Key: HBASE-14854
 URL: https://issues.apache.org/jira/browse/HBASE-14854
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

