[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281231#comment-15281231
 ] 

stack commented on HBASE-15813:
---

[~Apache9] I was suggesting we get a +1 from [~busbey] before commit... but we 
can get one after. I'll bug him. Thanks.

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15804) Some links in documentation are 404

2016-05-11 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-15804:
--
Attachment: HBASE-15804.patch

> Some links in documentation are 404
> ---
>
> Key: HBASE-15804
> URL: https://issues.apache.org/jira/browse/HBASE-15804
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Heng Chen
> Attachments: HBASE-15804.patch
>
>
> http://hbase.apache.org/book.html#security
> The link to {{Understanding User Authentication and Authorization in Apache 
> HBase}} returns a 404





[jira] [Updated] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15813:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master.
Thanks all for reviewing.

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.





[jira] [Commented] (HBASE-15454) Archive store files older than max age

2016-05-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281209#comment-15281209
 ] 

Duo Zhang commented on HBASE-15454:
---

I really need this for our micloud service. Since we cannot delete any old 
data and the data is huge, having only one big file is very useful when doing 
RAID and when checking consistency between master and peer clusters.

But in general, I think the current DT compaction is enough for most cases.

Thanks.



> Archive store files older than max age
> --
>
> Key: HBASE-15454
> URL: https://issues.apache.org/jira/browse/HBASE-15454
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.18, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15454-v1.patch, HBASE-15454-v2.patch, 
> HBASE-15454-v3.patch, HBASE-15454-v4.patch, HBASE-15454-v5.patch, 
> HBASE-15454-v6.patch, HBASE-15454-v7.patch, HBASE-15454.patch
>
>
> In date tiered compaction, store files older than max age are never touched 
> by minor compactions. Here we introduce a 'freeze window' operation, which 
> does the following:
> 1. Find all store files that contain cells whose timestamps are in the given 
> window.
> 2. Compact all these files and output one file for each window that these 
> files cover.
> After the compaction, we will have only one file in the given window, and 
> all cells whose timestamps are in the given window are in that file. And if 
> you do not write new cells with an older timestamp in this window, the file 
> will never be changed. This makes it easier to do erasure coding on the 
> frozen file to reduce redundancy. It also makes it possible to check 
> consistency between master and peer clusters incrementally.
> And why the word 'freeze'? Because there is already an 'HFileArchiver' 
> class, and I want to use a different word to prevent confusion.
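The window-selection step described above can be sketched as follows. This is 
a minimal illustration only; the StoreFile shape and method names are 
assumptions, not the real HBase API.

```java
import java.util.ArrayList;
import java.util.List;

public class FreezeWindowSketch {
    // Minimal stand-in for a store file carrying a timestamp range.
    record StoreFile(String name, long minTs, long maxTs) {}

    // Step 1: find all store files whose timestamp range overlaps [start, end).
    static List<StoreFile> filesInWindow(List<StoreFile> files, long start, long end) {
        List<StoreFile> selected = new ArrayList<>();
        for (StoreFile f : files) {
            if (f.minTs() < end && f.maxTs() >= start) {
                selected.add(f);
            }
        }
        return selected;
    }
}
```

Step 2 would then compact exactly this selection, emitting one output file per 
covered window, after which the window's file is never rewritten.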





[jira] [Updated] (HBASE-15808) Reduce potential bulk load intermediate space usage and waste

2016-05-11 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15808:
-
Attachment: HBASE-15808-v3.patch

v3 with a test case.

> Reduce potential bulk load intermediate space usage and waste
> -
>
> Key: HBASE-15808
> URL: https://issues.apache.org/jira/browse/HBASE-15808
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15808-v2.patch, HBASE-15808-v3.patch, 
> HBASE-15808.patch
>
>
> If the bulk load input files do not match the existing region boundaries, 
> the files will be split.
> In the unfortunate cases where the files need to be split multiple times, 
> the process can consume unnecessary space and can even run out of space.
> Here is an over-simplified example.
> Original size of input files:
>   consumed space: size --> 300GB
> After a round of splits:
>   consumed space: size + tmpspace1 --> 300GB + 300GB
> After another round of splits:
>   consumed space: size + tmpspace1 + tmpspace2 --> 300GB + 300GB + 300GB
> ..
> Currently we don't do any cleanup in the process. At least all the 
> intermediate tmp space (not the last round's) could be deleted as we go.
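The growth above can be stated as a toy formula, assuming each split round 
rewrites the full input and nothing is cleaned up (an illustration of the 
arithmetic in the example, not the actual bulk-load code):

```java
public class BulkLoadSpaceSketch {
    // Total space consumed after `rounds` split rounds for an input of
    // `sizeGb`: the original plus one full-size tmp copy per round.
    static long consumedAfter(long sizeGb, int rounds) {
        return sizeGb * (rounds + 1);
    }

    public static void main(String[] args) {
        System.out.println(consumedAfter(300, 0)); // original only: 300
        System.out.println(consumedAfter(300, 1)); // + tmpspace1: 600
        System.out.println(consumedAfter(300, 2)); // + tmpspace2: 900
    }
}
```

Deleting each intermediate tmp space as soon as the next round finishes would 
cap the overhead at one extra copy instead of one per round.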





[jira] [Commented] (HBASE-15454) Archive store files older than max age

2016-05-11 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281196#comment-15281196
 ] 

Mikhail Antonov commented on HBASE-15454:
-

Thanks Duo. In general, how much less convenient/useful/fast are DT 
compactions for you without this?

> Archive store files older than max age
> --
>
> Key: HBASE-15454
> URL: https://issues.apache.org/jira/browse/HBASE-15454
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.18, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15454-v1.patch, HBASE-15454-v2.patch, 
> HBASE-15454-v3.patch, HBASE-15454-v4.patch, HBASE-15454-v5.patch, 
> HBASE-15454-v6.patch, HBASE-15454-v7.patch, HBASE-15454.patch
>
>
> In date tiered compaction, store files older than max age are never touched 
> by minor compactions. Here we introduce a 'freeze window' operation, which 
> does the following:
> 1. Find all store files that contain cells whose timestamps are in the given 
> window.
> 2. Compact all these files and output one file for each window that these 
> files cover.
> After the compaction, we will have only one file in the given window, and 
> all cells whose timestamps are in the given window are in that file. And if 
> you do not write new cells with an older timestamp in this window, the file 
> will never be changed. This makes it easier to do erasure coding on the 
> frozen file to reduce redundancy. It also makes it possible to check 
> consistency between master and peer clusters incrementally.
> And why the word 'freeze'? Because there is already an 'HFileArchiver' 
> class, and I want to use a different word to prevent confusion.





[jira] [Commented] (HBASE-15615) Wrong sleep time when RegionServerCallable need retry

2016-05-11 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281193#comment-15281193
 ] 

Mikhail Antonov commented on HBASE-15615:
-

Let me take a look

> Wrong sleep time when RegionServerCallable need retry
> -
>
> Key: HBASE-15615
> URL: https://issues.apache.org/jira/browse/HBASE-15615
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0, 0.98.19
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 1.3.0
>
> Attachments: HBASE-15615-branch-0.98.patch, 
> HBASE-15615-branch-1.0-v2.patch, HBASE-15615-branch-1.1-v2.patch, 
> HBASE-15615-branch-1.1-v2.patch, HBASE-15615-branch-1.patch, 
> HBASE-15615-v1.patch, HBASE-15615-v1.patch, HBASE-15615-v2.patch, 
> HBASE-15615-v2.patch, HBASE-15615-v3.patch, HBASE-15615-v4.patch, 
> HBASE-15615.patch
>
>
> RpcRetryingCallerImpl gets the pause time via expectedSleep = 
> callable.sleep(pause, tries + 1), and RegionServerCallable in turn gets it 
> via sleep = ConnectionUtils.getPauseTime(pause, tries + 1). So tries is 
> bumped up twice, and the pause time is 3 * hbase.client.pause when tries 
> is 0.
> RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200}
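The double bump can be illustrated with a small sketch. This is a toy 
reconstruction from the description above; the real 
RpcRetryingCallerImpl/ConnectionUtils code differs in detail.

```java
public class RetryBackoffSketch {
    static final long[] RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200};

    // Mirrors ConnectionUtils.getPauseTime: scale the base pause by the table.
    static long getPauseTime(long pause, int tries) {
        int index = Math.min(tries, RETRY_BACKOFF.length - 1);
        return pause * RETRY_BACKOFF[index];
    }

    // The callable adds 1 to tries before looking up the backoff table...
    static long callableSleep(long pause, int tries) {
        return getPauseTime(pause, tries + 1);
    }

    public static void main(String[] args) {
        long pause = 100; // hbase.client.pause, in ms
        int tries = 0;    // first attempt
        // ...and the caller adds 1 again before delegating, so the effective
        // index is tries + 2: the very first sleep is RETRY_BACKOFF[2] = 3x.
        long expectedSleep = callableSleep(pause, tries + 1);
        System.out.println(expectedSleep); // 300, i.e. 3 * hbase.client.pause
    }
}
```

With a single bump the first sleep would be RETRY_BACKOFF[1] = 2x the base 
pause; the stacked "+ 1" is what produces the 3x observed in the report.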





[jira] [Commented] (HBASE-14879) maven archetype: mapreduce application

2016-05-11 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281183#comment-15281183
 ] 

Daniel Vimont commented on HBASE-14879:
---

What we are experiencing seems to be the standard behavior of the 
HBaseTestingUtility.

The problem can be replicated through execution of a single line of code:
{code}
new HBaseTestingUtility().startMiniCluster();
{code}
This results in the following output, regarding the 'hbase:namespace' 
metatable, which the miniCluster is apparently in the process of creating when 
this message is issued:
{code}
2016-05-12 12:45:33,413 ERROR [ProcedureExecutor-0] master.TableStateManager: 
Unable to get table hbase:namespace state
org.apache.hadoop.hbase.TableNotFoundException: hbase:namespace
{code}

To more fully replicate what WE are experiencing (a series of those same ERROR 
messages when a user table is created), the following three lines of code may 
be executed:
{code}
final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
TEST_UTIL.startMiniCluster();
TEST_UTIL.createTable(TableName.valueOf("table1"), Bytes.toBytes("cf1"));
{code}
This results in the following output:
{code}
2016-05-12 12:56:54,052 ERROR [ProcedureExecutor-0] master.TableStateManager: 
Unable to get table hbase:namespace state
org.apache.hadoop.hbase.TableNotFoundException: hbase:namespace
...
2016-05-12 12:56:56,496 ERROR [ProcedureExecutor-3] master.TableStateManager: 
Unable to get table table1 state
org.apache.hadoop.hbase.TableNotFoundException: table1
...
2016-05-12 12:56:56,550 ERROR 
[B.defaultRpcServer.handler=11,queue=2,port=38763] master.TableStateManager: 
Unable to get table table1 state
org.apache.hadoop.hbase.TableNotFoundException: table1
{code}
As you can see, whenever a *user* table is created (as opposed to a system 
metatable), a similar ERROR message is issued twice in the course of table 
creation.

> maven archetype: mapreduce application
> --
>
> Key: HBASE-14879
> URL: https://issues.apache.org/jira/browse/HBASE-14879
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Usability
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: beginner
> Attachments: HBASE-14879-v1.patch, HBASE-14879-v2.patch, 
> archetype_mr_prototype.zip
>
>






[jira] [Commented] (HBASE-15804) Some links in documentation are 404

2016-05-11 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281181#comment-15281181
 ] 

Heng Chen commented on HBASE-15804:
---

OK, let me try to fix it.

> Some links in documentation are 404
> ---
>
> Key: HBASE-15804
> URL: https://issues.apache.org/jira/browse/HBASE-15804
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Heng Chen
>
> http://hbase.apache.org/book.html#security
> The link to {{Understanding User Authentication and Authorization in Apache 
> HBase}} returns a 404





[jira] [Commented] (HBASE-15615) Wrong sleep time when RegionServerCallable need retry

2016-05-11 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281165#comment-15281165
 ] 

Guanghao Zhang commented on HBASE-15615:


[~yuzhih...@gmail.com] [~mantonov] [~ghelmling] Any ideas?

> Wrong sleep time when RegionServerCallable need retry
> -
>
> Key: HBASE-15615
> URL: https://issues.apache.org/jira/browse/HBASE-15615
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0, 0.98.19
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 1.3.0
>
> Attachments: HBASE-15615-branch-0.98.patch, 
> HBASE-15615-branch-1.0-v2.patch, HBASE-15615-branch-1.1-v2.patch, 
> HBASE-15615-branch-1.1-v2.patch, HBASE-15615-branch-1.patch, 
> HBASE-15615-v1.patch, HBASE-15615-v1.patch, HBASE-15615-v2.patch, 
> HBASE-15615-v2.patch, HBASE-15615-v3.patch, HBASE-15615-v4.patch, 
> HBASE-15615.patch
>
>
> RpcRetryingCallerImpl gets the pause time via expectedSleep = 
> callable.sleep(pause, tries + 1), and RegionServerCallable in turn gets it 
> via sleep = ConnectionUtils.getPauseTime(pause, tries + 1). So tries is 
> bumped up twice, and the pause time is 3 * hbase.client.pause when tries 
> is 0.
> RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200}





[jira] [Updated] (HBASE-15615) Wrong sleep time when RegionServerCallable need retry

2016-05-11 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-15615:
---
Attachment: HBASE-15615-v4.patch

Reattaching patch v4, which is the same as v2; it doesn't fix the retry-count 
issue. Maybe we should close this issue first?

> Wrong sleep time when RegionServerCallable need retry
> -
>
> Key: HBASE-15615
> URL: https://issues.apache.org/jira/browse/HBASE-15615
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0, 0.98.19
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 1.3.0
>
> Attachments: HBASE-15615-branch-0.98.patch, 
> HBASE-15615-branch-1.0-v2.patch, HBASE-15615-branch-1.1-v2.patch, 
> HBASE-15615-branch-1.1-v2.patch, HBASE-15615-branch-1.patch, 
> HBASE-15615-v1.patch, HBASE-15615-v1.patch, HBASE-15615-v2.patch, 
> HBASE-15615-v2.patch, HBASE-15615-v3.patch, HBASE-15615-v4.patch, 
> HBASE-15615.patch
>
>
> RpcRetryingCallerImpl gets the pause time via expectedSleep = 
> callable.sleep(pause, tries + 1), and RegionServerCallable in turn gets it 
> via sleep = ConnectionUtils.getPauseTime(pause, tries + 1). So tries is 
> bumped up twice, and the pause time is 3 * hbase.client.pause when tries 
> is 0.
> RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200}





[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281158#comment-15281158
 ] 

Duo Zhang commented on HBASE-15813:
---

Thanks. Let me commit.

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.





[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281154#comment-15281154
 ] 

stack commented on HBASE-15813:
---

+1 from me. It is a lot of code movement but it is going in the right 
direction -- replacing DefaultWALProvider with a more generic abstract 
WALProvider (I hate the name though).

Get a +1 from [~busbey] since this is his area.

Thanks [~Apache9]

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.





[jira] [Commented] (HBASE-15811) Batch Get after batch Put does not fetch all Cells

2016-05-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281149#comment-15281149
 ] 

stack commented on HBASE-15811:
---

Thanks for the extra set of eyes [~jingcheng...@intel.com]

I've been working off the tip of branch-1, where the read point is an 
AtomicLong. I was going to try to fix it here first, then work backward.

bq. I guess this code is not necessary any more? Or we miss something in 
somewhere else?

Thinking on it, the Handler is currently running the batch Put in 
doMiniBatchMutation. It has to run to the end before it returns and before the 
client comes back and asks for all the Cells that were just put. Not sure now 
how the client could come back in between the sync and the mvcc update.




> Batch Get after batch Put does not fetch all Cells
> --
>
> Key: HBASE-15811
> URL: https://issues.apache.org/jira/browse/HBASE-15811
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.1
>Reporter: stack
>Assignee: stack
> Attachments: Test.java, Test2.java
>
>
> A big batch put followed by a batch get does not always return all the Cells 
> put. See the attached test program by Robert Farr that reproduces the issue. 
> It seems to be an issue specific to clusters of more than one machine. 
> Running against a single machine does not have the problem (though the 
> single machine may have many regions). Robert was unable to make his program 
> fail with a single machine only.
> I reproduced what Robert was seeing running his program. I was also unable 
> to make a single machine fail. In a batch of 1000 puts, I see one to three 
> Gets fail. I noticed too that if I wait a second after a failure and then 
> re-get, the Get succeeds.





[jira] [Updated] (HBASE-15107) Procedure V2 - Procedure Queue with Regions

2016-05-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15107:

Attachment: HBASE-15107-v2.patch

> Procedure V2 - Procedure Queue with Regions
> ---
>
> Key: HBASE-15107
> URL: https://issues.apache.org/jira/browse/HBASE-15107
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-15107-v0.patch, HBASE-15107-v1.patch, 
> HBASE-15107-v2.patch
>
>
> Adds the "region locking" that will be used to perform assign/unassign, 
> split/merge operations.
> An operation take the xlock on the regions is working on, all the other 
> procedures will be suspended (removed from the runnable queue) and resumed 
> (put back in the runnable queue) when the operation that has the lock on the 
> region is completed.
> https://reviews.apache.org/r/42213/
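The suspend/resume scheme described above can be sketched minimally as 
follows. All names here are illustrative assumptions; the real proc-v2 
scheduler is considerably more involved.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RegionLockSketch {
    private boolean xLocked = false;
    private final Deque<Runnable> suspended = new ArrayDeque<>();

    // Try to take the exclusive (x) lock; if it is held, the procedure is
    // parked, i.e. removed from the runnable queue.
    synchronized boolean tryXLock(Runnable procedure) {
        if (xLocked) {
            suspended.add(procedure);
            return false;
        }
        xLocked = true;
        return true;
    }

    // Release the lock and hand back the suspended procedures so the caller
    // can put them back in the runnable queue.
    synchronized Deque<Runnable> releaseXLock() {
        xLocked = false;
        Deque<Runnable> resumed = new ArrayDeque<>(suspended);
        suspended.clear();
        return resumed;
    }
}
```

The key property is that a suspended procedure consumes no scheduler slot 
until the lock holder completes and re-enqueues it.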





[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281137#comment-15281137
 ] 

Duo Zhang commented on HBASE-15813:
---

So what's the final decision here? +1 or -1 or +-0?

Thanks [~stack] [~busbey].

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.





[jira] [Commented] (HBASE-14932) bulkload fails because file not found

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281134#comment-15281134
 ] 

Hadoop QA commented on HBASE-14932:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
39s {color} | {color:green} 0.98 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} 0.98 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} 0.98 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
31s {color} | {color:green} 0.98 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} 0.98 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hbase-server in 0.98 has 84 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 43s 
{color} | {color:red} hbase-server in 0.98 failed with JDK v1.8.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} 0.98 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 4m 
32s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
19s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 43s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 58s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789142/HBASE-14932-0.98.patch
 |
| JIRA Issue | HBASE-14932 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
 |
| git revision | 0.98 / a2a6b95 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  

[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281132#comment-15281132
 ] 

Duo Zhang commented on HBASE-15813:
---

OK. Let me file an issue for yetus.

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.





[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281130#comment-15281130
 ] 

Duo Zhang commented on HBASE-15813:
---

Oh yeah, a bug in FanOutOneBlockAsyncDFSOutputHelper... The 
LeaseExpiredException is wrapped inside a RemoteException. We call the rpc 
proxy directly, so we need to unwrap it ourselves...

And getServerNameFromWALDirectoryName implicitly means that we are writing the 
WAL to a FileSystem. So I think it is better to put it in a class with 
'FileSystem' in its name, not in the general WALProvider.
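The shape of the unwrapping fix can be sketched with a stand-in class. This is 
not Hadoop's org.apache.hadoop.ipc.RemoteException API; both the class and the 
helper below are illustrative assumptions.

```java
public class UnwrapSketch {
    // Stand-in for a remote-call wrapper exception.
    static class RemoteException extends RuntimeException {
        private final Throwable wrapped;
        RemoteException(Throwable wrapped) {
            super(wrapped.toString());
            this.wrapped = wrapped;
        }
        Throwable unwrap() { return wrapped; }
    }

    // When calling the rpc proxy directly, no layer unwraps for us, so
    // specific causes (e.g. a lease expiry) must be dug out by hand before
    // type checks.
    static Throwable unwrapIfRemote(Throwable t) {
        return (t instanceof RemoteException re) ? re.unwrap() : t;
    }
}
```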

Thanks.

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.





[jira] [Commented] (HBASE-15811) Batch Get after batch Put does not fetch all Cells

2016-05-11 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281121#comment-15281121
 ] 

Jingcheng Du commented on HBASE-15811:
--

bq.  Or there is something wrong w/ AtomicLong (smile).
In branch-1.x, memstoreRead is not an AtomicLong; it is only a volatile long.
{code}
private volatile long memstoreRead = 0;
{code}

> Batch Get after batch Put does not fetch all Cells
> --
>
> Key: HBASE-15811
> URL: https://issues.apache.org/jira/browse/HBASE-15811
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.1
>Reporter: stack
>Assignee: stack
> Attachments: Test.java, Test2.java
>
>
> A big batch put followed by a batch get does not always return all the Cells 
> put. See the attached test program by Robert Farr that reproduces the issue. 
> It seems to be an issue specific to clusters of more than one machine. 
> Running against a single machine does not have the problem (though the 
> single machine may have many regions). Robert was unable to make his program 
> fail with a single machine only.
> I reproduced what Robert was seeing running his program. I was also unable 
> to make a single machine fail. In a batch of 1000 puts, I see one to three 
> Gets fail. I noticed too that if I wait a second after a failure and then 
> re-get, the Get succeeds.





[jira] [Commented] (HBASE-15454) Archive store files older than max age

2016-05-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281120#comment-15281120
 ] 

Duo Zhang commented on HBASE-15454:
---

The first priority for our usage is how to generate suitable windows. The 
second is to test whether the archive logic here works properly under a real 
workload. So I'm afraid I cannot run a perf test soon; I need to verify the 
customized window factory first...

The pluggable window support and the configuration changes have already been 
committed in separate issues. The changes here do not introduce any 
compatibility issues, so I think it is fine to release this in 1.3.1.

Thanks.

> Archive store files older than max age
> --
>
> Key: HBASE-15454
> URL: https://issues.apache.org/jira/browse/HBASE-15454
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.18, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15454-v1.patch, HBASE-15454-v2.patch, 
> HBASE-15454-v3.patch, HBASE-15454-v4.patch, HBASE-15454-v5.patch, 
> HBASE-15454-v6.patch, HBASE-15454-v7.patch, HBASE-15454.patch
>
>
> In date tiered compaction, the store files older than max age are never 
> touched by minor compactions. Here we introduce a 'freeze window' operation, 
> which does the following things:
> 1. Find all store files that contain cells whose timestamps are in the given 
> window.
> 2. Compact all these files and output one file for each window that these 
> files cover.
> After the compaction, we will have only one file in the given window, and all 
> cells whose timestamps are in the given window will be in that single file. 
> If you do not write new cells with an older timestamp into this window, the 
> file will never change. This makes it easier to do erasure coding on the 
> frozen file to reduce redundancy. It also makes it possible to check 
> consistency between the master and a peer cluster incrementally.
> And why use the word 'freeze'?
> Because there is already an 'HFileArchiver' class. I want to use a different 
> word to prevent confusion.
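A minimal sketch of step 1 above (finding every store file that overlaps the 
given window). `StoreFile` and `filesOverlapping` here are illustrative 
stand-ins, not the real HBase classes:

```java
import java.util.ArrayList;
import java.util.List;

public class FreezeWindowSketch {
    // A store file summarized by the timestamp range of the cells it holds.
    static class StoreFile {
        final long minTs, maxTs;
        StoreFile(long minTs, long maxTs) { this.minTs = minTs; this.maxTs = maxTs; }
    }

    // Step 1: collect every file containing at least one cell inside [start, end).
    static List<StoreFile> filesOverlapping(List<StoreFile> files, long start, long end) {
        List<StoreFile> hit = new ArrayList<>();
        for (StoreFile f : files) {
            if (f.minTs < end && f.maxTs >= start) {
                hit.add(f);
            }
        }
        return hit;
    }

    public static void main(String[] args) {
        List<StoreFile> files = new ArrayList<>();
        files.add(new StoreFile(0, 50));     // overlaps window [40, 100)
        files.add(new StoreFile(60, 90));    // entirely inside the window
        files.add(new StoreFile(120, 200));  // newer than the window
        System.out.println(filesOverlapping(files, 40, 100).size());
    }
}
```

All the selected files would then be compacted together (step 2), emitting one 
output file per covered window.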





[jira] [Commented] (HBASE-15811) Batch Get after batch Put does not fetch all Cells

2016-05-11 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281110#comment-15281110
 ] 

Jingcheng Du commented on HBASE-15811:
--

Hi, MultiVersionConsistencyControl has a private lock, readWaiters, which is 
only used in advanceMemstore(WriteEntry e):
{code}
if (nextReadValue > 0) {
  synchronized (readWaiters) {
readWaiters.notifyAll();
  }
}
{code}
I guess this code is not necessary any more? Or are we missing something 
somewhere else?
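For reference, the readWaiters pattern under discussion (readers parked on a 
monitor until the global read point advances, writers calling notifyAll after 
advancing it) can be sketched as below. This is a hedged simplification, not 
the actual MultiVersionConsistencyControl class; all names are illustrative.

```java
import java.util.concurrent.atomic.AtomicLong;

public class MvccSketch {
    private final AtomicLong readPoint = new AtomicLong(0);
    private final Object readWaiters = new Object();

    // Writer side: advance the global read point, then wake blocked readers.
    void advanceTo(long newReadPoint) {
        long prev;
        do {
            prev = readPoint.get();
            if (prev >= newReadPoint) break;   // someone advanced past us
        } while (!readPoint.compareAndSet(prev, newReadPoint));
        synchronized (readWaiters) {
            readWaiters.notifyAll();
        }
    }

    // Reader side: block until the read point reaches 'target'.
    void waitForRead(long target) throws InterruptedException {
        synchronized (readWaiters) {
            while (readPoint.get() < target) {
                readWaiters.wait();
            }
        }
    }

    long current() { return readPoint.get(); }

    public static void main(String[] args) throws Exception {
        MvccSketch mvcc = new MvccSketch();
        Thread writer = new Thread(() -> mvcc.advanceTo(5));
        writer.start();
        mvcc.waitForRead(5);   // returns once the writer has advanced to 5
        writer.join();
        System.out.println(mvcc.current());
    }
}
```

Without the notifyAll, a reader parked in waitForRead would never be woken, 
which is why removing that block is only safe if nothing waits on readWaiters.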

> Batch Get after batch Put does not fetch all Cells
> --
>
> Key: HBASE-15811
> URL: https://issues.apache.org/jira/browse/HBASE-15811
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.1
>Reporter: stack
>Assignee: stack
> Attachments: Test.java, Test2.java
>
>
> A big batch put followed by a batch get does not always return all Cells put. 
> See attached test program by Robert Farr that reproduces the issue. It seems 
> to be an issue to do with a cluster of more than one machine. Running against 
> a single machine does not have the problem (though the single machine may 
> have many regions). Robert was unable to make his program fail with a single 
> machine only.
> I reproduced what Robert was seeing running his program. I was also unable to 
> make a single machine fail. In a batch of 1000 puts, I see one to three Gets 
> fail. I noticed too that if I wait a second after a fail and then re-get, the 
> Get succeeds.





[jira] [Updated] (HBASE-15809) Basic Replication WebUI

2016-05-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15809:

Description: 
At the moment the only way to have some insight on replication from the webui 
is looking at zkdump and metrics.

the basic information useful to get started debugging are: peer information and 
the view of WALs offsets for each peer.

https://reviews.apache.org/r/47275/

  was:
At the moment the only way to have some insight on replication from the webui 
is looking at zkdump and metrics.

the basic information useful to get started debugging are: peer information and 
the view of WALs offsets for each peer.


> Basic Replication WebUI
> ---
>
> Key: HBASE-15809
> URL: https://issues.apache.org/jira/browse/HBASE-15809
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, UI
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15809-v0.patch, HBASE-15809-v0.png, 
> HBASE-15809-v1.patch
>
>
> At the moment the only way to have some insight on replication from the webui 
> is looking at zkdump and metrics.
> the basic information useful to get started debugging are: peer information 
> and the view of WALs offsets for each peer.
> https://reviews.apache.org/r/47275/





[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2016-05-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281070#comment-15281070
 ] 

Andrew Purtell commented on HBASE-13706:


[~giacomotay...@gmail.com] ?

> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-13706-0.98.patch, HBASE-13706-branch-1.patch, 
> HBASE-13706-master-v2.patch, HBASE-13706-master-v2.patch, HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.
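The exemption mechanism described above is a package-prefix match. A 
hypothetical sketch of narrowing the blanket "org.apache.hadoop" exemption so 
Hive packages are not exempt (the names below are illustrative, not the actual 
CoprocessorClassLoader code):

```java
public class ExemptionSketch {
    // Prefixes loaded from the parent classpath (exempt from the coprocessor jar).
    static final String[] EXEMPT_PREFIXES = { "org.apache.hadoop." };
    // Carve-out: these look like hadoop packages but should come from the jar.
    static final String[] NON_EXEMPT = { "org.apache.hadoop.hive." };

    static boolean loadFromParent(String className) {
        for (String keep : NON_EXEMPT) {
            if (className.startsWith(keep)) return false;  // from coprocessor jar
        }
        for (String exempt : EXEMPT_PREFIXES) {
            if (className.startsWith(exempt)) return true; // from parent
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(loadFromParent("org.apache.hadoop.hbase.Cell"));
        System.out.println(loadFromParent("org.apache.hadoop.hive.ql.exec.UDF"));
    }
}
```

With the carve-out, a coprocessor jar that bundles Hive classes resolves them 
locally instead of failing against a parent classpath without Hive jars.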





[jira] [Updated] (HBASE-15809) Basic Replication WebUI

2016-05-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15809:

Attachment: HBASE-15809-v1.patch

> Basic Replication WebUI
> ---
>
> Key: HBASE-15809
> URL: https://issues.apache.org/jira/browse/HBASE-15809
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, UI
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15809-v0.patch, HBASE-15809-v0.png, 
> HBASE-15809-v1.patch
>
>
> At the moment the only way to have some insight on replication from the webui 
> is looking at zkdump and metrics.
> the basic information useful to get started debugging are: peer information 
> and the view of WALs offsets for each peer.





[jira] [Commented] (HBASE-14879) maven archetype: mapreduce application

2016-05-11 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281067#comment-15281067
 ] 

Daniel Vimont commented on HBASE-14879:
---

Yes, I'll dig deeper into this...

> maven archetype: mapreduce application
> --
>
> Key: HBASE-14879
> URL: https://issues.apache.org/jira/browse/HBASE-14879
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Usability
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: beginner
> Attachments: HBASE-14879-v1.patch, HBASE-14879-v2.patch, 
> archetype_mr_prototype.zip
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15811) Batch Get after batch Put does not fetch all Cells

2016-05-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281060#comment-15281060
 ] 

stack commented on HBASE-15811:
---

Thanks [~rfarrjr]

This issue is a good one.

 # A batch of Puts come in.
 # We make it to HRegion#doMiniBatchMutation
 # It adds the edits to WAL with append, then to memstore, then calls sync, and 
then updates mvcc.
 # Down in the sync, we add our sync request to the running sync threads.
 # They send the sync and wait on return.
 # It returns. We let blocked handlers go.
 # They return to the client.
 # Client comes back in to read its own writes.

TO BE CONFIRMED, but it seems like the remote client can make a query IN 
BETWEEN the sync and the update of mvcc.

I captured this in the log:

{code}
 7357 2016-05-11 16:19:51,511 INFO  
[B.defaultRpcServer.handler=151,queue=151,port=16020] regionserver.HRegion: 
mvcc.readPoint=638, a12e7c7829e37a16f4144b03e35e3532
 7358 2016-05-11 16:19:51,512 INFO  
[B.defaultRpcServer.handler=36,queue=36,port=16020] regionserver.HRegion: SPIN 
EMPTY 637 test_farr,0,1463008764533.a12e7c7829e37a16f4144b03e  35e3532.
{code}

The first line is logging I added just after we'd updated the mvcc in 
doMiniBatchMutation.

The second line is the case where a Get got nothing back even though it had 
just written the value. See how the readPoint at write is 638 but the read 
point for the Scan/Get is 637... Somehow, at creation of the Scan, it got a 
readpoint before it was updated. Or there is something wrong w/ AtomicLong 
(smile).

Let me see if I can artificially recreate.
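The suspected window in the steps above can be illustrated deterministically. 
This is a hypothetical simulation, not HBase code: it only shows that a read 
point snapshotted between the sync/ack and the mvcc advance cannot see the 
freshly written cell.

```java
import java.util.concurrent.atomic.AtomicLong;

public class ReadOwnWritesRace {
    static final AtomicLong mvccReadPoint = new AtomicLong(637);

    public static void main(String[] args) {
        long writeSeq = 638;   // sequence id assigned to the batch put
        // Steps 1-3: append to WAL, add to memstore, sync -- all done here.
        // Step 4 (the suspected bug window): the client is acked and comes
        // back to read *before* the writer advances mvcc:
        long readerSnapshot = mvccReadPoint.get();
        // Step 5: only afterwards does the writer advance the read point:
        mvccReadPoint.set(writeSeq);

        boolean cellVisibleToReader = writeSeq <= readerSnapshot;
        System.out.println("reader readPoint=" + readerSnapshot
            + " cell seq=" + writeSeq + " visible=" + cellVisibleToReader);
    }
}
```

This matches the captured log: the write landed at readPoint 638 while the 
Get's Scan snapshotted 637, so the Cell was invisible until mvcc caught up.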

> Batch Get after batch Put does not fetch all Cells
> --
>
> Key: HBASE-15811
> URL: https://issues.apache.org/jira/browse/HBASE-15811
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.1
>Reporter: stack
>Assignee: stack
> Attachments: Test.java, Test2.java
>
>
> A big batch put followed by a batch get does not always return all Cells put. 
> See attached test program by Robert Farr that reproduces the issue. It seems 
> to be an issue to do with a cluster of more than one machine. Running against 
> a single machine does not have the problem (though the single machine may 
> have many regions). Robert was unable to make his program fail with a single 
> machine only.
> I reproduced what Robert was seeing running his program. I was also unable to 
> make a single machine fail. In a batch of 1000 puts, I see one to three Gets 
> fail. I noticed too that if I wait a second after a fail and then re-get, the 
> Get succeeds.





[jira] [Commented] (HBASE-15812) HttpServer fails to come up against hadoop trunk

2016-05-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281051#comment-15281051
 ] 

Ted Yu commented on HBASE-15812:


In master branch, 'Metrics Dump' is also hooked up to /jmx:

hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon:
Metrics Dump

I think we can drop '/metrics'
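One hedged way to tolerate the removed class is to probe the classpath before 
registering the servlet. `classAvailable` and the messages below are 
illustrative only, not the real HttpServer API:

```java
public class OptionalServletRegistration {
    // Returns true if 'name' can be loaded from the current classpath.
    static boolean classAvailable(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(classAvailable("java.lang.String"));  // sanity check
        String metricsServlet = "org.apache.hadoop.metrics.MetricsServlet";
        if (classAvailable(metricsServlet)) {
            System.out.println("registering /metrics");
        } else {
            // Hadoop trunk removed MetricsServlet (HADOOP-12504); skip /metrics
            // and point the 'Metrics Dump' link at /jmx instead.
            System.out.println("skipping /metrics; linking 'Metrics Dump' to /jmx");
        }
    }
}
```

This keeps the server booting whether or not the legacy metrics v1 servlet 
exists on the classpath.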

> HttpServer fails to come up against hadoop trunk
> 
>
> Key: HBASE-15812
> URL: https://issues.apache.org/jira/browse/HBASE-15812
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Sangjin Lee
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 15812.v1.txt, 15812.v1.txt
>
>
> If you run HBase HttpServer against the hadoop trunk, it fails.
> {noformat}
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/metrics/MetricsServlet
> at 
> org.apache.hadoop.hbase.http.HttpServer.addDefaultServlets(HttpServer.java:677)
> at 
> org.apache.hadoop.hbase.http.HttpServer.initializeWebServer(HttpServer.java:546)
> at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:500)
> at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:104)
> at 
> org.apache.hadoop.hbase.http.HttpServer$Builder.build(HttpServer.java:345)
> at org.apache.hadoop.hbase.http.InfoServer.(InfoServer.java:77)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:550)
> at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:333)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
> at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
> ... 26 more
> {noformat}
> The hadoop trunk removed {{MetricsServlet}} (HADOOP-12504).





[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2016-05-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281050#comment-15281050
 ] 

Sangjin Lee commented on HBASE-13706:
-

Yes, that's what I heard too. I guess the question is how soon Phoenix 4.8 
will be released, as that may become a blocker for us to merge into the hadoop 
trunk...

> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-13706-0.98.patch, HBASE-13706-branch-1.patch, 
> HBASE-13706-master-v2.patch, HBASE-13706-master-v2.patch, HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.





[jira] [Commented] (HBASE-15812) HttpServer fails to come up against hadoop trunk

2016-05-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281044#comment-15281044
 ] 

stack commented on HBASE-15812:
---

I don't follow. If I click on 'Metrics Dump', I get 500. In master branch, this 
goes to /metrics. In branch-1 it goes to /jmx.  If /jmx in master 'works' 
showing metrics v2, why not hook up 'Metrics Dump' to /jmx instead of /metrics? 
Maybe I am misunderstanding.

> HttpServer fails to come up against hadoop trunk
> 
>
> Key: HBASE-15812
> URL: https://issues.apache.org/jira/browse/HBASE-15812
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Sangjin Lee
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 15812.v1.txt, 15812.v1.txt
>
>
> If you run HBase HttpServer against the hadoop trunk, it fails.
> {noformat}
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/metrics/MetricsServlet
> at 
> org.apache.hadoop.hbase.http.HttpServer.addDefaultServlets(HttpServer.java:677)
> at 
> org.apache.hadoop.hbase.http.HttpServer.initializeWebServer(HttpServer.java:546)
> at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:500)
> at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:104)
> at 
> org.apache.hadoop.hbase.http.HttpServer$Builder.build(HttpServer.java:345)
> at org.apache.hadoop.hbase.http.InfoServer.(InfoServer.java:77)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:550)
> at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:333)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
> at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
> ... 26 more
> {noformat}
> The hadoop trunk removed {{MetricsServlet}} (HADOOP-12504).





[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2016-05-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281039#comment-15281039
 ] 

Andrew Purtell commented on HBASE-13706:


bq. The YARN timeline service feature also has a dependency on Phoenix, and it 
appears Phoenix is not yet ready to pick up 1.2.x?

There are a couple of minor issues with small API changes between 1.1 and 1.2 
but I bet the next release of Phoenix (4.8) will support 1.2. 

> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-13706-0.98.patch, HBASE-13706-branch-1.patch, 
> HBASE-13706-master-v2.patch, HBASE-13706-master-v2.patch, HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.





[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2016-05-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281033#comment-15281033
 ] 

Sangjin Lee commented on HBASE-13706:
-

I thought Nick was not comfortable with backporting HBASE-15686, but I didn't 
read that he was uncomfortable with backporting this JIRA. The YARN timeline 
service feature also has a dependency on Phoenix, and it appears Phoenix is not 
yet ready to pick up 1.2.x? [~enis]?

> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-13706-0.98.patch, HBASE-13706-branch-1.patch, 
> HBASE-13706-master-v2.patch, HBASE-13706-master-v2.patch, HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.





[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2016-05-11 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281026#comment-15281026
 ] 

Jerry He commented on HBASE-13706:
--

Posted concurrently with [~enis] :-)

> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-13706-0.98.patch, HBASE-13706-branch-1.patch, 
> HBASE-13706-master-v2.patch, HBASE-13706-master-v2.patch, HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.





[jira] [Resolved] (HBASE-15818) Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments

2016-05-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi resolved HBASE-15818.
-
Resolution: Invalid

closing, I think it's just the wrong ' character (typographic quotes instead of ASCII single quotes)

> Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments 
> --
>
> Key: HBASE-15818
> URL: https://issues.apache.org/jira/browse/HBASE-15818
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.2
>
>
> Unable to create a table with multiple families (as suggested by the Examples).
> Also, the shell exits. (I only tested 1.2 and 2.0, which both have the 
> problem; 1.1 seems to be ok)
> {noformat}
> hbase(main):001:0> create ‘t1’, ‘f1’, ‘f2’, ‘f3’
> ERROR: wrong number of arguments (0 for 1)
> Examples:
>   hbase> create 't1', 'f1', 'f2', 'f3'
> {noformat}





[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2016-05-11 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281022#comment-15281022
 ] 

Jerry He commented on HBASE-13706:
--

Hi, [~sjlee0]

[~ndimiduk] is the man for the 1.1.x line. 
But I recall he expressed his opinion on a backport in a comment on 
HBASE-15686.

> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-13706-0.98.patch, HBASE-13706-branch-1.patch, 
> HBASE-13706-master-v2.patch, HBASE-13706-master-v2.patch, HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.





[jira] [Updated] (HBASE-14932) bulkload fails because file not found

2016-05-11 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated HBASE-14932:

Assignee: (was: Alicia Ying Shu)

> bulkload fails because file not found
> -
>
> Key: HBASE-14932
> URL: https://issues.apache.org/jira/browse/HBASE-14932
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.10
>Reporter: Shuaifeng Zhou
> Fix For: 0.98.20
>
> Attachments: HBASE-14932-0.98.patch
>
>
> When making a bulkload call, one call may contain several hfiles to load, but 
> the call may time out while the regionserver is loading the files, and the 
> client will retry the load.
> But while the client is retrying, the regionserver may still be performing 
> the load operation; if some files have already been loaded, the retry call 
> will throw a FileNotFoundException, which causes the client to retry again 
> and again until retries are exhausted, and the bulkload fails.
> When this happens, some files have actually been loaded successfully, which 
> is an inconsistent state.
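One hedged sketch of making the retried call idempotent: treat a requested 
hfile that is no longer at its source path as already loaded by the previous 
attempt, instead of failing the whole batch. This is illustrative only, not 
the actual bulkload fix:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class BulkloadRetrySketch {
    public static void main(String[] args) {
        // Source files still present on the retry; the first (timed-out)
        // attempt already moved "hfile-a" into the region directory.
        Set<String> stillAtSource = new HashSet<>(Arrays.asList("hfile-b", "hfile-c"));
        List<String> requested = Arrays.asList("hfile-a", "hfile-b", "hfile-c");

        List<String> loadedThisAttempt = new ArrayList<>();
        for (String f : requested) {
            if (!stillAtSource.contains(f)) {
                // Missing source: assume the earlier attempt loaded it and
                // skip, rather than throwing FileNotFoundException.
                continue;
            }
            loadedThisAttempt.add(f);
        }
        System.out.println(loadedThisAttempt);
    }
}
```

A real fix would also need to verify the file actually landed in the region 
directory before skipping it, since "missing" alone does not prove success.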





[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2016-05-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281016#comment-15281016
 ] 

Enis Soztutar commented on HBASE-13706:
---

[~ndimiduk] our friends at YARN need either this jira or HBASE-15686 to be 
backported to 1.1, so that they can use the HBase-1.1.x line (phoenix-4.8.0 is 
not out yet). Which one should we backport? 

This is for YARN-5070. 

> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-13706-0.98.patch, HBASE-13706-branch-1.patch, 
> HBASE-13706-master-v2.patch, HBASE-13706-master-v2.patch, HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.





[jira] [Commented] (HBASE-15816) Provide client with ability to set priority on Operations

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280972#comment-15280972
 ] 

Hadoop QA commented on HBASE-15816:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s 
{color} | {color:red} hbase-client in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 
39s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 59s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.io.TestHeapSize |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803519/HBASE-15816.patch |
| JIRA Issue | HBASE-15816 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux pietas.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT 
Wed Sep 3 21:56:12 UTC 2014 

[jira] [Commented] (HBASE-15817) Backup history should mention the type (full or incremental) of the backup

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280927#comment-15280927
 ] 

Hadoop QA commented on HBASE-15817:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-15817 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803514/15817.v1.txt |
| JIRA Issue | HBASE-15817 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1867/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Backup history should mention the type (full or incremental) of the backup
> --
>
> Key: HBASE-15817
> URL: https://issues.apache.org/jira/browse/HBASE-15817
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 15817.v1.txt
>
>
> [~cartershanklin] performed full backup followed by incremental backup.
> In the output of backup history, one of the backups is full, one is 
> incremental. But which one?
> {code}
> [vagrant@hbase ~]$ hbase backup history
> ID : backup_1462900419633
> Tables : 
> SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
> State  : COMPLETE
> Start time : Tue May 10 17:13:39 UTC 2016
> End time   : Tue May 10 17:13:53 UTC 2016
> Progress   : 100
> ID : backup_1462900212093
> Tables : 
> SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
> State  : COMPLETE
> Start time : Tue May 10 17:10:12 UTC 2016
> End time   : Tue May 10 17:11:30 UTC 2016
> Progress   : 100
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15818) Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments

2016-05-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15818:

Description: 
Unable to create a table with multiple families (as suggested by the Examples);
the shell is also exiting. (I only tested 1.2 and 2.0 and have the problem; 1.1 
seems to be ok)
{noformat}
hbase(main):001:0> create ‘t1’, ‘f1’, ‘f2’, ‘f3’

ERROR: wrong number of arguments (0 for 1)

Examples:
  hbase> create 't1', 'f1', 'f2', 'f3'
{noformat}

  was:
Unable to create a table with multiple families (as suggested by the Examples);
the shell is also exiting. (I only tested 1.2 and 2.0 but it may be in every 
version)
{noformat}
hbase(main):001:0> create ‘t1’, ‘f1’, ‘f2’, ‘f3’

ERROR: wrong number of arguments (0 for 1)

Examples:
  hbase> create 't1', 'f1', 'f2', 'f3'
{noformat}


> Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments 
> --
>
> Key: HBASE-15818
> URL: https://issues.apache.org/jira/browse/HBASE-15818
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.2
>
>
> Unable to create a table with multiple families (as suggested by the
> Examples); the shell is also exiting. (I only tested 1.2 and 2.0 and have the
> problem; 1.1 seems to be ok)
> {noformat}
> hbase(main):001:0> create ‘t1’, ‘f1’, ‘f2’, ‘f3’
> ERROR: wrong number of arguments (0 for 1)
> Examples:
>   hbase> create 't1', 'f1', 'f2', 'f3'
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280884#comment-15280884
 ] 

stack commented on HBASE-15813:
---

bq. What advantage will YA FooUtility have over using the functions in the 
WALProvider that's the basis for other FileSystem based WALProviders?

Wind back my suggestion. It's not a good one. Ignore. I was reacting to our 
putting AbstractFSWALProvider everywhere in this patch; it is kinda ugly having 
an Abstract class as our basis. Got carried away suggesting we try and put the 
WALProvider Interface in place more instead... but then there are these little 
utility methods doing Path operations to find vital info... that are critical 
to FSWAL implementations. Also, AbstractFSWALProvider everywhere is an 
improvement on having DefaultWALProvider throughout.

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL as our default WAL 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15816) Provide client with ability to set priority on Operations

2016-05-11 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-15816:
---
Status: Patch Available  (was: Open)

> Provide client with ability to set priority on Operations 
> --
>
> Key: HBASE-15816
> URL: https://issues.apache.org/jira/browse/HBASE-15816
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HBASE-15816.patch
>
>
> First round will just be to expose the ability to set priorities for client 
> operations.  For more background: 
> http://mail-archives.apache.org/mod_mbox/hbase-dev/201604.mbox/%3CCA+RK=_BG_o=q8HMptcP2WauAinmEsL+15f3YEJuz=qbpcya...@mail.gmail.com%3E
> Next step would be to remove AnnotationReadingPriorityFunction and have the 
> client send priorities explicitly.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15816) Provide client with ability to set priority on Operations

2016-05-11 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-15816:
---
Attachment: HBASE-15816.patch

Some assumptions were made:

When both a table priority and an explicit request priority are set, we keep 
the max priority for that request.

For batch operations, we apply the max priority of any operation to the 
entire batch.
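The two rules above can be sketched as follows. This is an illustrative sketch only, not the actual HBASE-15816 patch; the {{Operation}} class and the method names are assumptions made for the example.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the priority rules described above; not the actual
// HBASE-15816 patch. Operation, resolvePriority and batchPriority are
// hypothetical names.
public class BatchPriority {

  // A client operation carrying an explicit priority.
  static class Operation {
    final int priority;
    Operation(int priority) { this.priority = priority; }
  }

  // Rule 1: when both a table priority and an explicit request priority are
  // set, the effective priority is the max of the two.
  static int resolvePriority(int requestPriority, int tablePriority) {
    return Math.max(requestPriority, tablePriority);
  }

  // Rule 2: a batch takes the max priority of any operation in it.
  static int batchPriority(List<Operation> ops, int defaultPriority) {
    int max = defaultPriority;
    for (Operation op : ops) {
      max = Math.max(max, op.priority);
    }
    return max;
  }

  public static void main(String[] args) {
    List<Operation> batch = Arrays.asList(
        new Operation(0), new Operation(5), new Operation(2));
    if (resolvePriority(3, 7) != 7) throw new AssertionError();
    if (batchPriority(batch, 0) != 5) throw new AssertionError();
    System.out.println("ok");
  }
}
```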

> Provide client with ability to set priority on Operations 
> --
>
> Key: HBASE-15816
> URL: https://issues.apache.org/jira/browse/HBASE-15816
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HBASE-15816.patch
>
>
> First round will just be to expose the ability to set priorities for client 
> operations.  For more background: 
> http://mail-archives.apache.org/mod_mbox/hbase-dev/201604.mbox/%3CCA+RK=_BG_o=q8HMptcP2WauAinmEsL+15f3YEJuz=qbpcya...@mail.gmail.com%3E
> Next step would be to remove AnnotationReadingPriorityFunction and have the 
> client send priorities explicitly.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2016-05-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280870#comment-15280870
 ] 

Sangjin Lee commented on HBASE-13706:
-

I asked in another JIRA, but is there appetite for porting this to the 1.1.x 
line? Thanks!

> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-13706-0.98.patch, HBASE-13706-branch-1.patch, 
> HBASE-13706-master-v2.patch, HBASE-13706-master-v2.patch, HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.
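The mechanism behind the bug can be sketched as below. This is a simplified, hypothetical version of the prefix-exemption check (the names here are illustrative, not the actual CoprocessorClassLoader source), showing why a blanket "org.apache.hadoop" prefix also catches Hive classes:

```java
// Simplified, hypothetical sketch of prefix-based class exemption; not the
// actual CoprocessorClassLoader source.
public class ExemptionCheck {

  // A blanket prefix like "org.apache.hadoop" is too broad.
  static final String[] CLASS_PREFIX_EXEMPTIONS = { "java.", "org.apache.hadoop" };

  // Exempt classes are ignored in the coprocessor jar and loaded from the
  // parent classpath instead.
  static boolean isClassExempt(String name) {
    for (String prefix : CLASS_PREFIX_EXEMPTIONS) {
      if (name.startsWith(prefix)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // Intended: Hadoop/HBase classes come from the parent classpath.
    if (!isClassExempt("org.apache.hadoop.hbase.client.Put")) throw new AssertionError();
    // Unintended: Hive classes share the prefix, so a coprocessor jar
    // bundling Hive cannot load them from itself.
    if (!isClassExempt("org.apache.hadoop.hive.ql.exec.UDF")) throw new AssertionError();
    System.out.println("ok");
  }
}
```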



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15818) Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments

2016-05-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15818:

Affects Version/s: 1.2.1

> Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments 
> --
>
> Key: HBASE-15818
> URL: https://issues.apache.org/jira/browse/HBASE-15818
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.2
>
>
> Unable to create a table with multiple families (as suggested by the
> Examples); the shell is also exiting. (I only tested 1.2 and 2.0 but it may 
> be in every version)
> {noformat}
> hbase(main):001:0> create ‘t1’, ‘f1’, ‘f2’, ‘f3’
> ERROR: wrong number of arguments (0 for 1)
> Examples:
>   hbase> create 't1', 'f1', 'f2', 'f3'
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15818) Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments

2016-05-11 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-15818:
---

 Summary: Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of 
arguments 
 Key: HBASE-15818
 URL: https://issues.apache.org/jira/browse/HBASE-15818
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0
Reporter: Matteo Bertozzi
Priority: Critical
 Fix For: 2.0.0, 1.3.0, 1.2.2


Unable to create a table with multiple families (as suggested by the Examples);
the shell is also exiting. (I only tested 1.2 and 2.0 but it may be in every 
version)
{noformat}
hbase(main):001:0> create ‘t1’, ‘f1’, ‘f2’, ‘f3’

ERROR: wrong number of arguments (0 for 1)

Examples:
  hbase> create 't1', 'f1', 'f2', 'f3'
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15817) Backup history should mention the type (full or incremental) of the backup

2016-05-11 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15817:
--

 Summary: Backup history should mention the type (full or 
incremental) of the backup
 Key: HBASE-15817
 URL: https://issues.apache.org/jira/browse/HBASE-15817
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 15817.v1.txt

[~cartershanklin] performed full backup followed by incremental backup.

In the output of backup history, one of the backups is full, one is 
incremental. But which one?
{code}
[vagrant@hbase ~]$ hbase backup history
ID : backup_1462900419633
Tables : 
SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
State  : COMPLETE
Start time : Tue May 10 17:13:39 UTC 2016
End time   : Tue May 10 17:13:53 UTC 2016
Progress   : 100

ID : backup_1462900212093
Tables : 
SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
State  : COMPLETE
Start time : Tue May 10 17:10:12 UTC 2016
End time   : Tue May 10 17:11:30 UTC 2016
Progress   : 100
{code}
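One way the requested change could look (a hypothetical sketch; the class and field names are assumptions, not the actual HBase backup code): carry the backup type on each history entry and add a Type line to the rendered output, so full and incremental backups are distinguishable:

```java
// Hypothetical sketch of a backup history entry that carries its type;
// not the actual HBase backup code.
public class BackupHistoryEntry {

  enum BackupType { FULL, INCREMENTAL }

  final String id;
  final BackupType type;
  final String state;

  BackupHistoryEntry(String id, BackupType type, String state) {
    this.id = id;
    this.type = type;
    this.state = state;
  }

  // Renders the entry as `hbase backup history` might, with the missing
  // Type line added.
  String render() {
    return "ID : " + id + "\n"
        + "Type : " + type + "\n"
        + "State : " + state;
  }

  public static void main(String[] args) {
    BackupHistoryEntry full =
        new BackupHistoryEntry("backup_1462900419633", BackupType.FULL, "COMPLETE");
    System.out.println(full.render());
  }
}
```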



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15817) Backup history should mention the type (full or incremental) of the backup

2016-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15817:
---
Status: Patch Available  (was: Open)

> Backup history should mention the type (full or incremental) of the backup
> --
>
> Key: HBASE-15817
> URL: https://issues.apache.org/jira/browse/HBASE-15817
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 15817.v1.txt
>
>
> [~cartershanklin] performed full backup followed by incremental backup.
> In the output of backup history, one of the backups is full, one is 
> incremental. But which one?
> {code}
> [vagrant@hbase ~]$ hbase backup history
> ID : backup_1462900419633
> Tables : 
> SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
> State  : COMPLETE
> Start time : Tue May 10 17:13:39 UTC 2016
> End time   : Tue May 10 17:13:53 UTC 2016
> Progress   : 100
> ID : backup_1462900212093
> Tables : 
> SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
> State  : COMPLETE
> Start time : Tue May 10 17:10:12 UTC 2016
> End time   : Tue May 10 17:11:30 UTC 2016
> Progress   : 100
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15817) Backup history should mention the type (full or incremental) of the backup

2016-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15817:
---
Attachment: 15817.v1.txt

> Backup history should mention the type (full or incremental) of the backup
> --
>
> Key: HBASE-15817
> URL: https://issues.apache.org/jira/browse/HBASE-15817
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 15817.v1.txt
>
>
> [~cartershanklin] performed full backup followed by incremental backup.
> In the output of backup history, one of the backups is full, one is 
> incremental. But which one?
> {code}
> [vagrant@hbase ~]$ hbase backup history
> ID : backup_1462900419633
> Tables : 
> SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
> State  : COMPLETE
> Start time : Tue May 10 17:13:39 UTC 2016
> End time   : Tue May 10 17:13:53 UTC 2016
> Progress   : 100
> ID : backup_1462900212093
> Tables : 
> SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
> State  : COMPLETE
> Start time : Tue May 10 17:10:12 UTC 2016
> End time   : Tue May 10 17:11:30 UTC 2016
> Progress   : 100
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15816) Provide client with ability to set priority on Operations

2016-05-11 Thread churro morales (JIRA)
churro morales created HBASE-15816:
--

 Summary: Provide client with ability to set priority on Operations 
 Key: HBASE-15816
 URL: https://issues.apache.org/jira/browse/HBASE-15816
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: churro morales
Assignee: churro morales


First round will just be to expose the ability to set priorities for client 
operations.  For more background: 
http://mail-archives.apache.org/mod_mbox/hbase-dev/201604.mbox/%3CCA+RK=_BG_o=q8HMptcP2WauAinmEsL+15f3YEJuz=qbpcya...@mail.gmail.com%3E

Next step would be to remove AnnotationReadingPriorityFunction and have the 
client send priorities explicitly.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15811) Batch Get after batch Put does not fetch all Cells

2016-05-11 Thread Robert Farr (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Farr updated HBASE-15811:

Attachment: Test2.java

Program to reproduce when updating existing cells.

> Batch Get after batch Put does not fetch all Cells
> --
>
> Key: HBASE-15811
> URL: https://issues.apache.org/jira/browse/HBASE-15811
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.1
>Reporter: stack
>Assignee: stack
> Attachments: Test.java, Test2.java
>
>
> A big batch put followed by a batch get does not always return all Cells put. 
> See attached test program by Robert Farr that reproduces the issue. It seems 
> to be an issue to do with a cluster of more than one machine. Running against 
> a single machine does not have the problem (though the single machine may 
> have many regions). Robert was unable to make his program fail with a single 
> machine only.
> I reproduced what Robert was seeing running his program. I was also unable to 
> make a single machine fail. In a batch of 1000 puts, I see one to three Gets 
> fail. I noticed too that if I wait a second after a fail and then re-get, the 
> Get succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15809) Basic Replication WebUI

2016-05-11 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280821#comment-15280821
 ] 

Sean Busbey commented on HBASE-15809:
-

{code}
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/FreeReplicationEndpoint.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/FreeReplicationEndpoint.java
new file mode 100755
index 000..9333f98
--- /dev/null
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/FreeReplicationEndpoint.java
@@ -0,0 +1,147 @@
+package org.apache.hadoop.hbase.replication.regionserver;
+ 
+import java.io.IOException;
+import java.io.InputStream;
{code}

File is missing ASF header

{code}
+/*
+ * add_peer '1', ENDPOINT_CLASSNAME => 
'org.apache.hadoop.hbase.replication.regionserver.FreeReplicationEndpoint', 
CONFIG => {'master_addresses' => 'localhost:18080'}
+ * create 'testtb', {NAME => 'f', REPLICATION_SCOPE => 1}
+ * put 'testtb', 'row0', 'f:q', 'value0'
+ */
+public class FreeReplicationEndpoint extends BaseReplicationEndpoint
+im
{code}

I like the instructions for installation, but it also needs a brief description 
of what this endpoint is used for. When would I want to install it?

> Basic Replication WebUI
> ---
>
> Key: HBASE-15809
> URL: https://issues.apache.org/jira/browse/HBASE-15809
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, UI
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15809-v0.patch, HBASE-15809-v0.png
>
>
> At the moment the only way to have some insight into replication from the
> webui is looking at zkdump and metrics.
> The basic information useful to get started debugging is: peer information 
> and a view of the WAL offsets for each peer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15809) Basic Replication WebUI

2016-05-11 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280824#comment-15280824
 ] 

Matteo Bertozzi commented on HBASE-15809:
-

This was not really supposed to be in. This is my replication endpoint for 
testing.
It has really nothing to do with the webui patch.

> Basic Replication WebUI
> ---
>
> Key: HBASE-15809
> URL: https://issues.apache.org/jira/browse/HBASE-15809
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, UI
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15809-v0.patch, HBASE-15809-v0.png
>
>
> At the moment the only way to have some insight into replication from the
> webui is looking at zkdump and metrics.
> The basic information useful to get started debugging is: peer information 
> and a view of the WAL offsets for each peer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15815) Region mover script sometimes reports stuck region where only one server was involved

2016-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15815:
---
Attachment: 15815-branch-1.v1.txt

Patch v1 adds region name to the error message.

> Region mover script sometimes reports stuck region where only one server was 
> involved
> -
>
> Key: HBASE-15815
> URL: https://issues.apache.org/jira/browse/HBASE-15815
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 15815-branch-1.v1.txt
>
>
> Sometimes we saw the following in output from region mover script:
> {code}
> 2016-05-11 01:38:21,187||INFO|3969|140086696048384|MainThread|2016-05-11 
> 01:38:21,186 INFO  [RubyThread-7: 
> /.../current/hbase-client/bin/thread-pool.rb:28-EventThread] 
> zookeeper.ClientCnxn: EventThread shut down
> 2016-05-11 01:38:21,299||INFO|3969|140086696048384|MainThread|RuntimeError: 
> Region stuck on hbase-5-2.osl,16020,1462930100540,, 
> newserver=hbase-5-2.osl,16020,1462930100540
> {code}
> There was only one server involved.
> Since the region name was not printed, debugging is hard to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15815) Region mover script sometimes reports stuck region where only one server was involved

2016-05-11 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15815:
--

 Summary: Region mover script sometimes reports stuck region where 
only one server was involved
 Key: HBASE-15815
 URL: https://issues.apache.org/jira/browse/HBASE-15815
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.2
Reporter: Ted Yu
Priority: Minor


Sometimes we saw the following in output from region mover script:
{code}
2016-05-11 01:38:21,187||INFO|3969|140086696048384|MainThread|2016-05-11 
01:38:21,186 INFO  [RubyThread-7: 
/.../current/hbase-client/bin/thread-pool.rb:28-EventThread] 
zookeeper.ClientCnxn: EventThread shut down
2016-05-11 01:38:21,299||INFO|3969|140086696048384|MainThread|RuntimeError: 
Region stuck on hbase-5-2.osl,16020,1462930100540,, 
newserver=hbase-5-2.osl,16020,1462930100540
{code}
There was only one server involved.
Since the region name was not printed, debugging is hard to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15812) HttpServer fails to come up against hadoop trunk

2016-05-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280762#comment-15280762
 ] 

Ted Yu commented on HBASE-15812:


The master branch uses Hadoop 2.7.1, where the user would encounter ERROR 500 
when accessing the '/metrics' endpoint.

The '/jmx' endpoint shows metrics v2.

I think we can drop the '/metrics' endpoint, which is already broken in the 
master branch.

> HttpServer fails to come up against hadoop trunk
> 
>
> Key: HBASE-15812
> URL: https://issues.apache.org/jira/browse/HBASE-15812
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Sangjin Lee
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 15812.v1.txt, 15812.v1.txt
>
>
> If you run HBase HttpServer against the hadoop trunk, it fails.
> {noformat}
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/metrics/MetricsServlet
> at 
> org.apache.hadoop.hbase.http.HttpServer.addDefaultServlets(HttpServer.java:677)
> at 
> org.apache.hadoop.hbase.http.HttpServer.initializeWebServer(HttpServer.java:546)
> at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:500)
> at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:104)
> at 
> org.apache.hadoop.hbase.http.HttpServer$Builder.build(HttpServer.java:345)
> at org.apache.hadoop.hbase.http.InfoServer.(InfoServer.java:77)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:550)
> at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:333)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
> at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
> ... 26 more
> {noformat}
> The hadoop trunk removed {{MetricsServlet}} (HADOOP-12504).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13532) Make UnknownScannerException logging less scary

2016-05-11 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280751#comment-15280751
 ] 

Appy commented on HBASE-13532:
--

Just a log message change; no testing needed.
Ready for review/commit.


> Make UnknownScannerException logging less scary
> ---
>
> Key: HBASE-13532
> URL: https://issues.apache.org/jira/browse/HBASE-13532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Trivial
> Attachments: HBASE-13532-master.patch, HBASE-13532.patch
>
>
> A customer reported seeing client-side UnknownScannerExceptions after an 
> HBase upgrade/restart. Restarting a RS will expire leases on the server side. 
> So, given that there was no actual problem and everything was working as it 
> should, reworking this exception for more appropriate logging.
> {code}
> org.apache.hadoop.hbase.UnknownScannerException: 
> org.apache.hadoop.hbase.UnknownScannerException: Name: 10092964, already 
> closed? 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3043)
>  
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
>  
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012) 
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98) 
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>  
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>  
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>  
> at java.lang.Thread.run(Thread.java:724) 
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>  
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>  
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>  
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:284)
>  
> at 
> org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:287)
>  
> at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:153) 
> at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:57) 
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
>  
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
>  
> at org.apache.hadoop.hbase.client.ClientScanner.close(ClientScanner.java:431) 
> at 
> com.squareup.moco.persistence.TransactionTable.scan(TransactionTable.java:207)
>  
> at 
> com.squareup.moco.persistence.TransactionTable$$EnhancerByGuice$$a12c1766.CGLIB$scan$9()
>  
> at 
> com.squareup.moco.persistence.TransactionTable$$EnhancerByGuice$$a12c1766$$FastClassByGuice$$606c8773.invoke()
>  
> at 
> com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
>  
> at 
> com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:75)
>  
> at 
> com.squareup.common.metrics.TimedHistogramInterceptor.invoke(TimedHistogramInterceptor.java:29)
>  
> at 
> com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:75)
>  
> at 
> com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
>  
> at 
> com.squareup.moco.persistence.TransactionTable$$EnhancerByGuice$$a12c1766.scan()
>  
> at 
> com.squareup.moco.persistence.TransactionTable$1.run(TransactionTable.java:180)
>  
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:166) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  
> at java.lang.Thread.run(Thread.java:724) 
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException):
>  org.apache.hadoop.hbase.UnknownScannerException: Name: 10092964, already 
> closed? 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3043)
>  
> at 
> 

[jira] [Commented] (HBASE-15812) HttpServer fails to come up against hadoop trunk

2016-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280737#comment-15280737
 ] 

Hudson commented on HBASE-15812:


FAILURE: Integrated in HBase-Trunk_matrix #914 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/914/])
HBASE-15812 HttpServer fails to come up against hadoop trunk (tedyu: rev 
f3fee82ac42228d3c87353508fbd06d28e6414be)
* hbase-server/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java
HBASE-15812 Revert pending investigation on whether metrics dump can be (tedyu: 
rev c867858c4437e8159235bd64a53a900a152bb41a)
* hbase-server/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java


> HttpServer fails to come up against hadoop trunk
> 
>
> Key: HBASE-15812
> URL: https://issues.apache.org/jira/browse/HBASE-15812
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Sangjin Lee
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 15812.v1.txt, 15812.v1.txt
>
>
> If you run HBase HttpServer against the hadoop trunk, it fails.
> {noformat}
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/metrics/MetricsServlet
> at 
> org.apache.hadoop.hbase.http.HttpServer.addDefaultServlets(HttpServer.java:677)
> at 
> org.apache.hadoop.hbase.http.HttpServer.initializeWebServer(HttpServer.java:546)
> at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:500)
> at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:104)
> at 
> org.apache.hadoop.hbase.http.HttpServer$Builder.build(HttpServer.java:345)
> at org.apache.hadoop.hbase.http.InfoServer.(InfoServer.java:77)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:550)
> at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:333)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
> at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
> ... 26 more
> {noformat}
> The hadoop trunk removed {{MetricsServlet}} (HADOOP-12504).
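A common way to tolerate a class that a newer dependency has removed is to probe for it reflectively before wiring anything up. This is only a sketch of that pattern, not the actual HBASE-15812 patch; every name other than MetricsServlet is invented for illustration:

```java
// Sketch: probe for a class that a newer Hadoop may have removed before
// registering anything that depends on it (names other than MetricsServlet
// are invented for illustration).
public class OptionalServletCheck {

    // Returns true if the named class can be loaded in this JVM.
    static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (isPresent("org.apache.hadoop.metrics.MetricsServlet")) {
            System.out.println("registering /metrics servlet");
        } else {
            // Skip registration instead of dying with NoClassDefFoundError.
            System.out.println("MetricsServlet absent; skipping /metrics");
        }
    }
}
```

Skipping the registration would let the info server come up on newer Hadoop, at the cost of not serving /metrics there.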



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15454) Archive store files older than max age

2016-05-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280685#comment-15280685
 ] 

Ted Yu commented on HBASE-15454:


I am looking forward to cluster testing results showing the effectiveness of the 
proposed change.

> Archive store files older than max age
> --
>
> Key: HBASE-15454
> URL: https://issues.apache.org/jira/browse/HBASE-15454
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.18, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15454-v1.patch, HBASE-15454-v2.patch, 
> HBASE-15454-v3.patch, HBASE-15454-v4.patch, HBASE-15454-v5.patch, 
> HBASE-15454-v6.patch, HBASE-15454-v7.patch, HBASE-15454.patch
>
>
> In date tiered compaction, store files older than the max age are never 
> touched by minor compactions. Here we introduce a 'freeze window' operation, 
> which does the following:
> 1. Find all store files that contain cells whose timestamps fall in the 
> given window.
> 2. Compact all these files and output one file for each window that these 
> files cover.
> After the compaction, there is only one file for the given window, and all 
> cells whose timestamps fall in that window are in that file. If you do not 
> write new cells with an older timestamp into this window, the file will 
> never change again. This makes it easier to apply erasure coding to the 
> frozen file to reduce redundancy, and it makes it possible to check 
> consistency between the master and a peer cluster incrementally.
> Why the word 'freeze'? Because there is already an 'HFileArchiver' class, 
> and I want a different word to avoid confusion.
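Step 1 above (finding the store files that overlap a window) can be sketched as follows. All of the types and names here are hypothetical illustrations, not the actual compaction code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of step 1 of the 'freeze window' operation: select every store file
// whose cell-timestamp range overlaps the given window [startTs, endTs).
// All types and names are hypothetical, not the actual compaction code.
public class FreezeWindowSelector {

    public static final class StoreFile {
        final String name;
        final long minTs;  // smallest cell timestamp in the file
        final long maxTs;  // largest cell timestamp in the file
        public StoreFile(String name, long minTs, long maxTs) {
            this.name = name;
            this.minTs = minTs;
            this.maxTs = maxTs;
        }
    }

    // A file participates if any of its cells could fall inside the window.
    public static List<StoreFile> selectForWindow(List<StoreFile> files,
                                                  long startTs, long endTs) {
        List<StoreFile> selected = new ArrayList<>();
        for (StoreFile f : files) {
            if (f.maxTs >= startTs && f.minTs < endTs) {
                selected.add(f);
            }
        }
        // Step 2 (not shown) would compact the selected files, writing one
        // output file per window they cover, leaving exactly one file for
        // the frozen window.
        return selected;
    }

    public static void main(String[] args) {
        List<StoreFile> files = Arrays.asList(
            new StoreFile("a", 0, 50),
            new StoreFile("b", 40, 120),
            new StoreFile("c", 200, 300));
        System.out.println(selectForWindow(files, 0, 100).size());  // 2
    }
}
```

Because file "b" straddles the window boundary, it must be rewritten too, which is part of the write-amplification concern raised later in this thread.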





[jira] [Commented] (HBASE-15454) Archive store files older than max age

2016-05-11 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280671#comment-15280671
 ] 

Mikhail Antonov commented on HBASE-15454:
-

From my standpoint, if we release a completely new feature and people start 
using it and find some edge case, that's fine and expected :) we'll fix it and 
release in 1.3.1 or 1.4.0. As long as the complexity doesn't spread around too 
much and we keep everything possible private and mark it as 
experimental/unstable interface-wise, I'm fine with that. If I have to choose 
between a new feature with known limitations, and a new feature where those 
limitations are addressed (in a way we may find non-ideal and fix later), I'd 
go for the latter.

I haven't yet looked at this part of the code, so I don't have an informed 
opinion. I'll try to look at it in the next few days.

[~tedyu] [~enis] do you guys have any opinion here?






[jira] [Commented] (HBASE-15454) Archive store files older than max age

2016-05-11 Thread Clara Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280596#comment-15280596
 ] 

Clara Xiong commented on HBASE-15454:
-

I am waiting for the test/simulation results to show it is an effective 
strategy. [~Apache9] do you have more updates?






[jira] [Commented] (HBASE-14879) maven archetype: mapreduce application

2016-05-11 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280585#comment-15280585
 ] 

Jonathan Hsieh commented on HBASE-14879:


I encountered the port conflict issue, I don't think that is a huge deal.  If 
you ^C the 'mvn install' of the archetype instance,  the process doesn't die 
right away and causes the port conflict on a subsequent run.  I just 'jps'ed 
and then killed the offending surefire instance.

I tried adding 'clean' to the hbase mvn build and the hbase+mr job mvn  build 
and still got to the same place.

The code and the logic in the test util to create tables look fine to me, so I 
agree there might be an issue at the spot you pointed to.  I also think it may 
be something environmental (the heap settings or something else related to the 
jvm?) -- this could explain why the hbase project mr tests work under its pom 
while the archetype instance ones are failing or flaky (different env 
settings).

I'd rather not commit this specific patch until I can get its artifact to work 
or if someone else can get it to work.

Would you mind digging a little further down the environmental path?



> maven archetype: mapreduce application
> --
>
> Key: HBASE-14879
> URL: https://issues.apache.org/jira/browse/HBASE-14879
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Usability
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: beginner
> Attachments: HBASE-14879-v1.patch, HBASE-14879-v2.patch, 
> archetype_mr_prototype.zip
>
>






[jira] [Commented] (HBASE-15333) Enhance the filter to handle short, integer, long, float and double

2016-05-11 Thread Zhan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280574#comment-15280574
 ] 

Zhan Zhang commented on HBASE-15333:


[~jmhsieh] Would you like to take a final look? Thanks.

> Enhance the filter to handle short, integer, long, float and double
> ---
>
> Key: HBASE-15333
> URL: https://issues.apache.org/jira/browse/HBASE-15333
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-15333-1.patch, HBASE-15333-2.patch, 
> HBASE-15333-3.patch, HBASE-15333-4.patch, HBASE-15333-5.patch, 
> HBASE-15333-6.patch, HBASE-15333-7.patch, HBASE-15333-8.patch, 
> HBASE-15333-9.patch
>
>
> Currently, the range filter is based on byte order. But for Java primitive 
> types such as short, int, long, float and double, the numeric order is not 
> consistent with the byte order, so extra manipulation is needed to handle 
> them correctly.
> For example, given integers in the range (-100, 100) and a filter of <= 1, 
> the current filter returns only 0 and 1, while the correct result should be 
> (-100, 1].
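The mismatch is easy to demonstrate: in two's complement a negative int has its sign bit set, so its big-endian bytes compare greater than those of a small positive int under unsigned lexicographic order. A minimal sketch, with helper names invented for illustration:

```java
import java.nio.ByteBuffer;

// Demonstrates why raw big-endian int bytes do not sort like signed ints,
// and how flipping the sign bit restores numeric order. Helper names are
// invented for illustration.
public class SignedByteOrder {

    // Big-endian encoding of an int.
    static byte[] toBytes(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    // Lexicographic comparison treating each byte as unsigned, the order a
    // byte-based range filter effectively uses.
    static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < a.length && i < b.length; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) {
                return d;
            }
        }
        return a.length - b.length;
    }

    // Flipping the sign bit maps ints onto an unsigned range in the same
    // order, so byte comparison then matches numeric comparison.
    static byte[] orderPreserving(int v) {
        return toBytes(v ^ Integer.MIN_VALUE);
    }

    public static void main(String[] args) {
        // Raw bytes: -100 sorts AFTER 1, so a byte-order "<= 1" filter
        // wrongly excludes every negative value.
        System.out.println(compareUnsigned(toBytes(-100), toBytes(1)) > 0);
        // Sign-bit-flipped bytes: -100 correctly sorts before 1.
        System.out.println(
            compareUnsigned(orderPreserving(-100), orderPreserving(1)) < 0);
    }
}
```

Flipping the sign bit before encoding is one standard fix; an order-preserving serialization layer takes a similar approach.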





[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280514#comment-15280514
 ] 

Hadoop QA commented on HBASE-15813:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 39 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} master passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 36s {color} 
| {color:red} hbase-server-jdk1.7.0_95 with JDK v1.7.0_95 generated 2 new + 4 
unchanged - 2 fixed = 6 total (was 6) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 42s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 104m 58s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 159m 5s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRegionServerMetrics |
| Timed out junit tests | 
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster |
|   | org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-05-11 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803363/HBASE-15813.patch |
| JIRA Issue | HBASE-15813 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux ea8f3b14162d 3.13.0-36-lowlatency 

[jira] [Commented] (HBASE-13683) Doc HBase and G1GC

2016-05-11 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280476#comment-15280476
 ] 

Dave Latham commented on HBASE-13683:
-

Hubspot's post: http://product.hubspot.com/blog/g1gc-tuning-your-hbase-cluster

> Doc HBase and G1GC
> --
>
> Key: HBASE-13683
> URL: https://issues.apache.org/jira/browse/HBASE-13683
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: stack
>
> Add section to refguide on running G1GC with HBase. There is the intel talk 
> at hbasecon with nice pictures and healthy recommendations, there is our 
> [~esteban]'s recent experience running G1GC, and the mighty [~bbeaudreault] 
> dumped a bunch of helpful advice in the mailing list just now: 
> http://search-hadoop.com/m/YGbbupEDoKTrDo/%2522How+to+know+the+root+reason+to+cause+RegionServer+OOM%253F%2522=Re+How+to+know+the+root+reason+to+cause+RegionServer+OOM+





[jira] [Commented] (HBASE-15742) Reduce allocation of objects in metrics

2016-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280471#comment-15280471
 ] 

Hudson commented on HBASE-15742:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1216 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1216/])
HBASE-15742 Addendum fixes broken test in 0.98 branch (Phil Yang) (tedyu: rev 
a2a6b95a059930f6386f2ffa7ce87dfe3b621a4a)
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java


> Reduce allocation of objects in metrics
> ---
>
> Key: HBASE-15742
> URL: https://issues.apache.org/jira/browse/HBASE-15742
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1, 1.0.3, 1.1.4, 0.98.19
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20
>
> Attachments: 15742.0.98.patch, HBASE-15742-0.98.v1.patch, 
> HBASE-15742-0.98.v2.patch, HBASE-15742-branch-1-v1.patch, 
> HBASE-15742-branch-1.1-v1.patch, HBASE-15742-branch-1.2-v1.patch, 
> HBASE-15742-branch-1.2-v2.patch, HBASE-15742-v1.patch, HBASE-15742-v2.patch, 
> HBASE-15742-v3.patch, HBASE-15742-v4.patch
>
>
> We use JMX and o.a.h.metrics2 to collect metrics on regions, tables, region 
> servers and the cluster. We use MetricsInfo to describe each metric, and we 
> use Interns to cache MetricsInfo objects because they never change.
> However, Interns has static limits on the number of cached objects. We can 
> only cache 2010 metrics, but we have dozens of metrics per region, plus 
> RS-level metrics on each RS, and all metrics for all regions are kept on the 
> master. So each server has thousands of metrics, and most of them cannot be 
> cached. When we collect metrics over JMX, we therefore create many avoidable 
> objects. This increases GC pressure, and because JMX has its own caching 
> logic the objects cannot be freed immediately, which increases the pressure 
> further.
> Interns lives in the Hadoop project, and I think its implementation is not 
> suitable for HBase: we cannot know in advance how many MetricsInfo objects 
> we will have, since that depends on the number of regions, and we cannot 
> make the cache unbounded because we must drop the objects for regions that 
> are split, moved, or dropped. I think we can use Guava's cache with 
> expireAfterAccess, which is simple and convenient.
> So we can add a new Interns class in the HBase project first, and push it 
> upstream later.
> Moreover, MutableHistogram#snapshot creates the same Strings every time; we 
> can create them just once.
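The issue proposes Guava's cache with expireAfterAccess; the same idea can be sketched with JDK-only types. All names below are hypothetical, and a real Guava cache evicts lazily on access rather than via an explicit sweep like this one:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// JDK-only sketch of an expire-after-access interner for metrics metadata,
// so entries for split/moved/dropped regions stop pinning memory. All names
// are hypothetical; the issue itself proposes Guava's cache, which evicts
// lazily rather than via an explicit sweep like this one.
public class ExpiringInterner<K, V> {

    private static final class Entry<V> {
        final V value;
        volatile long lastAccess;
        Entry(V value, long now) {
            this.value = value;
            this.lastAccess = now;
        }
    }

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ExpiringInterner(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Return the cached value for key, storing candidate if absent, and
    // refresh the entry's last-access time.
    public V intern(K key, V candidate) {
        long now = System.currentTimeMillis();
        Entry<V> e = cache.computeIfAbsent(key, k -> new Entry<>(candidate, now));
        e.lastAccess = now;
        return e.value;
    }

    // Drop entries not accessed within ttlMillis of 'now' ('now' is passed
    // in to keep the sweep deterministic and testable).
    public void evictStale(long now) {
        long cutoff = now - ttlMillis;
        cache.values().removeIf(e -> e.lastAccess < cutoff);
    }

    public int size() {
        return cache.size();
    }
}
```

Unlike Hadoop's fixed-size Interns, the bound here is time-based, so the cache tracks however many regions the server currently hosts.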





[jira] [Commented] (HBASE-15812) HttpServer fails to come up against hadoop trunk

2016-05-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280461#comment-15280461
 ] 

Ted Yu commented on HBASE-15812:


hadoop version is 2.7.1






[jira] [Commented] (HBASE-15812) HttpServer fails to come up against hadoop trunk

2016-05-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280441#comment-15280441
 ] 

stack commented on HBASE-15812:
---

Works fine for me on the tip of branch-1. Your hadoop version?






[jira] [Commented] (HBASE-15742) Reduce allocation of objects in metrics

2016-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280439#comment-15280439
 ] 

Hudson commented on HBASE-15742:


FAILURE: Integrated in HBase-0.98-matrix #344 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/344/])
HBASE-15742 Addendum fixes broken test in 0.98 branch (Phil Yang) (tedyu: rev 
a2a6b95a059930f6386f2ffa7ce87dfe3b621a4a)
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java







[jira] [Commented] (HBASE-15812) HttpServer fails to come up against hadoop trunk

2016-05-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280431#comment-15280431
 ] 

Ted Yu commented on HBASE-15812:


This is what I get accessing '/metrics' on the master of a 1.1.2 cluster:
{code}
HTTP ERROR 500

Problem accessing /metrics. Reason:

INTERNAL_SERVER_ERROR
Caused by:

java.lang.NullPointerException
at 
org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029)
at 
org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1350)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}





[jira] [Commented] (HBASE-15454) Archive store files older than max age

2016-05-11 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280418#comment-15280418
 ] 

Dave Latham commented on HBASE-15454:
-

Sorry for the late responses - a minor update on reviewboard as well.

My feelings about the above haven't changed:
{quote}
I don't have good intuition for how such an archiving mechanism would affect 
write amplification in practice, how it performs under edge cases (e.g. once 
in a while another "old" cell shows up), or whether it's likely to output 
several small HFiles when it runs. Do you have any analysis, simulation, or 
arguments about how this will behave and perform? It seems that using this 
makes stronger assumptions about the use case and write behavior.
If going in this direction, I wonder if it's better to go all the way, from 
having every minor compaction output perfectly partitioned HFiles
{quote}

I'm nervous about the extra complexity here and whether it's going to be used.  
I wonder if it is better off being done differently or as an extended policy.  
I'm not an HBase committer so am fine going along with the flow.  If I were, I 
guess I would be -0: makes me nervous but I wouldn't try to stop it going in if 
other people think it's a good idea.

> Archive store files older than max age
> --
>
> Key: HBASE-15454
> URL: https://issues.apache.org/jira/browse/HBASE-15454
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.18, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15454-v1.patch, HBASE-15454-v2.patch, 
> HBASE-15454-v3.patch, HBASE-15454-v4.patch, HBASE-15454-v5.patch, 
> HBASE-15454-v6.patch, HBASE-15454-v7.patch, HBASE-15454.patch
>
>
> In date tiered compaction, the store files older than max age are never 
> touched by minor compactions. Here we introduce a 'freeze window' operation, 
> which does the follow things:
> 1. Find all store files that contain cells whose timestamps are in the given 
> window.
> 2. Compact all these files and output one file for each window that these 
> files covered.
> After the compaction, we will have only one file in the given window, and all 
> cells whose timestamps are in the given window are in that one file. And if 
> you do not write new cells with an older timestamp in this window, the file 
> will never be changed. This makes it easier to do erasure coding on the 
> frozen file to reduce redundancy. And it also makes it possible to check 
> consistency between master and peer cluster incrementally.
> And why use the word 'freeze'?
> Because there is already an 'HFileArchiver' class. I want to use a different 
> word to avoid confusion.
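As a minimal sketch of step 1 above (not HBase code; the {{StoreFile}} summary class and file names are made up for illustration), finding the files to freeze amounts to an overlap test between each file's cell-timestamp range and the freeze window:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FreezeWindowDemo {
    // Hypothetical store-file summary: only the min/max cell timestamps matter here.
    static class StoreFile {
        final String name;
        final long minTs, maxTs;
        StoreFile(String name, long minTs, long maxTs) {
            this.name = name; this.minTs = minTs; this.maxTs = maxTs;
        }
        @Override public String toString() { return name; }
    }

    // Step 1 of the description: find all store files that contain cells
    // whose timestamps fall inside the given window [start, end).
    static List<StoreFile> filesInWindow(List<StoreFile> files, long start, long end) {
        List<StoreFile> hits = new ArrayList<>();
        for (StoreFile f : files) {
            if (f.minTs < end && f.maxTs >= start) { // timestamp ranges overlap
                hits.add(f);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<StoreFile> files = Arrays.asList(
            new StoreFile("a", 0, 99),
            new StoreFile("b", 50, 149),
            new StoreFile("c", 200, 299));
        // Freezing the window [100, 200) selects only "b" for compaction.
        System.out.println(filesInWindow(files, 100, 200)); // [b]
    }
}
```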



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280414#comment-15280414
 ] 

Sean Busbey commented on HBASE-15813:
-

What advantage will YA FooUtility have over using the functions in the 
WALProvider that's the basis for other FileSystem based WALProviders?

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280409#comment-15280409
 ] 

stack commented on HBASE-15813:
---

We go from DefaultWALProvider to AbstractFSWALProvider? Pity we can't go to 
WALProvider. We need to be able to get to the utility methods like 
getServerNameFromWALDirectoryName. Should we move these to a utility class 
here so we can go to WALProvider, or do you want to do that in a follow-on?

Is that a bug fix in FanOutOneBlockAsyncDFSOutputHelper.java?



> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15804) Some links in documentation are 404

2016-05-11 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280393#comment-15280393
 ] 

Misty Stanley-Jones commented on HBASE-15804:
-

I tweaked the options to the Jenkins job because it was set to ignore anchors 
before (probably due to some false positives). Let's see what shakes out. In 
the meantime, [~chenheng], would you like to try fixing the broken link you 
found?

> Some links in documentation are 404
> ---
>
> Key: HBASE-15804
> URL: https://issues.apache.org/jira/browse/HBASE-15804
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Heng Chen
>
> http://hbase.apache.org/book.html#security
> The link to {{Understanding User Authentication and Authorization in Apache 
> HBase}} returns 404



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15812) HttpServer fails to come up against hadoop trunk

2016-05-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280392#comment-15280392
 ] 

stack commented on HBASE-15812:
---

The metrics dump is useful. Any investigation on how hard to redo a 
MetricsServlet on metrics2?

> HttpServer fails to come up against hadoop trunk
> 
>
> Key: HBASE-15812
> URL: https://issues.apache.org/jira/browse/HBASE-15812
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Sangjin Lee
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 15812.v1.txt, 15812.v1.txt
>
>
> If you run HBase HttpServer against the hadoop trunk, it fails.
> {noformat}
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/metrics/MetricsServlet
> at 
> org.apache.hadoop.hbase.http.HttpServer.addDefaultServlets(HttpServer.java:677)
> at 
> org.apache.hadoop.hbase.http.HttpServer.initializeWebServer(HttpServer.java:546)
> at org.apache.hadoop.hbase.http.HttpServer.<init>(HttpServer.java:500)
> at org.apache.hadoop.hbase.http.HttpServer.<init>(HttpServer.java:104)
> at 
> org.apache.hadoop.hbase.http.HttpServer$Builder.build(HttpServer.java:345)
> at org.apache.hadoop.hbase.http.InfoServer.<init>(InfoServer.java:77)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:550)
> at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:333)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
> at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
> ... 26 more
> {noformat}
> The hadoop trunk removed {{MetricsServlet}} (HADOOP-12504).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15804) Some links in documentation are 404

2016-05-11 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280379#comment-15280379
 ] 

Misty Stanley-Jones commented on HBASE-15804:
-

http://blog.cloudera.com/blog/2012/09/understanding-user-authentication-and-authorization-in-apache-hbase/

> Some links in documentation are 404
> ---
>
> Key: HBASE-15804
> URL: https://issues.apache.org/jira/browse/HBASE-15804
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Heng Chen
>
> http://hbase.apache.org/book.html#security
> The link to {{Understanding User Authentication and Authorization in Apache 
> HBase}} returns 404



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15804) Some links in documentation are 404

2016-05-11 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280376#comment-15280376
 ] 

Misty Stanley-Jones commented on HBASE-15804:
-

Oh, I see, the link to Matteo's blog post is now 404. Not sure why it didn't 
find it, but I think the solution is to replace www with blog, as in 
blog.cloudera.com.

> Some links in documentation are 404
> ---
>
> Key: HBASE-15804
> URL: https://issues.apache.org/jira/browse/HBASE-15804
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Heng Chen
>
> http://hbase.apache.org/book.html#security
> The link to {{Understanding User Authentication and Authorization in Apache 
> HBase}} returns 404



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15804) Some links in documentation are 404

2016-05-11 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280369#comment-15280369
 ] 

Misty Stanley-Jones commented on HBASE-15804:
-

It's not 404. I am not sure what happened before, it must have been a temporary 
problem on apache.org. I just checked it and it works for me.

> Some links in documentation are 404
> ---
>
> Key: HBASE-15804
> URL: https://issues.apache.org/jira/browse/HBASE-15804
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Heng Chen
>
> http://hbase.apache.org/book.html#security
> The link to {{Understanding User Authentication and Authorization in Apache 
> HBase}} returns 404



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15785) Unnecessary lock in ByteBufferArray

2016-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280343#comment-15280343
 ] 

Hudson commented on HBASE-15785:


FAILURE: Integrated in HBase-Trunk_matrix #913 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/913/])
HBASE-15785 Unnecessary lock in ByteBufferArray. (anoopsamjohn: rev 
c9ebcd4e296a31e0da43f513db3f5a8c3929c191)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferArray.java


> Unnecessary lock in ByteBufferArray
> ---
>
> Key: HBASE-15785
> URL: https://issues.apache.org/jira/browse/HBASE-15785
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15785.patch, HBASE-15785_V2.patch, 
> HBASE-15785_V3.patch
>
>
> {code}
>  Lock lock = locks[i];
>   lock.lock();
>   try {
> ByteBuffer bb = buffers[i];
> if (i == startBuffer) {
>   cnt = bufferSize - startBufferOffset;
>   if (cnt > len) cnt = len;
>   ByteBuffer dup = bb.duplicate();
>   dup.limit(startBufferOffset + cnt).position(startBufferOffset);
>   mbb[j] = dup.slice();
> {code}
> In asSubByteBuff, we work on the duplicate BB and set limit and position on 
> that.. The locking is not needed here.
> The locking is added because we set limit and position on the BBs in the 
> array.   We can duplicate the BBs and do positioning and limit on them.  The 
> locking can be fully avoided.
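A minimal Java sketch of the duplicate-then-slice pattern described above (the `subBuffer` helper is illustrative, not the actual HBase method): a duplicate shares the underlying bytes but has its own position and limit, so each reader can set markers without a lock.

```java
import java.nio.ByteBuffer;

public class DupDemo {
    // Slice bytes [off, off + len) out of the shared buffer without touching
    // the shared buffer's own position/limit, mirroring the pattern quoted
    // above from asSubByteBuff.
    static ByteBuffer subBuffer(ByteBuffer shared, int off, int len) {
        ByteBuffer dup = shared.duplicate(); // same bytes, independent markers
        dup.limit(off + len).position(off);
        return dup.slice();
    }

    public static void main(String[] args) {
        ByteBuffer shared = ByteBuffer.allocate(16);
        ByteBuffer sub = subBuffer(shared, 4, 4);
        // The duplicate's markers are private to this caller, so concurrent
        // readers need no lock just to carve out sub-ranges.
        System.out.println(sub.remaining());   // 4
        System.out.println(shared.position()); // 0, untouched
    }
}
```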



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15590) Add ACL for requesting table backup

2016-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15590:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks Anoop for the review.

> Add ACL for requesting table backup
> ---
>
> Key: HBASE-15590
> URL: https://issues.apache.org/jira/browse/HBASE-15590
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15590.v1.patch, 15590.v2.txt, 15590.v3.txt, 15590.v4.txt
>
>
> This issue adds necessary coprocessor hooks for table backup request along 
> with enforcing permission check in AccessController through the new hooks.
> To perform backup, admin privilege is required in secure deployment. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15812) HttpServer fails to come up against hadoop trunk

2016-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15812:
---
 Assignee: Ted Yu
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0

> HttpServer fails to come up against hadoop trunk
> 
>
> Key: HBASE-15812
> URL: https://issues.apache.org/jira/browse/HBASE-15812
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Sangjin Lee
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 15812.v1.txt, 15812.v1.txt
>
>
> If you run HBase HttpServer against the hadoop trunk, it fails.
> {noformat}
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/metrics/MetricsServlet
> at 
> org.apache.hadoop.hbase.http.HttpServer.addDefaultServlets(HttpServer.java:677)
> at 
> org.apache.hadoop.hbase.http.HttpServer.initializeWebServer(HttpServer.java:546)
> at org.apache.hadoop.hbase.http.HttpServer.<init>(HttpServer.java:500)
> at org.apache.hadoop.hbase.http.HttpServer.<init>(HttpServer.java:104)
> at 
> org.apache.hadoop.hbase.http.HttpServer$Builder.build(HttpServer.java:345)
> at org.apache.hadoop.hbase.http.InfoServer.<init>(InfoServer.java:77)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:550)
> at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:333)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
> at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
> ... 26 more
> {noformat}
> The hadoop trunk removed {{MetricsServlet}} (HADOOP-12504).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280264#comment-15280264
 ] 

Sean Busbey commented on HBASE-15813:
-

Yeah, that's a bug. I started a build with the in-progress version. Presuming 
it repeats, could you report it?

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL our default WAL 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14920) Compacting Memstore

2016-05-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280253#comment-15280253
 ] 

Anoop Sam John commented on HBASE-14920:


Yes, I too want it to be in asap. We need this feature very much for the 
off-heap write path and memstore. Fine, we can do more as part of the remaining 
jiras.

> Compacting Memstore
> ---
>
> Key: HBASE-14920
> URL: https://issues.apache.org/jira/browse/HBASE-14920
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-14920-V01.patch, HBASE-14920-V02.patch, 
> HBASE-14920-V03.patch, HBASE-14920-V04.patch, HBASE-14920-V05.patch, 
> HBASE-14920-V06.patch, HBASE-14920-V07.patch, HBASE-14920-V08.patch, 
> move.to.junit4.patch
>
>
> Implementation of a new compacting memstore with non-optimized immutable 
> segment representation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15590) Add ACL for requesting table backup

2016-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15590:
---
Attachment: 15590.v4.txt

Patch v4 addresses Anoop's comment on post hook.

> Add ACL for requesting table backup
> ---
>
> Key: HBASE-15590
> URL: https://issues.apache.org/jira/browse/HBASE-15590
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15590.v1.patch, 15590.v2.txt, 15590.v3.txt, 15590.v4.txt
>
>
> This issue adds necessary coprocessor hooks for table backup request along 
> with enforcing permission check in AccessController through the new hooks.
> To perform backup, admin privilege is required in secure deployment. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15784) Misuse core/maxPoolSize of LinkedBlockingQueue in ThreadPoolExecutor

2016-05-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HBASE-15784:
--
Summary: Misuse core/maxPoolSize of LinkedBlockingQueue in 
ThreadPoolExecutor  (was: MIsuse core/maxPoolSize of LinkedBlockingQueue in 
ThreadPoolExecutor)

> Misuse core/maxPoolSize of LinkedBlockingQueue in ThreadPoolExecutor
> 
>
> Key: HBASE-15784
> URL: https://issues.apache.org/jira/browse/HBASE-15784
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication, Thrift
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-15784-v2.patch, HBASE-15784.patch
>
>
> LinkedBlockingQueue is usually used in ThreadPoolExecutor. It allows the 
> thread pool not to be blocked if the number of running threads in the pool is 
> less than the max pool size and the queue is not full.
> But when the core pool size of ThreadPoolExecutor is different from the max 
> pool size, things don't go as expected. Once the number of running threads 
> reaches the core size, further execution requests are added to the 
> LinkedBlockingQueue, and queued requests can only start executing when the 
> LinkedBlockingQueue is full or some of the running threads finish.
> Thus it is better to use the same value for the core and max pool size when 
> the LinkedBlockingQueue is used.
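A minimal, self-contained Java sketch of the behavior described (class and method names are illustrative, not HBase code): with an unbounded LinkedBlockingQueue the pool never grows past the core size, no matter what the max size is, because the queue is never "full" and so never forces new threads.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Returns {threads actually created, tasks left waiting in the queue}
    // after submitting `tasks` blocking tasks to a pool with the given
    // core/max sizes and an unbounded LinkedBlockingQueue.
    static int[] measure(int core, int max, int tasks) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            core, max, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        Thread.sleep(200); // let the pool settle
        int[] result = { pool.getPoolSize(), pool.getQueue().size() };
        release.countDown();
        pool.shutdownNow();
        return result;
    }

    public static void main(String[] args) throws Exception {
        // Core 2, max 4, 10 blocking tasks: only 2 threads are ever created,
        // the other 8 tasks sit in the queue instead of spawning threads 3-4.
        int[] r = measure(2, 4, 10);
        System.out.println(r[0] + " threads, " + r[1] + " queued"); // 2 threads, 8 queued
    }
}
```

Setting core and max to the same value, as the description suggests, removes this surprise.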



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14920) Compacting Memstore

2016-05-11 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280216#comment-15280216
 ] 

Edward Bortnikov commented on HBASE-14920:
--

[anoop.hbase], you have a point. The whole space of when/how much to flush, in 
memory/to disk, is yet to be explored and optimized. It seems though that some 
aspects of it are being addressed in the HBASE-14921 discussion. For example, 
we already accepted your suggestion to decide selectively, upon flush, whether 
to compact multiple in-memory segments or just flatten the newest segment. 
Other optimizations can be addressed as part of the same conversation. 

Unless there is some critical flaw with the baseline functionality in this 
jira, I'd suggest finalizing it, and deferring the performance discussions to 
the next jira. It's already a lot of code. A commit would expedite the ensuing 
work a lot. Not avoiding the discussion, just trying to be realistic. Does 
this make sense? 



> Compacting Memstore
> ---
>
> Key: HBASE-14920
> URL: https://issues.apache.org/jira/browse/HBASE-14920
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-14920-V01.patch, HBASE-14920-V02.patch, 
> HBASE-14920-V03.patch, HBASE-14920-V04.patch, HBASE-14920-V05.patch, 
> HBASE-14920-V06.patch, HBASE-14920-V07.patch, HBASE-14920-V08.patch, 
> move.to.junit4.patch
>
>
> Implementation of a new compacting memstore with non-optimized immutable 
> segment representation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15590) Add ACL for requesting table backup

2016-05-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280078#comment-15280078
 ] 

Anoop Sam John commented on HBASE-15590:


Other than that +1

> Add ACL for requesting table backup
> ---
>
> Key: HBASE-15590
> URL: https://issues.apache.org/jira/browse/HBASE-15590
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15590.v1.patch, 15590.v2.txt, 15590.v3.txt
>
>
> This issue adds necessary coprocessor hooks for table backup request along 
> with enforcing permission check in AccessController through the new hooks.
> To perform backup, admin privilege is required in secure deployment. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15590) Add ACL for requesting table backup

2016-05-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280077#comment-15280077
 ] 

Anoop Sam John commented on HBASE-15590:


So as of now we will require Global Admin permission for backup. Later, if we 
move to table(s) admin permission for backup (less permissive), I hope that 
won't be a BC issue.
On the patch: when the pre hook bypasses the op, return immediately. We should 
not be calling the post hook when the actual op has not happened.

> Add ACL for requesting table backup
> ---
>
> Key: HBASE-15590
> URL: https://issues.apache.org/jira/browse/HBASE-15590
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15590.v1.patch, 15590.v2.txt, 15590.v3.txt
>
>
> This issue adds necessary coprocessor hooks for table backup request along 
> with enforcing permission check in AccessController through the new hooks.
> To perform backup, admin privilege is required in secure deployment. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15722) Size. WAL Files (bytes) in regionserver status page displays negative values

2016-05-11 Thread Samir Ahmic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280043#comment-15280043
 ] 

Samir Ahmic commented on HBASE-15722:
-

Failed test is {{TestRegionServerMetrics#testMobMetrics()}}. I do not think 
the failed test is related to this patch.

> Size. WAL Files (bytes) in regionserver status page displays negative values
> 
>
> Key: HBASE-15722
> URL: https://issues.apache.org/jira/browse/HBASE-15722
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15722_v0.patch, HBASE-15722_v1.patch, 
> HBASE-15722_v2.patch, WALs.png
>
>
> Here is the line from ServerMetricTmpl.jamon
> {code}
> TraditionalBinaryPrefix.long2String(mWrap.getWALFileSize(), "B", 1)
> {code} 
>  I will change this to StringUtils.humanSize()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15722) Size. WAL Files (bytes) in regionserver status page displays negative values

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279987#comment-15279987
 ] 

Hadoop QA commented on HBASE-15722:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 16s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.1 2.5.2 2.6.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 122m 57s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 153m 11s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRegionServerMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803390/HBASE-15722_v2.patch |
| JIRA Issue | HBASE-15722 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
 |
| git revision | master / a11091c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  

[jira] [Commented] (HBASE-15615) Wrong sleep time when RegionServerCallable need retry

2016-05-11 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279977#comment-15279977
 ] 

Guanghao Zhang commented on HBASE-15615:


When HConstants.HBASE_CLIENT_RETRIES_NUMBER is set to 0, AsyncProcess will have 
the same retry behavior as with 1. But if HConnection's numRetries is set to 0, 
it can't locate a region:
{code}
if (tries >= localNumRetries) {
  throw new NoServerForRegionException("Unable to find region for "
  + Bytes.toStringBinary(row) + " in " + tableName +
  " after " + localNumRetries + " tries.");
} 
{code}

bq. in HBaseTestingUtility we use new RetryCounter(numRetries+1, (int)pause, 
TimeUnit.MICROSECONDS); 
I am not sure if this makes other UTs fail.

Maybe we should open a new issue to fix HBASE_CLIENT_RETRIES_NUMBER?

> Wrong sleep time when RegionServerCallable need retry
> -
>
> Key: HBASE-15615
> URL: https://issues.apache.org/jira/browse/HBASE-15615
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0, 0.98.19
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 1.3.0
>
> Attachments: HBASE-15615-branch-0.98.patch, 
> HBASE-15615-branch-1.0-v2.patch, HBASE-15615-branch-1.1-v2.patch, 
> HBASE-15615-branch-1.1-v2.patch, HBASE-15615-branch-1.patch, 
> HBASE-15615-v1.patch, HBASE-15615-v1.patch, HBASE-15615-v2.patch, 
> HBASE-15615-v2.patch, HBASE-15615-v3.patch, HBASE-15615.patch
>
>
> In RpcRetryingCallerImpl, the pause time is computed as expectedSleep = 
> callable.sleep(pause, tries + 1), and in RegionServerCallable the pause 
> time is sleep = ConnectionUtils.getPauseTime(pause, tries + 1). So tries 
> gets bumped up twice, and the pause time is 3 * hbase.client.pause when 
> tries is 0.
> RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200}
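A simplified sketch of the double bump described above (this `getPauseTime` omits the jitter the real ConnectionUtils method applies, and is illustrative only): with tries = 0, each layer adding 1 means the first sleep indexes RETRY_BACKOFF[2] = 3.

```java
public class BackoffDemo {
    static final long[] RETRY_BACKOFF =
        {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200};

    // Simplified version of ConnectionUtils.getPauseTime (no jitter):
    // pause multiplied by the backoff factor for this retry, capped at
    // the last entry of the table.
    static long getPauseTime(long pause, int tries) {
        int index = Math.min(tries, RETRY_BACKOFF.length - 1);
        return pause * RETRY_BACKOFF[index];
    }

    public static void main(String[] args) {
        long pause = 100; // hbase.client.pause, in ms
        int tries = 0;
        // The bug: the caller already passes tries + 1, and the callable
        // bumps it again, so the very first sleep uses RETRY_BACKOFF[2] = 3.
        System.out.println(getPauseTime(pause, (tries + 1) + 1)); // 300
    }
}
```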



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15615) Wrong sleep time when RegionServerCallable need retry

2016-05-11 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279915#comment-15279915
 ] 

Mikhail Antonov commented on HBASE-15615:
-

Thanks, and sorry for the delay here. Yeah, that semantic on branch-1 looks 
better, "num retries is the max number of times server will ever see your 
request", basically.

Good catch in AsyncProcess in master, in branch-1 and branch-1.3 it's set 
correctly.

I've looked at all the places where we call ConnectionUtils.getPauseTime on 
branch-1.3, and found a few places I'd like us to check more.

1) In HTableMultiplexer we set 

this.workerConf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 0);

that doesn't look right to me. TestHTableMultiplexer passes with this patch, 
but that makes me think that the only place where AsyncProcess really uses 
numRetries is in ServerErrorTracker, and we may not need this codepath? Could 
we have a test in TestHCM#testErrorBackoffTimeCalculation to make sure we test 
SET when someone passes in zero timeout / zero max retries?

2) in HBaseTestingUtility we use new RetryCounter(numRetries+1, (int)pause, 
TimeUnit.MICROSECONDS); - nit

Otherwise looks good to me.
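The double back-off bump described in this thread can be sketched as follows. 
The multipliers are taken from the RETRY_BACKOFF table quoted in the issue, and 
getPauseTime here is a simplified stand-in for ConnectionUtils.getPauseTime (the 
jitter the real method applies is omitted) — an illustration, not HBase code:

```java
public class RetrySleepDemo {
    // Multipliers from the issue description (mirrors HConstants.RETRY_BACKOFF)
    static final int[] RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200};

    // Simplified stand-in for ConnectionUtils.getPauseTime (jitter omitted)
    static long getPauseTime(long pause, int tries) {
        int index = Math.min(tries, RETRY_BACKOFF.length - 1);
        return pause * RETRY_BACKOFF[index];
    }

    public static void main(String[] args) {
        long pause = 100; // hbase.client.pause, in ms
        int tries = 0;    // first attempt
        // RpcRetryingCallerImpl passes tries + 1, and the callable's sleep()
        // bumps it again, so the table is effectively indexed at tries + 2.
        long doubleBumped = getPauseTime(pause, (tries + 1) + 1); // 3 * pause
        long singleBumped = getPauseTime(pause, tries + 1);       // 2 * pause
        System.out.println(doubleBumped + " vs " + singleBumped);
    }
}
```

With tries = 0 the double bump lands on index 2 of the table, which is the 
"3 * hbase.client.pause" the report describes.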

> Wrong sleep time when RegionServerCallable need retry
> -
>
> Key: HBASE-15615
> URL: https://issues.apache.org/jira/browse/HBASE-15615
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0, 0.98.19
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 1.3.0
>
> Attachments: HBASE-15615-branch-0.98.patch, 
> HBASE-15615-branch-1.0-v2.patch, HBASE-15615-branch-1.1-v2.patch, 
> HBASE-15615-branch-1.1-v2.patch, HBASE-15615-branch-1.patch, 
> HBASE-15615-v1.patch, HBASE-15615-v1.patch, HBASE-15615-v2.patch, 
> HBASE-15615-v2.patch, HBASE-15615-v3.patch, HBASE-15615.patch
>
>
> In RpcRetryingCallerImpl, the pause time is computed as expectedSleep = 
> callable.sleep(pause, tries + 1); and in RegionServerCallable, it is sleep = 
> ConnectionUtils.getPauseTime(pause, tries + 1). So tries gets bumped up 
> twice, and the pause time is 3 * hbase.client.pause when tries is 0.
> RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15785) Unnecessary lock in ByteBufferArray

2016-05-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15785:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master.
Thanks for the reviews Stack and Ram.

> Unnecessary lock in ByteBufferArray
> ---
>
> Key: HBASE-15785
> URL: https://issues.apache.org/jira/browse/HBASE-15785
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15785.patch, HBASE-15785_V2.patch, 
> HBASE-15785_V3.patch
>
>
> {code}
>  Lock lock = locks[i];
>   lock.lock();
>   try {
> ByteBuffer bb = buffers[i];
> if (i == startBuffer) {
>   cnt = bufferSize - startBufferOffset;
>   if (cnt > len) cnt = len;
>   ByteBuffer dup = bb.duplicate();
>   dup.limit(startBufferOffset + cnt).position(startBufferOffset);
>   mbb[j] = dup.slice();
> {code}
> In asSubByteBuff, we work on a duplicate BB and set limit and position on 
> that. The locking is not needed here.
> The locking was added because we set limit and position on the BBs in the 
> array. We can duplicate the BBs and do positioning and limit on them. The 
> locking can be fully avoided.
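The lock-free approach described above can be illustrated with plain java.nio: 
duplicate() gives each caller an independent view with its own position and 
limit over the same backing data, so mutating the view never touches the shared 
buffer. A minimal sketch, not the actual ByteBufferArray code:

```java
import java.nio.ByteBuffer;

public class DuplicateSliceDemo {
    // Slice [offset, offset + len) out of a shared buffer without locking:
    // only the duplicate's position/limit are mutated, never the shared one's.
    static ByteBuffer subBuffer(ByteBuffer shared, int offset, int len) {
        ByteBuffer dup = shared.duplicate(); // independent position/limit, same bytes
        dup.limit(offset + len);
        dup.position(offset);
        return dup.slice();
    }

    public static void main(String[] args) {
        ByteBuffer shared = ByteBuffer.allocate(16);
        for (int i = 0; i < 16; i++) {
            shared.put(i, (byte) i); // absolute put: does not move position
        }
        ByteBuffer sub = subBuffer(shared, 4, 3);
        System.out.println(sub.remaining() + " bytes, first = " + sub.get(0));
        System.out.println("shared position still " + shared.position());
    }
}
```

Because no caller ever mutates the shared buffer's own position or limit, 
concurrent readers can take sub-buffers without any synchronization.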



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15785) Unnecessary lock in ByteBufferArray

2016-05-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15785:
---
Description: 
{code}
 Lock lock = locks[i];
  lock.lock();
  try {
ByteBuffer bb = buffers[i];
if (i == startBuffer) {
  cnt = bufferSize - startBufferOffset;
  if (cnt > len) cnt = len;
  ByteBuffer dup = bb.duplicate();
  dup.limit(startBufferOffset + cnt).position(startBufferOffset);
  mbb[j] = dup.slice();
{code}
In asSubByteBuff, we work on a duplicate BB and set limit and position on 
that. The locking is not needed here.

The locking was added because we set limit and position on the BBs in the array. 
We can duplicate the BBs and do positioning and limit on them. The locking 
can be fully avoided.

  was:
{code}
 Lock lock = locks[i];
  lock.lock();
  try {
ByteBuffer bb = buffers[i];
if (i == startBuffer) {
  cnt = bufferSize - startBufferOffset;
  if (cnt > len) cnt = len;
  ByteBuffer dup = bb.duplicate();
  dup.limit(startBufferOffset + cnt).position(startBufferOffset);
  mbb[j] = dup.slice();
{code}
In asSubByteBuff, we work on the duplicate BB and set limit and position on 
that.. The locking is not needed here.


> Unnecessary lock in ByteBufferArray
> ---
>
> Key: HBASE-15785
> URL: https://issues.apache.org/jira/browse/HBASE-15785
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15785.patch, HBASE-15785_V2.patch, 
> HBASE-15785_V3.patch
>
>
> {code}
>  Lock lock = locks[i];
>   lock.lock();
>   try {
> ByteBuffer bb = buffers[i];
> if (i == startBuffer) {
>   cnt = bufferSize - startBufferOffset;
>   if (cnt > len) cnt = len;
>   ByteBuffer dup = bb.duplicate();
>   dup.limit(startBufferOffset + cnt).position(startBufferOffset);
>   mbb[j] = dup.slice();
> {code}
> In asSubByteBuff, we work on a duplicate BB and set limit and position on 
> that. The locking is not needed here.
> The locking was added because we set limit and position on the BBs in the 
> array. We can duplicate the BBs and do positioning and limit on them. The 
> locking can be fully avoided.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15785) Unnecessary lock in ByteBufferArray

2016-05-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15785:
---
Summary: Unnecessary lock in ByteBufferArray  (was: Unnecessary lock in 
ByteBufferArray#asSubByteBuff)

> Unnecessary lock in ByteBufferArray
> ---
>
> Key: HBASE-15785
> URL: https://issues.apache.org/jira/browse/HBASE-15785
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15785.patch, HBASE-15785_V2.patch, 
> HBASE-15785_V3.patch
>
>
> {code}
>  Lock lock = locks[i];
>   lock.lock();
>   try {
> ByteBuffer bb = buffers[i];
> if (i == startBuffer) {
>   cnt = bufferSize - startBufferOffset;
>   if (cnt > len) cnt = len;
>   ByteBuffer dup = bb.duplicate();
>   dup.limit(startBufferOffset + cnt).position(startBufferOffset);
>   mbb[j] = dup.slice();
> {code}
> In asSubByteBuff, we work on a duplicate BB and set limit and position on 
> that. The locking is not needed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15813) Rename DefaultWALProvider to a more specific name and clean up unnecessary reference to it

2016-05-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279857#comment-15279857
 ] 

Duo Zhang commented on HBASE-15813:
---

[~busbey] Overall +1 and a sub item -1? Seems a little strange.

[~stack] Let's commit this first? Then HBASE-15536 can be done with a one line 
patch when we think it is ready to go.

Thanks.

> Rename DefaultWALProvider to a more specific name and clean up unnecessary 
> reference to it
> --
>
> Key: HBASE-15813
> URL: https://issues.apache.org/jira/browse/HBASE-15813
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15813.patch
>
>
> This work can be done before we make AsyncFSWAL as our default WAL 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1

2016-05-11 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15691:

Fix Version/s: (was: 1.3.0)
   1.4.0

> Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to 
> branch-1
> -
>
> Key: HBASE-15691
> URL: https://issues.apache.org/jira/browse/HBASE-15691
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0, 1.2.2
>
> Attachments: HBASE-15691-branch-1.patch
>
>
> HBASE-10205 was committed to trunk and 0.98 branches only. To preserve 
> continuity we should commit it to branch-1. The change requires more than 
> nontrivial fixups so I will attach a backport of the change from trunk to 
> current branch-1 here. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1

2016-05-11 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279818#comment-15279818
 ] 

Mikhail Antonov commented on HBASE-15691:
-

[~apurtell] [~stack] as this one has been there for some time in branch-1 and 
isn't a release blocker, let me move it to 1.4. I'll be happy to help get it 
into 1.3 or 1.3.1.

> Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to 
> branch-1
> -
>
> Key: HBASE-15691
> URL: https://issues.apache.org/jira/browse/HBASE-15691
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0, 1.2.2
>
> Attachments: HBASE-15691-branch-1.patch
>
>
> HBASE-10205 was committed to trunk and 0.98 branches only. To preserve 
> continuity we should commit it to branch-1. The change requires more than 
> nontrivial fixups so I will attach a backport of the change from trunk to 
> current branch-1 here. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15722) Size. WAL Files (bytes) in regionserver status page displays negative values

2016-05-11 Thread Samir Ahmic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samir Ahmic updated HBASE-15722:

Attachment: HBASE-15722_v2.patch

[~enis] here is a new patch fixing the issue in {{doReplaceWriter()}} and using 
this function for getting the WAL file size. The issue was caused by reading 
the writer length before the writer was closed, so the returned value was less 
than the actual file size on HDFS. 
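The length-before-close undercount is easy to reproduce with a plain buffered 
writer; a local file stands in for HDFS here, so this is only an illustration of 
the effect, not the WAL code itself:

```java
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class LengthBeforeCloseDemo {
    // Returns {lengthBeforeClose, lengthAfterClose} for a 10-byte write.
    static long[] measure() throws IOException {
        File f = File.createTempFile("wal-demo", ".log");
        f.deleteOnExit();
        BufferedWriter w = new BufferedWriter(new FileWriter(f));
        w.write("0123456789");    // 10 bytes, still sitting in the writer's buffer
        long before = f.length(); // typically 0: nothing has been flushed yet
        w.close();                // close() flushes, giving the file its real size
        long after = f.length();  // now reflects all 10 bytes
        return new long[] { before, after };
    }

    public static void main(String[] args) throws IOException {
        long[] r = measure();
        System.out.println(r[0] + " -> " + r[1]);
    }
}
```

Reading the size only after the writer is closed, as the patch does, avoids the 
negative deltas shown in the status page.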

> Size. WAL Files (bytes) in regionserver status page displays negative values
> 
>
> Key: HBASE-15722
> URL: https://issues.apache.org/jira/browse/HBASE-15722
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15722_v0.patch, HBASE-15722_v1.patch, 
> HBASE-15722_v2.patch, WALs.png
>
>
> Here is the line from ServerMetricTmpl.jamon
> {code}
> TraditionalBinaryPrefix.long2String(mWrap.getWALFileSize(), "B", 1)
> {code} 
>  I will change this to StringUtils.humanSize()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15454) Archive store files older than max age

2016-05-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279803#comment-15279803
 ] 

Duo Zhang commented on HBASE-15454:
---

I do not have any new modifications here. I'm currently working on testing it 
in our own production.

Thanks.

> Archive store files older than max age
> --
>
> Key: HBASE-15454
> URL: https://issues.apache.org/jira/browse/HBASE-15454
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.18, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-15454-v1.patch, HBASE-15454-v2.patch, 
> HBASE-15454-v3.patch, HBASE-15454-v4.patch, HBASE-15454-v5.patch, 
> HBASE-15454-v6.patch, HBASE-15454-v7.patch, HBASE-15454.patch
>
>
> In date tiered compaction, the store files older than max age are never 
> touched by minor compactions. Here we introduce a 'freeze window' operation, 
> which does the following things:
> 1. Find all store files that contain cells whose timestamps are in the given 
> window.
> 2. Compact all these files and output one file for each window that these 
> files covered.
> After the compaction, we will have only one file in the given window, and all 
> cells whose timestamps are in the given window are in that one file. And if 
> you do not write new cells with an older timestamp in this window, the file 
> will never be changed. This makes it easier to do erasure coding on the frozen 
> file to reduce redundancy. It also makes it possible to check 
> consistency between master and peer cluster incrementally.
> And why use the word 'freeze'?
> Because there is already an 'HFileArchiver' class. I want to use a different 
> word to prevent confusion.
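Steps 1 and 2 of the freeze-window operation can be sketched as below. The 
file model here is hypothetical (each file reduced to a {minTs, maxTs} pair); 
real HBase tracks per-file timestamp ranges in store-file metadata:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class FreezeWindowDemo {
    // Step 1: select files (each a {minTs, maxTs} pair, inclusive) that
    // contain any cell whose timestamp falls inside [winStart, winEnd).
    static List<long[]> selectFiles(List<long[]> files, long winStart, long winEnd) {
        List<long[]> selected = new ArrayList<>();
        for (long[] f : files) {
            if (f[0] < winEnd && f[1] >= winStart) {
                selected.add(f);
            }
        }
        return selected;
    }

    // Step 2: the start timestamps of every window the selected files cover;
    // the compaction would emit one output file per such window.
    static TreeSet<Long> coveredWindows(List<long[]> selected, long windowSize) {
        TreeSet<Long> starts = new TreeSet<>();
        for (long[] f : selected) {
            for (long w = f[0] / windowSize; w <= f[1] / windowSize; w++) {
                starts.add(w * windowSize);
            }
        }
        return starts;
    }
}
```

A file that spans several windows drags all of them into the compaction, which 
is why the output is "one file for each window that these files covered" rather 
than a single file.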



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


  1   2   >