[jira] [Commented] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046191#comment-16046191
 ] 

Duo Zhang commented on HBASE-18200:
---

So let's commit this patch and try a branch-2 build in HBASE-18179. What do you 
think, [~mdrob]? Thanks.

> Set hadoop check versions for branch-2 and branch-2.x in pre commit
> ---
>
> Key: HBASE-18200
> URL: https://issues.apache.org/jira/browse/HBASE-18200
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
> Attachments: HBASE-18200.patch
>
>
> Currently pre commit will use the hadoop versions for branch-1.
> I do not know how to set the fix versions, as the code will be committed to 
> master but the branch in trouble is branch-2...
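The kind of branch dispatch this issue asks for can be sketched as follows. This is a hypothetical illustration, not the attached patch: the function name and the branch-1 version list are invented, while the branch-2/master list matches the versions Hadoop QA reports later in this thread.

```shell
# Hypothetical sketch of branch-based version selection in a Yetus
# personality such as hbase-personality.sh. Names and the branch-1
# list are illustrative only.
hadoop_versions_for_branch() {
  local branch="$1"
  case "${branch}" in
    branch-1*)
      # branch-1 checks an older set of Hadoop 2.x lines (invented here)
      echo "2.4.1 2.5.2 2.6.5 2.7.3"
      ;;
    branch-2*|master)
      # branch-2 and branch-2.x share the master list (the point of this issue)
      echo "2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 3.0.0-alpha2"
      ;;
    *)
      echo "2.7.3"
      ;;
  esac
}
```

Because `branch-2*` and `master` share one case arm, a branch-2.x patch exercises exactly the same hadoopcheck matrix as master.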



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046186#comment-16046186
 ] 

Duo Zhang commented on HBASE-18200:
---

[~mdrob] We always use the hbase-personality.sh in master for pre commit so...



[jira] [Commented] (HBASE-18128) compaction marker could be skipped

2017-06-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046183#comment-16046183
 ] 

Ted Yu commented on HBASE-18128:


Why is the following condition removed?
{code}
-if (lastFlushedSequenceId >= entry.getKey().getSequenceId()) {
-  editsSkipped++;
{code}
{code}
+@Category({ RegionServerTests.class, SmallTests.class }) public class 
TestCompactionMarker {
{code}
Move the class declaration to the second line.
{code}
+  private final static Log LOG = LogFactory.getLog(TestWALSplit.class);
{code}
Wrong class name.
{code}
+  @BeforeClass public static void setUpBeforeClass() throws Exception {
{code}

Leave the annotation on one line and the method declaration on the second 
line; this applies to the other methods in the test.

> compaction marker could be skipped 
> ---
>
> Key: HBASE-18128
> URL: https://issues.apache.org/jira/browse/HBASE-18128
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: HBASE-18128-master.patch, HBASE-18128-master-v2.patch, 
> TestCompactionMarker.java
>
>
> The sequence for a compaction is as follows:
> 1. Compaction writes new files under region/.tmp directory (compaction output)
> 2. Compaction atomically moves the temporary file under region directory
> 3. Compaction appends a WAL edit containing the compaction input and output 
> files. Forces sync on WAL.
> 4. Compaction deletes the input files from the region directory.
> But if a flush happens between steps 3 and 4 and the regionserver then 
> crashes, the compaction marker will be skipped when splitting the log, 
> because the sequence id of the compaction marker is smaller than 
> lastFlushedSequenceId.
> {code}
> if (lastFlushedSequenceId >= entry.getKey().getLogSeqNum()) {
>   editsSkipped++;
>   continue;
> }
> {code}
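The fix under discussion amounts to not applying the flush-point skip to compaction markers. A simplified, self-contained model of that logic is below; the class and field names are invented for illustration, as the real check lives in HBase's log-splitting path.

```java
import java.util.ArrayList;
import java.util.List;

public class MarkerAwareSkip {
    // Toy stand-in for a WAL entry: only the fields the check needs.
    static class WalEntry {
        final long seqId;
        final boolean compactionMarker;
        WalEntry(long seqId, boolean compactionMarker) {
            this.seqId = seqId;
            this.compactionMarker = compactionMarker;
        }
    }

    /** Skip entries at or below the flush point, but never drop a compaction marker. */
    static List<WalEntry> filter(List<WalEntry> entries, long lastFlushedSequenceId) {
        List<WalEntry> kept = new ArrayList<>();
        for (WalEntry e : entries) {
            if (lastFlushedSequenceId >= e.seqId && !e.compactionMarker) {
                continue; // edit already flushed, safe to skip
            }
            kept.add(e);
        }
        return kept;
    }

    public static void main(String[] args) {
        List<WalEntry> entries = new ArrayList<>();
        entries.add(new WalEntry(5, false));  // flushed edit: skipped
        entries.add(new WalEntry(7, true));   // marker below flush point: kept
        entries.add(new WalEntry(12, false)); // unflushed edit: kept
        System.out.println(filter(entries, 10).size()); // prints 2
    }
}
```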





[jira] [Updated] (HBASE-18128) compaction marker could be skipped

2017-06-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18128:
---
Status: Patch Available  (was: Open)



[jira] [Commented] (HBASE-18128) compaction marker could be skipped

2017-06-11 Thread Jingyun Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046181#comment-16046181
 ] 

Jingyun Tian commented on HBASE-18128:
--

[~tedyu], please check this out. Thanks.



[jira] [Commented] (HBASE-16415) Replication in different namespace

2017-06-11 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046157#comment-16046157
 ] 

Guanghao Zhang commented on HBASE-16415:


Now the replication source reads the wal entries with a reader thread and 
filters them according to the filter config (replication scope, peer 
table-cfs, peer namespaces, etc.), then ships them with a shipper thread. 
They are written to the peer cluster by the replication endpoint, and we 
already support pluggable replication endpoints. So I think we should 
implement this feature (replication in a different namespace) as a new 
replication endpoint, reusing the existing replication endpoint config. Your 
proposal would make the replication source more complicated; IMO it is not a 
good idea.

> Replication in different namespace
> --
>
> Key: HBASE-16415
> URL: https://issues.apache.org/jira/browse/HBASE-16415
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Christian Guegi
>Assignee: Jan Kunigk
>
> It would be nice to replicate tables from one namespace to another namespace.
> Example:
> Master cluster, namespace=default, table=bar
> Slave cluster, namespace=dr, table=bar
> Replication happens in class ReplicationSink:
>   public void replicateEntries(List entries, final CellScanner 
> cells, ...){
> ...
> TableName table = 
> TableName.valueOf(entry.getKey().getTableName().toByteArray());
> ...
> addToHashMultiMap(rowMap, table, clusterIds, m);
> ...
> for (Entry> entry : 
> rowMap.entrySet()) {
>   batch(entry.getKey(), entry.getValue().values());
> }
>}
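The endpoint approach suggested above comes down to rewriting the namespace portion of the table name before entries are batched. A minimal, HBase-free sketch of that remapping is below; the class name, map contents, and "namespace:qualifier" string handling are illustrative assumptions, not the HBase API.

```java
import java.util.HashMap;
import java.util.Map;

public class NamespaceRemap {
    // Hypothetical mapping: source namespace -> namespace on the slave cluster.
    private final Map<String, String> nsMap = new HashMap<>();

    public NamespaceRemap() {
        nsMap.put("default", "dr"); // the example from this issue
    }

    /**
     * Rewrite "namespace:qualifier" (a bare qualifier is treated as the
     * "default" namespace) to the configured target namespace.
     */
    public String remap(String tableName) {
        int idx = tableName.indexOf(':');
        String ns = idx < 0 ? "default" : tableName.substring(0, idx);
        String qualifier = idx < 0 ? tableName : tableName.substring(idx + 1);
        String target = nsMap.getOrDefault(ns, ns); // unmapped namespaces pass through
        return target + ":" + qualifier;
    }
}
```

A custom endpoint would apply such a rewrite to each entry's table name before handing the batch to the sink, leaving the replication source untouched.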



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HBASE-18187) Release hbase-2.0.0-alpha1

2017-06-11 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-18187.
---
  Resolution: Fixed
Release Note: Pushed the release. For detail: 
http://apache-hbase.679495.n3.nabble.com/ANNOUNCE-Apache-HBase-2-0-0-alpha-1-is-now-available-for-download-td4088484.html

> Release hbase-2.0.0-alpha1
> --
>
> Key: HBASE-18187
> URL: https://issues.apache.org/jira/browse/HBASE-18187
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
>
> Push an hbase-2.0.0-alpha1
> Will file subtasks here.





[jira] [Commented] (HBASE-17988) get-active-master.rb and draining_servers.rb no longer work

2017-06-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046114#comment-16046114
 ] 

Nick Dimiduk commented on HBASE-17988:
--

Yes that's correct. Fat fingers...

> get-active-master.rb and draining_servers.rb no longer work
> ---
>
> Key: HBASE-17988
> URL: https://issues.apache.org/jira/browse/HBASE-17988
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.0
>Reporter: Mike Drob
>Assignee: Chinmay Kulkarni
>Priority: Critical
> Fix For: 2.0.0, 1.1.2, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-17988.002.patch, HBASE-17988.patch
>
>
> The scripts {{bin/get-active-master.rb}} and {{bin/draining_servers.rb}} no 
> longer work on current master branch. Here is an example error message:
> {noformat}
> $ bin/hbase-jruby bin/get-active-master.rb 
> NoMethodError: undefined method `masterAddressZNode' for 
> #
>at bin/get-active-master.rb:35
> {noformat}
> My initial probing suggests that this is likely due to movement that happened 
> in HBASE-16690. Perhaps instead of reworking the ruby, there is similar Java 
> functionality already existing somewhere.
> Putting priority at critical since it's impossible to know whether users rely 
> on the scripts.





[jira] [Commented] (HBASE-17988) get-active-master.rb and draining_servers.rb no longer work

2017-06-11 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046094#comment-16046094
 ] 

Mike Drob commented on HBASE-17988:
---

[~ndimiduk] - Did you mean 1.1.12, not 1.1.2?



[jira] [Commented] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046089#comment-16046089
 ] 

Mike Drob commented on HBASE-18200:
---

[~Apache9] - Can you attach a patch named for branch-2 (even though we're 
committing to master) just so that we can trigger the new code path?



[jira] [Commented] (HBASE-18203) Intelligently manage a pool of open references to store files

2017-06-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046027#comment-16046027
 ] 

Andrew Purtell commented on HBASE-18203:


[~Apache9] I would also argue that the number of file handles should be set 
to an essentially unlimited value, but there seem to be a nonzero number of 
deployments that run with something more like 64k. I think that is old advice. 
(We were doing this ourselves, and have since raised the limit to 256k.)

> Intelligently manage a pool of open references to store files
> -
>
> Key: HBASE-18203
> URL: https://issues.apache.org/jira/browse/HBASE-18203
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>
> When bringing a region online we open every store file and keep the file 
> open, to avoid further round trips to the HDFS namenode during reads. Naively 
> keeping open every store file we encounter is a bad idea. There should be an 
> upper bound. We should close and reopen files as needed once we are above the 
> upper bound. We should choose candidates to close on an LRU basis. Otherwise 
> we can (and some users have in production) overrun high (~64k) open file 
> handle limits on the server if the aggregate number of store files is too 
> large. 
> Note that 'open files' here refers to open/active references to files at the 
> HDFS level. How this maps to active file descriptors at the OS level depends 
> on concurrency of access (block transfers, short circuit reads). The more 
> open files we have at the HDFS level the higher number of OS level file 
> handles we can expect to consume.
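The LRU-bounded pool described above can be sketched with `java.util.LinkedHashMap` in access order; this is a hypothetical illustration of the proposal, not HBase code, and the generic "reader" stands in for an open store file reference.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of an LRU-bounded pool of open file references. A real pool
 * would close the evicted reader and reopen files on a miss.
 */
public class StoreFilePool<R> {
    private final int maxOpen;
    private final LinkedHashMap<String, R> open;

    public StoreFilePool(int maxOpen) {
        this.maxOpen = maxOpen;
        // accessOrder = true makes iteration order least-recently-used first.
        this.open = new LinkedHashMap<String, R>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, R> eldest) {
                // Evict (and, in a real pool, close) the LRU reader when over bound.
                return size() > StoreFilePool.this.maxOpen;
            }
        };
    }

    public R get(String path) { return open.get(path); }
    public void put(String path, R reader) { open.put(path, reader); }
    public int size() { return open.size(); }
}
```

Bounding the pool this way caps HDFS-level open references regardless of how many store files a regionserver carries.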





[jira] [Commented] (HBASE-18203) Intelligently manage a pool of open references to store files

2017-06-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046026#comment-16046026
 ] 

Andrew Purtell commented on HBASE-18203:


HBASE-9393 cleans up after sockets, but it manages neither the number of 
sockets in the ESTABLISHED state nor the file handles used by short circuit 
reads.



[jira] [Commented] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045958#comment-16045958
 ] 

Hadoop QA commented on HBASE-18200:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 10s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
8s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
57m 12s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872479/HBASE-18200.patch |
| JIRA Issue | HBASE-18200 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 5a0d7eaf68b7 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 384e308 |
| shellcheck | v0.4.6 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7169/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.





[jira] [Commented] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045934#comment-16045934
 ] 

Hadoop QA commented on HBASE-18200:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/7169/console in case of 
problems.




[jira] [Commented] (HBASE-18197) import.java, job output is printing two times.

2017-06-11 Thread Jan Hentschel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045933#comment-16045933
 ] 

Jan Hentschel commented on HBASE-18197:
---

The build failure seems to be unrelated to the actual change.

> import.java, job output is printing two times.
> --
>
> Key: HBASE-18197
> URL: https://issues.apache.org/jira/browse/HBASE-18197
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.0.2, 1.2.0, 1.4.0
>Reporter: Chandra Sekhar
>Assignee: Jan Hentschel
>Priority: Trivial
> Attachments: HBASE-18197.branch-1.0.001.patch, 
> HBASE-18197.branch-1.2.001.patch
>
>
> import.java, job output is printing two times.
> {quote}
> After the job has completed, job.waitForCompletion(true) is called a second time.
> {quote}
> {code}
> Job job = createSubmittableJob(conf, otherArgs);
> boolean isJobSuccessful = job.waitForCompletion(true);
> if(isJobSuccessful){
>   // Flush all the regions of the table
>   flushRegionsIfNecessary(conf);
> }
> long inputRecords = 
> job.getCounters().findCounter(TaskCounter.MAP_INPUT_RECORDS).getValue();
> long outputRecords = 
> job.getCounters().findCounter(TaskCounter.MAP_OUTPUT_RECORDS).getValue();
> if (outputRecords < inputRecords) {
>   System.err.println("Warning, not all records were imported (maybe 
> filtered out).");
>   if (outputRecords == 0) {
> System.err.println("If the data was exported from HBase 0.94 "+
> "consider using -Dhbase.import.version=0.94.");
>   }
> }
> System.exit(job.waitForCompletion(true) ? 0 : 1);
> {code}
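The fix is to reuse the result of the first `job.waitForCompletion(true)` call instead of invoking it again inside `System.exit(...)`. The toy model below shows the before/after pattern; `CountingJob` is a made-up stand-in for the Hadoop `Job` class, which is not available here.

```java
public class RunOnce {
    /** Stand-in for a Hadoop Job: counts how often completion is awaited. */
    static class CountingJob {
        int waitCalls = 0;
        boolean waitForCompletion(boolean verbose) {
            waitCalls++; // in real MapReduce this also re-runs the job's output path
            return true;
        }
    }

    /** Buggy pattern from the issue: completion is awaited twice. */
    static int runBuggy(CountingJob job) {
        boolean isJobSuccessful = job.waitForCompletion(true);
        // ... flush regions, compare input/output record counters ...
        return job.waitForCompletion(true) ? 0 : 1; // second call duplicates the output
    }

    /** Fixed pattern: compute the result once and reuse it for the exit code. */
    static int runFixed(CountingJob job) {
        boolean isJobSuccessful = job.waitForCompletion(true);
        // ... flush regions, compare input/output record counters ...
        return isJobSuccessful ? 0 : 1;
    }
}
```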





[jira] [Updated] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18200:
--
Assignee: Duo Zhang
  Status: Patch Available  (was: Open)



[jira] [Comment Edited] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045931#comment-16045931
 ] 

Duo Zhang edited comment on HBASE-18200 at 6/11/17 12:14 PM:
-

Use the same hadoop versions as master for branch-2 and branch-2.x.


was (Author: apache9):
Us the same hadoop versions with master for branch-2 and branch-2.x.



[jira] [Commented] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045932#comment-16045932
 ] 

Duo Zhang commented on HBASE-18200:
---

[~busbey] PTAL. Thanks.



[jira] [Updated] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18200:
--
Attachment: HBASE-18200.patch

Use the same hadoop versions as master for branch-2 and branch-2.x.



[jira] [Commented] (HBASE-18179) Add new hadoop releases to the pre commit hadoop check

2017-06-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045930#comment-16045930
 ] 

Duo Zhang commented on HBASE-18179:
---

Let's resolve HBASE-18200 first as it is more important.

> Add new hadoop releases to the pre commit hadoop check
> --
>
> Key: HBASE-18179
> URL: https://issues.apache.org/jira/browse/HBASE-18179
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>
> 3.0.0-alpha3 is out, so we should replace the old alpha2 release with 
> alpha3. We should also add the new 2.x releases.





[jira] [Updated] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18200:
--
Summary: Set hadoop check versions for branch-2 and branch-2.x in pre 
commit  (was: Set hadoop check versions for branch-2 in pre commit)



[jira] [Commented] (HBASE-18203) Intelligently manage a pool of open references to store files

2017-06-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045912#comment-16045912
 ] 

Duo Zhang commented on HBASE-18203:
---

Usually we set ulimit -n to 1M... I haven't seen any bad impact yet.



[jira] [Commented] (HBASE-18203) Intelligently manage a pool of open references to store files

2017-06-11 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045850#comment-16045850
 ] 

Ashish Singhi commented on HBASE-18203:
---

HBASE-9393 should solve this issue.
