[jira] [Created] (HBASE-16940) Review 52748: Miscellaneous

2016-10-24 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-16940:
-

 Summary: Review 52748: Miscellaneous 
 Key: HBASE-16940
 URL: https://issues.apache.org/jira/browse/HBASE-16940
 Project: HBase
  Issue Type: Sub-task
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


Review 52748 remaining issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16939) ExportSnapshot: set owner and permission on right directory

2016-10-24 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-16939:
--

 Summary: ExportSnapshot: set owner and permission on right 
directory
 Key: HBASE-16939
 URL: https://issues.apache.org/jira/browse/HBASE-16939
 Project: HBase
  Issue Type: Bug
Reporter: Guanghao Zhang
Priority: Minor


{code}
  FileUtil.copy(inputFs, snapshotDir, outputFs, initialOutputSnapshotDir, false, false, conf);
  if (filesUser != null || filesGroup != null) {
    setOwner(outputFs, snapshotTmpDir, filesUser, filesGroup, true);
  }
  if (filesMode > 0) {
    setPermission(outputFs, snapshotTmpDir, (short) filesMode, true);
  }
{code}
It copies the snapshot manifest to initialOutputSnapshotDir, but sets the owner
on snapshotTmpDir. These are different directories when skipTmp is true. A
possible fix is sketched below.
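
A minimal sketch of one possible fix (hypothetical, not a final patch): apply
the owner and permission to the same directory the manifest was copied to.
{code}
  // Copy the snapshot manifest, then apply owner/permission to the directory
  // we actually copied to (initialOutputSnapshotDir), which differs from
  // snapshotTmpDir when skipTmp is true.
  FileUtil.copy(inputFs, snapshotDir, outputFs, initialOutputSnapshotDir, false, false, conf);
  if (filesUser != null || filesGroup != null) {
    setOwner(outputFs, initialOutputSnapshotDir, filesUser, filesGroup, true);
  }
  if (filesMode > 0) {
    setPermission(outputFs, initialOutputSnapshotDir, (short) filesMode, true);
  }
{code}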

Another problem is that a new cluster doesn't have a .hbase-snapshot directory,
so after exporting a snapshot, the owner should also be set on the
.hbase-snapshot directory.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16938) TableCFsUpdater maybe failed due to no write permission on peerNode

2016-10-24 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-16938:
--

 Summary: TableCFsUpdater maybe failed due to no write permission 
on peerNode
 Key: HBASE-16938
 URL: https://issues.apache.org/jira/browse/HBASE-16938
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 2.0.0, 1.4.0
Reporter: Guanghao Zhang


After HBASE-11393, replication table-cfs are stored as a PB object, so the old
string config needs to be copied into the new PB object when upgrading a
cluster. In our use case, we use different kerberos credentials for different
clusters, e.g. an online serving cluster and an offline processing cluster, and
a unified global admin kerberos identity for all clusters. The peer node is
created by the client, so only the global admin has write permission on it.
When upgrading a cluster, HMaster doesn't have write permission on the peer
node, so it may fail to copy the old table-cfs string to the new PB object. I
think we need a tool for the client to do this copy job.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15789) PB related changes to work with offheap

2016-10-24 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-15789.
---
Resolution: Fixed

Reapplied (FYI [~anoop.hbase]).

Added a clean of the generated and com/google/protobuf dirs as a first step, so
that when it comes time for the patch to run, it'll work even if the src is
already patched. Seems to fix the build.

> PB related changes to work with offheap
> ---
>
> Key: HBASE-15789
> URL: https://issues.apache.org/jira/browse/HBASE-15789
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: ramkrishna.s.vasudevan
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15789.master.001.patch, HBASE-15789.patch, 
> HBASE-15789_V2.patch
>
>
> This is an issue to brainstorm. Whether we go with pb 2.x or pb 3.0 also
> depends on the shading of protobuf classes.
> We should also decide if we are going to fork the PB classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Fixed: hbase.apache.org HTML Checker

2016-10-24 Thread Apache Jenkins Server
Fixed

If successful, the HTML and link-checking report for http://hbase.apache.org is 
available at 
https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/66/artifact/link_report/index.html.

If failed, see 
https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/66/console.

[jira] [Created] (HBASE-16937) Replace SnapshotType protobuf conversion when we can directly use the pojo object

2016-10-24 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-16937:
---

 Summary: Replace SnapshotType protobuf conversion when we can 
directly use the pojo object
 Key: HBASE-16937
 URL: https://issues.apache.org/jira/browse/HBASE-16937
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 2.0.0
 Attachments: HBASE-16937-v0.patch

Mostly find & replace work:
replace the back-and-forth protobuf conversion where we can just use the client
SnapshotType enum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16936) TestRateLimiter#testOverconsumptionFixedIntervalRefillStrategy is flaky

2016-10-24 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-16936:
---

 Summary: 
TestRateLimiter#testOverconsumptionFixedIntervalRefillStrategy is flaky
 Key: HBASE-16936
 URL: https://issues.apache.org/jira/browse/HBASE-16936
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Dima Spivak


Seeing this once every month or two in-house. Looks like it's a timing-based 
test, which makes it prone to flakiness, but I've noticed that whenever it 
fails, it fails with the same {{AssertionError}} (including values), so it'd be 
worth digging into. In our case:
{noformat}
expected:<1000> but was:<999>

Stack Trace:
java.lang.AssertionError: expected:<1000> but was:<999>
at 
org.apache.hadoop.hbase.quotas.TestRateLimiter.testOverconsumptionFixedIntervalRefillStrategy(TestRateLimiter.java:119)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] Drop legacy hadoop support at the 2.0 release

2016-10-24 Thread Matteo Bertozzi
cool, thanks all!

to sum up the discussion:
HBase 2.0 will have 2.6.1+ and 2.7.1+ as supported hadoop versions.
(hadoop 3.x support will be decided later, since work is still going on)

HBASE-16884 - updates the supported version map in the docs, with the
2.6.1+ and 2.7.1+ support for 2.0
HBASE-16887 - fixes pre-commit to run with 2.6.1+, 2.7.1+, 3.0 on master (hbase
2.0) and 2.4.0+, 2.5.0+, 2.6.1+, 2.7.1+ on other branches.

On Thu, Oct 20, 2016 at 7:53 AM, 张铎  wrote:

> See HBASE-16887
>
> 2016-10-20 22:50 GMT+08:00 Sean Busbey :
>
> > Unfortunately, I think this means we'll need to update the hadoopcheck
> > versions to vary by branch.
> >
> > On Wed, Oct 19, 2016 at 6:21 PM, 张铎  wrote:
> > > OK, seems there is no objection. I will file an issue to modify the hadoop
> version
> > > support matrix.
> > >
> > > And I think we also need to change the hadoopcheck versions for our
> > > precommit check?
> > >
> > > Thanks all.
> > >
> > > 2016-10-20 1:14 GMT+08:00 Andrew Purtell :
> > >
> > >> FWIW, we are running 2.7.x in production and it's stable.
> > >>
> > >>
> > >> On Tue, Oct 18, 2016 at 10:18 PM, Sean Busbey 
> > wrote:
> > >>
> > >> > we had not decided yet AFAIK. a big concern was the lack of
> > >> > maintenance releases on more recent Hadoop versions and the
> perception
> > >> > that 2.4 and 2.5 were the last big stable release lines.
> > >> >
> > >> > 2.6 and 2.7 have both gotten a few maintenance releases now, so
> maybe
> > >> > this isn't a concern any more?
> > >> >
> > >> > On Tue, Oct 18, 2016 at 7:00 PM, Enis Söztutar 
> > >> wrote:
> > >> > > I thought we already decided to do that, no?
> > >> > >
> > >> > > Enis
> > >> > >
> > >> > > On Tue, Oct 18, 2016 at 6:56 PM, Ted Yu 
> > wrote:
> > >> > >
> > >> > >> Looking at http://hadoop.apache.org/releases.html , 2.5.x hasn't
> > got
> > >> > new
> > >> > >> release for almost two years.
> > >> > >>
> > >> > >> Seems fine to drop support for 2.4 and 2.5
> > >> > >>
> > >> > >> On Tue, Oct 18, 2016 at 6:42 PM, Duo Zhang 
> > >> wrote:
> > >> > >>
> > >> > >> > This is the current hadoop version support matrix
> > >> > >> >
> > >> > >> > https://hbase.apache.org/book.html#hadoop
> > >> > >> >
> > >> > >> > 2016-10-19 9:40 GMT+08:00 Duo Zhang :
> > >> > >> >
> > >> > >> > > To be specific, hadoop-2.4.x and hadoop-2.5.x.
> > >> > >> > >
> > >> > >> > > The latest releases for these two lines are about two years
> > ago,
> > >> so
> > >> > I
> > >> > >> > > think it is the time to drop the support of them when 2.0
> out.
> > >> Then
> > >> > we
> > >> > >> > > could drop some code in our hadoop-compat module as we may
> > need to
> > >> > add
> > >> > >> > some
> > >> > >> > > code for the incoming hadoop-3.0...
> > >> > >> > >
> > >> > >> > > Thanks.
> > >> > >> > >
> > >> > >> >
> > >> > >>
> > >> >
> > >>
> > >>
> > >>
> > >> --
> > >> Best regards,
> > >>
> > >>- Andy
> > >>
> > >> Problems worthy of attack prove their worth by hitting back. - Piet
> Hein
> > >> (via Tom White)
> > >>
> >
> >
> >
> > --
> > busbey
> >
>


Successful: HBase Generate Website

2016-10-24 Thread Apache Jenkins Server
Build status: Successful

If successful, the website and docs have been generated. To update the live 
site, follow the instructions below. If failed, skip to the bottom of this 
email.

Use the following commands to download the patch and apply it to a clean branch 
based on origin/asf-site. If you prefer to keep the hbase-site repo around 
permanently, you can skip the clone step.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git

  cd hbase-site
  wget -O- https://builds.apache.org/job/hbase_generate_website/387/artifact/website.patch.zip | funzip > a4d48b699f6280ba971572523c3da4486e341fb3.patch
  git fetch
  git checkout -b asf-site-a4d48b699f6280ba971572523c3da4486e341fb3 origin/asf-site
  git am --whitespace=fix a4d48b699f6280ba971572523c3da4486e341fb3.patch

At this point, you can preview the changes by opening index.html or any of the 
other HTML pages in your local 
asf-site-a4d48b699f6280ba971572523c3da4486e341fb3 branch.

There are lots of spurious changes, such as timestamps and CSS styles in 
tables, so a generic git diff is not very useful. To see a list of files that 
have been added, deleted, renamed, changed type, or are otherwise interesting, 
use the following command:

  git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

  git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using these 
commands:

  git commit --allow-empty -m "Empty commit" # to work around a current ASF INFRA bug
  git push origin asf-site-a4d48b699f6280ba971572523c3da4486e341fb3:asf-site
  git checkout asf-site
  git branch -D asf-site-a4d48b699f6280ba971572523c3da4486e341fb3

Changes take a couple of minutes to be propagated. You can verify whether they 
have been propagated by looking at the Last Published date at the bottom of 
http://hbase.apache.org/. It should match the date in the index.html on the 
asf-site branch in Git.

As a courtesy, reply-all to this email to let other committers know you pushed
the site.



If failed, see https://builds.apache.org/job/hbase_generate_website/387/console

[jira] [Created] (HBASE-16935) Java API method Admin.deleteColumn(table, columnFamily) doesn't delete family's StoreFile from file system.

2016-10-24 Thread Mikhail Zvagelsky (JIRA)
Mikhail Zvagelsky created HBASE-16935:
-

 Summary: Java API method Admin.deleteColumn(table, columnFamily) 
doesn't delete family's StoreFile from file system.
 Key: HBASE-16935
 URL: https://issues.apache.org/jira/browse/HBASE-16935
 Project: HBase
  Issue Type: Bug
  Components: Admin
Affects Versions: 1.2.3
Reporter: Mikhail Zvagelsky


The method deleteColumn(TableName tableName, byte[] columnName) of the class
org.apache.hadoop.hbase.client.Admin should delete the specified column family
from the specified table. (Despite its name, the method removes a family, not a
column - see the [issue|https://issues.apache.org/jira/browse/HBASE-1989].)
This method changes the table's schema, but it doesn't delete the column
family's StoreFile from the file system. To be precise, I run this code:
{code:|borderStyle=solid}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class ToHBaseIssueTracker {

    public static void main(String[] args) throws IOException {
        TableName tableName = TableName.valueOf("test_table");
        HTableDescriptor desc = new HTableDescriptor(tableName);
        desc.addFamily(new HColumnDescriptor("cf1"));
        desc.addFamily(new HColumnDescriptor("cf2"));

        Configuration conf = HBaseConfiguration.create();
        Connection connection = ConnectionFactory.createConnection(conf);
        Admin admin = connection.getAdmin();

        admin.createTable(desc);

        HTable table = new HTable(conf, "test_table");
        for (int i = 0; i < 4; i++) {
            Put put = new Put(Bytes.toBytes(i)); // Use i as row key.
            put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("a"), Bytes.toBytes("value"));
            put.addColumn(Bytes.toBytes("cf2"), Bytes.toBytes("a"), Bytes.toBytes("value"));
            table.put(put);
        }

        admin.deleteColumn(tableName, Bytes.toBytes("cf2"));
        admin.majorCompact(tableName);
        admin.close();
    }
}
{code}
Then I see that the store file for the "cf2" family persists in the file
system. I observe this effect in a standalone HBase installation and in
pseudo-distributed mode.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-16562) ITBLL should fail to start if misconfigured

2016-10-24 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-16562.
-
Resolution: Fixed

Okay, reverted simply from branches that hadn't had a release; reverted from
0.98 and branch-1.1 using the linked follow-on JIRA.

If anyone wants to fix this, please start with a new JIRA.

> ITBLL should fail to start if misconfigured
> ---
>
> Key: HBASE-16562
> URL: https://issues.apache.org/jira/browse/HBASE-16562
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Andrew Purtell
>Assignee: Heng Chen
> Fix For: 1.0.4, 0.98.23, 1.1.7
>
> Attachments: HBASE-16562-branch-1.2.patch, 
> HBASE-16562-branch-1.2.v1.patch, HBASE-16562.patch, HBASE-16562.v1.patch, 
> HBASE-16562.v1.patch-addendum
>
>
> The number of nodes in ITBLL must be a multiple of width*wrap (defaults to 25M,
> but can be configured by adding two more args to the test invocation) or else 
> verification will fail. This can be very expensive in terms of time or hourly 
> billing for on demand test resources. Check the sanity of test parameters 
> before launching any MR jobs and fail fast if invariants aren't met with an 
> indication what parameter(s) need fixing. 
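
A minimal sketch of the kind of fail-fast parameter check described above
(hypothetical names, not the code that was actually committed or reverted):
{code}
  // Hypothetical pre-flight validation: abort before launching any MR job
  // if the requested node count is not a positive multiple of width * wrap.
  static void checkTestParameters(long numNodes, long width, long wrap) {
    long unit = width * wrap;
    if (width <= 0 || wrap <= 0 || numNodes <= 0 || numNodes % unit != 0) {
      throw new IllegalArgumentException("numNodes (" + numNodes
          + ") must be a positive multiple of width*wrap (" + unit + ")");
    }
  }
{code}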



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-16934) Revert ITBLL misconfiguration check introduced in HBASE-16562

2016-10-24 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-16934.
-
Resolution: Fixed

> Revert ITBLL misconfiguration check introduced in HBASE-16562
> -
>
> Key: HBASE-16934
> URL: https://issues.apache.org/jira/browse/HBASE-16934
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.1.7, 0.98.23
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 0.98.24, 1.1.8
>
>
> The misconfiguration check in HBASE-16562 does insufficient input validation 
> / handling, resulting in an NPE during normal operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16934) Revert ITBLL misconfiguration check introduced in HBASE-16562

2016-10-24 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-16934:
---

 Summary: Revert ITBLL misconfiguration check introduced in 
HBASE-16562
 Key: HBASE-16934
 URL: https://issues.apache.org/jira/browse/HBASE-16934
 Project: HBase
  Issue Type: Bug
  Components: integration tests
Affects Versions: 0.98.23, 1.1.7
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
 Fix For: 0.98.24, 1.1.8


The misconfiguration check in HBASE-16562 does insufficient input validation / 
handling, resulting in an NPE during normal operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Volunteer RM for 1.1.7?

2016-10-24 Thread Andrew Purtell
Glad we could help out, Nick. Welcome back.

> On Oct 23, 2016, at 4:29 PM, Nick Dimiduk  wrote:
> 
> Looks like a candidate was accepted by the PMC. Thank you very much for
> taking on this release in my absence!
> 
> -n
> 
> On Wed, Sep 21, 2016 at 11:34 AM, Andrew Purtell 
> wrote:
> 
>> I've RMed once or twice and can be available to answer questions as well.
>> 
>> On Wed, Sep 21, 2016 at 10:59 AM, Nick Dimiduk 
>> wrote:
>> 
>>> Excellent, thank you Misty!
>>> 
>>> The release would ideally land 1 month from the 1.1.6 release, so Oct 5
>> is
>>> our target date. This is a loose target, but a starting point. A release
>>> candidate VOTE thread is posted for 7 days. We're pretty far into the
>> 1.1.x
>>> line, so probably the first or second RC will pass. Thus, working
>> backwards
>>> and rounding to the nearest work week, I would aim for having an RC
>>> available to the community on Sept 26th. My preference is to have a
>> weekend
>>> land near the end of the voting period so that non-9-5 volunteers can be
>>> made aware of the VOTE and make time to take a look.
>>> 
>>> Do let me know if you have any questions, I should be available to answer
>>> them through this weekend. Otherwise, you're in good hands with the other
>>> folks who have and will take the time to respond on this thread.
>>> 
>>> Thanks,
>>> Nick
>>> 
>>> On Wed, Sep 21, 2016 at 9:11 AM, Misty Stanley-Jones 
>>> wrote:
>>> 
 OK, I'm convinced. I can do it. When should I make the RC?
 
> On Wed, Sep 21, 2016, at 01:35 PM, Andrew Purtell wrote:
> You should try your hand at RM-ing, Misty. Making the candidate
>> should
> only take an afternoon.
> 
> Then it's up to the rest of us to try it out and vote. I'll be able
>> to
> give a 1.1.7 candidate a thorough exercising.
> 
>> On Sep 20, 2016, at 8:14 PM, Stack  wrote:
>> 
>> I can help Misty.
>> St.Ack
>> 
>> On Tue, Sep 20, 2016 at 7:37 PM, Nick Dimiduk >> 
 wrote:
>> 
 
 Is this something a PMC newbie could do? I worked with stack to
 document
 the process at one time.
 
>>> 
>>> This is something any committer should be able to do -- it
>> requires
 write
>>> access to git (git-wip-us.apache.org), subversion (
>> dist.apache.org
>>> ),
 and
>>> nexus (repository.apache.org). PMC are required to vote on a
 release, but
>>> IIRC any dev can cut a release candidate and call a VOTE thread.
>>> 
> On Sep 20, 2016, at 7:19 PM, Nick Dimiduk 
 wrote:
> 
> Hello devs,
> 
> I'm going on an extended absence, through Oct 23. Would anyone
>>> care
 to
> stand in as release manager for 1.1.7 while I'm away? Given
>> 1.1.6
 went
 out
> on Sept 5, 1.1.7 RC's should begin around Sept 26th in order to
 retain
 our
> targeted monthly cadence. The overall process is documented [0],
 and I
 have
> my own almost-but-not-quite-scripted notes [1] for creating a
>>> candidate.
> 
> In terms of effort, it usually works out to be a couple hours
>> for
 tracking
> down in-flight tickets, updating JIRA, validating the
>>> compatibility
 report
> , and then another couple hours of watching maven and pushing
 bits to
> generate the RC.
> 
> If no takers, I'll make its release a priority upon my return in
 late
> October.
> 
> Thanks,
> Nick
> 
> [0]: http://hbase.apache.org/book.html#releasing
> [1]: https://gist.github.com/ndimiduk/924db7f5ee75baa67802
 
>>> 
 
>>> 
>> 
>> 
>> 
>> --
>> Best regards,
>> 
>>   - Andy
>> 
>> Problems worthy of attack prove their worth by hitting back. - Piet Hein
>> (via Tom White)
>> 


Re: [DISCUSSION] Bugs in some of the OrderedBytes decode methods?

2016-10-24 Thread Nick Dimiduk
Hi Daniel,

I'd say any inconsistencies between implementations are a bug. It would be
great to normalize the implementations and improve the coverage of our
correctness suite. Following those improvements, some attention should be paid
to the performance of the encoding and decoding implementations. A couple of
the encodings are a bit complicated, and their translation from the C
inspiration didn't always allow for the same short-cuts. With that in mind,
any of the variable-length encodings could benefit from some scrutiny.

(I'm not *that* concerned about the perf of the encoders compared to other
aspects of the HBase client API, but these things do tend to accumulate in
aggregate.)

As for Phoenix, it implements its own encodings and does not use
OrderedBytes. Phoenix does use the DataType interface, however, and could
be replumbed to use OrderedBytes, though don't underestimate the amount of
work this would require.

-n

On Thursday, October 13, 2016, Daniel Vimont  wrote:

> Thanks for checking into that, Ted. As I said, hopefully Nick D. can give
> us the final word on this question.
>
> On Fri, Oct 14, 2016 at 10:53 AM, Ted Yu  > wrote:
>
> > I searched Phoenix code base - there is no reference to OrderedBytes.
> >
> > On Thu, Oct 13, 2016 at 6:45 PM, Daniel Vimont  >
> > wrote:
> >
> > > Agreed, Ted. But I wanted to be sure there wasn't some hidden reason
> for
> > > the current "assert"-only code. The only other possibility that came to
> > > mind is that there may be some interoperability issue(s) with external
> > > consumers of OrderedBytes (such as Phoenix) which requires that no
> > > exception be thrown by some of the #decode methods. Hoping that Nick D.
> > can
> > > perhaps provide the final word on this.
> > >
> > >
> > > On Fri, Oct 14, 2016 at 9:29 AM, Ted Yu  > wrote:
> > >
> > > > I think throwing exception should be the action to take.
> > > >
> > > > Relying on assertion is not robust.
> > > >
> > > > > On Oct 13, 2016, at 5:06 PM, Daniel Vimont  >
> > > wrote:
> > > > >
> > > > > I'm currently looking into the various implementations of
> `DataType`
> > in
> > > > > hbase-common, and I just wanted to get confirmation of something
> > > before I
> > > > > open up a JIRA and fix what **appear** to be bugs in the underlying
> > > > > OrderedBytes
> > > > > code.
> > > > >
> > > > > All `DataType` implementations have their own overrides of the
> > > `#decode`
> > > > > method. Some of these throw an appropriate exception when an
> invalid
> > > > > byte-array is passed to them; for example:
> > > > >
> > > > > *Number bogusNumeric = OrderedNumeric.ASCENDING.decode(new
> > > > > SimplePositionedMutableByteRange(Bytes.toBytes("xyzpdq")));*
> > > > >
> > > > > (This throws an IllegalArgumentException: "unexpected value in
> first
> > > > byte:
> > > > > 0x78".)
> > > > >
> > > > > But for other implementations, *no* validation is done; for
> example:
> > > > >
> > > > > *Short bogusShort = OrderedInt16.ASCENDING.decode(new
> > > > > SimplePositionedMutableByteRange(Bytes.toBytes("xyzpdq")));*
> > > > >
> > > > > (This returns a short value of 1669, without complaint -- by
> ignoring
> > > the
> > > > > first invalid "header" byte and constructing the value 1669 from
> the
> > > two
> > > > > bytes that follow.)
> > > > >
> > > > > In those implementations which lack validation, there are "assert"
> > > > > statements in the place where I would expect an exception to be
> > > > explicitly
> > > > > thrown (or, in the context of OrderedBytes, one would expect the
> > > > > #unexpectedHeader
> > > > > method to be invoked, which in turn throws the exception). I just
> > > wanted
> > > > to
> > > > > check to make sure whether (perhaps for the sake of extreme
> > > efficiency?)
> > > > > some validations in HBase low-level processing are intentionally
> > being
> > > > done
> > > > > via bypassable "assert" statements instead of the throwing of
> > > exceptions.
> > > > >
> > > > > Thanks!
> > > > >
> > > > > Dan
> > > > >
> > > > >
> > > >
> > >
> >
>
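
For reference, a minimal Java sketch of the assert-versus-exception distinction
discussed in the quoted thread (the names are hypothetical, not the actual
OrderedBytes code):
{code}
// Bypassable: only enforced when the JVM is started with -ea/-enableassertions.
static void checkHeaderWithAssert(byte header, byte expectedHeader) {
  assert header == expectedHeader : "unexpected header byte";
}

// Always enforced, regardless of JVM flags.
static void checkHeaderWithException(byte header, byte expectedHeader) {
  if (header != expectedHeader) {
    throw new IllegalArgumentException(
        "unexpected value in first byte: 0x" + Integer.toHexString(header & 0xff));
  }
}
{code}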


Re: [ANNOUNCE] Stephen Yuan Jiang joins Apache HBase PMC

2016-10-24 Thread Nick Dimiduk
Congratulations Stephen, and thank you for the continued effort!

On Friday, October 14, 2016, Enis Söztutar  wrote:

> On behalf of the Apache HBase PMC, I am happy to announce that Stephen has
> accepted our invitation to become a PMC member of the Apache HBase project.
>
> Stephen has been working on HBase for a couple of years, and has been a
> committer for more than a year. Apart from his contributions in proc v2,
> hbck and other areas, he is also helping with the 2.0 release, which is the
> most important milestone for the project this year.
>
> Welcome to the PMC Stephen,
> Enis
>


[jira] [Reopened] (HBASE-16929) Move default method of shipped to Shipper interface

2016-10-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reopened HBASE-16929:


> Move default method of shipped to Shipper interface
> ---
>
> Key: HBASE-16929
> URL: https://issues.apache.org/jira/browse/HBASE-16929
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 16929.v1.txt
>
>
> HBASE-16626 added a default method for shipped() to RegionScanner.
> However, when building master branch of Phoenix against 2.0 SNAPSHOT, I got:
> {code}
> [ERROR] 
> /a/phoenix/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionScanner.java:[27,8]
>  org.apache.phoenix.coprocessor.DelegateRegionScanner is not abstract and 
> does not override  abstract method shipped() in 
> org.apache.hadoop.hbase.regionserver.Shipper
> [ERROR] 
> /a/phoenix/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java:[344,36]
>   is 
> not abstract and does not override abstract method shipped() in 
> org.apache.hadoop.hbase.regionserver.Shipper
> {code}
> Here is the snippet for DelegateRegionScanner:
> {code}
> public class DelegateRegionScanner implements RegionScanner {
> {code}
> It seems adding the default method in RegionScanner is not enough for
> downstream projects.
> After moving the default method to the Shipper interface, the above two
> compilation errors are gone in Phoenix.
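
For illustration, a minimal sketch (simplified names and bodies, not the actual
HBase interfaces) of how a default method on the super-interface covers every
implementer of the sub-interface:
{code}
// Simplified illustration only.
interface Shipper {
  // With the default here, every implementer of RegionScanner (for example a
  // delegating scanner in a downstream project) inherits shipped() and
  // compiles without overriding it.
  default void shipped() {
    // no-op by default
  }
}

interface RegionScanner extends Shipper {
  // scanner methods ...
}

// Compiles even though it never overrides shipped().
class DelegateRegionScanner implements RegionScanner {
}
{code}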



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16933) Calls to ObserverContext#bypass in HRegion#processRowsWithLocks is inconsistent

2016-10-24 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-16933:
-

 Summary: Calls to ObserverContext#bypass in 
HRegion#processRowsWithLocks is inconsistent
 Key: HBASE-16933
 URL: https://issues.apache.org/jira/browse/HBASE-16933
 Project: HBase
  Issue Type: Bug
Reporter: ChiaPing Tsai
Priority: Minor


The scenario is similar to 
[HBASE-15417|https://issues.apache.org/jira/browse/HBASE-15417].
The MultiRowMutationProcessor makes incorrect use of bypassed mutations.

This patch makes some incompatible changes, shown below.

# If any mutation is bypassed, all mutations will be bypassed, because the
RowMutations is an atomic operation (see the client-side sketch below).
# No CP post-hook (for example postPut, postDelete, or
postBatchMutateIndispensably) will be called if all mutations are bypassed.
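
A minimal client-side sketch of why the RowMutations is atomic (assumes an
existing Table instance named table; table, family, and row names are
hypothetical):
{code}
// All mutations in a RowMutations are applied to the row together, so
// bypassing only part of them server-side would break that guarantee.
RowMutations rm = new RowMutations(Bytes.toBytes("row1"));
rm.add(new Put(Bytes.toBytes("row1"))
    .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"), Bytes.toBytes("v")));
rm.add(new Delete(Bytes.toBytes("row1"))
    .addColumns(Bytes.toBytes("cf"), Bytes.toBytes("b")));
table.mutateRow(rm);  // applied atomically on the single row
{code}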



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)