[jira] [Commented] (HBASE-14014) Explore row-by-row grouping options

2015-07-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632330#comment-14632330
 ] 

Lars Hofhansl commented on HBASE-14014:
---

Some more thoughts.
Would it be correct to batch edits for multiple WALKeys and mark the result with 
both the latest write time and the latest seqnum seen?
If so, I can freely recombine edits for the same table and source cluster ids 
and hence be able to group Cells by row across WALEdits.

I.e. if we had two edits, one with write time T1 and seqnum N1 and one with 
write time T2 and seqnum N2, both with cells for the same table, cluster ids, 
and rows, I would recombine these into a single WALEdit with write time T2 and 
seqnum N2. As a side effect it would also reduce the amount of data to be sent to the sink.
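
To make the grouping concrete, here is a minimal sketch of such a recombination, using simplified stand-in types instead of the real WALKey/WALEdit classes (the Key/Edit types and their fields are assumptions purely for illustration, not HBase API):
{code}
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class RecombineSketch {

  /** Stand-in for a WALKey: only the attributes the parent issue says must be preserved. */
  static final class Key {
    final String table;
    final List<String> clusterIds;
    final long writeTime;
    final long seqNum;
    Key(String table, List<String> clusterIds, long writeTime, long seqNum) {
      this.table = table; this.clusterIds = clusterIds;
      this.writeTime = writeTime; this.seqNum = seqNum;
    }
  }

  /** Stand-in for a WALEdit: opaque cell payloads grouped by row key. */
  static final class Edit {
    final NavigableMap<String, List<String>> cellsByRow = new TreeMap<String, List<String>>();
  }

  /**
   * Merge a batch of edits (assumed non-empty, all sharing the same table and cluster ids)
   * into one edit. The merged key carries the latest write time and the latest seqnum seen,
   * and cells are regrouped by row across the original edits.
   */
  static Map.Entry<Key, Edit> recombine(List<Map.Entry<Key, Edit>> batch) {
    long latestWriteTime = Long.MIN_VALUE;
    long latestSeqNum = Long.MIN_VALUE;
    Edit merged = new Edit();
    for (Map.Entry<Key, Edit> entry : batch) {
      latestWriteTime = Math.max(latestWriteTime, entry.getKey().writeTime);
      latestSeqNum = Math.max(latestSeqNum, entry.getKey().seqNum);
      for (Map.Entry<String, List<String>> rowCells : entry.getValue().cellsByRow.entrySet()) {
        List<String> cells = merged.cellsByRow.get(rowCells.getKey());
        if (cells == null) {
          cells = new ArrayList<String>();
          merged.cellsByRow.put(rowCells.getKey(), cells);
        }
        cells.addAll(rowCells.getValue());
      }
    }
    Key first = batch.get(0).getKey();
    return new AbstractMap.SimpleEntry<Key, Edit>(
        new Key(first.table, first.clusterIds, latestWriteTime, latestSeqNum), merged);
  }
}
{code}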


 Explore row-by-row grouping options
 ---

 Key: HBASE-14014
 URL: https://issues.apache.org/jira/browse/HBASE-14014
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: Lars Hofhansl

 See discussion in parent.
 We need to consider the following attributes of WALKey:
 * The cluster ids
 * Table Name
 * write time (here we could use the latest of any batch)
 * seqNum
 As long as we preserve these we can rearrange the cells between WALEdits. 
 Since seqNum is unique this will be a challenge. Currently it is not used, 
 but we shouldn't design anything that prevents us from providing stronger 
 ordering guarantees using seqNum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14065) ref guide section on release candidate generation refers to old doc files

2015-07-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632313#comment-14632313
 ] 

Hudson commented on HBASE-14065:


FAILURE: Integrated in HBase-TRUNK #6663 (See 
[https://builds.apache.org/job/HBase-TRUNK/6663/])
HBASE-14065 Correct doc file location references in documentation (busbey: rev 
338e39970ba8e4835733669b9252d073b2157b8a)
* src/main/asciidoc/_chapters/developer.adoc


 ref guide section on release candidate generation refers to old doc files
 -

 Key: HBASE-14065
 URL: https://issues.apache.org/jira/browse/HBASE-14065
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Sean Busbey
Assignee: Gabor Liptak
 Fix For: 2.0.0

 Attachments: HBASE-14065.1.patch


 currently it says to copy files from the master version of 
 {{src/main/docbkx}} which is incorrect since the move to asciidoc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14070) Hybrid Logical Clocks for HBase

2015-07-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632350#comment-14632350
 ] 

Lars Hofhansl commented on HBASE-14070:
---

Nice writeup.

What about replication? Are we trying to order events between clusters? I 
assume we won't: on the sink we just apply edits at the timestamps _at which 
they happened_, which may be in the past. So I think replication does not need 
special consideration.

What we could do is send the current PT,LT along with each replication RPC and 
thus keep the HLCs of the source and sink servers in sync.
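
For illustration, a minimal sketch of what keeping the clocks in sync on such an RPC could look like, following the published HLC update rules; the 64-bit packing used here (44 bits of physical ms, 20 bits of logical counter) is an assumption for the sketch, not necessarily what the attached design specifies:
{code}
/**
 * Minimal Hybrid Logical Clock sketch. Packing (assumed for illustration):
 * upper 44 bits = physical time in ms, lower 20 bits = logical counter.
 */
public class HlcSketch {
  private long l;  // physical component of the last HLC value (ms)
  private long c;  // logical counter

  /** Advance the clock for a local or send event and return the packed timestamp. */
  public synchronized long now() {
    long pt = System.currentTimeMillis();
    long oldL = l;
    l = Math.max(oldL, pt);
    c = (l == oldL) ? c + 1 : 0;
    return pack(l, c);
  }

  /** Merge a remote timestamp (e.g. piggybacked on a replication RPC) into this clock. */
  public synchronized long update(long remote) {
    long pt = System.currentTimeMillis();
    long ml = physical(remote);
    long mc = logical(remote);
    long oldL = l;
    l = Math.max(Math.max(oldL, ml), pt);
    if (l == oldL && l == ml) {
      c = Math.max(c, mc) + 1;
    } else if (l == oldL) {
      c = c + 1;
    } else if (l == ml) {
      c = mc + 1;
    } else {
      c = 0;
    }
    return pack(l, c);
  }

  private static long pack(long physicalMs, long counter) { return (physicalMs << 20) | (counter & 0xFFFFF); }
  private static long physical(long ts) { return ts >>> 20; }
  private static long logical(long ts) { return ts & 0xFFFFF; }
}
{code}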

 Hybrid Logical Clocks for HBase
 ---

 Key: HBASE-14070
 URL: https://issues.apache.org/jira/browse/HBASE-14070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HybridLogicalClocksforHBaseandPhoenix.docx, 
 HybridLogicalClocksforHBaseandPhoenix.pdf


 HBase and Phoenix use the system's physical clock (PT) to assign timestamps to 
 events (reads and writes). This mostly works when the system clock is strictly 
 monotonically increasing and there is no cross-dependency between servers' 
 clocks. However, we know that leap seconds, general clock skew and clock drift 
 are in fact real. 
 This jira proposes using Hybrid Logical Clocks (HLC), an implementation of a 
 hybrid physical clock + logical clock. HLC is the best of both worlds: it 
 keeps causality relationships similar to logical clocks, but is still 
 compatible with an NTP-based physical system clock. An HLC timestamp can be 
 represented in 64 bits. 
 A design document is attached and also can be found here: 
 https://docs.google.com/document/d/1LL2GAodiYi0waBz5ODGL4LDT4e_bXy8P9h6kWC05Bhw/edit#



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14067) bundle ruby files for hbase shell into a jar.

2015-07-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632332#comment-14632332
 ] 

Lars Hofhansl commented on HBASE-14067:
---

Generally I think that's a good idea. What's the specific advantage?

 bundle ruby files for hbase shell into a jar.
 -

 Key: HBASE-14067
 URL: https://issues.apache.org/jira/browse/HBASE-14067
 Project: HBase
  Issue Type: Improvement
  Components: shell
Reporter: Sean Busbey
 Fix For: 2.0.0, 1.3.0, 0.98.15


 We currently package all the ruby scripts for the hbase shell by placing them 
 in a directory within lib/. We should be able to put these in a jar file 
 since we rely on jruby.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14119) Show error message instead of stack traces in hbase shell commands

2015-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632414#comment-14632414
 ] 

Hadoop QA commented on HBASE-14119:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745927/HBASE-14119.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12745927

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14822//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14822//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14822//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14822//console

This message is automatically generated.

 Show error message instead of stack traces in hbase shell commands
 --

 Key: HBASE-14119
 URL: https://issues.apache.org/jira/browse/HBASE-14119
 Project: HBase
  Issue Type: Bug
Reporter: Apekshit Sharma
Assignee: Apekshit Sharma
Priority: Minor
 Attachments: HBASE-14119.patch


 This isn't really a functional bug, just more about erroring out cleanly.
 * the shell commands assign, move, unassign and merge_region can throw the 
 following error if given an invalid argument:
 {noformat}
 hbase(main):032:0> unassign 'adsfdsafdsa'
 ERROR: org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.UnknownRegionException: adsfdsafdsa
   at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1562)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
 Here is some help for this command:
 Unassign a region. Unassign will close region in current location and then
 reopen it again.  Pass 'true' to force the unassignment ('force' will clear
 all in-memory state in master before the reassign. If results in
 double assignment use hbck -fix to resolve. To be used by experts).
 Use with caution.  For expert use only.  Examples:
   hbase> unassign 'REGIONNAME'
   hbase> unassign 'REGIONNAME', true
 hbase(main):033:0> 
 {noformat}
 * drop_namespace and describe_namespace throw stack traces too.
 {noformat}
 hbase(main):002:0> drop_namespace SDf
 ERROR: org.apache.hadoop.hbase.NamespaceNotFoundException: SDf
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.remove(TableNamespaceManager.java:175)
   at 
 org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2119)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:430)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44279)
   at 

[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632420#comment-14632420
 ] 

Lars Hofhansl commented on HBASE-13992:
---

bq. As for the Result you do get the results but you need to convert it to 
something that can go into an RDD.

The TableInputFormat will return tuples of byte[] -> Result; that's also what 
one would get when using a HadoopRDD with TableInputFormat.
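
For reference, a minimal sketch of that usage with the stock MapReduce TableInputFormat (the key type is ImmutableBytesWritable, i.e. a thin wrapper around the row byte[]); this assumes the usual hbase-server and spark-core dependencies on the classpath:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HBaseRddSketch {
  /** Each RDD element is (row key wrapper, Result), just like a plain MR job over the table. */
  public static JavaPairRDD<ImmutableBytesWritable, Result> hbaseRdd(JavaSparkContext sc, String table) {
    Configuration conf = HBaseConfiguration.create();
    conf.set(TableInputFormat.INPUT_TABLE, table);  // which table to scan
    return sc.newAPIHadoopRDD(conf, TableInputFormat.class,
        ImmutableBytesWritable.class, Result.class);
  }
}
{code}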

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.patch, HBASE-13992.patch.3, 
 HBASE-13992.patch.4


 This Jira is to ask if SparkOnHBase can find a home inside HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0.
 A blog post explaining this project is here:
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A Spark Streaming example is here:
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in production is blogged about here:
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12374:
---
Status: Patch Available  (was: Open)

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch


 Once we change the read path to use ByteBuffer (BB) based cells, the DBEs should 
 also return BB based cells. Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12374:
---
Attachment: HBASE-12374_v1.patch

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch


 Once we change the read path to use ByteBuffer (BB) based cells, the DBEs should 
 also return BB based cells. Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14119) Show error message instead of stack traces in hbase shell commands

2015-07-18 Thread Apekshit Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apekshit Sharma updated HBASE-14119:

Status: Patch Available  (was: In Progress)

 Show error message instead of stack traces in hbase shell commands
 --

 Key: HBASE-14119
 URL: https://issues.apache.org/jira/browse/HBASE-14119
 Project: HBase
  Issue Type: Bug
Reporter: Apekshit Sharma
Assignee: Apekshit Sharma
Priority: Minor
 Attachments: HBASE-14119.patch


 This isn't really a functional bug, just more about erroring out cleanly.
 * the shell commands assign, move, unassign and merge_region can throw the 
 following error if given an invalid argument:
 {noformat}
 hbase(main):032:0> unassign 'adsfdsafdsa'
 ERROR: org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.UnknownRegionException: adsfdsafdsa
   at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1562)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
 Here is some help for this command:
 Unassign a region. Unassign will close region in current location and then
 reopen it again.  Pass 'true' to force the unassignment ('force' will clear
 all in-memory state in master before the reassign. If results in
 double assignment use hbck -fix to resolve. To be used by experts).
 Use with caution.  For expert use only.  Examples:
   hbase> unassign 'REGIONNAME'
   hbase> unassign 'REGIONNAME', true
 hbase(main):033:0> 
 {noformat}
 * drop_namespace and describe_namespace throw stack traces too.
 {noformat}
 hbase(main):002:0> drop_namespace SDf
 ERROR: org.apache.hadoop.hbase.NamespaceNotFoundException: SDf
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.remove(TableNamespaceManager.java:175)
   at 
 org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2119)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:430)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44279)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Drop the named namespace. The namespace must be empty.
 {noformat}
 * fix error message in close_region
 {noformat}
 hbase(main):007:0> close_region sdf
 ERROR: sdf
 {noformat}
 * delete_snapshot throws exception too.
 {noformat}
 ERROR: org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: 
 Snapshot 'sdf' doesn't exist on the filesystem
   at 
 org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:270)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:452)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44261)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Delete a specified snapshot. Examples:
   hbase> delete_snapshot 'snapshotName',
 {noformat}
 Other commands, when given bogus arguments, tend to fail cleanly and not 
 leave a stack trace in the output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632396#comment-14632396
 ] 

Anoop Sam John commented on HBASE-13954:


Thrift - removal of the deprecated API can be done as part of another issue. 
Fine.
+1. Let us get this cleanup in.

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.
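 For context (not part of this patch), a minimal sketch of how a caller can approximate 
 the removed getRowOrBefore(row, family) behavior with a reversed scan on the standard 
 1.0+ client API:
 {code}
 import java.io.IOException;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.Table;

 public class RowOrBeforeSketch {
   /** Returns the given row, or the closest row before it, restricted to one family. */
   public static Result rowOrBefore(Table table, byte[] row, byte[] family) throws IOException {
     Scan scan = new Scan(row);   // start at the requested row...
     scan.setReversed(true);      // ...and walk backwards
     scan.addFamily(family);
     scan.setCaching(1);
     scan.setSmall(true);         // hint that this is a short, single-result scan
     try (ResultScanner scanner = table.getScanner(scan)) {
       return scanner.next();     // null if there is no row <= the requested one
     }
   }
 }
 {code}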



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12945) Port: New master API to track major compaction completion to 0.98

2015-07-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-12945.
---
   Resolution: Won't Fix
Fix Version/s: (was: 0.98.14)

Looks like there's no interest. Closing.

 Port: New master API to track major compaction completion to 0.98
 -

 Key: HBASE-12945
 URL: https://issues.apache.org/jira/browse/HBASE-12945
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13643) Follow Google to get more 9's

2015-07-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632368#comment-14632368
 ] 

Lars Hofhansl commented on HBASE-13643:
---

So what's to do here?

* The memstore is already not flushed when it is empty (there's a little bit of 
preprocessing going on, but that looks pretty lightweight)
* How precise is the last seqid we keep around? Specifically can we instantly 
tell when onlining a region that there are guaranteed no logs to be replayed 
even before we split the logs?
* We could perhaps allocate the MSLAB lazily; an unused memstore would then not 
consume any heap until it is used (see the sketch below).
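
A minimal sketch of the lazy-allocation idea (hypothetical type and field names, not the actual MemStore code):
{code}
/** Sketch: defer MSLAB allocation until the first write hits the memstore. */
class LazyMslabHolder {

  /** Placeholder standing in for the real MSLAB; only here so the sketch compiles. */
  static final class MslabStub {
    final byte[] chunk;
    MslabStub(int chunkSize) { chunk = new byte[chunkSize]; }
  }

  private final int chunkSize;
  private volatile MslabStub lab;   // stays null, i.e. costs no heap, until first use

  LazyMslabHolder(int chunkSize) { this.chunkSize = chunkSize; }

  /** Create the MSLAB on the first write; subsequent calls return the cached instance. */
  MslabStub getOrCreate() {
    MslabStub local = lab;
    if (local == null) {
      synchronized (this) {
        local = lab;
        if (local == null) {
          lab = local = new MslabStub(chunkSize);   // heap consumed only once the region is written to
        }
      }
    }
    return local;
  }
}
{code}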

 Follow Google to get more 9's
 -

 Key: HBASE-13643
 URL: https://issues.apache.org/jira/browse/HBASE-13643
 Project: HBase
  Issue Type: Improvement
Reporter: Elliott Clark

 Ideas taken shamelessly from Google's HBasecon talk 
 (http://hbasecon.com/agenda/):
 On failover all regions are unavailable for reads (and sometimes writes) until 
 after all write ahead logs have been recovered. To combat that, the last 
 flushed seqid is kept around.
 Google took this one step further and set some regions (Tablets in BigTable) 
 as read only. Setting a region as read only means there's no memstore. No 
 need to flush before move, split, or merge.
 In addition to the wins that Google got, HBase would also be able to shed 
 some memory pressure. Right now every region gets a memstore and with that 
 memstore comes an MSLAB. Read only regions would not need these added objects. 
 This should allow a regionserver to host lots of cold regions without too 
 much memory pressure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13389) [REGRESSION] HBASE-12600 undoes skip-mvcc parse optimizations

2015-07-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632376#comment-14632376
 ] 

Lars Hofhansl commented on HBASE-13389:
---

So where are we with this?

To answer your question above [~stack], in the subtask I just put part of 
the optimization back, namely: if all involved HFiles have a max timestamp of 0, 
then there is no need to write the timestamp into the new HFile (as all would 
be 0 anyway).
(Previously it did that if all timestamps were older than the oldest running 
scanner, but as discussed here, we can't do that any longer.)

So how do we proceed with this one?

 [REGRESSION] HBASE-12600 undoes skip-mvcc parse optimizations
 -

 Key: HBASE-13389
 URL: https://issues.apache.org/jira/browse/HBASE-13389
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Reporter: stack
 Attachments: 13389.txt


 HBASE-12600 moved the edit sequenceid from tags to instead exploit the 
 mvcc/sequenceid slot in a key. Now Cells near-always have an associated 
 mvcc/sequenceid where previously it was rare or the mvcc was kept up at the 
 file level. This is sort of how it should be, many of us would argue, but as a 
 side effect, read-time optimizations that helped speed scans 
 were undone by this change.
 In this issue, let's see if we can get the optimizations back -- or just 
 remove the optimizations altogether.
 Parsing the mvcc/sequenceid is expensive. It was noticed over in HBASE-13291.
 The optimizations undone by this change are (to quote the optimizer himself, 
 Mr. [~lhofhansl]):
 {quote}
 Looks like this undoes all of HBASE-9751, HBASE-8151, and HBASE-8166.
 We're always storing the mvcc readpoints, and we never compare them against 
 the actual smallestReadpoint, and hence we're always performing all the 
 checks, tests, and comparisons that these jiras removed in addition to 
 actually storing the data - which with up to 8 bytes per Cell is not trivial.
 {quote}
 This is the 'breaking' change: 
 https://github.com/apache/hbase/commit/2c280e62530777ee43e6148fd6fcf6dac62881c0#diff-07c7ac0a9179cedff02112489a20157fR96



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13738) Scan with RAW type for increment data insertions is displaying only latest two KV's

2015-07-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632391#comment-14632391
 ] 

Lars Hofhansl commented on HBASE-13738:
---

Looks like we're fixing another problem now. The behavior described with 
VERSIONS=1 is expected. If there's a problem and it keeps two versions when it 
should only keep one, that's a different problem, and not a correctness issue.

Maybe the second issue is already fixed by HBASE-12931?

 Scan with RAW type for increment data insertions is displaying only latest 
 two KV's 
 

 Key: HBASE-13738
 URL: https://issues.apache.org/jira/browse/HBASE-13738
 Project: HBase
  Issue Type: Bug
  Components: Scanners
 Environment: Suse 11 SP3
Reporter: neha
Assignee: Pankaj Kumar
Priority: Minor
 Attachments: HBASE-13738.patch


 [Scenario for reproducing]:
 1. Create an HBase table with a single column family, keeping versions=1 
 (the default).
 2. Perform increment insertions more than 2 times for the same row and the 
 same qualifier.
 3. Scan the table with raw => true and versions => 10:
 {code}
 scan 'tbl', {RAW => TRUE, VERSIONS => 10}
 {code}
 Expected result:
 ===
 A raw scan should return all the versions until the table is flushed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14120) ByteBufferUtils#compareTo small optimization

2015-07-18 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-14120:
--

 Summary: ByteBufferUtils#compareTo small optimization
 Key: HBASE-14120
 URL: https://issues.apache.org/jira/browse/HBASE-14120
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0


We have it like
{code}
if (UnsafeAccess.isAvailable()) {
  long offset1Adj, offset2Adj;
  Object refObj1 = null, refObj2 = null;
  if (buf1.hasArray()) {
    offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
    refObj1 = buf1.array();
  } else {
    offset1Adj = o1 + ((DirectBuffer) buf1).address();
  }
  if (buf2.hasArray()) {
{code}
Instead of the hasArray() check we can do an isDirect() check and reverse the if/else 
block, because we will be making BB backed cells only when the backing BB is offheap. So 
when the code reaches here for comparison, it will be a direct BB.

A JMH test proves it:
{code}
Benchmark                           Mode  Cnt         Score         Error  Units
OnHeapVsOffHeapComparer.offheap    thrpt    4  50516432.643 ±  651828.103  ops/s
OnHeapVsOffHeapComparer.offheapOld  thrpt   4  37696698.093 ± 1121685.293  ops/s
{code}
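
The proposed reordering would look roughly like this for buf1 (a sketch of the idea mirroring the snippet above, not the attached patch; buf2 gets the same treatment):
{code}
if (UnsafeAccess.isAvailable()) {
  long offset1Adj, offset2Adj;
  Object refObj1 = null, refObj2 = null;
  if (buf1.isDirect()) {
    // Common case once cells are backed by offheap (direct) ByteBuffers:
    // nothing to unwrap, just use the buffer's native address.
    offset1Adj = o1 + ((DirectBuffer) buf1).address();
  } else {
    offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
    refObj1 = buf1.array();
  }
  // ... same reordering for buf2 ...
}
{code}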




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14120) ByteBufferUtils#compareTo small optimization

2015-07-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14120:
---
Status: Patch Available  (was: Open)

 ByteBufferUtils#compareTo small optimization
 

 Key: HBASE-14120
 URL: https://issues.apache.org/jira/browse/HBASE-14120
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14120.patch


 We have it like
 {code}
 if (UnsafeAccess.isAvailable()) {
   long offset1Adj, offset2Adj;
   Object refObj1 = null, refObj2 = null;
   if (buf1.hasArray()) {
     offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
     refObj1 = buf1.array();
   } else {
     offset1Adj = o1 + ((DirectBuffer) buf1).address();
   }
   if (buf2.hasArray()) {
 {code}
 Instead of the hasArray() check we can do an isDirect() check and reverse the if/else 
 block, because we will be making BB backed cells only when the backing BB is offheap. 
 So when the code reaches here for comparison, it will be a direct BB.
 A JMH test proves it:
 {code}
 Benchmark                           Mode  Cnt         Score         Error  Units
 OnHeapVsOffHeapComparer.offheap    thrpt    4  50516432.643 ±  651828.103  ops/s
 OnHeapVsOffHeapComparer.offheapOld  thrpt   4  37696698.093 ± 1121685.293  ops/s
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-5210) HFiles are missing from an incremental load

2015-07-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-5210.
--
Resolution: Cannot Reproduce

Closing for now

 HFiles are missing from an incremental load
 ---

 Key: HBASE-5210
 URL: https://issues.apache.org/jira/browse/HBASE-5210
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.2
 Environment: HBase 0.90.2 with Hadoop-0.20.2 (with durable sync).  
 RHEL 2.6.18-164.15.1.el5.  4 node cluster (1 master, 3 slaves)
Reporter: Lawrence Simpson
 Attachments: HBASE-5210-crazy-new-getRandomFilename.patch


 We run an overnight map/reduce job that loads data from an external source 
 and adds that data to an existing HBase table.  The input files have been 
 loaded into hdfs.  The map/reduce job uses the HFileOutputFormat (and the 
 TotalOrderPartitioner) to create HFiles which are subsequently added to the 
 HBase table.  On at least two separate occasions (that we know of), a range 
 of output would be missing for a given day.  The range of keys for the 
 missing values corresponded to those of a particular region.  This implied 
 that a complete HFile somehow went missing from the job.  Further 
 investigation revealed the following:
  Two different reducers (running in separate JVMs and thus separate class 
 loaders) in the same server can end up using the same file names for their 
 HFiles. The scenario is as follows:
  1. Both reducers start near the same time.
  2. The first reducer reaches the point where it wants to write its first file.
  3. It uses the StoreFile class, which contains a static Random object that is 
 initialized by default using a timestamp.
  4. The file name is generated using the random number generator.
  5. The file name is checked against other existing files.
  6. The file is written into temporary files in a directory named after the 
 reducer attempt.
  7. The second reduce task reaches the same point, but its StoreFile class 
 (which is now in the file system's cache) gets loaded within the time 
 resolution of the OS and thus initializes its Random() object with the same 
 seed as the first task.
  8. The second task also checks for an existing file with the name generated 
 by the random number generator and finds no conflict because each task is 
 writing files in its own temporary folder.
  9. The first task finishes and gets its temporary files committed to the real 
 folder specified for output of the HFiles.
  10. The second task then reaches its own conclusion and commits its files 
 (moveTaskOutputs). The released Hadoop code just overwrites any files with the 
 same name. No warning messages or anything. The first task's HFiles just go 
 missing.
  Note: The reducers here are NOT different attempts at the same reduce task. 
 They are different reduce tasks, so data is really lost.
 I am currently testing a fix in which I have added code to the Hadoop 
 FileOutputCommitter.moveTaskOutputs method to check for a conflict with
 an existing file in the final output folder and to rename the HFile if
 needed.  This may not be appropriate for all uses of FileOutputFormat.
 So I have put this into a new class which is then used by a subclass of
 HFileOutputFormat.  Subclassing of FileOutputCommitter itself was a bit 
 more of a problem due to private declarations.
 I don't know if my approach is the best fix for the problem.  If someone
 more knowledgeable than myself deems that it is, I will be happy to share
 what I have done and by that time I may have some information on the
 results.
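 For illustration only, the conflict check described above could look roughly like this 
 (a sketch using the standard FileSystem API; the class, method, and names are 
 placeholders, not the fix mentioned above or the attached patch):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class CommitConflictSketch {
   /**
    * Move one task-output file into the final output folder, renaming it if a file
    * with the same name already exists instead of silently overwriting it.
    */
   static Path moveWithoutClobber(FileSystem fs, Path taskOutput, Path finalDir) throws IOException {
     Path target = new Path(finalDir, taskOutput.getName());
     int attempt = 0;
     while (fs.exists(target)) {
       // Another task already committed a file with this (randomly generated) name.
       target = new Path(finalDir, taskOutput.getName() + "_" + (++attempt));
     }
     if (!fs.rename(taskOutput, target)) {
       throw new IOException("Failed to rename " + taskOutput + " to " + target);
     }
     return target;
   }
 }
 {code}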



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13318) RpcServer.Listener.getAddress should be synchronized

2015-07-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632384#comment-14632384
 ] 

Lars Hofhansl commented on HBASE-13318:
---

This is very hard to reproduce... Interestingly getListenerAddress is only 
called from CallRunner.run in two places and both just attempt to report the 
address (in an exception or a warning message).

Looks like this exception happened here:
{code}
} catch (ClosedChannelException cce) {
  RpcServer.LOG.warn(Thread.currentThread().getName() + ": caught a ClosedChannelException, " +
      "this means that the server " + rpcServer.getListenerAddress() +
      " was processing a request but the client went away. The error message was: " +
      cce.getMessage());
{code}

[~ndimiduk] this came with HBASE-12825, mind having a quick look? The gist is 
that we can only get the listenerAddress as long as the acceptChannel exists. 
If the client goes away, the channel might have been closed in the listener 
thread before we can catch the exception... I think.
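
For illustration, the kind of change the summary suggests could look roughly like this (a simplified, hypothetical sketch, not the actual RpcServer$Listener code):
{code}
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

class ListenerSketch {
  private ServerSocketChannel acceptChannel;  // nulled out under 'this' lock when the listener exits

  /**
   * Null-safe, synchronized variant: returns null instead of throwing an NPE when
   * the accept channel has already been closed (listener thread has exited).
   */
  synchronized InetSocketAddress getAddress() {
    if (acceptChannel == null) {
      return null;
    }
    return (InetSocketAddress) acceptChannel.socket().getLocalSocketAddress();
  }
}
{code}
The two reporting call sites in CallRunner.run would then have to tolerate a null address when building their messages.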


 RpcServer.Listener.getAddress should be synchronized
 

 Key: HBASE-13318
 URL: https://issues.apache.org/jira/browse/HBASE-13318
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.10.1
Reporter: Lars Hofhansl
Priority: Minor
  Labels: thread-safety

 We just saw exceptions like these:
 {noformat}
 Exception in thread B.DefaultRpcServer.handler=45,queue=0,port=60020 
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:753)
   at 
 org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2157)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:146)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 Looks like RpcServer$Listener.getAddress should be synchronized 
 (acceptChannel is set to null upon exiting the thread, in a synchronized 
 block).
 This should be happening only very rarely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12748) RegionCoprocessorHost.execOperation creates too many iterator objects

2015-07-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-12748:
--
Fix Version/s: (was: 0.94.28)

Removing from 0.94.

 RegionCoprocessorHost.execOperation creates too many iterator objects
 -

 Key: HBASE-12748
 URL: https://issues.apache.org/jira/browse/HBASE-12748
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.94.25, 0.98.9
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov
 Fix For: 2.0.0, 0.98.14, 1.0.3

 Attachments: HBase-12748.patch


 This is a typical pattern of enhanced for ... loop usage in a hot code path. 
 For every HBase operation it instantiates an iterator for the coprocessor list 
 twice. 
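 For illustration, the usual fix for this pattern (a sketch only, assuming the 
 coprocessor list is a RandomAccess ArrayList; not the attached patch):
 {code}
 import java.util.List;

 class ExecOperationSketch {
   interface ObserverOperation { void call(Object env); }

   /** An enhanced for loop allocates an Iterator per call; an indexed loop over a RandomAccess list does not. */
   static void execOperation(List<Object> coprocEnvironments, ObserverOperation op) {
     for (int i = 0, n = coprocEnvironments.size(); i < n; i++) {
       op.call(coprocEnvironments.get(i));
     }
   }
 }
 {code}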



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13981) Fix ImportTsv spelling and usage issues

2015-07-18 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HBASE-13981:
-
Status: Patch Available  (was: Open)

 Fix ImportTsv spelling and usage issues
 ---

 Key: HBASE-13981
 URL: https://issues.apache.org/jira/browse/HBASE-13981
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Gabor Liptak
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-13981.1.patch, HBASE-13981.2.patch, 
 HBASE-13981.3.patch, HBASE-13981.4.patch


 The {{ImportTsv}} tool has various spelling and formatting issues. Fix those.
 In code:
 {noformat}
   public final static String ATTRIBUTE_SEPERATOR_CONF_KEY = "attributes.seperator";
 {noformat}
 It is separator.
 In usage text:
 {noformat}
 input data. Another special columnHBASE_TS_KEY designates that this column 
 should be
 {noformat}
 Space missing.
 {noformat}
 Record with invalid timestamps (blank, non-numeric) will be treated as bad 
 record.
 {noformat}
 Records ... as bad records - plural missing twice.
 {noformat}
 HBASE_ATTRIBUTES_KEY can be used to specify Operation Attributes per record.
  Should be specified as key=value where -1 is used 
  as the seperator.  Note that more than one OperationAttributes can be 
 specified.
 {noformat}
 - Remove line wraps and indentation. 
 - Fix separator.
 - Fix wrong separator being output, it is not -1 (wrong constant use in 
 code)
 - General wording/style could be better (eg. last sentence now uses 
 OperationAttributes without a space).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14120) ByteBufferUtils#compareTo small optimization

2015-07-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14120:
---
Attachment: HBASE-14120.patch

 ByteBufferUtils#compareTo small optimization
 

 Key: HBASE-14120
 URL: https://issues.apache.org/jira/browse/HBASE-14120
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14120.patch


 We have it like
 {code}
 if (UnsafeAccess.isAvailable()) {
   long offset1Adj, offset2Adj;
   Object refObj1 = null, refObj2 = null;
   if (buf1.hasArray()) {
     offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
     refObj1 = buf1.array();
   } else {
     offset1Adj = o1 + ((DirectBuffer) buf1).address();
   }
   if (buf2.hasArray()) {
 {code}
 Instead of the hasArray() check we can do an isDirect() check and reverse the if/else 
 block, because we will be making BB backed cells only when the backing BB is offheap. 
 So when the code reaches here for comparison, it will be a direct BB.
 A JMH test proves it:
 {code}
 Benchmark                           Mode  Cnt         Score         Error  Units
 OnHeapVsOffHeapComparer.offheap    thrpt    4  50516432.643 ±  651828.103  ops/s
 OnHeapVsOffHeapComparer.offheapOld  thrpt   4  37696698.093 ± 1121685.293  ops/s
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632480#comment-14632480
 ] 

Hadoop QA commented on HBASE-12374:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745953/HBASE-12374_v1.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12745953

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.hfile.TestSeekTo

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14824//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14824//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14824//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14824//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14824//console

This message is automatically generated.

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch


 Once we change the read path to use ByteBuffer (BB) based cells, the DBEs should 
 also return BB based cells. Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632534#comment-14632534
 ] 

Ashish Singhi commented on HBASE-13954:
---

bq. can be done as part of another issue.
Created HBASE-14121 to handle this.

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632539#comment-14632539
 ] 

Ashish Singhi commented on HBASE-13954:
---

Attached v4 patch cleaning up the thrift side code as well.
Please review.

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13954:
--
Status: Open  (was: Patch Available)

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12374:
---
Attachment: HBASE-12374_v2.patch

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch


 Once we change the read path to use ByteBuffer (BB) based cells, the DBEs should 
 also return BB based cells. Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632599#comment-14632599
 ] 

Hadoop QA commented on HBASE-14099:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745965/HBASE-14099.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12745965

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14828//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14828//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14828//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14828//console

This message is automatically generated.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch


 During profiling we saw that this code in passesKeyRangeFilter in StoreFile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 need not copy these rows anymore, considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKV and lastKeyKV as part of other 
 JIRAs.
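 For illustration, the check could then take roughly this shape (a sketch only, written 
 against the compareRows(Cell, byte[]) helper named above; the exact signature, e.g. 
 offset/length variants, and the handling of empty start/stop rows are left out):
 {code}
 // Sketch only: the same overlap check without materializing KeyValues from the
 // scan's rows.
 byte[] smallestScanRow = scan.isReversed() ? scan.getStopRow() : scan.getStartRow();
 byte[] largestScanRow = scan.isReversed() ? scan.getStartRow() : scan.getStopRow();
 // The file overlaps the scan range iff its last key is not before the smallest
 // requested row and its first key is not after the largest requested row.
 boolean nonOverlapping = CellComparator.compareRows(lastKeyKV, smallestScanRow) < 0
     || CellComparator.compareRows(firstKeyKV, largestScanRow) > 0;
 return !nonOverlapping;
 {code}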



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632604#comment-14632604
 ] 

Hadoop QA commented on HBASE-12374:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745968/HBASE-12374_v2.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12745968

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1872 checkstyle errors (more than the master's current 1871 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.io.encoding.TestLoadAndSwitchEncodeOnDisk

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.camel.component.jpa.JpaWithNamedQueryTest.testProducerInsertsIntoDatabaseThenConsumerFiresMessageExchange(JpaWithNamedQueryTest.java:112)
at 
org.apache.camel.component.jpa.JpaWithNamedQueryTest.testProducerInsertsIntoDatabaseThenConsumerFiresMessageExchange(JpaWithNamedQueryTest.java:112)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14830//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14830//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14830//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14830//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14830//console

This message is automatically generated.

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch


 Once we change the read path to use ByteBuffer (BB) based cells, the DBEs should 
 also return BB based cells. Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632590#comment-14632590
 ] 

Hadoop QA commented on HBASE-13954:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745963/HBASE-13954-v4.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12745963

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 42 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  ualifier\030\002 
\003(\014\\336\002\n\003Get\022\013\n\003row\030\001 \002(\014\022 \n\006c +
+  \016\n\006exists\030\003 \001(\010\022\024\n\005stale\030\004 
\001(\010:\005false\022\026\n +
+  d\030\r \001(\010\022\r\n\005small\030\016 
\001(\010\022\027\n\010reversed\030\017 \001(\010 +
+  new java.lang.String[] { Row, Column, Attribute, Filter, 
TimeRange, MaxVersions, CacheBlocks, StoreLimit, StoreOffset, 
ExistenceOnly, Consistency, });
+getReverseScanResult(TableName.META_TABLE_NAME.getName(), row, 
HConstants.CATALOG_FAMILY);
+private static class getRegionInfo_argsStandardScheme extends 
StandardSchemegetRegionInfo_args {
+private static class getRegionInfo_resultStandardScheme extends 
StandardSchemegetRegionInfo_result {

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileScanning(TestHRegion.java:3796)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14827//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14827//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14827//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14827//console

This message is automatically generated.

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954-v5.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-07-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632592#comment-14632592
 ] 

Sean Busbey commented on HBASE-14085:
-

I have a working copy with correct licensing for our source artifact (barring a 
final confirmation of one bit of source code's provenance). Unfortunately, my 
first attempt at fixing things for our jars / binary artifact didn't pan out. 
I have a new approach I'm going to start on tonight.

If anyone would find it useful for us to be able to do source-only releases 
before Tuesday I can rearrange what I have to get the incremental improvement 
for the source artifact into master.

 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker

 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven say HBase - ${module} rather than 
 Apache HBase - ${module} as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632606#comment-14632606
 ] 

Hadoop QA commented on HBASE-13954:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745967/HBASE-13954-v5.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12745967

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 34 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  ualifier\030\002 
\003(\014\\336\002\n\003Get\022\013\n\003row\030\001 \002(\014\022 \n\006c +
+  \016\n\006exists\030\003 \001(\010\022\024\n\005stale\030\004 
\001(\010:\005false\022\026\n +
+  d\030\r \001(\010\022\r\n\005small\030\016 
\001(\010\022\027\n\010reversed\030\017 \001(\010 +
+  new java.lang.String[] { "Row", "Column", "Attribute", "Filter", "TimeRange", "MaxVersions", "CacheBlocks", "StoreLimit", "StoreOffset", "ExistenceOnly", "Consistency", });
+getReverseScanResult(TableName.META_TABLE_NAME.getName(), row, 
HConstants.CATALOG_FAMILY);
+private static class getRegionInfo_argsStandardScheme extends StandardScheme<getRegionInfo_args> {
+private static class getRegionInfo_resultStandardScheme extends StandardScheme<getRegionInfo_result> {

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14829//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14829//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14829//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14829//console

This message is automatically generated.

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954-v5.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-18 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632589#comment-14632589
 ] 

Ted Malaska commented on HBASE-13992:
-

[~lhofhansl] great idea, I will try to add that into the next patch, which 
should be out tomorrow.

Thanks for helping me through this

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.patch, HBASE-13992.patch.3, 
 HBASE-13992.patch.4


 This Jira is to ask if SparkOnHBase can find a home inside HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in production is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13981) Fix ImportTsv spelling and usage issues

2015-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632505#comment-14632505
 ] 

Hadoop QA commented on HBASE-13981:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745366/HBASE-13981.4.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12745366

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14823//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14823//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14823//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14823//console

This message is automatically generated.

 Fix ImportTsv spelling and usage issues
 ---

 Key: HBASE-13981
 URL: https://issues.apache.org/jira/browse/HBASE-13981
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Gabor Liptak
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-13981.1.patch, HBASE-13981.2.patch, 
 HBASE-13981.3.patch, HBASE-13981.4.patch


 The {{ImportTsv}} tool has various spelling and formatting issues. Fix those.
 In code:
 {noformat}
   public final static String ATTRIBUTE_SEPERATOR_CONF_KEY = 
 attributes.seperator;
 {noformat}
 It is separator.
 In usage text:
 {noformat}
 input data. Another special columnHBASE_TS_KEY designates that this column 
 should be
 {noformat}
 Space missing.
 {noformat}
 Record with invalid timestamps (blank, non-numeric) will be treated as bad 
 record.
 {noformat}
 Records ... as bad records - plural missing twice.
 {noformat}
 HBASE_ATTRIBUTES_KEY can be used to specify Operation Attributes per record.
  Should be specified as key=value where -1 is used 
  as the seperator.  Note that more than one OperationAttributes can be 
 specified.
 {noformat}
 - Remove line wraps and indentation. 
 - Fix separator.
 - Fix wrong separator being output, it is not -1 (wrong constant use in 
 code)
 - General wording/style could be better (eg. last sentence now uses 
 OperationAttributes without a space).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14120) ByteBufferUtils#compareTo small optimization

2015-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632519#comment-14632519
 ] 

Hadoop QA commented on HBASE-14120:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745954/HBASE-14120.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12745954

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:288)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:262)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testSnapshotWithRefsExportFileSystemState(TestExportSnapshot.java:256)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testSnapshotWithRefsExportFileSystemState(TestExportSnapshot.java:236)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:288)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:262)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testSnapshotWithRefsExportFileSystemState(TestExportSnapshot.java:256)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testSnapshotWithRefsExportFileSystemState(TestExportSnapshot.java:240)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14825//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14825//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14825//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14825//console

This message is automatically generated.

 ByteBufferUtils#compareTo small optimization
 

 Key: HBASE-14120
 URL: https://issues.apache.org/jira/browse/HBASE-14120
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14120.patch


 We have it like
 {code}
   if (UnsafeAccess.isAvailable()) {
     long offset1Adj, offset2Adj;
     Object refObj1 = null, refObj2 = null;
     if (buf1.hasArray()) {
       offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
       refObj1 = buf1.array();
     } else {
       offset1Adj = o1 + ((DirectBuffer) buf1).address();
     }
     if (buf2.hasArray()) {
 {code}
 Instead of hasArray() check we can have isDirect() check and reverse the if 
 else block. Because we will be making BB backed cells when it is offheap BB. 
 So when code reaches here for comparison, it will be direct BB.
 Doing JMH test proves it.
 {code}
 Benchmark                           Mode  Cnt         Score         Error  Units
 OnHeapVsOffHeapComparer.offheap    thrpt    4  50516432.643 ±  651828.103  ops/s
 OnHeapVsOffHeapComparer.offheapOld thrpt    4  37696698.093 ± 1121685.293  ops/s
 {code}
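
To make the reordering concrete, here is a minimal sketch (not the attached patch; the wrapper class and method name are made up, and the array base offset is passed in rather than read from UnsafeAccess): check isDirect() first, since cells backed by off-heap BBs are the common case on this path, and keep the array-backed branch as the fallback.
{code}
import java.nio.ByteBuffer;
import sun.nio.ch.DirectBuffer;

// Illustrative sketch only; names are placeholders, not the actual patch.
final class OffsetAdjSketch {
  static long offsetAdj(ByteBuffer buf, int pos, long byteArrayBaseOffset) {
    if (buf.isDirect()) {
      // Off-heap buffer: address-based offset, no backing array involved.
      return pos + ((DirectBuffer) buf).address();
    }
    // On-heap fallback: offset into the backing byte[] plus the array base
    // offset (e.g. UnsafeAccess.BYTE_ARRAY_BASE_OFFSET in the snippet above).
    return pos + buf.arrayOffset() + byteArrayBaseOffset;
  }
}
{code}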



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14121) Remove getRowOrBefore related thrift side code

2015-07-18 Thread Ashish Singhi (JIRA)
Ashish Singhi created HBASE-14121:
-

 Summary: Remove getRowOrBefore related thrift side code
 Key: HBASE-14121
 URL: https://issues.apache.org/jira/browse/HBASE-14121
 Project: HBase
  Issue Type: Sub-task
  Components: Thrift
Affects Versions: 2.0.0
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0


HBASE-13954 removes the HTableInterface#getRowOrBefore related server side 
code; the only thing pending now is the thrift side code, which should be done 
as part of this jira.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13954:
--
Attachment: HBASE-13954-v4.patch

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-18 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Attachment: HBASE-14099.patch

Attaching for QA.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch


 During profiling, I saw that this code in passesKeyRangeFilter in StoreFile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 copies the scan's start/stop rows into new KeyValues. These rows need not be 
 copied now, considering that we have CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKV and lastKeyKV as part of other 
 JIRAs.
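
As a rough illustration of the idea, a hedged sketch that compares the file's firstKeyKV/lastKeyKV directly against the scan's raw rows, assuming the CellComparator.compareRows(Cell, byte[]) overload mentioned above (the exact signature may differ) and ignoring the empty start/stop row handling of open-ended scans:
{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellComparator;
import org.apache.hadoop.hbase.client.Scan;

// Illustrative sketch only; not the attached patch.
final class KeyRangeFilterSketch {
  static boolean passesKeyRange(Scan scan, Cell firstKeyKV, Cell lastKeyKV,
      CellComparator comparator) {
    byte[] smallestRow = scan.isReversed() ? scan.getStopRow() : scan.getStartRow();
    byte[] largestRow = scan.isReversed() ? scan.getStartRow() : scan.getStopRow();
    // The file overlaps the scan range if its last key is not before the
    // smallest row and its first key is not after the largest row; no
    // intermediate KeyValue copies of the scan rows are created.
    return comparator.compareRows(lastKeyKV, smallestRow) >= 0
        && comparator.compareRows(firstKeyKV, largestRow) <= 0;
  }
}
{code}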



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-18 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Status: Patch Available  (was: Open)

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch


 During profiling, I saw that this code in passesKeyRangeFilter in StoreFile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 copies the scan's start/stop rows into new KeyValues. These rows need not be 
 copied now, considering that we have CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKV and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14116) Change ByteBuff.getXXXStrictlyForward to relative position based reads

2015-07-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14116:
---
Status: Patch Available  (was: Open)

 Change ByteBuff.getXXXStrictlyForward to relative position based reads
 --

 Key: HBASE-14116
 URL: https://issues.apache.org/jira/browse/HBASE-14116
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14116.patch


 There is a TODO in ByteBuff.getXXXStrictlyForward to change it to a 
 position-based read relative to the current position. This could help in 
 avoiding the initial check added in the API to ensure that the passed-in index 
 is always greater than the current position.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14116) Change ByteBuff.getXXXStrictlyForward to relative position based reads

2015-07-18 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632523#comment-14632523
 ] 

Anoop Sam John commented on HBASE-14116:


getXXXStrictlyForward takes an absolute position, and that position always has 
to be after the current position. Because of this restriction it is better to 
change the API to take an offset from the current position and read the 
primitive value at that position; the assumption then holds true by default. 
Attached a patch doing this.
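
To show the difference between the two flavours, a minimal sketch on a plain ByteBuffer (the real ByteBuff is a multi-buffer abstraction and its method names and signatures may differ; everything here is illustrative):
{code}
import java.nio.ByteBuffer;

// Illustrative sketch only; contrasts absolute vs. relative-offset reads.
final class RelativeReadSketch {
  private final ByteBuffer buf;

  RelativeReadSketch(ByteBuffer buf) {
    this.buf = buf;
  }

  /** Absolute style: the index must be validated against position() by the callee. */
  int getIntStrictlyForward(int absoluteIndex) {
    if (absoluteIndex < buf.position()) {
      throw new IndexOutOfBoundsException("index " + absoluteIndex
          + " is behind position " + buf.position());
    }
    return buf.getInt(absoluteIndex);
  }

  /** Relative style: the offset is from position(), so it is forward by construction. */
  int getIntAfterPosition(int offsetFromPosition) {
    return buf.getInt(buf.position() + offsetFromPosition);
  }
}
{code}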

 Change ByteBuff.getXXXStrictlyForward to relative position based reads
 --

 Key: HBASE-14116
 URL: https://issues.apache.org/jira/browse/HBASE-14116
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14116.patch


 There is a TODO added in ByteBuff.getXXXStrictlyForward to a positional based 
 read from the current position. This could help in avoiding the initial check 
 that is added in the API to ensure that the passed in index is always greater 
 than the current position.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14116) Change ByteBuff.getXXXStrictlyForward to relative position based reads

2015-07-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14116:
---
Attachment: HBASE-14116.patch

 Change ByteBuff.getXXXStrictlyForward to relative position based reads
 --

 Key: HBASE-14116
 URL: https://issues.apache.org/jira/browse/HBASE-14116
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14116.patch


 There is a TODO in ByteBuff.getXXXStrictlyForward to change it to a 
 position-based read relative to the current position. This could help in 
 avoiding the initial check added in the API to ensure that the passed-in index 
 is always greater than the current position.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14121) Remove getRowOrBefore related thrift side code

2015-07-18 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi resolved HBASE-14121.
---
   Resolution: Duplicate
 Assignee: (was: Ashish Singhi)
Fix Version/s: (was: 2.0.0)

Handled in HBASE-13954 itself.

 Remove getRowOrBefore related thrift side code
 --

 Key: HBASE-14121
 URL: https://issues.apache.org/jira/browse/HBASE-14121
 Project: HBase
  Issue Type: Sub-task
  Components: Thrift
Affects Versions: 2.0.0
Reporter: Ashish Singhi

 HBASE-13954 removes the HTableInterface#getRowOrBefore related server side 
 code; the only thing pending now is the thrift side code, which should be done 
 as part of this jira.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13981) Fix ImportTsv spelling and usage issues

2015-07-18 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632554#comment-14632554
 ] 

Ashish Singhi commented on HBASE-13981:
---

bq. I do not see references to ATTRIBUTE_SEPERATOR_CONF_KEY in the codebase, 
hence I didn't create a replacement define
What about the clients which are using this conf?
Normally, the practice when we deprecate something is to state why it was 
deprecated, since when it is deprecated, when we will remove it, and what the 
user should use in its place.
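
For illustration, a hedged sketch of that deprecation pattern; the class name, the versions in the javadoc and the corrected key are placeholders, not what was committed:
{code}
/**
 * Sketch of the deprecation pattern being discussed; names and versions are
 * illustrative only.
 */
public final class ImportTsvConfKeys {
  /** Correctly spelled replacement key (illustrative value). */
  public static final String ATTRIBUTE_SEPARATOR_CONF_KEY = "attributes.separator";

  /**
   * @deprecated misspelled; deprecated since 2.0.0 (for example), use
   *             {@link #ATTRIBUTE_SEPARATOR_CONF_KEY} instead; to be removed in
   *             a later major release. Readers should honor both keys meanwhile.
   */
  @Deprecated
  public static final String ATTRIBUTE_SEPERATOR_CONF_KEY = "attributes.seperator";

  private ImportTsvConfKeys() {
  }
}
{code}
ImportTsv would then read the new key and fall back to the deprecated one, so existing clients keep working for a release cycle.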

 Fix ImportTsv spelling and usage issues
 ---

 Key: HBASE-13981
 URL: https://issues.apache.org/jira/browse/HBASE-13981
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Gabor Liptak
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-13981.1.patch, HBASE-13981.2.patch, 
 HBASE-13981.3.patch, HBASE-13981.4.patch


 The {{ImportTsv}} tool has various spelling and formatting issues. Fix those.
 In code:
 {noformat}
   public final static String ATTRIBUTE_SEPERATOR_CONF_KEY = 
 attributes.seperator;
 {noformat}
 It is separator.
 In usage text:
 {noformat}
 input data. Another special columnHBASE_TS_KEY designates that this column 
 should be
 {noformat}
 Space missing.
 {noformat}
 Record with invalid timestamps (blank, non-numeric) will be treated as bad 
 record.
 {noformat}
 Records ... as bad records - plural missing twice.
 {noformat}
 HBASE_ATTRIBUTES_KEY can be used to specify Operation Attributes per record.
  Should be specified as key=value where -1 is used 
  as the seperator.  Note that more than one OperationAttributes can be 
 specified.
 {noformat}
 - Remove line wraps and indentation. 
 - Fix separator.
 - Fix wrong separator being output, it is not -1 (wrong constant use in 
 code)
 - General wording/style could be better (eg. last sentence now uses 
 OperationAttributes without a space).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-18 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632558#comment-14632558
 ] 

Anoop Sam John commented on HBASE-12374:


{quote}
-1 core tests. The patch failed these unit tests:
org.apache.hadoop.hbase.io.hfile.TestSeekTo
{quote}
Related test failure. I fixed the patch so that the test will pass.
bq.See if you can add TestSeekTo also to go thro this new offheap based testing.
No, we can not. This test passes in the HFile block read from the HFiles on the 
FS. HFileReaderImpl reads the file into an on-heap BB, and we are not changing 
this area at all, so there is no way to add off-heap support for this test.

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch


 Once we change the read path to use BB based cells, the DBEs should also 
 return BB based cells. Currently they are byte[] array backed.
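
For context, a hedged sketch of what "BB based" means here; the interface and accessor names are illustrative, not the actual HBase cell extension:
{code}
import java.nio.ByteBuffer;

// Illustrative sketch: contrasts array-backed accessors with buffer-backed ones.
interface ByteBufferBackedRow {
  // Array-backed style (what the DBEs return today).
  byte[] getRowArray();
  int getRowOffset();
  short getRowLength();

  // Buffer-backed style: the row bytes live at
  // [getRowPosition(), getRowPosition() + getRowLength()) inside
  // getRowByteBuffer(), which may be a direct (off-heap) buffer.
  ByteBuffer getRowByteBuffer();
  int getRowPosition();
}
{code}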



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14116) Change ByteBuff.getXXXStrictlyForward to relative position based reads

2015-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632546#comment-14632546
 ] 

Hadoop QA commented on HBASE-14116:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745962/HBASE-14116.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12745962

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.TestStoreFileRefresherChore
  org.apache.hadoop.hbase.regionserver.TestColumnSeeking

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14826//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14826//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14826//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14826//console

This message is automatically generated.

 Change ByteBuff.getXXXStrictlyForward to relative position based reads
 --

 Key: HBASE-14116
 URL: https://issues.apache.org/jira/browse/HBASE-14116
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14116.patch


 There is a TODO in ByteBuff.getXXXStrictlyForward to change it to a 
 position-based read relative to the current position. This could help in 
 avoiding the initial check added in the API to ensure that the passed-in index 
 is always greater than the current position.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13954:
--
Attachment: HBASE-13954-v5.patch

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954-v5.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13954:
--
Status: Patch Available  (was: Open)

Attached the correct patch

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954-v5.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13981) Fix ImportTsv spelling and usage issues

2015-07-18 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632562#comment-14632562
 ] 

Gabor Liptak commented on HBASE-13981:
--

[~ashish singhi] I might have missed something, but there seem to be no 
references to ATTRIBUTE_SEPERATOR_CONF_KEY or to attributes.seperator in the 
HBase codebase, so deprecating and removing it doesn't represent a functional 
change. If we so deem, I'm happy to update the patch with a rename.

 Fix ImportTsv spelling and usage issues
 ---

 Key: HBASE-13981
 URL: https://issues.apache.org/jira/browse/HBASE-13981
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Gabor Liptak
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-13981.1.patch, HBASE-13981.2.patch, 
 HBASE-13981.3.patch, HBASE-13981.4.patch


 The {{ImportTsv}} tool has various spelling and formatting issues. Fix those.
 In code:
 {noformat}
   public final static String ATTRIBUTE_SEPERATOR_CONF_KEY = 
 attributes.seperator;
 {noformat}
 It is separator.
 In usage text:
 {noformat}
 input data. Another special columnHBASE_TS_KEY designates that this column 
 should be
 {noformat}
 Space missing.
 {noformat}
 Record with invalid timestamps (blank, non-numeric) will be treated as bad 
 record.
 {noformat}
 Records ... as bad records - plural missing twice.
 {noformat}
 HBASE_ATTRIBUTES_KEY can be used to specify Operation Attributes per record.
  Should be specified as key=value where -1 is used 
  as the seperator.  Note that more than one OperationAttributes can be 
 specified.
 {noformat}
 - Remove line wraps and indentation. 
 - Fix separator.
 - Fix wrong separator being output, it is not -1 (wrong constant use in 
 code)
 - General wording/style could be better (eg. last sentence now uses 
 OperationAttributes without a space).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-18 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632522#comment-14632522
 ] 

ramkrishna.s.vasudevan commented on HBASE-12374:


Will check the patch tomorrow. See if you can add TestSeekTo also to go through 
this new offheap based testing.

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch


 Once we change the read path to use BB based cells, the DBEs should also 
 return BB based cells. Currently they are byte[] array backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-18 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632556#comment-14632556
 ] 

Ashish Singhi commented on HBASE-13954:
---

Oops, attached a wrong patch with some other unrelated local changes.

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14075) HBaseClusterManager should use port(if given) to find pid

2015-07-18 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632697#comment-14632697
 ] 

Yu Li commented on HBASE-14075:
---

[~dimaspivak], [~enis]

I'm expecting a +1 on this, thanks :-)

 HBaseClusterManager should use port(if given) to find pid
 -

 Key: HBASE-14075
 URL: https://issues.apache.org/jira/browse/HBASE-14075
 Project: HBase
  Issue Type: Bug
Reporter: Yu Li
Assignee: Yu Li
Priority: Minor
 Attachments: HBASE-14075-master_v2.patch, 
 HBASE-14075-master_v3.patch, HBASE-14075-master_v4.patch, 
 HBASE-14075-master_v5.patch, HBASE-14075-master_v6.patch, 
 HBASE-14075-master_v7.patch, HBASE-14075.patch


 This issue was found while running ITBLL on a distributed cluster. Our testing 
 env is a bit special in that we run multiple regionserver instances on a single 
 physical machine, so {noformat}ps -ef | grep proc_regionserver{noformat} will 
 return more than one line, which can cause the tool to check/kill the wrong 
 process.
 Actually, in HBaseClusterManager we already introduce port as a parameter for 
 methods like isRunning, kill, etc. So the only thing to do here is to get the 
 pid through the port, if the port is given.
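
A rough sketch of the approach, assuming nothing about the actual HBaseClusterManager change; the command strings, class and method names are illustrative only:
{code}
// Illustrative sketch: resolve the pid by listening port when one is given, so
// co-located daemons are disambiguated; otherwise fall back to the name grep.
final class PidLookupSketch {
  /** Build a shell command that prints the pid of the target daemon. */
  static String findPidCommand(String serviceName, int port) {
    if (port > 0) {
      return String.format(
          "netstat -lnp 2>/dev/null | awk '$4 ~ /:%d$/ {split($NF,a,\"/\"); print a[1]; exit}'",
          port);
    }
    // Fallback: original behaviour, grep by process name.
    return String.format(
        "ps -ef | grep proc_%s | grep -v grep | awk '{print $2}'", serviceName);
  }
}
{code}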



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11276) Add back support for running ChaosMonkey as standalone tool

2015-07-18 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632696#comment-14632696
 ] 

Yu Li commented on HBASE-11276:
---

The table/cf is truly optional, but there would be a default value for them in 
the tool, since Actions like SplitRandomRegionOfTableAction require a table to 
operate on. IMHO, if we require the user to make sure the testing table/cf 
already exists before running monkeys, then table/cf shouldn't be optional; or, 
if we keep it as is, we will need the createSchema method to create the table 
when it doesn't exist.

[~enis], what's your opinion?
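
If we go the createSchema route, a hedged sketch of what it might look like; the class name, method name and defaults are illustrative, though the Admin calls used are the standard client API:
{code}
import java.io.IOException;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

// Illustrative createSchema sketch: create the default monkey table/cf when it
// is not already there, so table/cf can stay optional on the command line.
final class ChaosMonkeySchemaSketch {
  static void ensureTable(Connection conn, String table, String family)
      throws IOException {
    TableName name = TableName.valueOf(table);
    try (Admin admin = conn.getAdmin()) {
      if (!admin.tableExists(name)) {
        HTableDescriptor desc = new HTableDescriptor(name);
        desc.addFamily(new HColumnDescriptor(family));
        admin.createTable(desc);
      }
    }
  }
}
{code}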

 Add back support for running ChaosMonkey as standalone tool
 ---

 Key: HBASE-11276
 URL: https://issues.apache.org/jira/browse/HBASE-11276
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.0, 0.96.0, 0.99.0
Reporter: Dima Spivak
Assignee: Yu Li
Priority: Minor
 Attachments: HBASE-11276.patch, HBASE-11276_v2.patch, 
 HBASE-11276_v3.patch


 [According to the ref 
 guide|http://hbase.apache.org/book/hbase.tests.html#integration.tests], it 
 was once possible to run ChaosMonkey as a standalone tool against a deployed 
 cluster. After 0.94, this is no longer possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14075) HBaseClusterManager should use port(if given) to find pid

2015-07-18 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632698#comment-14632698
 ] 

Yu Li commented on HBASE-14075:
---

Reading the test results, the core tests failure was due to a failure reading a 
test report file, so there's actually no UT failure introduced by the v7 patch.

 HBaseClusterManager should use port(if given) to find pid
 -

 Key: HBASE-14075
 URL: https://issues.apache.org/jira/browse/HBASE-14075
 Project: HBase
  Issue Type: Bug
Reporter: Yu Li
Assignee: Yu Li
Priority: Minor
 Attachments: HBASE-14075-master_v2.patch, 
 HBASE-14075-master_v3.patch, HBASE-14075-master_v4.patch, 
 HBASE-14075-master_v5.patch, HBASE-14075-master_v6.patch, 
 HBASE-14075-master_v7.patch, HBASE-14075.patch


 This issue was found while running ITBLL on a distributed cluster. Our testing 
 env is a bit special in that we run multiple regionserver instances on a single 
 physical machine, so {noformat}ps -ef | grep proc_regionserver{noformat} will 
 return more than one line, which can cause the tool to check/kill the wrong 
 process.
 Actually, in HBaseClusterManager we already introduce port as a parameter for 
 methods like isRunning, kill, etc. So the only thing to do here is to get the 
 pid through the port, if the port is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-07-18 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632632#comment-14632632
 ] 

Jerry He commented on HBASE-13706:
--

This is the current 'whitelist' for exemption:

{code}
 /**
   * If the class being loaded starts with any of these strings, we will skip
   * trying to load it from the coprocessor jar and instead delegate
   * directly to the parent ClassLoader.
   */
  private static final String[] CLASS_PREFIX_EXEMPTIONS = new String[] {
    // Java standard library:
    "com.sun.",
    "launcher.",
    "java.",
    "javax.",
    "org.ietf",
    "org.omg",
    "org.w3c",
    "org.xml",
    "sunw.",
    // logging
    "org.apache.commons.logging",
    "org.apache.log4j",
    "com.hadoop",
    // Hadoop/HBase/ZK:
    "org.apache.hadoop",
    "org.apache.zookeeper",
  };
{code}

My thinking was that org.apache.hadoop in the above list happens to include 
'org.apache.hadoop.hive', which is a mistake. 
But if I want to go deeper and expand org.apache.hadoop to whitelist its 
relevant subpackages, it gets pretty messy.  For example, hadoop-common has 
multiple subpackages that are not very uniformly named.

Thinking about it a little more, maybe the above list needs to be revisited?

Is there a real need to exempt Hadoop classes?  What is special about hadoop 
packages as dependencies?  What are the subpackages we really need to exempt?  
I can understand why we want to use the parent classloader to load HBase 
classes.
Say a co-processor implementation has to use a different hadoop version: will 
it cause trouble on the server side? The co-processor jar bundles the hadoop 
jar, and the hadoop classes of that different version used by the co-processor 
would be loaded by the CoprocessorClassLoader.

Pardon my ignorance on this.
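
For reference, a hedged sketch of what a finer-grained whitelist check would look like; the prefixes listed are examples only, not a proposal, and the mechanism stays the same startsWith loop as in the snippet above:
{code}
// Illustrative sketch: narrows "org.apache.hadoop" so that
// org.apache.hadoop.hive.* would no longer be exempt; prefixes are examples.
final class ExemptionSketch {
  private static final String[] FINER_GRAINED_EXEMPTIONS = new String[] {
    "org.apache.hadoop.conf.",
    "org.apache.hadoop.fs.",
    "org.apache.hadoop.hbase.",
    "org.apache.hadoop.io.",
    // ... and so on; as noted above, hadoop-common's subpackages make this messy.
  };

  static boolean isExempt(String className) {
    for (String prefix : FINER_GRAINED_EXEMPTIONS) {
      if (className.startsWith(prefix)) {
        return true;
      }
    }
    return false;
  }
}
{code}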


 CoprocessorClassLoader should not exempt Hive classes
 -

 Key: HBASE-13706
 URL: https://issues.apache.org/jira/browse/HBASE-13706
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.1.2

 Attachments: HBASE-13706.patch


 CoprocessorClassLoader is used to load classes from the coprocessor jar.
 Certain classes are exempt from being loaded by this ClassLoader, which means 
 they will be ignored in the coprocessor jar, but loaded from parent classpath 
 instead.
 One problem is that we categorically exempt org.apache.hadoop.
 But it happens that Hive packages start with org.apache.hadoop.
 There is no reason to exclude Hive classes from the CoprocessorClassLoader.
 HBase does not even include Hive jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14119) Show meaningful error messages instead of stack traces in hbase shell commands. Fixing few commands in this jira.

2015-07-18 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14632660#comment-14632660
 ] 

Matteo Bertozzi commented on HBASE-14119:
-

+1

 Show meaningful error messages instead of stack traces in hbase shell 
 commands. Fixing few commands in this jira.
 -

 Key: HBASE-14119
 URL: https://issues.apache.org/jira/browse/HBASE-14119
 Project: HBase
  Issue Type: Bug
Reporter: Apekshit Sharma
Assignee: Apekshit Sharma
Priority: Minor
 Attachments: HBASE-14119.patch


 This isn't really a functional bug, just more about erroring out cleanly.
 In the future, everyone should check and catch exceptions. Meaningful error 
 messages should be shown instead of stack traces. For debugging purposes, 
 'hbase shell -d' can be used which outputs a detailed stack trace.
 * the shell commands assign, move, unassign and merge_region can throw the 
 following error if given an invalid argument:
 {noformat}
 hbase(main):032:0 unassign 'adsfdsafdsa'
 ERROR: org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.UnknownRegionException: adsfdsafdsa
   at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1562)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
 Here is some help for this command:
 Unassign a region. Unassign will close region in current location and then
 reopen it again.  Pass 'true' to force the unassignment ('force' will clear
 all in-memory state in master before the reassign. If results in
 double assignment use hbck -fix to resolve. To be used by experts).
 Use with caution.  For expert use only.  Examples:
   hbase unassign 'REGIONNAME'
   hbase unassign 'REGIONNAME', true
 hbase(main):033:0 
 {noformat}
 * drop_namespace, describe_namespace throw stack trace too.
 {noformat}
 hbase(main):002:0 drop_namespace SDf
 ERROR: org.apache.hadoop.hbase.NamespaceNotFoundException: SDf
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.remove(TableNamespaceManager.java:175)
   at 
 org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2119)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:430)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44279)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Drop the named namespace. The namespace must be empty.
 {noformat}
 * fix error message in close_region
 {noformat}
 hbase(main):007:0 close_region sdf
 ERROR: sdf
 {noformat}
 * delete_snapshot throws exception too.
 {noformat}
 ERROR: org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: 
 Snapshot 'sdf' doesn't exist on the filesystem
   at 
 org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:270)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:452)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44261)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Delete a specified snapshot. Examples:
   hbase delete_snapshot 'snapshotName',
 {noformat}
 other commands, when given bogus arguments, tend to fail cleanly and not 
 leave stacktrace in the output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14119) Show meaningful error messages instead of stack traces in hbase shell commands. Fixing few commands in this jira.

2015-07-18 Thread Apekshit Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apekshit Sharma updated HBASE-14119:

Description: 
This isn't really a functional bug, just more about erroring out cleanly.
In the future, everyone should check and catch exceptions. Meaningful error 
messages should be shown instead of stack traces. For debugging purposes, 
'hbase shell -d' can be used which outputs a detailed stack trace.

* the shell commands assign, move, unassign and merge_region can throw the 
following error if given an invalid argument:
{noformat}

hbase(main):032:0 unassign 'adsfdsafdsa'

ERROR: org.apache.hadoop.ipc.RemoteException: 
org.apache.hadoop.hbase.UnknownRegionException: adsfdsafdsa
at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1562)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)

Here is some help for this command:
Unassign a region. Unassign will close region in current location and then
reopen it again.  Pass 'true' to force the unassignment ('force' will clear
all in-memory state in master before the reassign. If results in
double assignment use hbck -fix to resolve. To be used by experts).
Use with caution.  For expert use only.  Examples:

  hbase unassign 'REGIONNAME'
  hbase unassign 'REGIONNAME', true


hbase(main):033:0 
{noformat}

* drop_namespace, describe_namespace throw stack trace too.
{noformat}
hbase(main):002:0 drop_namespace SDf

ERROR: org.apache.hadoop.hbase.NamespaceNotFoundException: SDf
at 
org.apache.hadoop.hbase.master.TableNamespaceManager.remove(TableNamespaceManager.java:175)
at 
org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2119)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:430)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44279)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)

Here is some help for this command:
Drop the named namespace. The namespace must be empty.
{noformat}

* fix error message in close_region
{noformat}
hbase(main):007:0 close_region sdf

ERROR: sdf
{noformat}

* delete_snapshot throws exception too.
{noformat}
ERROR: org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 
'sdf' doesn't exist on the filesystem
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:270)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:452)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44261)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)

Here is some help for this command:
Delete a specified snapshot. Examples:

  hbase delete_snapshot 'snapshotName',
{noformat}

other commands, when given bogus arguments, tend to fail cleanly and not leave 
stacktrace in the output.

  was:
This isn't really a functional bug, just more about erroring out cleanly.

* the shell commands assign, move, unassign and merge_region can throw the 
following error if given an invalid argument:
{noformat}

hbase(main):032:0 unassign 'adsfdsafdsa'

ERROR: org.apache.hadoop.ipc.RemoteException: 
org.apache.hadoop.hbase.UnknownRegionException: adsfdsafdsa
at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1562)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
 

[jira] [Updated] (HBASE-14119) Show meaningful error messages instead of stack traces in hbase shell commands. Fixing few commands in this jira.

2015-07-18 Thread Apekshit Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apekshit Sharma updated HBASE-14119:

Summary: Show meaningful error messages instead of stack traces in hbase 
shell commands. Fixing few commands in this jira.  (was: Show error message 
instead of stack traces in hbase shell commands)

 Show meaningful error messages instead of stack traces in hbase shell 
 commands. Fixing few commands in this jira.
 -

 Key: HBASE-14119
 URL: https://issues.apache.org/jira/browse/HBASE-14119
 Project: HBase
  Issue Type: Bug
Reporter: Apekshit Sharma
Assignee: Apekshit Sharma
Priority: Minor
 Attachments: HBASE-14119.patch


 This isn't really a functional bug, just more about erroring out cleanly.
 * the shell commands assign, move, unassign and merge_region can throw the 
 following error if given an invalid argument:
 {noformat}
 hbase(main):032:0 unassign 'adsfdsafdsa'
 ERROR: org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.UnknownRegionException: adsfdsafdsa
   at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1562)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
 Here is some help for this command:
 Unassign a region. Unassign will close region in current location and then
 reopen it again.  Pass 'true' to force the unassignment ('force' will clear
 all in-memory state in master before the reassign. If results in
 double assignment use hbck -fix to resolve. To be used by experts).
 Use with caution.  For expert use only.  Examples:
   hbase unassign 'REGIONNAME'
   hbase unassign 'REGIONNAME', true
 hbase(main):033:0 
 {noformat}
 * drop_namespace, describe_namespace throw stack trace too.
 {noformat}
 hbase(main):002:0 drop_namespace SDf
 ERROR: org.apache.hadoop.hbase.NamespaceNotFoundException: SDf
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.remove(TableNamespaceManager.java:175)
   at 
 org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2119)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:430)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44279)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Drop the named namespace. The namespace must be empty.
 {noformat}
 * fix error message in close_region
 {noformat}
 hbase(main):007:0 close_region sdf
 ERROR: sdf
 {noformat}
 * delete_snapshot throws exception too.
 {noformat}
 ERROR: org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: 
 Snapshot 'sdf' doesn't exist on the filesystem
   at 
 org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:270)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:452)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44261)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Delete a specified snapshot. Examples:
   hbase delete_snapshot 'snapshotName',
 {noformat}
 other commands, when given bogus arguments, tend to fail cleanly and not 
 leave stacktrace in the output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)