[jira] [Commented] (HBASE-9958) Remove some array copy, change lock scope in locateRegion

2013-11-15 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823463#comment-13823463
 ] 

Nicolas Liochon commented on HBASE-9958:


bq. I see that you removed the change to use ConcurrentMap. Did that not work out?
I'm thinking about using a copy-on-write approach to be fully lock-free on a 
successful read. I would like to try this, together with HBASE-9869. 
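For illustration, a minimal copy-on-write cache along these lines (class and 
field names are just placeholders, not the actual HConnectionManager code): 
readers hit an immutable snapshot without taking a lock, writers copy, mutate 
and republish.
{code}
import java.util.HashMap;
import java.util.Map;

final class CopyOnWriteCache<K, V> {
  // Readers only ever see a fully-built map through this volatile reference.
  private volatile Map<K, V> snapshot = new HashMap<K, V>();

  V get(K key) {
    return snapshot.get(key);            // lock-free on a successful read
  }

  synchronized void put(K key, V value) {
    Map<K, V> copy = new HashMap<K, V>(snapshot);
    copy.put(key, value);
    snapshot = copy;                     // one volatile write publishes the new version
  }
}
{code}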

 Remove some array copy, change lock scope in locateRegion
 -

 Key: HBASE-9958
 URL: https://issues.apache.org/jira/browse/HBASE-9958
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9958.v1.patch, 9958.v2.patch, 9958.v2.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9974) Rest sometimes returns incomplete xml/json data

2013-11-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823468#comment-13823468
 ] 

Andrew Purtell commented on HBASE-9974:
---

Looks like a client hangup (IOE connection reset by peer in the stacktrace). 
Does your client time out or give up before all data is received? Can you 
provide more information? 

 Rest sometimes returns incomplete xml/json data
 ---

 Key: HBASE-9974
 URL: https://issues.apache.org/jira/browse/HBASE-9974
 Project: HBase
  Issue Type: Bug
  Components: REST
Reporter: Liu Shaohui

 Rest sometimes returns incomplete xml/json data.
 We found these exceptions in the REST server.
 13/11/15 11:40:51 ERROR mortbay.log:/log/1A:23:11:0C:06:22*
 javax.ws.rs.WebApplicationException: javax.xml.bind.MarshalException
  - with linked exception:
 [org.mortbay.jetty.EofException]
   at 
 com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.writeTo(AbstractRootElementProvider.java:159)
   at 
 com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
   at 
 com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1437)
   at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
   at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
   at 
 com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
   at 
 com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
   at 
 com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
   at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
   at 
 org.apache.hadoop.hbase.rest.filter.GzipFilter.doFilter(GzipFilter.java:73)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
   at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
   at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
   at org.mortbay.jetty.Server.handle(Server.java:322)
   at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
   at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
   at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
   at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 Caused by: javax.xml.bind.MarshalException
  - with linked exception:
 [org.mortbay.jetty.EofException]
   at 
 com.sun.xml.bind.v2.runtime.MarshallerImpl.write(MarshallerImpl.java:325)
   at 
 com.sun.xml.bind.v2.runtime.MarshallerImpl.marshal(MarshallerImpl.java:249)
   at 
 javax.xml.bind.helpers.AbstractMarshallerImpl.marshal(AbstractMarshallerImpl.java:75)
   at 
 com.sun.jersey.json.impl.JSONMarshallerImpl.marshal(JSONMarshallerImpl.java:74)
   at 
 com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.writeTo(AbstractRootElementProvider.java:179)
   at 
 com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.writeTo(AbstractRootElementProvider.java:157)
   ... 24 more
 Caused by: org.mortbay.jetty.EofException
   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
   at 
 org.mortbay.jetty.AbstractGenerator$Output.blockForOutput(AbstractGenerator.java:551)
   at 
 org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:572)
   at 
 org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
   at 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:651)
   at 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:580)
   at 
 com.sun.jersey.spi.container.servlet.WebComponent$Writer.write(WebComponent.java:307)
   at 
 com.sun.jersey.spi.container.ContainerResponse$CommittingOutputStream.write(ContainerResponse.java:134)
   at 
 com.sun.xml.bind.v2.runtime.output.UTF8XmlOutput.flushBuffer(UTF8XmlOutput.java:416)
   at 
 

[jira] [Updated] (HBASE-9954) Incorporate HTTPS support for HBase

2013-11-15 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9954:
--

Attachment: HBASE-9954_0.94.patch

Here is the patch that I had put together for our upcoming security release 
for the 0.94 branch.

 Incorporate HTTPS support for HBase
 ---

 Key: HBASE-9954
 URL: https://issues.apache.org/jira/browse/HBASE-9954
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9954-v1.txt, HBASE-9954_0.94.patch


 In various classes, "http://" is hard coded.
 This JIRA adds support for using the HBase web UI via HTTPS.
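For illustration only, one way such a hard-coded prefix could be replaced by a 
configuration-driven choice (the "hbase.ssl.enabled" key and the helper below 
are assumptions, not necessarily what the patch does):
{code}
import org.apache.hadoop.conf.Configuration;

final class UiScheme {
  // Hypothetical helper: pick the scheme from configuration instead of a literal "http://".
  static String prefix(Configuration conf) {
    return conf.getBoolean("hbase.ssl.enabled", false) ? "https://" : "http://";
  }
}
{code}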



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9954) Incorporate HTTPS support for HBase

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823474#comment-13823474
 ] 

Hadoop QA commented on HBASE-9954:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614034/HBASE-9954_0.94.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7878//console

This message is automatically generated.

 Incorporate HTTPS support for HBase
 ---

 Key: HBASE-9954
 URL: https://issues.apache.org/jira/browse/HBASE-9954
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9954-v1.txt, HBASE-9954_0.94.patch


 In various classes, "http://" is hard coded.
 This JIRA adds support for using the HBase web UI via HTTPS.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823473#comment-13823473
 ] 

Hadoop QA commented on HBASE-9969:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614022/9969-0.94.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7879//console

This message is automatically generated.

 Improve KeyValueHeap using loser tree
 -

 Key: HBASE-9969
 URL: https://issues.apache.org/jira/browse/HBASE-9969
 Project: HBase
  Issue Type: Improvement
  Components: Performance, regionserver
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.98.0, 0.96.1, 0.94.15

 Attachments: 9969-0.94.txt, hbase-9969-v2.patch, hbase-9969.patch, 
 hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt


 LoserTree is a better data structure than a binary heap: it saves half of the 
 comparisons on each next(), though the time complexity stays O(log N).
 Currently a scan or get goes through two KeyValueHeaps: one merges KVs read 
 from multiple HFiles in a single store, the other merges results from 
 multiple stores. This patch should improve both cases whenever CPU is the 
 bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
 All of the optimization work is done in KeyValueHeap and does not change its 
 public interfaces. The new code is cleaner and simpler to understand.
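For illustration, a minimal loser-tree merge over k sorted iterators, showing 
why advancing the winner costs only one comparison per tree level (a generic 
sketch of the data structure, not the KeyValueHeap patch itself):
{code}
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;

// Assumes at least one source; exhausted sources simply lose every match.
final class LoserTreeMerge<T> {
  private final List<Iterator<T>> sources;
  private final Comparator<T> cmp;
  private final int k;
  private final int[] tree;      // tree[0] = overall winner, tree[1..k-1] = loser kept at each internal node
  private final Object[] heads;  // current head of each source, null when exhausted

  LoserTreeMerge(List<Iterator<T>> sources, Comparator<T> cmp) {
    this.sources = sources;
    this.cmp = cmp;
    this.k = sources.size();
    this.tree = new int[k];
    this.heads = new Object[k];
    for (int i = 0; i < k; i++) {
      heads[i] = sources.get(i).hasNext() ? sources.get(i).next() : null;
    }
    tree[0] = build(1);          // one full tournament at construction time
  }

  // Bottom-up tournament: winners move up, losers stay at the internal nodes.
  private int build(int node) {
    if (node >= k) {
      return node - k;           // leaf: node k+i stands for source i
    }
    int left = build(2 * node);
    int right = build(2 * node + 1);
    if (beats(left, right)) { tree[node] = right; return left; }
    tree[node] = left;
    return right;
  }

  // Next smallest element across all sources, or null when everything is exhausted.
  @SuppressWarnings("unchecked")
  T next() {
    int winner = tree[0];
    if (heads[winner] == null) {
      return null;
    }
    T result = (T) heads[winner];
    heads[winner] = sources.get(winner).hasNext() ? sources.get(winner).next() : null;
    // Replay only the winner's path to the root: ~log2(k) comparisons, roughly
    // half of what a binary heap's poll-then-offer pair needs.
    for (int node = (winner + k) / 2; node >= 1; node /= 2) {
      if (beats(tree[node], winner)) {
        int loser = winner;
        winner = tree[node];
        tree[node] = loser;
      }
    }
    tree[0] = winner;
    return result;
  }

  @SuppressWarnings("unchecked")
  private boolean beats(int a, int b) {
    if (heads[a] == null) return false;
    if (heads[b] == null) return true;
    return cmp.compare((T) heads[a], (T) heads[b]) < 0;
  }
}
{code}
In a KeyValueHeap-style use, each source would be a scanner over one HFile or 
one store and the comparator the KV comparator.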



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9869) Optimize HConnectionManager#getCachedLocation

2013-11-15 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823509#comment-13823509
 ] 

Nicolas Liochon commented on HBASE-9869:


Yourkit calculates the retained size, i.e. ??Retained size of an object is its 
shallow size plus the shallow sizes of the objects that are accessible, 
directly or indirectly, only from this object. In other words, the retained 
size represents the amount of memory that will be freed by the garbage 
collector when this object is collected.??
With 10 thousand regions, the retained size of the two ConcurrentSkipListMaps 
is 7 megabytes.
With 100 thousand regions, the retained size is 75 megabytes. 19 MB are 
TableName objects, and this leads to an obvious optimization (I had it in mind 
already, to save on 'equals', but the final size is crazy). In the same range, 
we have 3.3 MB of ServerName.

Lastly, I don't think that a Map is the best data structure; a Trie would be 
much better. I will have a look at this as well.

With 100k regions, time is:

||#clients||#puts||time without the patch||time with the patch||
|2 clients|50 million each|83 seconds|58 seconds|

With these results my opinion is that we should commit this patch as it is, 
because:
- 60 MB is acceptable for a client connected to a cluster with 100K regions.
- In most cases, the weak reference will just make the performance 
unpredictable. The remaining cases (regions not used often, so we could remove 
them under memory pressure) do not justify the noise for the other cases.
- We can lower the memory footprint further if necessary, and it's likely a 
better solution than playing with the GC.




 Optimize HConnectionManager#getCachedLocation
 -

 Key: HBASE-9869
 URL: https://issues.apache.org/jira/browse/HBASE-9869
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9869.v1.patch, 9869.v1.patch, 9869.v2.patch


 Its javadoc says: TODO: This method during writing consumes 15% of CPU doing 
 lookup. This is still true, says Yourkit. With 0.96, we also spend more time 
 in these methods: we retry more, and the AsyncProcess calls them in parallel.
 I don't have the patch for this yet, but I will spend some time on it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-4811) Support reverse Scan

2013-11-15 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-4811:


Status: Patch Available  (was: Open)

 Support reverse Scan
 

 Key: HBASE-4811
 URL: https://issues.apache.org/jira/browse/HBASE-4811
 Project: HBase
  Issue Type: New Feature
  Components: Client
Affects Versions: 0.94.7, 0.20.6
Reporter: John Carrino
Assignee: chunhui shen
 Fix For: 0.98.0, 0.96.1, 0.94.15

 Attachments: 4811-0.94-v22.txt, 4811-0.94-v23.txt, 4811-0.94-v3.txt, 
 4811-trunk-v10.txt, 4811-trunk-v5.patch, HBase-4811-0.94-v2.txt, 
 HBase-4811-0.94.3modified.txt, hbase-4811-0.94 v21.patch, 
 hbase-4811-0.94-v24.patch, hbase-4811-trunkv1.patch, 
 hbase-4811-trunkv11.patch, hbase-4811-trunkv12.patch, 
 hbase-4811-trunkv13.patch, hbase-4811-trunkv14.patch, 
 hbase-4811-trunkv15.patch, hbase-4811-trunkv16.patch, 
 hbase-4811-trunkv17.patch, hbase-4811-trunkv18.patch, 
 hbase-4811-trunkv19.patch, hbase-4811-trunkv20.patch, 
 hbase-4811-trunkv21.patch, hbase-4811-trunkv24.patch, 
 hbase-4811-trunkv24.patch, hbase-4811-trunkv4.patch, 
 hbase-4811-trunkv6.patch, hbase-4811-trunkv7.patch, hbase-4811-trunkv8.patch, 
 hbase-4811-trunkv9.patch


 Reversed scan means scanning the rows backward.
 In a reversed scan, StartRow is bigger than StopRow.
 For example, for the following rows:
 aaa/c1:q1/value1
 aaa/c1:q2/value2
 bbb/c1:q1/value1
 bbb/c1:q2/value2
 ccc/c1:q1/value1
 ccc/c1:q2/value2
 ddd/c1:q1/value1
 ddd/c1:q2/value2
 eee/c1:q1/value1
 eee/c1:q2/value2
 you could do a reversed scan from 'ddd' to 'bbb' (excluded) like this:
 Scan scan = new Scan();
 scan.setStartRow('ddd');
 scan.setStopRow('bbb');
 scan.setReversed(true);
 for(Result result:htable.getScanner(scan)){
  System.out.println(result);
 }
 Also you could do the reversed scan with the shell like this:
 hbase> scan 'table', {REVERSED => true, STARTROW => 'ddd', STOPROW => 'bbb'}
 And the output is:
 ddd/c1:q1/value1
 ddd/c1:q2/value2
 ccc/c1:q1/value1
 ccc/c1:q2/value2
 NOTE: when setting reversed to true for a client scan, you must set the start 
 row, otherwise an exception will be thrown. Through {@link 
 Scan#createBiggestByteArray(int)}, you could get a big enough byte array as 
 the start row.
 All the documentation I find about HBase says that if you want forward and 
 reverse scans you should just build 2 tables and one be ascending and one 
 descending.  Is there a fundamental reason that HBase only supports forward 
 Scan?  It seems like a lot of extra space overhead and coding overhead (to 
 keep them in sync) to support 2 tables.  
 I am assuming this has been discussed before, but I can't find the 
 discussions anywhere about it or why it would be infeasible.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-4811) Support reverse Scan

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823527#comment-13823527
 ] 

Hadoop QA commented on HBASE-4811:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12613995/hbase-4811-trunkv24.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7880//console

This message is automatically generated.

 Support reverse Scan
 

 Key: HBASE-4811
 URL: https://issues.apache.org/jira/browse/HBASE-4811
 Project: HBase
  Issue Type: New Feature
  Components: Client
Affects Versions: 0.20.6, 0.94.7
Reporter: John Carrino
Assignee: chunhui shen
 Fix For: 0.98.0, 0.96.1, 0.94.15

 Attachments: 4811-0.94-v22.txt, 4811-0.94-v23.txt, 4811-0.94-v3.txt, 
 4811-trunk-v10.txt, 4811-trunk-v5.patch, HBase-4811-0.94-v2.txt, 
 HBase-4811-0.94.3modified.txt, hbase-4811-0.94 v21.patch, 
 hbase-4811-0.94-v24.patch, hbase-4811-trunkv1.patch, 
 hbase-4811-trunkv11.patch, hbase-4811-trunkv12.patch, 
 hbase-4811-trunkv13.patch, hbase-4811-trunkv14.patch, 
 hbase-4811-trunkv15.patch, hbase-4811-trunkv16.patch, 
 hbase-4811-trunkv17.patch, hbase-4811-trunkv18.patch, 
 hbase-4811-trunkv19.patch, hbase-4811-trunkv20.patch, 
 hbase-4811-trunkv21.patch, hbase-4811-trunkv24.patch, 
 hbase-4811-trunkv24.patch, hbase-4811-trunkv4.patch, 
 hbase-4811-trunkv6.patch, hbase-4811-trunkv7.patch, hbase-4811-trunkv8.patch, 
 hbase-4811-trunkv9.patch


 Reversed scan means scanning the rows backward.
 In a reversed scan, StartRow is bigger than StopRow.
 For example, for the following rows:
 aaa/c1:q1/value1
 aaa/c1:q2/value2
 bbb/c1:q1/value1
 bbb/c1:q2/value2
 ccc/c1:q1/value1
 ccc/c1:q2/value2
 ddd/c1:q1/value1
 ddd/c1:q2/value2
 eee/c1:q1/value1
 eee/c1:q2/value2
 you could do a reversed scan from 'ddd' to 'bbb' (excluded) like this:
 Scan scan = new Scan();
 scan.setStartRow('ddd');
 scan.setStopRow('bbb');
 scan.setReversed(true);
 for(Result result:htable.getScanner(scan)){
  System.out.println(result);
 }
 Also you could do the reversed scan with the shell like this:
 hbase> scan 'table', {REVERSED => true, STARTROW => 'ddd', STOPROW => 'bbb'}
 And the output is:
 ddd/c1:q1/value1
 ddd/c1:q2/value2
 ccc/c1:q1/value1
 ccc/c1:q2/value2
 NOTE: when setting reversed to true for a client scan, you must set the start 
 row, otherwise an exception will be thrown. Through {@link 
 Scan#createBiggestByteArray(int)}, you could get a big enough byte array as 
 the start row.
 All the documentation I find about HBase says that if you want forward and 
 reverse scans you should just build 2 tables and one be ascending and one 
 descending.  Is there a fundamental reason that HBase only supports forward 
 Scan?  It seems like a lot of extra space overhead and coding overhead (to 
 keep them in sync) to support 2 tables.  
 I am assuming this has been discussed before, but I can't find the 
 discussions anywhere about it or why it would be infeasible.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9975) Not starting ReplicationSink when using custom implementation for the ReplicationSink.

2013-11-15 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-9975:
-

 Summary: Not starting ReplicationSink when using custom 
implementation for the ReplicationSink.
 Key: HBASE-9975
 URL: https://issues.apache.org/jira/browse/HBASE-9975
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.13
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.98.0, 0.96.1, 0.94.14


Not starting ReplicationSink when using custom implementation for the 
ReplicationSink.
{code}
if (this.replicationSourceHandler == this.replicationSinkHandler
    && this.replicationSourceHandler != null) {
   this.replicationSourceHandler.startReplicationService();
} else if (this.replicationSourceHandler != null) {
  this.replicationSourceHandler.startReplicationService();
} else if (this.replicationSinkHandler != null) {
  this.replicationSinkHandler.startReplicationService();
}
{code}
ReplicationSource and Sink are different, as there is a custom impl for 
ReplicationSink. We cannot use else-ifs here.
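For illustration, one way the branches could be arranged so that a distinct 
custom sink still gets started (a sketch of the idea only, not necessarily the 
committed patch):
{code}
if (this.replicationSourceHandler != null
    && this.replicationSourceHandler == this.replicationSinkHandler) {
  // The same object plays both roles: start it once.
  this.replicationSourceHandler.startReplicationService();
} else {
  // Distinct implementations (e.g. a custom ReplicationSink): start each one that exists.
  if (this.replicationSourceHandler != null) {
    this.replicationSourceHandler.startReplicationService();
  }
  if (this.replicationSinkHandler != null) {
    this.replicationSinkHandler.startReplicationService();
  }
}
{code}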



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-15 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Status: Patch Available  (was: Open)

 Remove some array copy - server side
 

 Key: HBASE-9959
 URL: https://issues.apache.org/jira/browse/HBASE-9959
 Project: HBase
  Issue Type: Bug
  Components: Protobufs, regionserver
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch, 
 9959.v4.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-15 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Attachment: 9959.v4.patch

 Remove some array copy - server side
 

 Key: HBASE-9959
 URL: https://issues.apache.org/jira/browse/HBASE-9959
 Project: HBase
  Issue Type: Bug
  Components: Protobufs, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch, 
 9959.v4.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-15 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Status: Open  (was: Patch Available)

 Remove some array copy - server side
 

 Key: HBASE-9959
 URL: https://issues.apache.org/jira/browse/HBASE-9959
 Project: HBase
  Issue Type: Bug
  Components: Protobufs, regionserver
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch, 
 9959.v4.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9975) Not starting ReplicationSink when using custom implementation for the ReplicationSink.

2013-11-15 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-9975:
--

Attachment: HBASE-9975_Trunk.patch

Patch for Trunk.
Also making ReplicationSink#batch() protected so that we can easily change the 
impl for that alone in a custom extended class.

 Not starting ReplicationSink when using custom implementation for the 
 ReplicationSink.
 --

 Key: HBASE-9975
 URL: https://issues.apache.org/jira/browse/HBASE-9975
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.13
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9975_Trunk.patch


 Not starting ReplicationSink when using custom implementation for the 
 ReplicationSink.
 {code}
 if (this.replicationSourceHandler == this.replicationSinkHandler
     && this.replicationSourceHandler != null) {
this.replicationSourceHandler.startReplicationService();
 } else if (this.replicationSourceHandler != null) {
   this.replicationSourceHandler.startReplicationService();
 } else if (this.replicationSinkHandler != null) {
   this.replicationSinkHandler.startReplicationService();
 }
 {code}
 ReplicationSource and Sink are different, as there is a custom impl for 
 ReplicationSink. We cannot use else-ifs here.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9976) Don't create duplicated TableName objects

2013-11-15 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-9976:
--

 Summary: Don't create duplicated TableName objects
 Key: HBASE-9976
 URL: https://issues.apache.org/jira/browse/HBASE-9976
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1


Profiling shows that the table name is responsible for 25% of the memory 
needed to keep the region locations. As well, comparisons will be faster if 
two identical table names are a single Java object.
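For illustration, the general interning technique the description points at, 
sketched with a hypothetical cache (the cache shape and the getNameAsString() 
accessor are assumptions, not the actual patch):
{code}
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.hbase.TableName;

final class TableNameInterner {
  private static final ConcurrentHashMap<String, TableName> CACHE =
      new ConcurrentHashMap<String, TableName>();

  // Returns a canonical instance so that identical names share one object
  // and comparisons can fall back to a cheap reference check.
  static TableName intern(TableName tn) {
    TableName existing = CACHE.putIfAbsent(tn.getNameAsString(), tn);
    return existing != null ? existing : tn;
  }
}
{code}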



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9959) Remove some array copy - server side

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823609#comment-13823609
 ] 

Hadoop QA commented on HBASE-9959:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614050/9959.v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait
  org.apache.hadoop.hbase.regionserver.TestHRegion

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7881//console

This message is automatically generated.

 Remove some array copy - server side
 

 Key: HBASE-9959
 URL: https://issues.apache.org/jira/browse/HBASE-9959
 Project: HBase
  Issue Type: Bug
  Components: Protobufs, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch, 
 9959.v4.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9949) Fix the race condition between Compaction and StoreScanner.init

2013-11-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823762#comment-13823762
 ] 

Ted Yu commented on HBASE-9949:
---

Planning to integrate over the weekend, if there is no further comment.

 Fix the race condition between Compaction and StoreScanner.init
 ---

 Key: HBASE-9949
 URL: https://issues.apache.org/jira/browse/HBASE-9949
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.89-fb
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
Priority: Minor
 Fix For: 0.89-fb

 Attachments: 9949-trunk-v1.txt, 9949-trunk-v2.txt, 9949-trunk-v3.txt

   Original Estimate: 48h
  Remaining Estimate: 48h

 The StoreScanner constructor has multiple stages, and there can be a race 
 between an ongoing compaction and the StoreScanner constructor where we might 
 get the list of scanners before a compaction and seek on those scanners after 
 the compaction.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9975) Not starting ReplicationSink when using custom implementation for the ReplicationSink.

2013-11-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823763#comment-13823763
 ] 

Ted Yu commented on HBASE-9975:
---

lgtm

 Not starting ReplicationSink when using custom implementation for the 
 ReplicationSink.
 --

 Key: HBASE-9975
 URL: https://issues.apache.org/jira/browse/HBASE-9975
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.13
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9975_Trunk.patch


 Not starting ReplicationSink when using custom implementation for the 
 ReplicationSink.
 {code}
 if (this.replicationSourceHandler == this.replicationSinkHandler
     && this.replicationSourceHandler != null) {
this.replicationSourceHandler.startReplicationService();
 } else if (this.replicationSourceHandler != null) {
   this.replicationSourceHandler.startReplicationService();
 } else if (this.replicationSinkHandler != null) {
   this.replicationSinkHandler.startReplicationService();
 }
 {code}
 ReplicationSource and Sink are different, as there is a custom impl for 
 ReplicationSink. We cannot use else-ifs here.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823773#comment-13823773
 ] 

Hudson commented on HBASE-9963:
---

FAILURE: Integrated in HBase-0.94-security #336 (See 
[https://builds.apache.org/job/HBase-0.94-security/336/])
HBASE-9963 Remove the ReentrantReadWriteLock in the MemStore (Nicolas Liochon) 
(larsh: rev 1542104)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java


 Remove the ReentrantReadWriteLock in the MemStore
 -

 Key: HBASE-9963
 URL: https://issues.apache.org/jira/browse/HBASE-9963
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9963.96.v3.patch, 9963.v1.patch, 9963.v2.patch, 
 9963.v3.patch


 If I'm not wrong, the MemStore is always used from the HStore. The code in 
 HStore takes a lock before calling MemStore, so the lock in MemStore is 
 useless.
 For example, in HStore:
 {code}
   @Override
   public long upsert(Iterable<Cell> cells, long readpoint) throws IOException {
     this.lock.readLock().lock();
     try {
       return this.memstore.upsert(cells, readpoint);
     } finally {
       this.lock.readLock().unlock();
     }
   }
 {code}
 With this in MemStore:
 {code}
   public long upsert(Iterable<Cell> cells, long readpoint) {
     this.lock.readLock().lock(); // <== Am I useful?
     try {
       long size = 0;
       for (Cell cell : cells) {
         size += upsert(cell, readpoint);
       }
       return size;
     } finally {
       this.lock.readLock().unlock();
     }
   }
 {code}
 I've checked: all the locks in MemStore are backed by a lock in HStore, the 
 only exception being
 {code}
   void snapshot() {
     this.memstore.snapshot();
   }
 {code}
 And I would say it's a bug. If it's confirmed ([~lhofhansl], what do you 
 think?), I will add a lock there and remove all of them in MemStore. They do 
 appear in the profiling.
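For illustration, the HStore-side change suggested above could look roughly 
like this (a sketch only, pending the confirmation asked for in the comment):
{code}
  // In HStore: take the store lock around the snapshot, so the MemStore-internal
  // locks can be removed entirely.
  void snapshot() {
    this.lock.writeLock().lock();
    try {
      this.memstore.snapshot();
    } finally {
      this.lock.writeLock().unlock();
    }
  }
{code}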



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9834) Minimize byte[] copies for 'smart' clients

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823774#comment-13823774
 ] 

Hudson commented on HBASE-9834:
---

FAILURE: Integrated in HBase-0.94-security #336 (See 
[https://builds.apache.org/job/HBase-0.94-security/336/])
HBASE-9834: Minimize byte[] copies for 'smart' clients (jyates: rev 1542052)
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Append.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Delete.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Put.java


 Minimize byte[] copies for 'smart' clients
 --

 Key: HBASE-9834
 URL: https://issues.apache.org/jira/browse/HBASE-9834
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.94.14

 Attachments: hbase-9834-0.94-v0.patch, hbase-9834-0.94-v1.patch, 
 hbase-9834-0.94-v2.patch, hbase-9834-0.94-v3.patch


 'Smart' clients (e.g. Phoenix) that have in-depth knowledge of HBase often 
 bemoan the extra byte[] copies that must be done when building multiple 
 puts/deletes. We should provide a mechanism by which they can minimize these 
 copies, but still remain wire compatible. 
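For illustration, the copy-versus-wrap trade-off the description refers to, in 
generic form (this is not the API the patch actually adds):
{code}
import java.util.Arrays;

final class CellValue {
  private final byte[] value;

  private CellValue(byte[] value) { this.value = value; }

  byte[] get() { return value; }

  // Defensive copy: safe for any caller, but one extra allocation and copy per cell.
  static CellValue copyOf(byte[] buf) {
    return new CellValue(Arrays.copyOf(buf, buf.length));
  }

  // Zero-copy wrap: what a 'smart' client wants, provided it never reuses the buffer.
  static CellValue wrap(byte[] buf) {
    return new CellValue(buf);
  }
}
{code}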



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9799) Change Hadoop 1.2 dependency to 1.2.1

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823775#comment-13823775
 ] 

Hudson commented on HBASE-9799:
---

FAILURE: Integrated in HBase-0.94-security #336 (See 
[https://builds.apache.org/job/HBase-0.94-security/336/])
HBASE-9799 Change Hadoop 1.2 dependency to 1.2.1 (larsh: rev 1542117)
* /hbase/branches/0.94/pom.xml


 Change Hadoop 1.2 dependency to 1.2.1
 -

 Key: HBASE-9799
 URL: https://issues.apache.org/jira/browse/HBASE-9799
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Trivial
 Fix For: 0.94.14

 Attachments: 9799.txt, 9799.txt


 The Hadoop 1.2 profile is currently 1.2.0. We should update that to 1.2.1.
 Was:
 This will switch the default Hadoop profile to 1.2.1.
 The Hadoop 1.0.x (1.0.4 currently) will remain and we will add a new 
 autobuild that will build against 1.0.4 (once/week?)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9799) Change Hadoop 1.2 dependency to 1.2.1

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823802#comment-13823802
 ] 

Hudson commented on HBASE-9799:
---

FAILURE: Integrated in HBase-0.94 #1202 (See 
[https://builds.apache.org/job/HBase-0.94/1202/])
HBASE-9799 Change Hadoop 1.2 dependency to 1.2.1 (larsh: rev 1542117)
* /hbase/branches/0.94/pom.xml


 Change Hadoop 1.2 dependency to 1.2.1
 -

 Key: HBASE-9799
 URL: https://issues.apache.org/jira/browse/HBASE-9799
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Trivial
 Fix For: 0.94.14

 Attachments: 9799.txt, 9799.txt


 The Hadoop 1.2 profile is currently 1.2.0. We should update that to 1.2.1.
 Was:
 This will switch the default Hadoop profile to 1.2.1.
 The Hadoop 1.0.x (1.0.4 currently) will remain and we will add a new 
 autobuild that will build against 1.0.4 (once/week?)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9834) Minimize byte[] copies for 'smart' clients

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823801#comment-13823801
 ] 

Hudson commented on HBASE-9834:
---

FAILURE: Integrated in HBase-0.94 #1202 (See 
[https://builds.apache.org/job/HBase-0.94/1202/])
HBASE-9834: Minimize byte[] copies for 'smart' clients (jyates: rev 1542052)
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Append.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Delete.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Put.java


 Minimize byte[] copies for 'smart' clients
 --

 Key: HBASE-9834
 URL: https://issues.apache.org/jira/browse/HBASE-9834
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.94.14

 Attachments: hbase-9834-0.94-v0.patch, hbase-9834-0.94-v1.patch, 
 hbase-9834-0.94-v2.patch, hbase-9834-0.94-v3.patch


 'Smart' clients (e.g. Phoenix) that have in-depth knowledge of HBase often 
 bemoan the extra byte[] copies that must be done when building multiple 
 puts/deletes. We should provide a mechanism by which they can minimize these 
 copies, but still remain wire compatible. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823800#comment-13823800
 ] 

Hudson commented on HBASE-9963:
---

FAILURE: Integrated in HBase-0.94 #1202 (See 
[https://builds.apache.org/job/HBase-0.94/1202/])
HBASE-9963 Remove the ReentrantReadWriteLock in the MemStore (Nicolas Liochon) 
(larsh: rev 1542104)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java


 Remove the ReentrantReadWriteLock in the MemStore
 -

 Key: HBASE-9963
 URL: https://issues.apache.org/jira/browse/HBASE-9963
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 9963.96.v3.patch, 9963.v1.patch, 9963.v2.patch, 
 9963.v3.patch


 If I'm not wrong, the MemStore is always used from the HStore. The code in 
 HStore takes a lock before calling MemStore, so the lock in MemStore is 
 useless.
 For example, in HStore:
 {code}
   @Override
   public long upsert(Iterable<Cell> cells, long readpoint) throws IOException {
     this.lock.readLock().lock();
     try {
       return this.memstore.upsert(cells, readpoint);
     } finally {
       this.lock.readLock().unlock();
     }
   }
 {code}
 With this in MemStore:
 {code}
   public long upsert(Iterable<Cell> cells, long readpoint) {
     this.lock.readLock().lock(); // <== Am I useful?
     try {
       long size = 0;
       for (Cell cell : cells) {
         size += upsert(cell, readpoint);
       }
       return size;
     } finally {
       this.lock.readLock().unlock();
     }
   }
 {code}
 I've checked: all the locks in MemStore are backed by a lock in HStore, the 
 only exception being
 {code}
   void snapshot() {
     this.memstore.snapshot();
   }
 {code}
 And I would say it's a bug. If it's confirmed ([~lhofhansl], what do you 
 think?), I will add a lock there and remove all of them in MemStore. They do 
 appear in the profiling.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9954) Incorporate HTTPS support for HBase

2013-11-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9954:
--

Attachment: 9954-v2.txt

 Incorporate HTTPS support for HBase
 ---

 Key: HBASE-9954
 URL: https://issues.apache.org/jira/browse/HBASE-9954
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9954-v1.txt, 9954-v2.txt, HBASE-9954_0.94.patch


 In various classes, "http://" is hard coded.
 This JIRA adds support for using the HBase web UI via HTTPS.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9976) Don't create duplicated TableName objects

2013-11-15 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823846#comment-13823846
 ] 

Nicolas Liochon commented on HBASE-9976:


I had some performance issues when I pushed the client to something like 1.5 
million puts/sec, as it adds a synchronization point. I need to do more tests 
around this. 

 Don't create duplicated TableName objects
 -

 Key: HBASE-9976
 URL: https://issues.apache.org/jira/browse/HBASE-9976
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1


 Profiling shows that the table name is responsible for 25% of the memory 
 needed to keep the region locations. As well, comparisons will be faster if 
 two identical table names are a single Java object.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9976) Don't create duplicated TableName objects

2013-11-15 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9976:
---

Status: Patch Available  (was: Open)

 Don't create duplicated TableName objects
 -

 Key: HBASE-9976
 URL: https://issues.apache.org/jira/browse/HBASE-9976
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9976.v1.patch


 Profiling shows that the table name is responsible for 25% of the memory 
 needed to keep the region locations. As well, comparisons will be faster if 
 two identical table names are a single Java object.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (HBASE-9976) Don't create duplicated TableName objects

2013-11-15 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823846#comment-13823846
 ] 

Nicolas Liochon edited comment on HBASE-9976 at 11/15/13 5:44 PM:
--

I had some performance issues when I pushed the client to something like 1.5 
million puts/sec, as it adds a synchronization point. I need to do more tests 
around this. 


was (Author: nkeywal):
I had some performances issues when I push the client to something like 1.5 
millions puts/sec, as it adds a synchronization point. I need to do more tests 
around this. 

 Don't create duplicated TableName objects
 -

 Key: HBASE-9976
 URL: https://issues.apache.org/jira/browse/HBASE-9976
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9976.v1.patch


 Profiling shows that the table name is responsible for 25% of the memory 
 needed to keep the region locations. As well, comparisons will be faster if 
 two identical table names are a single Java object.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9976) Don't create duplicated TableName objects

2013-11-15 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9976:
---

Attachment: 9976.v1.patch

 Don't create duplicated TableName objects
 -

 Key: HBASE-9976
 URL: https://issues.apache.org/jira/browse/HBASE-9976
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9976.v1.patch


 Profiling shows that the table name is responsible for 25% of the memory 
 needed to keep the region locations. As well, comparisons will be faster if 
 two identical table names are a single Java object.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823850#comment-13823850
 ] 

Nick Dimiduk commented on HBASE-9165:
-

[~lhofhansl] are you okay with the 0.94 backport?

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.96.00.patch, HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtil#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.
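For illustration, how the split might look from a job's point of view (the 
modular method name below is an assumption, not a confirmed API):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class AddJarsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "hbase-aware job");

    // Monolithic behaviour: inspects the job's input/output formats, key/value
    // classes, etc., and ships their jars along with HBase's.
    TableMapReduceUtil.addDependencyJars(job);

    // Hypothetical modular entry point: ship only HBase and its own
    // dependencies, leaving the job's formats alone.
    // TableMapReduceUtil.addHBaseDependencyJars(job.getConfiguration());
  }
}
{code}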



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9165:


Attachment: HBASE-9165-0.96.00.patch

Attaching the patch committed to 0.96. From the commit message:

Note this patch differs from the one applied to TRUNK/0.98 in that it omits 
changes to TestTableMapReduceUtil which have not yet been backported from 
HBASE-8534. This can be addressed when that patch is backported (HBASE-9484).

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.96.00.patch, HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtil#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9484) Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96

2013-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823853#comment-13823853
 ] 

Nick Dimiduk commented on HBASE-9484:
-

See the patch committed to trunk on HBASE-9165. It contains fixes to 
TestTableMapReduceUtil which are yet to be backported by this ticket.

 Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96
 --

 Key: HBASE-9484
 URL: https://issues.apache.org/jira/browse/HBASE-9484
 Project: HBase
  Issue Type: Test
  Components: mapreduce, test
Reporter: Nick Dimiduk
Priority: Minor
 Fix For: 0.96.1

 Attachments: 
 0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9947) Add CM action for online compression algorithm change

2013-11-15 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-9947:
---

   Resolution: Fixed
Fix Version/s: 0.96.1
   0.98.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Integrated into trunk and 0.96. Thanks.

 Add CM action for online compression algorithm change
 -

 Key: HBASE-9947
 URL: https://issues.apache.org/jira/browse/HBASE-9947
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.98.0, 0.96.1

 Attachments: trunk-9947.patch


 We need to add a CM action for online compression algorithm change and make 
 sure ITBLL is ok with it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9112) Custom TableInputFormat in initTableMapperJob throws ClassNotFoundException on TableMapper

2013-11-15 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9112:


Fix Version/s: 0.94.7

 Custom TableInputFormat in initTableMapperJob throws ClassNotFoundException on 
 TableMapper
 -

 Key: HBASE-9112
 URL: https://issues.apache.org/jira/browse/HBASE-9112
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, mapreduce
Affects Versions: 0.94.6.1
 Environment: CDH-4.3.0-1.cdh4.3.0.p0.22
Reporter: Debanjan Bhattacharyya
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.94.7, 0.96.1


 When using custom TableInputFormat in TableMapReduceUtil.initTableMapperJob 
 in the following way
 TableMapReduceUtil.initTableMapperJob(mytable, 
   MyScan, 
   MyMapper.class,
   MyKey.class, 
   MyValue.class, 
   myJob,true,  
 MyTableInputFormat.class);
 I get error: java.lang.ClassNotFoundException: 
 org.apache.hadoop.hbase.mapreduce.TableMapper
   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
   at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 If I do not use the last two parameters, there is no error.
 What is going wrong here?
 Thanks
 Regards



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HBASE-9112) Custom TableInputFormat in initTableMapperJob throws ClassNotFoundException on TableMapper

2013-11-15 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-9112.
-

Resolution: Fixed

This issue was fixed on 0.94 by way of HBASE-8146 and on 0.96/trunk via 
HBASE-9165. Reopen if you continue to experience this issue.

 Custom TableInputFormat in initTableMapperJob throws ClassNotFoundException on 
 TableMapper
 -

 Key: HBASE-9112
 URL: https://issues.apache.org/jira/browse/HBASE-9112
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, mapreduce
Affects Versions: 0.94.6.1
 Environment: CDH-4.3.0-1.cdh4.3.0.p0.22
Reporter: Debanjan Bhattacharyya
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.7


 When using custom TableInputFormat in TableMapReduceUtil.initTableMapperJob 
 in the following way
 TableMapReduceUtil.initTableMapperJob(mytable, 
   MyScan, 
   MyMapper.class,
   MyKey.class, 
   MyValue.class, 
   myJob,true,  
 MyTableInputFormat.class);
 I get error: java.lang.ClassNotFoundException: 
 org.apache.hadoop.hbase.mapreduce.TableMapper
   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
   at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 If I do not use the last two parameters, there is no error.
 What is going wrong here?
 Thanks
 Regards



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823857#comment-13823857
 ] 

Hadoop QA commented on HBASE-9165:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12614091/HBASE-9165-0.96.00.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7883//console

This message is automatically generated.

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.96.00.patch, HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtils#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for ouput formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.
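 As a rough illustration of the proposed split (a sketch, not the committed API; the 
 method name addHBaseDependencyJars is an assumption about the separated entry point), 
 a downstream project such as Pig could then do something like:
 {code}
 // Hypothetical usage sketch. MyLoadFunc is a placeholder for the caller's own class.
 Configuration conf = job.getConfiguration();
 TableMapReduceUtil.addHBaseDependencyJars(conf);              // HBase jars only, no snooping
 TableMapReduceUtil.addDependencyJars(conf, MyLoadFunc.class); // caller adds its own jars
 {code}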



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HBASE-9971) Port part of HBASE-9958 to 0.94 - change lock scope in locateRegion

2013-11-15 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned HBASE-9971:


Assignee: Lars Hofhansl

 Port part of HBASE-9958 to 0.94 - change lock scope in locateRegion
 ---

 Key: HBASE-9971
 URL: https://issues.apache.org/jira/browse/HBASE-9971
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.14

 Attachments: 9971.txt


 Simple fix that we should have in 0.94 as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9971) Port part of HBASE-9958 to 0.94 - change lock scope in locateRegion

2013-11-15 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823865#comment-13823865
 ] 

Lars Hofhansl commented on HBASE-9971:
--

Going to commit today unless I hear objections.

 Port part of HBASE-9958 to 0.94 - change lock scope in locateRegion
 ---

 Key: HBASE-9971
 URL: https://issues.apache.org/jira/browse/HBASE-9971
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.14

 Attachments: 9971.txt


 Simple fix that we should have in 0.94 as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9971) Port part of HBASE-9958 to 0.94 - change lock scope in locateRegion

2013-11-15 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9971:
-

Fix Version/s: (was: 0.94.15)
   0.94.14

 Port part of HBASE-9958 to 0.94 - change lock scope in locateRegion
 ---

 Key: HBASE-9971
 URL: https://issues.apache.org/jira/browse/HBASE-9971
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.14

 Attachments: 9971.txt


 Simple fix that we should have in 0.94 as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9976) Don't create duplicated TableName objects

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823869#comment-13823869
 ] 

Hadoop QA commented on HBASE-9976:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614088/9976.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 hadoop1.0{color}.  The patch failed to compile against the 
hadoop 1.0 profile.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7884//console

This message is automatically generated.

 Don't create duplicated TableName objects
 -

 Key: HBASE-9976
 URL: https://issues.apache.org/jira/browse/HBASE-9976
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9976.v1.patch


 Profiling shows that the table name is responsible for 25% of the memory 
 needed to keep the region locations. As well, comparisons will be faster if 
 two identical table names are a single Java object.
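 One way to get there, as a minimal sketch assuming a simple intern cache (not 
 necessarily what the attached patch does), is to resolve names through a shared map so 
 equal names come back as the same object:
 {code}
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import org.apache.hadoop.hbase.TableName;

 // Hypothetical helper: intern TableName instances so two identical names share
 // one Java object (cheaper to hold in region location caches, and == works).
 public final class TableNames {
   private static final ConcurrentMap<String, TableName> CACHE =
       new ConcurrentHashMap<String, TableName>();

   public static TableName intern(String name) {
     TableName existing = CACHE.get(name);
     if (existing != null) {
       return existing;                      // fast path, no new allocation
     }
     TableName created = TableName.valueOf(name);
     TableName raced = CACHE.putIfAbsent(name, created);
     return raced != null ? raced : created; // keep whichever won the race
   }

   private TableNames() {
   }
 }
 {code}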



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9942) hbase Scanner specifications accepting wrong specifier and then after scan using correct specifier returning unexpected result

2013-11-15 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823889#comment-13823889
 ] 

Matteo Bertozzi commented on HBASE-9942:


The problem here is that we are using eval from irb.eval_input,
so the STARTROW = row will be set as the constant value,
and the second time we define the argument STARTROW => row it will be 
evaluated as row => row.
One workaround may be to unset the args that we expect with something like 
Object.instance_eval {remove_const :STARTROW}; in this way the STARTROW constant 
will not be evaluated to the row value.

 hbase Scanner specifications accepting wrong specifier and then after scan 
 using correct specifier returning unexpected result 
 ---

 Key: HBASE-9942
 URL: https://issues.apache.org/jira/browse/HBASE-9942
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0, 0.94.13
Reporter: Deepak Sharma
Priority: Minor
 Fix For: 0.96.1


 check the given scenario:
 1. log in to hbase client -- ./hbase shell
 2. created table 'tab1' 
 hbase(main):001:0> create 'tab1' , 'fa1'
 3. put some 10 rows (row1 to row10) in table 'tab1'
 4. run the scan for table 'tab1' as follows:
  
 hbase(main):013:0> scan 'tab1' , { STARTROW => 'row4' , STOPROW => 'row9' }
 ROW   COLUMN+CELL 
   

  row4 column=fa1:col1, 
 timestamp=1384164182738, value=value1 
   
  row5 column=fa1:col1, 
 timestamp=1384164188396, value=value1 
   
  row6 column=fa1:col1, 
 timestamp=1384164192395, value=value1 
   
  row7 column=fa1:col1, 
 timestamp=1384164197693, value=value1 
   
  row8 column=fa1:col1, 
 timestamp=1384164203237, value=value1 
   
 5 row(s) in 0.0540 seconds
 so the result was as expected: rows from 'row4' to 'row8' are displayed
 5. then run the scan using the wrong specifier ( '=' instead of '=>' ) and get a 
 wrong result
 hbase(main):014:0> scan 'tab1' , { STARTROW = 'row4' , STOPROW = 'row9' }
 ROW   COLUMN+CELL 
   

  row1 column=fa1:col1, 
 timestamp=1384164167838, value=value1 
   
  row10column=fa1:col1, 
 timestamp=1384164212615, value=value1 
   
  row2 column=fa1:col1, 
 timestamp=1384164175337, value=value1 
   
  row3 column=fa1:col1, 
 timestamp=1384164179068, value=value1 
   
  row4 column=fa1:col1, 
 timestamp=1384164182738, value=value1 
   
  row5 column=fa1:col1, 
 timestamp=1384164188396, value=value1 
   
  row6 column=fa1:col1, 
 timestamp=1384164192395, value=value1 
   
  row7 column=fa1:col1, 
 timestamp=1384164197693, value=value1 
   
  row8 column=fa1:col1, 
 timestamp=1384164203237, value=value1 
   
  row9 column=fa1:col1, 
 timestamp=1384164208375, value=value1 
   
 10 row(s) in 0.0390 seconds
 6. now performed the correct scan query with the correct specifier ( used '=>' as 
 specifier)
 hbase(main):015:0> scan 'tab1' , { STARTROW => 'row4' , STOPROW => 'row9' }
 ROW   COLUMN+CELL 

[jira] [Commented] (HBASE-7663) [Per-KV security] Visibility labels

2013-11-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823896#comment-13823896
 ] 

Andrew Purtell commented on HBASE-7663:
---

Still +1 after rebase. If you are waiting further [~anoop.hbase] can you 
extract the new base class for Get and Scan into a separate patch and commit it 
so I can update HBASE-7662? 

 [Per-KV security] Visibility labels
 ---

 Key: HBASE-7663
 URL: https://issues.apache.org/jira/browse/HBASE-7663
 Project: HBase
  Issue Type: Sub-task
  Components: Coprocessors, security
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Anoop Sam John
 Fix For: 0.98.0

 Attachments: HBASE-7663.patch, HBASE-7663_V2.patch, 
 HBASE-7663_V3.patch, HBASE-7663_V4.patch, HBASE-7663_V5.patch, 
 HBASE-7663_V6.patch, HBASE-7663_V7.patch, HBASE-7663_V8.patch


 Implement Accumulo-style visibility labels. Consider the following design 
 principles:
 - Coprocessor based implementation
 - Minimal to no changes to core code
 - Use KeyValue tags (HBASE-7448) to carry labels
 - Use OperationWithAttributes# {get,set}Attribute for handling visibility 
 labels in the API
 - Implement a new filter for evaluating visibility labels as KVs are streamed 
 through.
 This approach would be consistent in deployment and API details with other 
 per-KV security work, supporting environments where both might be 
 employed, even stacked on some tables.
 See the parent issue for more discussion.
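 To make the attribute-based approach concrete, a client-side sketch might look like the 
 following; the attribute key "VISIBILITY" and the expression syntax are assumptions for 
 illustration, not the committed interface.
 {code}
 // Sketch only: attach a visibility expression through the generic attribute
 // mechanism named above. Key name and expression syntax are illustrative.
 Put put = new Put(Bytes.toBytes("row1"));
 put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
 put.setAttribute("VISIBILITY", Bytes.toBytes("secret&probationary"));
 table.put(put);
 {code}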



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-7662) [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags

2013-11-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823900#comment-13823900
 ] 

Andrew Purtell commented on HBASE-7662:
---

bq. Added Query super class in HBASE-7663. If that is getting committed before 
this, can change before commit. 

See 
https://issues.apache.org/jira/browse/HBASE-7663?focusedCommentId=13823896page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13823896

 [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags
 ---

 Key: HBASE-7662
 URL: https://issues.apache.org/jira/browse/HBASE-7662
 Project: HBase
  Issue Type: Sub-task
  Components: Coprocessors, security
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7662.patch, 7662.patch, 7662.patch, 7662.patch


 We can improve the performance of per-cell authorization if the read of the 
 cell ACL, if any, is combined with the sequential read of the cell data 
 already in progress. When tags are inlined with KVs in block encoding (see 
 HBASE-7448, and more generally HBASE-7233), we can use them to carry cell 
 ACLs instead of using out-of-line storage (HBASE-7661) for that purpose.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-15 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Status: Open  (was: Patch Available)

 Remove some array copy - server side
 

 Key: HBASE-9959
 URL: https://issues.apache.org/jira/browse/HBASE-9959
 Project: HBase
  Issue Type: Bug
  Components: Protobufs, regionserver
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch, 
 9959.v4.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-7662) [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags

2013-11-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823901#comment-13823901
 ] 

Andrew Purtell commented on HBASE-7662:
---

bq. setACL() and setACLStrategy() javadoc pls add one line saying this is not 
having any impact for Delete mutation. Infact setACL() on Delete will make the 
op to fail at server side

Ok

 [Per-KV security] Store and apply per cell ACLs into/from KeyValue tags
 ---

 Key: HBASE-7662
 URL: https://issues.apache.org/jira/browse/HBASE-7662
 Project: HBase
  Issue Type: Sub-task
  Components: Coprocessors, security
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7662.patch, 7662.patch, 7662.patch, 7662.patch


 We can improve the performance of per-cell authorization if the read of the 
 cell ACL, if any, is combined with the sequential read of the cell data 
 already in progress. When tags are inlined with KVs in block encoding (see 
 HBASE-7448, and more generally HBASE-7233), we can use them to carry cell 
 ACLs instead of using out-of-line storage (HBASE-7661) for that purpose.
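 For context, a usage sketch of the mutation-level API discussed in the comment above; 
 the setACL() signature shown here is an assumption based on that discussion, not the 
 final committed form, and per the same comment it would not apply to Delete mutations.
 {code}
 // Sketch only -- assumed signature, illustrative row/family/qualifier values.
 Put put = new Put(Bytes.toBytes("row1"));
 put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
 put.setACL("bob", new Permission(Permission.Action.READ));
 table.put(put);
 {code}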



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823904#comment-13823904
 ] 

Lars Hofhansl commented on HBASE-9165:
--

+1
You sure we won't need com.google.common.base.Function?


 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.96.00.patch, HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtil#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-15 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Attachment: 9959.v5.patch

 Remove some array copy - server side
 

 Key: HBASE-9959
 URL: https://issues.apache.org/jira/browse/HBASE-9959
 Project: HBase
  Issue Type: Bug
  Components: Protobufs, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch, 
 9959.v4.patch, 9959.v5.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-8438) Extend bin/hbase to print a mapreduce classpath

2013-11-15 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-8438:


Attachment: HBASE-8438.09.patch

Rebased onto trunk after HBASE-9165 was committed. Here's the new output, run 
from a tarball (hadoop1 profile) rather than my sandbox. Notice there's no 
debug logging to hide, no hadoop jar, and no JarFinder constructed temporary 
jars -- I expect this to be the common case.

{noformat}
$ ./bin/hbase mapredcp | tr ':' '\n'
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/netty-3.6.6.Final.jar
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/hbase-hadoop-compat-0.97.0-SNAPSHOT.jar
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/protobuf-java-2.5.0.jar
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/guava-12.0.1.jar
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/htrace-core-2.01.jar
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/hbase-protocol-0.97.0-SNAPSHOT.jar
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/hbase-client-0.97.0-SNAPSHOT.jar
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/zookeeper-3.4.5.jar
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/hbase-server-0.97.0-SNAPSHOT.jar
/private/tmp/hbase-0.97.0-SNAPSHOT/lib/hbase-common-0.97.0-SNAPSHOT.jar
{noformat}

 Extend bin/hbase to print a mapreduce classpath
 -

 Key: HBASE-8438
 URL: https://issues.apache.org/jira/browse/HBASE-8438
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.94.6.1, 0.95.0, 0.94.13
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 HBASE-8438-0.94.00.patch, HBASE-8438.09.patch


 For tools like pig and hive, blindly appending the full output of `bin/hbase 
 classpath` to their own CLASSPATH is excessive. They already build CLASSPATH 
 entries for hadoop. All they need from us is the delta entries, the 
 dependencies we require w/o hadoop and all of its transitive deps. This is 
 also a kindness for Windows, where there's a shorter limit on the length of 
 commandline arguments.
 See also HIVE-2055 for additional discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-15 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Status: Patch Available  (was: Open)

 Remove some array copy - server side
 

 Key: HBASE-9959
 URL: https://issues.apache.org/jira/browse/HBASE-9959
 Project: HBase
  Issue Type: Bug
  Components: Protobufs, regionserver
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch, 
 9959.v4.patch, 9959.v5.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HBASE-9971) Port part of HBASE-9958 to 0.94 - change lock scope in locateRegion

2013-11-15 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-9971.
--

Resolution: Fixed

Ran a bunch of tests locally. All good. Committed to 0.94.

 Port part of HBASE-9958 to 0.94 - change lock scope in locateRegion
 ---

 Key: HBASE-9971
 URL: https://issues.apache.org/jira/browse/HBASE-9971
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.14

 Attachments: 9971.txt


 Simple fix that we should have in 0.94 as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9975) Not starting ReplicationSink when using custom implementation for the ReplicationSink.

2013-11-15 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823914#comment-13823914
 ] 

Lars Hofhansl commented on HBASE-9975:
--

Looks good. The change to batch is intended I assume (you override it in your 
custom implementation)?

 Not starting ReplicationSink when using custom implementation for the 
 ReplicationSink.
 --

 Key: HBASE-9975
 URL: https://issues.apache.org/jira/browse/HBASE-9975
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.13
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9975_Trunk.patch


 Not starting ReplicationSink when using custom implementation for the 
 ReplicationSink.
 {code}
 if (this.replicationSourceHandler == this.replicationSinkHandler
     && this.replicationSourceHandler != null) {
   this.replicationSourceHandler.startReplicationService();
 } else if (this.replicationSourceHandler != null) {
   this.replicationSourceHandler.startReplicationService();
 } else if (this.replicationSinkHandler != null) {
   this.replicationSinkHandler.startReplicationService();
 }
 {code}
 ReplicationSource and Sink are different objects here because there is a custom impl for 
 ReplicationSink. We cannot use else-ifs here.
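 One possible shape of the fix (a sketch, not the attached HBASE-9975_Trunk.patch): start 
 the source service, and separately start the sink service whenever the sink handler is a 
 distinct object.
 {code}
 // Sketch of the intended behaviour: both services get started when the source
 // and sink handlers are different objects (e.g. a custom ReplicationSink impl).
 if (this.replicationSourceHandler != null) {
   this.replicationSourceHandler.startReplicationService();
 }
 if (this.replicationSinkHandler != null
     && this.replicationSinkHandler != this.replicationSourceHandler) {
   this.replicationSinkHandler.startReplicationService();
 }
 {code}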



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9857) Blockcache prefetch for HFile V3

2013-11-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9857:
--

Status: Patch Available  (was: Open)

 Blockcache prefetch for HFile V3
 

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Priority: Minor
 Attachments: 9857.patch, 9857.patch


 The attached patch implements a prefetching function for HFile (v3) blocks, 
 enabled by a column family or regionserver property. The purpose of this 
 change is to warm the blockcache, as rapidly after region open as is reasonable, 
 with all the data and index blocks of (presumably also in-memory) table data, 
 without counting those block loads as cache misses. This is great for fast reads 
 and for keeping the cache hit ratio high. The IO impact can be tuned against the 
 time until all data blocks are in cache. It works a bit like CompactSplitThread 
 and makes some effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in the blockcache, or is large as a percentage of the blockcache, this 
 is not a good idea: it will just blow out the cache and trigger a lot of useless 
 GC activity. It might be useful as an expert tuning option though. Or not.
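 For illustration, enabling it per column family might look like the sketch below; the 
 property key PREFETCH_BLOCKS_ON_OPEN is an assumption about the attribute name, not a 
 confirmed interface, and admin is an assumed HBaseAdmin instance.
 {code}
 // Sketch only: turn prefetch-on-open on for one column family via a family
 // attribute. The property name is an assumption for illustration.
 HColumnDescriptor family = new HColumnDescriptor("f");
 family.setValue("PREFETCH_BLOCKS_ON_OPEN", "true");
 HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("hot_table"));
 desc.addFamily(family);
 admin.createTable(desc);
 {code}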



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9857) Blockcache prefetch for HFile V3

2013-11-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9857:
--

Attachment: 9857.patch

Rebase on latest trunk

 Blockcache prefetch for HFile V3
 

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Priority: Minor
 Attachments: 9857.patch, 9857.patch


 The attached patch implements a prefetching function for HFile (v3) blocks, 
 enabled by a column family or regionserver property. The purpose of this 
 change is to warm the blockcache, as rapidly after region open as is reasonable, 
 with all the data and index blocks of (presumably also in-memory) table data, 
 without counting those block loads as cache misses. This is great for fast reads 
 and for keeping the cache hit ratio high. The IO impact can be tuned against the 
 time until all data blocks are in cache. It works a bit like CompactSplitThread 
 and makes some effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in the blockcache, or is large as a percentage of the blockcache, this 
 is not a good idea: it will just blow out the cache and trigger a lot of useless 
 GC activity. It might be useful as an expert tuning option though. Or not.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9835) Define C interface of HBase Client synchronous APIs

2013-11-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9835:
-

Attachment: (was: HBASE-9835-1.patch)

 Define C interface of HBase Client synchronous APIs
 ---

 Key: HBASE-9835
 URL: https://issues.apache.org/jira/browse/HBASE-9835
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: C

 Creating this as a sub-task of HBASE-1015 to define the C language 
 interface of HBase Client synchronous APIs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9977) Define C interface of HBase Client Asynchronous APIs

2013-11-15 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-9977:


 Summary: Define C interface of HBase Client Asynchronous APIs
 Key: HBASE-9977
 URL: https://issues.apache.org/jira/browse/HBASE-9977
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Elliott Clark
Assignee: Elliott Clark






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9835) Define C interface of HBase Client synchronous APIs

2013-11-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9835:
-

Attachment: (was: HBASE-9835-4.patch)

 Define C interface of HBase Client synchronous APIs
 ---

 Key: HBASE-9835
 URL: https://issues.apache.org/jira/browse/HBASE-9835
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: C

 Creating this as a sub-task of HBASE-1015 to define the C language 
 interface of HBase Client synchronous APIs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9835) Define C interface of HBase Client synchronous APIs

2013-11-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9835:
-

Attachment: (was: HBASE-9835-2.patch)

 Define C interface of HBase Client synchronous APIs
 ---

 Key: HBASE-9835
 URL: https://issues.apache.org/jira/browse/HBASE-9835
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: C

 Creating this as a sub-task of HBASE-1015 to define the C language 
 interface of HBase Client synchronous APIs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9835) Define C interface of HBase Client synchronous APIs

2013-11-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9835:
-

Attachment: (was: HBASE-9835-0.patch)

 Define C interface of HBase Client synchronous APIs
 ---

 Key: HBASE-9835
 URL: https://issues.apache.org/jira/browse/HBASE-9835
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: C

 Creating this as a sub-task of HBASE-1015 to define the C language 
 interface of HBase Client synchronous APIs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9835) Define C interface of HBase Client synchronous APIs

2013-11-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9835:
-

Attachment: (was: HBASE-9835-5.patch)

 Define C interface of HBase Client synchronous APIs
 ---

 Key: HBASE-9835
 URL: https://issues.apache.org/jira/browse/HBASE-9835
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: C

 Creating this as a sub-task of HBASE-1015 to define the C language 
 interface of HBase Client synchronous APIs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9835) Define C interface of HBase Client synchronous APIs

2013-11-15 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823941#comment-13823941
 ] 

Elliott Clark commented on HBASE-9835:
--

I talked with several people who requested that there also be a synchronous 
client API in C. This should be basically a condition wait + the async client. 
So I'll finish up the async API work in another issue and then circle back to 
this.
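In Java terms (purely illustrative; the real client is C/C++ and every name below is 
invented for the sketch), the "condition wait + async client" idea amounts to blocking 
on a latch that the async callback releases:
{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Invented types for the sketch: an async client that reports to a callback.
interface GetCallback { void onResult(byte[] value, Throwable error); }
interface AsyncClient { void getAsync(byte[] row, GetCallback cb); }

final class SyncFacade {
  private final AsyncClient async;

  SyncFacade(AsyncClient async) { this.async = async; }

  /** Synchronous get built on the async call: issue it, then wait on a latch. */
  byte[] get(byte[] row, long timeoutMs) throws Exception {
    final CountDownLatch done = new CountDownLatch(1);
    final AtomicReference<byte[]> result = new AtomicReference<byte[]>();
    final AtomicReference<Throwable> failure = new AtomicReference<Throwable>();
    async.getAsync(row, new GetCallback() {
      public void onResult(byte[] value, Throwable error) {
        result.set(value);
        failure.set(error);
        done.countDown();               // release the waiting caller
      }
    });
    if (!done.await(timeoutMs, TimeUnit.MILLISECONDS)) {
      throw new Exception("get timed out");
    }
    if (failure.get() != null) {
      throw new Exception(failure.get());
    }
    return result.get();
  }
}
{code}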

 Define C interface of HBase Client synchronous APIs
 ---

 Key: HBASE-9835
 URL: https://issues.apache.org/jira/browse/HBASE-9835
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: C

 Creating this as a sub-task of HBASE-1015 to define the C language 
 interface of HBase Client synchronous APIs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9954) Incorporate HTTPS support for HBase

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823951#comment-13823951
 ] 

Hadoop QA commented on HBASE-9954:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614086/9954-v2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7882//console

This message is automatically generated.

 Incorporate HTTPS support for HBase
 ---

 Key: HBASE-9954
 URL: https://issues.apache.org/jira/browse/HBASE-9954
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9954-v1.txt, 9954-v2.txt, HBASE-9954_0.94.patch


 In various classes, "http://" is hard coded.
 This JIRA adds support for using the HBase web UI via HTTPS. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823983#comment-13823983
 ] 

Nick Dimiduk commented on HBASE-9165:
-

Yes. This change is an effective noop; the same jar contains both classes. Its 
value is to bring the implementations across branches back into sync.

{noformat}
soleil:hbase-94 ndimiduk$ grep "guava\.version" pom.xml
    <guava.version>11.0.2</guava.version>
soleil:hbase-94 ndimiduk$ jar tf 
~/.m2/repository/com/google/guava/guava/11.0.2/guava-11.0.2.jar | grep 
"base\.Function\.class"
com/google/common/base/Function.class
soleil:hbase-94 ndimiduk$ jar tf 
~/.m2/repository/com/google/guava/guava/11.0.2/guava-11.0.2.jar | grep 
"collect\.ImmutableSet\.class"
com/google/common/collect/ImmutableSet.class
{noformat}

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.96.00.patch, HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtil#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9954) Incorporate HTTPS support for HBase

2013-11-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823994#comment-13823994
 ] 

Ted Yu commented on HBASE-9954:
---

Javadoc warning came from HRegion.java, unrelated to this patch.

 Incorporate HTTPS support for HBase
 ---

 Key: HBASE-9954
 URL: https://issues.apache.org/jira/browse/HBASE-9954
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9954-v1.txt, 9954-v2.txt, HBASE-9954_0.94.patch


 In various classes, "http://" is hard coded.
 This JIRA adds support for using the HBase web UI via HTTPS. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9954) Incorporate HTTPS support for HBase

2013-11-15 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13823998#comment-13823998
 ] 

Elliott Clark commented on HBASE-9954:
--

lgtm +1

 Incorporate HTTPS support for HBase
 ---

 Key: HBASE-9954
 URL: https://issues.apache.org/jira/browse/HBASE-9954
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9954-v1.txt, 9954-v2.txt, HBASE-9954_0.94.patch


 In various classes, "http://" is hard coded.
 This JIRA adds support for using the HBase web UI via HTTPS. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9977) Define C interface of HBase Client Asynchronous APIs

2013-11-15 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824003#comment-13824003
 ] 

Elliott Clark commented on HBASE-9977:
--

I feel pretty strongly that an async underlying client is the only way that we 
can have a single client have good performance as the number of regionservers 
in a cluster comes up.  So I propose:

* An async C++ client
* An async C client built using the async C++ client
* A sync C client built using the async C++ client

I think with those things we get everything that is needed with the least 
amount of code duplication.

 Define C interface of HBase Client Asynchronous APIs
 

 Key: HBASE-9977
 URL: https://issues.apache.org/jira/browse/HBASE-9977
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Elliott Clark
Assignee: Elliott Clark





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824002#comment-13824002
 ] 

Nick Dimiduk commented on HBASE-9165:
-

I stood up a pseudo-distributed hadoop on my laptop, built the tarball from 
0.94.14-SNAPSHOT, and launched PerformanceEvaluation sequentialWrite 2. In the 
job launch logs I see

{noformat}
13/11/15 11:36:46 DEBUG mapreduce.TableMapReduceUtil: For class 
com.google.common.collect.ImmutableSet, using jar 
/private/tmp/hbase-0.94.14-SNAPSHOT/lib/guava-11.0.2.jar
{noformat}

The submitted job.xml file contains the following value for tmpjars:

{noformat}
file:/private/tmp/hbase-0.94.14-SNAPSHOT/hbase-0.94.14-SNAPSHOT.jar,file:/private/tmp/hbase-0.94.14-SNAPSHOT/lib/protobuf-java-2.4.0a.jar,file:/private/tmp/hbase-0.94.14-SNAPSHOT/lib/zookeeper-3.4.5.jar,file:/private/tmp/hbase-0.94.14-SNAPSHOT/lib/guava-11.0.2.jar,file:/private/tmp/hbase-0.94.14-SNAPSHOT/lib/hadoop-core-1.0.4.jar,file:/private/tmp/hbase-0.94.14-SNAPSHOT/hbase-0.94.14-SNAPSHOT-tests.jar
{noformat}

I think we're good :)

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.96.00.patch, HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtil#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824033#comment-13824033
 ] 

Lars Hofhansl commented on HBASE-9165:
--

If they are in the same jar, let's just add them both. Seems safer and it also 
calls out the actual dependencies.
+1 otherwise.

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.96.00.patch, HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtil#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8438) Extend bin/hbase to print a mapreduce classpath

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824034#comment-13824034
 ] 

Hadoop QA commented on HBASE-8438:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614098/HBASE-8438.09.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7885//console

This message is automatically generated.

 Extend bin/hbase to print a mapreduce classpath
 -

 Key: HBASE-8438
 URL: https://issues.apache.org/jira/browse/HBASE-8438
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.94.6.1, 0.95.0, 0.94.13
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 HBASE-8438-0.94.00.patch, HBASE-8438.09.patch


 For tools like pig and hive, blindly appending the full output of `bin/hbase 
 classpath` to their own CLASSPATH is excessive. They already build CLASSPATH 
 entries for hadoop. All they need from us is the delta entries, the 
 dependencies we require w/o hadoop and all of its transitive deps. This is 
 also a kindness for Windows, where there's a shorter limit on the length of 
 commandline arguments.
 See also HIVE-2055 for additional discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9959) Remove some array copy - server side

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824065#comment-13824065
 ] 

Hadoop QA commented on HBASE-9959:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614097/9959.v5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.access.TestNamespaceCommands

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7886//console

This message is automatically generated.

 Remove some array copy - server side
 

 Key: HBASE-9959
 URL: https://issues.apache.org/jira/browse/HBASE-9959
 Project: HBase
  Issue Type: Bug
  Components: Protobufs, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.1

 Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch, 
 9959.v4.patch, 9959.v5.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9954) Incorporate HTTPS support for HBase

2013-11-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9954:
--

Fix Version/s: 0.96.1
   0.98.0

 Incorporate HTTPS support for HBase
 ---

 Key: HBASE-9954
 URL: https://issues.apache.org/jira/browse/HBASE-9954
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.0, 0.96.1

 Attachments: 9954-v1.txt, 9954-v2.txt, HBASE-9954_0.94.patch


 In various classes, "http://" is hard coded.
 This JIRA adds support for using the HBase web UI via HTTPS. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9112) Custom TableInputFormat in initTableMapperJob throws ClassNoFoundException on TableMapper

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824073#comment-13824073
 ] 

Hudson commented on HBASE-9112:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #838 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/838/])
HBASE-9165 [mapreduce] Modularize building dependency jars

 - Separate adding HBase and dependencies from adding other job
   dependencies, and expose it as a separate method that other
   projects can use (for PIG-3285).
 - Explicitly add hbase-server to the list of dependencies we ship
   with the job, for users who extend the classes we provide (see
   HBASE-9112).
 - Add integration test for addDependencyJars.
 - Code reuse for TestTableMapReduce. (ndimiduk: rev 1542341)
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestTableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableMapper.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceBase.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceUtil.java


 Custom TableInputFormat in initTableMapperJob throws ClassNoFoundException on 
 TableMapper
 -

 Key: HBASE-9112
 URL: https://issues.apache.org/jira/browse/HBASE-9112
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, mapreduce
Affects Versions: 0.94.6.1
 Environment: CDH-4.3.0-1.cdh4.3.0.p0.22
Reporter: Debanjan Bhattacharyya
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.94.7, 0.96.1


 When using custom TableInputFormat in TableMapReduceUtil.initTableMapperJob 
 in the following way
 TableMapReduceUtil.initTableMapperJob(mytable, 
   MyScan, 
   MyMapper.class,
   MyKey.class, 
   MyValue.class, 
   myJob,true,  
 MyTableInputFormat.class);
 I get error: java.lang.ClassNotFoundException: 
 org.apache.hadoop.hbase.mapreduce.TableMapper
   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
   at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 If I do not use the last two parameters, there is no error.
 What is going wrong here?
 Thanks
 Regards



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9947) Add CM action for online compression algorithm change

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824070#comment-13824070
 ] 

Hudson commented on HBASE-9947:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #838 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/838/])
HBASE-9947 Add CM action for online compression algorithm change (jxiang: rev 
1542323)
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/ChangeCompressionAction.java
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/factories/SlowDeterministicMonkeyFactory.java


 Add CM action for online compression algorithm change
 -

 Key: HBASE-9947
 URL: https://issues.apache.org/jira/browse/HBASE-9947
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.98.0, 0.96.1

 Attachments: trunk-9947.patch


 We need to add a CM action for online compression algorithm change and make 
 sure ITBLL is ok with it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-3787) Increment is non-idempotent but client retries RPC

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824072#comment-13824072
 ] 

Hudson commented on HBASE-3787:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #838 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/838/])
HBASE-3787 Increment is non-idempotent but client retries RPC ADDENDUM add 
licence (sershe: rev 1542169)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PerClientRandomNonceGenerator.java
HBASE-3787 Increment is non-idempotent but client retries RPC (sershe: rev 
1542168)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Action.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientIdGenerator.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/NonceGenerator.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PerClientRandomNonceGenerator.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/OperationConflictException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Triple.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutationProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RowProcessorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/Client.proto
* /hbase/trunk/hbase-protocol/src/main/protobuf/MultiRowMutation.proto
* /hbase/trunk/hbase-protocol/src/main/protobuf/RowProcessor.proto
* /hbase/trunk/hbase-protocol/src/main/protobuf/WAL.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/CoprocessorHConnection.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRowProcessorEndpoint.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MultiRowMutationEndpoint.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ServerNonceManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogKey.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotLogSplitter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerNonceManager.java
* 

[jira] [Commented] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824074#comment-13824074
 ] 

Hudson commented on HBASE-9165:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #838 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/838/])
HBASE-9165 [mapreduce] Modularize building dependency jars

 - Separate adding HBase and dependencies from adding other job
   dependencies, and expose it as a separate method that other
   projects can use (for PIG-3285).
 - Explicitly add hbase-server to the list of dependencies we ship
   with the job, for users who extend the classes we provide (see
   HBASE-9112).
 - Add integration test for addDependencyJars.
 - Code reuse for TestTableMapReduce. (ndimiduk: rev 1542341)
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestTableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableMapper.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceBase.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceUtil.java


 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.96.00.patch, HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtils#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9954) Incorporate HTTPS support for HBase

2013-11-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9954:
--

Hadoop Flags: Reviewed

Integrated to 0.96 and trunk.

Thanks for the reviews.

 Incorporate HTTPS support for HBase
 ---

 Key: HBASE-9954
 URL: https://issues.apache.org/jira/browse/HBASE-9954
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.0, 0.96.1

 Attachments: 9954-v1.txt, 9954-v2.txt, HBASE-9954_0.94.patch


 In various classes, "http://" is hard coded.
 This JIRA adds support for using the HBase web UI via HTTPS.
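For illustration, a minimal sketch of the idea behind the change: derive the UI scheme from configuration instead of hard-coding "http://". The config key and port below are assumptions for the example, not necessarily what the patch uses.

{code}
import org.apache.hadoop.conf.Configuration;

public class UiSchemeExample {
  // Pick the scheme from configuration rather than hard-coding "http://".
  static String masterStatusUrl(Configuration conf, String host, int port) {
    boolean ssl = conf.getBoolean("hbase.ssl.enabled", false); // illustrative key
    return (ssl ? "https://" : "http://") + host + ":" + port + "/master-status";
  }

  public static void main(String[] args) {
    // Prints http://localhost:60010/master-status unless SSL is enabled in the config.
    System.out.println(masterStatusUrl(new Configuration(), "localhost", 60010));
  }
}
{code}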



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9165:


Attachment: HBASE-9165-0.94.01.patch

As you prefer. Attached the updated patch. If there's no further objection, 
I'll commit this afternoon.

Thanks [~lhofhansl].

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.94.01.patch, HBASE-9165-0.96.00.patch, 
 HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtils#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824091#comment-13824091
 ] 

Hadoop QA commented on HBASE-9165:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12614123/HBASE-9165-0.94.01.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7888//console

This message is automatically generated.

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.94.01.patch, HBASE-9165-0.96.00.patch, 
 HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtils#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-8438) Extend bin/hbase to print a mapreduce classpath

2013-11-15 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-8438:


Attachment: HBASE-8438-0.94.01.patch

Rebased 0.94 patch onto HBASE-9165-0.94.01.patch.

{noformat}
$ ./bin/hbase mapredcp | tr ':' '\n'
/private/tmp/hbase-0.94.14-SNAPSHOT/hbase-0.94.14-SNAPSHOT.jar
/private/tmp/hbase-0.94.14-SNAPSHOT/lib/protobuf-java-2.4.0a.jar
/private/tmp/hbase-0.94.14-SNAPSHOT/lib/zookeeper-3.4.5.jar
/private/tmp/hbase-0.94.14-SNAPSHOT/lib/guava-11.0.2.jar
/private/tmp/hbase-0.94.14-SNAPSHOT/lib/hadoop-core-1.0.4.jar
{noformat}

 Extend bin/hbase to print a mapreduce classpath
 -

 Key: HBASE-8438
 URL: https://issues.apache.org/jira/browse/HBASE-8438
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.94.6.1, 0.95.0, 0.94.13
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 HBASE-8438-0.94.00.patch, HBASE-8438-0.94.01.patch, HBASE-8438.09.patch


 For tools like pig and hive, blindly appending the full output of `bin/hbase 
 classpath` to their own CLASSPATH is excessive. They already build CLASSPATH 
 entries for hadoop. All they need from us is the delta entries, the 
 dependencies we require w/o hadoop and all of its transitive deps. This is 
 also a kindness for Windows, where there's a shorter limit on the length of 
 commandline arguments.
 See also HIVE-2055 for additional discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824115#comment-13824115
 ] 

Lars Hofhansl commented on HBASE-9165:
--

+1 :)

Thanks Nick.

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.94.01.patch, HBASE-9165-0.96.00.patch, 
 HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtils#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8438) Extend bin/hbase to print a mapreduce classpath

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824118#comment-13824118
 ] 

Hadoop QA commented on HBASE-8438:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12614131/HBASE-8438-0.94.01.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7889//console

This message is automatically generated.

 Extend bin/hbase to print a mapreduce classpath
 -

 Key: HBASE-8438
 URL: https://issues.apache.org/jira/browse/HBASE-8438
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.94.6.1, 0.95.0, 0.94.13
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 HBASE-8438-0.94.00.patch, HBASE-8438-0.94.01.patch, HBASE-8438.09.patch


 For tools like pig and hive, blindly appending the full output of `bin/hbase 
 classpath` to their own CLASSPATH is excessive. They already build CLASSPATH 
 entries for hadoop. All they need from us is the delta entries, the 
 dependencies we require w/o hadoop and all of its transitive deps. This is 
 also a kindness for Windows, where there's a shorter limit on the length of 
 commandline arguments.
 See also HIVE-2055 for additional discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-15 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824125#comment-13824125
 ] 

Lars Hofhansl commented on HBASE-9969:
--

As for the discussion about optimizing it... I think we need to:
# make sure there is no scenario where this is significantly slower
# make sure all corner cases are explored for correctness

This literally sits at the core of HBase, and we'd better be 100% sure it's 
OK.
That said, it looks good to me (haven't studied the details of the LoserTree 
class, though).


 Improve KeyValueHeap using loser tree
 -

 Key: HBASE-9969
 URL: https://issues.apache.org/jira/browse/HBASE-9969
 Project: HBase
  Issue Type: Improvement
  Components: Performance, regionserver
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.98.0, 0.96.1, 0.94.15

 Attachments: 9969-0.94.txt, hbase-9969-v2.patch, hbase-9969.patch, 
 hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt


 A LoserTree is a better data structure than a binary heap for this merge. It saves half of the 
 comparisons on each next(), though the time complexity is still O(log N).
 Currently a scan or get will go through two KeyValueHeaps: one merges KVs 
 read from multiple HFiles in a single store, the other merges results 
 from multiple stores. This patch should improve both cases whenever CPU 
 is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
 All of the optimization work is done in KeyValueHeap and does not change its 
 public interfaces. The new code looks cleaner and simpler to understand.
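For context, here is a minimal, hypothetical sketch of the binary-heap k-way merge that KeyValueHeap conceptually performs today (plain integers stand in for KeyValues and scanners). It is not the patch itself; the loser tree in the patch targets the roughly 2 log k comparisons paid on each re-insert below.

{code}
import java.util.*;

public class HeapMergeSketch {
  // Merge k sorted runs using a binary heap (java.util.PriorityQueue).
  public static List<Integer> merge(List<Iterator<Integer>> runs) {
    // Heap entries: { current head value, index of the run it came from }.
    PriorityQueue<int[]> heap =
        new PriorityQueue<int[]>(Math.max(1, runs.size()), new Comparator<int[]>() {
          public int compare(int[] a, int[] b) { return Integer.compare(a[0], b[0]); }
        });
    for (int i = 0; i < runs.size(); i++) {
      if (runs.get(i).hasNext()) heap.add(new int[] { runs.get(i).next(), i });
    }
    List<Integer> out = new ArrayList<Integer>();
    while (!heap.isEmpty()) {
      int[] top = heap.poll();                 // next() on the merged view
      out.add(top[0]);
      Iterator<Integer> run = runs.get(top[1]);
      if (run.hasNext()) heap.add(new int[] { run.next(), top[1] }); // ~2 log k compares
    }
    return out;
  }

  public static void main(String[] args) {
    List<Iterator<Integer>> runs = Arrays.asList(
        Arrays.asList(1, 4, 7).iterator(),
        Arrays.asList(2, 5, 8).iterator(),
        Arrays.asList(3, 6, 9).iterator());
    System.out.println(merge(runs)); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
  }
}
{code}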



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-15 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824130#comment-13824130
 ] 

Matt Corgan commented on HBASE-9969:


Another possible tweak to KeyValueHeapBenchmark.java: the row keys should have 
a longer common prefix.  Maybe prepend 16 identical bytes, which is a common 
real-world scenario and will help differentiate the implementations.

 Improve KeyValueHeap using loser tree
 -

 Key: HBASE-9969
 URL: https://issues.apache.org/jira/browse/HBASE-9969
 Project: HBase
  Issue Type: Improvement
  Components: Performance, regionserver
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.98.0, 0.96.1, 0.94.15

 Attachments: 9969-0.94.txt, hbase-9969-v2.patch, hbase-9969.patch, 
 hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt


 A LoserTree is a better data structure than a binary heap for this merge. It saves half of the 
 comparisons on each next(), though the time complexity is still O(log N).
 Currently a scan or get will go through two KeyValueHeaps: one merges KVs 
 read from multiple HFiles in a single store, the other merges results 
 from multiple stores. This patch should improve both cases whenever CPU 
 is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
 All of the optimization work is done in KeyValueHeap and does not change its 
 public interfaces. The new code looks cleaner and simpler to understand.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9857) Blockcache prefetch for HFile V3

2013-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824132#comment-13824132
 ] 

Hadoop QA commented on HBASE-9857:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614106/9857.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 26 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7887//console

This message is automatically generated.

 Blockcache prefetch for HFile V3
 

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Priority: Minor
 Attachments: 9857.patch, 9857.patch


 Attached patch implements a prefetching function for HFile (v3) blocks, if 
 indicated by a column family or regionserver property. The purpose of this 
 change is to warm the blockcache, as rapidly after region open as is reasonable, 
 with all the data and index blocks of (presumably also in-memory) table data, 
 without counting those block loads as cache misses. Great for fast reads and 
 keeping the cache hit ratio high. Can tune the IO impact versus time until 
 all data blocks are in cache. Works a bit like CompactSplitThread. Makes some 
 effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in blockcache, or if as a percentage of blockcache it is large, this 
 is not a good idea: it will just blow out the cache and trigger a lot of useless 
 GC activity. Might be useful as an expert tuning option though. Or not.
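For illustration only, a minimal, hypothetical sketch of the throttled background warm-up described above; BlockSource is an invented stand-in for the HFile reader, not the attached patch's API.

{code}
import java.util.concurrent.*;

// Invented stand-in for reading HFile blocks into the block cache.
interface BlockSource {
  int blockCount();
  void readBlockIntoCache(int index); // load one data/index block, not counted as a miss
}

public class PrefetchSketch {
  private static final ExecutorService POOL = Executors.newSingleThreadExecutor();

  // delayBetweenBlocksMs is the IO-impact knob: larger values spread reads out,
  // trading warm-up time for lower disk pressure.
  static Future<?> prefetch(final BlockSource source, final long delayBetweenBlocksMs) {
    return POOL.submit(new Runnable() {
      public void run() {
        for (int i = 0; i < source.blockCount(); i++) {
          source.readBlockIntoCache(i);
          try {
            Thread.sleep(delayBetweenBlocksMs);
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return; // stop quietly, e.g. if the region is closed
          }
        }
      }
    });
  }
}
{code}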



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9165) Improvements to addDependencyJars

2013-11-15 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9165:


  Resolution: Fixed
Release Note: 
Introduces the method TableMapReduceUtil#addHBaseDependencyJars for adding 
HBase and its direct dependencies (only) to the job configuration.

This is intended as a low-level API, facilitating code reuse between this class 
and its mapred counterpart. It is also of use to external tools that need to 
build a MapReduce job that interacts with HBase but want fine-grained control 
over the jars shipped to the cluster. See also PIG-3285 and HIVE-2055.
  Status: Resolved  (was: Patch Available)

Committed across all 3 branches. Thanks for the reviews everyone.
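A minimal sketch of how an external tool (e.g. Pig or Hive) might call the new hook, assuming only what the release note states; the job setup around it is illustrative.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class DependencyJarsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "external-tool-job");
    // Ship HBase and its direct dependencies only; the caller stays in control
    // of any further jars it adds to tmpjars itself.
    TableMapReduceUtil.addHBaseDependencyJars(job.getConfiguration());
  }
}
{code}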

 Improvements to addDependencyJars
 -

 Key: HBASE-9165
 URL: https://issues.apache.org/jira/browse/HBASE-9165
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Affects Versions: 0.95.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 0001-HBASE-9165-mapreduce-Modularize-building-dependency-.patch, 
 HBASE-9165-0.94.00.patch, HBASE-9165-0.94.01.patch, HBASE-9165-0.96.00.patch, 
 HBASE-9165.02.patch


 The way we support adding HBase dependencies to a MapReduce job in 
 {{TableMapReduceUtils#addDependencyJars(job)}} is a bit monolithic. Advanced 
 users need a way to add HBase and its dependencies to their job without us 
 snooping around for output formats and the like (see PIG-3285). We can also 
 benefit from a little more code reuse between our {{mapred}} and 
 {{mapreduce}} namespaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8438) Extend bin/hbase to print a mapreduce classpath

2013-11-15 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824170#comment-13824170
 ] 

Lars Hofhansl commented on HBASE-8438:
--

Is this right?
{code}
+Set<String> paths = new HashSet<String>(conf.getStringCollection("tmpjars"));
+if (paths.size() == 0) {
+  throw new IllegalArgumentException("Configuration contains no tmpjars.");
+}
{code}

Will we always get an exception unless we provide tmpjars (which should be 
optional)?
Apologies if I missed something obvious.

 Extend bin/hbase to print a mapreduce classpath
 -

 Key: HBASE-8438
 URL: https://issues.apache.org/jira/browse/HBASE-8438
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.94.6.1, 0.95.0, 0.94.13
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 HBASE-8438-0.94.00.patch, HBASE-8438-0.94.01.patch, HBASE-8438.09.patch


 For tools like pig and hive, blindly appending the full output of `bin/hbase 
 classpath` to their own CLASSPATH is excessive. They already build CLASSPATH 
 entries for hadoop. All they need from us is the delta entries, the 
 dependencies we require w/o hadoop and all of its transitive deps. This is 
 also a kindness for Windows, where there's a shorter limit on the length of 
 commandline arguments.
 See also HIVE-2055 for additional discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8438) Extend bin/hbase to print a mapreduce classpath

2013-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824182#comment-13824182
 ] 

Nick Dimiduk commented on HBASE-8438:
-

When there's no configured tmpjars there's no string to create. If you're 
calling this method you probably wanted some output. I can change it to a 
LOG.warn() and return an empty String if you prefer.
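A quick sketch of that alternative, under the assumption that the method builds a path-separated string from tmpjars; the method and logger names here are illustrative, not the committed code.

{code}
import java.io.File;
import java.util.HashSet;
import java.util.Set;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;

public class TmpJarsSketch {
  private static final Log LOG = LogFactory.getLog(TmpJarsSketch.class);

  // Warn and return an empty classpath instead of throwing when tmpjars is unset.
  static String buildDependencyClasspath(Configuration conf) {
    Set<String> paths = new HashSet<String>(conf.getStringCollection("tmpjars"));
    if (paths.isEmpty()) {
      LOG.warn("Configuration contains no tmpjars; returning an empty classpath.");
      return "";
    }
    StringBuilder sb = new StringBuilder();
    for (String path : paths) {
      if (sb.length() > 0) sb.append(File.pathSeparator);
      sb.append(path);
    }
    return sb.toString();
  }
}
{code}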

 Extend bin/hbase to print a mapreduce classpath
 -

 Key: HBASE-8438
 URL: https://issues.apache.org/jira/browse/HBASE-8438
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.94.6.1, 0.95.0, 0.94.13
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 HBASE-8438-0.94.00.patch, HBASE-8438-0.94.01.patch, HBASE-8438.09.patch


 For tools like pig and hive, blindly appending the full output of `bin/hbase 
 classpath` to their own CLASSPATH is excessive. They already build CLASSPATH 
 entries for hadoop. All they need from us is the delta entries, the 
 dependencies we require w/o hadoop and all of its transitive deps. This is 
 also a kindness for Windows, where there's a shorter limit on the length of 
 commandline arguments.
 See also HIVE-2055 for additional discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8438) Extend bin/hbase to print a mapreduce classpath

2013-11-15 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824223#comment-13824223
 ] 

Enis Soztutar commented on HBASE-8438:
--

This looks good. 

 Extend bin/hbase to print a mapreduce classpath
 -

 Key: HBASE-8438
 URL: https://issues.apache.org/jira/browse/HBASE-8438
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.94.6.1, 0.95.0, 0.94.13
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
 HBASE-8438-0.94.00.patch, HBASE-8438-0.94.01.patch, HBASE-8438.09.patch


 For tools like pig and hive, blindly appending the full output of `bin/hbase 
 classpath` to their own CLASSPATH is excessive. They already build CLASSPATH 
 entries for hadoop. All they need from us is the delta entries, the 
 dependencies we require w/o hadoop and all of its transitive deps. This is 
 also a kindness for Windows, where there's a shorter limit on the length of 
 commandline arguments.
 See also HIVE-2055 for additional discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9908) [WINDOWS] Fix filesystem / classloader related unit tests

2013-11-15 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824204#comment-13824204
 ] 

Enis Soztutar commented on HBASE-9908:
--

Thanks Nick. I've committed the addendum. 

 [WINDOWS] Fix filesystem / classloader related unit tests
 -

 Key: HBASE-9908
 URL: https://issues.apache.org/jira/browse/HBASE-9908
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.1

 Attachments: hbase-9908_v1-addendum.patch, hbase-9908_v1.patch


 Some of the unit tests related to classloading and filesystem are failing on 
 Windows. 
 {code}
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testHBase3810
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromLocalFS
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testPrivateClassLoader
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromRelativeLibDirInJar
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromLibDirInJar
 org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromHDFS
 org.apache.hadoop.hbase.backup.TestHFileArchiving.testCleaningRace
 org.apache.hadoop.hbase.regionserver.wal.TestDurability.testDurability
 org.apache.hadoop.hbase.regionserver.wal.TestHLog.testMaintainOrderWithConcurrentWrites
 org.apache.hadoop.hbase.security.access.TestAccessController.testBulkLoad
 org.apache.hadoop.hbase.regionserver.TestHRegion.testRecoveredEditsReplayCompaction
 org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait.testRecoveredEditsReplayCompaction
 org.apache.hadoop.hbase.util.TestFSUtils.testRenameAndSetModifyTime
 {code}
 The root causes are: 
  - Using local file name for referring to hdfs paths (HBASE-6830)
  - Classloader using the wrong file system 
  - StoreFile readers not being closed (for unfinished compaction)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9961) [WINDOWS] Multicast should bind to local address

2013-11-15 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-9961:
-

  Resolution: Fixed
Release Note: Clients now bind to multicast address configured as 
hbase.status.multicast.bind.address.ip, 0.0.0.0 by default. 
  Status: Resolved  (was: Patch Available)

I've committed this. Thanks Nicolas. 

 [WINDOWS] Multicast should bind to local address
 

 Key: HBASE-9961
 URL: https://issues.apache.org/jira/browse/HBASE-9961
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.1

 Attachments: hbase-9961_v1.patch, hbase-9961_v2.patch


 Binding to a multicast address (such as hbase.status.multicast.address.ip) 
 seems to be the preferred method on most unix systems and linux (2, 3). At 
 least in RedHat, binding to the multicast address might not filter out other 
 traffic coming to the same port but for different multicast groups (2). 
 However, on Windows, you cannot bind to a non-local (class D) address (1), 
 which seems to be correct according to the spec.
 # http://msdn.microsoft.com/en-us/library/ms737550%28v=vs.85%29.aspx
 # https://bugzilla.redhat.com/show_bug.cgi?id=231899
 # 
 http://stackoverflow.com/questions/10692956/what-does-it-mean-to-bind-a-multicast-udp-socket
 # https://issues.jboss.org/browse/JGRP-515
 The solution is to bind to the mcast address on Linux, but to a local address on 
 Windows. 
 TestHCM is also failing because of this. 
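A minimal sketch of the bind choice described above, assuming an OS check on os.name; this is illustrative, not the committed patch.

{code}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.MulticastSocket;

public class MulticastBindSketch {
  static MulticastSocket openStatusSocket(String mcastAddress, String localBind, int port)
      throws Exception {
    InetAddress group = InetAddress.getByName(mcastAddress);
    boolean windows = System.getProperty("os.name").toLowerCase().contains("windows");
    // Linux: bind to the multicast group address to filter out unrelated traffic.
    // Windows: binding to a class D address is rejected, so bind locally (e.g. 0.0.0.0).
    InetSocketAddress bindAddr = windows
        ? new InetSocketAddress(localBind, port)
        : new InetSocketAddress(group, port);
    MulticastSocket socket = new MulticastSocket(bindAddr);
    socket.joinGroup(group);
    return socket;
  }
}
{code}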



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9927) ReplicationLogCleaner#stop() calls HConnectionManager#deleteConnection() unnecessarily

2013-11-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9927:
--

Attachment: 9927.txt

 ReplicationLogCleaner#stop() calls HConnectionManager#deleteConnection() 
 unnecessarily
 --

 Key: HBASE-9927
 URL: https://issues.apache.org/jira/browse/HBASE-9927
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Priority: Minor
 Fix For: 0.94.15

 Attachments: 9927.txt


 When inspecting the log, I found the following:
 {code}
 2013-11-08 18:23:48,472 ERROR [M:0;kiyo:42380.oldLogCleaner] 
 client.HConnectionManager(468): Connection not found in the list, can't 
 delete it (connection key=HConnectionKey{properties={hbase.rpc.timeout=6, 
 hbase.zookeeper.property.clientPort=59832, hbase.client.pause=100, 
 zookeeper.znode.parent=/hbase, hbase.client.retries.number=350, 
 hbase.zookeeper.quorum=localhost}, username='zy'}). May be the key was 
 modified?
 java.lang.Exception
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:468)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:404)
 at 
 org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner.stop(ReplicationLogCleaner.java:141)
 at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.cleanup(CleanerChore.java:276)
 {code}
 The call to HConnectionManager#deleteConnection() is not needed.
 Here is the related code, which has a comment to this effect:
 {code}
 // Not sure why we're deleting a connection that we never acquired or used
 HConnectionManager.deleteConnection(this.getConf());
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-5945) Reduce buffer copies in IPC server response path

2013-11-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5945:
-

Attachment: 5945v4.txt

Fixes the h2 build failure (forgot to change the h2 compat module for the 
metrics change).  Does a little fix up on the end of TestIPC so I can do a bit 
of a benchmark on this patch.

 Reduce buffer copies in IPC server response path
 

 Key: HBASE-5945
 URL: https://issues.apache.org/jira/browse/HBASE-5945
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Affects Versions: 0.95.2
Reporter: Todd Lipcon
Assignee: stack
 Fix For: 0.96.1

 Attachments: 5945-in-progress.2.1.patch, 5945-in-progress.2.patch, 
 5945-in-progress.patch, 5945v2.txt, 5945v4.txt, buffer-copies.txt, 
 even-fewer-copies.txt, hbase-5495.txt


 The new PB code is sloppy with buffers and makes several needless copies. 
 This increases GC time a lot. A few simple changes can cut this back down.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9927) ReplicationLogCleaner#stop() calls HConnectionManager#deleteConnection() unnecessarily

2013-11-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9927:
--

Status: Patch Available  (was: Open)

 ReplicationLogCleaner#stop() calls HConnectionManager#deleteConnection() 
 unnecessarily
 --

 Key: HBASE-9927
 URL: https://issues.apache.org/jira/browse/HBASE-9927
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 0.94.15

 Attachments: 9927.txt


 When inspecting the log, I found the following:
 {code}
 2013-11-08 18:23:48,472 ERROR [M:0;kiyo:42380.oldLogCleaner] 
 client.HConnectionManager(468): Connection not found in the list, can't 
 delete it (connection key=HConnectionKey{properties={hbase.rpc.timeout=6, 
 hbase.zookeeper.property.clientPort=59832, hbase.client.pause=100, 
 zookeeper.znode.parent=/hbase, hbase.client.retries.number=350, 
 hbase.zookeeper.quorum=localhost}, username='zy'}). May be the key was 
 modified?
 java.lang.Exception
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:468)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:404)
 at 
 org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner.stop(ReplicationLogCleaner.java:141)
 at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.cleanup(CleanerChore.java:276)
 {code}
 The call to HConnectionManager#deleteConnection() is not needed.
 Here is the related code, which has a comment to this effect:
 {code}
 // Not sure why we're deleting a connection that we never acquired or used
 HConnectionManager.deleteConnection(this.getConf());
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-5945) Reduce buffer copies in IPC server response path

2013-11-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5945:
-

Attachment: with_patch.png
without_patch.png

Here are results from running the little benchmark at the end of TestIPC in its main.
It sets up the rpc doing a little echo protocol.  The echo copies the cells
it receives onto the response. On the cmdline you say how many cycles and how many 
columns.  I made the Cell size about 10k and ran the test adding 10 Cells per 
iteration, so we are sending back and forth about 100k.  This approximates a 
small to medium-sized multi call.  I cycled 10k times.  Below the test is run 
twice.

With patch, the test finishes a little sooner... about 5-10% sooner.

I ran visualvm over a minute+ against each at about same stage in test.
Without patch we use more CPU and do more GC -- just over 36% CPU vs 33% or so 
and we do a bit more GC'ing... 4.1% or so vs 3.4% or so.  W/o the patch, more 
heap is used.  See pictures.  The patch seems to be an improvement.

WITHOUT PATCH

durruti:hbase.git stack$ for i in 1 2 3 4 5; do time ./bin/hbase 
-Dhbase.defaults.for.version.skip=true org.apache.hadoop.hbase.ipc.TestIPC 
10 10 > /tmp/wopatch.$i.txt; done

real0m42.843s
user0m43.902s
sys 0m17.495s

real0m43.357s
user0m46.050s
sys 0m17.712s

real0m42.595s
user0m44.179s
sys 0m17.448s

real0m43.320s
user0m45.578s
sys 0m17.736s

real0m42.647s
user0m44.845s
sys 0m17.583s

... and again

real0m45.868s
user0m46.522s
sys 0m18.776s

real0m42.764s
user0m44.505s
sys 0m17.447s

real0m43.080s
user0m45.445s
sys 0m17.585s

real0m43.261s
user0m45.246s
sys 0m17.722s

real0m42.592s
user0m44.102s
sys 0m17.333s


WITH PATCH

durruti:hbase.git stack$ for i in 1 2 3 4 5; do time ./bin/hbase 
-Dhbase.defaults.for.version.skip=true org.apache.hadoop.hbase.ipc.TestIPC 
10 10 > /tmp/wpatch.$i.txt; done

real0m38.838s
user0m40.415s
sys 0m18.765s

real0m37.638s
user0m39.246s
sys 0m18.408s

real0m38.696s
user0m40.169s
sys 0m18.700s

real0m37.948s
user0m39.403s
sys 0m18.682s


real0m38.077s
user0m39.519s
sys 0m18.571s

...and again.

real0m43.888s
user0m44.394s
sys 0m21.427s

real0m40.311s
user0m42.553s
sys 0m19.460s

real0m38.489s
user0m41.097s
sys 0m18.761s

real0m38.252s
user0m39.603s
sys 0m18.618s

real0m38.066s
user0m39.656s
sys 0m18.621s

 Reduce buffer copies in IPC server response path
 

 Key: HBASE-5945
 URL: https://issues.apache.org/jira/browse/HBASE-5945
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Affects Versions: 0.95.2
Reporter: Todd Lipcon
Assignee: stack
 Fix For: 0.96.1

 Attachments: 5945-in-progress.2.1.patch, 5945-in-progress.2.patch, 
 5945-in-progress.patch, 5945v2.txt, 5945v4.txt, buffer-copies.txt, 
 even-fewer-copies.txt, hbase-5495.txt, with_patch.png, without_patch.png


 The new PB code is sloppy with buffers and makes several needless copies. 
 This increases GC time a lot. A few simple changes can cut this back down.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-5945) Reduce buffer copies in IPC server response path

2013-11-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5945:
-

Attachment: 5945v4.txt

Upload the patch again so hadoopqa picks this up.

 Reduce buffer copies in IPC server response path
 

 Key: HBASE-5945
 URL: https://issues.apache.org/jira/browse/HBASE-5945
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Affects Versions: 0.95.2
Reporter: Todd Lipcon
Assignee: stack
 Fix For: 0.96.1

 Attachments: 5945-in-progress.2.1.patch, 5945-in-progress.2.patch, 
 5945-in-progress.patch, 5945v2.txt, 5945v4.txt, 5945v4.txt, 
 buffer-copies.txt, even-fewer-copies.txt, hbase-5495.txt, with_patch.png, 
 without_patch.png


 The new PB code is sloppy with buffers and makes several needless copies. 
 This increases GC time a lot. A few simple changes can cut this back down.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9978) The client retries even if the method is not present on the server

2013-11-15 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9978:
---

Attachment: HBASE-9978-v0.patch

 The client retries even if the method is not present on the server
 --

 Key: HBASE-9978
 URL: https://issues.apache.org/jira/browse/HBASE-9978
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9978-v0.patch


 If the RpcServer is not able to find the method, the server throws an 
 UnsupportedOperationException, but since it is not wrapped in a DoNotRetry 
 exception the client keeps retrying even if the operation doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9978) The client retries even if the method is not present on the server

2013-11-15 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-9978:
--

 Summary: The client retries even if the method is not present on 
the server
 Key: HBASE-9978
 URL: https://issues.apache.org/jira/browse/HBASE-9978
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0, 0.98.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.98.0, 0.96.1
 Attachments: HBASE-9978-v0.patch

If the RpcServer is not able to find the method, the server throws an 
UnsupportedOperationException, but since it is not wrapped in a DoNotRetry 
exception the client keeps retrying even if the operation doesn't exist.
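A hypothetical sketch of the fix idea (names invented, not the attached patch): wrap the failure in a DoNotRetryIOException so the client gives up immediately instead of retrying.

{code}
import java.lang.reflect.Method;
import org.apache.hadoop.hbase.DoNotRetryIOException;

public class UnknownMethodSketch {
  // Resolve an RPC method by name; unknown methods are reported as non-retriable.
  static Method resolve(java.util.Map<String, Method> methods, String name)
      throws DoNotRetryIOException {
    Method m = methods.get(name);
    if (m == null) {
      // Before the fix this was a plain UnsupportedOperationException, which the
      // client treated as retriable.
      throw new DoNotRetryIOException("Unknown method " + name);
    }
    return m;
  }
}
{code}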



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9978) The client retries even if the method is not present on the server

2013-11-15 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9978:
---

Status: Patch Available  (was: Open)

 The client retries even if the method is not present on the server
 --

 Key: HBASE-9978
 URL: https://issues.apache.org/jira/browse/HBASE-9978
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0, 0.98.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9978-v0.patch


 If the RpcServer is not able to find the method, the server throws an 
 UnsupportedOperationException, but since it is not wrapped in a DoNotRetry 
 exception the client keeps retrying even if the operation doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9942) hbase Scanner specifications accepting wrong specifier and then after scan using correct specifier returning unexpected result

2013-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824263#comment-13824263
 ] 

stack commented on HBASE-9942:
--

Ruby constants are not 'constant' -- there is no such thing as a constant in 
ruby it seems -- so what is happening here is that the 'constant' STARTROW gets 
assigned the value 'row4'; ditto for STOPROW with its 'row7' or whatever.  We 
then use this 'constant' key doing a lookup in the dictionary of submitted 
values for CF by doing args[STARTROW].  Because the 'constant' is now 'row4' 
rather than 'STARTROW', we find nothing and take the default ''.

In ruby, when you set a 'constant', it gives you a 'warning' but then goes 
ahead and sets it anyways.  Here is an illustration where I set a 'constant' 
using a java constant.

hbase(main):002:0* NAME = org.apache.hadoop.hbase.HConstants::NAME
=> NAME
hbase(main):003:0> NAME = 'xyz'
(hbase):3 warning: already initialized constant NAME
=> xyz

Not sure what to do about this one.  We might have to put it down to the 
joys-of-ruby?  Ideas?

 hbase Scanner specifications accepting wrong specifier and then after scan 
 using correct specifier returning unexpected result 
 ---

 Key: HBASE-9942
 URL: https://issues.apache.org/jira/browse/HBASE-9942
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0, 0.94.13
Reporter: Deepak Sharma
Priority: Minor
 Fix For: 0.96.1


 check the given scenario:
 1. log in to hbase client -- ./hbase shell
 2. created table 'tab1' 
 hbase(main):001:0> create 'tab1' , 'fa1'
 3. put some 10 rows (row1 to row10)  in table 'tab1'
 4.  run the scan for table 'tab1' as follows:
  
 hbase(main):013:0> scan 'tab1' , { STARTROW => 'row4' , STOPROW => 'row9' }
 ROW   COLUMN+CELL 
   

  row4 column=fa1:col1, 
 timestamp=1384164182738, value=value1 
   
  row5 column=fa1:col1, 
 timestamp=1384164188396, value=value1 
   
  row6 column=fa1:col1, 
 timestamp=1384164192395, value=value1 
   
  row7 column=fa1:col1, 
 timestamp=1384164197693, value=value1 
   
  row8 column=fa1:col1, 
 timestamp=1384164203237, value=value1 
   
 5 row(s) in 0.0540 seconds
 so the result was as expected: rows from 'row4' to 'row8' are displayed
 5. then run the scan using the wrong specifier ( '=' instead of '=>' ) and get a 
 wrong result
 hbase(main):014:0> scan 'tab1' , { STARTROW = 'row4' , STOPROW = 'row9' }
 ROW   COLUMN+CELL 
   

  row1 column=fa1:col1, 
 timestamp=1384164167838, value=value1 
   
  row10column=fa1:col1, 
 timestamp=1384164212615, value=value1 
   
  row2 column=fa1:col1, 
 timestamp=1384164175337, value=value1 
   
  row3 column=fa1:col1, 
 timestamp=1384164179068, value=value1 
   
  row4 column=fa1:col1, 
 timestamp=1384164182738, value=value1 
   
  row5 column=fa1:col1, 
 timestamp=1384164188396, value=value1 
   
  row6 column=fa1:col1, 
 timestamp=1384164192395, value=value1 
   
  row7 column=fa1:col1, 
 timestamp=1384164197693, value=value1 
   
  row8 column=fa1:col1, 
 timestamp=1384164203237, value=value1 
   
  

[jira] [Commented] (HBASE-5945) Reduce buffer copies in IPC server response path

2013-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824271#comment-13824271
 ] 

stack commented on HBASE-5945:
--

[~devaraj] Just saw your comment.  Thanks for review.  Rough numbers attached.  
If hadoopqa passes I'll commit.   There is more to be had here I'd say but will 
do in new issue -- then go read-side.  Different approach on read-side I'd say 
--- look at keeping the request off-heap as long as possible.. we'll see.

 Reduce buffer copies in IPC server response path
 

 Key: HBASE-5945
 URL: https://issues.apache.org/jira/browse/HBASE-5945
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Affects Versions: 0.95.2
Reporter: Todd Lipcon
Assignee: stack
 Fix For: 0.96.1

 Attachments: 5945-in-progress.2.1.patch, 5945-in-progress.2.patch, 
 5945-in-progress.patch, 5945v2.txt, 5945v4.txt, 5945v4.txt, 
 buffer-copies.txt, even-fewer-copies.txt, hbase-5495.txt, with_patch.png, 
 without_patch.png


 The new PB code is sloppy with buffers and makes several needless copies. 
 This increases GC time a lot. A few simple changes can cut this back down.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-8332) Add truncate as HMaster method

2013-11-15 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-8332:
---

Status: Patch Available  (was: Open)

 Add truncate as HMaster method
 --

 Key: HBASE-8332
 URL: https://issues.apache.org/jira/browse/HBASE-8332
 Project: HBase
  Issue Type: Improvement
  Components: master
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Attachments: HBASE-8332-v0.patch, HBASE-8332.draft.patch


 Currently truncate and truncate_preserve are only shell functions, 
 implemented as deleteTable() + createTable().
 With ACLs, the user running truncate must have rights to create a table, and 
 only globally granted users can create tables.
 Add truncate() and truncatePreserve() to HBaseAdmin/HMaster with their own ACL 
 check.
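For reference, a sketch of what the shell's truncate effectively does today with the 0.96-era client API (error handling omitted); the point of this JIRA is to move this server-side behind its own ACL check.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class TruncateSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      HTableDescriptor desc = admin.getTableDescriptor("t1".getBytes());
      admin.disableTable("t1");
      admin.deleteTable("t1");
      admin.createTable(desc); // requires table-creation rights under ACLs
    } finally {
      admin.close();
    }
  }
}
{code}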



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-8332) Add truncate as HMaster method

2013-11-15 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-8332:
---

Attachment: HBASE-8332-v0.patch

 Add truncate as HMaster method
 --

 Key: HBASE-8332
 URL: https://issues.apache.org/jira/browse/HBASE-8332
 Project: HBase
  Issue Type: Improvement
  Components: master
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Attachments: HBASE-8332-v0.patch, HBASE-8332.draft.patch


 Currently truncate and truncate_preserve are only shell functions, 
 implemented as deleteTable() + createTable().
 With ACLs, the user running truncate must have rights to create a table, and 
 only globally granted users can create tables.
 Add truncate() and truncatePreserve() to HBaseAdmin/HMaster with their own ACL 
 check.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

