[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064534#comment-15064534
 ] 

Hudson commented on HBASE-13907:


FAILURE: Integrated in HBase-Trunk_matrix #567 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/567/])
HBASE-13907 Document how to deploy a coprocessor (mstanleyjones: rev 
f8eab44dcd0d15ed5a4bf039c382f73468709a33)
* src/main/asciidoc/_chapters/cp.adoc


> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907-v3.patch, HBASE-13907-v4.patch, HBASE-13907-v5.patch, 
> HBASE-13907-v6.patch, HBASE-13907.patch
>
>
> Capture this information:
> > Where are the dependencies located for these classes? Is there a path on 
> > HDFS or local disk that dependencies need to be placed so that each 
> > RegionServer has access to them?
> It is suggested to bundle them into a single jar so that the RS can load the whole
> jar and resolve dependencies. If you are not able to do that, you need to place
> the dependencies on the RegionServers' classpath so that they are loaded during
> RS startup. Do either of these options work for you? Btw, you can place the
> coprocessor/filter jars in the path specified by hbase.dynamic.jars.dir [1], so
> that they are loaded dynamically by RegionServers when the class is accessed
> (or you can place them on the RS classpath too, so that they are loaded
> during RS JVM startup). (See the sketch after this quoted description.)
> > How would one deploy these using an automated system? 
> > (puppet/chef/ansible/etc)
> You can probably use these tools to automate shipping the jars to the above
> locations.
> > Tests our developers have done suggest that simply disabling a coprocessor, 
> > replacing the jar with a different version, and enabling the coprocessor 
> > again does not load the newest version. With that in mind how does one know 
> > which version is currently deployed and enabled without resorting to 
> > parsing `hbase shell` output or restarting hbase?
> Actually this is a design issue with the current classloader. You can't reload a
> class in a JVM unless you delete all the current references to it. Since the
> current JVM (classloader) holds a reference to it, you can't overwrite it unless
> you kill the JVM, which is equivalent to restarting it. So you still have the
> older class loaded in place. For this to work, the classloader design would have
> to change. If it works for you, you can rename the coprocessor class in the new
> version of the jar, and the RS will load it properly.
> > Where does logging go, and how does one access it? Does logging need to be 
> > configured in a certain way?
> Can you please specify which logging you are referring to?
> > Where is a good location to place configuration files?
> Same as above, are these hbase configs or something else? If hbase configs, 
> are these gateway configs/server side? 
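To make the dynamic-loading option above concrete, here is a minimal sketch against the HBase 1.x client API; the table name (`user`), observer class, and HDFS jar path are hypothetical placeholders, and the arguments mirror the `arg1=1,arg2=2` example discussed further down in this thread.

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LoadCoprocessorExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      TableName tableName = TableName.valueOf("user");   // hypothetical table
      HTableDescriptor htd = admin.getTableDescriptor(tableName);

      // Arguments the coprocessor can read back from its environment configuration.
      Map<String, String> kvs = new HashMap<>();
      kvs.put("arg1", "1");
      kvs.put("arg2", "2");

      // Attach an observer whose jar lives on HDFS; RegionServers fetch and load
      // the jar dynamically, so it does not need to be on their startup classpath.
      htd.addCoprocessor("org.example.AttendanceObserver",   // hypothetical class
          new Path("hdfs:///hbase/coprocessors/attendance-observer-1.0.jar"),
          Coprocessor.PRIORITY_USER, kvs);

      admin.disableTable(tableName);
      admin.modifyTable(tableName, htd);
      admin.enableTable(tableName);
    }
  }
}
{code}

The same attachment can be made from the HBase shell with `alter`; the Java form is shown only to make explicit that the jar is fetched from HDFS at load time rather than from the RegionServer classpath.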



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-12-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064084#comment-15064084
 ] 

Sean Busbey commented on HBASE-13907:
-

+1

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907-v3.patch, HBASE-13907-v4.patch, HBASE-13907-v5.patch, 
> HBASE-13907-v6.patch, HBASE-13907.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061365#comment-15061365
 ] 

Hadoop QA commented on HBASE-13907:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12778127/HBASE-13907-v6.patch
  against master branch at commit 3dec8582f527e4e7e35280cd34c4928648290658.
  ATTACHMENT ID: 12778127

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:

+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.html[BaseRegionObserver],
+or it should implement the link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/Coprocessor.html[Coprocessor]
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorService.html[CoprocessorService]
+link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/coprocessor/package-summary.html[coprocessor]
+such as link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#prePut%28org.apache.hadoop.hbase.coprocessor.ObserverContext,%20org.apache.hadoop.hbase.client.Put,%20org.apache.hadoop.hbase.regionserver.wal.WALEdit,%20org.apache.hadoop.hbase.client.Durability%29[`prePut`]. Observers that happen just after an event override methods that start
+with a `post` prefix, such as link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#postPut%28org.apache.hadoop.hbase.coprocessor.ObserverContext,%20org.apache.hadoop.hbase.client.Put,%20org.apache.hadoop.hbase.regionserver.wal.WALEdit,%20org.apache.hadoop.hbase.client.Durability%29[`postPut`].
+  a coprocessor to use the `prePut` method on `user` to insert a record into `user_daily_attendance`.
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver].
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.html[BaseRegionObserver],
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionServerObserver.html[RegionServerObserver].

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16892//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16892//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16892//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16892//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16892//console

This message is automatically generated.

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907-v3.patch, HBASE-13907-v4.patch, HBASE-13907-v5.patch, 
> HBASE-13907-v6.patch, HBASE-13907.patch
>
>

[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-12-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061011#comment-15061011
 ] 

Sean Busbey commented on HBASE-13907:
-

{code}
+[WARNING]
+.Use Coprocessors At Your Own Risk
+
+Coprocessors are an advanced feature of HBase and are intended to be used by system
+developers only. Because coprocessor code runs directly on the RegionServer and has
+direct access to your data, they introduce the risk of data corruption, man-in-the-middle
+attacks, or other malicious data access. Currently, there is no mechanism to prevent
+data corruption by coprocessors, though work is underway on
+link:https://issues.apache.org/jira/browse/HBASE-4047[HBASE-4047].
+
{code}

I'd include a note about how there's also no resource isolation, so a totally
well-intentioned but misbehaving coprocessor can severely degrade cluster
performance and stability.

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907-v3.patch, HBASE-13907-v4.patch, HBASE-13907-v5.patch, 
> HBASE-13907.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-08-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694599#comment-14694599
 ] 

Hadoop QA commented on HBASE-13907:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12750189/HBASE-13907-v4.patch
  against master branch at commit 5e5bcceb533e6e4ded65bc778f05da213c07b688.
  ATTACHMENT ID: 12750189

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15075//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15075//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15075//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15075//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15075//console

This message is automatically generated.

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907-v3.patch, HBASE-13907-v4.patch, HBASE-13907.patch
>
>

[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-08-12 Thread Lars George (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693552#comment-14693552
 ] 

Lars George commented on HBASE-13907:
-

[~misty] So the open question #3 is: what if a CP was loaded using the site
file, with custom config values for it, and then an admin tries to override
them in the CLI? How would that be possible, since the CP is already loaded? I
tried it once and issued the same CP load command on the CLI, which added the
same CP twice, just with a higher ID. So, if you have loaded a system CP using
the site XML file, you can load another one on the CLI, same class, but it will
have lower priority.
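One way to see what is actually attached to a given table, without parsing shell output, is to read the table descriptor. Here is a small sketch against the HBase 1.x client API (the table name `user` is a placeholder); note that coprocessors loaded globally via `hbase-site.xml` are not recorded in the table descriptor, which is part of why the site-file and shell paths behave differently.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListTableCoprocessors {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // "user" is a hypothetical table name.
      HTableDescriptor htd = admin.getTableDescriptor(TableName.valueOf("user"));
      // Prints the coprocessor class names recorded as attributes of this table.
      for (String cpClass : htd.getCoprocessors()) {
        System.out.println(cpClass);
      }
    }
  }
}
{code}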

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907-v3.patch, HBASE-13907.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681411#comment-14681411
 ] 

Hadoop QA commented on HBASE-13907:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749764/HBASE-13907-v3.patch
  against master branch at commit 3d5801602da7cde1f20bdd4b898e8b3cac77f2a3.
  ATTACHMENT ID: 12749764

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
  org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
  org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
  org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters
  org.apache.hadoop.hbase.client.TestFromClientSide3
  org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient
  org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
  org.apache.hadoop.hbase.TestIOFencing
  org.apache.hadoop.hbase.wal.TestWALSplitCompressed
  org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence
  org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
  org.apache.hadoop.hbase.client.TestAdmin2
  org.apache.hadoop.hbase.client.TestFromClientSide
  org.apache.hadoop.hbase.client.TestReplicaWithCluster
  org.apache.hadoop.hbase.master.TestDistributedLogSplitting
  org.apache.hadoop.hbase.client.TestClientPushback

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):
  at org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat2.testWritingPEData(TestHFileOutputFormat2.java:335)
  at org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:384)
  at org.apache.hadoop.hbase.mapreduce.TestCellCounter.testCellCounterForCompleteTable(TestCellCounter.java:299)
  at org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat.testWithMapReduceImpl(TestTableSnapshotInputFormat.java:247)
  at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatTestBase.testWithMapReduce(TableSnapshotInputFormatTestBase.java:112)
  at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatTestBase.testWithMapReduceSingleRegion(TableSnapshotInputFormatTestBase.java:91)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15038//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15038//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15038//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15038//console

This message is automatically generated.

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907-v3.patch, HBASE-13907.patch
>
>

[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-08-10 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681065#comment-14681065
 ] 

Andrew Purtell commented on HBASE-13907:


bq. Folks who are running coprocessors are already kind of down a dark path, 
but should we encourage the cleaner "rolling restart of region servers" to 
reduce the chances we have to debug classloader pain?

+1, I'd recommend this, and leave out language related to alternatives

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-08-09 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679499#comment-14679499
 ] 

Sean Busbey commented on HBASE-13907:
-

#1 and #2 sound good to me.

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-08-09 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679382#comment-14679382
 ] 

Misty Stanley-Jones commented on HBASE-13907:
-

For point #1, I do not know the answer. I can reiterate that it's not a clean 
approach and is not recommended.

For point #2, I think I will just take out the second sentence altogether.

For point #3, I do not know the answer and need input from someone who knows 
more (maybe [~larsgeorge]?)

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-07-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14625740#comment-14625740
 ] 

Sean Busbey commented on HBASE-13907:
-

{code}
+ ... This behavior
+  is not expected to change. Alternately, you can include the coprocessor version in its
+  classname. This way you can unload the old coprocessor and load the new one,
+  without restarting the RegionServer. However, the old version will remain in memory,
+  so this solution is a temporary one, and restarting will eventually be necessary.
{code}

This currently works? Are we still loading additional jars at runtime for
Coprocessors? Folks who are running coprocessors are already kind of down a
dark path, but should we encourage the cleaner "rolling restart of region
servers" to reduce the chances we have to debug classloader pain?

{code}
+Coprocessor Logging::
+  The Coprocessor framework does not provide an API for logging beyond standard Java
+  logging. You can log to the location specified in the system property `HBASE_LOG_DIR`,
+  which is set in the RegionServer's `hbase-env.sh` file.
{code}

Won't using commons-logging or slf4j work? What do we gain by having them
attempt to log directly to this directory instead of just using the same
logging framework we use?

{code}
+Coprocessor Configuration::
+  If you do not want to load coprocessors from the HBase Shell, you can add their configuration
+  properties to `hbase-site.xml`. In <>, two arguments are
+  set: `arg1=1,arg2=2`. These could have been added to `hbase-site.xml` as follows:
{code}

Just for my own clarity: if they use the hbase-site.xml approach, then admins
loading the coprocessor in the shell won't be able to override the arguments,
right? We should mention it either way.
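On the logging and configuration questions, here is a minimal, hypothetical observer sketch against the HBase 1.x coprocessor API (the class name and the `arg1`/`arg2` property names are placeholders). It simply uses commons-logging, so its output lands in the ordinary RegionServer log, and it reads its arguments from the environment configuration it is started with, which is where per-coprocessor properties passed at load time are expected to show up.

{code}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical observer, for illustration only. It logs through commons-logging,
// so its output ends up in the normal RegionServer log, and it reads the arg1/arg2
// properties that were supplied when the coprocessor was attached.
public class AttendanceObserver extends BaseRegionObserver {

  private static final Log LOG = LogFactory.getLog(AttendanceObserver.class);

  private String arg1;
  private String arg2;

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    // Per-coprocessor properties passed at load time are expected to be visible
    // in the environment configuration.
    arg1 = env.getConfiguration().get("arg1");
    arg2 = env.getConfiguration().get("arg2");
    LOG.info("AttendanceObserver started with arg1=" + arg1 + ", arg2=" + arg2);
  }

  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx, Put put,
      WALEdit edit, Durability durability) throws IOException {
    // Runs just before each Put on the observed table.
    LOG.debug("prePut for row " + Bytes.toStringBinary(put.getRow()));
  }
}
{code}

Because `env.getConfiguration()` is derived from the RegionServer configuration, properties set in `hbase-site.xml` should be visible through the same `get()` calls, which is consistent with the point above that shell-time overrides of an already-loaded coprocessor are awkward.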

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14589191#comment-14589191
 ] 

Hadoop QA commented on HBASE-13907:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12739985/HBASE-13907-2.patch
  against master branch at commit f469c3bd97d1465534c52dc22934633bdddf189a.
  ATTACHMENT ID: 12739985

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14440//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14440//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14440//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14440//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14440//console

This message is automatically generated.

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, 
> HBASE-13907.patch
>
>

[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14588862#comment-14588862
 ] 

Andrew Purtell commented on HBASE-13907:


bq. Actually this is a design issue with current classloader. 

It's functioning as designed because coprocessors are designed not to be 
reloadable.

It would be a mistake to give users the impression this will change or could 
change, until we have decided to and then implemented full classloader 
isolation for table coprocessors.

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14587847#comment-14587847
 ] 

Hadoop QA commented on HBASE-13907:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12739791/HBASE-13907-1.patch
  against master branch at commit a10a82a8ff2babefbfafe7c323d88eb85f2be52c.
  ATTACHMENT ID: 12739791

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14430//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14430//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14430//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14430//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14430//console

This message is automatically generated.

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907.patch
>
>

[jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor

2015-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14587561#comment-14587561
 ] 

Hadoop QA commented on HBASE-13907:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12739770/HBASE-13907.patch
  against master branch at commit b16293b5e2d19b5d5d22549a111cda3f1e6ad55a.
  ATTACHMENT ID: 12739770

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14427//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14427//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14427//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14427//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14427//console

This message is automatically generated.

> Document how to deploy a coprocessor
> 
>
> Key: HBASE-13907
> URL: https://issues.apache.org/jira/browse/HBASE-13907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-13907-1.patch, HBASE-13907.patch
>
>