[ https://issues.apache.org/jira/browse/HBASE-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672228#comment-13672228 ]

James Taylor commented on HBASE-8400:
-------------------------------------

We're trying to ease the installation of Phoenix 
(https://github.com/forcedotcom/phoenix) and this JIRA looks like it may help. 
Is it actively being worked on?

Currently we require that the Phoenix jar be copied into the HBase lib dir of 
every region server, followed by a restart. For some background, Phoenix uses 
both coprocessors and custom filters. These are just the tip of the iceberg in 
our SQL layer, so to speak: there's a lot of shared, foundational Phoenix code 
that comes along with those coprocessors and filters - our type system, 
expression evaluation, schema interpretation, throttling code, memory 
management, etc. So when we say we'd like to upgrade our coprocessors and 
custom filters to a new version, that means all the foundational classes 
underneath them have changed as well.
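
For context, here's roughly what wiring a coprocessor plus its jar into a table 
looks like today via the table descriptor. The table name, class name, and jar 
path below are made up, purely to show where the jar URI lives:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class AttachCoprocessor {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // Fetch the current descriptor for the (hypothetical) table.
    HTableDescriptor desc = admin.getTableDescriptor(Bytes.toBytes("my_table"));
    // Register the coprocessor class and the jar that carries it
    // (along with all the shared classes it drags in).
    desc.addCoprocessor("com.example.MyRegionObserver",
        new Path("hdfs:///hbase/lib/my-coproc.jar"),
        Coprocessor.PRIORITY_USER,
        null);
    admin.disableTable("my_table");
    admin.modifyTable(Bytes.toBytes("my_table"), desc);
    admin.enableTable("my_table");
    admin.close();
  }
}

Every upgrade of that jar today means re-copying it into each region server's 
lib dir and restarting, which is the pain point.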

If we use the feature provided by HBASE-1936, we're not sure we're easing the 
burden on our users, since users will still need to (sketched below):
1) update hbase-site.xml on each region server to set the 
hbase.dynamic.jar.dir path for the jar
2) copy the phoenix jar to HDFS
3) make a symlink to the new phoenix jar
4) get a rolling restart done on the cluster
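
Concretely - and this is just my reading of HBASE-1936, so treat the property 
name, paths, and commands below as assumptions on my part - steps (1) and (2) 
would look something like:

<!-- hbase-site.xml on each region server (step 1) -->
<property>
  <name>hbase.dynamic.jar.dir</name>
  <value>hdfs://namenode:8020/hbase/lib</value>
</property>

# step 2: push the phoenix jar to that directory
hadoop fs -mkdir /hbase/lib
hadoop fs -copyFromLocal phoenix-1.2.jar /hbase/lib/

The symlink (3) and the rolling restart (4) would still be manual work on top 
of that.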

My fear is that (1) would be error-prone and difficult for a user to convince 
their admin to do, and that for (2) and (3) the user wouldn't have the 
necessary permissions. As for (4), we'll probably just have to live with it, 
but in a utopia the new jar would simply be picked up for new 
coprocessor/filter invocations.

My question: how close can we come to automating all of this to the point where 
we could have a phoenix install script that looks like this:

hbase install phoenix-1.2.jar

Would this JIRA get us there? Any other missing pieces? We'd be happy to be a 
guinea pig/test case for how to solve this issue from an application/platform 
standpoint.

Thanks!
                
> Load coprocessor jar from various protocols (HTTP, HTTPS, FTP, etc.)
> --------------------------------------------------------------------
>
>                 Key: HBASE-8400
>                 URL: https://issues.apache.org/jira/browse/HBASE-8400
>             Project: HBase
>          Issue Type: Improvement
>          Components: Coprocessors
>    Affects Versions: 0.94.3, 0.98.0
>            Reporter: Julian Zhou
>            Assignee: Julian Zhou
>            Priority: Minor
>             Fix For: 0.98.0, 0.95.1, 0.94.9
>
>
> In some application testing and production environments, after coprocessors 
> are developed and their jars generated, we currently need to transfer the 
> jars to HDFS first and then specify a URI in the table descriptor that points 
> to an HDFS-compatible address for each jar. Commonly used protocols such as 
> HTTP, HTTPS, and FTP are not supported yet. To save time and make life easier 
> by avoiding that extra transfer step, this change modifies CoprocessorHost to 
> use an HTTP/FTP URL connection (HTTP and FTP servers being the most common 
> cases that need support) to stream jars into the coprocessor jar space 
> automatically.

