[ https://issues.apache.org/jira/browse/PHOENIX-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16049644#comment-16049644 ]
ASF GitHub Bot commented on PHOENIX-3654:
-----------------------------------------
Github user rahulsIOT commented on a diff in the pull request:
https://github.com/apache/phoenix/pull/236#discussion_r122060006
--- Diff: phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java ---
@@ -233,16 +240,29 @@ public int run(String[] args) throws Exception {
       // Build and start the HttpServer
       server = builder.build();
       server.start();
+      registerToServiceProvider(hostname);
       runningLatch.countDown();
       server.join();
       return 0;
     } catch (Throwable t) {
       LOG.fatal("Unrecoverable service error. Shutting down.", t);
       this.t = t;
       return -1;
+    } finally {
+      deRegister();
     }
   }
+  private void registerToServiceProvider(String hostName) throws Exception {
+    PqsZookeeperConf pqsZookeeperConf = new PqsZookeeperConfImpl(getConf());
--- End diff ---
The implementation of the service loader is simple, but I think I am stuck at
the point of resolving dependencies.
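For reference, a minimal java.util.ServiceLoader lookup could look like the sketch below. The PqsZookeeperConf and PqsZookeeperConfImpl names come from the diff above; the setConf() call and the META-INF/services provider-file wiring are assumptions, not the final API.

    // Sketch only: locate a PqsZookeeperConf implementation via ServiceLoader.
    // Assumes a META-INF/services provider file on the classpath that names
    // PqsZookeeperConfImpl, a public no-argument constructor, and a setConf()
    // setter (all assumptions; the real interface may differ).
    import java.util.ServiceLoader;
    import org.apache.hadoop.conf.Configuration;

    public final class PqsZookeeperConfLoader {
      private PqsZookeeperConfLoader() {}

      public static PqsZookeeperConf load(Configuration conf) {
        for (PqsZookeeperConf candidate : ServiceLoader.load(PqsZookeeperConf.class)) {
          candidate.setConf(conf);  // assumed setter
          return candidate;         // first provider found wins
        }
        throw new IllegalStateException("No PqsZookeeperConf implementation on the classpath");
      }
    }

Note that ServiceLoader requires a public no-argument constructor on the provider, which is why the configuration is injected afterwards here rather than through the constructor used in the diff.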
> Load Balancer for thin client
> -----------------------------
>
> Key: PHOENIX-3654
> URL: https://issues.apache.org/jira/browse/PHOENIX-3654
> Project: Phoenix
> Issue Type: New Feature
> Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
> Reporter: Rahul Shrivastava
> Assignee: Rahul Shrivastava
> Fix For: 4.9.0
>
> Attachments: LoadBalancerDesign.pdf, Loadbalancer.patch
>
> Original Estimate: 240h
> Remaining Estimate: 240h
>
> We have been having an internal discussion about a load balancer for the PQS
> thin client. The general consensus is to embed the load balancer in the thin
> client rather than use an external load balancer such as HAProxy. The idea is
> not to add another layer between the client and PQS; that extra layer adds
> operational cost to the system, which currently leads to delays in executing
> projects.
> This does, however, come with the challenge of building an embedded load
> balancer that can maintain sticky sessions and do fair load balancing with
> knowledge of the load downstream on each PQS server. In addition, the load
> balancer needs to know the locations of the multiple PQS servers, so the thin
> client has to keep track of PQS servers via ZooKeeper (or other means).
> In the new design it is proposed that the client (the PQS client) have an
> embedded load balancer.
> Where will the load balancer sit?
> The load balancer will be embedded within the app server client.
> How will the load balancer work?
> The load balancer will contact ZooKeeper to get the locations of the PQS
> servers; for this, PQS needs to register itself with ZooKeeper once it comes
> online. The ZooKeeper location is taken from hbase-site.xml. The load
> balancer will maintain a small cache of connections to PQS, and when a
> request comes in it will pick an open connection from the cache.
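> As a rough illustration (the znode path /phoenix/queryserver and the class
> and method names below are placeholders, not a final API), the client-side
> discovery could look like this:
>
>     // Sketch only: the thin client lists registered PQS servers from
>     // ZooKeeper and re-reads the list whenever the membership changes.
>     import java.util.List;
>     import org.apache.zookeeper.WatchedEvent;
>     import org.apache.zookeeper.Watcher;
>     import org.apache.zookeeper.ZooKeeper;
>
>     public class PqsLocator implements Watcher {
>       private final ZooKeeper zk;
>       private volatile List<String> servers;  // entries like "host:8765"
>
>       public PqsLocator(String zkQuorum) throws Exception {
>         // zkQuorum would come from hbase.zookeeper.quorum in hbase-site.xml
>         this.zk = new ZooKeeper(zkQuorum, 30000, this);
>         refresh();
>       }
>
>       private void refresh() throws Exception {
>         // watch=true re-arms the watcher so future membership changes fire
>         this.servers = zk.getChildren("/phoenix/queryserver", true);
>       }
>
>       public List<String> getServers() {
>         return servers;
>       }
>
>       @Override
>       public void process(WatchedEvent event) {
>         if (event.getType() == Event.EventType.NodeChildrenChanged) {
>           try {
>             refresh();
>           } catch (Exception e) {
>             // keep the stale list; the next request can retry
>           }
>         }
>       }
>     }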
> How will the load balancer know the load on PQS?
> To start with, it will pick a random open connection to PQS, which means the
> load balancer does not know the PQS load. Later, we can augment the code so
> that the thin client receives load information from PQS and makes more
> intelligent decisions.
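> A uniform random pick over the cached list is then trivial, e.g. (an
> illustrative helper only, not part of the patch):
>
>     // Sketch only: choose a random PQS endpoint from the cached list.
>     import java.util.List;
>     import java.util.concurrent.ThreadLocalRandom;
>
>     public final class RandomPqsPicker {
>       public static String pick(List<String> servers) {
>         if (servers == null || servers.isEmpty()) {
>           throw new IllegalStateException("No PQS servers registered");
>         }
>         return servers.get(ThreadLocalRandom.current().nextInt(servers.size()));
>       }
>     }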
> How will the load balancer maintain sticky sessions?
> We still need to investigate how to implement sticky sessions; we can look
> for an existing open-source implementation to reuse.
> How will PQS register itself with the service locator?
> PQS will have the ZooKeeper location from hbase-site.xml and will register
> itself with ZooKeeper. The thin client will then find the PQS locations
> through ZooKeeper.
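> A minimal sketch of that registration, using a plain ZooKeeper ephemeral
> znode (the path and node naming below are placeholders, not the final
> design):
>
>     // Sketch only: PQS registers itself as an ephemeral znode so the entry
>     // is removed automatically when the server's ZooKeeper session expires.
>     import org.apache.zookeeper.CreateMode;
>     import org.apache.zookeeper.KeeperException;
>     import org.apache.zookeeper.ZooDefs;
>     import org.apache.zookeeper.ZooKeeper;
>
>     public class PqsRegistration {
>       public static void register(ZooKeeper zk, String host, int port) throws Exception {
>         String parent = "/phoenix/queryserver";
>         try {
>           // Persistent parent node shared by all PQS instances.
>           zk.create(parent, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
>         } catch (KeeperException.NodeExistsException ignored) {
>           // already created by another instance
>         }
>         // Ephemeral child per PQS instance, e.g. /phoenix/queryserver/host:8765
>         zk.create(parent + "/" + host + ":" + port, new byte[0],
>             ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
>       }
>     }
>
> Because the znode is ephemeral, an explicit deRegister() (as in the diff
> above) mostly gives a fast, clean shutdown path; ZooKeeper would remove the
> node anyway once the session expires.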
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)