Hi,

(Author of that post, here) -- That article heavily assumes two things for any degree of usability:

* Avatica servers generally "stay running"
* Once a client is routed to a backend server, it keeps getting routed to that same server.

I've done some single-node testing and it actually worked fairly well for the level of effort I put in (download and run HAProxy). It definitely falls over for long-running queries, but, admittedly, Avatica doesn't do a great job in that case anyway (both when fetching the next batch of results takes a long time and when fetching the total result set takes a long time).
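For reference, the setup I tested was roughly the following HAProxy configuration. This is a minimal sketch, not a production config; the hostnames, ports, and server names are placeholders, and source-IP hashing is just one way to get the sticky routing assumed above (cookie-based stickiness would also work):

```
# Minimal HAProxy sketch for sticky routing to Avatica/PQS backends.
# Hostnames/ports below are placeholders.
frontend avatica_front
    bind *:8765
    mode http
    default_backend avatica_back

backend avatica_back
    mode http
    balance source              # hash the client IP so each client keeps
                                # hitting the same backend server
    server pqs1 host1:8765 check
    server pqs2 host2:8765 check
```

With `balance source`, stickiness survives as long as the backend stays up; if a backend dies, clients hashed to it get re-routed and hit the connection-state problems discussed here.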

If either of those two assumptions is violated often, clients will definitely see (potentially significant) increased latency.

On 5/25/17 3:02 PM, Gian Merlino wrote:
Is anyone out there using Avatica with servers (that don't share connection
state) behind load balancers? Is that a workable configuration? I'm
guessing it might be if sticky sessions are enabled on the load balancer.
What does the client do when the session switches to a new backend server?

I found a blog post that talks about some of these issues in the context of
Phoenix:
https://community.hortonworks.com/articles/9377/deploying-the-phoenix-query-server-in-production-e.html

It seems to suggest that the client will retry queries and skip to the most
recently read offset. Is that behavior on by default? This sounds like it
won't work for a database that is accepting new data -- the query results
aren't generally going to be exact matches from run to run just due to new
rows being added. In that case, I'm struggling to think of any better
approach than failing the query and expecting the user to retry if they
want to.

Gian
