This was, in large part, how I found the approach I am using:

https://github.com/envoyproxy/envoy/issues/4897

On Monday, February 22, 2021 at 11:41:28 AM UTC-5 Rob Cecil wrote:

> I am aware of maglev as a potential alternative lb_policy. Note also that 
> there are problems if the cluster configuration is changed while there are 
> outstanding connections through Envoy: any hash-based policy only keeps a 
> client pinned to the same host while the set of upstream hosts stays stable.
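>
> For anyone who wants to try that route: maglev is a one-line swap in the 
> cluster definition shown further down, and the same route-level hash_policy 
> applies. A rough, untested sketch:
>
>   clusters:
>   - name: controlweb_backendservice
>     # MAGLEV is the enum name used in the Envoy docs; everything else in the
>     # cluster (and the x-session-hash hash_policy on the route) stays the same.
>     lb_policy: MAGLEV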
>
> On Monday, February 22, 2021 at 11:40:36 AM UTC-5 Rob Cecil wrote:
>
>> Just some follow-up, in case anyone else is looking for solutions.
>>
>> There are no obvious usage scenarios documented, but I found a solution 
>> using an lb_policy of "ring_hash" where the client provides a header that is 
>> unique to the client. Just generate an identifier that stays stable for the 
>> lifetime of the client (browser) and inject it as a header (here 
>> "x-session-hash", but it could be any name) into every request.
>>
>> I'm using nanoid to generate unique strings in JavaScript. I simply 
>> generate one and store it in local storage; a rough sketch of the client 
>> side follows.
>>
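>> On the browser side it looks roughly like this. The generated client class, 
>> request type and method name below are placeholders for whatever your .proto 
>> produces; the metadata object on each call is the part that matters:
>>
>> import { nanoid } from 'nanoid';
>> // import { EchoServiceClient, EchoRequest } from './echo_grpc_web_pb'; // generated stubs (placeholder path)
>>
>> // Generate the identifier once and reuse it from local storage so it stays
>> // stable for the lifetime of this browser.
>> function getSessionHash() {
>>   let id = localStorage.getItem('x-session-hash');
>>   if (!id) {
>>     id = nanoid();
>>     localStorage.setItem('x-session-hash', id);
>>   }
>>   return id;
>> }
>>
>> const client = new EchoServiceClient('http://localhost:8080');
>> const request = new EchoRequest();
>> client.echo(request, { 'x-session-hash': getSessionHash() }, (err, response) => {
>>   // Calls carrying the same x-session-hash hash to the same upstream host.
>> });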
>> Here's the relevant .yaml for an example configuration that defines a 
>> two-host upstream cluster:
>>
>> admin:
>>   access_log_path: /tmp/admin_access.log
>>   address:
>>     socket_address: { address: 0.0.0.0, port_value: 9901 }
>>
>> static_resources:
>>   listeners:
>>   - name: listener_0
>>     address:
>>       socket_address: { address: 0.0.0.0, port_value: 8080 }
>>     filter_chains:
>>     - filters:
>>       - name: envoy.filters.network.http_connection_manager
>>         typed_config:
>>           "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
>>           codec_type: auto
>>           stat_prefix: ingress_http
>>           route_config:
>>             name: local_route
>>             virtual_hosts:
>>             - name: local_service
>>               domains: ["*"]
>>               routes:
>>               - match: { prefix: "/" }
>>                 route:
>>                   cluster: controlweb_backendservice
>>                   # hash each request on the client-supplied header so a
>>                   # given client consistently maps to the same upstream host
>>                   hash_policy:
>>                     - header:
>>                         header_name: x-session-hash
>>                   max_stream_duration:
>>                     grpc_timeout_header_max: 0s
>>               cors:
>>                 allow_origin_string_match:
>>                 - prefix: "*"
>>                 allow_methods: GET, PUT, DELETE, POST, OPTIONS
>>                 # x-session-hash is included here so CORS preflights permit the custom header
>>                 allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,access-token,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout,x-session-hash
>>                 max_age: "1728000"
>>                 expose_headers: access-token,grpc-status,grpc-message
>>           http_filters:
>>           - name: envoy.filters.http.grpc_web
>>           - name: envoy.filters.http.cors
>>           - name: envoy.filters.http.router
>>   clusters:
>>   - name: controlweb_backendservice
>>     connect_timeout: 0.25s
>>     type: strict_dns
>>     http2_protocol_options: {}
>>     # ring_hash pairs with the route-level hash_policy above to give session affinity
>>     lb_policy: ring_hash
>>     load_assignment:
>>       cluster_name: cluster_0
>>       endpoints:
>>         - lb_endpoints:
>>             - endpoint:
>>                 address:
>>                   socket_address:
>>                     address: 172.16.0.219
>>                     port_value: 50251
>>               load_balancing_weight: 10
>>             - endpoint:
>>                 address:
>>                   socket_address:
>>                     address: 172.16.0.132
>>                     port_value: 50251
>>               load_balancing_weight: 1
>>
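>> Since the original question also mentioned source IP: the same ring_hash 
>> cluster can stick on the downstream client address instead of a header by 
>> changing the route's hash_policy, roughly (untested):
>>
>>                   hash_policy:
>>                     - connection_properties:
>>                         source_ip: true
>>
>> Keep in mind that clients behind the same NAT or proxy will then share a 
>> hash and land on the same host.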
>> On Sunday, February 14, 2021 at 12:43:42 PM UTC-5 Rob Cecil wrote:
>>
>>> Looking for an example of an Envoy configuration that implements session 
>>> affinity (stickiness) to load balance a cluster of backend servers.  Thanks!
>>>
>>> I'm open to using the source IP or something in the header, but probably 
>>> not a cookie.
>>>
>>> Thanks
>>>
>>
