Hi,
This is my configuration, based on the documentation and the sample configuration.
I would appreciate it if you could answer my questions:
```
probe myprobe {
    .request =
        "HEAD / HTTP/1.1"
        "Connection: close"
        "User-Agent: Varnish Health Probe";
    .timeout = 1s;
    .interval = 5s;
    .window = 5;
    .threshold = 3;
}

backend B1 { .host = "B1.mydomain.local"; .port = "80"; .probe = myprobe; }
backend B2 { .host = "B2.mydomain.local"; .port = "80"; .probe = myprobe; }
backend B3 { .host = "B3.mydomain.local"; .port = "80"; .probe = myprobe; }
backend B4 { .host = "B4.mydomain.local"; .port = "80"; .probe = myprobe; }
backend B5 { .host = "B5.mydomain.local"; .port = "80"; .probe = myprobe; }

sub vcl_init {
    new hls_cluster = directors.shard();
    hls_cluster.add_backend(B1);
    hls_cluster.add_backend(B2);
    hls_cluster.add_backend(B3);
    hls_cluster.add_backend(B4);
    hls_cluster.add_backend(B5);
    new p = directors.shard_param();
    hls_cluster.reconfigure(p, replicas=25);
    hls_cluster.associate(p.use());
}

sub vcl_backend_fetch {
    p.set(by=KEY, key=bereq.url);
    set bereq.backend_hint = hls_cluster.backend(resolve=LAZY, healthy=ALL);
}
```
1. First of all, I think the following line is not right. Do we need to define
both the shard parameter (p) and replicas in the reconfigure call, i.e.
"hls_cluster.reconfigure(p, replicas=25);", or just
"hls_cluster.reconfigure(replicas=25);"?
2. What does "replicas=25" mean in the sample configuration?
Doc says:
<ident><n> (default ident being the backend name) for each backend and for a
running number n from 1 to replicas
This is what I figured out: Varnish will randomly choose a number for a backend
from 1 to the replicas value, combine it with the backend's name, and then hash
the result onto the circular ring. But when we have backends with different
names, this hash will be different for each backend, because the names differ!
Why is this necessary?
https://varnish-cache.org/docs/6.2/reference/vmod_directors.html#bool-xshard-reconfigure-int-replicas-67
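To make my question concrete, here is how I currently picture the ring being
built, as a rough Python sketch of the <ident><n> scheme from the doc (sha256
is just standing in for whatever hash Varnish actually uses; the function
names are mine, not Varnish's):

```python
import hashlib
from bisect import bisect

def build_ring(backends, replicas=25):
    # One point per (backend, n) pair for n = 1..replicas, hashing
    # "<ident><n>" with ident defaulting to the backend name, as the doc says.
    points = []
    for name in backends:
        for n in range(1, replicas + 1):
            h = int(hashlib.sha256(f"{name}{n}".encode()).hexdigest(), 16)
            points.append((h, name))
    points.sort()
    return points

def pick_backend(ring, key):
    # Hash the key (e.g. bereq.url) and take the next ring point clockwise.
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return ring[bisect(ring, (h,)) % len(ring)][1]

ring = build_ring(["B1", "B2", "B3", "B4", "B5"])
# The same URL always maps to the same backend.
assert pick_backend(ring, "/hls/seg1.ts") == pick_backend(ring, "/hls/seg1.ts")
```

If I understand it right, the different hashes per backend name are exactly the
point: they scatter each backend's replicas points around the ring so the key
space is split into many small arcs and load spreads evenly.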
3. In the shard.backend(...) section, about "resolve=LAZY":
I couldn't understand what LAZY resolution means.
DOC:
```
LAZY: return an instance of this director for later backend resolution.
LAZY mode is required for referencing shard director instances, for example as
backends for other directors (director layering).
```
https://varnish-cache.org/docs/6.1/reference/vmod_directors.generated.html#shard-backend
4. For returning a healthy backend, besides defining the probes as above,
should I configure healthy=ALL as follows?
```
set bereq.backend_hint = hls_cluster.backend(resolve=LAZY, healthy=ALL);
```
DOC:
```
ALL: Check health state also for alternative backend selection
```
5. About rampup and warmup:
rampup: My understanding is that if a backend goes down and becomes healthy
again, and we have defined a rampup period for it, Varnish will wait until this
period has passed before sending requests to that backend; for that fraction of
time it will return an alternative backend.
warmup: for a chosen backend for a specific key, it will spread requests
between two backends (the original backend and its alternative, if we define
0.5 for warmup).
Please correct me if I said anything wrong. I would appreciate it if you could
explain the functionality of these two parameters.
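To show my mental model of the two parameters, here is a Python sketch (the
function names and the linear ramp are my assumptions about the behaviour, not
Varnish's actual code):

```python
import random

def rampup_fraction(healthy_for, rampup):
    # Assumed model: a backend that has been healthy again for `healthy_for`
    # seconds receives a linearly growing share of its keys over the
    # `rampup` period; the remainder go to the alternative backend.
    if rampup <= 0:
        return 1.0
    return min(1.0, healthy_for / rampup)

def pick_with_warmup(primary, alternative, warmup):
    # Assumed model: with probability `warmup`, a request for a key goes to
    # that key's alternative backend, so the alternative keeps a warm copy.
    return alternative if random.random() < warmup else primary
```

Is this roughly how the two parameters behave, or am I off?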
6. Conceptual questions:
1. What's the exact difference between the hash and shard directors, and when
should we use which one? The doc says that when the backends change, shard is
more consistent than hash, but how?
2. What will happen if, while using the shard director with "key=bereq.url",
I add/delete one backend from the backend list? Will it change the consistent
hashing ring for requests?
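This is the behaviour I would like confirmed: with a consistent-hash ring,
deleting one of five backends should only move the keys that were mapped to the
deleted backend (roughly 1/5 of them), leaving all other keys on their old
backends. A self-contained Python sketch of that experiment (again with sha256
standing in for the real hash, and my own helper names):

```python
import hashlib
from bisect import bisect

def ring(backends, replicas=25):
    # Build the sorted ring: one hashed "<name><n>" point per replica.
    return sorted(
        (int(hashlib.sha256(f"{b}{n}".encode()).hexdigest(), 16), b)
        for b in backends for n in range(1, replicas + 1))

def pick(ring_pts, key):
    # Map a key to the next ring point clockwise.
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return ring_pts[bisect(ring_pts, (h,)) % len(ring_pts)][1]

keys = [f"/hls/seg{i}.ts" for i in range(1000)]
before = ring(["B1", "B2", "B3", "B4", "B5"])
after = ring(["B1", "B2", "B3", "B4"])  # B5 deleted
moved = sum(pick(before, k) != pick(after, k) for k in keys)
# Only the keys that previously hashed to B5 should have moved.
```

Is this the property that makes shard "more consistent" than hash when the
backend list changes?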
Thanks in advance for your answers.
Best regards, Hamidreza
________________________________
From: Hamidreza Hosseini
Sent: Sunday, August 1, 2021 4:17 AM
To: [email protected] <[email protected]>
Subject: Best practice for caching scenario with different backend servers but
same content
Hi,
I want to use Varnish as the cache service in my scenario. I have about 10 HTTP
servers that serve HLS fragments as the backend servers, and about 5 Varnish
servers for caching. The problem comes in when I use the round-robin director
for the backend servers in Varnish: if a Varnish requests a specific file from
one backend server, and later requests the same file from another backend
server, it will cache that file again because of the different Host headers!
So my solution is to use the fallback director instead of round-robin, as
follows:
```
In varnish-1:
new hls_cluster = directors.fallback();
hls_cluster.add_backend(b1());
hls_cluster.add_backend(b2());
hls_cluster.add_backend(b3());
hls_cluster.add_backend(b4());
hls_cluster.add_backend(b5());
hls_cluster.add_backend(b6());
hls_cluster.add_backend(b7());
hls_cluster.add_backend(b8());
hls_cluster.add_backend(b9());
hls_cluster.add_backend(b10());
In varnish-2:
new hls_cluster = directors.fallback();
hls_cluster.add_backend(b10());
hls_cluster.add_backend(b1());
hls_cluster.add_backend(b2());
hls_cluster.add_backend(b3());
hls_cluster.add_backend(b4());
hls_cluster.add_backend(b5());
hls_cluster.add_backend(b6());
hls_cluster.add_backend(b7());
hls_cluster.add_backend(b8());
hls_cluster.add_backend(b9());
In varnish-3:
new hls_cluster = directors.fallback();
hls_cluster.add_backend(b9());
hls_cluster.add_backend(b1());
hls_cluster.add_backend(b2());
hls_cluster.add_backend(b3());
hls_cluster.add_backend(b4());
hls_cluster.add_backend(b5());
hls_cluster.add_backend(b6());
hls_cluster.add_backend(b7());
hls_cluster.add_backend(b8());
hls_cluster.add_backend(b10());
```
But I think this is not the best solution, because there is no load balancing,
even though I used a different backend as the first argument of the fallback
director on each Varnish. What is Varnish's recommendation for this scenario?
_______________________________________________
varnish-misc mailing list
[email protected]
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc