Hi Team,
Though we have proxy_cache_valid defined to cache only the listed response
codes, nginx is caching 416 responses.
proxy_cache_valid 200 206 10d;
proxy_cache_key $uri$http_range;
The 416 is returned by the upstream server, and it is getting cached on
Nginx.
Even with default settings by not
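One plausible cause, assuming the upstream attaches Cache-Control or Expires
headers to the 416 response: such headers take priority over
proxy_cache_valid, so statuses outside the 200/206 list can still be stored.
A minimal sketch of a mitigation:

```nginx
proxy_cache_key   $uri$http_range;
proxy_cache_valid 200 206 10d;

# Assumption: the upstream sends caching headers on the 416; those
# override proxy_cache_valid, so tell nginx to disregard them.
proxy_ignore_headers Cache-Control Expires;
```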
Hi Team,
Intermittently, many instances of the below error are reported in the
error.log file.
[alert] 41456#41456: ignore long locked inactive cache entry
efcd5613750302a2657fca63c07fc777, count:1
This appears in short bursts, with spikes of 50-90K such errors within a
one-minute span.
During this period
> Given the above, I see two possible reasons why the cache volume
> is only filled at 50%:
>
> 1. You've run out of keys_zone size.
>
> 2. You've run out of resources requested frequent enough to be
> cached with proxy_cache_min_uses set to 2.
>
> It should be easy enough to find out what
With proxy_cache_min_uses in use, the cache volume settles at around 50%
utilization.
No matter how much volume is allocated via max_size, it does not fill beyond
50%.
If proxy_cache_min_uses is removed, the cache fills up to the full max_size
allocated volume.
No of files
Thanks Maxim for the explanation.
Is there a way to figure out how much time Nginx took to deliver the files
to the end user?
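For per-request delivery time, nginx exposes the $request_time variable; a
minimal log_format sketch (format name and log path are assumptions):

```nginx
# $request_time: seconds with millisecond resolution, measured from
# reading the first bytes from the client until the log entry is
# written after the last bytes are sent.
log_format timing '$remote_addr "$request" $status '
                  'rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/access.log timing;
```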
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,288938,289054#msg-289054
___
nginx mailing list
nginx@nginx.org
In our case the response body is around 4 MB to 8 MB in size, and it is
showing 0.000.
Since "request time" is meant for analyzing the time taken to deliver the
content to the client, we are not able to get the actual value.
Even on a slow user connection it shows 0.000.
Generally it should
We are observing behavior where the request time and upstream response time
are logged with the same value when the request is a MISS.
And when there is a HIT for the request, the request time is logged as 0.000
for all requests.
Please help us understand the reason for this; we tried compiling
We are observing that multiple cache objects are created for the same file
in the Nginx cache, which results in non-optimal use of cache storage.
We are using proxy_cache_key as $uri.
proxy_cache_key $uri;
For example with file having URI
Module is fixed now
https://github.com/kaltura/nginx-akamai-token-validate-module/issues/18
Thanks
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,288455,288478#msg-288478
Thanks Maxim
We will fix the module; we were just looking for a workaround, in case it
can be handled by simply removing the null character.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,288455,288472#msg-288472
Thanks Maxim
Actually, the null character is not being generated by the client.
We are using below module to validate the tokens
https://github.com/kaltura/nginx-akamai-token-validate-module
This is caused by the akamai_token_validate_strip_token directive, which
strips the token and forwards the request.
The Nginx upstream returns 400 Bad Request if a null character is passed in
the request as part of the URI or query parameters.
Is there a way the null character can be removed from the request before
proxying it to the upstream?
It is only known from the access logs that a null character is being passed
in the request as
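For the workaround question (the proper fix belongs in the module): an
untested sketch that rebuilds the upstream URI without the NUL byte via a
map — PCRE can match the raw byte as \x00, but whether this is safe for all
of your URIs is an assumption, and "backend" is a hypothetical upstream:

```nginx
# Untested assumption: $uri carries a raw NUL byte after token
# stripping; splice it out before proxying.
map $uri $clean_uri {
    default                         $uri;
    "~^(?<pre>.*)\x00(?<post>.*)$"  "${pre}${post}";
}
server {
    location / {
        proxy_pass http://backend$clean_uri$is_args$args;
    }
}
```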
We are using Nginx to deliver Widevine streaming over the Web.
The website sends an OPTIONS request as a preflight check with every
fragment request for streaming.
Since Nginx by default caches GET and HEAD, we tried adding the OPTIONS
method to be cached on Nginx.
proxy_cache_methods GET HEAD OPTIONS;
Gives
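Note that proxy_cache_methods only accepts GET, HEAD and POST, so OPTIONS
cannot be cached this way. An alternative sketch: answer the CORS preflight
directly from nginx so it never reaches the upstream at all (location name,
upstream name and allowed headers below are assumptions):

```nginx
location /stream/ {
    # Answer preflight requests locally with an empty 204.
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin  "*" always;
        add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS" always;
        add_header Access-Control-Allow-Headers "Range, Origin" always;
        add_header Access-Control-Max-Age 86400 always;
        return 204;
    }
    proxy_pass http://backend;  # hypothetical upstream
}
```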
We want to use Nginx as an LB in such a way that Nginx returns a 301 or 302
redirect to the client instead of proxying the request to the
backend/upstream servers.
This is required because the server configured as the LB has a limited
throughput of 1 Gbps, while the upstream servers have a throughput of
10 Gbps.
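One way to sketch this (hostnames are hypothetical): pick an upstream via
split_clients and return the redirect directly, so the 1 Gbps LB never
carries the payload:

```nginx
# Distribute clients across edges by hashing address + URI.
split_clients "$remote_addr$request_uri" $edge {
    50%  edge1.example.com;
    *    edge2.example.com;
}
server {
    listen 80;
    location / {
        # Redirect instead of proxying; the client fetches the
        # content from the chosen 10 Gbps upstream directly.
        return 302 http://$edge$request_uri;
    }
}
```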
In both cases, whether geoip2 or ip2location, we will have to compile Nginx
to add support.
Currently we are using the below two RPMs from the Nginx repository
(http://nginx.org/packages/mainline/centos/7/x86_64/RPMS/):
nginx-1.10.2-1.el7.ngx.x86_64
nginx-module-geoip-1.10.2-1.el7.ngx.x86_64
Is the rpm
We are using Nginx with the DAV module, where an encoder pushes the content.
When this content is accessed, it is not served with the header
"Transfer-Encoding: chunked", though this header is added by the encoder.
Below are the version details:
nginx version: nginx/1.10.2
built by gcc 4.8.5
Thanks Maxim
For low-latency streaming, a Harmonic encoder is pushing media files with
"Transfer-Encoding: chunked" to the Nginx origin server.
We can see this in a tcpdump between the encoder and the Nginx origin.
However, when we try to stream content through the origin server,
In order to support CMAF and low-latency HLS streaming through Nginx, a
change in the content headers is required.
Instead of "Content-Length", the header value expected by the player is
"Transfer-Encoding: chunked", so that for a 6-second media segment the
player can start fetching data
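A minimal sketch of proxy settings that keep a chunked response intact end
to end (location and upstream names are assumptions):

```nginx
location /live/ {
    proxy_pass http://origin;      # hypothetical upstream
    proxy_http_version 1.1;        # required for chunked responses
    proxy_buffering off;           # forward bytes as they arrive,
                                   # instead of buffering the whole
                                   # segment and adding Content-Length
    chunked_transfer_encoding on;  # the default, kept explicit here
}
```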
Current Configuration
secure_link $arg_token,$arg_expiry;
secure_link_md5 "secret$arg_expiry";
if ($secure_link = "") {return 405;}
if ($secure_link = "0"){return 410;}
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,280125,280126#msg-280126
There is a requirement for token authentication using two secret keys, i.e.
a primary and a secondary secret, for a location block.
If the token with the first secret gives a 405, then the token generated
with the second secret should allow the request.
This is required for changing the secret key in production on the server
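A sketch of one possible approach (secret values and location names are
hypothetical, untested): validate against the primary secret first and, when
the hash does not match, retry internally against the secondary secret:

```nginx
location /mediafiles/ {
    secure_link     $arg_token,$arg_expiry;
    secure_link_md5 "primary-secret$arg_expiry";
    # Hash mismatch: retry with the secondary secret instead of 405.
    if ($secure_link = "")  { rewrite ^ /secondary$uri last; }
    if ($secure_link = "0") { return 410; }
    # ... serve or proxy the content here
}
location /secondary/ {
    internal;
    secure_link     $arg_token,$arg_expiry;
    secure_link_md5 "secondary-secret$arg_expiry";
    if ($secure_link = "")  { return 405; }
    if ($secure_link = "0") { return 410; }
    rewrite ^/secondary(.*)$ $1 break;  # restore the original URI
    # ... serve or proxy the content here
}
```

This works here because the md5 expression does not include $uri; if it did,
the /secondary prefix would have to be accounted for.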
Thanks. We need the client IP on Server B as well, for analytics.
We tried enabling the GeoIP module on Server A, which looks at the remote
address field and successfully blocks the request.
But the problem here is that it even blocks requests coming from our
internal private IPs
The GeoIP module can block requests on the basis of the remote address,
which is the IP of the remote device or user, but not on the basis of the
X-Forwarded-For header if it contains multiple IP addresses.
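One approach, assuming the server is built with the realip module (included
in the official packages): promote the client address from X-Forwarded-For
to the remote address before the GeoIP check, trusting only your own proxies
(the CIDR below is an assumption):

```nginx
# Replace the peer address with the client address taken from
# X-Forwarded-For, skipping trusted internal hops.
set_real_ip_from  10.0.0.0/8;       # internal proxies / Server A
real_ip_header    X-Forwarded-For;
real_ip_recursive on;               # walk past all trusted addresses
```

With this in place, both the GeoIP lookup and internal-IP allow rules see
the real client address rather than the intermediate proxy.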
There is a frontend server (Server A) which receives the request and sends
it to an intermediate server (Server B).
We
Let me explain the complete implementation methodology and the problem
statement.
URL to be protected:
http://site.media.com/mediafiles/movie.m3u8
We generate a token on the application/client side and send it along with
the request, so that content is delivered by the server only to authorized
apps.
Token
URL signing via secure_link MD5 restricts the client to accessing the
secured object for a limited time, using the below module.
The expiry time is sent as a query parameter from the client device.
secure_link $arg_hash,$arg_exp;
secure_link_md5 "secret$arg_exp";
if ($secure_link = "") {return 405;}
if
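For completeness, a small sketch (hypothetical helper name, matching the
config above where the md5 expression is "secret$arg_exp") of how the client
side could generate the token:

```python
import base64
import hashlib
import time

def make_token(secret: str, exp: int) -> str:
    """Token matching: secure_link_md5 "secret$arg_exp";
    MD5 digest, base64url-encoded, '=' padding stripped."""
    digest = hashlib.md5(f"{secret}{exp}".encode()).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

# Build a signed URL valid for one hour (names are assumptions).
exp = int(time.time()) + 3600
url = (f"http://site.media.com/mediafiles/movie.m3u8"
       f"?hash={make_token('secret', exp)}&exp={exp}")
```

Nginx then recomputes the same digest from "secret$arg_exp" and compares it
against $arg_hash.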
Any update, please?
How can we use two secret keys for secure_link_md5?
The primary is to be used by the application build which is in production,
and the secondary by the application build which has been rolled out with
the changed (secondary) secret key.
So the application should work in both scenarios in the meantime, until the
Thanks
But what about the next part: when we are actually in production and there
is a need to change the secret key on Nginx?
"Is there a way to implement the token authentication with two secret keys,
i.e. primary and secondary, so that if the first one does not work, the
second one is tried?"
We are trying to implement the secure_link_md5 token check.
Is there a way to verify the secure link, i.e. to generate the token using a
secret key and then verify it, allowing the request if it matches?
And also to allow requests whose token does not match, so that while rolling
out the update
Actually, it is not the case that more clients are trying to get content
from one of the servers, as server throughput shows an equal load on all
interfaces of the servers, around 4 Gbps.
So should I expect Writing to increase with a larger number of active
connections?
Is it so that Nginx is
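For reference, the Active/Reading/Writing/Waiting counters come from the
stub_status module; a minimal sketch to expose them (path and allowed
address are assumptions):

```nginx
location = /basic_status {
    stub_status;      # on nginx older than 1.7.5: stub_status on;
    allow 127.0.0.1;  # restrict to local monitoring
    deny  all;
}
```

Writing counts connections to which nginx is currently sending a response,
so slow client delivery keeps connections in that state longer.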
We are using HAProxy to distribute the load across the servers.
Load is distributed on the basis of the URI, with the parameter set in the
HAProxy config as "balance uri".
This has been done to achieve the maximum cache hit ratio from each server.
Is the high number of connections in Writing leading to an increase in the
response time for
On some of the servers, Waiting is increasing unevenly: we have 3 sets of
servers, and on all of them active connections are around 6K, yet Writing on
two of the servers is around 500-600 while on the third it is 3000. On this
server the response time for delivering content is increasing.
This happens
Thanks Maxim
We enabled $upstream_response_time logging on both servers, which shows a
response time of less than a second for upstream requests.
It does not seem to be an issue with the upstream server.
Even for requests that are a HIT, the response time on the server where
"Writing" is high varies from 10 sec
We have two Nginx servers acting as caching servers behind an HAProxy load
balancer. We are observing a high load on one of the servers, though we see
an equal number of requests per second coming to each server from the
application.
We see that on the one of the two servers where the load is high, i.e.
around 5, the response
Hi Everyone,
We are using Nginx as a caching server.
As per the Nginx documentation, by default nginx caches the 200, 301 and 302
response codes, but we are observing that if the upstream server returns an
error such as 400, 500 or 503, the response gets cached and all subsequent
requests for the same file become a HIT.
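A likely explanation, assuming the upstream attaches Cache-Control or
Expires headers to its error responses: those headers take priority over the
default status list, so the errors get cached anyway. A hedged sketch (cache
time is an assumption):

```nginx
# Only cache the statuses listed, and disregard upstream caching
# headers that would otherwise force 4xx/5xx responses into the cache.
proxy_cache_valid 200 301 302 10m;
proxy_ignore_headers Cache-Control Expires;

# Optional: on upstream errors, serve a stale cached copy instead of
# passing the error through.
proxy_cache_use_stale error timeout http_500 http_502 http_503;
```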