416 HTTP Response Issue

2021-06-11 Thread anish10dec
Hi Team,

Though we have proxy_cache_valid defined to cache only the respective
response codes, nginx is caching the 416 response.

proxy_cache_valid  200 206  10d;
proxy_cache_key $uri$http_range;

A 416 is returned from the upstream server, and it's getting cached on nginx.

The behavior is the same even with the default settings, i.e. without
specifying response codes:

proxy_cache_valid  10d;
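A possible explanation, judging from the nginx docs: freshness headers sent
by the upstream (X-Accel-Expires, Cache-Control, Expires) take priority over
proxy_cache_valid, and the cached 416 below carries
"Cache-Control: max-age=2592000". A hedged, untested sketch that makes
proxy_cache_valid the only thing controlling cacheability:

```nginx
# Sketch, not verified on this setup: ignore upstream freshness headers so
# that only the codes listed in proxy_cache_valid are ever stored.
proxy_ignore_headers Cache-Control Expires;
proxy_cache_valid    200 206 10d;
```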


Sample response cached on the CDN for a 416 response:

KEY:
/content/entry/wvdata/68/49/314af040c2c611ebad1619ca96fe25b8_2492_a.mp4bytes=12130626-12373254
HTTP/1.1 416 Requested Range Not Satisfiable
Server: nginx
Date: Tue, 01 Jun 2021 15:10:06 GMT
Content-Type: text/html
Content-Length: 190
Connection: close
Expires: Thu, 01 Jul 2021 14:10:43 GMT
Cache-Control: max-age=2592000
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type
Content-Range: bytes */4194304

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,291835,291835#msg-291835

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Alert: ignore long locked inactive cache entry

2021-04-08 Thread anish10dec
Hi Team,

Intermittently, multiple instances of the below error are reported in the
error.log file.

[alert] 41456#41456: ignore long locked inactive cache entry
efcd5613750302a2657fca63c07fc777, count:1

This comes in bursts, with a spike of 50-90 K such errors within a minute.

During this period, server load and CPU utilization rise to the maximum,
dropping all traffic, with 0% idle CPU and load rising to more than 100.

This lasts for about 5 minutes, after which the server returns to a normal
state.

Please help: what causes this alert, and how can this scenario be avoided?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,291199,291199#msg-291199



Re: Cache Volume utilized at around 50 % with proxy_cache_min_uses

2020-08-25 Thread anish10dec
> Given the above, I see two possible reasons why the cache volume 
> is only filled at 50%:
> 
> 1. You've run out of keys_zone size.
> 
> 2. You've run out of resources requested frequent enough to be 
> cached with proxy_cache_min_uses set to 2.
> 
> It should be easy enough to find out what happens in your case.
> 

It seems the likely reason is the keys_zone size. Will look into it by
increasing it and trying different permutations.

Given that, in general, 1 MB stores around 8,000 keys, what would be a
reasonable formula for the keys_zone size when proxy_cache_min_uses is set?

Since it keeps information about every requested resource, it would depend
heavily on the number of requested resources.

In my case the number of requests per second is around 1000, i.e. 3.6
million per hour during peak hours.
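The rough sizing arithmetic can be sketched like this (assumptions: the
~8,000 keys per MB figure from the nginx docs, and that with
proxy_cache_min_uses every requested URI occupies a key slot whether or not
its body is ever stored; the URI count below is a placeholder, since the
request rate alone doesn't give the number of *unique* URIs per inactive
window):

```python
# Back-of-envelope keys_zone sizing. KEYS_PER_MB comes from the nginx docs
# ("one megabyte zone can store about 8 thousand keys"); the URI count is a
# placeholder, not a measured value.
KEYS_PER_MB = 8000

def keys_zone_mb(unique_uris: int) -> float:
    """MB of keys_zone needed to track this many distinct cache keys."""
    return unique_uris / KEYS_PER_MB

# If, say, 3.6 million distinct URIs are seen within the inactive window:
print(keys_zone_mb(3_600_000))  # 450.0
```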

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,289185,289189#msg-289189



Cache Volume utilized at around 50 % with proxy_cache_min_uses

2020-08-25 Thread anish10dec
With use of proxy_cache_min_uses, the cache volume is settling at around 50%
utilization.
No matter what volume is allocated in max_size, it does not fill up beyond
50%.
If proxy_cache_min_uses is removed, the cache fills up to the max_size
allocated volume.

The number of files in the cache directory is far less than the size
allocated in the keys zone. It is getting capped near 2 million, whereas the
allocated keys zone could have accommodated around 8 million files with the
below configuration:

proxy_cache_path /cache/contentcache keys_zone=content:1000m levels=1:2
max_size=1000g inactive=7d use_temp_path=off;

proxy_cache_min_uses 2;

The cache volume utilized with the above configuration is around 550 GB,
which is not growing further. Since inactive is set to 7d, this should only
have taken effect after 7 days, when content is deleted if not accessed
within that period.

Writing all objects to disk causes high I/O, so proxy_cache_min_uses would
be beneficial for utilizing the cache optimally with a high cache hit ratio.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,289185,289185#msg-289185



Re: Request Time in Nginx Log as always 0.000 for HIT Request

2020-08-11 Thread anish10dec
Thanks Maxim for the explanation.

Is there a way to figure out how much time nginx took to deliver the files
to the end user?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,288938,289054#msg-289054



Re: Request Time in Nginx Log as always 0.000 for HIT Request

2020-08-03 Thread anish10dec
In our case the response body is around 4 MB to 8 MB in size, and it still
shows 0.000.

Since "request time" is meant for analyzing the time taken to deliver the
content to the client, we are not able to get the actual value.

Even on slow user connections it shows 0.000.
Generally it should be much higher, as it captures the total time taken to
deliver the last byte of the content to the user.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,288938,288954#msg-288954



Request Time in Nginx Log as always 0.000 for HIT Request

2020-08-01 Thread anish10dec
We are observing a behavior where the request time and the upstream response
time are logged as the same value when a request is a MISS.

And when there is a HIT for the request, the request time is logged as 0.000
for all requests.

Please help: what could be the reason for this? We tried compiling from
source, installing from RPM, and upgrading and downgrading the nginx
version, but the behavior always remains the same.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,288938,288938#msg-288938



Multiple Cache Object for same file

2020-07-02 Thread anish10dec
We are observing that multiple cache objects are getting created for the
same file in the nginx cache, which results in non-optimal use of cache
storage.

We are using $uri as the proxy_cache_key:

proxy_cache_key $uri;

For example, for the file with URI
/content/entry/jiomags/content/719/51/51_t_0.jpg

2 cache objects have been created in the cache folder. Both files have the
same KEY:

-rw--- 1 nginx nginx 21023 Jun 27 16:11
./2/95/9d78505da184e6ccd981fefe6b333952
-rw--- 1 nginx nginx 21023 Jun 27 18:16
./f/ad/c8e1c56031a14dd4a27e538956253adf

 vi ./2/95/9d78505da184e6ccd981fefe6b333952

KEY: /content/entry/jiomags/content/719/51/51_t_0.jpg
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 27 Jun 2020 10:41:01 GMT
Content-Type: image/jpeg
Content-Length: 20369
Connection: close
Last-Modified: Fri, 10 Jan 2020 15:20:59 GMT
Vary: Accept-Encoding
ETag: "5e18965b-4f91"
Expires: Sun, 26 Jul 2020 20:17:15 GMT
Cache-Control: max-age=2592000
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length,Content-Range
Access-Control-Allow-Headers: Range
Accept-Ranges: bytes

 vi ./f/ad/c8e1c56031a14dd4a27e538956253adf

KEY: /content/entry/jiomags/content/719/51/51_t_0.jpg
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 27 Jun 2020 12:46:06 GMT
Content-Type: image/jpeg
Content-Length: 20369
Connection: close
Last-Modified: Fri, 10 Jan 2020 15:20:59 GMT
Vary: Accept-Encoding
ETag: "5e18965b-4f91"
Expires: Mon, 27 Jul 2020 12:46:06 GMT
Cache-Control: max-age=2592000
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length,Content-Range
Access-Control-Allow-Headers: Range
Accept-Ranges: bytes

What could be the reason for a duplicate file getting cached with the same
URI and KEY?
Please help

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,288520,288520#msg-288520



Re: Removing Null Character from Query Parameter

2020-06-26 Thread anish10dec
The module is fixed now:

https://github.com/kaltura/nginx-akamai-token-validate-module/issues/18

Thanks

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,288455,288478#msg-288478



Re: Removing Null Character from Query Parameter

2020-06-25 Thread anish10dec
Thanks Maxim
Will fix the module; I was just looking for a workaround, in case it could
be handled by simply removing the null character.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,288455,288472#msg-288472



Re: Removing Null Character from Query Parameter

2020-06-25 Thread anish10dec
Thanks Maxim

Actually, the null character is not being generated by the client.

We are using the below module to validate the tokens:
https://github.com/kaltura/nginx-akamai-token-validate-module

This is caused by the akamai_token_validate_strip_token directive, which
strips the token and forwards the request to the upstream server.

While stripping the token and passing the remaining request to the upstream,
it appends a null character at the end.
If there is no additional query param in the request apart from the token,
then there is no issue in handling it.

http://10.49.120.61/folder/Test.m3u8?token=st=1593095161~exp=1593112361~acl=/*~hmac=60d9c29a65d837b203225318d1c69e205037580a08bf4417d4a1e237e5a2f5b6=abc123

The request passed to the upstream is as below, which is causing the problem:

GET /folder/Test.m3u8?uid=abc123\x00

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,288455,288462#msg-288462



Removing Null Character from Query Parameter

2020-06-25 Thread anish10dec
The nginx upstream returns 400 Bad Request if a null character is passed in
the request as part of the URI or query params.

Is there a way the null character can be removed from the request before
proxying it to the upstream?

It is only known from the access logs that a null character is being passed
in the request as \x00 and is causing the failure.

How to identify the null character and remove it?

Tried the below, but it is not able to match the null character:

if ($args ~* (.*)(\x00)(.*)) {
 set $args $1$3;
}


Nginx returns the below error:

Error Log

2020/06/25 20:20:43 [info] 19838#19838: *11985 client sent invalid request
while reading client request line, client: 10.49.120.61, server: test.com,
request: "HEAD /folder/Test.m3u8?uid=abc123 HTTP/1.0"


Access log

 10.49.120.61 | - | test.com | [25/Jun/2020:20:20:43 +0530] | - | "HEAD
/folder/Test.m3u8?uid=abc123\x00 HTTP/1.0" | 400 | 0 | "-" | "-" | 0.001 | -
| - | - | "- - - -" | http | - | -| "-"

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,288455,288455#msg-288455



Caching OPTIONS Response

2019-04-03 Thread anish10dec
We are using nginx to deliver Widevine streaming over the web.

The website sends an OPTIONS request as a preflight check with every
fragment request for streaming.

Since nginx by default caches GET and HEAD, we tried including the OPTIONS
method in what nginx caches:

proxy_cache_methods GET HEAD OPTIONS;

This gives an "invalid value" error message.

The below link says OPTIONS cannot be cached:
https://forum.nginx.org/read.php?2,253403,253408

This causes every preflight-check request from the browser to hit the origin
server running nginx.
Please suggest a way to handle OPTIONS requests.

Regards,
Anish
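Since proxy_cache_methods only accepts GET and HEAD, a common workaround is
to answer the CORS preflight directly from nginx instead of caching it, so
OPTIONS never reaches the origin. A sketch (untested; the location name,
upstream name, and header list are illustrative, not from the original
setup):

```nginx
location /fragments/ {
    # Answer preflights locally; only real requests go upstream.
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin  "*";
        add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS";
        add_header Access-Control-Allow-Headers "Range,Origin,Content-Type";
        add_header Access-Control-Max-Age       1728000;  # browser may reuse for 20 days
        return 204;
    }
    proxy_pass  http://origin_upstream;   # hypothetical upstream name
    proxy_cache content;
}
```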

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,283592,283592#msg-283592



Nginx as LB to redirect/return to upstream server instead of Proxy

2018-10-15 Thread anish10dec
We want to use nginx as an LB in such a way that nginx returns a 301 or 302
redirect to the client instead of proxying the request to the
backend/upstream servers.

This is required because the server configured as the LB has a limited
throughput of 1 Gbps, while the upstream servers have a throughput of
10 Gbps.

We want users to connect directly to an upstream server for data delivery.
The nginx LB server should make sure that an upstream is up and functional
before issuing a 301 or 302 redirect to it.

Example:

http://nginxlb.com/data/download

The nginx LB returns a 301 or 302 redirect to the client (to an upstream
that is up):

http://upstreamserver1.com/data/download
http://upstreamserver2.com/data/download

Is this possible by:

return 301 http://$upstream_addr/data/download;
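As far as I know, $upstream_addr is only populated after a request has
actually been proxied, so it can't be used in a plain `return`. One possible
sketch (untested; host names taken from the example above) distributes
redirects with split_clients; note that split_clients alone does not check
upstream health, so health awareness would still need something external:

```nginx
# Pick a redirect target deterministically per client/URI.
split_clients "${remote_addr}${request_uri}" $redirect_host {
    50%  upstreamserver1.com;
    *    upstreamserver2.com;
}

server {
    listen 80;
    location /data/ {
        return 302 http://$redirect_host$request_uri;
    }
}
```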

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,281590,281590#msg-281590



Re: GeoIP2 Maxmind Module Support for Nginx

2018-10-01 Thread anish10dec
In both cases, whether geoip2 or ip2location, we will have to compile nginx
to add support.

Currently we are using the below two RPMs from the nginx repository
(http://nginx.org/packages/mainline/centos/7/x86_64/RPMS/):
nginx-1.10.2-1.el7.ngx.x86_64
nginx-module-geoip-1.10.2-1.el7.ngx.x86_64

Is an RPM of the geoip2 module available, or is there any plan to make one
available?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,281341,281455#msg-281455



Re: Enabling "Transfer-Encoding : chunked"

2018-09-26 Thread anish10dec
We are using nginx with the DAV module, where an encoder is pushing the
content. When this content is accessed, it does not come with the
"Transfer-Encoding: chunked" header, although the header is added by the
encoder.


Below is version details : 

nginx version: nginx/1.10.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --with-http_ssl_module --with-http_realip_module
--with-http_addition_module --with-http_sub_module --with-http_dav_module
--add-module=/opt/nginx-dav-ext-module-master --with-http_flv_module
--with-http_mp4_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_random_index_module
--with-http_secure_link_module --with-http_stub_status_module
--with-http_auth_request_module --with-mail --with-mail_ssl_module
--with-file-aio --with-ipv6

Below is the nginx configuration where the encoder pushes content to nginx
running on port 81:

location /packagerx {
root   /ram/streams_live/packagerx;
dav_methods PUT DELETE MKCOL COPY MOVE;
dav_ext_methods PROPFIND OPTIONS;
create_full_put_path  on;
dav_access user:rw group:rw all:r;
autoindex on;
client_max_body_size 100m;
}

Below is the configuration through which nginx running on port 80 serves the
content:

location / {
root   /ram/streams_live/packagerx;
expires 1h;
access_log /usr/local/nginx/logs/access_client.log lt-custom;
proxy_buffering off;
chunked_transfer_encoding on;

types {
application/dash+xml mpd;
application/vnd.apple.mpegurl m3u8;
video/mp2t ts;
video/x-m4v   m4v;
audio/x-m4a   m4a;
text/html html htm shtml;
text/css   css;
text/xml   xml;
image/gif gif;
image/jpeg   jpeg jpg;
application/javascript   js;
application/atom+xml   atom;
application/rss+xml  rss;
text/mathml  mml;
text/plain  txt;

}
}

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,281371,281413#msg-281413



Re: Enabling "Transfer-Encoding : chunked"

2018-09-24 Thread anish10dec
Thanks Maxim

For low-latency streaming, a Harmonic encoder pushes media files with
"Transfer-Encoding: chunked" to the nginx origin server.

We can see this in a tcpdump between the encoder and the nginx origin.

However, when we try to stream content through the origin server,
"Transfer-Encoding: chunked" is missing from the headers, because of which
the player is not able to start the stream with low latency enabled.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,281371,281374#msg-281374



Enabling "Transfer-Encoding : chunked"

2018-09-24 Thread anish10dec
To support CMAF and low-latency HLS streaming through nginx, a change in the
content headers is required.

Instead of "Content-Length", the player expects "Transfer-Encoding: chunked"
in the header, so that for a 6-second media segment the player can start
streaming by fetching data in ~200 ms parts, and thus streaming will have
low latency. This is supported by HTTP/1.1.

Tried the below parameter to enable this in the nginx configuration:
chunked_transfer_encoding on;

But it does not add the header.

Please suggest a better way to do it.
https://gist.github.com/CMCDragonkai/6bfade6431e9ffb7fe88

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,281371,281371#msg-281371



Re: Secure Link Md5 with Primary and Secondary Secret

2018-06-12 Thread anish10dec
Current Configuration

secure_link $arg_token,$arg_expiry;
secure_link_md5 "secret$arg_expiry";
if ($secure_link = "") {return 405;}
if ($secure_link = "0"){return 410;}

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,280125,280126#msg-280126



Secure Link Md5 with Primary and Secondary Secret

2018-06-12 Thread anish10dec
There is a requirement for token authentication using two secret keys, i.e.
a primary and a secondary secret, for a location block.

If the token with the first secret gives a 405, then the token generated
with the second secret should be tried, to allow the request.

This is required for changing the secret key in production, so that some
users will be allowed with the old secret and some with the new secret,
until the secret is updated on all servers and clients.

Something similar to the below implementation:
https://cdnsun.com/knowledgebase/cdn-live/setting-a-token-authentication-protect-your-cdn-content

Regards & Thanks,
Anish

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,280125,280125#msg-280125



Re: RE: [IE] GeoIP Module for Blocking IP in http_x_forwarded_for

2018-01-17 Thread anish10dec
Thanks. We need the client IP on Server B as well, for analytics.

Tried enabling the GeoIP module on Server A, which looks at the remote
address field and successfully blocks the requests.
But the problem here is that it also blocks requests coming from our
internal private IP segment, such as 10.0.0.0/27, which is used for
monitoring.

Is there a way to declare a few private IPs or IP ranges as trusted
addresses, even if they fall under blocked countries?
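One way this is often handled (sketch only, untested; $allowed_country is
assumed to be the existing map from the GeoIP setup discussed in this
thread) is to compute a separate "trusted" flag with the geo module and let
it override the country decision:

```nginx
geo $trusted_net {
    default      0;
    10.0.0.0/27  1;    # internal monitoring segment from the post
}

# Block only when the country is not allowed AND the source is not internal.
map "$trusted_net:$allowed_country" $deny_request {
    default   0;
    "0:no"    1;
}

server {
    if ($deny_request) { return 403; }
}
```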

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,278117,278165#msg-278165



GeoIP Module for Blocking IP in http_x_forwarded_for

2018-01-11 Thread anish10dec
The GeoIP module is able to block a request on the basis of the remote
address, which is the IP of the remote device or user, but not on the basis
of the X-Forwarded-For IP if it has multiple IP addresses in it.

There is a frontend server (Server A) which receives the request and sends
it to an intermediate server (Server B).
We have the GeoIP module installed on the intermediate server, i.e. Server B.


Server B <--- Server A <--- User

When Server B receives the request from Server A, the remote address
(remote_addr) for Server B is the IP of Server A.
The device/user IP is in the http_x_forwarded_for field.
If http_x_forwarded_for has a single IP in it, the GeoIP module is able to
block that IP on the basis of the blocking rules applied.

If http_x_forwarded_for has multiple IPs, i.e. the IP of the user as well as
the IP of some proxy server or of Server A, then it is not able to block the
request.

Below is the configuration:

geoip_country /usr/share/GeoIP/GeoIP.dat;
geoip_proxy   IP_OF_ServerA;   # GeoIP module ignores remote_addr,
                               # considering it trusted, and refers to X-Forwarded-For

map $geoip_country_code $allowed_country {
default no;
US yes;
}

http_x_forwarded_for = { user IP from the UK } - a request from this IP is
getting blocked

http_x_forwarded_for = { user IP from the UK, proxy IP from the US } - this
request is not getting blocked

http_x_forwarded_for = { user IP from the UK, IP of Server A } - this
request is not getting blocked

It seems the nginx GeoIP module refers to the last IP in the
http_x_forwarded_for field when applying the blocking method.

Is there a way to check the first IP address in http_x_forwarded_for for
blocking the request?

Please suggest

Please refer to this for the solution in Apache:
https://dev.maxmind.com/geoip/legacy/mod_geoip2/
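For reference, one standard approach with stock nginx modules (sketch,
untested here) is to let the realip module rewrite $remote_addr from
X-Forwarded-For before GeoIP looks at it; with real_ip_recursive on, trusted
proxy addresses at the tail of the header are skipped, so the left-most
untrusted (client) IP is the one that ends up in $remote_addr:

```nginx
# Requires nginx built with ngx_http_realip_module.
set_real_ip_from  IP_OF_ServerA;       # placeholder, as in the config above
real_ip_header    X-Forwarded-For;
real_ip_recursive on;                  # skip all trusted proxies in the chain

geoip_country /usr/share/GeoIP/GeoIP.dat;
# $geoip_country_code is now derived from the recovered client IP.
```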

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,278110,278110#msg-278110



Re: Secure Link Expires - URL Signing

2018-01-10 Thread anish10dec
Let me explain the complete implementation methodology and the problem
statement.

URL to be protected 
http://site.media.com/mediafiles/movie.m3u8

We are generating a token on the application/client side and sending it
along with the request, so that content is delivered by the server only to
authorized apps.

Token generation methodology on the app/client:

expire = current epoch time on the app/client + 600 (600 so that the URL
will be valid for 10 mins)
uri = mediafiles/movie.m3u8
secret = secretkey

On the client, an MD5 function is used to generate the token using the three
values defined above:
token = MD5Hash(secret, uri, expire)

The client passes the generated token along with the expiry time in the URL:
http://site.media.com/mediafiles/movie.m3u8?token={generated
value}&expire={value in variable expire}

Token validation on the server:
The token and expire are captured and passed through the secure_link module.

location / {

secure_link $arg_token,$arg_expire;
secure_link_md5  "secretkey$uri$arg_expire";

# If the token generated here matches the token passed in the request, the
# content is delivered
if ($secure_link = "") {return 405;}  # token doesn't match

# If the value in arg_expire is greater than the current epoch time of the
# server, the content is delivered.
if ($secure_link = "0") {return 410;}
}

Since arg_expire has the epoch time of the device + 600 sec, on the server
the check will succeed. If someone tries to access the content using the
same URL after 600 sec, the time on the server will be greater than the time
sent in arg_expire and thus the request will be denied.


Problem statement:
Someone changes the time on their client device to some future date and
time. In this case the same app will generate the token with the
above-mentioned methodology on the client and send it along with the request
to the server.
The server will generate the token at its end using all the values, together
with the expire time sent in the URL request (note: here the expire time was
generated using the future date on the device).
So the tokens will match and the first check will succeed.
In the second check, since arg_expire has the epoch time of the future date
+ 600 sec, which will obviously be greater than the current epoch time of
the server, the request will be successfully served.
Anyone can use the same token and extended epoch time for as long a period
as the future date set on the device allows.

Hopefully it is now self-explanatory.
Please let me know if there is a way to protect the content in this scenario.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,278063,278088#msg-278088



Secure Link Expires - URL Signing

2018-01-10 Thread anish10dec
URL signing with secure_link MD5 restricts the client from accessing the
secured object beyond a limited time, using the below module.

The expiry time is sent as a query parameter from the client device:

secure_link $arg_hash,$arg_exp;
secure_link_md5 "secret$arg_exp";
if ($secure_link = "") {return 405;}
if ($secure_link = "0") {return 410;}

The problem here is that if the expiry time (exp) sent from the client is
less than the server time, the nginx module returns 410.

But if some client changes the device time to a future date and requests the
object, the object will still be delivered, as the client time will be
greater than the server time.
Is there a way for the secure_link module to restrict requests against
future times, so that, for example, the object is accessible only for a
1-hour duration from the current time?
Please suggest

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,278063,278063#msg-278063



Re: Secure Link Md5 Implementation

2017-09-13 Thread anish10dec
Any update, please?
How to use two secret keys for secure_link MD5?

The primary is to be used by the application build which is in production,
and the secondary by the application build which has been rolled out with
the changed (secondary) secret key.
So the application should work in both scenarios until all users have
updated the application.

Please help.
Inside a location or server block:

secure_link $arg_tok,$arg_e;
secure_link_md5 "primarysecret$arg_tok$arg_e";
secure_link_md5 "secondarysecret$arg_tok$arg_e";
if ($secure_link = "") {return 405;}
if ($secure_link = "0"){return 410;}

This gives an error, as secure_link_md5 is used twice within one location
block.
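One pattern that is sometimes suggested for this (sketch only, untested;
location names are illustrative): secure_link_md5 can appear only once per
location, but error_page can hand a request that failed the primary check to
a named location that re-checks it with the secondary secret:

```nginx
location /protected/ {
    secure_link     $arg_tok,$arg_e;
    secure_link_md5 "primarysecret$arg_tok$arg_e";
    error_page 405 = @secondary;          # failed primary check -> retry
    if ($secure_link = "")  { return 405; }
    if ($secure_link = "0") { return 410; }
    try_files $uri =404;
}

location @secondary {
    secure_link     $arg_tok,$arg_e;
    secure_link_md5 "secondarysecret$arg_tok$arg_e";
    if ($secure_link = "")  { return 405; }
    if ($secure_link = "0") { return 410; }
    try_files $uri =404;
}
```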

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275668,276348#msg-276348



Re: Secure Link Md5 Implementation

2017-07-27 Thread anish10dec
Thanks

But what about the next part: when we are actually in production and there
is a need to change the secret key on nginx?

"Is there a way to implement token authentication with two secret keys, i.e.
primary and secondary,
so that if the first one does not work, the second one is tried?
This would be helpful while changing the secret key in production, so that
some users will be allowed with the old secret and some, whose clients have
been updated, with the new secret."

Regards,
Anish

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275668,275685#msg-275685



Secure Link Md5 Implementation

2017-07-26 Thread anish10dec
Trying to implement the secure_link MD5 token check.

Is there a way to verify the secure link, i.e. to generate a token using the
secret key and verify the token? If it matches, the request should be
allowed.
And also to allow requests whose token doesn't match, because while rolling
out the update it may happen that some client requests will come without a
token.
Those requests should also be allowed meanwhile, until all clients are
updated with the new release enabling token-based authentication.

Secondly, is there a way to implement token authentication with two secret
keys, i.e. primary and secondary,
so that if the first one does not work, the second one is tried?
This would be helpful while changing the secret key in production, so that
some users will be allowed with the old secret and some, whose clients have
been updated, with the new secret.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275668,275668#msg-275668



Re: Uneven High Load on the Nginx Server

2016-10-05 Thread anish10dec
Actually, it's not the case that more clients are trying to get content from
one of the servers, as the server throughput shows an equal load on all
interfaces of the server, which is around 4 Gbps.

So should I expect "Writing" to increase with a higher number of active
connections?
Is it that nginx is not able to handle the load of that many connections,
due to which requests go into the Writing state and nginx does not release
them?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269874,270085#msg-270085



Re: Uneven High Load on the Nginx Server

2016-10-05 Thread anish10dec
We are using HAProxy to distribute the load across the servers.

The load is distributed on the basis of the URI, with the parameter set in
the HAProxy config as "balance uri".

This has been done to achieve the maximum cache hit rate from the servers.

Does a high number of connections in Writing lead to an increase in response
time for delivering the content?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269874,270080#msg-270080



Re: Uneven High Load on the Nginx Server

2016-10-05 Thread anish10dec
On some of the servers, Waiting is increasing unevenly: if we have a set of
3 servers, on all of them the active connections are around 6K, and Writing
on two of the servers is around 500-600, while on the third it is 3000. On
this server the response time for delivering content is increasing.
This happens even if the content is served from the nginx cache.
Is any parameter in nginx causing this? On stopping nginx, the same
behaviour shifts to one of the other two.

This is the nginx conf which we are using.
The server has 60 CPU cores with 1.5 TB of RAM.
PFB part of the nginx.conf of the server with the issue:

worker_processes auto; 
events { 
worker_connections 4096; 
use epoll; 
multi_accept on; 
} 
worker_rlimit_nofile 11; 

http { 
include mime.types; 
default_type video/mp4; 
proxy_buffering on; 
proxy_buffer_size 4096k; 
proxy_buffers 5 4096k; 
sendfile on; 
keepalive_timeout 30; 
keepalive_requests 6; 
send_timeout 10; 
tcp_nodelay on; 
tcp_nopush on; 
reset_timedout_connection on; 
gzip off; 
server_tokens off; 

Regards,
Anish

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269874,270077#msg-270077



Re: Uneven High Load on the Nginx Server

2016-09-27 Thread anish10dec
Thanks Maxim

We enabled logging of the upstream response time on both servers, which
shows a response time of less than a second for upstream requests.
It doesn't seem to be an issue with the upstream server.
Even for requests which are HITs, the response time on the server on which
"Writing" is high varies from 10 sec to 60 sec and more, while on the other
server it is less than 2 sec, whether MISS or HIT.
Once we restart the nginx service the load reduces, and so does the response
time, but it comes back within about 5 minutes.
And if we stop the nginx service on that server, its load decreases and the
same load, with the high "Writing" value, shifts to the other server.

We have the cache directory mounted on SSD as well as RAM.
We are using CentOS 6.5 with both IPv4 and IPv6 enabled on the server.
Applications/users connect directly to the server over IPv4/IPv6.

Output of iostat:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.17    0.00    0.60    0.00    0.00   99.22

Device:            tps   Blk_read/s   Blk_wrtn/s      Blk_read      Blk_wrtn
sda              11.75        24.82       402.35     659063678   10683772144
sdb             454.37      2180.70      3663.85   57905655098   97288688008
sdc             565.74      1771.80      4512.49   47047922584  119823249968
dm-0              0.00         0.00         0.00          5136             0
dm-1              0.05         0.40         0.30      10621106       7853376
dm-2           1040.29      3952.19      8176.34  104945070786  217111937976
dm-3              1.01         1.40         8.06      37053274     214028160
dm-4             33.97         2.39       271.65      63438578    7213330456
dm-5             15.55        20.55       122.34     545791730    3248559664

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.58    0.00    4.52    0.02    0.00   94.88

Device: rrqm/s  wrqm/s     r/s      w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda       0.00  181.00    0.00    34.60     0.00  1724.80    49.85     0.01   0.17   0.09   0.30
sdb       0.00   68.40   46.40   781.40 10710.40  6798.40    21.15     0.31   0.37   0.03   2.88
sdc       0.00   26.00   59.80   552.60 13425.60  4628.80    29.48     0.28   0.46   0.05   3.22
dm-0      0.00    0.00    0.00     0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
dm-1      0.00    0.00    0.00     0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
dm-2      0.00    0.00  106.20  1428.40 24136.00 11427.20    23.17     0.79   0.52   0.04   6.00
dm-3      0.00    0.00    0.00     0.20     0.00     1.60     8.00     0.00   3.00   3.00   0.06
dm-4      0.00    0.00    0.00   212.00     0.00  1696.00     8.00     0.13   0.61   0.00   0.08
dm-5      0.00    0.00    0.00     3.40     0.00    27.20     8.00     0.00   0.47   0.47   0.16

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269874,269878#msg-269878



Uneven High Load on the Nginx Server

2016-09-27 Thread anish10dec
We have two nginx servers acting as caching servers behind an HAProxy load
balancer. We are observing a high load on one of the servers, though we see
an equal number of requests per second coming to both servers from the
application.

On the server on which the load is high, i.e. around 5, the response
time/latency in delivering content is high. On the same server, the attached
stats-module screenshot shows more requests in "Writing" compared to the
other one, on which the load is 0.5 and the response time/latency is also
low.

Please help with what might be causing the high load and the high number of
Writing connections on one of the servers.

Active connections: 8619 
server accepts handled requests
33204889 33204889 38066647 
Reading: 0 Writing: 755 Waiting: 7863


Active connections: 10959 
server accepts handled requests
34625312 34625312 39974933 
Reading: 0 Writing: 3700 Waiting: 7259

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269874,269874#msg-269874



Nginx Caching Error Response Code like 400 , 500 , 503 ,etc

2016-08-07 Thread anish10dec
Hi Everyone, 

We are using nginx as a caching server.

As per the nginx documentation, by default nginx caches the 200, 301 & 302
response codes, but we are observing that if the upstream server returns an
error such as 400, 500, or 503, the response gets cached and all subsequent
requests for the same file become HITs.

Even if we set proxy_cache_valid specifying a response code (like
proxy_cache_valid 200 15m;), the error response codes are still cached, but
301 & 302 are no longer cached in that case. Why is the same not applied to
the error response codes?

Is this the intended behaviour of nginx, or a bug? We are using nginx
version 1.4.0.

Please help so that the error response codes do not get cached, as this
serves the same error response to users who request the file even though the
upstream server is healthy and able to serve the request.

Regards, 
Anish

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,268813,268813#msg-268813
