Hi,
I noticed that if I configure a server to use the slice module it will happily
read slices of the appropriate size as fast as the upstream server can deliver
them.
However, if I configure the same server to run over SSL it requests the slices
at a rate consonant with the speed with which
Wow, thank you for your patience.
Perfect!!!
Btw, the ctx mechanism in the slice module is a really nice technique.
On Wed, Jun 21, 2017 at 2:43 AM, Roman Arutyunyan <a...@nginx.com> wrote:
> Hi,
>
> Here's a simple configuration for your case.
>
> map $http_range $proxy_range
> [...]
>         proxy_pass http://127.0.0.1:9000;
>     }
> }
Note that, at the request level, the slice module is enabled by evaluating the
$slice_range variable.
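For reference, here is a fuller sketch of the map-based approach described above. The map body and surrounding server block are truncated in the quoted message, so the listen port, slice size, and map entries below are assumptions, not the exact original config. The idea: forward the client's own Range header untouched when present, and let the slice module drive the Range header (via $slice_range) only when the client sent none.

```nginx
# Sketch (assumed details, not the original message's exact config)
map $http_range $proxy_range {
    default $http_range;   # client sent a Range header: forward it as-is
    ""      $slice_range;  # no client Range header: enable slicing
}

server {
    listen 8000;

    location / {
        slice 1m;
        proxy_set_header Range $proxy_range;
        proxy_pass http://127.0.0.1:9000;
    }
}
```

Because $slice_range is only evaluated when the map falls through to the empty-string branch, slicing effectively turns itself off for explicit client range requests.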
On Wed, Jun 21, 2017 at 12:45:07AM +0800, 洪志道 wrote:
> Well, it's a good idea, but it doesn't fully meet the requirement yet.
>
> Now we assume u
I hope the slice module can satisfy customer requirements.
The key point is that users want to control the behavior of slicing,
and this is a common need.
Here's the patch, please take a look!
diff -r 5e05118678af src/http/modules/ngx_http_slice_filter_module.c
--- a/src/http/modules
> > location / {
> >     slice 10;
> >     proxy_set_header Range $slice_range;
> >     proxy_pass http://127.0.0.1:81;
> > }
> > }
> >
> > server {
> >     listen 81;
> >     root html;
> > }
> >
> > Then we start a request with curl.
> > > curl http://my.test.com/ -x 127.1:80 -H "range: bytes=1-50, 2-51"
Then we start a request with curl.
> curl http://my.test.com/ -x 127.1:80 -H "range: bytes=1-50, 2-51"
We get a response of the whole file, which differs from the expectation (1-50,
2-51).
It seems that the slice module doesn't support multi-part ranges (separated by
commas), but it is confusing that the $slice_range variable is still valid.
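For contrast, a single-range request against the same kind of setup behaves as expected. This is a sketch following the ports and hostname from the example above, not a verified transcript:

```nginx
# Same sliced proxy as in the quoted config (sketch)
location / {
    slice 10;
    proxy_set_header Range $slice_range;
    proxy_pass http://127.0.0.1:81;
}
```

With a single range, e.g. `curl http://my.test.com/ -x 127.1:80 -H "range: bytes=1-50"`, a 206 partial response covering just those bytes comes back. The $slice_range variable only ever expands to single-part ranges of the form `bytes=start-end`, which is why comma-separated multi-part ranges fall outside what the module handles.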
Hi George,
On Tue, Jun 13, 2017 at 04:39:08PM +0300, George . wrote:
> Hi Roman,
>
> Thank you a lot for detailed explanation.
> Initially I thought that NGX_HTTP_SUBREQUEST_CLONE option to
> ngx_http_subrequest (your latest fix in slice module - Slice filter: fetch
>
Hi Roman,
Thank you a lot for detailed explanation.
Initially I thought that the NGX_HTTP_SUBREQUEST_CLONE option to
ngx_http_subrequest (your latest fix in the slice module, "Slice filter: fetch
slices in cloned subrequests") was intended to keep the full context in the
subrequest during redirects
[...] whole file here
instead of the 4m-8m range.
The takeaway is that you should avoid using the slice module with redirects
(error_page, X-Accel-Redirect) for fetching slices; instead, proxy directly
to the origin server.
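To illustrate that takeaway, a minimal before/after sketch; the location names and upstream addresses are placeholders of my own, not from the thread:

```nginx
# Problematic (sketch): fetching slices via an internal redirect
# (X-Accel-Redirect / error_page) loses per-slice request state.
#
# location / {
#     slice 1m;
#     proxy_set_header Range $slice_range;
#     proxy_pass http://auth-backend;   # backend answers with X-Accel-Redirect
# }

# Preferred: the sliced location proxies straight to the origin.
location / {
    slice 1m;
    proxy_set_header Range $slice_range;
    proxy_pass http://origin.example.com;
}
```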
> If there is no cached slice everything is okay (2nd capture
> slice
Hi,
I've discovered the following strange issue with http_slice_module.
If I have a named location for an internal 302 redirect, caching one slice
makes a further request for the whole object break the upstream redirected
request (missing Range header; see frame 254 in the attached capture
2016-10-12 14:09 GMT+02:00 Roman Arutyunyan <a...@nginx.com>:
> Do you have this issue without slice?
No, if I send the same request that the slice module would send to the
upstream through nginx without slice activated, I do not see the same
errors
> Anyway, it would be nice to see debug
Hi,
On Tue, Oct 11, 2016 at 08:26:29PM +0200, Bjørnar Ness wrote:
> Hello nginx-devel
>
> I am seeing lots of 'upstream sent more data than specified in
> "Content-Length"' errors
> when using nginx with proxy_cache and slice module. The relevant
> nginx.conf sett
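When combining slice with proxy_cache, the documented setup requires $slice_range in the cache key and HTTP/1.1 to the upstream; a missing $slice_range in proxy_cache_key is worth checking when Content-Length mismatches appear. The following closely follows the example in the ngx_http_slice_module documentation (the cache zone name and upstream address are placeholders):

```nginx
location / {
    slice              1m;
    proxy_cache        cache;
    proxy_cache_key    $uri$is_args$args$slice_range;
    proxy_set_header   Range $slice_range;
    proxy_http_version 1.1;   # upstream must support byte-range requests
    proxy_cache_valid  200 206 1h;
    proxy_pass         http://localhost:8000;
}
```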
-devel [mailto:nginx-devel-boun...@nginx.org] On behalf of Bjørnar
Ness
Sent: Tuesday, 11 October 2016 20:26
To: nginx-devel@nginx.org
Subject: slice module and upstream sent more data than specified in
"Content-Length"
Hi Roman,
> On 18 Feb 2016, at 18:14, Roman Arutyunyan <a...@nginx.com> wrote:
>
> Hi Martijn,
>
> On Wed, Feb 17, 2016 at 09:20:37AM +, Martijn Berkvens wrote:
>> Hi,
>>
>> First off, thank you for the work on the slice module. We’re currently us
Hi Martijn,
On Wed, Feb 17, 2016 at 09:20:37AM +, Martijn Berkvens wrote:
> Hi,
>
> First off, thank you for the work on the slice module. We’re currently using
> a lua implementation to achieve something similar but this has its
> shortcomings and we would prefer to
Hi,
First off, thank you for the work on the slice module. We’re currently using a
lua implementation to achieve something similar but this has its shortcomings
and we would prefer to use this module.
We’ve been testing this module and when using range requests it works as
expected, but when
details: http://hg.nginx.org/nginx/rev/3250a5783787
branches:
changeset: 6318:3250a5783787
user: Maxim Dounin <mdou...@mdounin.ru>
date: Mon Dec 07 20:08:13 2015 +0300
description:
Added slice module to win32 builds.
diffstat:
misc/GNUmakefile | 1 +
1 files changed, 1 inse
On 30/9/2015 12:00 AM, Roman Arutyunyan wrote:
Known issues
The module can lead to excessive memory and file handle usage.
Hi Roman, thanks for sharing. Under what circumstances will there be
excessive memory and file handle usage?
Hello,
> On 05 Oct 2015, at 22:22, Woon Wai Keen wrote:
>
> On 30/9/2015 12:00 AM, Roman Arutyunyan wrote:
>> Known issues
>>
>>
>> The module can lead to excessive memory and file handle usage.
> Hi Roman, thanks for sharing. Under what circumstances
Thanks Roman! I tested it briefly and it looks good. In the next few days I
plan to test it in a limited production environment with a high traffic load
(~10Gb).
On Tue, Sep 29, 2015 at 7:00 PM, Roman Arutyunyan <a...@nginx.com> wrote:
> Hello,
>
> I'm happy to publish the experiment