Hi there,
I am happy to announce the new 0.29 release of Test::Nginx:
https://openresty.org/en/ann-test-nginx-029.html
This version fixes the Test2::Util module dependency problem
introduced in the previous 0.28 release.
This Perl module provides a test scaffold for automated testing in
Nginx
Hi there,
I am happy to announce the new 0.28 release of Test::Nginx:
https://openresty.org/en/ann-test-nginx-028.html
This version fixes the Test2::Util module dependency problem
introduced in the previous 0.27 release.
This Perl module provides a test scaffold for automated testing in
Nginx
Hi there,
I am happy to announce the new 0.27 release of Test::Nginx:
https://openresty.org/en/ann-test-nginx-027.html
This Perl module provides a test scaffold for automated testing in
Nginx C module or OpenResty-based Lua library development and
regression testing.
This class inherits from
Hi folks!
I am happy to announce the new formal release, 1.13.6.2, of the
OpenResty web platform based on NGINX and LuaJIT:
https://openresty.org/en/download.html
The (portable) source code distribution, the Win32/Win64 binary
distributions, and the pre-built binary Linux packages for
Hi there,
I am excited to announce the new formal release, 1.13.6.1, of the
OpenResty web platform based on NGINX and LuaJIT:
https://openresty.org/en/download.html
Both the (portable) source code distribution, the Win32 binary
distribution, and the pre-built binary Linux packages for all
Hello!
On Thu, Nov 9, 2017 at 12:19 PM, Joel Parker wrote:
> I am trying to load a table from disk (deserialize) into memory and then
> add, change, remove the values in the table then write it periodically back
> to disk (serialize). I looked at the documentation for the ngx.shared.DICT
>
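The thread is truncated here, but the usual pattern for this (a rough sketch, not from the thread: the dict name "cache", the file path, and the 30-second interval are all made up) is to serialize with cjson and drive the periodic write-back from ngx.timer.every:

```lua
-- Sketch: mirror a lua_shared_dict named "cache" to a JSON file.
-- Caveat: io.open() is blocking, so keep the snapshot small.
local cjson = require "cjson.safe"

local function load_from_disk()
    local f = io.open("/tmp/cache.json", "r")
    if not f then return end
    local data = cjson.decode(f:read("*a"))
    f:close()
    for k, v in pairs(data or {}) do
        ngx.shared.cache:set(k, v)
    end
end

local function save_to_disk()
    local dict = ngx.shared.cache
    local snapshot = {}
    for _, key in ipairs(dict:get_keys(0)) do  -- 0 means "no key-count limit"
        snapshot[key] = dict:get(key)
    end
    local f = io.open("/tmp/cache.json", "w")
    if f then
        f:write(cjson.encode(snapshot))
        f:close()
    end
end

-- In init_worker_by_lua_block:
--     load_from_disk()
--     ngx.timer.every(30, save_to_disk)
```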
Hi folks,
I am excited to announce the new formal release, 1.11.2.5, of the
OpenResty web platform based on NGINX and LuaJIT:
https://openresty.org/en/download.html
Both the (portable) source code distribution, the Win32 binary
distribution, and the pre-built binary Linux packages for all
Hi folks,
OpenResty 1.11.2.4 is just released to include the latest nginx
security fix in its range filter module (CVE-2017-7529).
You can download this version's source tarball and Win32 binary from
the following page:
https://openresty.org/en/download.html
Pre-built Linux binary packages
Hi folks,
Long time no releases. We've been very busy setting up the OpenResty
Inc. commercial company in the US. That's why we've been quiet in the
last few months. The good news is that we now have a strong full-time
engineering team that can work on both the OpenResty open source
platform and
Hi folks,
I've just uploaded Test::Nginx 0.26 to CPAN:
https://metacpan.org/release/Test-Nginx
It will appear on the CPAN mirror near you in the next few hours or
so. After that, you can install the module like below
sudo cpan Test::Nginx
or better, when you have the App::cpanminus
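The snippet is cut off here, but with App::cpanminus installed the usual alternative (a sketch, not part of the announcement text) is:

```shell
cpanm Test::Nginx
```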
Hi folks,
I am excited to announce the new formal release, 1.11.2.1, of the
OpenResty web platform based on NGINX and LuaJIT:
https://openresty.org/en/download.html
Both the (portable) source code distribution and the Win32 binary
distribution are provided on this Download page.
Also, we
Hello!
On Tue, Jul 5, 2016 at 11:57 PM, Christian Rohmann wrote:
> On 07/04/2016 12:31 PM, Sushma wrote:
>> Or is there a way, nginx will be able to dynamically figure out the cert to
>> be presented without it being explicitly mentioned via the directive
>> ssl_certificate?
>
> After some
Hi folks
OpenResty 1.9.7.5 is just out to include the latest official NGINX
patch for nginx security advisory (CVE-2016-4450):
https://openresty.org/en/download.html
Both the (portable) source code distribution and the Win32 binary
distribution are provided on this Download page.
Changes
Hello!
On Fri, Apr 29, 2016 at 6:18 AM, Pasi Kärkkäinen wrote:
>
> One question about the new "ngx.balancer" Lua API .. with quick look I didn't
> notice anything related to upstream healthchecks.. is this something you've
> been looking at improving, or is it out of scope for this module?
>
Hi folks
We had a good time at our first bay area OpenResty meetup [1] last
month. Now I'd like to share with you the video recordings and slides
for our presentations:
* Presentation "adobe.io" from Adobe's Dragos Dascalita.
* Slides:
Hello!
On Wed, Apr 20, 2016 at 7:16 AM, Maxim Dounin wrote:
> I personally think that documenting that SSI module should not be
> excluded from a build is good enough approach for all practical
> reasons.
>
As the author of the ngx_echo and ngx_srcache modules that require the
http postpone
Hi folks
I am happy to announce the new formal release, 1.9.7.4, of the
OpenResty web platform based on NGINX and LuaJIT:
https://openresty.org/#Download
Both the (portable) source code distribution and the Win32 binary
distribution are provided on this Download page.
The highlights of
Hi guys,
I've just uploaded Test::Nginx 0.25 to CPAN:
https://metacpan.org/release/Test-Nginx
It will appear on the CPAN mirror near you in the next few hours or so.
Special thanks go to all our contributors and users :)
Here's the complete change log for this release (compared to the
Hello!
On Thu, Feb 11, 2016 at 3:29 PM, Piotr Sikora wrote:
>
> That doesn't really answer my question: which version (statically
> linked or dynamic) of the module should we use at runtime if both are
> present... and why?
>
For portable NGINX-based applications, for example, we only want to
Hello!
On Thu, Feb 11, 2016 at 2:43 AM, Deeptha wrote:
> /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c
> /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c:
> In function 'ngx_tcp_send':
>
Hello!
On Thu, Feb 11, 2016 at 2:33 PM, Piotr Sikora wrote:
> I disagree, because this would lead to unexpected behavior (since
> statically linked module can be slightly different from the dynamic
> module).
>
> Which version should be used at runtime in your non-fatal scenario and why?
We
Hello!
On Thu, Feb 11, 2016 at 3:16 PM, Yichun Zhang (agentzh) wrote:
> We don't have version numbers in the DSO file names anyway :) And we
> can issue warnings to error.log, even with a high log level.
>
Or with an explicit option to the load_module directive, as in
lo
Hi guys!
I wonder if you have any plans or interest in adding support for
system hosts configuration files (like /etc/hosts on Linux/*BSD/Mac OS
X) to NGINX's own nonblocking resolver implementation. This makes
debugging and other sysadmin work much easier; otherwise we must set up
a local DNS
Hi folks
It seems to me that the load_module directive for loading NGINX
dynamic modules just uses the server prefix when resolving relative
module DSO file paths specified in nginx.conf.
Here is my wishlist: I hope that we can have support for module search
paths specified via an external
Hi guys!
Currently when an NGINX module is statically linked against the NGINX
binary, the load_module directive bails out the server startup with
the error message "module already loaded" (or something like that).
Hopefully we can make this a nonfatal error (or provide an option to
make it
Hi guys!
This is just a feature request. I hope that the stream subsystem of
NGINX can support access handlers that are both resumable and
composable, just like the access phase handlers in the http subsystem.
The former means that the access handler can yield (like returning
NGX_DONE) and
Hi folks,
I've recently created the Bay Area OpenResty Meetup group on meetup.com:
http://www.meetup.com/Bay-Area-OpenResty-Meetup/
You're welcome to join us in this group.
We're currently planning a face-to-face meetup at 5:30pm ~ 6:30pm on 9
March 2016 in CloudFlare's office (101 Townsend
Hello!
On Wed, Feb 3, 2016 at 3:37 PM, SplitIce wrote:
> I have been taking a look at the Stream modules for use in a particular
> application. I noticed the entire subsystem lacks any variable support.
>
> Is variable support planned?
> Is there a significant reason for the omission?
>
+1
I'm
Hello!
On Wed, Feb 3, 2016 at 7:09 PM, SplitIce wrote:
> What is the appropriate way to allocate memory during the stream? The http
> context has a pool member as part of the request structure, what is the
> equivalent in the stream module context?
>
I think it is s->connection->pool where s is
Hello!
On Thu, Jan 28, 2016 at 11:19 PM, A. Schulze wrote:
> I could not support with patches but would do some beta testing.
>
Thanks.
> Just to have ask:
> disabling http2 for a location is not possible, isn't it?
>
Nope.
Regards,
-agentzh
Hello!
On Thu, Jan 28, 2016 at 1:45 AM, A. Schulze wrote:
> The echo module (https://github.com/openresty/echo-nginx-module / v0.58)
> produce segfaults while accessing the following location:
>
># echo back the client request
>location /echoback {
> echo_duplicate 1
Hi folks
OpenResty 1.9.7.3 is now released with the latest security fixes from
the mainline NGINX core (CVE-2016-0742, CVE-2016-0746, CVE-2016-0747).
https://openresty.org/#Download
Both the (portable) source code distribution and the Win32 binary
distribution are provided on this Download
Hello!
On Wed, Jan 27, 2016 at 9:10 AM, Alexandre wrote:
>
> However I wish to monitor the status of the backend. How can I do ?
>
You may find the lua-resty-upstream-healthcheck library helpful:
https://github.com/openresty/lua-resty-upstream-healthcheck
But it's much easier to install
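The reply is truncated, but the basic setup from that library's README looks roughly like this (the upstream name "backend" and the /status URI are placeholders):

```nginx
http {
    lua_shared_dict healthcheck 1m;
    lua_socket_log_errors off;  # the checker logs failures itself

    init_worker_by_lua_block {
        local hc = require "resty.upstream.healthcheck"
        local ok, err = hc.spawn_checker{
            shm = "healthcheck",   -- the shm zone declared above
            upstream = "backend",  -- the name of an upstream {} block
            type = "http",
            http_req = "GET /status HTTP/1.0\r\nHost: backend\r\n\r\n",
            interval = 2000,       -- check every 2 seconds
            fall = 3, rise = 2,
        }
        if not ok then
            ngx.log(ngx.ERR, "failed to spawn health checker: ", err)
        end
    }
}
```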
Hello!
On Sat, Jan 23, 2016 at 6:42 AM, highclass99 wrote:
> I use perl a lot,
> and I noticed
> http://nginx.org/en/docs/http/ngx_http_perl_module.html
> for several years has been documented as
> "The module is experimental, caveat emptor applies."
> So I have been somewhat avoiding testing its
Hi folks
I am happy to announce the new formal release, 1.9.7.2, of the
OpenResty web platform based on NGINX and Lua:
https://openresty.org/#Download
Both the (portable) source code distribution and the Win32 binary
distribution are provided on this Download page.
This version is an
Hello!
On Fri, Jan 15, 2016 at 6:01 AM, Thibault Koechlin wrote:
> When used with auth_request (or maybe other modules, but that's the
> first time I encounter this issue within a few years of usage), there is
> no request made to the upstream if the request is made using POST/PUT
> and the body
Hello!
On Sat, Jan 2, 2016 at 8:06 PM, Yichun Zhang (agentzh) wrote:
> SSL: handled SSL_CTX_set_cert_cb() callback yielding.
>
> OpenSSL 1.0.2+ introduces SSL_CTX_set_cert_cb() to allow custom
> callbacks to serve the SSL certificiates and private keys dynamically
> and lazily. Th
# HG changeset patch
# User Yichun Zhang
# Date 1451762084 28800
# Sat Jan 02 11:14:44 2016 -0800
# Node ID 449f0461859c16e95bdb18e8be6b94401545d3dd
# Parent 78b4e10b4367b31367aad3c83c9c3acdd42397c4
SSL: handled SSL_CTX_set_cert_cb() callback yielding.
OpenSSL 1.0.2+
Hello!
On Wed, Nov 18, 2015 at 6:48 AM, Stephane Wirtel wrote:
> With a request, is it possible to redirect to a running worker and if
> this one is not running, just enable it.
>
> I explain, I would like to implement a reverse proxy with Lua and
> OpenResty and Redis.
>
> Redis will store a
Hello!
On Sun, Nov 22, 2015 at 5:40 AM, Ritesh Jha wrote:
> Hello everyone,
> We are developing nginx modules to implement few usecases in our product.
> Most of the other usecases cases have been implemented using Java. At my
> office we follow TDD for Java development.
Hi guys,
I am glad to announce the new formal release, 1.9.3.2, of the
OpenResty bundle:
https://openresty.org/#Download
The first highlight of this release is the new *_by_lua_block {} directives
added in the ngx_http_lua module.
For example, instead of writing
content_by_lua '
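The example is cut off here; the contrast being drawn is roughly this (a reconstruction, not the original announcement text):

```nginx
# old style: Lua embedded in an nginx string literal, where special
# characters like \ and quotes need careful escaping
content_by_lua '
    ngx.say("Hello, OpenResty!")
';

# new in this release: a real {} block, no nginx-level escaping at all
content_by_lua_block {
    ngx.say("Hello, OpenResty!")
}
```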
Hello!
On Tue, Oct 6, 2015 at 5:33 PM, Carlos Eduardo Ferreira Rodrigues wrote:
> I'm aware 1.9.5 isn't supported yet and that SPDY is mentioned in the docs as
> non-working for certain API calls
> as well. However, we have been using location.capture with SPDY for a
> while now and haven't
Hello!
On Wed, Sep 23, 2015 at 1:14 AM, Carlos Eduardo Ferreira Rodrigues wrote:
> I just upgraded nginx to 1.9.5 on our testing enviroment, and immediately
> started seeing this error on http2 requests:
>
> 2015/09/22 18:04:06 [alert] 27305#27305: *1 epoll_ctl(1, 17) failed (17: File
>
Hello!
On Wed, Sep 23, 2015 at 1:34 AM, Maxim Dounin wrote:
>
> The lua module deeply integrates into nginx internals (far beyond
> what we consider to be nginx modules API), and there is no surprise
> it's broken by the changes in nginx 1.9.5.
>
True. This is also why ngx_lua has so many
Hello!
On Tue, Sep 1, 2015 at 6:14 PM, Valentin V. Bartenev wrote:
> Why do you guys use *recursive* subrequests() for that?
>
> Please note, that this constant now limits recursion (not parallelism)
> of subrequests, when one subrequest creates another subrequest and the
> depth of this
Hello!
On Tue, Sep 1, 2015 at 4:29 AM, Valentin Bartenev wrote:
> #define NGX_HTTP_MAX_URI_CHANGES 10
> -#define NGX_HTTP_MAX_SUBREQUESTS 200
> +#define NGX_HTTP_MAX_SUBREQUESTS 50
>
Hmm, this change makes me sad. In our ngx_lua module, for example, we
allow
Hi folks!
I am glad to announce the new formal release, 1.9.3.1, of the
OpenResty bundle:
https://openresty.org/#Download
This is the first OpenResty formal release that includes an NGINX
1.9.x core. For OpenResty's release policy, please refer to the following
documentation:
Hello!
On Thu, Aug 6, 2015 at 2:51 PM, Nitin Solanki wrote:
> Which should I use fastcgi or uwsgi.
It's generally believed that uwsgi is better.
> I tried uwsgi but not
> succeed. Can you help to sort out my problem. Shall you please send me steps
> to configure python with Nginx.
As the
Hello!
On Wed, Aug 5, 2015 at 10:25 PM, Nitin Solanki wrote:
> I want to execute python scripts into Nginx server. I don't want to
> any frameworks for that. Core python script, I need to use.
> Any help and step to follow . To do that.
Because you're using NGINX, I'd assume you're after
Hello!
On Wed, Aug 5, 2015 at 7:21 PM, Maxime Henrion wrote:
> I am currently developing an nginx module in order to implement a software
> component in our platform.
> This module's responsibility is to receive upstream requests, forward them to
> multiple hosts (one host per pool, with N pools;
Hi folks!
I am pleased to announce the new formal release, 1.7.10.2, of the
OpenResty bundle:
https://openresty.org/#Download
We include a lot of fixes and new features accumulated in the last few months.
Special thanks go to all our contributors and users for making this happen!
Below is
Hello!
On Thu, Jun 18, 2015 at 7:06 PM, Jeff Kaufman wrote:
> ngx_pagespeed does this by giving nginx a pipe to watch, setting up a
> handler for that pipe, calling an async api that uses threads, then
> from the callback writing a byte to the pipe. Now when the async
> code finishes we're back
Hello!
On Sun, Jun 7, 2015 at 10:41 PM, nginxsantos wrote:
> Can anyone please help me with a lua configuration which I can embed into
> nginx.conf to send the following separately in access log.
> user_agent_os
> user_agent_browser
> user_agent_version
> At present all these fields are embedded in
Hello!
On Sun, Mar 15, 2015 at 5:05 PM, Marat Dakota wrote:
> In a few modules I've found a trick:
> if (r != r->connection->data)
>     r->connection->data = r;
Careful. This is a common hack to cheat nginx's
ngx_http_postpone_filter_module when the in-stock subrequest model
cannot serve us well.
Hi Maxim
On Mon, Mar 2, 2015 at 11:09 AM, Maxim Dounin wrote:
> I've committed this and another patch related to filter
> finalization, see here:
> http://hg.nginx.org/nginx/rev/5abf5af257a7
> http://hg.nginx.org/nginx/rev/5f179f344096
Great. Thanks!
In the particular case you've described in
Hi folks!
I am pleased to announce the new formal release, 1.7.10.1, of the
OpenResty bundle:
http://openresty.org/#Download
Special thanks go to all our contributors and users for making this happen!
Below is the complete change log for this release, as compared to the
last formal release
Hello!
On Thu, Feb 26, 2015 at 4:41 AM, kabirova wrote:
> I have a problem when using subrequest in content handler.
> The content handler (my_content_handler) calls
> ngx_http_read_client_request_body with callback handler (my_callback).
> my_callback() makes a subrequest:
Just check out how my
Hello!
On Fri, Feb 13, 2015 at 7:05 AM, Maxim Dounin wrote:
> Rather, I would suggest something like this:
> --- a/src/http/ngx_http_upstream.c
> +++ b/src/http/ngx_http_upstream.c
> @@ -3744,10 +3744,13 @@ ngx_http_upstream_finalize_request(ngx_h
> ngx_log_debug1(NGX_LOG_DEBUG_HTTP,
Hello!
Please review the following patch.
Thanks!
-agentzh
# HG changeset patch
# User Yichun Zhang agen...@gmail.com
# Date 1423789183 28800
# Thu Feb 12 16:59:43 2015 -0800
# Node ID 8b3d7171f35e74c8bea3234e88d8977b4f11f815
# Parent f3f25ad09deee27485050a75732e5f46ab1b18b3
Upstream:
Hello!
On Thu, Feb 5, 2015 at 12:47 AM, Batuhan Göksu wrote:
> There are many great new features.
> Why lua has not been updated
By default, OpenResty uses LuaJIT, which is actively updated upon
almost every new OpenResty release.
The bundled standard Lua interpreter is only used when you
Hi folks!
I am happy to announce the new formal release, 1.7.7.2, of the OpenResty bundle:
http://openresty.org/#Download
The highlights of this release are
1. the SSL/TLS support in the websocket client of lua-resty-websocket.
2. an enhanced version of resty command-line utility
Hello!
On Tue, Feb 3, 2015 at 1:26 AM, Tigran Bayburtsyan wrote:
> As I understand all that 700kb data Nginx not sending at once it will take
> some Nginx loops to be sent.
Right. Both of the ngx_http_finalize_request and
ngx_http_output_filter are asynchronous calls. So data might be later
Hello!
On Tue, Feb 3, 2015 at 2:18 PM, Yichun Zhang (agentzh) wrote:
> One good approximation for this is to register your own *pool cleanup*
> handler in r->pool. The request pool will not be destroyed when
> there's still pending data (don't confuse it with the request
> cleanup thing created
Hello!
On Sat, Jan 17, 2015 at 5:10 PM, Marat Dakota wrote:
> I'm writing a module for a piece of software which works the same way.
> So, what I need is a mechanism to call both handlers in a busy loop:
> while (true) {
>     `process events and call callbacks`();
>     `process my piece of software
Hello!
On Mon, Jan 12, 2015 at 1:48 PM, Kunal Pariani wrote:
> Is there already a patch for this?
AFAIK, the Tengine fork has a patch for this.
> I am not completely sure of how to make the nginx resolver (in
> ngx_resolver.c) fallback to libresolv automatically and if this not trivial
> enough,
Hello!
On Wed, Jan 7, 2015 at 4:15 PM, Francis Daly wrote:
> (You could probably come up with a way to read /etc/resolv.conf when it
> changes, and update the nginx config and reload it; but that's a dynamic
> reconfiguration problem, not an nginx dynamic reconfiguration problem.)
Yeah, I think
Hello!
I suggest that the official nginx documentation should explicitly
document whether a particular built-in variable is changeable or
readonly. For example, $http_name variables do not allow overwrites
but $args does. Such explicit documentation can avoid a lot of
confusion for newcomers.
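A small example of the distinction (illustrative config, not from the original mail):

```nginx
location = /t {
    set $args "foo=bar";         # fine: $args is writable
    # set $http_user_agent "x";  # fails at config load: $http_* is read-only
    return 200 "args: $args\n";
}
```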
Hello!
On Tue, Dec 2, 2014 at 12:57 PM, chase.holland wrote:
> Thank you for your quick response! Could you be more specific on what is
> blocking the worker process?
Just FYI: you can always find the blocking IO calls via tools like the
off-CPU flame graphs:
Hi Maxim!
On Thu, Nov 27, 2014 at 7:31 AM, Maxim Dounin wrote:
> Yichun, I've spent some time looking in this, and I don't see how
> it can cause infinite hang at least with stock nginx modules. It
> certainly can cause suboptimal behaviour though, both with proxy
> cache locks and with AIO.
Hello!
On Wed, Nov 26, 2014 at 8:29 AM, VladimirSmirnov wrote:
> For testing purposes I'm using self-signed ssl cert.
> ngx.log(ngx.DEBUG, "session_id=", ngx.var.ssl_session_id) prints nil in
> the logs. How can I get access to this variable?
It's very likely that your client sends TLS session
Hello!
On Wed, Nov 26, 2014 at 11:15 AM, julianfernandes wrote:
> Running Blitz.io on it the server is getting absolutely murdered by the
> NGINX worker processes, with each one using 100% CPU according to top and
> htop.
100% CPU usage problems are usually trivial (and also fun) to solve
with the
Hello!
On Fri, Nov 14, 2014 at 11:20 AM, Guido Accardo wrote:
> From the doc of proxy_ignore_client_abort:
> ... Determines whether the connection with a proxied server should be
> closed when a client closes the connection without waiting for a response
> ...
> So basically I'm discarding dev's
Hello!
On Wed, Nov 12, 2014 at 12:20 PM, Guido Accardo wrote:
> Here, prod response is sent immediately as I want and dev receives the
> traffic but the connection is closed, then I got a Broken Pipe (which makes
> sense).
For this error, maybe you should configure
proxy_ignore_client_abort
Hello!
On Wed, Nov 12, 2014 at 3:01 PM, josephlim wrote:
> I was wondering what happens when multiple workers access the
> ngx.shared.dict in the http lua module? Are there conflicts/locking that
> could potentially impact performance of nginx? We are talking about 32
> workers in my use case.
It
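The reply is truncated, but the documented behavior can be illustrated like this (the dict name "stats" is hypothetical): each single ngx.shared.DICT operation is atomic across workers because the dictionary is guarded by a shared-memory lock, while multi-step sequences are not:

```lua
local dict = ngx.shared.stats  -- "stats" would be a lua_shared_dict zone

-- NOT safe with 32 workers: another worker can run between get() and set()
local n = (dict:get("hits") or 0) + 1
dict:set("hits", n)

-- Safe: incr() is one atomic operation; the third "init" argument
-- (initialize to 0 when the key is missing) needs ngx_lua 0.10.6+
local newval, err = dict:incr("hits", 1, 0)
```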
Hello!
On Wed, Nov 5, 2014 at 5:08 PM, Yichun Zhang (agentzh) wrote:
> Sorry again, it actually checked c->buffered &
> NGX_HTTP_LOWLEVEL_BUFFERED. This condition is indeed too strong and
> I've made it check its own busy bufs instead.
Hmm, the problem here is more complicated than I originally
Hello!
On Wed, Nov 5, 2014 at 7:41 AM, Maxim Dounin wrote:
> The questions are:
> - How it happened that all content handler's buffers are busy,
> while there are no busy buffers in gzip?
Sorry, I was wrong in this part. The content handler actually checked
the r->buffered flag instead of
Hello!
On Wed, Nov 5, 2014 at 5:02 PM, Yichun Zhang (agentzh) wrote:
> Sorry, I was wrong in this part. The content handler actually checked
> the r->buffered flag instead of checking its own busy bufs.
Sorry again, it actually checked c->buffered &
NGX_HTTP_LOWLEVEL_BUFFERED. This condition
Hello!
On Mon, Nov 3, 2014 at 4:54 PM, Maxim Dounin wrote:
> The commit log in question explains the reason for the change.
> Work on the gzip stalls problem as fixed by 973fded4f461 clearly
> showed that just passing NULL chains is wrong unless last buffer
> was already sent or there are busy
# HG changeset patch
# User Yichun Zhang agen...@gmail.com
# Date 1414804249 25200
# Fri Oct 31 18:10:49 2014 -0700
# Node ID 38a74e59f199edafad0a8caae5cfc921ab3302e8
# Parent dff86e2246a53b0f4a61935cd5c8c0a0f66d0ca2
Gzip Gunzip: always flush busy bufs when the incoming chain is NULL.
After
Hello!
On Mon, Sep 22, 2014 at 4:39 AM, Richard Fussenegger, BSc wrote:
I'd like to implement built-in session ticket rotation. I know that it this
was discussed before but it was never implemented. Right now a custom
external ticket key system is supported. Admins with single installations
Hi folks!
I am happy to announce the new formal release, 1.7.4.1, of the OpenResty bundle:
http://openresty.org/#Download
The highlights of this release are
1) the new resty command-line utility,
2) SSL/TLS cosocket support in ngx_lua (with SNI support and
client-side session
Hello!
Valgrind memcheck caught a buffer overflow issue in ngx_hash_t when
exceeding the pre-configured limits on my side:
==7417== Invalid write of size 2
==7417==    at 0x40600D: ngx_hash_init (ngx_hash.c:324)
==7417==    by 0x45BBFD: ngx_http_proxy_merge_loc_conf
Hello!
On Tue, Sep 30, 2014 at 5:36 PM, Maxim Dounin wrote:
> With such a change timer idents will become much less readable
> for connection-related timers (that is, most of them), so this
> is a last resort.
Yes, I know. Maybe let the caller explicitly tell ngx_event_t whether
ev-data is an
Hello!
On Fri, Sep 19, 2014 at 12:50 PM, igorhmm wrote:
> I don't know how to reproduce, not yet :-)
> I couldn't identify which worker was responding too, but I can see with
> strace warnings in the old worker about EAGAIN (Resource temporarily
> unavailable). I can see that because old workers
Hello!
On Thu, Sep 18, 2014 at 9:02 AM, jpsonweb wrote:
> I was able to post the parameter from nginx by passing the arguments using
> this.
> local maken_res = ngx.location.capture("/test", { method = ngx.HTTP_POST,
> args = { pagelayout = dev_res_encoded }});
You're passing your args via URI
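To send the value as a real POST body instead, something like this should work (a sketch reusing the names from the thread):

```lua
local maken_res = ngx.location.capture("/test", {
    method = ngx.HTTP_POST,
    -- encode the table as application/x-www-form-urlencoded body data
    body = ngx.encode_args({ pagelayout = dev_res_encoded }),
})
```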
Hello!
On Wed, Sep 17, 2014 at 12:58 AM, Christos Trochalakis wrote:
> I am one of the debian nginx maintainers. Is it possible to provide a
> patch for nginx-1.2 series since the relevant commit is not backportable
> as-is?
+1
I also hope there is a standalone patch that can (also) be applied to
Hello!
On Tue, Sep 16, 2014 at 11:24 AM, jpsonweb wrote:
> I am calling a webapplication from nginx. I want to capture the response
> and post the response body as a post parameter to another application.
> I am doing something like this
> local maken_res = ngx.location.capture("/test", { method =
Hello!
I've noted a bug in nginx 1.7.4's standard ngx_http_sub_module that
single-char patterns are never handled properly but longer patterns
work. Consider the following minimal example:
location = /t {
default_type text/html;
return 200 "hello world";
sub_filter 'h'
Hello!
On Wed, Sep 3, 2014 at 12:06 PM, Yichun Zhang (agentzh) wrote:
> I've noted a bug in nginx 1.7.4's standard ngx_http_sub_module that
> single-char patterns are never handled properly but longer patterns
> work.
Oops, I used to report this issue almost 2 years ago:
http
Hi Konstantin
On Wed, Sep 3, 2014 at 12:20 PM, Konstantin Pavlov wrote:
> And it was, nine days ago: http://hg.nginx.org/nginx/rev/5322be87fc02
Awesome! Thanks! Sorry for not checking the latest hg repos :)
Regards,
-agentzh
Hello!
On Fri, Aug 29, 2014 at 12:53 AM, Fasih wrote:
> Btw, I think you have to set write_event_handler to empty. Basically,
> if you dont set it, and there is a write_event (while the body is not
> read), nginx would call core_run_phases which you werent expecting.
The
Hello!
On Thu, Aug 28, 2014 at 10:34 AM, Fasih wrote:
> I am trying to read the request body in pre_access phase. This seems
> like a regular requirement but I dont seem to find a good way to do
> this. Since the request body is read asynchronously, I have to do
> phases++ and core_run_phases myself
Hello!
On Fri, Aug 1, 2014 at 5:03 AM, c0nw0nk wrote:
> Does anyone know a way you can execute a program via the echo module or
> another way with the lua module?
You can try this: https://github.com/juce/lua-resty-shell
Regards,
-agentzh
Hello!
On Fri, Aug 1, 2014 at 2:18 PM, Yichun Zhang (agentzh) wrote:
> You can try this: https://github.com/juce/lua-resty-shell
But for expensive image compression involved with relatively large
data volumn and CPU computation, it is better to be done in a
dedicated daemon process outside your
Hello!
On Thu, Jul 31, 2014 at 10:06 AM, c0nw0nk wrote:
> I also see LUA can do the job but i get the feeling i will hit a dead end if
> i did this.
> location /compress-images {
> content_by_lua 'os.execute("C:/server/bin/compress.exe")';
> }
Oh no, os.execute() is blocking. You should
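The reply is cut off here; the non-blocking route it points at looks roughly like this with juce/lua-resty-shell (which shells out through a separate "sockproc" daemon over a unix socket; the socket path and command are placeholders, see that project's README):

```lua
local shell = require "resty.shell"

local status, out, err = shell.execute("/usr/local/bin/compress-images", {
    socket = "unix:/tmp/shell.sock",  -- where the sockproc daemon listens
})
```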
Hello!
On Tue, Jul 29, 2014 at 1:46 PM, Richard Stanway wrote:
> I recently came across a modified version of zlib with code contributed by
> Intel [1] that makes use of modern CPU instructions to increase performance.
> In testing, the performance gains seemed substantial, however when I tried
> to
Hello!
On Tue, Jul 29, 2014 at 3:47 PM, Richard Stanway wrote:
> Thank you for the patch. This solves the issue with streamed responses,
> however when the if (r->headers_out.content_length_n > 0) branch is taken,
> eg with static content, I still receive the 2nd alert type below.
Oh, we should
Hello!
On Tue, Jul 29, 2014 at 4:09 PM, Piotr Sikora wrote:
> Just to make this clear, the zlib library that Richard is referring to
> is a fork of standard zlib (like ours), not IPP zlib.
Okay, I see. Thank you for pointing that out :)
Regards,
-agentzh
Hi Maxim!
On Wed, Jul 23, 2014 at 7:10 AM, Maxim Dounin wrote:
> Thanks for noting this. I think that it would be better to use
> slightly different code, similar to what we use in case of
> client SSL handshakes:
> [...]
> This will consistently limit total connect and ssl handshake time
> to
# HG changeset patch
# User Yichun Zhang agen...@gmail.com
# Date 1406068295 25200
# Tue Jul 22 15:31:35 2014 -0700
# Node ID 1db962fc3522ce61313b684ca8251a6462992d40
# Parent 93614769dd4b6df8844c3c43c6a0b3f83bfa6746
Proxy: added timeout protection to SSL handshake.
Previously, proxy relied