Hello,
1. How can I log the IP and (especially) the port used by nginx (proxy) to
connect to the upstream when the stream module is used?
2. Can I somehow get a log entry also/instead at stream connection setup time,
not only after it ends?
3. I think that $tcpinfo_* aren't supported in stream. Is
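Regarding question 1, a minimal sketch, assuming nginx 1.11.4+ built with the stream module: $upstream_addr in the stream context holds the address and port of the upstream server the session was proxied to (note it does not expose the local source port nginx used for the outgoing connection).

```nginx
# Sketch, assuming --with-stream; addresses are placeholders.
stream {
    # $upstream_addr is the upstream server's IP:port for this session.
    log_format proxy_log '$remote_addr -> $upstream_addr '
                         '[$time_local] $status '
                         'sent=$bytes_sent rcvd=$bytes_received';

    server {
        listen 12345;
        proxy_pass backend;
        access_log /var/log/nginx/stream-access.log proxy_log;
    }

    upstream backend {
        server 192.0.2.10:5432;   # placeholder backend
    }
}
```

The stream access log is written when the session ends, which is why question 2 (a log entry at connection setup) remains open.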
I'm unsure if that's possible without a 3rd-party module...
I've used fancyindex before when I wanted sorting.
On Wednesday, February 28, 2018, Luciano Mannucci
wrote:
>
> Hello all,
>
> I have a directory served by nginx via autoindex (That works perfectly
> as
This discussion is interesting, educational, and thought-provoking. Web
architects only learn “the right way” by first doing things “the wrong way”
and seeing what happens.
Attila and Valery asked questions that sound logical, and I think there's
value in exploring what would happen if their
Here is a synthetic test on a VM; not perfect, but representative:
[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=30960
30960+0 records in
30960+0 records out
253624320 bytes (254 MB) copied, 0.834861 s, 304 MB/s
[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k
Valery,
could you please explain how you came to the conclusion that
“fsync simply instructs OS to ensure consistency of a file”?
As far as I understand, simply instructing the OS to do stuff comes at no
cost, right?
> Without fsyncing file's data and metadata a client will receive a positive
> reply
Specify the volatile flag so that the values are not cached after the first
evaluation within the main request.
map $request_uri $fastcgi_cache_key {
    volatile;
    default
$request_method|$host|$uri|$request_uri|$cookie_currency|$cookie_show_mode;
    ~^/objekti/.+
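For illustration, a hypothetical completion of such a map (the key patterns and values here are placeholders, not the truncated originals): with volatile (available since nginx 1.11.7) the resulting variable is re-evaluated on each use instead of being cached after the first evaluation.

```nginx
# Hypothetical sketch; patterns and values are illustrative only.
map $request_uri $fastcgi_cache_key {
    volatile;
    default        $request_method|$host|$request_uri|$cookie_currency;
    ~^/objekti/.+  $request_method|$host|$uri;
}

fastcgi_cache_key $fastcgi_cache_key;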
Not waiting for fsync to complete makes calling fsync pointless, while
waiting for fsync is blocking, thread-based or otherwise.
The only midway solution is to implement fsync as a CGI, i.e. a non-blocking
(background) FastCGI call in combination with an OS resource lock.
Hello,
Could you add something similar to HTTP auth_request module for stream?
Basically I want to allow or deny access to TCP stream proxy based on the
result of HTTP request. I want to pass to this request source and destination
IP addresses and ports, and possibly some more information.
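Stock nginx has no stream equivalent of auth_request; as a hypothetical workaround sketch, assuming the njs module (ngx_stream_js_module) is available, js_access can run a handler per connection that queries an HTTP endpoint and then calls s.allow() or s.deny() on the session. The file name and backend address below are placeholders.

```nginx
# Hypothetical sketch, assuming ngx_stream_js_module (njs) is built in.
# The referenced auth.js would use ngx.fetch() to call an HTTP endpoint,
# passing the client address, then call s.allow() or s.deny().
stream {
    js_import auth from /etc/nginx/auth.js;   # hypothetical file

    server {
        listen 5432;
        js_access auth.check;                 # per-connection decision
        proxy_pass 192.0.2.20:5432;
    }
}
```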
On 28-02-18 15:08, Maxim Dounin wrote:
What do you mean by a reliable server?
I want to make sure when the HTTP operation returns, the file is on the
disk, not just in a buffer waiting for an indefinite amount of time to
be flushed.
This is what fsync is for.
The question here is - why you
Hello,
thanks for your answer.
The documentation is explicit when an upstream server is used with proxy_pass,
but it says nothing when proxy_pass is used with a URL. Being explicit in one
place while stating nothing about the alternative scenario in the same manner
leaves space for
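For reference, an illustrative sketch of the documented distinction for HTTP proxy_pass, assuming this is the behavior being asked about: with a URI part, the portion of the request URI matching the location is replaced by that URI; without one, the request URI is passed unchanged.

```nginx
location /app/ {
    proxy_pass http://127.0.0.1:8080/;   # /app/x is proxied as /x
}
location /raw/ {
    proxy_pass http://127.0.0.1:8080;    # /raw/x is proxied as /raw/x
}
```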
It's completely clear why someone would need to flush file's data and
metadata upon a WebDAV PUT operation. That is because many architectures
expect a PUT operation to be completely settled before a reply is returned.
Without fsyncing file's data and metadata a client will receive a
positive
Hello all,
I have a directory served by nginx via autoindex (That works perfectly
as documented :). I need to show the content in reverse order (ls -r),
is there any rather simple method?
Thanks in advance,
Luciano.
--
/"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
\
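Stock autoindex has no sort-order option; a sketch assuming the third-party ngx-fancyindex module is compiled in, which another reply in this thread also mentions:

```nginx
# Hypothetical sketch; requires the third-party ngx-fancyindex module.
location /files/ {
    fancyindex on;
    fancyindex_default_sort name_desc;   # reverse name order, like ls -r
}
```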
Hello!
On Tue, Feb 27, 2018 at 10:32:40PM +, Chris Branch via nginx-devel wrote:
> Hi, just giving this patch some birthday bumps.
>
> > On 27 Feb 2017, at 11:58, Chris Branch via nginx-devel
> > wrote:
> >
> > # HG changeset patch
> > # User Chris Branch
On Thu, Feb 22, 2018 at 06:12:52PM +0100, Johannes Baiter wrote:
> Sorry, I accidentally submitted an incomplete version of the patch.
> Here is the corrected version.
>
Hello,
I've slightly updated the patch (also note your mail client has broken
it - you may want to update settings to avoid
details: http://hg.nginx.org/njs/rev/c86a0cc40ce5
branches:
changeset: 454:c86a0cc40ce5
user: Roman Arutyunyan
date: Wed Feb 28 19:16:25 2018 +0300
description:
Skip empty buffers in HTTP response send().
Such buffers lead to send errors and should never be sent.
On 2018-02-28 16:41, Igor A. Ippolitov wrote:
Hello.
I'm not sure what you really need, but it looks like you can get almost
the same result using a combination of map{} blocks and conditionals.
Something like this:
map $ssl_client_s_dn $ou_matched {
    ~OU=whatever  1;
    default
Hello!
On Wed, Feb 28, 2018 at 11:52:18AM +0100, Дилян Палаузов wrote:
> when I try to enter a bug at
> https://trac.nginx.org/nginx/newticket#ticket, choose as version
> 1.12.x and submit the system rejects the ticket with the
> message:
>
> Warning: The ticket field 'nginx_version' is
Hello.
I'm not sure what you really need, but it looks like you can get almost
the same result using a combination of map{} blocks and conditionals.
Something like this:
map $ssl_client_s_dn $ou_matched {
    ~OU=whatever  1;
    default       0;
}
map $ssl_client_s_dn $cn_matched {
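A hypothetical completion of the combination being described; the patterns and the deny logic below are placeholders for illustration, not the truncated original.

```nginx
map $ssl_client_s_dn $ou_matched {
    ~OU=whatever  1;
    default       0;
}
map $ssl_client_s_dn $cn_matched {
    ~CN=someone   1;    # placeholder pattern
    default       0;
}
# Require both maps to match; deny otherwise.
map $ou_matched$cn_matched $deny {
    11       0;
    default  1;
}
server {
    listen 443 ssl;
    if ($deny) { return 403; }
}
```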
details: http://hg.nginx.org/nginx/rev/20f139e9ffa8
branches:
changeset: 7220:20f139e9ffa8
user: Roman Arutyunyan
date: Wed Feb 28 16:56:58 2018 +0300
description:
Generic subrequests in memory.
Previously, only the upstream response body could be accessed with the
Hello!
On Wed, Feb 28, 2018 at 10:30:08AM +0100, Nagy, Attila wrote:
> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
> >
> >> Now, that nginx supports running threads, are there plans to convert at
> >> least DAV PUTs into it's own thread(pool), so make it possible to do
> >> non-blocking (from
Hello!
On Wed, Feb 28, 2018 at 05:22:14AM -0500, Andrzej Walas wrote:
> Can you answer?
The last recommendation you were given was to find out who killed the
nginx worker process, and why; see here:
http://mailman.nginx.org/pipermail/nginx/2018-February/055648.html
If you think nginx processes are
Hi,
most examples, even for Apache, seem to assume that the client
certificates are issued by your own CA.
In this case, you just need to check whether your certificates were issued
by this CA - and if they're not, it's game over.
However, I may have a case where the CA is a public CA
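With a public CA, chain verification alone admits any certificate that CA ever issued, so an additional check on the certificate itself is needed. A sketch of pinning on the subject DN; paths and the DN pattern are placeholders:

```nginx
server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/server.crt;       # placeholder paths
    ssl_certificate_key    /etc/nginx/server.key;
    ssl_client_certificate /etc/nginx/public_ca.pem;
    ssl_verify_client on;

    # Chain validity is not enough with a public CA; also check the DN.
    if ($ssl_client_s_dn !~ "O=Example Corp") {
        return 403;
    }
}
```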
Hello,
when I try to enter a bug at https://trac.nginx.org/nginx/newticket#ticket,
choose as version 1.12.x and submit the system rejects the ticket with the
message:
Warning: The ticket field 'nginx_version' is invalid: nginx_version is required
And here is the actual question:
proxy_pass
details: http://hg.nginx.org/njs/rev/ab1f67b69707
branches:
changeset: 453:ab1f67b69707
user: Igor Sysoev
date: Wed Feb 28 16:20:11 2018 +0300
description:
Fixed String.prototype.toUTF8() function.
A byte string returned by String.prototype.toUTF8() had length equal
On 28.02.2018 12:59, S.A.N wrote:
If a resource delivered in a push response is used on the page, and the
response carried HTTP caching headers, does the browser move that resource
into the HTTP cache?
Yes.
When I experimented with push responses, browsers did not move push
resources into the HTTP cache; did you test
Can you answer?
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,278589,278826#msg-278826
While it’s not clear why one may need to flush the data on each HTTP
operation, I can imagine what performance degradation that may lead to.
Unless it’s some kind of funny clustering among nodes, I wouldn't care much
where the actual data is; RAM should still be much faster than disk I/O.
> The push cache is cleared when the connection is closed, but all items
> will be placed into the HTTP cache on first use by the browser, so
> everything is fine.
> More details here:
> https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/
If a resource delivered in a push response is used on
On 28.02.2018 11:40, S.A.N wrote:
I didn't quite understand your words about "non-cacheable content".
According to the HTTP/2 specification, browsers may cache push responses
only in a separate per-connection cache (look at the connection_id in
devtools); after the connection is closed, that cache is cleared.
Or am I wrong, and browsers
> I didn't quite understand your words about "non-cacheable content".
According to the HTTP/2 specification, browsers may cache push responses
only in a separate per-connection cache (look at the connection_id in
devtools); after the connection is closed, that cache is cleared.
Or am I wrong, and browsers will store push responses in the shared cache and