On 31/01/2020 12:37, Reinis Rozitis wrote:
if ($arg_p) {
    return 301 http://yoursite/p/$arg_p;
}
This is what I was originally looking for. However, as I only have 20
pages to manage, I believe individual redirects via the map directive
will work better, as it will remove an additional
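A map-based version of those redirects might look like the sketch below. Only the 1234 -> /this_nice_title pair comes from this thread; the server_name and listen values are placeholders.

```nginx
# http-level map from old WordPress post IDs to Hugo paths
map $arg_p $new_uri {
    default "";
    1234    /this_nice_title;
}

server {
    listen 80;
    server_name example.com;

    # redirect only when the old ?p= parameter maps to a known page
    if ($new_uri) {
        return 301 $new_uri;
    }
}
```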
, Francis Daly wrote:
On Fri, Jan 31, 2020 at 01:13:30AM +, Steve Wilson wrote:
Hi there,
I'm currently in the process of transitioning from wordpress to hugo.
For anyone not familiar with these, wordpress is php based and hugo
outputs static content (keeping it simple).
Currently wordpress is using ugly urls for posts, so "/?p=1234" in
wordpress might be "/this_nice_title" in hugo.
Now hugo allows me to specify aliases too which I'd like to leverage
On 27/08/2018 14:30, Maxim Dounin wrote:
Hello!
On Mon, Aug 27, 2018 at 06:56:01PM +0530, Sharan J wrote:
Hi,
Sample conf:
http {
    resolver x.x.x.x;
    server {
        server_name _;
        location / {
            proxy_pass http://somedomain.com;
        }
    }
}
I have nameservers configured in my
I've no problem with IPv6 on my server using specific v4 and v6 listen
statements.
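For reference, a minimal sketch of explicit per-family listen statements (addresses and server_name are placeholders):

```nginx
server {
    # bind each address family explicitly
    listen 203.0.113.10:443 ssl;
    listen [2001:db8::10]:443 ssl;
    server_name example.com;
}
```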
Is the IP you're trying to use actually configured on an interface?
Steve.
On 21/06/2018 21:37, abatie wrote:
I have nginx binding to a variety of addresses for ssl and target selection
reasons. Now I'm
Hi,
It doesn't look like that's actually getting passed to php-fpm.
You're possibly missing the php handling in your server{} block.
Check that you've got a location set for php files to do a fastcgi_pass.
eg.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm/sock;
}
On 03/04/2017 16:50, sachin.she...@gmail.com wrote:
Thanks Maxim for the reply. We have evaluated disk based encryption
etc, but that does not prevent sysadmins from viewing user data, which
is a problem for us.
Do you think we could build something using lua and intercept read and
write call
My initial thoughts here are that you're potentially putting private
information in the public hands.
IIRC, to use http_secure_link you need some "private" information to
generate the md5sum. This data should not be part of a mobile
application. Personally I'd look at a way to get the full url
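For context, the standard secure_link pattern looks roughly like this; the location path and the "secret" string are placeholders, and the secret is exactly the "private" information that must stay server-side:

```nginx
location /downloads/ {
    # hash and expiry arrive as query arguments
    secure_link $arg_md5,$arg_expires;
    # the trailing secret must match what generated the md5
    secure_link_md5 "$secure_link_expires$uri secret";

    if ($secure_link = "") { return 403; }   # bad hash
    if ($secure_link = "0") { return 410; }  # expired link
}
```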
It sounds to me like wordpress believes that www is required and nginx
doesn't want it.
I'd try commenting out the redirect server{} block and adding the
server_name to the xxx.com one, then see what you end up with in your
browser; after that, have a look through the wordpress settings to see
what it's
From the limited testing I did when I enabled http2 on my sites, I found
that the few sites I used for testing were actually checking for spdy
and not http2/h2 as the next protocol.
I've had the spdy indicator plugin in chrome for a while which I believe
uses the chrome internals to check
At risk of repeating previous advice, see below ...
Original Message
Subject: Re: nginx RFC21266 Compliance - 'Proxy-Connection'
Date: 25/08/2015 21:21
From: Steve Wilson <lists-ng...@swsystem.co.uk>
To: nginx@nginx.org
Reply-To: nginx@nginx.org
Looking at
Adding the below should remove any authentication headers in the request
to the backend server(s).
proxy_set_header "Authorization" "";
Steve.
On 15/09/2015 14:33, derp14 wrote:
Hello,
Please excuse me if this has been asked/solved before. I've searched an
answer for some good hours but
-08-18 14:36, Steve Wilson wrote:
Hi,
When I migrated from apache+mod_php to nginx+php-fpm I found I had a
few websites using persistent mysql connections which never closed.
Steve, thanks for this tip. This surely was part of the problem, but
not all of it.
Sure enough, when I first noticed
Hi,
When I migrated from apache+mod_php to nginx+php-fpm I found I had a few
websites using persistent mysql connections which never closed. I had to
disable this in the php.ini so all the sites fell back to using
non-persistent connections.
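Disabling persistent connections in php.ini might look like the fragment below; the directive depends on which MySQL extension the sites use:

```ini
; turn off persistent MySQL connections
; (use whichever line matches the extension in use)
mysql.allow_persistent = Off
mysqli.allow_persistent = Off
```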
I don't know if this will help as it was mysql not
There seems to be a naming issue for the socket.
nginx is configured to use /run/lists.sock yet your ls shows lists.sock-1
Steve.
On 26/03/2015 13:15, Silvio Siefke wrote:
Hello,
I'm trying to run mailman on nginx over fcgiwrap. The socket is present
on the system and has correct rights, but the
log says
both IPv4 and IPv6?
Thanks
Cheers,
Lloyd
On Friday, January 30, 2015, Steve Wilson <lists-ng...@swsystem.co.uk> wrote:
Hi,
Slightly complicated setup with 2 nginx servers.
server1 has a public ipv4 address using proxy_pass to server2 over
Hi,
Slightly complicated setup with 2 nginx servers.
server1 has a public ipv4 address using proxy_pass to server2 over ipv6
which only has a public ipv6, this then has various upstreams for each
subdomain.
ipv6 capable browsers connect directly to server2, those with only ipv4
will
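A sketch of the server1 side of that setup, proxying to server2 over IPv6; the hostname and the IPv6 address are placeholders:

```nginx
# server1: public IPv4 front end, forwarding to server2 over IPv6
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://[2001:db8::2];
        # preserve the original host so server2 can pick the right upstream
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```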
On 14/01/2015 10:30, ramsoft75 wrote:
Isn't this ?
server {
listen 443 ssl spdy;
server_name domain.com;
...
}
With that configuration, if I go to https://domains.com it gives me an
error, "webpage not available"
I'm guessing the above is a typo in the mail and that you're not
actually trying
On 18/12/2014 13:02, Lukas Tribus wrote:
Hello all,
I have a nginx site configured with spdy on https.
But after reading
https://developers.google.com/speed/articles/spdy-for-mobile I decided
to try spdy also for http.
But strangely, after reloading the page on http, the browser keeps
On 02/09/2014 17:38, Grozdan wrote:
On Tue, Sep 2, 2014 at 3:09 PM, Maxim Dounin mdou...@mdounin.ru
wrote:
Hello!
On Tue, Sep 02, 2014 at 12:17:12PM +0100, Steve Wilson wrote:
Torrent clients normally have their own user agent; I had a need a while
back to block some, for which we used the magic 444 to kill them.
if ($http_user_agent ~* (uTorrent|Transmission)) {
    return 444;
}
On 02/09/2014 12:08, Grozdan wrote:
Hi,
Somehow my server gets hit by torrent
On 23/06/2014 12:05, Keyur wrote:
Thanks Jonathan!
Well, I can't comment on getting professional service. In fact I'd be
glad to have support, but if I go with this approach then I would rather
be asked to use a web server which supports the said feature. (This is
doable in apache.)
On 18/06/2014 00:39, c0nw0nk wrote:
I am still having the same issue with read time outs. If every request
made to the server goes through the upstream, then it has to be the
upstream that is the issue, not php. PHP loads are fine: no crashes, no
errors.
What's in the logs?
Steve.
On 13/06/14 15:14, 姚锟 wrote:
Hi Buddy,
I am new to the Nginx world. I have a project to link the varnish
HTTP server and nginx together; nginx is the back end.
I want to allow connections only from varnish, so I use deny all,
this kind of stuff, to achieve this.
But if there is a
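An allow/deny restriction along those lines might look like the following; the listen port and the varnish host's IP are placeholders:

```nginx
# back end: only accept connections from the varnish host
server {
    listen 8080;
    allow 192.0.2.10;  # varnish
    deny all;          # refuse everyone else with 403
}
```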
It's late and I'm about to go to bed so I've not checked the docs on
this but ...
add_header Front-End-Https on;
I suspect this is meant to be proxy_set_header, so that php can
detect the client is accessing via https.
If my memory is correct on this it's likely that php
I've just done a drupal7 site under nginx+php-fpm on debian.
One thing I noticed was that the php process wasn't closing fast enough,
this was tracked down to an issue with mysql. Connections were sitting
idle for a long time which basically exhausted the fpm workers on both
the web servers.
and causing this issue.
So can you please help me.
On Mon, Apr 7, 2014 at 8:33 PM, Steve Wilson lists-ng...@swsystem.co.uk
wrote:
I've just done a drupal7 site under nginx+php-fpm on debian.
One thing I noticed was that the php process wasn't closing fast enough, this
was tracked down
On 07/04/2014 16:45, Steve Wilson wrote:
A quick read at
http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
suggests there's a possibility of losing 1s worth of data. I'm not sure
if we'd still have a problem with this now we've moved page
I'm using startssl for my certificates, so I had problems with the
ssl_trusted_certificate too.
Just using resolver and ssl_stapling on got mine enabled.
https://www.ssllabs.com/ssltest/analyze.html?d=stevewilson.co.uk
Using openssl on the console's helpful too:
openssl s_client -connect
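A minimal stapling fragment along those lines; the resolver address and the chain path are placeholders:

```nginx
resolver 8.8.8.8;
ssl_stapling on;
ssl_stapling_verify on;
# some CAs also need the intermediate chain available:
# ssl_trusted_certificate /etc/nginx/chain.pem;
```

To check from the console, something like `openssl s_client -connect example.com:443 -status` should show an "OCSP Response" section once stapling is working.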
On 2013-05-16 16:07, Daniel Griscom wrote:
At 3:34 PM +0200 5/16/13, René Neumann wrote:
Am 16.05.2013 15:18, schrieb Jim Ohlstein:
I think what Maxim was alluding to is that any decent email client will
sort messages for you based on headers if you set it do do so. This way
you don't need to
I've just had to move subversion onto a server that's already serving
network wordpress via nginx. Most things work via /svn in a subversion
client but I can't for the life of me figure out how to stop /svn.*\.php
hitting the fastcgi_pass.
I'm sure it's simple and I'm just not seeing the wood for
On 12/05/2013 15:55, Jonathan Matthews wrote:
Have you looked at the ^~ prefix mentioned in
http://wiki.nginx.org/HttpCoreModule#location ?
It looks like what you need ...
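For reference, the ^~ modifier makes a matching prefix location final, skipping the regex phase entirely; a sketch (the socket path and /svn handling are placeholders):

```nginx
# URIs under /svn never reach the regex locations below,
# so /svn/foo.php is not passed to fastcgi
location ^~ /svn {
    # subversion / DAV handling here
}

location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```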
I thought I'd tried that, and even with the change in config it's still
giving me the 404 errors.
changed config: