is the problem?
Most probably, the nginx user cannot access the .php file you're trying
to execute, either because of its permissions or because it cannot
traverse one of its parent directories.
andrea
So in short you need to use
chmod -R...
chown -R...
Kfir
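To make that concrete, here is a minimal sketch of a typical fix (the docroot path and the `nginx:nginx` owner are assumptions; the chown needs root, so it is shown commented out):

```shell
# Example docroot; substitute the real one, e.g. /var/www/localhost/htdocs
WEBROOT="${WEBROOT:-$(mktemp -d)}"
mkdir -p "$WEBROOT/app"
printf '<?php phpinfo();\n' > "$WEBROOT/app/index.php"

# Ownership: hand the tree to the user nginx/php-fpm runs as (needs root):
# chown -R nginx:nginx "$WEBROOT"

# Permissions: parent directories must be traversable (x), files readable (r):
find "$WEBROOT" -type d -exec chmod 755 {} +
find "$WEBROOT" -type f -exec chmod 644 {} +
```

A blanket `chmod -R 755` also works, but it marks every file executable; the `find` split above avoids that.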
--
Best Regards,
Xi Shen (David
of which appear dead. I couldn't find any mention of http in my
kernel config either.
We use lighttpd for our dev stuff; I guess it's that, nginx or thttpd,
the last of which doesn't do FastCGI, so it might be the best for this
purpose.
http://en.wikipedia.org/wiki/Comparison_of_web_server_software
..might
Hello,
On Mon, 02 Apr 2012 11:01:46 +0200
Michael Schreckenbauer grim...@gmx.de wrote:
I'm not really an expert with nginx and php-fpm, but afaict the error is in
the location line in nginx.conf.
You have:
location ~ .php$ {
...
Afaik the dot there is unescaped, so it is a regex wildcard rather than a
literal '.', and the pattern also matches names like 'index_php'.
Try changing it to location ~ \.php$
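For reference, a hedged sketch of a corrected block (the fastcgi socket path is only an example; match it to your php-fpm pool):

```nginx
location ~ \.php$ {
    # The escaped dot matches a literal ".", so only names ending in ".php"
    try_files $uri =404;                          # don't hand bogus paths to php-fpm
    fastcgi_pass unix:/run/php-fpm/php-fpm.sock;  # example socket path
    fastcgi_index index.php;
    include fastcgi.conf;                         # sets SCRIPT_FILENAME etc.
}
```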
, etc. The dom0 role has 15 tasks including monitoring, xen, grub.
The domU role basically just configures rc.conf.
An actual web server with apache/php has just about 20 tasks. A load-balancer
with varnish/nginx/keepalived has just about the same. A database has about
30 tasks because it also
fixed a misconfiguration at the nginx setup of
devuan.org he claimed) and
- Teodoro Santoni, who claimed to be a junior-jack-of-all-trades in the
original VUA group, going to be a maintainer of whatever is going to be
needed.
Source:
https://lists.dyne.org/lurker/thread/20141127.212941
configuration serving an extremely light load.
Nginx is an alternative for radicale (is it worth changing from one
large application to one almost as heavy?) but what else can do wsgi/dav?
BillK
On September 6, 2016 10:17:53 PM GMT+02:00, Grant <emailgr...@gmail.com> wrote:
>Hi, my site is being ravaged by an IP but dropping the IP via
>shorewall is seeming to have no effect. I'm using his IP from nginx
>logs. IP blocking in shorewall has always worked before. What coul
5 runs dnsmasq as DHCP server, NGINX, Postfix, Unbound and
more for a bunch of clients in a LAN. It is quite nifty as a local DNS
Resolver and DHCP server, because it is usually the fastest to boot
after the occasional power outage.
I would not use it as an Internet-facing production mail server,
Grant wrote:
Agreed! Although getting apache, mysql, and nginx plugins fully
working is proving to be a little trickier. To get those going it's
necessary to edit /etc/munin/plugin-conf.d/munin-node as well as some
apache and nginx config. Still working on getting it all 100%. - Grant
If you
On 06/09/2016 22:57, Grant wrote:
>> Hi, my site is being ravaged by an IP but dropping the IP via
>> shorewall is seeming to have no effect. I'm using his IP from nginx
>> logs. IP blocking in shorewall has always worked before. What could
>> be happening?
On Wed, Sep 7, 2016 at 9:14 AM, Grant <emailgr...@gmail.com> wrote:
>>>> Hi, my site is being ravaged by an IP but dropping the IP via
>>>> shorewall is seeming to have no effect. I'm using his IP from nginx
>>>> logs. IP blocking in shore
>>>>> Hi, my site is being ravaged by an IP but dropping the IP via
>>>>> shorewall is seeming to have no effect. I'm using his IP from nginx
>>>>> logs. IP blocking in shorewall has always worked before. What could
>>>>> be happen
es, as instructed here:
https://wiki.gentoo.org/wiki/Local_Mirror#Setting_up_the_mirror
> Then, whatever you use to fetch distfiles for installation, it uses ftp or
> http transport to fetch them. ...
This page discusses local distfile servers:
https://wiki.gentoo.org/wiki/Local_distfiles_cac
s, as instructed here:
>
> https://wiki.gentoo.org/wiki/Local_Mirror#Setting_up_the_mirror
>
>
> > Then, whatever you use to fetch distfiles for installation, it uses ftp or
> > http transport to fetch them. ...
>
>
> This page discusses local distfile servers:
c ports with service discovery == no port conflicts.
Why start the email asking why something old is used and then finish
the email suggesting the possibility of using something else old?
Not as old as apache. Nginx is still widely used (in contrast to apache),
but is being replaced by caddy/traefi
, and now I find myself
periodically up against MaxClients. Is a RAM upgrade the only
practical way to solve this sort of problem?
Use a reverse proxy in caching mode.
A request served up by the proxy server is a request not served up by
Apache.
Squid, nginx and varnish are all decent for the purpose, though squid
and nginx are probably more polished than varnish.
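A rough nginx sketch of the caching-proxy idea (zone name, cache path and backend port are assumptions):

```nginx
# In the http {} context: where to store cached responses
proxy_cache_path /var/cache/nginx keys_zone=appcache:10m max_size=1g;

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;   # Apache listening behind the proxy
        proxy_cache appcache;
        proxy_cache_valid 200 10m;          # cache hits never reach Apache
    }
}
```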
Grant,
If you optimized the site well, I would imagine your RAM needs per page
request would go down and you could possibly increase MaxClients again.
Have you given it a try
g
intercepting the port & IP pair at some point up stream.
Not as old as apache.
I take your statement to be that the Apache HTTPD developers and
administrators have more experience than Nginx / caddy / traefik
developers and administrators by the simple fact that it has existed lo
and fairly secure apache/lighttpd/nginx/whatever
out there, and, provided there are no holes in your scripts, the setup
should be fairly secure.
And that's probably the most-used line of defence on any web server, since
there's nothing more important for a webserver than its scripts - if you
have www, you pretty
I am trying to find a way to access files securely from my home computer.
The network I am on is really strict on email attachments and no usb
drives are allowed, so I possibly thought of a website that uses ssl and
a password to access the files. I really don't know where to begin.
Since the
.
Preferably, with logging so I can see which packages I missed, but not
necessary.
Rgds,
nginx.
You can disable fastcgi/etc using use flags.
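On Gentoo that would roughly mean trimming the NGINX_MODULES_HTTP flags; a hypothetical example (verify the real flag names with `equery uses nginx` for your version):

```
# /etc/portage/package.use/nginx (flag names illustrative)
www-servers/nginx -nginx_modules_http_fastcgi -nginx_modules_http_scgi -nginx_modules_http_uwsgi
```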
What about www-servers/fnord ?
Its website[1] claims that its binaries are less than 20 kB[2]
[1] http://www.fefe.de/fnord/
[2] http://www.fefe.de
On 22 February 2012 12:07, Helmut Jarausch jarau...@igpm.rwth-aachen.de wrote:
On 02/19/2012 07:15:46 PM, Mick wrote:
Hi All,
I am trying to set up a reverse-proxy at my home to be able to by-
pass
restrictive firewalls that only allow http/https traffic.
If you only want to get through
. My rule of thumb was always that I must prevent Apache swapping
at all costs as the performance impact is horrific.
It doesn't have to mean installing more RAM (which is quick, easy, cheap
and often rather effective), sensible optimizations can work wonders
too, as can nginx as a proxy in front
in the ciphers which nginx is
configured to use over https.
and 'openssl s_client -host HOSTNAME -port 443' shows:
Cipher: ECDHE-RSA-AES256-GCM-SHA384
I also get Verify return code: 20 (unable to get local issuer
certificate) from that command but I'm guessing that's OK since I get
the same when
and configures additional stuff like postfix,
nrpe, etc. The dom0 role has 15 tasks including monitoring, xen, grub.
The domU role basically just configures rc.conf.
An actual web server with apache/php has just about 20 tasks. A load-balancer
with varnish/nginx/keepalived has just about the same
published their names do include:
- Franco Lanza (who fixed a misconfiguration at the nginx setup of
devuan.org he claimed) and
- Teodoro Santoni, who claimed to be a junior-jack-of-all-trades in the
original VUA group, going to be a maintainer of whatever is going to be
needed.
Source
I use Debian 7 with Apache, Dovecot, etc. as Web, Mail, DNS, FTP
Grant <emailgr...@gmail.com> wrote:
>>
>> The way to do it nowadays would be by placing a file with the content
>> d /run/munin 0775 munin nginx
>> into /usr/lib/tmpfiles.d (if done by the distribution) or into
>> /etc/tmpfiles.d (if this is only needed
* Stefan G. Weichinger:
> My goal:
>
> collect logs of postfix, nginx into the docker-containers running ES,
> Kibana .. and learn my way from there.
If you are not dead-set on Elasticsearch et al, I propose considering
MongoDB as an alternative.
There are syslog Modules that a
t. Linky:
For a typical router that hides your home net from the outside via NAT,
you’d have to set up port forwarding in order to reach anything on the
inside. I run nginx on my raspi for Nextcloud and PIM syncserver, and I
make them available that way.
--
Grüße | Greetings | Qapla’
Please do
and
workload orchestrators like Nomad or Kubernetes.
Usually you don't configure Traefik with static config file, but with
metadata and annotations in K8S and Consul so it is dynamic and
reactive.
Or you can use nginx (which is already considered pretty old and clunky,
but it is much easier
On 07/09/2016 18:39, Grant wrote:
>>>>>> Hi, my site is being ravaged by an IP but dropping the IP via
>>>>>> shorewall is seeming to have no effect. I'm using his IP from nginx
>>>>>> logs. IP blocking in shorewall has always worked before.
on/pillow/pillow-4.2.1.ebuild'
!!! A file is not listed in the Manifest:
'/usr/portage/dev-python/sphinx/sphinx-1.3.1-r2.ebuild'
!!! A file is not listed in the Manifest:
'/usr/portage/www-servers/nginx/nginx-1.13.0.ebuild'
!!! A file is not listed in the Manifest:
'/usr/portage/app-text/podo
>> They also provide a .deb package, that's the reason why I'm running it
>> inside a Debian LXC container as well.
>
> And this runs on a gentoo server, with debian inside the LXC? Or on a debian
> machine with LXC?
Ok, so this is my *private* setup:
Single server box with gentoo on
/configuration tasks.
IIRC, I'm pretty sure that's what made nginx so much more performant
than Apache initially; instead of launching a new thread for every
request (Apache did/does this) or doing some sort of fancy message
passing, nginx pushes requests into a queue in shared memory and a
thread
php memory usage by 25% and accelerates binaries.
But the usual way is to do
# docker import stage3-xxx.tar.bz2 gentoo
and emerge the needed service, like nginx, mariadb or php.
This way you will have a bunch of unmanaged >1GB containers, which are
90% unneeded files and hard to update.
Our project
Too Many Requests
RECEIVE: Server: nginx
RECEIVE: Date: Sun, 09 Oct 2022 09:37:52 GMT
RECEIVE: Content-Type: text/html
RECEIVE: Content-Length: 162
RECEIVE: Connection: close
RECEIVE: Strict-Transport-Security: max-age=15768000; includeSubDomains
RECEIVE:
RECEIVE:
RECEIVE: 429 Too Many
/mod_rewrite.html
Also I think it's worth mentioning that apache isn't well suited for
such tasks if both local and remote targets get similar load - a lite
frontend server or reverse proxy (like nginx, lighttpd, squid, haproxy
etc) should save a lot of workload.
Even more, if you'll make it serve
Apache2, mod_ssl, self signed certificate, htaccess/htpassword via
digest. Done deal :)
On 10/5/09, David Juhl commo_p...@yahoo.com wrote:
I am trying to find a way to access files securely from my home computer.
The network I am on is really strict on email attachments and no usb
drives are
On Mon, Oct 05, 2009 at 08:04:03PM -0500, Penguin Lover David Juhl squawked:
I am trying to find a way to access files securely from my home computer.
The network I am on is really strict on email attachments and no usb
drives are allowed, so I possibly thought of a website that uses ssl and
a
I suppose I could run it on a cdr and set up a ssh tunnel...
Dave
On Tue, 2009-10-06 at 05:59 -0400, Willie Wong wrote:
On Mon, Oct 05, 2009 at 08:04:03PM -0500, Penguin Lover David Juhl squawked:
I am trying to find a way to access files securely from my home computer.
The network I am on
to download a perl script from
another site. Look for `wget' in the apache logs.
@all
Apache was never installed I don't see any reason to install it
because nginx satisfies my needs. I grepped for the string wget in all
logs and php files, found some, but they were for libssh2 in wordpress
code
of that recently. The attacker used an instance of phpmyadmin
to inject into its URL a wget command to download a perl script from
another site. Look for `wget' in the apache logs.
@all
Apache was never installed I don't see any reason to install it
because nginx satisfies my needs. I grepped
, boa, monkeyd, cherokee. Does
anyone know if one of those would fit my main need of just being
extremely simple to setup and keep running for this one purpose?
Thanks in advance,
Mark
nginx comes to mind, very easy to set up, and it should be able to serve
your videos w/o issue
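A minimal sketch of such a setup (the root path is an example):

```nginx
server {
    listen 80;
    root /srv/videos;      # example path to the video files
    location / {
        autoindex on;      # a plain directory listing is enough here
    }
}
```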
priorities are low demand on resources on the host PC and a high
response/throughput speed for single threads, then I'd say give boa a spin.
If you will be connecting in parallel with multiple clients check lighttpd, or
thttpd.
If you are keen on exotica consider nginx, or G-WAN
/searchq=one+line+python+web+server
Other alternatives are boa, thttpd, nginx.
You can also run netcat as 'nc -l -p 80 backup_20120418.cfg' and then run
the copy command from the router.
--
Regards,
Mick
signature.asc
Description: This is a digitally signed message part.
want to
separate the tasks rather than have one huge complex Apache
configuration serving an extremely light load.
Nginx is an alternative for radicale (is it worth changing from one
large application to one almost as heavy?) but what else can do wsgi/dav?
BillK
Hi Bill,
I am self-hosting
to do it nowadays would be by placing a file with the content
> d /run/munin 0775 munin nginx
> into /usr/lib/tmpfiles.d (if done by the distribution) or into
> /etc/tmpfiles.d (if this is only needed for your special setup).
Will do. Is that leading "d " supposed to be there?
Am I cre
ere are ways, but I wouldn't call them better.
The way to do it nowadays would be by placing a file with the content
d /run/munin 0775 munin nginx
into /usr/lib/tmpfiles.d (if done by the distribution) or into
/etc/tmpfiles.d (if this is only needed for your special setup).
> /run is often a
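For reference, that line follows the systemd tmpfiles.d format: a type (`d` = create the directory if missing), then path, mode, user and group:

```
# /etc/tmpfiles.d/munin.conf
# type  path        mode  user   group
d       /run/munin  0775  munin  nginx
```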
y which
>>> I've now disabled. I'll post more info as I gather it.
>>
>>
>>imapproxy was clearly affecting the TCP Queuing graph in munin but I
>>still ended up with a massive TCP Queuing spike today and
>>corresponding http response time issues long after I d
I had a similar issue when I installed v4.7.8, and enabling PHP debug output to
> the website helped me fix the actual PHP issue (while I couldn't get the
> logging to write these messages to any logfile).
>
> Here is my nginx snippet for phpmyadmin in a subdirectory:
> #===
helped me fix the actual PHP issue (while I couldn't get the
logging to write these messages to any logfile).
Here is my nginx snippet for phpmyadmin in a subdirectory:
#=== POSTFIXADMIN ===
location /postfixadmin {
try_files $uri $uri/ index.php;
}
location ~ .php$ {
#fastcgi_split_pat
. Is that enough ram for a DNS server?
For running the Nameservers, yes. Compiling Gentoo packages will likely
put your SD-Card under stress, but that's just how it goes. My Model B
Rev 2 of 2015 runs dnsmasq as DHCP server, NGINX, Postfix, Unbound and
more for a bunch of clients in a LAN
On 03.04.20 at 17:57, Ralph Seichter wrote:
> * Stefan G. Weichinger:
>
>> My goal:
>>
>> collect logs of postfix, nginx into the docker-containers running ES,
>> Kibana .. and learn my way from there.
>
> If you are not dead-set on Elasticsearch et
ble layout? Once I get past this I may be asking for
>help with
>/etc/apache2/vhosts.d/* .
>
>1. https://wiki.gentoo.org/wiki/Apache
>
>--
>Regards,
>Peter.
Is there a specific reason you are using Apache?
I found it far simpler to use Nginx when dealing with different
hat way by default, I'll google and find out how to disable
>> that. Linky:
> For a typical router that hides your home net from the outside via NAT,
> you’d have to set up port forwarding in order to reach anything on the
> inside. I run nginx on my raspi for Nextcloud and PIM syncs
file, but with
metadata and annotations in K8S and Consul so it is dynamic and reactive.
I view adding /additional/ software / daemons as poor form, especially
when the /existing/ software can do the task at hand.
Don't overlook the port conflict.
Or you can use nginx (which is already
combination of two problems which made it
much more difficult to figure out.
First of all I didn't have enough apache2 processes. That seems like
it should have been obvious but it wasn't for two reasons. Firstly,
my apache2 processes are always idle or nearly idle, even when traffic
levels are high
.
bastard =gnome-2.18*
bastard =nginx-0.7*
bastard =postgresql-server-8.3*
Unmasking-the-new-Gnome/KDE-ly Yours,
Joshua
#!/bin/bash
echo Bastard.sh - mass unmasker.
echo Note to self: unmasking packages is fun and exciting, but may be
dangerous.
echo Ensure you have a backup, or a box
.
@all
Apache was never installed I don't see any reason to install it
because nginx satisfies my needs. I grepped for the string wget in all
logs and php files, found some, but they were for libssh2 in wordpress
code.
@Michael,
I thought of doing that, but before I discovered
/ PHP /
whathaveyouscripting support is needed.
Preferably, with logging so I can see which packages I missed, but not
necessary.
Rgds,
nginx.
You can disable fastcgi/etc using use flags.
What about www-servers/fnord ?
Its website[1] claims that its binaries are less than 20 kB[2]
[1
lighttpd, or
thttpd.
If you are keen on exotica consider nginx, or G-WAN, but their configuration
may be more involved.
--
Regards,
Mick
signature.asc
Description: This is a digitally signed message part.
to the vendor's port(s) to access their service? ) but I'll guess that
you probably need a reverse proxy/load balancer arrangement - something like
pound, portfusion, or even nginx? BTW, did I mention apache mod_proxy? I am
not sure what authentication arrangements you need to access your vendors
rather than Linux, but that
shouldn't be a problem.
Kerwin.
Thanks, I'll look into that.
FreeNAS comes *highly* recommended. It isn't cli driven though, it's a
django framework on nginx and runs off a memory stick. Resource usage
is next to nothing and it makes an awesome media
", I use
git to keep the changes in sync, and source make.local.conf (ignored
by git) on each container). I serve the binpkg host from my
desktop to my LAN with nginx but I'm considering git from the booted
container. I also mount $PORTDIR via NFS to have the same tree(
bandwidth is ex
I'll post more info as I gather it.
>>>
>>>
>>>imapproxy was clearly affecting the TCP Queuing graph in munin but I
>>>still ended up with a massive TCP Queuing spike today and
>>>corresponding http response time issues long after I disabled
ing spike itself was due to imapproxy
>>which
>>>>> I've now disabled. I'll post more info as I gather it.
>>>>
>>>>
>>>>imapproxy was clearly affecting the TCP Queuing graph in munin but I
>>>>still ended up with a
e midst of this. Are there certain attacks I should
>>> check
>>>>> for?
>>>>>>
>>>>>> It looks like the TCP Queuing spike itself was due to imapproxy
>>> which
>>>>>> I've now disabled. I'll post more info as I gath
.
It has been handling cases like restarting SNMP daemons that segfault,
hadoop instances that lose contact with the ZooKeeper cluster,
restarting nginx daemons that stop responding to requests by analysing
the last write date in nginx's access logs, the list goes on.
StackStorm is event
& such) to a secondary hard drive on
another system.
It this all goes well, surely I put a web server on a fourth board.
https://pimylifeup.com/raspberry-pi-nginx/
Granted those links are not centric to embedded gentoo, but they do
cover a lot of what is needed.
Further suggestions are
in K8S and Consul so it is dynamic and reactive.
>
> I view adding /additional/ software / daemons as poor form, especially
> when the /existing/ software can do the task at hand.
>
> Don't overlook the port conflict.
>
> > Or you can use nginx (which is already c
e to sync and then emerge --fetchonly, or --
fetch-all-uri, on the local mirror before you start emerging the various
client PCs. A cron job can ensure this is all done by the time you're ready
to run sync & emerge on the rest of your clients.
You can use any number of available webservers with small fo
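As a sketch, the cron job mentioned above might look like this (schedule and exact emerge options are assumptions; adapt to your mirror layout):

```
# /etc/cron.d/distfile-mirror (illustrative)
# Sync the tree, then pre-fetch everything the clients will need, nightly at 03:00
0 3 * * *  root  emerge --sync && emerge --fetchonly --update --deep @world
```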
for gentoo?
I just want to serve the distfiles, so no CGI / PHP /
whathaveyouscripting support is needed.
Preferably, with logging so I can see which packages I missed, but not
necessary.
Rgds,
nginx.
You can disable fastcgi/etc using use flags.
What about www-servers/fnord
of NAT.
you probably need a reverse proxy/load balancer arrangement - something like
pound, portfusion, or even nginx? BTW, did I mention apache mod_proxy? I am
not sure what authentication arrangements you need to access your vendors
ports, if you have VPNs or other secure tunnels between
Hi,
wrong permissions usually caused 500 errors for me, but I'm using
nginx+php-fpm. I don't think changing php's time limit would change anything about
your situation (especially considering that the first-time setup always loaded
pretty much instantly for me, even on a raspberry pi 2), unless you
nge but the result was the same.
I have HTTP running on the server and accessing via the IP works fine:
$ curl https://192.168.1.254 -k
...
< Server: nginx
< Date: Mon, 03 Dec 2018 10:22:45 GMT
< Content-Type: text/html
< Content-Length: 574
< Connection: keep-alive
< Keep-Alive
use nginx. Apache+squid works appropriately well for my
circumstance.
There was a reason why people
stopped using static /dev, and devfs; maybe there is a reason why people
should stop using udev, but thus far that reason seems to be initramfs
makes us cranky.
*That* is a matter of systemic
er headers include info
log_config logio mem_cache mime mime_magic negotiation rewrite setenvif
speling status unique_id userdir usertrack vhost_alias"
CALLIGRA_FEATURES="kexi words flow plan sheets stage tables krita karbon
braindump author" CAMERAS="ptp2" COLLECTD_PLUGI
_4%* -python2_7% -python3_5% -python3_6%"
[ebuild U ] dev-qt/qtwebengine-5.9.2 [5.7.1-r2]
[ebuild R ] dev-db/postgresql-9.6.5-r1
[ebuild U ] media-video/obs-studio-20.0.1-r1 [20.0.1]
[ebuild U ] net-print/cups-filters-1.17.9 [1.16.4] USE="-pclm%"
[ebuild U ] me