Re: Is nginx to complement or replace apache?

2012-04-03 Thread Ted Unangst
On Tue, Apr 03, 2012, Илья Шипицин wrote:
> nginx is a great piece of software, but it doesn't do CGI; how will users
> run bgplg, for example?

There are about a dozen different ways to make CGI scripts work with
nginx.
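
For example, a FastCGI-to-CGI shim such as the fcgiwrap port can sit
between nginx and the scripts. A minimal sketch of the nginx side
(the socket path and document root are illustrative, not defaults):

    # hand /cgi-bin/ requests to a CGI wrapper over FastCGI
    location /cgi-bin/ {
        fastcgi_pass   unix:/var/www/run/fcgiwrap.sock;
        fastcgi_param  SCRIPT_FILENAME /var/www$fastcgi_script_name;
        include        fastcgi_params;
    }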



Re: Is nginx to complement or replace apache?

2012-04-03 Thread Илья Шипицин
nginx is a great piece of software, but it doesn't do CGI; how will users
run bgplg, for example?

On 28 March 2012 at 18:39, Kevin Chadwick wrote:

> Knowing nginx is on its way to base and having just seen some fixes
> for nginx on gentoo (some CVEs from 2009).
>
> Is nginx going to complement apache in case users want features/prefer
> it, or replace apache as apache can no longer have time spent on it?
>
> Also, does anyone know if there are any CVEs applicable to base apache
> currently?



Re: Is nginx to complement or replace apache?

2012-03-30 Thread Jan Stary
On Mar 30 11:15:30, Andres Perera wrote:
> > Hitting the mem or fd limit *is* a real-world problem, because both
> > memory and fd usage can build up even in a well-written program, in
> > contrast to stack usage.
> 
> on my system, hitting the fd limit is a completely artificial problem. i
> have 8 gigs of memory and struct file is 120 bytes on amd64. the
> default low limit is as silly as a 64k stack limit would be. if i were
> designing a browser for machines like these, i wouldn't waste time
> optimizing fd usage
> 
> even if i had access to the same browser you guys use, which magically
> multiplexes a single socket over all connections, including ipc with
> child processes that house tabs and plugins like google chrome, i
> could afford not to give a shit when tiny fds go to waste whenever i
> tried the bloated alternatives

That's because you are mighty, having 8 gigs of memory and all that.
Now, for god's sake, shut the fuck up and go home.



Re: Is nginx to complement or replace apache?

2012-03-30 Thread Andres Perera
On Thu, Mar 29, 2012 at 4:30 PM, Otto Moerbeek  wrote:
> On Thu, Mar 29, 2012 at 01:31:17PM -0430, Andres Perera wrote:
>
>> On Thu, Mar 29, 2012 at 11:29 AM, Otto Moerbeek  wrote:
>> > On Thu, Mar 29, 2012 at 10:54:48AM -0430, Andres Perera wrote:
>> >
>> >> On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd  wrote:
>> >> > On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
>> >> > | > Instead, you'll crank your file limits to... let me guess, unlimited?
>> >> > | >
>> >> > | > And when you hit the system-wide limit, then what happens?
>> >> > | >
>> >> > | > Then it is our systems problem, isn't it.
>> >> > | >
>> >> > |
>> >> > | i am not sure if you're suggesting that each program do getrlimit
>> >> > | and acquire resources based on that, because it's a pita
>> >> >
>> >> > Gee whiz, writing programs is hard!  Let's go shopping!
>> >> >
>> >> > | what they could do is offer a reliable estimate (e.g. 5 open files per
>> >> > | tab required)
>> >> >
>> >> > Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
>> >> > any) *DEAL WITH IT*
>> >>
>> >> but we're only talking about one resource and one error condition
>> >>
>> >> write wrappers for open, malloc, etc
>> >>
>> >> avoiding errors regarding stack limits is not as easy
>> >
>> > There are very few programs that actually hit stack limits. In most
>> > cases it's unbounded recursion, signalling an error.
>>
>> doesn't change the fact that preempting it takes modifying your
>> compiler's typical function prologue (and slowing down each call)
>>
>> additionally, anticipating FSIZE would greatly slow down each write
>>
>> so no, you can't just "be correct" all the time and pat yourself on the back
>>
>> >
>> >>
>> >> obviously there's no reason for: a. every application replicating
>> >> these wrappers (how many xmallocs have you seen, honest?) and b. the
>> >> system not providing a consistent api
>> >
>> > Nah, you cannot create an API for this stuff; proper error handling and
>> > adaptation to resource limits is a program-specific thing.
>>
>> well, if including logic that gracefully handles the stack limit is
>> not important on the basis of most applications' needs, then i don't
>> see how the reverse relation couldn't justify a library with xmalloc
>> and similar. *most* applications that implement this function copy
>> paste the same fatal version. see also `#define MIN/MAX`
>
> You just seem to argue for the sake of it. Anyway
>
> A lot of programs have a *static* limit on stack depth, so those
> programs do not have that problem.
>
> For programs where the stack depth is a function of the input (e.g.
> parsers and expression evaluation), there are well-known techniques to
> control the maximum depth. Most of these programs actually have their
> own parse stack management and do not use the function stack for
> that.
>
> In my experience, I have only seen programs hit the stack limit when
> the stack limit was very low, like 64k or so. Hitting the stack limit
> is not a real world problem. Our default stack limit is 4M: big enough
> for virtually any program, and small enough to catch unbounded
> recursion before it will eat all vm.
>
> Hitting the mem or fd limit *is* a real-world problem, because both
> memory and fd usage can build up even in a well-written program, in
> contrast to stack usage.

on my system, hitting the fd limit is a completely artificial problem. i
have 8 gigs of memory and struct file is 120 bytes on amd64. the
default low limit is as silly as a 64k stack limit would be. if i were
designing a browser for machines like these, i wouldn't waste time
optimizing fd usage
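
(the arithmetic behind that: at 120 bytes apiece, even 1024 descriptors
come to 1024 * 120 = 122,880 bytes, about 120 KB -- noise next to 8 GB,
which is enough memory for on the order of 70 million struct files)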

even if i had access to the same browser you guys use, which magically
multiplexes a single socket over all connections, including ipc with
child processes that house tabs and plugins like google chrome, i
could afford not to give a shit when tiny fds go to waste whenever i
tried the bloated alternatives

>
> And just using xmalloc or similar for those cases is often not a
> solution, especially not for daemon programs. Handling resource
> exhaustion is a difficult problem that cannot be "solved" by just
> quitting your program, even if a lot of programs do so.
>
>        -Otto



Re: Is nginx to complement or replace apache?

2012-03-30 Thread Mihai Popescu
Well, I have to correct a mistake. I said earlier in this thread
that I got the idea of login.conf being tuned for minimal systems
from the OpenBSD FAQ. I put that the wrong way: I actually got it
from Absolute OpenBSD: Unix for the Practical Paranoid (page 126).

So, to Nick Holland: I'm sorry for that, please excuse my intrusion.

All that said, I was pointed in the right direction and learned my lesson.

Thanks.



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Otto Moerbeek
On Thu, Mar 29, 2012 at 01:31:17PM -0430, Andres Perera wrote:

> On Thu, Mar 29, 2012 at 11:29 AM, Otto Moerbeek  wrote:
> > On Thu, Mar 29, 2012 at 10:54:48AM -0430, Andres Perera wrote:
> >
> >> On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd  wrote:
> >> > On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
> >> > | > Instead, you'll crank your file limits to... let me guess, unlimited?
> >> > | >
> >> > | > And when you hit the system-wide limit, then what happens?
> >> > | >
> >> > | > Then it is our systems problem, isn't it.
> >> > | >
> >> > |
> >> > | i am not sure if you're suggesting that each program do getrlimit
> >> > | and acquire resources based on that, because it's a pita
> >> >
> >> > Gee whiz, writing programs is hard!  Let's go shopping!
> >> >
> >> > | what they could do is offer a reliable estimate (e.g. 5 open files per
> >> > | tab required)
> >> >
> >> > Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
> >> > any) *DEAL WITH IT*
> >>
> >> but we're only talking about one resource and one error condition
> >>
> >> write wrappers for open, malloc, etc
> >>
> >> avoiding errors regarding stack limits is not as easy
> >
> > There are very few programs that actually hit stack limits. In most
> > cases it's unbounded recursion, signalling an error.
> 
> doesn't change the fact that preempting it takes modifying your
> compiler's typical function prologue (and slowing down each call)
> 
> additionally, anticipating FSIZE would greatly slow down each write
> 
> so no, you can't just "be correct" all the time and pat yourself on the back
> 
> >
> >>
> >> obviously there's no reason for: a. every application replicating
> >> these wrappers (how many xmallocs have you seen, honest?) and b. the
> >> system not providing a consistent api
> >
> > Nah, you cannot create an API for this stuff; proper error handling and
> > adaptation to resource limits is a program-specific thing.
> 
> well, if including logic that gracefully handles the stack limit is
> not important on the basis of most applications' needs, then i don't
> see how the reverse relation couldn't justify a library with xmalloc
> and similar. *most* applications that implement this function copy
> paste the same fatal version. see also `#define MIN/MAX`

You just seem to argue for the sake of it. Anyway

A lot of programs have a *static* limit on stack depth, so those
programs do not have that problem.

For programs where the stack depth is a function of the input (e.g.
parsers and expression evaluation), there are well-known techniques to
control the maximum depth. Most of these programs actually have their
own parse stack management and do not use the function stack for
that. 
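
A sketch of that technique: an explicit parse stack with a hard depth
cap, so nesting depth is bounded by the program rather than by the C
stack. The names and the 1024 limit are illustrative:

    #include <stdio.h>

    #define MAXDEPTH 1024

    struct frame { int state; };

    static struct frame parse_stack[MAXDEPTH];
    static int depth;

    static int
    push_frame(int state)
    {
            if (depth >= MAXDEPTH) {
                    fprintf(stderr, "input nested too deeply\n");
                    return (-1);    /* signal an error, don't overflow */
            }
            parse_stack[depth++].state = state;
            return (0);
    }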

In my experience, I have only seen programs hit the stack limit when
the stack limit was very low, like 64k or so. Hitting the stack limit
is not a real world problem. Our default stack limit is 4M: big enough
for virtually any program, and small enough to catch unbounded
recursion before it will eat all vm. 

Hitting the mem or fd limit *is* a real-world problem, because both
memory and fd usage can build up even in a well-written program, in
contrast to stack usage.

And just using xmalloc or similar for those cases is often not a
solution, especially not for daemon programs. Handling resource
exhaustion is a difficult problem that cannot be "solved" by just
quitting your program, even if a lot of programs do so.

-Otto



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Andres Perera
On Thu, Mar 29, 2012 at 3:46 PM, Ted Unangst  wrote:
> On Thu, Mar 29, 2012, Andres Perera wrote:
>>> Maybe you could also close some of those 999 keep-alive sessions and
>>> pre-load sessions you have open and retry. Seriously, why does a
>>> web browser need 1024 file descriptors to be open at the same time?
>>> Are you concurrently reading 500 homepages?
>>
>> you are not expected to read 500 homepages at the same time, but you
>> *are* expected to switch to any tab at any time, and the price of a
>> system call to reopen the pertaining file descriptors is unacceptable
>
> What retarded browser are you using that needs to reopen file
> descriptors to switch tabs?  And what retarded OS are you running
> where system calls are so expensive they're user-noticeable?
>

neither firefox nor chrome micromanages to this extent; that's exactly the point

as for the second question, it conveniently ignores keep-alive and
*anything* interactive. re-acquiring fds *and* emptying the queue of
pending actions is the cost, not the mere syscall

apparently you or claudio came up with a scheduler that guesses which
tabs are more important, swaps to disk the ones that aren't, and
pretends their ongoing transmissions don't mean anything



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Ted Unangst
On Thu, Mar 29, 2012, Andres Perera wrote:
>> Maybe you could also close some of those 999 keep-alive sessions and
>> pre-load sessions you have open and retry. Seriously why does a
>> webbrowser need 1024 file descriptors to be open at the same time?
>> Are you concurrently reading 500 homepages?
> 
> you are not expected to read 500 homepages at the same time, but you
> *are* expected to switch to any tab at any time, and the price of a
> system call to reopen the pertaining file descriptors is unacceptable

What retarded browser are you using that needs to reopen file
descriptors to switch tabs?  And what retarded OS are you running
where system calls are so expensive they're user-noticeable?



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Steffen Daode Nurpmeso
Theo de Raadt wrote [2012-03-28 22:44+0200]:
> If software cannot cope intelligently with soft resource limits,
> then such software is probably broken.
> 
> Otherwise, let's just remove the entire resource limit subsystem, ok?

I found out that I miss some kind of "physical" keyword.
The failure to compile insn-attrtab in my VM was caused by
insufficient memory, and by gcc(1) being unable to detect that it
had effectively entered an endless loop.
Once I lowered the soft limit I got a clear 'virtual memory
exhausted' message.
So some kind of automatic adjustment of the limits to the physically
present mem (+swap) would have helped a bad admin here.
Maybe I'll find time next week to write a small MAX_RETRY patch for
gcc.
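
For reference, the kind of adjustment meant here, in ksh(1); the value
is illustrative, and ulimit -d takes kilobytes:

    # lower the soft datasize limit so malloc fails cleanly
    # instead of the machine thrashing
    $ ulimit -S -d 1048576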

--steffen
Forza Figa!



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Andres Perera
On Thu, Mar 29, 2012 at 12:53 PM, Claudio Jeker  wrote:
> On Thu, Mar 29, 2012 at 10:54:48AM -0430, Andres Perera wrote:
>> On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd  wrote:
>> > On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
>> > | > Instead, you'll crank your file limits to... let me guess, unlimited?
>> > | >
>> > | > And when you hit the system-wide limit, then what happens?
>> > | >
>> > | > Then it is our systems problem, isn't it.
>> > | >
>> > |
>> > | i am not sure if you're suggesting that each program do getrlimit
>> > | and acquire resources based on that, because it's a pita
>> >
>> > Gee whiz, writing programs is hard!  Let's go shopping!
>> >
>> > | what they could do is offer a reliable estimate (e.g. 5 open files per
>> > | tab required)
>> >
>> > Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
>> > any) *DEAL WITH IT*
>>
>> but we're only talking about one resource and one error condition
>
> OMG. System calls can fail. I'm shocked. How can anything work?!
>
>> write wrappers for open, malloc, etc
>
> Why wrappers? Just check the freaking return value and design your program
> to behave in case something goes wrong.

guess what, if you do this more than once in your program you have a
wrapper candidate

>
>> avoiding errors regarding stack limits is not as easy
>
> Yes, so embrace them, design with failure in mind.
>
>> obviously there's no reason for: a. every application replicating
>> these wrappers (how many xmallocs have you seen, honest?) and b. the
>> system not providing a consistent api
>
> xmalloc is a dumb interface, since it terminates the process as soon as
> the first malloc fails. Sure, it is the right thing for processes with
> limited memory needs, but browsers are such pigs today that you should
> do better than just showing an "Oops, something went wrong" page on next
> startup.
>
>> after you're done writing all the wrappers for your crappy browser,
>> what do you do? notify the user that no resources can be allocated,
>> try pushing the soft limit first, whatever. they still have to re-exec
>> with higher limits
>
> Maybe you could also close some of those 999 keep-alive sessions and
> pre-load sessions you have open and retry. Seriously, why does a
> web browser need 1024 file descriptors to be open at the same time?
> Are you concurrently reading 500 homepages?

you are not expected to read 500 homepages at the same time, but you
*are* expected to switch to any tab at any time, and the price of a
system call to reopen the pertaining file descriptors is unacceptable

>
>> why even bother?
>
> because modern browsers suck. They suck big time. They assume complete
> ownership of the system and think that consuming all resources just to
> show the latest animated gif from 4chan is the right thing.
>
>>
>> >
>> >
>> > Note that on a busy system, the ulimit is not the only thing holding
>> > you back.  You may actually run into the maximum number of files the
>> > system can have open at any given time (sure, that's also tweakable).
>> > Just doing getrlimit isn't going to be sufficient...
>>
>> doesn't matter
>
> your attitude is the reason why we need multi-core laptops with 8GB of ram
> to play one game of tic-tac-toe.

until now it's been about the interface. glad that someone decided to
be honest by saying they have a bias towards the default low limits (and
fitting oses in floppy disks, etc)

:)

>
> --
> :wq Claudio



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Andres Perera
On Thu, Mar 29, 2012 at 11:29 AM, Otto Moerbeek  wrote:
> On Thu, Mar 29, 2012 at 10:54:48AM -0430, Andres Perera wrote:
>
>> On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd  wrote:
>> > On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
>> > | > Instead, you'll crank your file limits to... let me guess, unlimited?
>> > | >
>> > | > And when you hit the system-wide limit, then what happens?
>> > | >
>> > | > Then it is our systems problem, isn't it.
>> > | >
>> > |
>> > | i am not sure if you're suggesting that each program do getrlimit
>> > | and acquire resources based on that, because it's a pita
>> >
>> > Gee whiz, writing programs is hard!  Let's go shopping!
>> >
>> > | what they could do is offer a reliable estimate (e.g. 5 open files per
>> > | tab required)
>> >
>> > Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
>> > any) *DEAL WITH IT*
>>
>> but we're only talking about one resource and one error condition
>>
>> write wrappers for open, malloc, etc
>>
>> avoiding errors regarding stack limits is not as easy
>
> There are very few programs that actually hit stack limits. In most
> cases it's unbounded recursion, signalling an error.

doesn't change the fact that preempting it takes modifying your
compiler's typical function prologue (and slowing down each call)

additionally, anticipating FSIZE would greatly slow down each write

so no, you can't just "be correct" all the time and pat yourself on the back

>
>>
>> obviously there's no reason for: a. every application replicating
>> these wrappers (how many xmallocs have you seen, honest?) and b. the
>> system not providing a consistent api
>
> Nah, you cannot create an API for this stuff; proper error handling and
> adaptation to resource limits is a program-specific thing.

well, if including logic that gracefully handles the stack limit is
not important on the basis of most applications' needs, then i don't
see how the reverse relation couldn't justify a library with xmalloc
and similar. *most* applications that implement this function copy
paste the same fatal version. see also `#define MIN/MAX`
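
for reference, the fatal version in question looks about the same
everywhere -- a sketch, not any particular project's copy:

    #include <err.h>
    #include <stdlib.h>

    void *
    xmalloc(size_t size)
    {
            void *p;

            if ((p = malloc(size)) == NULL)
                    err(1, "malloc");       /* die on the first failure */
            return (p);
    }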

>
>>
>> after you're done writing all the wrappers for your crappy browser,
>> what do you do? notify the user that no resources can be allocated,
>> try pushing the soft limit first, whatever. they still have to re-exec
>> with higher limits
>>
>> why even bother?
>
> Stop using the crappy program. We prefer to apply back pressure to
> crappy programming instead of accommodating it.
>
>        -Otto
>
>>
>> >
>> >
>> > Note that on a busy system, the ulimit is not the only thing holding
>> > you back.  You may actually run into the maximum number of files the
>> > system can have open at any given time (sure, that's also tweakable).
>> > Just doing getrlimit isn't going to be sufficient...
>>
>> doesn't matter
>>
>> >
>> > Paul 'WEiRD' de Weerd
>> >
>> > --
>> >>[<++>-]<+++.>+++[<-->-]<.>+++[<+
>> > +++>-]<.>++[<>-]<+.--.[-]
>> >                 http://www.weirdnet.nl/



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Claudio Jeker
On Thu, Mar 29, 2012 at 10:54:48AM -0430, Andres Perera wrote:
> On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd  wrote:
> > On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
> > | > Instead, you'll crank your file limits to... let me guess, unlimited?
> > | >
> > | > And when you hit the system-wide limit, then what happens?
> > | >
> > | > Then it is our systems problem, isn't it.
> > | >
> > |
> > | i am not sure if you're suggesting that each program do getrlimit
> > | and acquire resources based on that, because it's a pita
> >
> > Gee whiz, writing programs is hard!  Let's go shopping!
> >
> > | what they could do is offer a reliable estimate (e.g. 5 open files per
> > | tab required)
> >
> > Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
> > any) *DEAL WITH IT*
> 
> but we're only talking about one resource and one error condition

OMG. System calls can fail. I'm shocked. How can anything work?!
 
> write wrappers for open, malloc, etc

Why wrappers? Just check the freaking return value and design your program
to behave in case something goes wrong.
 
> avoiding errors regarding stack limits is not as easy

Yes, so embrace them, design with failure in mind.
 
> obviously there's no reason for: a. every application replicating
> these wrappers (how many xmallocs have you seen, honest?) and b. the
> system not providing a consistent api

xmalloc is a dumb interface, since it terminates the process as soon as
the first malloc fails. Sure, it is the right thing for processes with
limited memory needs, but browsers are such pigs today that you should
do better than just showing an "Oops, something went wrong" page on next
startup.
 
> after you're done writing all the wrappers for your crappy browser,
> what do you do? notify the user that no resources can be allocated,
> try pushing the soft limit first, whatever. they still have to re-exec
> with higher limits

Maybe you could also close some of those 999 keep-alive sessions and
pre-load sessions you have open and retry. Seriously, why does a
web browser need 1024 file descriptors to be open at the same time?
Are you concurrently reading 500 homepages?
 
> why even bother?

because modern browsers suck. They suck big time. They assume complete
ownership of the system and think that consuming all resources just to
show the latest animated gif from 4chan is the right thing.

> 
> >
> >
> > Note that on a busy system, the ulimit is not the only thing holding
> > you back.  You may actually run into the maximum number of files the
> > system can have open at any given time (sure, that's also tweakable).
> > Just doing getrlimit isn't going to be sufficient...
> 
> doesn't matter

your attitude is the reason why we need multi-core laptops with 8GB of ram
to play one game of tic-tac-toe.

-- 
:wq Claudio



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Otto Moerbeek
On Thu, Mar 29, 2012 at 10:54:48AM -0430, Andres Perera wrote:

> On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd  wrote:
> > On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
> > | > Instead, you'll crank your file limits to... let me guess, unlimited?
> > | >
> > | > And when you hit the system-wide limit, then what happens?
> > | >
> > | > Then it is our systems problem, isn't it.
> > | >
> > |
> > | i am not sure if you're suggesting that each program do getrlimit
> > | and acquire resources based on that, because it's a pita
> >
> > Gee whiz, writing programs is hard!  Let's go shopping!
> >
> > | what they could do is offer a reliable estimate (e.g. 5 open files per
> > | tab required)
> >
> > Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
> > any) *DEAL WITH IT*
> 
> but we're only talking about one resource and one error condition
> 
> write wrappers for open, malloc, etc
> 
> avoiding errors regarding stack limits is not as easy

There are very few programs that actually hit stack limits. In most
cases it's unbounded recursion, signalling an error.

> 
> obviously there's no reason for: a. every application replicating
> these wrappers (how many xmallocs have you seen, honest?) and b. the
> system not providing a consistent api

Nah, you cannot create an API for this stuff; proper error handling and
adaptation to resource limits is a program-specific thing.

> 
> after you're done writing all the wrappers for your crappy browser,
> what do you do? notify the user that no resources can be allocated,
> try pushing the soft limit first, whatever. they still have to re-exec
> with higher limits
> 
> why even bother?

Stop using the crappy program. We prefer to apply back pressure to
crappy programming instead of accommodating it.

-Otto

> 
> >
> >
> > Note that on a busy system, the ulimit is not the only thing holding
> > you back.  You may actually run into the maximum number of files the
> > system can have open at any given time (sure, that's also tweakable).
> > Just doing getrlimit isn't going to be sufficient...
> 
> doesn't matter
> 
> >
> > Paul 'WEiRD' de Weerd
> >
> > --
> >>[<++>-]<+++.>+++[<-->-]<.>+++[<+
> > +++>-]<.>++[<>-]<+.--.[-]
> >                 http://www.weirdnet.nl/



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Andres Perera
On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd  wrote:
> On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
> | > Instead, you'll crank your file limits to... let me guess, unlimited?
> | >
> | > And when you hit the system-wide limit, then what happens?
> | >
> | > Then it is our systems problem, isn't it.
> | >
> |
> | i am not sure if you're suggesting that each program do getrlimit
> | and acquire resources based on that, because it's a pita
>
> Gee whiz, writing programs is hard!  Let's go shopping!
>
> | what they could do is offer a reliable estimate (e.g. 5 open files per
> | tab required)
>
> Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
> any) *DEAL WITH IT*

but we're only talking about one resource and one error condition

write wrappers for open, malloc, etc

avoiding errors regarding stack limits is not as easy

obviously there's no reason for: a. every application replicating
these wrappers (how many xmallocs have you seen, honest?) and b. the
system not providing a consistent api

after you're done writing all the wrappers for your crappy browser,
what do you do? notify the user that no resources can be allocated,
try pushing the soft limit first, whatever. they still have to re-exec
with higher limits

why even bother?

>
>
> Note that on a busy system, the ulimit is not the only thing holding
> you back.  You may actually run into the maximum number of files the
> system can have open at any given time (sure, that's also tweakable).
> Just doing getrlimit isn't going to be sufficient...

doesn't matter

>
> Paul 'WEiRD' de Weerd
>
> --
>>[<++>-]<+++.>+++[<-->-]<.>+++[<+
> +++>-]<.>++[<>-]<+.--.[-]
>                 http://www.weirdnet.nl/



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Paul de Weerd
On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
| > Instead, you'll crank your file limits to... let me guess, unlimited?
| >
| > And when you hit the system-wide limit, then what happens?
| >
| > Then it is our systems problem, isn't it.
| >
| 
| i am not sure if you're suggesting that each program do getrlimit
| and acquire resources based on that, because it's a pita

Gee whiz, writing programs is hard!  Let's go shopping!

| what they could do is offer a reliable estimate (e.g. 5 open files per
| tab required)

Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
any) *DEAL WITH IT*
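
A sketch of what that looks like for open(2), where EMFILE and ENFILE
are the per-process and system-wide "too many open files" cases; the
function name is made up:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>

    int
    open_checked(const char *path)
    {
            int fd;

            if ((fd = open(path, O_RDONLY)) == -1) {
                    if (errno == EMFILE || errno == ENFILE)
                            /* out of descriptors: close something idle
                             * and retry, instead of just dying */
                            fprintf(stderr, "%s: out of fds\n", path);
                    else
                            perror(path);
            }
            return (fd);    /* -1 on failure; the caller deals with it */
    }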


Note that on a busy system, the ulimit is not the only thing holding
you back.  You may actually run into the maximum number of files the
system can have open at any given time (sure, that's also tweakable).
Just doing getrlimit isn't going to be sufficient...

Paul 'WEiRD' de Weerd

-- 
>[<++>-]<+++.>+++[<-->-]<.>+++[<+
+++>-]<.>++[<>-]<+.--.[-]
 http://www.weirdnet.nl/ 



Re: Is nginx to complement or replace apache?

2012-03-29 Thread Andres Perera
On Wed, Mar 28, 2012 at 4:42 PM, Theo de Raadt  wrote:
>> >> Seeing the work that is done on nginx, as the Daily changelog shows, I was
>> >> thinking the same: that eventually nginx will replace httpd (it cannot
>> >> replace apache).
>> >> About that "too many files open": I ran into it just once, but Stuart
>> >> Henderson suggested altering the values in /etc/login.conf. I was
>> >> expecting some decent values there, but I found out from the FAQ that the
>> >> default file has the corresponding values for the minimal hardware
>> >> system OpenBSD is able to run on, so the giant machines need
>> >> adjusting.
>> >
>>
>> On Wed, Mar 28, 2012 at 11:44 PM, Theo de Raadt  
>> wrote:
>> > Baloney.
>> >
>> > If software cannot cope intelligently with soft resource limits,
>> > then such software is probably broken.
>> >
>> > Otherwise, let's just remove the entire resource limit subsystem, ok?
>>
>> No need to remove it, I think, because the sole usage of it has a
>> purpose since you've put it there from the start.
>> I can't call xxxterm probably broken, because my knowledge and
>> position don't allow me to do that. This package asks for a minimum of 1024
>> file descriptors
>
> What happens if it opens 1025 files?
>
>> and recommends 2048.
>
> What happens if it opens 2049 files?
>
>> I modified openfiles-max in
>> login.conf. That was the closest place I found to fulfill the request.
>> The other application is shotwell; it crashes when you try to open a
>> directory full of pictures in thumbnail mode. I don't know why the
>> developers used the open-all-files-at-once approach.
>
> So you crank your limits.
>
> What happens if it opens 1 file more than your limits?
>
> You crank the limits, again.
>
> What happens if it opens 1 file more than your new limits?
>
> When do you realize that you are the problem, because you don't
> tell the developers to fix their software so that it works in the
> resource limits allocated to it?
>
> Instead, you'll crank your file limits to... let me guess, unlimited?
>
> And when you hit the system-wide limit, then what happens?
>
> Then it is our systems problem, isn't it.
>

i am not sure if you're suggesting that each program do getrlimit
and acquire resources based on that, because it's a pita

what they could do is offer a reliable estimate (e.g. 5 open files per
tab required)
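
a sketch of that getrlimit dance combined with such an estimate -- the
5 is the guess above, not a measured figure:

    #include <sys/resource.h>
    #include <stdio.h>

    #define FDS_PER_TAB 5   /* assumed per-tab descriptor cost */

    int
    main(void)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
                    return (1);
            printf("soft fd limit %llu, room for about %llu tabs\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)(rl.rlim_cur / FDS_PER_TAB));
            return (0);
    }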



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Stuart Henderson
On 2012/03/28 22:53, Stuart Henderson wrote:
> On 2012/03/28 21:30, Stuart Henderson wrote:
> > On 2012/03/28 12:55, Kevin wrote:
> > > The only issue I seem to be having is a *ton* (tens of thousands) of
> > > random instances where the logfile repeatedly records 'too many open
> > > files' errors for several minutes on end.
> > 
> > Haven't seen this myself; I would recommend fstat to start with,
> > maybe ktrace if it's not obvious from that.
> 
> Don't know whether it's obvious from my mail or not, but just in case
> not I'll spell it out: actually look into this, work out *why* it's
> happening, or at least paste in some things other people can look at.
> 
> Also, can you post some of the complete log lines?

Okay, don't worry about posting these; I can reproduce it.



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Stuart Henderson
On 2012/03/28 21:30, Stuart Henderson wrote:
> On 2012/03/28 12:55, Kevin wrote:
> > The only issue I seem to be having is a *ton* (tens of thousands) of
> > random instances where the logfile repeatedly records 'too many open
> > files' errors for several minutes on end.
> 
> Haven't seen this myself; I would recommend fstat to start with,
> maybe ktrace if it's not obvious from that.

Don't know whether it's obvious from my mail or not, but just in case
not I'll spell it out: actually look into this, work out *why* it's
happening, or at least paste in some things other people can look at.

Also, can you post some of the complete log lines?



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Mihai Popescu
On Thu, Mar 29, 2012 at 12:12 AM, Theo de Raadt  wrote:
>> >> Seeing the work that is done on nginx, as the Daily changelog shows, I was
>> >> thinking the same: that eventually nginx will replace httpd (it cannot
>> >> replace apache).
>> >> About that "too many files open": I ran into it just once, but Stuart
>> >> Henderson suggested altering the values in /etc/login.conf. I was
>> >> expecting some decent values there, but I found out from the FAQ that the
>> >> default file has the corresponding values for the minimal hardware
>> >> system OpenBSD is able to run on, so the giant machines need
>> >> adjusting.
>> >
>>
>> On Wed, Mar 28, 2012 at 11:44 PM, Theo de Raadt  
>> wrote:
>> > Baloney.
>> >
>> > If software cannot cope intelligently with soft resource limits,
>> > then such software is probably broken.
>> >
>> > Otherwise, let's just remove the entire resource limit subsystem, ok?
>>
>> No need to remove it, I think, because the sole usage of it has a
>> purpose since you've put it there from the start.
>> I can't call xxxterm probably broken, because my knowledge and
>> position don't allow me to do that. This package asks for a minimum of 1024
>> file descriptors
>
> What happens if it opens 1025 files?
>
>> and recommends 2048.
>
> What happens if it opens 2049 files?
>
>> I modified openfiles-max in
>> login.conf. That was the closest place I found to fulfill the request.
>> The other application is shotwell; it crashes when you try to open a
>> directory full of pictures in thumbnail mode. I don't know why the
>> developers used the open-all-files-at-once approach.
>
> So you crank your limits.
>
> What happens if it opens 1 file more than your limits?
>
> You crank the limits, again.
>
> What happens if it opens 1 file more than your new limits?
>
> When do you realize that you are the problem, because you don't
> tell the developers to fix their software so that it works in the
> resource limits allocated to it?
>
> Instead, you'll crank your file limits to... let me guess, unlimited?
>
> And when you hit the system-wide limit, then what happens?
>
> Then it is our systems problem, isn't it.
>
>


"Look, if you intend by that utilization of an obscure colloquialism
to imply that my sanity is not entirely up to scratch, or indeed to
deny the semi-existence of my little half-"browser", I shall have to
ask you to listen to this!"


It's fine, I got the idea. I must admit I wasn't aware of what you've
explained. I took xxxterm to be good because the original author is well
known inside the community.
I will not be so quick to blame the system because of the ports.

So, xxxterm developers, please fix your application so that it works
with at most 128 file descriptors. Thank you.



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Theo de Raadt
> >> Seeing the work that is done on nginx, as the Daily changelog shows, I was
> >> thinking the same: that eventually nginx will replace httpd (it cannot
> >> replace apache).
> >> About that "too many files open": I ran into it just once, but Stuart
> >> Henderson suggested altering the values in /etc/login.conf. I was
> >> expecting some decent values there, but I found out from the FAQ that the
> >> default file has the corresponding values for the minimal hardware
> >> system OpenBSD is able to run on, so the giant machines need
> >> adjusting.
> >
> 
> On Wed, Mar 28, 2012 at 11:44 PM, Theo de Raadt  
> wrote:
> > Baloney.
> >
> > If software cannot cope intelligently with soft resource limits,
> > then such software is probably broken.
> >
> > Otherwise, let's just remove the entire resource limit subsystem, ok?
> 
> No need to remove it, I think, because the sole usage of it has a
> purpose since you've put it there from the start.
> I can't call xxxterm probably broken, because my knowledge and
> position don't allow me to do that. This package asks for a minimum of 1024
> file descriptors

What happens if it opens 1025 files? 

> and recommends 2048.

What happens if it opens 2049 files?

> I modified openfiles-max in
> login.conf. That was the closest place I found to fulfill the request.
> The other application is shotwell; it crashes when you try to open a
> directory full of pictures in thumbnail mode. I don't know why the
> developers used the open-all-files-at-once approach.

So you crank your limits.

What happens if it opens 1 file more than your limits?

You crank the limits, again.

What happens if it opens 1 file more than your new limits?

When do you realize that you are the problem, because you don't
tell the developers to fix their software so that it works in the
resource limits allocated to it?

Instead, you'll crank your file limits to... let me guess, unlimited?

And when you hit the system-wide limit, then what happens?

Then it is our systems problem, isn't it.



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Mihai Popescu
>> Seeing the work that is done on nginx, as the Daily changelog shows, I was
>> thinking the same: that eventually nginx will replace httpd (it cannot
>> replace apache).
>> About that "too many files open": I ran into it just once, but Stuart
>> Henderson suggested altering the values in /etc/login.conf. I was
>> expecting some decent values there, but I found out from the FAQ that the
>> default file has the corresponding values for the minimal hardware
>> system OpenBSD is able to run on, so the giant machines need
>> adjusting.
>

On Wed, Mar 28, 2012 at 11:44 PM, Theo de Raadt  wrote:
> Baloney.
>
> If software cannot cope intelligently with soft resource limits,
> then such software is probably broken.
>
> Otherwise, let's just remove the entire resource limit subsystem, ok?

No need to remove it, I think, because the sole usage of it has a
purpose since you've put it there from the start.
I can't call xxxterm probably broken, because my knowledge and
position don't allow me to do that. This package asks for a minimum of
1024 file descriptors and recommends 2048. I modified openfiles-max in
login.conf. That was the closest place I found to fulfill the request.
The other application is shotwell; it crashes when you try to open a
directory full of pictures in thumbnail mode. I don't know why the
developers used the open-all-files-at-once approach.

The old problem is here: http://marc.info/?l=openbsd-misc&m=13284387529&w=2

Thanks



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Theo de Raadt
> Seeing the work that is done on nginx, as the Daily changelog shows, I was
> thinking the same: that eventually nginx will replace httpd (it cannot
> replace apache).
> About that "too many files open": I ran into it just once, but Stuart
> Henderson suggested altering the values in /etc/login.conf. I was
> expecting some decent values there, but I found out from the FAQ that the
> default file has the corresponding values for the minimal hardware
> system OpenBSD is able to run on, so the giant machines need
> adjusting.

Baloney.

If software cannot cope intelligently with soft resource limits,
then such software is probably broken.

Otherwise, let's just remove the entire resource limit subsystem, ok?



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Stuart Henderson
On 2012/03/28 12:55, Kevin wrote:
> The only issue I seem to be having is a *ton* (tens of thousands) of
> random instances where the logfile repeatedly records 'too many open
> files' errors for several minutes on end.

Haven't seen this myself; I would recommend fstat to start with,
maybe ktrace if it's not obvious from that.
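
Something along these lines, assuming you know which nginx process is
complaining (the pid is a placeholder):

    # list the process's open descriptors
    fstat -p <pid>
    # if that isn't conclusive, trace its syscalls for a while,
    # then stop tracing and read the dump
    ktrace -p <pid>
    ktrace -C
    kdump | less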

Between the network sockets to clients and network or unix domain
sockets to fastcgi/other backends, you will bump into the default
file descriptor limits more quickly than when running apache with
mod_php.

> Then, it stops as suddenly
> as it starts, only to return again in a couple of hours.
> 
> Curiously, when this happens, relayd is happy as a clam as are our
> server monitors. Repeated manual checking of the sites shows nothing
> wrong either. Fast, complete page loads, no broken images, nothing.

There are separate file descriptor limits for different things:
openfiles-cur and openfiles-max, as well as the kernel limits.
"Too many open files" applies specifically to a process, not to
the whole system; see EMFILE/ENFILE in errno(2).
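
For reference, the login.conf(5) knobs under discussion look like this;
the class name and values are illustrative, not recommendations:

    daemon:\
            :openfiles-cur=512:\
            :openfiles-max=1024:\
            :tc=default: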

Note that relayd in particular raises its own file descriptor
limit up to openfiles-max.
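
In C the idiom looks roughly like this -- a sketch, not relayd's
actual code:

    #include <sys/resource.h>

    /* raise the soft fd limit (openfiles-cur) up to the
     * hard limit (openfiles-max) */
    int
    raise_fd_limit(void)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
                    return (-1);
            rl.rlim_cur = rl.rlim_max;
            return (setrlimit(RLIMIT_NOFILE, &rl));
    }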



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Mihai Popescu
Seeing the work that is done on nginx, as the Daily changelog shows, I was
thinking the same: that eventually nginx will replace httpd (it cannot
replace apache).
About that "too many files open": I ran into it just once, but Stuart
Henderson suggested altering the values in /etc/login.conf. I was
expecting some decent values there, but I found out from the FAQ that the
default file has the corresponding values for the minimal hardware
system OpenBSD is able to run on, so the giant machines need
adjusting.



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Kevin

Stuart Henderson wrote:

> On 2012-03-28, Kevin Chadwick  wrote:
>> Knowing nginx is on its way to base and having just seen some fixes
> It's already in base.

Coincidentally, I just moved all of our sites/servers over to nginx in 
ports just days before the integration into base was announced.


For what it's worth, I'm really happy with nginx thus far; certainly, 
the configuration seems easier and our site load times are down about 
50% across the board.


The only issue I seem to be having is a *ton* (tens of thousands) of 
random instances where the logfile repeatedly records 'too many open 
files' errors for several minutes on end. Then, it stops as suddenly as
it starts, only to return again in a couple of hours.


Curiously, when this happens, relayd is happy as a clam as are our 
server monitors. Repeated manual checking of the sites shows nothing 
wrong either. Fast, complete page loads, no broken images, nothing.


IOW: it's bitching, but I can't see any problems at all from the web 
aside from one solitary instance (when it wasn't even complaining about 
too many open files) where it couldn't connect to php-fpm and threw up 
the wonderful nginx 'could not connect to gateway' error. Grr.


I chalk it up to the learning curve and know that, ultimately, I'll get it sorted.



Re: Is nginx to complement or replace apache?

2012-03-28 Thread Stuart Henderson
On 2012-03-28, Kevin Chadwick  wrote:
> Knowing nginx is on its way to base and having just seen some fixes

It's already in base.

> for nginx on gentoo (some CVEs from 2009).

All but one of those were fixed long ago, and the other was fixed recently
(we do already have the fix for it).

http://nginx.org/en/security_advisories.html

> Is nginx going to complement apache in case users want features/prefer
> it, or replace apache as apache can no longer have time spent on it?

Long term it doesn't make sense to maintain both in base; the best solution
IMO would be to adjust as many ports as possible to work with nginx, and move
Apache to the ports tree for those who need it.

> Also, does anyone know if there are any CVEs applicable to base apache
> currently?

Hard to say; you can't look at general Apache problems because the
version in base has a *huge* number of changes. If someone is interested
in Apache and has the knowledge and time to work through CVEs and
figure out which apply, I would imagine their time would probably be
better spent on updating ports/www/apache-httpd and the major dependencies
(apr, apr-util etc) which are pretty much unmaintained at present.



Is nginx to complement or replace apache?

2012-03-28 Thread Kevin Chadwick
Knowing nginx is on its way to base and having just seen some fixes
for nginx on gentoo (some CVEs from 2009).

Is nginx going to complement apache in case users want features/prefer
it, or replace apache as apache can no longer have time spent on it?

Also, does anyone know if there are any CVEs applicable to base apache
currently?