RE: load limiting [WAS: Re: [Vserver] disk scheduling ?]

2007-06-06 Thread Matt Anger (manger)
Unfortunately, load is a "measure" (technically an exponentially weighted
moving average) of the run-queue length: high load means lots of context
switches, as many processes are fighting for the processor. So
restricting cpu usage could in fact make the load go higher, since processes
would take longer to finish and hence more of them would sit in the run queue.
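For illustration, the kernel computes each loadavg figure as exactly such an EWMA of the run-queue length, sampled every 5 seconds; a rough sketch of the 1-minute average (the constant run-queue length of 4 is a made-up input):

```shell
# loadavg-style EWMA: one sample every 5 s, 1-minute decay constant,
# fed a constant simulated run-queue length of 4:
awk 'BEGIN {
  e = exp(-5.0 / 60)            # decay factor for the 1-minute average
  load = 0
  for (i = 1; i <= 12; i++)     # 12 samples = one minute
    load = load * e + 4 * (1 - e)
  printf "load after 1 min: %.2f\n", load
}'
# prints: load after 1 min: 2.53
```

Note the average is still well below 4 after a full minute - which is why a load spike takes a while to show up in `uptime`, and a while to drain back out.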
-Matt 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Garcon du
Monde
Sent: Wednesday, June 06, 2007 7:26 AM
To: vserver@list.linux-vserver.org
Subject: load limiting [WAS: Re: [Vserver] disk scheduling ?]


hi,

Tony Lewis wrote:
> Herbert Poetzl wrote:
>> On Wed, May 23, 2007 at 01:07:55AM +0200, Attila Csipa wrote:
>>> I have a problem where one of the contexts is really heavy on IO and
>>> I'd try to limit that.
>>
>> the question here is, _why_  .. maybe your service
>> really has a high I/O demand, maybe the service is
>> just badly configured ...
>
> I'll add my "me too" to this.  The _why_ is the same as for CPU limiting
> - so we can treat these vservers like they don't have to know or care
> about what other vservers do.


me too! :) well, actually, a little bit different... i have a (host)
server that is running about 15 guests. for most of these, it has been
fine, but there are a couple that are now running quite intensely. this
is making the loadavg for the host - and all the guests - quite high, so
that generally it is 5-10 and sometimes up to 40 or more. when it is
high, everything runs slow.

my question is, is there a way to restrict the loads (from 'uptime' or
'cat /proc/loadavg') on the individual guests (from the host, of
course), or is it just a matter of limiting the memory and cpu usages
alone? and if so, how to figure out which is having the biggest impact?

i know that the ideal situation would be to move the high-load vservers
onto dedicated physical hardware, but unfortunately that is not an
option that is open to me.

thanks for any help,

--gdm
___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


Re: [Vserver] disk scheduling ?

2007-06-06 Thread Herbert Poetzl
On Wed, Jun 06, 2007 at 10:36:25AM +0930, Admin wrote:
> On Wed, 6 Jun 2007 03:07:37 am Nicolas Cadou wrote:
> > On Tuesday 5 June 2007 06:52, Tony Lewis wrote:
> > > Nicolas Cadou wrote:
> > > > On Sunday 3 June 2007 02:15, Tony Lewis wrote:
> > > >> My context is this: one vserver runs a popular web site, and on
> > > >> another one, occasionally I shift multi-gig files around, with cp. 
> > > >> When I'm doing that, the website vserver grinds to a halt - well,
> > > >> responds to requests quite slowly anyway.
> > > >
> > > > Instead of cp I use rsync --bwlimit=7000, which throttles I/O to a bit
> > > > less than 7MB/s. Works for local disk-to-disk copying, and works quite
> > > > well.
> > >
> > > There's always a workaround, but that's the same as renice'ing processes
> > > on one vserver to be cognisant of the needs of another vserver.  It's
> > > what the CPU limiting handles, so vservers can be more autonomous.
> >
> > I never came to try it, but this might help:
> >
> > http://linux-vserver.org/Frequently_Asked_Questions#Disk_I.2FO_limiting.3F_
> >Is_that_possible.3F
> 
> A while back I made some patches for util-vserver and a vserver-patched
> kernel which implemented ionice support
> 
> http://www.users.on.net/~anonc/.patches/vserver/
> 
> the readme.txt file has details on requirements and usage
> 
> it should be simple to adapt these to the latest releases.

maybe it would make sense to put guests (by default
or via option) into the idle class on a Linux-VServer
system/kernel?
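For the manual version of that idea (a sketch using util-linux ionice; paths and the pid are placeholders, and the idle class needs the CFQ scheduler):

```shell
# CFQ idle class (-c3): the command only gets disk time that no
# other process wants, so the web guest keeps priority:
ionice -c3 cp /vservers/backup/huge.tar /mnt/archive/

# or demote a process that is already running (pid is illustrative):
ionice -c3 -p 4242
```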

best,
Herbert

___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


load limiting [WAS: Re: [Vserver] disk scheduling ?]

2007-06-06 Thread Garcon du Monde

hi,

Tony Lewis wrote:
> Herbert Poetzl wrote:
>> On Wed, May 23, 2007 at 01:07:55AM +0200, Attila Csipa wrote:
>>> I have a problem where one of the contexts is really heavy on IO and
>>> I'd try to limit that.
>>
>> the question here is, _why_  .. maybe your service
>> really has a high I/O demand, maybe the service is
>> just badly configured ...
>
> I'll add my "me too" to this.  The _why_ is the same as for CPU limiting
> - so we can treat these vservers like they don't have to know or care
> about what other vservers do.


me too! :) well, actually, a little bit different... i have a (host)
server that is running about 15 guests. for most of these, it has been
fine, but there are a couple that are now running quite intensely. this
is making the loadavg for the host - and all the guests - quite high, so
that generally it is 5-10 and sometimes up to 40 or more. when it is
high, everything runs slow.

my question is, is there a way to restrict the loads (from 'uptime' or
'cat /proc/loadavg') on the individual guests (from the host, of
course), or is it just a matter of limiting the memory and cpu usages
alone? and if so, how to figure out which is having the biggest impact?

i know that the ideal situation would be to move the high-load vservers
onto dedicated physical hardware, but unfortunately that is not an
option that is open to me.
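(for spotting which guest hurts most, util-vserver ships vserver-stat, which lists per-guest process counts, memory and CPU time as seen from the host; the guest name below is a placeholder:)

```shell
# per-guest resource overview from the host:
vserver-stat

# a guest's own loadavg, run from the host ("web1" is a placeholder):
vserver web1 exec cat /proc/loadavg
```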

thanks for any help,

--gdm
___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


Re: [Vserver] disk scheduling ?

2007-06-05 Thread Admin
On Wed, 6 Jun 2007 03:07:37 am Nicolas Cadou wrote:
> On Tuesday 5 June 2007 06:52, Tony Lewis wrote:
> > Nicolas Cadou wrote:
> > > On Sunday 3 June 2007 02:15, Tony Lewis wrote:
> > >> My context is this: one vserver runs a popular web site, and on
> > >> another one, occasionally I shift multi-gig files around, with cp. 
> > >> When I'm doing that, the website vserver grinds to a halt - well,
> > >> responds to requests quite slowly anyway.
> > >
> > > Instead of cp I use rsync --bwlimit=7000, which throttles I/O to a bit
> > > less than 7MB/s. Works for local disk-to-disk copying, and works quite
> > > well.
> >
> > There's always a workaround, but that's the same as renice'ing processes
> > on one vserver to be cognisant of the needs of another vserver.  It's
> > what the CPU limiting handles, so vservers can be more autonomous.
>
> I never came to try it, but this might help:
>
> http://linux-vserver.org/Frequently_Asked_Questions#Disk_I.2FO_limiting.3F_
>Is_that_possible.3F

A while back I made some patches for util-vserver and a vserver-patched kernel 
which implemented ionice support

http://www.users.on.net/~anonc/.patches/vserver/

the readme.txt file has details on requirements and usage

it should be simple to adapt these to the latest releases.
___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


Re: [Vserver] disk scheduling ?

2007-06-05 Thread Nicolas Cadou
On Tuesday 5 June 2007 06:52, Tony Lewis wrote:
> Nicolas Cadou wrote:
> > On Sunday 3 June 2007 02:15, Tony Lewis wrote:
> >> My context is this: one vserver runs a popular web site, and on another
> >> one, occasionally I shift multi-gig files around, with cp.  When I'm
> >> doing that, the website vserver grinds to a halt - well, responds to
> >> requests quite slowly anyway.
> >
> > Instead of cp I use rsync --bwlimit=7000, which throttles I/O to a bit
> > less than 7MB/s. Works for local disk-to-disk copying, and works quite
> > well.
>
> There's always a workaround, but that's the same as renice'ing processes
> on one vserver to be cognisant of the needs of another vserver.  It's
> what the CPU limiting handles, so vservers can be more autonomous.

I never came to try it, but this might help:

http://linux-vserver.org/Frequently_Asked_Questions#Disk_I.2FO_limiting.3F_Is_that_possible.3F

-- 

Nicolas Cadou

Cobi Informatique Inc


___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


Re: [Vserver] disk scheduling ?

2007-06-05 Thread Tony Lewis

Nicolas Cadou wrote:
> On Sunday 3 June 2007 02:15, Tony Lewis wrote:
>> My context is this: one vserver runs a popular web site, and on another
>> one, occasionally I shift multi-gig files around, with cp.  When I'm doing
>> that, the website vserver grinds to a halt - well, responds to requests
>> quite slowly anyway.
>
> Instead of cp I use rsync --bwlimit=7000, which throttles I/O to a bit less
> than 7MB/s. Works for local disk-to-disk copying, and works quite well.

There's always a workaround, but that's the same as renice'ing processes
on one vserver to be cognisant of the needs of another vserver.  It's
what the CPU limiting handles, so vservers can be more autonomous.


Tony

___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


Re: [Vserver] disk scheduling ?

2007-06-04 Thread Nicolas Cadou
On Sunday 3 June 2007 02:15, Tony Lewis wrote:
> My context is this: one vserver runs a popular web site, and on another
> one, occasionally I shift multi-gig files around, with cp.  When I'm doing
> that, the website vserver grinds to a halt - well, responds to requests
> quite slowly anyway.

Instead of cp I use rsync --bwlimit=7000, which throttles I/O to a bit less 
than 7MB/s. Works for local disk-to-disk copying, and works quite well.

-- 

Nicolas Cadou

Cobi Informatique Inc.


___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


Re: [Vserver] disk scheduling ?

2007-06-03 Thread Tony Lewis

Herbert Poetzl wrote:
> On Wed, May 23, 2007 at 01:07:55AM +0200, Attila Csipa wrote:
>> I have a problem where one of the contexts is really heavy on IO and
>> I'd try to limit that.
>
> the question here is, _why_  .. maybe your service
> really has a high I/O demand, maybe the service is
> just badly configured ...

I'll add my "me too" to this.  The _why_ is the same as for CPU limiting
- so we can treat these vservers like they don't have to know or care
about what other vservers do.


My context is this: one vserver runs a popular web site, and on another 
one, occasionally I shift multi-gig files around, with cp.  When I'm 
doing that, the website vserver grinds to a halt - well, responds to 
requests quite slowly anyway.


I'd like to be able to limit the amount of disk I/O the other vserver 
could do, so I don't adversely affect the webserver.


Tony Lewis

___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


Re: [Vserver] disk scheduling ?

2007-05-31 Thread Herbert Poetzl
On Wed, May 23, 2007 at 01:07:55AM +0200, Attila Csipa wrote:
> A question - is it possible to have something like the CPU token  
> mechanism, but for IO operations (f.e. hdd-s) ?   

there is, but the main problem here is that most of
the I/O is done asynchronously, i.e. it is not done
by the context itself, but by the kernel in general

> I have a problem where one of the contexts is really heavy on IO and  
> I'd try to limit that.

the question here is, _why_  .. maybe your service
really has a high I/O demand, maybe the service is
just badly configured ...

> The scheduler is CFQ, but that does not help much on its own; it's
> not the scheduling itself that is the problem - if the HDD activity
> is high, another context, running apaches, will slow down serving
> files. Running out of children because of the slowdown, apache will start
> forking new processes to fulfill the incoming demands; this however
> triggers swapping after running out of ram, which in turn makes
> everything even slower, starting a nasty IO-bound load spiral.

well, IMHO the configuration needs some adjustments
here, for example, apache should not spawn more
workers than the memory can handle (note that workers
can serve more than one request)
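A back-of-the-envelope sizing along those lines (both numbers are purely illustrative): cap the prefork worker count at the RAM you can spare divided by one worker's resident size.

```shell
# rough prefork MaxClients estimate; the inputs are assumptions:
awk 'BEGIN {
  avail_mb = 512       # RAM budget for Apache in this guest
  per_child_mb = 16    # typical RSS of one prefork worker
  printf "MaxClients %d\n", avail_mb / per_child_mb
}'
# prints: MaxClients 32
```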

> To make things (maybe) even harder, the IO intensive context is not   
> actually reading/writing all that much data but rather seeking among  
> small blocks of it.   

hmm, maybe it would be possible to keep the relevant
parts in memory, or at least use an index?

> Is there a recommended/usual way of solving IO bound problems among   
> vservers ?

really depends on the problem, my general advice is
to separate I/O bound guests and put them on a really
fast I/O system ...

> Putting in CPU limits or tokens does not help as the CPU-s are
> spending their time on idle or waiting even now so they are always
> full of tokens.

not unusual ... we thought about adding (or in this
case subtracting) a penalty for I/O operations; maybe
that would be a viable solution for this kind of case,
but I think that still needs some testing ...

HTH,
Herbert

___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


[Vserver] disk scheduling ?

2007-05-22 Thread Attila Csipa
A question - is it possible to have something like the CPU token mechanism,
but for IO operations (e.g. hard disks)? I have a problem where one of the
contexts is really heavy on IO and I'd try to limit that.

The scheduler is CFQ, but that does not help much on its own; it's not the
scheduling itself that is the problem. If the HDD activity is high, another
context, running apaches, will slow down serving files. Running out of
children because of the slowdown, apache will start forking new processes
to fulfill the incoming demands; this however triggers swapping after
running out of ram, which in turn makes everything even slower, starting a
nasty IO-bound load spiral. To make things (maybe) even harder, the
IO-intensive context is not actually reading/writing all that much data but
rather seeking among small blocks of it.

Is there a recommended/usual way of solving IO-bound problems among
vservers? Putting in CPU limits or tokens does not help, as the CPUs are
spending their time idle or waiting even now, so they are always full of
tokens.


___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver