Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
Antonio Vargas wrote:
> IIRC, about two or three years ago (or maybe in the 2.6.10 timeframe), there was a patch which managed to pass interactivity from one app to another when there was a pipe or udp connection between them. This meant that a marked-as-interactive xterm would, when blocked waiting for an Xserver response, transfer some of its interactiveness to the Xserver, and apparently it worked very well for desktop workloads, so maybe adapting it for this new scheduler would be good.

And it was dropped because of some very nasty side effect, probably a DoS opportunity.

Helge Hafting
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
Op Sunday 18 March 2007, schreef Con Kolivas:
> On Monday 12 March 2007 22:26, Al Boldi wrote:
> > Con Kolivas wrote:
> > > On Monday 12 March 2007 15:42, Al Boldi wrote:
> > > > Con Kolivas wrote:
> > > > > On Monday 12 March 2007 08:52, Con Kolivas wrote:
> > > > > > And thank you! I think I know what's going on now. I think each rotation is followed by another rotation before the higher priority task gets a look in in schedule() to even get quota and add it to the runqueue quota. I'll try a simple change to see if that helps. Patch coming up shortly.
> > > > >
> > > > > Can you try the following patch and see if it helps. There's also one minor preemption logic fix in there that I'm planning on including. Thanks!
> > > >
> > > > Applied on top of v0.28 mainline, and there is no difference.
> > > >
> > > > What's it look like on your machine?
> > >
> > > The higher priority one always gets 6-7ms whereas the lower priority one runs 6-7ms and then one larger, perfectly bound expiration amount. Basically exactly as I'd expect. The higher priority task gets precisely RR_INTERVAL maximum latency whereas the lower priority task gets RR_INTERVAL min and full expiration (according to the virtual deadline) as a maximum. That's exactly how I intend it to work. Yes, I realise that the max latency ends up being longer intermittently on the niced task, but that's -in my opinion- perfectly fine as a compromise to ensure the nice 0 one always gets low latency.
> >
> > I think it should be possible to spread this max expiration latency across the rotation, should it not?
>
> There is a way that I toyed with of creating maps of slots to use for each different priority, but it broke the O(1) nature of the virtual deadline management.
> Minimising algorithmic complexity seemed more important to maintain than getting slightly better latency spreads for niced tasks. It also appeared to be less cache friendly in design. I could certainly try and implement it, but how much importance are we to place on latency of niced tasks? Are you aware of any usage scenario where latency sensitive tasks are ever significantly niced in the real world?

I always nice down heavy games; it makes them smoother...

--
Disclaimer: Everything I do, think and say is based on the world view I currently have. I am not responsible for changes to the world, or to my view of it, nor for my own behaviour resulting from them. Everything I say is meant kindly, unless explicitly stated otherwise.
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
On 3/12/07, jos poortvliet <[EMAIL PROTECTED]> wrote:
Op Monday 12 March 2007, schreef Con Kolivas:
> On Tuesday 13 March 2007 01:14, Al Boldi wrote:
> > Con Kolivas wrote:
> > > > > The higher priority one always gets 6-7ms whereas the lower priority one runs 6-7ms and then one larger, perfectly bound expiration amount. Basically exactly as I'd expect. The higher priority task gets precisely RR_INTERVAL maximum latency whereas the lower priority task gets RR_INTERVAL min and full expiration (according to the virtual deadline) as a maximum. That's exactly how I intend it to work. Yes, I realise that the max latency ends up being longer intermittently on the niced task, but that's -in my opinion- perfectly fine as a compromise to ensure the nice 0 one always gets low latency.
> > > >
> > > > I think it should be possible to spread this max expiration latency across the rotation, should it not?
> > >
> > > There is a way that I toyed with of creating maps of slots to use for each different priority, but it broke the O(1) nature of the virtual deadline management. Minimising algorithmic complexity seemed more important to maintain than getting slightly better latency spreads for niced tasks. It also appeared to be less cache friendly in design. I could certainly try and implement it, but how much importance are we to place on latency of niced tasks? Are you aware of any usage scenario where latency sensitive tasks are ever significantly niced in the real world?
> >
> > It only takes one negatively nice'd proc to affect X adversely.
>
> I have an idea. Give me some time to code up my idea. Lack of sleep is making me very unpleasant.

You're excited by RSDL and the positive comments, aren't you?
Well, don't forget to sleep, sleeping makes ppl smarter you know ;-)

IIRC, about two or three years ago (or maybe in the 2.6.10 timeframe), there was a patch which managed to pass interactivity from one app to another when there was a pipe or udp connection between them. This meant that a marked-as-interactive xterm would, when blocked waiting for an Xserver response, transfer some of its interactiveness to the Xserver, and apparently it worked very well for desktop workloads, so maybe adapting it for this new scheduler would be good.

--
Greetz, Antonio Vargas aka winden of network
http://network.amigascne.org/
[EMAIL PROTECTED] [EMAIL PROTECTED]

Every day, every year you have to work you have to study you have to scene.
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
Op Monday 12 March 2007, schreef Con Kolivas:
> On Tuesday 13 March 2007 01:14, Al Boldi wrote:
> > Con Kolivas wrote:
> > > > > The higher priority one always gets 6-7ms whereas the lower priority one runs 6-7ms and then one larger, perfectly bound expiration amount. Basically exactly as I'd expect. The higher priority task gets precisely RR_INTERVAL maximum latency whereas the lower priority task gets RR_INTERVAL min and full expiration (according to the virtual deadline) as a maximum. That's exactly how I intend it to work. Yes, I realise that the max latency ends up being longer intermittently on the niced task, but that's -in my opinion- perfectly fine as a compromise to ensure the nice 0 one always gets low latency.
> > > >
> > > > I think it should be possible to spread this max expiration latency across the rotation, should it not?
> > >
> > > There is a way that I toyed with of creating maps of slots to use for each different priority, but it broke the O(1) nature of the virtual deadline management. Minimising algorithmic complexity seemed more important to maintain than getting slightly better latency spreads for niced tasks. It also appeared to be less cache friendly in design. I could certainly try and implement it, but how much importance are we to place on latency of niced tasks? Are you aware of any usage scenario where latency sensitive tasks are ever significantly niced in the real world?
> >
> > It only takes one negatively nice'd proc to affect X adversely.
>
> I have an idea. Give me some time to code up my idea. Lack of sleep is making me very unpleasant.

You're excited by RSDL and the positive comments, aren't you? Well, don't forget to sleep, sleeping makes ppl smarter you know ;-)
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
On 3/12/07, jos poortvliet <[EMAIL PROTECTED]> wrote:
Op Monday 12 March 2007, schreef Al Boldi:
> > It only takes one negatively nice'd proc to affect X adversely.
goes faster than ever)? Or is this really the scheduler's fault?

Take this with a grain of salt, but I don't think this is the scheduler's _fault_. That said, if the scheduler can fix it, it's not necessarily a bad thing.

--
~Mike - Just the crazy copy cat.
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
Op Monday 12 March 2007, schreef Al Boldi:
> Con Kolivas wrote:
> > > > The higher priority one always gets 6-7ms whereas the lower priority one runs 6-7ms and then one larger, perfectly bound expiration amount. Basically exactly as I'd expect. The higher priority task gets precisely RR_INTERVAL maximum latency whereas the lower priority task gets RR_INTERVAL min and full expiration (according to the virtual deadline) as a maximum. That's exactly how I intend it to work. Yes, I realise that the max latency ends up being longer intermittently on the niced task, but that's -in my opinion- perfectly fine as a compromise to ensure the nice 0 one always gets low latency.
> > >
> > > I think it should be possible to spread this max expiration latency across the rotation, should it not?
> >
> > There is a way that I toyed with of creating maps of slots to use for each different priority, but it broke the O(1) nature of the virtual deadline management. Minimising algorithmic complexity seemed more important to maintain than getting slightly better latency spreads for niced tasks. It also appeared to be less cache friendly in design. I could certainly try and implement it, but how much importance are we to place on latency of niced tasks? Are you aware of any usage scenario where latency sensitive tasks are ever significantly niced in the real world?
>
> It only takes one negatively nice'd proc to affect X adversely.

Then, maybe, we should start nicing X again, like we did/had to do until a few years ago? Or should we just wait until X gets fixed (after all, development goes faster than ever)? Or is this really the scheduler's fault?

> Thanks!
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
Xavier Bestel wrote:
> On Tue, 2007-03-06 at 09:10 +1100, Con Kolivas wrote:
> > Hah I just wish gears would go away. If I get hardware where it runs at just the right speed it looks like it doesn't move at all. On other hardware the wheels go backwards and forwards where the screen refresh rate is just perfectly a factor of the frames per second (or something like that).
> >
> > This is not a cpu scheduler test and you're inferring that there are cpu scheduling artefacts based on an application that has bottlenecks at different places depending on the hardware combination.
>
> I'd add that Xorg has its own scheduler (for X11 operations, of course), that has its own quirks, and chances are that it is the one you're testing with glxgears. And as Con said, as long as glxgears does more FPS than your screen refresh rate, its flickering is completely meaningless: it doesn't even attempt to sync with vblank. Al, you'd better try with Quake3 or Nexuiz, or even Blender if you want to test 3D interactivity under load.

Actually, games aren't really useful for evaluating scheduler performance, due to their bursty nature. OTOH, gears runs full throttle, including any of its bottlenecks. In fact, it's the bottlenecks that add to its realism. It exposes underlying scheduler hiccups visually, unless they are buffered by the display driver, in which case you just use the vesa driver to be sure.

If gears starts to flicker on you, just slow it down with a cpu hog like:

# while :; do :; done &

Add as many hogs as you need to make the hiccups visible. Again, these hiccups are only visible when using uneven nice+ levels.

BTW, another way to show these hiccups would be through some kind of a cpu/proc timing-tracer. Do we have something like that?

Thanks!
--
Al
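[Editor's note: one rough stand-in for the cpu/proc timing-tracer Al asks about (my addition, not something proposed in the thread) is the per-task /proc/<pid>/schedstat file, available on kernels built with scheduler statistics. It exposes nanoseconds spent on the cpu, nanoseconds spent waiting on a runqueue, and the number of timeslices run, so sampling it around a workload shows scheduling delay directly. A minimal sketch:]

```shell
#!/bin/sh
# Sketch: read /proc/<pid>/schedstat to estimate scheduling delay.
# Fields: ns spent on-cpu, ns spent waiting on a runqueue, timeslice count.
# Requires a kernel with scheduler statistics; pid defaults to this shell.
pid=${1:-$$}
read oncpu waited slices < "/proc/$pid/schedstat"
echo "pid=$pid on_cpu_ns=$oncpu runqueue_wait_ns=$waited timeslices=$slices"
```

[Sampling the file twice and diffing runqueue_wait_ns gives the delay accumulated over that interval.]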
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
On Tue, 2007-03-06 at 09:10 +1100, Con Kolivas wrote:
> Hah I just wish gears would go away. If I get hardware where it runs at just the right speed it looks like it doesn't move at all. On other hardware the wheels go backwards and forwards where the screen refresh rate is just perfectly a factor of the frames per second (or something like that).
>
> This is not a cpu scheduler test and you're inferring that there are cpu scheduling artefacts based on an application that has bottlenecks at different places depending on the hardware combination.

I'd add that Xorg has its own scheduler (for X11 operations, of course), that has its own quirks, and chances are that it is the one you're testing with glxgears. And as Con said, as long as glxgears does more FPS than your screen refresh rate, its flickering is completely meaningless: it doesn't even attempt to sync with vblank. Al, you'd better try with Quake3 or Nexuiz, or even Blender if you want to test 3D interactivity under load.

Xav
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
On Tuesday 06 March 2007 05:23, Al Boldi wrote:
> Con Kolivas wrote:
> > Gears just isn't an interactive task and just about anything but gears would be a better test case since its behaviour varies wildly under different combinations of graphics cards, memory bandwidth, cpu and so on.
>
> What about reflect? It has the same problem.
>
> > I'm not even sure what you're trying to prove with gears.
>
> RSDL fairness.
>
> Ok, try this:
> # gears & gears &
> Both run smoothly.
> # nice gears & nice gears &
> Both run smoothly.
>
> Now:
> # gears & nice -10 gears &
> Both stutter.
>
> But:
> # gears & nice --10 gears &
> Both run smoothly.
>
> Something looks fishy with nice levels.

Hah I just wish gears would go away. If I get hardware where it runs at just the right speed it looks like it doesn't move at all. On other hardware the wheels go backwards and forwards where the screen refresh rate is just perfectly a factor of the frames per second (or something like that).

This is not a cpu scheduler test and you're inferring that there are cpu scheduling artefacts based on an application that has bottlenecks at different places depending on the hardware combination. To imply something is fishy with nice levels, do a test that _only_ uses cpu (and not the bus, memory bandwidth, or the gpu, and has no driver interactions) and prove that there is something wrong. The cpu scheduler has no control over what happens to other resources on the machine. The -rt tree tries to address these factors, for example, but it's a huge - some would say insurmountable - thing to try and manage at all levels in a general purpose operating system.

The cpu is proportioned out very fairly with rsdl, on both quota and latency, according to nice vs cpu usage. When something is fully cpu bound on rsdl, its average and maximum latency will be greater than something that only intermittently uses cpu.
The (somewhat lengthy and not easily digestible) docs describe how you can model what these latencies will be.

> Otherwise, your scheduler looks great!

Thanks!

--
-ck
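[Editor's note: a back-of-envelope illustration of the latency bound being discussed, not taken from the RSDL docs: a nice-0 task waits at most one RR_INTERVAL, while a niced task can additionally wait out roughly one interval per better-priority runnable task before its expiration slot arrives. The task count here is an assumed example value:]

```shell
#!/bin/sh
# Illustrative RSDL worst-case latency arithmetic (assumed model and numbers).
rr_interval_ms=6     # RR_INTERVAL, the 6ms base timeslice quoted in the thread
better_tasks=4       # assumed number of runnable better-priority tasks
echo "nice-0 worst case: ${rr_interval_ms}ms"
echo "niced worst case:  $(( rr_interval_ms + better_tasks * rr_interval_ms ))ms"
```

[With four better-priority tasks this gives a 6ms bound for the nice-0 task and 30ms for the niced one, matching the shape of Con's description: small fixed latency at nice 0, a larger but still bounded expiration latency when niced.]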
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
On Monday 05 March 2007 22:59, Al Boldi wrote:
> Markus Törnqvist wrote:
> > On Mon, Mar 05, 2007 at 08:34:45AM +0300, Al Boldi wrote:
> > > Ok, gears is smooth when you run "make -j4", but with "nice make -j4", gears becomes bursty. This looks like a problem with nice-levels. In general, looking subjectively at top d.1, procs appear to show jerkiness when nice'd.

I wouldn't place much value on what you can see just from looking at top. Gears just isn't an interactive task, and just about anything but gears would be a better test case since its behaviour varies wildly under different combinations of graphics cards, memory bandwidth, cpu and so on. I'm not even sure what you're trying to prove with gears. If it's to see "smooth behaviour" under load then gears is not the thing to do it with. Maybe unbuffered live video on xawtv or something? Perhaps even capturing video where you know what the cpu usage of the video codec you're using will be. Say you knew xvid at a fixed bitrate required 30% cpu to encode a live video; then you could see if it did it in real time with a make -j2 running, assuming you don't run out of disk bandwidth and so on. Or something that required 75% cpu while you ran a nice -19 make -j4. Or even try interbench and fiddle with the nice values and size of the loads, since those options exist, and see what the latencies and %desired cpu are. By default, without arguments, interbench looks for unfair scheduling and finds plenty of it in other scheduler designs. Either way, your testcase must mean something to you.

> > Don't use glxgears, please. Ever. Unless you want meaningless gears.

That I definitely agree with.

> > It displays totally erratic behaviour anyway, and does sched_yield (strace tells us this) which means IT GIVES UP ITS TIME TO RUN, ie yields the cpu to someone else.
>
> I just strace'd it here. It doesn't show any yield in the mesa-5.0 version. Which version are you using?

Curious.
On a machine with onboard intel graphics (915 driver), and on an ATI r200
dri-based driver, I can see yields. Yet with the nvidia driver I see no
yields. The sched_yield seems to be done by the driver rather than the
application. That would cause woeful performance. How odd that modern
drivers out there today are still using this basically defunct unix
function.

No, I'm not interested in getting into a discussion of the "possible" uses
of sched_yield. That's been done to death many times before.

--
-ck
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
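Not part of the original thread, but the cost being discussed is easy to see in miniature. The sketch below (assuming Linux and Python's `os.sched_yield` wrapper) compares a plain busy loop with one that calls sched_yield on every pass, which is roughly what a driver-injected yield costs a render loop on a loaded system:

```python
import os
import time

def spin(seconds, yield_cpu):
    """Busy-loop for `seconds`, optionally calling sched_yield each pass.

    Returns the number of iterations completed.
    """
    n = 0
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        if yield_cpu:
            os.sched_yield()  # hand the CPU to any other runnable task
        n += 1
    return n

if __name__ == "__main__":
    plain = spin(0.5, yield_cpu=False)
    yielding = spin(0.5, yield_cpu=True)
    # On an otherwise busy system the yielding loop gets far less work done;
    # on an idle one the difference is just the syscall overhead.
    print("plain:", plain, "yielding:", yielding)
```

On an idle machine the two counts differ only by syscall overhead; run a background load (e.g. a kernel compile) to see the yielding loop starve, which is the behaviour the thread attributes to some GL drivers.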
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
Markus Törnqvist wrote:
> On Mon, Mar 05, 2007 at 08:34:45AM +0300, Al Boldi wrote:
> > Ok, gears is smooth when you run "make -j4", but with "nice make -j4",
> > gears becomes bursty. This looks like a problem with nice-levels. In
> > general, looking subjectively at top d.1, procs appear to show jerkiness
> > when nice'd.
>
> Don't use glxgears, please. Ever. Unless you want meaningless gears.
>
> It displays totally erratic behaviour anyway, and does sched_yield (strace
> tells us this) which means IT GIVES UP ITS TIME TO RUN, ie yields the
> cpu to someone else.

I just strace'd it here. It doesn't show any yield in the mesa-5.0 version.
Which version are you using?

> > Do you have an objective test-case that can show the even-ness of RSDL
> > in both nice'd and normal scenarios?
>
> A big movie, like DVD-quality, full resolution should do the trick.
> That's what I used ;)

The problem with audio/video is that they usually do buffering, which hides
scheduler anomalies.

> Debian and Ubuntu at least ship "stress" which I also used and reniced
> in-flight with excellent results, but do note to start off with small
> settings and work up or you might overcommit your box, in a way which
> no scheduler can handle.

Can you give a link?

Thanks!

--
Al
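The point about buffering is worth illustrating: a media player's cache absorbs short scheduler stalls, so a more direct probe is to measure how late timed wakeups actually fire, which is roughly what interbench does. A small sketch (not from the thread; plain Python, no special privileges assumed):

```python
import statistics
import time

def wakeup_jitter(period_ms=10, samples=200):
    """Sleep a fixed period repeatedly and record how late each wakeup is.

    Scheduling delays show up directly here; there is no buffer to smooth
    them over the way an audio/video player's cache would.
    """
    late = []
    for _ in range(samples):
        t0 = time.monotonic()
        time.sleep(period_ms / 1000.0)
        # lateness = actual elapsed time minus the requested period, in ms
        late.append((time.monotonic() - t0) * 1000.0 - period_ms)
    return max(late), statistics.mean(late)

if __name__ == "__main__":
    worst, avg = wakeup_jitter()
    print(f"worst wakeup lateness: {worst:.2f} ms, average: {avg:.2f} ms")
```

Running this under an unloaded system, then again under `make -j4` (nice'd and not), gives a number you can actually compare across schedulers, unlike watching a movie for dropped frames.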
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
On Tuesday 06 March 2007 05:23, Al Boldi wrote:
> Con Kolivas wrote:
> > Gears just isn't an interactive task and just about anything but gears
> > would be a better test case since its behaviour varies wildly under
> > different combinations of graphics cards, memory bandwidth, cpu and so
> > on.
>
> What about reflect? It has the same problem.
>
> > I'm not even sure what you're trying to prove with gears.
>
> RSDL fairness. Ok, try this:
>
> # gears
> # gears
> Both run smoothly.
>
> # nice gears
> # nice gears
> Both run smoothly.
>
> Now:
> # gears
> # nice -10 gears
> Both stutter.
>
> But:
> # gears
> # nice --10 gears
> Both run smoothly.
>
> Something looks fishy with nice levels.

Hah, I just wish gears would go away. If I get hardware where it runs at
just the right speed it looks like it doesn't move at all. On other hardware
the wheels go backwards and forwards when the screen refresh rate is just
perfectly a factor of the frames per second (or something like that). This
is not a cpu scheduler test, and you're inferring that there are cpu
scheduling artefacts based on an application that has bottlenecks at
different places depending on the hardware combination.

To imply something is fishy with nice levels, do a test that _only_ uses cpu
(and does not touch the bus, memory bandwidth or the gpu, and has no driver
interactions) and prove that there is something wrong. The cpu scheduler has
no control over what happens to the other resources on the machine. The -rt
tree tries to address these factors, for example, but it's a huge - some
would say insurmountable - thing to try and manage at all levels on a
general purpose operating system.

The cpu is proportioned out very fairly with rsdl, on both quota and
latency, according to nice vs cpu usage. When something is fully cpu bound
on rsdl, its average and maximum latency will be greater than something that
only intermittently uses cpu. The (somewhat lengthy and not easily
digestible) docs describe how you can model what these latencies will be.

> Otherwise, your scheduler looks great!

Thanks!
--
-ck
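Con's suggestion, a test that _only_ uses cpu, can be sketched as follows. This is not from the thread; it assumes Linux and uses Python's `os.nice` and `os.sched_setaffinity` wrappers to pin two pure busy loops to one CPU at different nice levels and compare how much work each completes:

```python
import os
import time
from multiprocessing import Process, Queue

def burn(niceness, seconds, q):
    """Pure-CPU busy loop: no gpu, no bus traffic, no driver interactions."""
    # Pin to a single CPU so the two loops actually contend for it.
    cpu = min(os.sched_getaffinity(0))
    os.sched_setaffinity(0, {cpu})
    # Raising niceness (a positive increment) needs no privileges.
    os.nice(niceness)
    n, end = 0, time.monotonic() + seconds
    while time.monotonic() < end:
        n += 1
    q.put((niceness, n))

if __name__ == "__main__":
    q = Queue()
    procs = [Process(target=burn, args=(n, 2.0, q)) for n in (0, 10)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    counts = dict(q.get() for _ in procs)
    # Under strict nice handling the nice-10 loop should complete far
    # fewer iterations than the nice-0 one; the ratio is the cpu split.
    print(counts)
```

The iteration-count ratio is the actual cpu proportioning between the two nice levels, with none of the gpu or driver bottlenecks that make gears misleading.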
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
On Monday 05 March 2007, Willy Tarreau wrote:
> On Mon, Mar 05, 2007 at 08:49:29AM +1100, Con Kolivas wrote:
> (...)
> > > That's just what it did, but when you "nice make -j4", things (gears)
> > > start to stutter. Is that due to the staircase?
> >
> > gears isn't an interactive task. Apart from using it as a background
> > load to check for starvation, because it loads up the cpu fully (which
> > a gpu intensive but otherwise simple app like this should _not_ do),
> > graphics card drivers and interrupts and so on, I wouldn't put much
> > credence on gears as anything else. However I suspect that gears will
> > still get a fair share of the cpu on RSDL, which almost never happens
> > on any other scheduler.
>
> Con,
>
> I've now given it a try with HZ=250 on my dual-athlon. It works
> beautifully. I also quickly checked that playing mp3 doesn't skip during
> make -j4, and that gears runs fairly smoothly, since those are the
> references people often use.
>
> But with real work, it's excellent too. When I saturate my CPUs by
> injecting HTTP traffic on haproxy, the load is stable and the command
> line perfectly responsive, while in the past the load would oscillate
> and the command line sometimes stopped responding for a few seconds.
>
> I've also launched my scheddos program (you may remember, the one we did
> a few experiments with). I could not cause any freeze at all. Plain
> 2.6.20 had already improved a lot in this area, but above 4
> processes/CPU, occasional short freezes did still occur. This time, even
> at 100 processes, the system was rather slow (of course!) but behaved
> just as expected, and nothing more.
>
> I also tried the good old "dd if=/dev/zero bs=1|...|dd bs=1 of=/dev/null"
> and it did not cause any trouble.
>
> I will boot 2.6 slightly more often to test the code under various
> conditions, and I will recommend it to a few people I know who tend to
> switch back to 2.4 after one day full of 2.6 jerkiness.
> Overall, you have done a great job !
>
> I hope that more people will give it a try, first to help find possible
> remaining bugs, and to pronounce in favour of its inclusion in mainline.
>
> Cheers,
> Willy

Sounds nice... you might want to put this in the -ck wiki: ck.wikia.com/
Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler
On Sunday 04 March 2007, Willy Tarreau wrote:
> Hi Con !
>
> > This was designed to be robust for any application since linux demands
> > a general purpose scheduler design, while preserving interactivity,
> > instead of optimising for one particular end use.
>
> Well, I haven't tested it yet, but your design choices please me. As you
> know, I've been one of those encountering big starvation problems with
> the original scheduler, making 2.6 unusable for me in many situations. I
> welcome your work and want to thank you for the time you spend trying to
> fix it.
>
> Keep up the good work,
> Willy
>
> PS: I've looked at your graphs, I hope you're on the way to something
> really better than the first 21 2.6 releases !

Well, imho his current staircase scheduler already does a better job
compared to mainline, but it won't make it in (or at least, it's not
likely). So we can hope this WILL make it into mainline, but I wouldn't
count on it.

grtz
Jos

--
Disclaimer: Everything I do, think and say is based on the worldview I
currently have. I am not responsible for changes in the world, or in my view
of it, nor for my own behaviour that follows from them. Everything I say is
kindly meant, unless explicitly stated otherwise.