Dennis Schridde wrote:
> On Wednesday, 2 May 2007, Giel van Schijndel wrote:
>   
>> Almost correct: that is the precision of the kernel's CPU scheduler,
>> although I believe most Linux 2.6 kernels use a time slice of 1 ms.
>> Apart from that, an SDL_Delay(1) call is just an explicit yield of the
>> current process, so the kernel can take the remaining CPU time and
>> divide it among other processes.
>>
>> Also, SDL_Delay guarantees nothing about the amount of time actually
>> waited, only that it will be "at least" the time you specify. So a call
>> like SDL_Delay(20) could very well result in losing the CPU for 750 ms
>> (if the OS's scheduler decides so).
>>     
> Doesn't the scheduler distribute CPU time between all applications anyway?
> I don't think we can force it to give us all the CPU.
>   
No, it won't give us all the CPU time, but you can still take up a lot
of it if you want to. Just compile this program and see how the rest of
the system responds while it runs.
> int main()
> {
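>   /* busy loop: never yields, so it eats every time slice the scheduler gives it */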
>   int i = 0, j = 0;
>   for (;;++i)
>     j += i;
> }
I think Linux's CPU scheduler copes better with this, but on windoze it
starts hogging my CPU at the cost of other applications' performance.
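
To make the "at least" guarantee quoted above visible, here is a small
sketch (just an illustration, assuming plain SDL 1.2, not code from our
tree) that measures how long SDL_Delay(20) actually sleeps using
SDL_GetTicks. On a loaded system the measured time can be noticeably
larger than 20 ms, but never smaller.

#include <SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    Uint32 start, elapsed;

    if (SDL_Init(SDL_INIT_TIMER) != 0)
        return 1;

    start = SDL_GetTicks();
    SDL_Delay(20);                    /* ask for at least 20 ms of sleep */
    elapsed = SDL_GetTicks() - start;

    printf("requested 20 ms, actually slept %u ms\n", (unsigned)elapsed);

    SDL_Quit();
    return 0;
}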
> Currently we only request as much CPU as is needed to keep a certain
> framerate, which hits its limits on slow PCs. I don't think that is bad
> (why would we want to drop the fps even further?).
> The only reason we inserted the delay originally was that e.g. laptop
> users don't need 100 fps, but would rather keep their CPU idle instead.
> They can lower the framerate down to 1 fps if they want, and that is
> exactly what they will get. No more, maybe less.
>
> So currently the only real issue is that there is no delay in the main menu.
>   
>> An event-driven system indeed seems a lot nicer, although some parts
>> probably still need to run once every frame or so (for example the
>> sound code requires an audio_Update/sound_Update call every now and
>> then to keep buffers and sources filled and synced).
>>     
> Wouldn't it be enough to estimate how long the buffer will last and wake
> up just before it is empty?
>   
Yes and no: you also have to keep in mind that you need enough time to
decode enough audio to fill a buffer, and then to queue that buffer on
your sound source (this applies to streaming audio only, e.g. music).

As for the estimate, you can calculate it precisely (samples are 2 bytes
each, so time = bufferlength / 2 / channelcount / samplerate). But how
would you use that information? Write your own event-driven timer
implementation? Or use timed interrupts (which is very much
non-portable; some OSes probably don't even allow it)?
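
Just to spell that formula out (a sketch only; the names are made up and
not from our source tree), with bufferlength in bytes and 16-bit samples
the calculation looks like this:

#include <stdio.h>

/* Playing time of a buffer of 16-bit PCM data, in milliseconds.
 * buffer_bytes: buffer size in bytes
 * channels:     interleaved channels (1 = mono, 2 = stereo)
 * sample_rate:  sample frames per second per channel (e.g. 44100) */
static unsigned buffer_play_time_ms(unsigned buffer_bytes,
                                    unsigned channels,
                                    unsigned sample_rate)
{
    /* samples are 2 bytes each, hence the division by 2 */
    return (unsigned)(1000ULL * buffer_bytes / 2 / channels / sample_rate);
}

int main(void)
{
    /* e.g. a 64 KiB stereo buffer at 44.1 kHz lasts roughly 371 ms */
    printf("%u ms\n", buffer_play_time_ms(65536, 2, 44100));
    return 0;
}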

Besides that, you will eventually need some kind of loop anyway; the
only difference is that it would now become an event loop. So you might
as well include calls to the various update functions in that loop.
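Roughly like this (only a sketch of the idea, SDL 1.2 style; the
sound_Update here is just a stub standing in for the real call, and a
real loop would obviously dispatch far more event types):

#include <SDL.h>

/* stub standing in for the real sound_Update()/audio_Update() */
static void sound_Update(void)
{
    /* refill and queue streaming buffers, sync sources, ... */
}

int main(int argc, char *argv[])
{
    SDL_Event event;
    int quit = 0;

    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER) != 0)
        return 1;
    if (!SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE))  /* needed to receive events */
        return 1;

    while (!quit)
    {
        /* handle everything that is currently queued */
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT)
                quit = 1;
            /* ... dispatch keyboard/mouse/user events here ... */
        }

        /* periodic work that still has to run regularly */
        sound_Update();

        /* yield the CPU instead of busy-waiting */
        SDL_Delay(10);
    }

    SDL_Quit();
    return 0;
}

Replacing the SDL_Delay with a blocking SDL_WaitEvent would save even
more CPU, but then the periodic update calls need their own wakeup (for
example a timer event pushed onto the queue).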
> But you are right, I probably have to think for a while about how best
> to do this...
>   
Indeed, think about it before implementing it (to be more precise: first
design, then implement).

-- 
Giel

