Hi!

Ralph Glasstetter wrote:
> Yippie,... we have a Makefile for Windows!
> 
> Unfortunately it did not work for me... sigh... :-(
> At least at first...
> Seems that the .h/.cpp source files had not been built from the .ui's,
> and therefore the build crashed!

Maybe the rule for the .ui -> .cpp/.h conversion doesn't work correctly
in every case. I'll simplify it later.

> After creating the missing files myself I called make again, and this
> time they were created automatically (although they were already
> there)... and dvbcut was also built successfully.

Actually, they should be created when you run "make dep" for the first
time. And they're only deleted by "make distclean".

> But the strangest thing is...
> After that I tried it once again with a clean (and also a distclean!) prior 
> to make... and it worked!?!
> No more missing files... ! 
> 
> Guess it's too late now to understand that... ;-)
> Thanks again for the Makefile!

De nada.

> PS: Has disabling the mmap part under windows any disadvantages?

I'm not sure mmap ever had any advantage on Windows at all, compared
to plain read().

On Linux/Unix, I would claim that mmap is a little faster (but not
much). On Windows, the difference in performance (if there is any)
doesn't matter because mmap didn't work correctly in the first place.
But if someone can provide a working implementation (in particular, one
that also works with files >= 2 GB), we can turn mmap on again and compare.

Note that mmap isn't optimal either. What I really would like to try is
direct disk access (O_DIRECT) on Linux. Unlike raw device access, it
still reads/writes files (with some alignment restrictions), but it
doesn't cache the data.

Now you'll probably think: "Why is that a good thing? I thought caching
makes things faster?" - Well, yes, but only if the cache is large
enough. In our case, both the input and output files are usually several
gigabytes long, and in a typical session we read them three times: once
from front to back while indexing, again while setting the markers, and
finally the selected parts once more while exporting.

If you have e.g. 1 GB of RAM (cache) and you read a 2 GB file, Linux
will first start to fill the cache with data from the beginning of the
file. After about half the input file is read, it starts throwing away
that data to make room for the rest of the file. After indexing, the
second half of the file will be cached. When you go back to the
beginning, the first thing the OS will have to do is throw away the
cached second half and read the first half again. So, caching
effectively gains us nothing because it's only useful if the cached data
is used several times before it is thrown away.

What dvbcut does is called "polluting" the cache. It takes away the
precious RAM from other applications (which may badly need it) and it
doesn't even benefit from it because the "cache hit" ratio is too low.
That's a major sin - like killing animals when you're not hungry.

With O_DIRECT, on the other hand, the cache footprint will be zero. And
we're already caching images in userspace to speed up navigation, so why
should we cache the same data again, in a different format? It's nothing
but a waste of resources.

-- 
Michael "Tired" Riepe <[EMAIL PROTECTED]>
X-Tired: Each morning I get up I die a little

_______________________________________________
DVBCUT-user mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/dvbcut-user