Hi!

> Figuring out what to use multiple cores for is a problem in today's
> world. What do you do with seven cores? How about 100 cores? Multicore
> chips are not coming into existence because making faster single core
> chips doesn't make sense, they are coming into existence because making
> faster single core chips is impossible.
In Windows it seems to be common (?) to reduce the performance impact of running on-access virus scanners and other background maintenance tasks by using more than 1 CPU core. In Linux I do not know many CPU-heavy background things, but I can give a few examples of single tasks which can be distributed over cores:

- pbzip2 compresses multiple parts of a file in parallel, so you can compress one large file much faster by using many cores (see sketch 1 after my signature).

- many servers start one task per query, so queries can run on multiple cores without getting in each other's way.

- some databases can distribute the work for a complex query across multiple threads which can run on different cores.

- batch processing of data in general can be split into smaller data sets so you can run one on each core (e.g. to mogrify many JPEGs to a lower resolution with ImageMagick...?)

- the above example is not a built-in feature of ImageMagick, though; you would have to split the work manually (see sketch 2 after my signature). However, e.g. mencoder of mplayer (mplayerhq.hu) can distribute video compression and encoding itself (e.g. per image fragment, frame or processing step).

- you can let "bored cores" do something generally useful which does not need to be done at a certain time. For example you could do some filesystem maintenance or anti-virus checks as mentioned above, or give CPU time to BOINC distributed computing projects :-)

Note that when you, for example, run a complex query by manually splitting it into N smaller queries for PostgreSQL (which runs 1 thread per query even for complex queries, afair), you only get a speed gain as long as your cores do not compete for RAM or disk access. The former can mean that multiple CPUs are better than multiple cores on a single chip. The latter can mean that a RAID of modern disks with NCQ, or a good cache, makes using more cores more efficient. By the way, multiple CPUs _lose_ some performance to cache syncing.

While all of this is interesting to think about, none of it is a typical thing to do with DOS at all. Why would I go to the effort of porting mencoder or PostgreSQL to a multi-core DOS variant when the Linux version can already use many other services of that OS to reach the best performance, e.g. NCQ drivers?

Yes, in theory I can imagine making 1000 thumbnails in DOS or compressing a CD-ROM ISO with pbzip2 in DOS. But then I also know that FAT32 cannot even handle DVD or Blu-ray ISO file sizes (files are limited to 4 GiB), and my 1000 pics would not get from the SD card to the DOS PC at a decent speed anyway; single-core thumbnailing would still be faster than the USB transfer...

So I get the impression that all of this is very hypothetical, that Windows and Linux are already okay for heavy calculations, and that "normal" use of DOS has a totally different focus anyway. But maybe others have more ideas "why to DOS on 4 GHz 12-cores".

Regards, Eric
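
PS, sketch 1: to illustrate the pbzip2-style idea above, here is a rough Python sketch (not how pbzip2 itself is implemented; the file name and the 8 MiB chunk size are just placeholders). It compresses fixed-size chunks of one file on separate cores and concatenates the resulting bzip2 streams; both the bzip2 tool and Python's bz2 module can decompress such concatenated multi-stream files.

    import bz2
    from multiprocessing import Pool

    CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB of input per work unit

    def read_chunks(path):
        # Yield the input file piece by piece.
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                yield chunk

    def compress_chunk(chunk):
        # Each chunk becomes one independent bzip2 stream.
        return bz2.compress(chunk)

    if __name__ == "__main__":
        with Pool() as pool:  # one worker process per core by default
            with open("big.iso.bz2", "wb") as out:
                # imap keeps the chunks in order, so the output is a
                # plain concatenation of bzip2 streams in file order.
                for block in pool.imap(compress_chunk,
                                       read_chunks("big.iso")):
                    out.write(block)

Of course the speed gain only shows up as long as the disk can feed all the cores fast enough, as said above.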
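PS, sketch 2: the "split the batch manually" case for ImageMagick could look roughly like this in Python. It assumes ImageMagick's "convert" tool is installed and on the PATH; the file pattern and the 50% size are only examples.

    import glob
    import subprocess
    from multiprocessing import Pool

    def shrink(path):
        # Let one ImageMagick process shrink one JPEG to half size.
        out = path.replace(".jpg", "_small.jpg")
        subprocess.run(["convert", path, "-resize", "50%", out],
                       check=True)
        return out

    if __name__ == "__main__":
        files = glob.glob("*.jpg")
        with Pool() as pool:  # one worker process per core by default
            for done in pool.imap_unordered(shrink, files):
                print("finished", done)

Again, whether this really gets N times faster depends on whether the JPEGs come off the disk (or SD card) quickly enough.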