Re: Large File Support in Kernel 2.6
Mike Houston wrote:
> On Thu, 08 Sep 2005 21:27:42 +0200
> Andreas Baer <[EMAIL PROTECTED]> wrote:
>
>> I think it's 2TB for the file size and 2e73 for the file system, but
>> I don't understand the second reference and the part about
>> CONFIG_LBD. What exactly is the CONFIG_LBD option?
>
> This is "Support for Large Block Devices" under Device Drivers/Block
> Devices:
>
> CONFIG_LBD:
>
>   Say Y here if you want to attach large (bigger than 2TB) discs to
>   your machine, or if you want to have a raid or loopback device
>   bigger than 2TB. Otherwise say N.
>
> The "2e73" refers to 2 to the exponent 73 bytes in size. Huge :-)

So in other words, with the CONFIG_LBD option set, both the file size and the file system limit are 2^73 bytes, right? And 2 TB is always possible? Sorry, but I need to be quite sure.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Large File Support in Kernel 2.6
I have a question about Large File Support using Linux and glibc 2.3 on a 32-bit machine. What is the correct limit for the file size and the file system size using LFS (just for the kernel, leaving aside filesystem limits etc.)? I found two references:

"The 2.6 kernel imposes its own limits on the size of files and file systems handled by it. These are as follows:
- file size: On 32-bit systems, files may not exceed the size of 2 TB.
- file system size: File systems may be up to 2e73 bytes large. However, this limit is still out of reach for the currently available hardware."

Source: http://www.novell.com/documentation/suse91/suselinux-adminguide/html/apas04.html

"Kernel 2.6: For both 32-bit systems with option CONFIG_LBD set and for 64-bit systems: The size of a file system is limited to 2e73 (far too much for today). On 32-bit systems (without CONFIG_LBD set) the size of a file is limited to 2 TiB. Note that not all filesystems and hardware drivers might handle such large filesystems."

Source: http://www.suse.de/~aj/linux_lfs.html

I think it's 2 TB for the file size and 2e73 for the file system, but I don't understand the second reference and the part about CONFIG_LBD. What exactly is the CONFIG_LBD option?
Memory-Mapping with LFS
Who is the memory mapping expert? :)

What are the current file size limits for memory mapping via glibc's mmap() function on Linux:
- for a native 32-bit system not using LFS?
- for a native 32-bit system using LFS?
- for a native 64-bit system?
(linux-kernel >2.6, of course)

It would be nice if someone could tell me what I have to consider if I want to use memory mapping for files. I'm currently a little confused about it (information overload :)). Personal opinions about speed (perhaps an increase or decrease for large files) are also welcome.

--

The glibc documentation says: "Since mmapped pages can be stored back to their file when physical memory is low, it is possible to mmap files orders of magnitude larger than both the physical memory and swap space. The only limit is address space. The theoretical limit is 4GB on a 32-bit machine - however, the actual limit will be smaller since some areas will be reserved for other purposes. If the LFS interface is used the file size on 32-bit systems is not limited to 2GB (offsets are signed which reduces the addressable area of 4GB by half); the full 64-bit are available."

- I doubt that the full 64 bits (somewhere in the exabyte range) are available in practical use. Right or wrong?

I've also found an old kernel-list e-mail from 2004 that says: "There is a limit per process in the kernel vm that prevent from mmapping more than 512GB of data."

- Is this still true for the current kernel?

An example: let's presume the following case. I have an 8 GB file, 1 GB of physical memory, and I want to use memory mapping for that file using LFS on a 32-bit machine.

- Is it possible?

If yes, let's presume that I have 2 or more pointers that frequently point to completely different places and switch the data they point to.

- How is this managed (by the kernel)? Through the pages mentioned in the glibc documentation above? Are these page operations really faster than normal random file access (lseek etc.)?
Re: Problem with Asus P4C800-DX and P4 -Northwood-
Bill Davidsen wrote:
> Andreas Baer wrote:
>> Bill Davidsen wrote:
>>> One other oddment about this motherboard. Forgive me if I have
>>> over-snipped this trying to make it relevant...
>>>
>>> Andreas Baer wrote:
>>>> Willy Tarreau wrote:
>>>>> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
>>>>>
>>>>> There clearly is a problem on the system installed on this
>>>>> machine. You should use strace to see what this machine does all
>>>>> the time; it is absolutely not expected that the user/system
>>>>> ratios change so much between two nearly identical systems. So
>>>>> there are system calls which eat all the CPU. You may want to try
>>>>> strace -Tttt on the running process during a few tens of seconds.
>>>>> I guess you'll immediately find the culprit amongst the syscalls,
>>>>> and it might give you a clue.
>>>>
>>>> I hope you are talking about a hardware/kernel problem and not a
>>>> software problem, because I tried it also with LiveCDs and they
>>>> showed the same results on this machine. I'm not a Linux expert,
>>>> meaning I've never done anything like that before, so it would be
>>>> nice if you could give me a hint about what you see in these
>>>> results. :)
>>>
>>> Am I misreading this, or is your program doing a bunch of seeks not
>>> followed by an I/O operation? I would doubt that's important, but
>>> your vmstat showed a lot of system time, and I just wonder if
>>> llseek() is more expensive in Linux than Windows. Or if your code is
>>> such that these calls are not optimized away by gcc.
>>
>> I don't know what exactly produces these _llseek calls, but I ran the
>> same compiled binaries on both machines (desktop + notebook) without
>> recompilation, so they should be doing the same thing (well optimized
>> or not), yet I see a time difference of more than 2:30. :) These
>> _llseek calls also don't seem to be faster or slower if you compare
>> the times on the notebook and the desktop.
>
> If the program and test data are not proprietary, would it help to
> have me run the test on my P4P800, P4-2.8, HT on, and see if that's an
> issue with your particular board or BIOS?
>
> I have the 1086 BIOS from my notes on that machine; I think you were
> running a later BIOS? 1091 or so, from memory? Anyway, I would run a
> test that takes 3 minutes if it helps as a data point.

Probably a good idea, but you have a completely different chipset according to the Asus website. I think yours is an i865 and I have an i875. I'm also running BIOS 1019(!). This is the driver page for my board: http://support.asus.com/download/download.aspx?Type=All&model=P4C800%20Deluxe

It would be better if someone had at least the same board. Does anyone on this mailing list have an Asus P4C800-Deluxe with a P4 around 2.4 GHz and would sacrifice himself/herself to run a little test with my software for a maximum of 4 minutes? It would be approx. 10 MB of data transmission.
Re: Problem with Asus P4C800-DX and P4 -Northwood-
Bill Davidsen wrote:
> One other oddment about this motherboard. Forgive me if I have
> over-snipped this trying to make it relevant...
>
> Andreas Baer wrote:
>> Willy Tarreau wrote:
>>> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
>>>
>>> There clearly is a problem on the system installed on this machine.
>>> You should use strace to see what this machine does all the time; it
>>> is absolutely not expected that the user/system ratios change so
>>> much between two nearly identical systems. So there are system calls
>>> which eat all the CPU. You may want to try strace -Tttt on the
>>> running process during a few tens of seconds. I guess you'll
>>> immediately find the culprit amongst the syscalls, and it might give
>>> you a clue.
>>
>> I hope you are talking about a hardware/kernel problem and not a
>> software problem, because I tried it also with LiveCDs and they
>> showed the same results on this machine. I'm not a Linux expert,
>> meaning I've never done anything like that before, so it would be
>> nice if you could give me a hint about what you see in these
>> results. :)
>
> Am I misreading this, or is your program doing a bunch of seeks not
> followed by an I/O operation? I would doubt that's important, but your
> vmstat showed a lot of system time, and I just wonder if llseek() is
> more expensive in Linux than Windows. Or if your code is such that
> these calls are not optimized away by gcc.

I don't know what exactly produces these _llseek calls, but I ran the same compiled binaries on both machines (desktop + notebook) without recompilation, so they should be doing the same thing (well optimized or not), yet I see a time difference of more than 2:30. :) These _llseek calls also don't seem to be faster or slower if you compare the times on the notebook and the desktop.
strace output for desktop:
<--snip-->
[pid 1431] 1122318636.262578 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.17>
[pid 1431] 1122318636.262654 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.17>
[pid 1431] 1122318636.262732 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.17>
[pid 1431] 1122318636.262809 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.262881 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.262952 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.263023 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.263094 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.16>
[pid 1431] 1122318636.263165 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.263237 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.16>
[pid 1431] 1122318636.263310 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.263381 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.263452 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.263523 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.16>
[pid 1431] 1122318636.263594 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.263666 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.17>
[pid 1431] 1122318636.263740 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.24>
[pid 1431] 1122318636.263841 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.263913 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.263984 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.14>
[pid 1431] 1122318636.264055 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.264127 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.264199 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.264271 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.264342 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.16>
[pid 1431] 1122318636.264414 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.16>
[pid 1431] 1122318636.264487 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.264558 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.16>
[pid 1431] 1122318636.264630 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.264710 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.264788 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.16>
[pid 1431] 1122318636.264861 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.15>
[pid 1431] 1122318636.264934 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.16>
[pid 1431] 1122318636.265006 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.
Re: Problem with Asus P4C800-DX and P4 -Northwood-
Erik Mouw wrote:
> On Mon, Jul 25, 2005 at 09:51:49PM +0200, Andreas Baer wrote:
>> Willy Tarreau wrote:
>>> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
>>>> Here I have
>>>> /dev/hda:  26.91 MB/sec
>>>> /dev/hda1: 26.90 MB/sec (Windows FAT32)
>>>> /dev/hda7: 17.89 MB/sec (Linux EXT3)
>>>> Could you give me a reason how this is possible?
>>>
>>> A reason for what? The fact that the notebook performs faster than
>>> the desktop while slower on I/O?
>>
>> No, a reason why the partition with Linux (ReiserFS or Ext3) is
>> always slower than the Windows partition.
>
> Easy: drives don't have the same speed on all tracks. The platters are
> built up from zones with different recording densities: zones near the
> center of the platters have a lower recording density and hence a
> lower data rate (fewer bits/second pass under the head). Zones at the
> outer diameter have a higher recording density and a higher data rate.
>
> Erik

So it definitely has nothing to do with the filesystem? I also suspected physical reasons, because I don't think the hdparm results depend on the filesystem...
Re: Problem with Asus P4C800-DX and P4 -Northwood-
Willy Tarreau wrote:
> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
>> (...)
>> I have (S-ATA-150 Disk 80GB)
>> /dev/sda:  50.59 MB/sec
>> /dev/sda1: 50.62 MB/sec (Windows FAT32)
>> /dev/sda6: 41.63 MB/sec (Linux ReiserFS)
>> On the Notebook I have at most an ATA-100 disk with 80GB and it
>> shows the same pattern. Here I have
>> /dev/hda:  26.91 MB/sec
>> /dev/hda1: 26.90 MB/sec (Windows FAT32)
>> /dev/hda7: 17.89 MB/sec (Linux EXT3)
>> Could you give me a reason how this is possible?
>
> A reason for what? The fact that the notebook performs faster than
> the desktop while slower on I/O?

No, a reason why the partition with Linux (ReiserFS or Ext3) is always slower than the Windows partition.

>> Vmstat for Notebook P4 3.0 GHz 512 MB RAM:
>
> Your Notebook's P4 has HT enabled (50% apparent idle remains
> permanently during operation). But you'll note that your load is 60%
> system + 40% user there, and that you do absolutely no I/O (I presume
> it's the second run and it's cached).
>
>> procs ---memory-- ---swap-- -io --system-- cpu
>> r b swpd free buff cache si sobibo incs us sy id wa
>> 1 0 0 179620 14812 228832003321 557 184 3 1 95 1
>> 2 0 0 178828 14812 22883200 0 0 1295 819 6 2 92 0
>> 1 0 0 175948 14812 22883200 0 0 1090 111 37 17 46 0
>> 1 0 0 175948 14812 22883200 0 0 1064 101 23 28 50 0
>> 1 0 0 175948 14812 22883200 0 0 1066 100 20 31 49 0
>> 1 0 0 175980 14820 22882400 048 1066 119 20 30 50 0
>> 1 0 0 175980 14820 22882400 0 0 106786 19 31 50 0
>> 1 0 0 175988 14820 22882400 0 0 1064 115 20 30 50 0
>> (...)

Yeah, HT is enabled, but as I said, that changes nothing in the result whether I enable or disable it on the desktop machine. Sorry about the I/O; I explained something wrong. Look below, where I answered Paulo Marques to explain everything.

>> Vmstat for Desktop P4 2.4 GHz 1024 MB RAM:
>
> This one's hyperthreaded too (apparent consumption never goes above
> 50%). However, while not doing any I/O either, you're always spending
> only 4% in user and 96% in system.
> This means that it might take 10x more time to complete the same
> operations, had it been user-cpu bound. And this is about what you
> observe. There clearly is a problem on the system installed on this
> machine. You should use strace to see what this machine does all the
> time; it is absolutely not expected that the user/system ratios change
> so much between two nearly identical systems. So there are system
> calls which eat all the CPU. You may want to try strace -Tttt on the
> running process during a few tens of seconds. I guess you'll
> immediately find the culprit amongst the syscalls, and it might give
> you a clue.

I hope you are talking about a hardware/kernel problem and not a software problem, because I tried it also with LiveCDs and they showed the same results on this machine. I'm not a Linux expert, meaning I've never done anything like that before, so it would be nice if you could give me a hint about what you see in these results. :)

strace output for desktop:
<--snip-->
[pid 15146] 1122317366.469624 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.14>
[pid 15146] 1122317366.469692 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.15>
[pid 15146] 1122317366.469760 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.15>
[pid 15146] 1122317366.469828 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.15>
[pid 15146] 1122317366.469896 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.14>
[pid 15146] 1122317366.469963 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.15>
[pid 15146] 1122317366.470031 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.14>
[pid 15146] 1122317366.470098 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.15>
[pid 15146] 1122317366.470168 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.14>
[pid 15146] 1122317366.470236 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 <0.15>
[pid 15146] 1122317366.470298 read(3, "[EMAIL PROTECTED]"..., 131072) = 131072 <0.000138>
[pid 15146] 1122317366.470528 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.15>
[pid 15146] 1122317366.470599 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.14>
[pid 15146] 1122317366.470667 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.15>
[pid 15146] 1122317366.470734 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.14>
[pid 15146] 1122317366.470802 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.14>
[pid 15146] 1122317366.470870 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.14>
[pid 15146] 1122317366.470939 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.14>
[pid 15146] 1122317366.471008 _
Re: Problem with Asus P4C800-DX and P4 -Northwood-
Hi,

Thanks for the reply. Sorry, but I've never run vmstat before, so next time I'll send the output in the first mail. :)

Willy Tarreau wrote:
> Hi,
>
> On Mon, Jul 25, 2005 at 02:50:05AM +0200, Andreas Baer wrote:
>> Hi everyone,
>> First I want to say sorry for this BIG post, but it seems that I
>> have no other chance. :)
>
> It's not big enough: you did not explain to us what your database
> does nor how it works, what type of resource it consumes most, or
> give any vmstat capture during operation. There are so many
> possibilities here:
>
> - poor optimisation from gcc => CPU bound

I doubt it, because I've run the same binaries (no recompilation) on both systems. (You will see the vmstat output below.)

> - many random disk accesses => I/O bound, but changing/tuning the I/O
>   scheduler could help

Indeed, the data is stored in random access files.

> - intensive disk reads => perhaps your windows and linux partitions
>   are on the same disk and windows is the first one, then you have
>   50 MB/s on the windows one and 25 MB/s on the linux one?

I have (S-ATA-150 Disk 80GB)
/dev/sda:  50.59 MB/sec
/dev/sda1: 50.62 MB/sec (Windows FAT32)
/dev/sda6: 41.63 MB/sec (Linux ReiserFS)

On the notebook I have at most an ATA-100 disk with 80GB and it shows the same pattern. Here I have
/dev/hda:  26.91 MB/sec
/dev/hda1: 26.90 MB/sec (Windows FAT32)
/dev/hda7: 17.89 MB/sec (Linux EXT3)

Could you give me a reason how this is possible?

> - task scheduling: if your application is multi-process/multi-thread,
>   it is possible that you hit some corner cases.

There are at most 2 threads started, and they have mostly background activity or do nothing, so this should have nothing to do with the problem.

> So please start "vmstat 1" before your 3min request, and stop it at
> the end, so that it covers all the work. It will tell us much more
> useful information.

All output below...

> Regards,
> Willy

>> I have an Asus P4C800-DX with a P4 2.4 GHz 512 KB L2 cache
>> "Northwood" processor (the lowest processor that supports
>> HyperThreading) and 1GB DDR400 RAM. I'm also running S-ATA disks
>> with about 50 MB/s (just to show that it shouldn't be due to hard
>> disk speed). Everything was bought back in 2003 and I recently
>> upgraded to the latest BIOS version. I've installed Gentoo Linux and
>> WinXP with dual-boot on this system.
>>
>> Now to my problem: I'm currently developing a little database in C++
>> (currently runs under Windows and Linux) that internally builds up
>> an R-Tree and does a lot of equality tests and other time-consuming
>> checks. As a performance check I ran a test with 20 entries and it
>> took about 3 minutes to complete under Gentoo Linux. So I ran the
>> same test in Windows on the same platform and it took about 30(!)
>> seconds. I was a little surprised by this result, so I started to
>> run several tests on different machines, like an Athlon XP 2000+
>> platform and my P4 3GHz "Prescott" notebook, and they all came in at
>> about 30 seconds or below.
>>
>> Then I began to search for errors or any misconfiguration in Gentoo,
>> in my code, and also on the internet for people who have had the
>> same experience with this hardware configuration. I thought I had a
>> problem with a broken gcc or libraries like glibc or libstdc++, so I
>> recompiled my whole system with the stable gcc 3.3.5 release, but
>> that didn't change anything. I also tried an Ubuntu and a Suse
>> LiveCD to verify that it has nothing to do with Gentoo and my kernel
>> version; they had the same problem and ran the test in about 3 min.
>>
>> Currently I'm at a loss what to do. I'm beginning to think that this
>> is maybe a kernel problem, because I have no problems under Windows
>> and it doesn't matter whether I change any software or any
>> configuration in Gentoo. I'm currently running kernel 2.6.12, but
>> the LiveCDs had other kernels. HyperThreading (HT) is also not the
>> reason for the loss of performance, because I tried disabling it and
>> building a uniprocessor kernel, but that didn't solve the problem.
>>
>> If you need some output of my configuration/log files or anything
>> like that, just mail me. Is it possible that the kernel lacks
>> support for the P4 with "Northwood" core? Maybe only this one? Could
>> I solve the problem by changing the processor to a "Prescott" core?
>> Perhaps someone could say whether this would make any sense or not.
>> Thanks in advance for anything that could help. ...sorry for bad
>> English :)

Vmstat for Notebook P4 3.0 GHz 512 MB RAM:
procs ---memory-- ---swap-- -io --system-- cpu
r b swpd free
Re: Problem with Asus P4C800-DX and P4 -Northwood-
Hi, thanks for the reply. Sorry, but I've never run vmstat before, so next time I'll send the output in the first mail :)

Willy Tarreau wrote:
> On Mon, Jul 25, 2005 at 02:50:05AM +0200, Andreas Baer wrote:
>> Hi everyone, First I want to say sorry for this BIG post, but it
>> seems that I have no other chance. :)
>
> It's not big enough: you did not explain what your database does or
> how it works, what type of resource it consumes most, or give any
> vmstat capture during operation. There are so many possibilities here:
>
> - poor optimisation from gcc => CPU bound

I doubt it, because I've run the same binaries (no recompilation) on
both systems. (You will see vmstat output below.)

> - many random disk accesses => I/O bound, but changing/tuning the I/O
>   scheduler could help

Indeed, the data is stored in random-access files.

> - intensive disk reads => perhaps your Windows and Linux partitions
>   are on the same disk and Windows is the first one, so you get
>   50 MB/s on the Windows one and 25 MB/s on the Linux one?

I have an S-ATA-150 disk (80GB):

/dev/sda:  50.59 MB/sec
/dev/sda1: 50.62 MB/sec (Windows FAT32)
/dev/sda6: 41.63 MB/sec (Linux ReiserFS)

On the notebook I have at most an ATA-100 disk with 80GB and it shows
the same drop. There I have:

/dev/hda:  26.91 MB/sec
/dev/hda1: 26.90 MB/sec (Windows FAT32)
/dev/hda7: 17.89 MB/sec (Linux EXT3)

Could you give me a reason how this is possible?

> - task scheduling: if your application is multi-process/multi-thread,
>   it is possible that you hit some corner cases.

There are at most 2 threads started, and they only have background
activity or do nothing, so this should have nothing to do with the
problem.

> So please start "vmstat 1" before your 3-minute request, and stop it
> at the end, so that it covers all the work. It will tell us much more
> useful information.

All output below...

> Regards,
> Willy

Vmstat for Notebook P4 3.0 GHz 512 MB RAM:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0      0 179620  14812
Re: Problem with Asus P4C800-DX and P4 -Northwood-
Willy Tarreau wrote:
> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
> (...)
>> I have an S-ATA-150 disk (80GB):
>> /dev/sda:  50.59 MB/sec
>> /dev/sda1: 50.62 MB/sec (Windows FAT32)
>> /dev/sda6: 41.63 MB/sec (Linux ReiserFS)
>> On the notebook I have at most an ATA-100 disk with 80GB and it
>> shows the same drop. There I have:
>> /dev/hda:  26.91 MB/sec
>> /dev/hda1: 26.90 MB/sec (Windows FAT32)
>> /dev/hda7: 17.89 MB/sec (Linux EXT3)
>> Could you give me a reason how this is possible?
>
> A reason for what? The fact that the notebook performs faster than
> the desktop while slower on I/O?

No, a reason why the partition with Linux (ReiserFS or Ext3) is always
slower than the Windows partition.

>> Vmstat for Notebook P4 3.0 GHz 512 MB RAM:
>
> Your notebook's P4 has HT enabled (50% apparent idle remains
> permanently during operation). But you'll note that your load is 60%
> system + 40% user there, and that you do absolutely no I/O (I presume
> it's the second run and it's cached).

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0      0 179620  14812 228832003321 557 184 3 1 95 1
 2  0      0 178828  14812 22883200 0 0 1295 819 6 2 92 0
 1  0      0 175948  14812 22883200 0 0 1090 111 37 17 46 0
 1  0      0 175948  14812 22883200 0 0 1064 101 23 28 50 0
 1  0      0 175948  14812 22883200 0 0 1066 100 20 31 49 0
 1  0      0 175980  14820 22882400 048 1066 119 20 30 50 0
 1  0      0 175980  14820 22882400 0 0 106786 19 31 50 0
 1  0      0 175988  14820 22882400 0 0 1064 115 20 30 50 0
(...)

Yeah, HT is enabled, but as I said it changes nothing in the result
whether I enable or disable it on the desktop machine. Sorry about the
I/O, I explained something wrong. Look below, I answered Paulo Marques
to explain everything.

>> Vmstat for Desktop P4 2.4 GHz 1024 MB RAM:
>
> This one's hyperthreaded too (apparent consumption never goes above
> 50%). However, while not doing any I/O either, you're always spending
> only 4% in user and 96% in system.
> This means that it might take 10x more time to complete the same
> operations than if it were user-CPU bound, and this is about what you
> observe. There clearly is a problem on the system installed on this
> machine. You should use strace to see what this machine does all the
> time; it is absolutely not expected that the user/system ratios
> change so much between two nearly identical systems. So there are
> system calls which eat all the CPU. You may want to try "strace
> -Tttt" on the running process for a few tens of seconds. I guess
> you'll immediately find the culprit amongst the syscalls, and it
> might give you a clue.

I hope you are talking about a hardware/kernel problem and not a
software problem, because I also tried it with LiveCDs and they showed
the same results on this machine. I'm not a Linux expert, which means
I've never done anything like that before, so it would be nice if you
could give me a hint about what you see in these results. :)

strace output for desktop:
--snip--
[pid 15146] 1122317366.469624 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.469692 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.15
[pid 15146] 1122317366.469760 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.15
[pid 15146] 1122317366.469828 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.15
[pid 15146] 1122317366.469896 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.469963 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.15
[pid 15146] 1122317366.470031 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.470098 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.15
[pid 15146] 1122317366.470168 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.470236 _llseek(3, 7471104, [7471104], SEEK_SET) = 0 0.15
[pid 15146] 1122317366.470298 read(3, [EMAIL PROTECTED]..., 131072) = 131072 0.000138
[pid 15146] 1122317366.470528 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 0.15
[pid 15146] 1122317366.470599 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.470667 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 0.15
[pid 15146] 1122317366.470734 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.470802 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.470870 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.470939 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.471008 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 0.14
[pid 15146] 1122317366.471079 _llseek(3, 7602176, [7602176], SEEK_SET) = 0 0.16
[pid 15146] 1122317366.471158
Re: Problem with Asus P4C800-DX and P4 -Northwood-
Erik Mouw wrote:
> On Mon, Jul 25, 2005 at 09:51:49PM +0200, Andreas Baer wrote:
>> Willy Tarreau wrote:
>>> A reason for what? The fact that the notebook performs faster than
>>> the desktop while slower on I/O?
>> No, a reason why the partition with Linux (ReiserFS or Ext3) is
>> always slower than the Windows partition?
>> Here I have:
>> /dev/hda:  26.91 MB/sec
>> /dev/hda1: 26.90 MB/sec (Windows FAT32)
>> /dev/hda7: 17.89 MB/sec (Linux EXT3)
>> Could you give me a reason how this is possible?
>
> Easy: drives don't have the same speed on all tracks. The platters
> are built up from zones with different recording densities: zones
> near the center of the platters have a lower recording density and
> hence a lower data rate (fewer bits/second pass under the head).
> Zones at the outer diameter have a higher recording density and a
> higher data rate.
>
> Erik

So it has definitely nothing to do with the filesystem? I also thought
about physical reasons, because I don't think the hdparm results
depend on the filesystem...
Re: Problem with Asus P4C800-DX and P4 -Northwood-
Bill Davidsen wrote:
> One other oddment about this motherboard. Forgive me if I have
> over-snipped this trying to make it relevant...
>
> Andreas Baer wrote:
>> Willy Tarreau wrote:
>>> There clearly is a problem on the system installed on this machine.
>>> You should use strace to see what this machine does all the time;
>>> it is absolutely not expected that the user/system ratios change so
>>> much between two nearly identical systems. So there are system
>>> calls which eat all the CPU. You may want to try "strace -Tttt" on
>>> the running process for a few tens of seconds. I guess you'll
>>> immediately find the culprit amongst the syscalls, and it might
>>> give you a clue.
>> I hope you are talking about a hardware/kernel problem and not a
>> software problem, because I also tried it with LiveCDs and they
>> showed the same results on this machine. I'm not a Linux expert,
>> which means I've never done anything like that before, so it would
>> be nice if you could give me a hint about what you see in these
>> results. :)
>
> Am I misreading this, or is your program doing a bunch of seeks not
> followed by an I/O operation? I would doubt that's important, but
> your vmstat showed a lot of system time, and I just wonder if
> llseek() is more expensive in Linux than Windows, or if your code is
> such that these calls are not optimized away by gcc.

I don't know what exactly produces these _llseek calls, but I ran the
compiled binaries on both machines (desktop + notebook) without any
recompilation, so I think they should do the same thing (optimized or
not), yet I see a time difference of more than 2:30 :) These _llseek
calls also don't seem to be faster or slower if you compare the times
on the notebook and the desktop.
strace output for desktop:
--snip--
[pid 1431] 1122318636.262578 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.17
[pid 1431] 1122318636.262654 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.17
[pid 1431] 1122318636.262732 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.17
[pid 1431] 1122318636.262809 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.262881 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.262952 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.263023 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.263094 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.16
[pid 1431] 1122318636.263165 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.263237 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.16
[pid 1431] 1122318636.263310 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.263381 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.263452 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.263523 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.16
[pid 1431] 1122318636.263594 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.263666 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.17
[pid 1431] 1122318636.263740 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.24
[pid 1431] 1122318636.263841 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.263913 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.263984 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.14
[pid 1431] 1122318636.264055 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.264127 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.264199 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.264271 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.264342 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.16
[pid 1431] 1122318636.264414 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.16
[pid 1431] 1122318636.264487 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.264558 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.16
[pid 1431] 1122318636.264630 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.264710 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.264788 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.16
[pid 1431] 1122318636.264861 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.264934 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.16
[pid 1431] 1122318636.265006 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.265077 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431] 1122318636.265149 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.14
[pid 1431] 1122318636.265220 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 0.15
[pid 1431
Problem with Asus P4C800-DX and P4 -Northwood-
Hi everyone,

First I want to say sorry for this BIG post, but it seems that I have
no other chance. :)

I have an Asus P4C800-DX with a P4 2.4 GHz 512 KB L2 Cache "Northwood"
processor (the lowest processor that supports HyperThreading) and 1GB
DDR400 RAM. I'm also running S-ATA disks at about 50 MB/s (just to
show that it shouldn't be due to hard disk speed). Everything was
bought back in 2003 and I recently upgraded to the latest BIOS
version. I've installed Gentoo Linux and WinXP with dual-boot on this
system.

Now to my problem: I'm currently developing a little database in C++
(it currently runs under Windows and Linux) that internally builds up
an R-Tree and does a lot of equality tests and other time-consuming
checks. For performance testing I ran a test with 20 entries and it
took about 3 minutes to complete under Gentoo Linux. So I ran the same
test in Windows on the same platform and it took about 30(!) seconds.

I was a little surprised by this result, so I started to run several
tests on different machines, like an Athlon XP 2000+ platform and my
P4 3GHz "Prescott" notebook, and they all finished in about 30 seconds
or less. Then I began to search for errors or any misconfiguration in
Gentoo, in my code, and also for people on the internet who have had
similar experiences with that hardware configuration. I thought I had
a problem with a broken gcc or libraries like glibc or libstdc++, so I
recompiled my whole system with the stable gcc 3.3.5 release, but that
didn't change anything. I also tried an Ubuntu and a Suse LiveCD to
verify that it has nothing to do with Gentoo and my kernel version;
they had the same problem and ran the test in about 3 minutes.

Currently I'm at a loss what to do. I'm beginning to think that this
may be a kernel problem, because I have no problems under Windows and
it doesn't matter what software or configuration I change in Gentoo.
I'm currently running kernel-2.6.12, but the LiveCDs had other
kernels. HyperThreading (HT) is also not the reason for the loss of
performance: I tried disabling it and building a uniprocessor kernel,
but that didn't solve the problem.

If you need some output of my configuration/log files or anything like
that, just mail me. Is it possible that the kernel lacks support for
the P4 "Northwood" core? Maybe only this one? Could I solve the
problem by changing the processor to a "Prescott" core? Perhaps
someone could say whether this would make any sense or not.

Thanks in advance for anything that could help. ...sorry for bad
english :)

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/