[gentoo-user] Re: aligning SSD partitions
The 05/09/12, Dale wrote: Michael Mol wrote: On Wed, Sep 5, 2012 at 11:17 AM, Neil Bothwick n...@digimed.co.uk wrote: On Wed, 05 Sep 2012 07:52:45 -0500, Dale wrote: I might also add, I see no speed improvements in putting portage's work directory on tmpfs. I have tested this a few times and the difference in compile times is just not there. Probably because with 16GB everything stays cached anyway. I cleared the cache between the compiles. This is the command I use: echo 3 > /proc/sys/vm/drop_caches But you are still using the RAM as disk cache during the emerge; the data doesn't stay around long enough to need to get written to disk with so much RAM for cache. Indeed. Try setting the mount to write-through to see the difference. When I run that command, it clears all the cache. It is the same as if I rebooted. Certainly you are not thinking that cache survives a reboot? You missed the point. One of the first things emerge will do is uncompress the package. At that time, all the files are cached in RAM. Hence, everything needed for the build/compilation will come from the cache, just as it would with tmpfs. -- Nicolas Sebrecht
Re: [gentoo-user] aligning SSD partitions
Neil Bothwick wrote: On Wed, 05 Sep 2012 12:54:51 -0500, Dale wrote: I might also add, I see no speed improvements in putting portage's work directory on tmpfs. I have tested this a few times and the difference in compile times is just not there. Probably because with 16GB everything stays cached anyway. I cleared the cache between the compiles. This is the command I use: echo 3 > /proc/sys/vm/drop_caches But you are still using the RAM as disk cache during the emerge; the data doesn't stay around long enough to need to get written to disk with so much RAM for cache. Indeed. Try setting the mount to write-through to see the difference. When I run that command, it clears all the cache. It is the same as if I rebooted. Certainly you are not thinking that cache survives a reboot? You clear the cache between the two emerge runs, not during them. If I recall the process correctly, I cleared the cache, then ran emerge with the portage work directory on disk. Then I cleared the cache again and ran it on tmpfs. If you think that the cache would make any difference for the second run, then it would be faster just because of that *while using tmpfs*, since that was the second run. Thing is, it wasn't faster. In some tests it was actually slower. I'm trying to catch on to why you think that clearing the cache means it is still there. The whole point of clearing the cache is that it is gone. When I was checking on drop_caches, my understanding was that clearing it was the same as a reboot. This is from kernel.org: drop_caches Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free. To free pagecache: echo 1 > /proc/sys/vm/drop_caches To free dentries and inodes: echo 2 > /proc/sys/vm/drop_caches To free pagecache, dentries and inodes: echo 3 > /proc/sys/vm/drop_caches According to that, the 3 option clears all cache. One site I found that on even recommended running sync first, just in case something in RAM was not yet written to disk. If you are talking about RAM on the drive itself, well, when it is on tmpfs, it is not on the drive to be cached. The whole point of tmpfs is to get the slow drive out of the way. By the way, there are others that ran tests with the same results. It just doesn't speed up anything since drives are so much faster nowadays. Drives are still orders of magnitude slower than RAM; that's why using swap is so slow. What appears to be happening here is that because files are written and then read again in short succession, they are still in the kernel's disk cache, so the speed of the disk is irrelevant. Bear in mind that tmpfs is basically a cached disk without the disk, so you are effectively comparing the same thing twice. I agree that they are slower but, for whatever reason, it doesn't seem to matter as much as one thought. I was expecting a huge difference, with tmpfs being much faster. Thing is, that is NOT what I got. Theory meets reality, and it was not what I expected or what others expected either. Putting portage's work directory on tmpfs doesn't make much if any difference in emerge times. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
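For anyone wanting to reproduce this comparison, the procedure described above boils down to something like the following; a minimal sketch, where the package atom and the tmpfs size are placeholders, not anything specified in the thread:

    # flush dirty pages, then drop clean pagecache, dentries and inodes
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # time the build with the work directory on disk
    time emerge --oneshot app-misc/foo

    # repeat with the work directory on tmpfs
    mount -t tmpfs -o size=8G tmpfs /var/tmp/portage
    sync; echo 3 > /proc/sys/vm/drop_caches
    time emerge --oneshot app-misc/foo

/var/tmp/portage is the usual default build location ($PORTAGE_TMPDIR/portage); adjust the mount point if yours differs.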
Re: [gentoo-user] Re: aligning SSD partitions
Nicolas Sebrecht wrote: The 05/09/12, Dale wrote: Michael Mol wrote: On Wed, Sep 5, 2012 at 11:17 AM, Neil Bothwick n...@digimed.co.uk wrote: On Wed, 05 Sep 2012 07:52:45 -0500, Dale wrote: I might also add, I see no speed improvements in putting portage's work directory on tmpfs. I have tested this a few times and the difference in compile times is just not there. Probably because with 16GB everything stays cached anyway. I cleared the cache between the compiles. This is the command I use: echo 3 > /proc/sys/vm/drop_caches But you are still using the RAM as disk cache during the emerge; the data doesn't stay around long enough to need to get written to disk with so much RAM for cache. Indeed. Try setting the mount to write-through to see the difference. When I run that command, it clears all the cache. It is the same as if I rebooted. Certainly you are not thinking that cache survives a reboot? You missed the point. One of the first things emerge will do is uncompress the package. At that time, all the files are cached in RAM. Hence, everything needed for the build/compilation will come from the cache, just as it would with tmpfs. You miss this point, not me. I *cleared* that cache. From kernel.org: drop_caches Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free. To free pagecache: echo 1 > /proc/sys/vm/drop_caches To free dentries and inodes: echo 2 > /proc/sys/vm/drop_caches To free pagecache, dentries and inodes: echo 3 > /proc/sys/vm/drop_caches I can confirm this is done with free, top or htop. See my reply to Neil for more on this. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, 06 Sep 2012 04:15:23 -0500, Dale wrote: You missed the point. One of the first things emerge will do is uncompress the package. At that time, all the files are cached in RAM. Hence, everything needed for the build/compilation will come from the cache, just as it would with tmpfs. You miss this point, not me. I *cleared* that cache. From kernel.org: Sorry Dale, but you are missing the point. You cleared the cache before running emerge, then ran emerge. The first thing emerge did was unpack the tarball and populate the disk cache. All clearing the disk cache did was make sure there was plenty of space to cache the new data, thus speeding up the process. -- Neil Bothwick A snooze button is a poor substitute for no alarm clock at all.
Re: [gentoo-user] Re: aligning SSD partitions
Neil Bothwick wrote: On Thu, 06 Sep 2012 04:15:23 -0500, Dale wrote: You missed the point. One of the first things emerge will do is uncompress the package. At that time, all the files are cached in RAM. Hence, everything needed for the build/compilation will come from the cache, just as it would with tmpfs. You miss this point, not me. I *cleared* that cache. From kernel.org: Sorry Dale, but you are missing the point. You cleared the cache before running emerge, then ran emerge. The first thing emerge did was unpack the tarball and populate the disk cache. All clearing the disk cache did was make sure there was plenty of space to cache the new data, thus speeding up the process. Then explain to me why it was at times slower while on tmpfs? Trust me, I ran this test many times and in different orders and it did NOT make much if any difference. I might add, the cache on the drive I was using is nowhere near large enough to cache the tarball for the package. Heck, the cache on my current system drive is only 8MB according to hdparm. That is not much, since I tested using much larger packages. You can't cache files larger than the cache. Do I need to run a test, reboot, run the test again to show this is not making much if any difference? I mean, really? o_O Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
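For reference, the drive cache figure Dale mentions can be read with hdparm; a sketch, with /dev/sda as a placeholder device:

    # query the drive's identification data and pull out its onboard cache size
    hdparm -I /dev/sda | grep -i 'buffer size'

On drives that report it, this prints something like cache/buffer size = 8192 KBytes.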
Re: [gentoo-user] Re: aligning SSD partitions
Neil Bothwick wrote: On Thu, 06 Sep 2012 04:15:23 -0500, Dale wrote: You missed the point. One of the first things emerge will do is uncompress the package. At that time, all the files are cached in RAM. Hence, everything needed for the build/compilation will come from the cache, just as it would with tmpfs. You miss this point, not me. I *cleared* that cache. From kernel.org: Sorry Dale, but you are missing the point. You cleared the cache before running emerge, then ran emerge. The first thing emerge did was unpack the tarball and populate the disk cache. All clearing the disk cache did was make sure there was plenty of space to cache the new data, thus speeding up the process. One other thing: I am not just clearing the *disk* cache. I am clearing all the *SYSTEM* cache. I can have all 16GB of memory in use, either by programs or as cache, then run that command and it is then only using what is in use by programs. It clears everything else. That includes any cache that was stored there, disk or otherwise. You need to run free, run the command to clear and then run free again so you can see for yourself. If it was just me, I could think I am wrong but this was tested by others too with the same results. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, 06 Sep 2012 05:03:55 -0500, Dale wrote: You miss this point, not me. I *cleared* that cache. From kernel.org: Sorry Dale, but you are missing the point. You cleared the cache before running emerge, then ran emerge. The first thing emerge did was unpack the tarball and populate the disk cache. All clearing the disk cache did was make sure there was plenty of space to cache the new data, thus speeding up the process. Then explain to me why it was at times slower while on tmpfs? Trust me, I ran this test many times and in different orders and it did NOT make much if any difference. So it was slower at times, but not by much? That's just general variance caused by multi-tasking, wind direction etc. I might add, the cache on the drive I was using is nowhere near large enough to cache the tarball for the package. Heck, the cache on my current system drive is only 8MB according to hdparm. We're not talking about drive caches; the kernel caches filesystem access long before it gets anywhere near the drive. So all the real work is done in RAM if you have enough, whether you are using a hard drive filesystem or tmpfs. All your test demonstrates is that if you have enough RAM, it doesn't make much difference where you put PORTAGE_TMPDIR. -- Neil Bothwick Evolution stops when stupidity is no longer fatal!
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, 06 Sep 2012 05:11:01 -0500, Dale wrote: You need to run free, run the command to clear and then run free again so you can see for yourself. If it was just me, I could think I am wrong but this was tested by others too with the same results. I'm not saying your test results are wrong, I'm explaining why I think they are what they are. Have you tried running free *during* the emerge? I expect you'll find plenty of cache in use then. -- Neil Bothwick We are Pentium of Borg. You will be approximated. Resistance may or may not be futile, except on every other Tuesday when it is a definite maybe.
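A simple way to follow Neil's suggestion is to keep free running while the build goes on in another terminal; a sketch:

    # redraw memory and cache usage every second during the emerge
    watch -n 1 free -m

The cached figure should climb as soon as the tarball is unpacked.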
[gentoo-user] Re: aligning SSD partitions
The 06/09/12, Dale wrote: Then explain to me why it was at times slower while on tmpfs? Trust me, I ran this test many times and in different orders and it did NOT make much if any difference. As explained, this is expected if you have enough RAM. I didn't check, but I would expect that files stored in tmpfs are NOT duplicated in the kernel cache, in order to save RAM. So, the different times could come from the fact that the kernel will first look in the kernel cache and /then/ look in the tmpfs. In the scenario without tmpfs and a lot of RAM, every unpacked file is stored in the _kernel cache_ with really fast access, long before hitting the disk or even the disk cache (RAM speed and very little processor work required). While retrieving, the file is found on the first lookup, from the kernel cache. In the other scenario, with tmpfs and a lot of RAM, every unpacked file is stored in the tmpfs, allowing very fast access (due to RAM speed) but with the price of a first negative result from the kernel cache (and perhaps additional time needed by the kernel to access the file through the tmpfs filesystem driver). Using tmpfs will still be better, as it prevents writes to the disk in the meantime, avoiding unnecessary mechanical movement and extending disk lifetime. I might add, the cache on the drive I was using is nowhere near large enough to cache the tarball for the package. Heck, the cache on my current system drive is only 8MB according to hdparm. That is not much, since I tested using much larger packages. You can't cache files larger than the cache. The disk cache is out of scope. Do I need to run a test, reboot, run the test again to show this is not making much if any difference? I mean, really? o_O It won't make any difference from the drop cache configuration, but it is still not the point! -- Nicolas Sebrecht
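For completeness, the usual way to put portage's work directory on tmpfs is an fstab entry like the following; a sketch, with the size chosen arbitrarily:

    # /etc/fstab
    tmpfs   /var/tmp/portage   tmpfs   size=8G,nr_inodes=1M   0 0

This assumes portage builds in the default location, $PORTAGE_TMPDIR/portage (i.e. /var/tmp/portage); mount the tmpfs wherever yours points.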
Re: [gentoo-user] Re: aligning SSD partitions
Neil Bothwick wrote: On Thu, 06 Sep 2012 05:11:01 -0500, Dale wrote: You need to run free, run the command to clear and then run free again so you can see for yourself. If it was just me, I could think I am wrong but this was tested by others too with the same results. I'm not saying your test results are wrong, I'm explaining why I think they are what they are. Have you tried running free *during* the emerge? I expect you'll find plenty of cache in use then. The point isn't about using cache DURING the emerge. The point was whether having portage's work directory on tmpfs resulted in speed increases. If you have portage's work directory on tmpfs, of course it uses RAM. That's what tmpfs is. It's taking what might normally be put on the disk and putting it in RAM, because RAM is faster. The point is, cache or not, having portage's work directory on tmpfs doesn't result in the speed improvements one would expect: actual tests showed that it did not make emerging packages any faster. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Re: [gentoo-user] Re: aligning SSD partitions
Neil Bothwick wrote: On Thu, 06 Sep 2012 05:03:55 -0500, Dale wrote: You miss this point, not me. I *cleared* that cache. From kernel.org: Sorry Dale, but you are missing the point. You cleared the cache before running emerge, then ran emerge. The first thing emerge did was unpack the tarball and populate the disk cache. All clearing the disk cache did was make sure there was plenty of space to cache the new data, thus speeding up the process. Then explain to me why it was at times slower while on tmpfs? Trust me, I ran this test many times and in different orders and it did NOT make much if any difference. So it was slower at times, but not by much? That's just general variance caused by multi-tasking, wind direction etc. That's the point. It doesn't make any difference whether you have portage's work directory on tmpfs or not. For the point of this thread, it would be a good idea to save wear and tear on the SSD, but one should NOT expect that emerge will compile packages any faster because of it being on tmpfs instead of on disk. I might also add, I ran some of my tests in single user mode. That is about as raw as Linux gets, but there is still the chance of variances here and there. That's why I said not much. Sometimes one would be a second or two faster, and the next time a second or two slower. Basically, just normal variances that may not be related to one another. I might add, the cache on the drive I was using is nowhere near large enough to cache the tarball for the package. Heck, the cache on my current system drive is only 8MB according to hdparm. We're not talking about drive caches; the kernel caches filesystem access long before it gets anywhere near the drive. So all the real work is done in RAM if you have enough, whether you are using a hard drive filesystem or tmpfs. All your test demonstrates is that if you have enough RAM, it doesn't make much difference where you put PORTAGE_TMPDIR. The command mentioned several replies back CLEARS that cache. When you run that command to clear the cache, from my understanding, at that point it is as if emerge had never been run since the last reboot. Meaning, the command, emerge in this case, and its children are NOT cached in RAM, nor is anything else. I posted that from kernel.org. That's their claim, not mine. If you don't accept that clearing the cache works, you need to talk to the kernel people, because they are saying it there and I'm just repeating it here. A link for you to read: http://www.kernel.org/doc/Documentation/sysctl/vm.txt Just scroll down to the section about drop_caches. Read it for yourself if you can't/won't accept me saying it. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
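To see the effect Dale describes, the before/after numbers are easy to check; a sketch:

    free -m                                  # note the cached column
    sync; echo 3 > /proc/sys/vm/drop_caches
    free -m                                  # cached should now be close to zero

This only shows that the drop works, though; it says nothing about what the cache does while emerge is running.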
Re: [gentoo-user] Re: aligning SSD partitions
Nicolas Sebrecht wrote: The 06/09/12, Dale wrote: Then explain to me why it was at times slower while on tmpfs? Trust me, I ran this test many times and in different orders and it did NOT make much if any difference. As explained, this is expected if you have enough RAM. I didn't check, but I would expect that files stored in tmpfs are NOT duplicated in the kernel cache, in order to save RAM. So, the different times could come from the fact that the kernel will first look in the kernel cache and /then/ look in the tmpfs. In the scenario without tmpfs and a lot of RAM, every unpacked file is stored in the _kernel cache_ with really fast access, long before hitting the disk or even the disk cache (RAM speed and very little processor work required). While retrieving, the file is found on the first lookup, from the kernel cache. The point you are missing is this. Between those tests, I CLEARED that cache. The thing you and Neil claim makes a difference does not exist after you clear the cache. I CLEARED that cache between EACH and every test that was run, whether using tmpfs or not. I did this instead of rebooting my system after each test. In the other scenario, with tmpfs and a lot of RAM, every unpacked file is stored in the tmpfs, allowing very fast access (due to RAM speed) but with the price of a first negative result from the kernel cache (and perhaps additional time needed by the kernel to access the file through the tmpfs filesystem driver). Using tmpfs will still be better, as it prevents writes to the disk in the meantime, avoiding unnecessary mechanical movement and extending disk lifetime. The thing is, this was tested because people wanted to see what the improvement was. When tested, it turned out that there was very little if any difference. So, in theory I would say that using tmpfs would result in faster compile times. After testing, theory left the building and reality showed that it did not make much if any difference. I might add, the cache on the drive I was using is nowhere near large enough to cache the tarball for the package. Heck, the cache on my current system drive is only 8MB according to hdparm. That is not much, since I tested using much larger packages. You can't cache files larger than the cache. The disk cache is out of scope. True, just wanted to make sure we were talking about the same cache here. Do I need to run a test, reboot, run the test again to show this is not making much if any difference? I mean, really? o_O It won't make any difference from the drop cache configuration, but it is still not the point! Well, why say that caching makes a difference, then say it doesn't matter when those caches are cleared? Either caches matter or they don't. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
[gentoo-user] Re: aligning SSD partitions
The 06/09/12, Dale wrote: The point was whether having portage's work directory on tmpfs resulted in speed increases. If you have portage's work directory on tmpfs, of course it uses RAM. That's what tmpfs is. It's taking what might normally be put on the disk and putting it in RAM, because RAM is faster. Please understand that without tmpfs and with a lot of RAM, the kernel _won't_ work with the files from the disk but with the files stored in the _kernel cache_, which IS RAM too. This explains why you get this result: The point is, cache or not, having portage's work directory on tmpfs doesn't result in speed improvements as one would expect. Restating your last sentence with precise semantics: The point is, /tmpfs cache (RAM)/ or /kernel cache (RAM)/, having portage's work directory on tmpfs doesn't result in speed improvements. -- Nicolas Sebrecht
Re: [gentoo-user] Re: aligning SSD partitions
Nicolas Sebrecht wrote: The 06/09/12, Dale wrote: The point was whether having portage's work directory on tmpfs resulted in speed increases. If you have portage's work directory on tmpfs, of course it uses RAM. That's what tmpfs is. It's taking what might normally be put on the disk and putting it in RAM, because RAM is faster. Please understand that without tmpfs and with a lot of RAM, the kernel _won't_ work with the files from the disk but with the files stored in the _kernel cache_, which IS RAM too. This explains why you get this result: The point is, cache or not, having portage's work directory on tmpfs doesn't result in speed improvements as one would expect. Restating your last sentence with precise semantics: The point is, /tmpfs cache (RAM)/ or /kernel cache (RAM)/, having portage's work directory on tmpfs doesn't result in speed improvements. Not quite. The theory is that if you put portage's work directory on tmpfs, then all the writes and such are done in RAM, which is faster. If you have portage's work directory on disk, it will be slower because the disk is slower. That is the theory and what I and others expected to happen. This is reality. Even when portage's work directory is on tmpfs, it is not much, if any, faster when compared to portage's work directory being on disk. The two are essentially the same as far as emerge times go. Look, I have portage's work directory on tmpfs. The only time my hard drive light comes on to amount to anything is when loading the tarball or installing the package after the compile is done. If I take portage off tmpfs, just unmount the directory so that it is back on disk like most normal folks, then the hard drive light blinks all during the compile. It doesn't make sense; however, I can either accept what I think or what actually happens. In this case, I just have to accept that putting portage's work directory on tmpfs just doesn't really do much good except save wear and tear on the disk drive. Which is why I now keep mine on tmpfs. It's also a good idea when using SSDs, as in this thread. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
[gentoo-user] Re: aligning SSD partitions
The 06/09/12, Dale wrote: The point you are missing is this. Between those tests, I CLEARED that cache. The thing you and Neil claim makes a difference does not exist after you clear the cache. I CLEARED that cache between EACH and every test that was run, whether using tmpfs or not. I did this instead of rebooting my system after each test. We clearly understand that you cleared the cache between the tests. We claim that it is not very relevant to your tests, because of another process. So, in theory I would say that using tmpfs would result in faster compile times. After testing, theory left the building and reality showed that it did not make much if any difference. Yes, because you did the tests on a system with a lot of RAM. If the kernel needs to retrieve a file, there is basically the following workflow: 1. retrieve file from kernel cache; 2. if not found, retrieve file from tmpfs cache; 3. if not found, retrieve file from swap cache; 4. if not found, retrieve file from disk cache; 5. if not found, retrieve file from disk. This is a simplified workflow, but you get the idea. Now, what we are saying is that *when you have a lot of RAM*, the kernel never hits 2, 3, 4 and 5. The problem with the kernel cache is that files stored in this cache are dropped from it very fast. tmpfs allows better file persistence in RAM. But if you have a lot of RAM, the files stored in the kernel cache are /not/ dropped from it, which allows the kernel to work with files in RAM only. Clearing the kernel cache between the tests does not change much, since files are stored in RAM again at unpack time. What makes compilation very slow from the disk are all the _subsequent reads and writes_ required by the compilation. Well, why say that caching makes a difference, then say it doesn't matter when those caches are cleared? Either caches matter or they don't. It does make a difference if you don't have enough RAM for the kernel cache to store all the files involved in the whole emerge process and every other process run by the kernel during the emerge. -- Nicolas Sebrecht
[gentoo-user] static IP issue
Hello, I've installed Gentoo into VirtualBox. I want it to have a static IP address, and I've followed the Handbook instructions and the OpenRC/net.example. After reboot there's a warning: WARNING: net.lo has already been started and each network-dependent service fails to start with: service_name: waiting for net.eth0 (51); where 51 is a timer that decrements to 0. I've searched the net, but without success :-\ Can someone help me? The installation is a fresh one. Thanks Pat Freehosting PIPNI - http://www.pipni.cz/
Re: [gentoo-user] static IP issue
2012/9/6 pat p...@xvalheru.org Hello, I've installed Gentoo into VirtualBox. I want it to have a static IP address, and I've followed the Handbook instructions and the OpenRC/net.example. After reboot there's a warning: WARNING: net.lo has already been started and each network-dependent service fails to start with: service_name: waiting for net.eth0 (51); where 51 is a timer that decrements to 0. I've searched the net, but without success :-\ Can someone help me? The installation is a fresh one. Did you create a symlink for /etc/init.d/net.eth0 and did you add it to the runlevel (rc-update add net.eth0 default)?
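For reference, the steps Michael is asking about, as the Handbook of that era describes them; a sketch, with example addresses:

    # create the init script for eth0 (it is a symlink to net.lo)
    cd /etc/init.d
    ln -s net.lo net.eth0

    # start it at boot
    rc-update add net.eth0 default

    # static configuration in /etc/conf.d/net (addresses are examples)
    config_eth0="192.168.1.10 netmask 255.255.255.0"
    routes_eth0="default via 192.168.1.1"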
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, 06 Sep 2012 06:31:24 -0500, Dale wrote: Not quite. The theory is that if you put portage's work directory on tmpfs, then all the writes and such are done in RAM, which is faster. If you have portage's work directory on disk, it will be slower because the disk is slower. But the disk is not used when you have enough RAM to keep everything cached. So you are comparing the speed of storing all files in RAM with the speed of storing all files in RAM, so it is hardly surprising that the two tests give similar results. The fact that in one scenario the files do end up on disk is irrelevant, you are working from RAM copies of the files in both instances. By running the test on a lightly loaded machine, you are also removing the possibility of files being flushed from the cache in the tmpdir-on-disk setup, so I would expect you to get comparable results either way. The only real benefit of using tmpfs is the one you mentioned elsewhere, that the disks don't get bothered at all. -- Neil Bothwick Is there another word for synonym?
Re: [gentoo-user] static IP issue
On Thu, 6 Sep 2012 14:10:21 +0200, Michael Hampicke wrote 2012/9/6 pat p...@xvalheru.org Hello, I've installed Gentoo into VirtualBox. I want it to have a static IP address, and I've followed the Handbook instructions and the OpenRC/net.example. After reboot there's a warning: WARNING: net.lo has already been started and each network-dependent service fails to start with: service_name: waiting for net.eth0 (51); where 51 is a timer that decrements to 0. I've searched the net, but without success :-\ Can someone help me? The installation is a fresh one. Did you create a symlink for /etc/init.d/net.eth0 and did you add it to the runlevel (rc-update add net.eth0 default)? Hi, Yes, I did. When the Gentoo boots to shell there's only the loopback interface. Pat Freehosting PIPNI - http://www.pipni.cz/
Re: [gentoo-user] Re: aligning SSD partitions
Nicolas Sebrecht wrote: The 06/09/12, Dale wrote: The point you are missing is this. Between those tests, I CLEARED that cache. The thing you and Neil claim makes a difference does not exist after you clear the cache. I CLEARED that cache between EACH and every test that was run, whether using tmpfs or not. I did this instead of rebooting my system after each test. We clearly understand that you cleared the cache between the tests. We claim that it is not very relevant to your tests, because of another process. So, in theory I would say that using tmpfs would result in faster compile times. After testing, theory left the building and reality showed that it did not make much if any difference. Yes, because you did the tests on a system with a lot of RAM. If the kernel needs to retrieve a file, there is basically the following workflow: 1. retrieve file from kernel cache; 2. if not found, retrieve file from tmpfs cache; 3. if not found, retrieve file from swap cache; 4. if not found, retrieve file from disk cache; 5. if not found, retrieve file from disk. This is a simplified workflow, but you get the idea. I do get it. I CLEARED #1 and #2, there is no usage of #3, and #4 is not large enough here to matter. So, it is left with #5. See the point? The test was a NORMAL emerge with portage's work directory on tmpfs and a NORMAL emerge with portage's work directory on disk, and compare the results. The test resulted in little if any difference. If I ran the test and did not clear the cache, then I would expect skewed results, because after the first emerge some files would be cached in RAM and the drive would not be used. If you clear the cache, then it has to take the same steps regardless of whether it was run the first, second or third time. Now, what we are saying is that *when you have a lot of RAM*, the kernel never hits 2, 3, 4 and 5. The problem with the kernel cache is that files stored in this cache are dropped from it very fast. tmpfs allows better file persistence in RAM. But if you have a lot of RAM, the files stored in the kernel cache are /not/ dropped from it, which allows the kernel to work with files in RAM only. Clearing the kernel cache between the tests does not change much, since files are stored in RAM again at unpack time. What makes compilation very slow from the disk are all the _subsequent reads and writes_ required by the compilation. Well, why say that caching makes a difference, then say it doesn't matter when those caches are cleared? Either caches matter or they don't. It does make a difference if you don't have enough RAM for the kernel cache to store all the files involved in the whole emerge process and every other process run by the kernel during the emerge. But if you CLEAR the kernel cache between each test, then it doesn't matter either. I am clearing the KERNEL cache, which includes pagecache, dentries and inodes. I can see the difference in gkrellm, in top and in what the command free gives me. Put another way: I run an emerge on tmpfs and note the emerge times. I reboot. I run the same emerge again with it not on tmpfs. Do we agree that that would give an actual, real result? If yes, then using the command to clear the cache is the same as rebooting. That's the whole point of having the feature in the kernel. The file drop_caches, when set to 3 with the echo command, erases, deletes or whatever you want to call it, the caches. That's from the kernel folks, as linked to in another reply. That's not me saying it, it is the kernel folks saying it.
Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
[gentoo-user] Re: aligning SSD partitions
The 06/09/12, Neil Bothwick wrote: The only real benefit of using tmpfs is the one you mentioned elsewhere, that the disks don't get bothered at all. Benefits also depend on what the system does during the emerge. If another process is intensively using the kernel cache and the kernel cache can't keep all the cached files for all the processes because it is short of RAM, then the underlying storage speed (tmpfs vs bare-metal HDD) will slightly change the results. -- Nicolas Sebrecht
Re: [gentoo-user] Re: aligning SSD partitions
Neil Bothwick wrote: On Thu, 06 Sep 2012 06:31:24 -0500, Dale wrote: Not quite. The theory is that if you put portage's work directory on tmpfs, then all the writes and such are done in RAM, which is faster. If you have portage's work directory on disk, it will be slower because the disk is slower. But the disk is not used when you have enough RAM to keep everything cached. So you are comparing the speed of storing all files in RAM with the speed of storing all files in RAM, so it is hardly surprising that the two tests give similar results. The fact that in one scenario the files do end up on disk is irrelevant, you are working from RAM copies of the files in both instances. By running the test on a lightly loaded machine, you are also removing the possibility of files being flushed from the cache in the tmpdir-on-disk setup, so I would expect you to get comparable results either way. The only real benefit of using tmpfs is the one you mentioned elsewhere, that the disks don't get bothered at all. I don't think that is correct. I am clearing the files in RAM. That's the point of drop_caches: to clear the kernel's cached files. See my post to Nicolas Sebrecht a bit ago. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Re: [gentoo-user] static IP issue
On Thu, 6 Sep 2012 13:21:43 +0100, pat wrote: Yes, I did. When the Gentoo boots to shell there's only the loopback interface. Please post the contents of /etc/conf.d/net -- Neil Bothwick Top Oxymorons Number 5: Twelve-ounce pound cake
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, 06 Sep 2012 07:48:59 -0500, Dale wrote: I don't think that is correct. I am clearing the files in RAM. That's the point of drop_caches: to clear the kernel's cached files. See my post to Nicolas Sebrecht a bit ago. Take a step back, Dale, and read the posts again. This is not about the state of the cache at the start of the emerge but during it. You may clear the cache before starting, but that doesn't stop it filling up again as soon as the emerge reaches src_unpack(). This has nothing to do with caching the data from the previous emerge run; it is all from the currently running emerge. You may think you are unpacking the tarball to disk and then loading those files into the compiler, but you are only using the copies that are cached when you unpack. -- Neil Bothwick This universe is sold by mass, not by volume. Some expansion may have occurred during shipment
Re: [gentoo-user] static IP issue
On Thu, 6 Sep 2012 14:00:22 +0100, Neil Bothwick wrote On Thu, 6 Sep 2012 13:21:43 +0100, pat wrote: Yes, I did. When the Gentoo boots to shell there's only the loopback interface. Please post the contents of /etc/conf.d/net -- Neil Bothwick Top Oxymorons Number 5: Twelve-ounce pound cake Here it is. Thanks Pat Freehosting PIPNI - http://www.pipni.cz/ (attachment: net)
[gentoo-user] Re: aligning SSD partitions
The 06/09/12, Dale wrote: I do get it. I CLEARED #1 and #2, there is no usage of #3, and #4 is not large enough here to matter. So, it is left with #5. See the point? The test was a NORMAL emerge with portage's work directory on tmpfs and a NORMAL emerge with portage's work directory on disk, and compare the results. The test resulted in little if any difference. If I ran the test and did not clear the cache, then I would expect skewed results, because after the first emerge some files would be cached in RAM and the drive would not be used. If you clear the cache, then it has to take the same steps regardless of whether it was run the first, second or third time. What you want to measure is the difference in the times required by emerge depending on whether you use a real disk or tmpfs as the backend. What you would expect is a difference, because a disk is much slower than RAM. What you see is no difference. You won't conclude that the disk is as fast as RAM, right? Can you explain why you don't see much difference? No. Here is the explanation: if you have enough RAM, the emerge speed will NOT depend on the disk speed, whatever storage backend you use. It will only depend on the RAM speed, because of the kernel cache. Now, pretending that the backend you use (real disk or tmpfs) never changes the emerge time is WRONG, because of the persistence strategy used by the kernel for the kernel cache. With a lot of RAM like you have, the persistence strategy of the kernel cache NEVER kicks in during the process. This is exactly what your tests demonstrate: if you have enough RAM, the persistence strategy of the kernel cache is not triggered, so everything happens in RAM, so the emerge times do not differ. -- Nicolas Sebrecht
[gentoo-user] Re: aligning SSD partitions
The 06/09/12, Dale wrote: Not quite. The theory is that if you put portage's work directory on tmpfs, then all the writes and such are done in RAM, which is faster. No! That is too simplistic a view to explain what you see. In practice, _all_ the writes always happen in RAM, whatever backend storage you use. The difference you could see is when there is not enough RAM for the kernel cache: then it has to wait for the backend storage. -- Nicolas Sebrecht
Re: [gentoo-user] Re: aligning SSD partitions
Neil Bothwick wrote: On Thu, 06 Sep 2012 07:48:59 -0500, Dale wrote: I don't think that is correct. I am clearing the files in RAM. That's the point of drop_caches: to clear the kernel's cached files. See my post to Nicolas Sebrecht a bit ago. Take a step back, Dale, and read the posts again. This is not about the state of the cache at the start of the emerge but during it. You may clear the cache before starting, but that doesn't stop it filling up again as soon as the emerge reaches src_unpack(). This has nothing to do with caching the data from the previous emerge run; it is all from the currently running emerge. You may think you are unpacking the tarball to disk and then loading those files into the compiler, but you are only using the copies that are cached when you unpack. Then take a look at it this way. If I emerge seamonkey with portage's work directory on disk and it takes 12 minutes the first time, then I clear the caches and emerge seamonkey again while portage's work directory is on tmpfs and it is 12 minutes, then repeat that process a few times more. If the outcome of all those emerges is 12 minutes, regardless of the order, then putting portage's work directory on tmpfs makes no difference at all in that case. The emerge times are exactly the same regardless of emerge using cache or not, or portage's work directory being on tmpfs or not. I don't care if emerge uses cache DURING the emerge process, because it is always enabled in both tests. The point is whether portage's work directory being on tmpfs makes emerges faster. The thing about what you are saying is that I ran those tests with the files in memory. What I am saying is this: that is not the case. I am clearing that memory with the drop_caches command between each test. You claim that cache is affecting the timing, but I am clearing the very same cache, the same as a reboot would. The emerge times, whether portage's work directory is on tmpfs or not, didn't change enough to make a difference. That is what I am saying the tests resulted in. It was not what I expected, but it is what I got. It is also what others got as well. I provided a link to the information, which should be as clear as it gets. Can you provide a link that shows that the command does not clear the kernel cache? I'm going by what I linked to on kernel.org. Since they are the ones that make the kernels, I think they should know what it is and what it does. Here are some more links with the same info, really: http://linux-mm.org/Drop_Caches http://www.linuxinsight.com/proc_sys_vm_drop_caches.html http://bjdean.id.au/wiki/LinuxMemoryManagement Those are all the first links in a google search for drop_caches kernel. See if you can find anything that says otherwise. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Aw: Re: [gentoo-user] dm-crypt + ext4 = where will the journal go?
Try `emerge -pvT $foo`. With whatever package $foo you are trying to install. That is already solved (I had selected it somehow) by simply deselecting it. But it is now a little OT. I now try to compile x11-libs/libxcb, and dev-python/elementtree is not installed on my system. Regards, Florian Philipp Regards, Roland
Re: [gentoo-user] Re: aligning SSD partitions
Nicolas Sebrecht wrote: The 06/09/12, Dale wrote: Not quite. The theory is that if you put portage's work directory on tmpfs, then all the writes and such are done in RAM, which is faster. No! That is too simplistic a view to explain what you see. In practice, _all_ the writes always happen in RAM, whatever backend storage you use. The difference you could see is when there is not enough RAM for the kernel cache: then it has to wait for the backend storage. OK. Step by step here, so hopefully you and Neil can follow:

Freshly booted system.
Clear caches just to be sure.
emerge foo with portage's work directory on tmpfs.
Clear caches again.
emerge foo with portage's work directory on disk.
Clear caches again.
emerge foo with portage's work directory on tmpfs.
Clear caches again.
emerge foo with portage's work directory on disk.

You repeat this enough times and you see that it doesn't matter if portage's work directory is on disk or on tmpfs. As I said before, when I have portage's work directory on disk, I see the drive light blinking like crazy, so it is doing something, reading or writing. When portage's work directory is on tmpfs, it only blinks when I first start the process, which should be unpacking the tarball, and then at the end when it is installing the package. In between, it is just the normal stuff of my wallpaper changing or it checking my emails. So, it may store something in RAM as it does in both cases, but it is also storing things on the drive, or else the light would not be blinking so much. I'm not real big on rebooting, but you and Neil are about to make me test this and reboot between each and every test. If nothing else, just to show that drop_caches does the same as rebooting, like kernel.org says, except the programs are still actually running. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
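Scripted, the back-and-forth Dale describes might look like this; a sketch, with foo standing in for the tested package and /var/tmp/portage as the build location:

    for run in 1 2 3; do
        sync; echo 3 > /proc/sys/vm/drop_caches
        mount -t tmpfs tmpfs /var/tmp/portage     # tmpfs pass
        time emerge --oneshot foo
        sync; echo 3 > /proc/sys/vm/drop_caches
        umount /var/tmp/portage                   # disk pass
        time emerge --oneshot foo
    done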
Re: [gentoo-user] static IP issue
Yes, I did. When the Gentoo boots to shell there's only loop back interface. Are you sure that the kernel module for your network interface is loaded? What's the output of ifconfig -a after a reboot?
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, Sep 6, 2012 at 10:07 AM, Dale rdalek1...@gmail.com wrote: Neil Bothwick wrote: On Thu, 06 Sep 2012 07:48:59 -0500, Dale wrote: I don't think that is correct. I am clearing the files in RAM. That's the point of drop_caches: to clear the kernel's cached files. See my post to Nicolas Sebrecht a bit ago. Take a step back, Dale, and read the posts again. This is not about the state of the cache at the start of the emerge but during it. You may clear the cache before starting, but that doesn't stop it filling up again as soon as the emerge reaches src_unpack(). This has nothing to do with caching the data from the previous emerge run; it is all from the currently running emerge. You may think you are unpacking the tarball to disk and then loading those files into the compiler, but you are only using the copies that are cached when you unpack. Then take a look at it this way. If I emerge seamonkey with portage's work directory on disk and it takes 12 minutes the first time, then I clear the caches and emerge seamonkey again while portage's work directory is on tmpfs and it is 12 minutes, then repeat that process a few times more. If the outcome of all those emerges is 12 minutes, regardless of the order, then putting portage's work directory on tmpfs makes no difference at all in that case. The emerge times are exactly the same regardless of emerge using cache or not, or portage's work directory being on tmpfs or not. I don't care if emerge uses cache DURING the emerge process, because it is always enabled in both tests. The point is whether portage's work directory being on tmpfs makes emerges faster. The thing about what you are saying is that I ran those tests with the files in memory. What I am saying is this: that is not the case. I am clearing that memory with the drop_caches command between each test. Dale, here's what you're missing: emerge first downloads the source tarball and drops it on disk. Once the tarball has been placed on disk, the time required to read the tarball back into memory is negligible; it's a streamed format. The next step is what's important: the tarball gets extracted into PORTAGE_TMPDIR. From that moment onward, all the files that were inside that tarball are in your file cache until something bumps them out. If you have enough RAM, then the files will not be bumped out as a consequence of build-time memory usage. As a consequence, if you have enough RAM, you won't see much (if any) difference in build times if you're comparing tmpfs to a normal filesystem... which means tmpfs (for you) won't have any benefit beyond being self-cleaning on a reboot or remount. So your drop_caches has no influence over build times, since the only cache behavior that matters is whatever happens between the time emerge unpacks the tarball and the time emerge exits. To see the difference, try something like watch drop_cache; leave that running while you let a few builds fly. You should see an increase in build times. -- :wq
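Spelled out, Michael's watch drop_cache shorthand would be something like this; a sketch:

    # keep dropping clean caches every 2 seconds while a build runs elsewhere
    watch 'sync; echo 3 > /proc/sys/vm/drop_caches'

With the page cache constantly evicted, the on-disk work directory can no longer hide behind cached copies, so the tmpfs run should finally pull ahead.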
Re: [gentoo-user] Re: aligning SSD partitions
Nicolas Sebrecht wrote: The 06/09/12, Dale wrote: I do get it. I CLEARED #1 and #2, there is no usage of #3, and #4 is not large enough here to matter. So, it is left with #5. See the point? The test was a NORMAL emerge with portage's work directory on tmpfs and a NORMAL emerge with portage's work directory on disk, and compare the results. The test resulted in little if any difference. If I ran the test and did not clear the cache, then I would expect skewed results, because after the first emerge some files would be cached in RAM and the drive would not be used. If you clear the cache, then it has to take the same steps regardless of whether it was run the first, second or third time. What you want to measure is the difference in the times required by emerge depending on whether you use a real disk or tmpfs as the backend. What you would expect is a difference, because a disk is much slower than RAM. What you see is no difference. You won't conclude that the disk is as fast as RAM, right? Can you explain why you don't see much difference? No. Here is the explanation: if you have enough RAM, the emerge speed will NOT depend on the disk speed, whatever storage backend you use. It will only depend on the RAM speed, because of the kernel cache. Now, pretending that the backend you use (real disk or tmpfs) never changes the emerge time is WRONG, because of the persistence strategy used by the kernel for the kernel cache. With a lot of RAM like you have, the persistence strategy of the kernel cache NEVER kicks in during the process. This is exactly what your tests demonstrate: if you have enough RAM, the persistence strategy of the kernel cache is not triggered, so everything happens in RAM, so the emerge times do not differ. The end result is this: it doesn't matter if portage's work directory is on tmpfs or not. You just concluded that yourself, which is what I have been saying. It doesn't matter WHY it doesn't matter, it just matters that it DOESN'T matter. It takes just as long on a system with portage's work directory on tmpfs as it does with it on disk. Very little difference at all. The variance I had was minimal at best. It was basically seconds of difference, not minutes. I might add, I got the same results on my older system, which has a LOT less RAM. I think it only has 2GB or so. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, Sep 6, 2012 at 10:20 AM, Dale rdalek1...@gmail.com wrote: Nicolas Sebrecht wrote: The 06/09/12, Dale wrote: Not quite. The theory is that if you put portage's work directory on tmpfs, then all the writes and such are done in RAM, which is faster. No! That is too simplistic a view to explain what you see. In practice, _all_ the writes always happen in RAM, whatever backend storage you use. The difference you could see is when there is not enough RAM for the kernel cache: then it has to wait for the backend storage. OK. Step by step here, so hopefully you and Neil can follow: freshly booted system; clear caches just to be sure; emerge foo with portage's work directory on tmpfs; clear caches again; emerge foo with portage's work directory on disk; clear caches again; emerge foo with portage's work directory on tmpfs; clear caches again; emerge foo with portage's work directory on disk. You repeat this enough times and you see that it doesn't matter if portage's work directory is on disk or on tmpfs. If you have enough RAM, then this is certainly true. Nobody is disputing that. They've been trying to explain that there's a difference when you _don't_ have that much RAM, and they've been trying to explain the mechanism behind that. -- :wq
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, 06 Sep 2012 09:07:30 -0500, Dale wrote: I don't care if emerge uses cache DURING the emerge process, because it is always enabled in both tests. The point is whether portage's work directory being on tmpfs makes emerges faster. It does not, if you have enough RAM, precisely because of the part you claim not to care about. The thing about what you are saying is that I ran those tests with the files in memory. What I am saying is this: that is not the case. No, that is not what I am saying. Those files were loaded into memory when you ran the test AFTER you cleared the previously cached files. The number of times you run the test is irrelevant, as is whether you start with an empty cache or not. All that matters is that the kernel caching all the files used during the emerge makes the storage medium used irrelevant. Like I said, take a step back, a deep breath and a break of an hour or two. Then read the posts again without your preconceptions of what you think we are trying to say (which is not what we are actually saying). Only when you have done that can this discussion proceed beyond the current tit-for-tat exchanges of misunderstanding. -- Neil Bothwick Always remember you're unique, just like everyone else.
[gentoo-user] Re: aligning SSD partitions
The 06/09/12, Dale wrote: Then take a look at it this way. If I emerge seamonkey with portage's work directory on disk and it takes 12 minutes the first time, then I clear the caches and emerge seamonkey again while portage's work directory is on tmpfs and it is 12 minutes, then repeat that process a few times more. If the outcome of all those emerges is 12 minutes, regardless of the order, then putting portage's work directory on tmpfs makes no difference at all in that case. We fully agree with you here. The emerge times are exactly the same regardless of emerge using cache or not, or portage's work directory being on tmpfs or not. I don't care if emerge uses cache DURING the emerge process, because it is always enabled in both tests. But you *should* care. If you don't have enough memory, the kernel will reclaim memory from the pagecache, so the speed of the whole process won't rely only on RAM speed anymore. The point is whether portage's work directory being on tmpfs makes emerges faster. The thing about what you are saying is that I ran those tests with the files in memory. What I am saying is this: that is not the case. I am clearing that memory with the drop_caches command between each test. You claim that cache is affecting the timing, but I am clearing the very same cache, the same as a reboot would. We do agree with you that you dropped the cache between the tests, with almost the same effect as a reboot. The emerge times, whether portage's work directory is on tmpfs or not, didn't change enough to make a difference. Yes, we agree. You dropped the cache, which is expected to get correct tests. What we are saying is that you dropped the cache but did NOT DISABLE the VM caches (kernel cache). You say that you don't care about that one because it was involved in all the tests. We say that you might not care in some contexts, not in all contexts. You reached the context where it does not matter much, fine. -- Nicolas Sebrecht
Re: [gentoo-user] Fix for getting libxml2 compiled!
On Wed, Sep 5, 2012 at 5:42 PM, Roland Häder r.hae...@web.de wrote: Hi all, I finally got libxml2 compiled. First I had to do this:

# emerge expat
# emerge python
# cd /usr/portage/dev-lang/python/
# emerge python-2.7.3-r2.ebuild
# cd -

This makes sure that libexpat is there. Now the package still does not compile because of a missing .so file, see this:

# cd /usr/lib/python2.7/xml/parsers/
# ln -sf /usr/lib/python2.7/site-packages/_xmlplus/parsers/pyexpat.so .

If I don't do this, a python script in /var/tmp/portage/dev-libs/libxml2-2.8.0_rc1/work/libxml2-2.8.0/python-2.7/ called generate.py (you have to call this as python2.7 ./generate.py) will fail. Hope this saves someone endless hours. Regards, Roland PS: There are a lot of warnings compiling libxml2; you may want to fix them. I have used this to build libxml2: (temporary) USE=-ipv6 readline -debug -doc -examples -icu lzma python -static-libs -test Weird, I'm on 2.8.0-r1 and didn't have to do any hoop jumping to get there (~amd64). -- Douglas J Hunley (doug.hun...@gmail.com) Twitter: @hunleyd Web: douglasjhunley.com G+: http://goo.gl/sajR3
Aw: Re: [gentoo-user] Fix for getting libxml2 compiled!
Weird, I'm on 2.8.0-r1 and didn't have to do any hoop jumping to get there (~amd64). Yes, it is a really weird thing. :/ I use x86 (i686; my laptop only supports 32-bit; it is a Thinkpad R51). Regards, Roland
Aw: Re: [gentoo-user] dm-crypt + ext4 = where will the journal go?
That is already solved (I had selected it somehow) by simply deselecting it. But it is now a little OT. I now try to compile x11-libs/libxcb, and dev-python/elementtree is not installed on my system. There is hope for this matter, see my forum posting: http://forums.gentoo.org/viewtopic-p-7133700.html#7133700 In short: USE contained *build*. That build flag was wrong and had disabled a lot of required python modules (including _elementtree, gdbm, curses, ...). Roland
Re: [gentoo-user] static IP issue
On Thu, Sep 6, 2012 at 4:33 AM, pat p...@xvalheru.org wrote: Hello, I've installed Gentoo into VirtualBox. I want it to have a static IP address, and I've followed the Handbook instructions and the OpenRC/net.example. After reboot there's a warning: WARNING: net.lo has already been started and each network-dependent service fails to start with: service_name: waiting for net.eth0 (51); where 51 is a timer that decrements to 0. I've searched the net, but without success :-\ Can someone help me? The installation is a fresh one. Thanks Pat Freehosting PIPNI - http://www.pipni.cz/ I have a 32-bit Gentoo Virtualbox VM where I am actually sending this email from. No problems with having a static IP:

gentoo-32b ~ # ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:d4:6a:35
          inet addr:192.168.1.125  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fed4:6a35/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:103 errors:0 dropped:0 overruns:0 frame:0
          TX packets:108 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10493 (10.2 KiB)  TX bytes:9152 (8.9 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

gentoo-32b ~ # cat /etc/conf.d/net
# This blank configuration will automatically use DHCP for any net.*
# scripts in /etc/init.d. To create a more complete configuration,
# please review /usr/share/doc/openrc*/net.example* and save your configuration
# in /etc/conf.d/net (this file :]!).
config_eth0=192.168.1.125 netmask 255.255.255.0
routes_eth0=default via 192.168.1.1
gentoo-32b ~ #

If it helps, I have the VM Network settings set to Bridged Adapter. HTH, Mark
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, Sep 6, 2012 at 9:20 AM, Dale rdalek1...@gmail.com wrote: OK. Step by step here so hopefully you and Neil can follow. Freshly booted system. Clear caches just to be sure emerge foo with portages work directory on tmpfs clear caches again emerge foo with portages work directory on disk clear caches again. emerge foo with portages work directory on tmpfs clear caches again emerge foo with portages work directory on disk I think, based on what the others are saying, that for you to more accurately test it, you should not use emerge but rather use ebuild to run (and time) the individual steps involved in emerging a package (unpacking, preparing, compiling, installing), clearing disk caches in-between each step. So, for example, after sources are unpacked to tmpfs, clear caches before compilation begins -- this way the source files have to be read from disk rather than from cache/RAM.
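(A rough sketch of that per-phase approach; ebuild, time and drop_caches are standard, but the package and version in the path are just an example:)

# cd /usr/portage/dev-libs/libxml2
# ebuild libxml2-2.8.0.ebuild clean unpack
(the sources are now unpacked into PORTAGE_TMPDIR)
# sync; echo 3 > /proc/sys/vm/drop_caches
(the freshly unpacked tree is evicted from the kernel cache)
# time ebuild libxml2-2.8.0.ebuild compile
(the compile step now has to re-read the sources from disk)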
Re: [gentoo-user] static IP issue
On Thu, 6 Sep 2012 16:23:57 +0200, Michael Hampicke wrote: Yes, I did. When Gentoo boots to the shell there's only the loopback interface. Are you sure that the kernel module for your network interface is loaded? What's the output of ifconfig -a after a reboot? ifconfig -a gives:

eth0      Link encap:Ethernet  HWaddr 08:00:27:26:80:c5
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:28 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1988 (1.9 KiB)  TX bytes:1988 (1.9 KiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

I didn't compile the driver as a module but into the kernel, and when I switch to a dynamic IP address it works OK, so this is (probably) not an issue with the driver. Pat Freehosting PIPNI - http://www.pipni.cz/
Re: [gentoo-user] static IP issue
On Thu, 6 Sep 2012 08:37:33 -0700, Mark Knecht wrote: On Thu, Sep 6, 2012 at 4:33 AM, pat p...@xvalheru.org wrote: [Mark's full reply, quoted verbatim above, snipped] Hello, I have a bridged network too and my configuration looks similar to yours, but I have 64-bit Gentoo. Pat Freehosting PIPNI - http://www.pipni.cz/
Re: [gentoo-user] Re: aligning SSD partitions
Michael Mol wrote: On Thu, Sep 6, 2012 at 10:07 AM, Dale rdalek1...@gmail.com wrote: Neil Bothwick wrote: On Thu, 06 Sep 2012 07:48:59 -0500, Dale wrote: I don't think that is correct. I am clearing the files in RAM. That's the point of drop_caches: to clear the kernel's cached files. See my post to Nicolas Sebrecht a bit ago. Take a step back, Dale, and read the posts again. This is not about the state of the cache at the start of the emerge but during it. You may clear the cache before starting, but that doesn't stop it filling up again as soon as the emerge reaches src_unpack(). This has nothing to do with caching the data from the previous emerge run; it is all from the currently running emerge. You may think you are unpacking the tarball to disk and then loading those files into the compiler, but you are only using the copies that are cached when you unpack. Then take a look at it this way. If I emerge seamonkey with portage's work directory on disk, it takes 12 minutes the first time. Then I clear the caches and emerge seamonkey again while portage's work directory is on tmpfs, and it is 12 minutes. Then I repeat that process a few times more. If the outcome of all those emerges is 12 minutes, regardless of the order, then putting portage's work directory on tmpfs makes no difference at all in that case. The emerge times are exactly the same regardless of emerge using cache or not, or portage's work directory being on tmpfs or not. I don't care if emerge uses cache DURING the emerge process because it is always enabled in both tests. The point is whether having portage's work directory on tmpfs makes emerges faster. The thing about what you are saying is that I ran those tests with the files in memory. What I am saying is this: that is not the case. I am clearing that memory with the drop_caches command between each test. Dale, here's what you're missing: emerge first downloads the source tarball and drops it on disk. Once the tarball has been placed on disk, the time required to read the tarball back into memory is negligible; it's a streamed format. The next step is what's important: the tarball gets extracted into PORTAGE_TMPDIR. From that moment onward, all the files that were inside that tarball are in your file cache until something bumps them out. If you have enough RAM, then the files will not be bumped out as a consequence of build-time memory usage. As a consequence, if you have enough RAM, you won't see much (if any) difference in build times if you're comparing tmpfs to a normal filesystem... which means tmpfs (for you) won't have any benefit beyond being self-cleaning on a reboot or remount. So your drop_caches has no influence over build times, since the only cache behavior that matters is whatever happens between the time emerge unpacks the tarball and the time emerge exits. To see the difference, try running drop_caches continuously (e.g. under watch, sketched below) and leave that running while you let a few builds fly. You should see an increase in build times. But this is what you guys are missing too. If you want to use tmpfs, you have to have enough RAM to begin with. Whether you use tmpfs or not, you have to have enough RAM to do the compile, otherwise you start using swap or it just crashes. Having RAM is a prerequisite to using tmpfs. You can't set tmpfs to 8GB on a machine that doesn't have 8GB available and have it work. I don't count swap because when you start using swap, it all goes out the window at that point. There is another flaw in your assumption above.
I already had the tarballs downloaded BEFORE even the first emerge. I may not be the sharpest tool in the shed but I do know to download first when trying to measure an emerge time. I can measure my DSL speed with other tools. lol What the people wanted to test is whether putting portage's work directory on tmpfs would make emerge times faster. It doesn't. The posts people make admit to that fact now but want to argue the reason. I don't care about the reason. I just know that it doesn't matter. Putting portage's work directory on tmpfs does NOT make it faster. For the purpose of this thread, it is still a good idea for saving wear and tear on the SSD, but one should not expect compile times to improve. I might also add, I didn't always have 16GB on this rig. I started with 4GB, then went to 8GB and later on to 16GB. Do we all admit yet that having portage on tmpfs does not make emerge times faster? Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
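(For the record, a sketch of the experiment Michael describes above; watch and drop_caches are standard, the two-second interval is simply watch's default, and this should only be run as root on a test box, since continually dropping caches hurts everything else too:)

# watch 'sync; echo 3 > /proc/sys/vm/drop_caches'
(keeps evicting clean pagecache, dentries and inodes every 2 seconds
while an emerge runs in another terminal; build times should rise,
showing how much the build normally leans on the kernel cache)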
Re: [gentoo-user] Re: aligning SSD partitions
Neil Bothwick wrote: On Thu, 06 Sep 2012 09:07:30 -0500, Dale wrote: I don't care if emerge uses cache DURING the emerge process because it is always enabled in both tests. The point is whether having portage's work directory on tmpfs makes emerges faster. It does not, if you have enough RAM, precisely because of the part you claim not to care about. The thing about what you are saying is that I ran those tests with the files in memory. What I am saying is this: that is not the case. No, that is not what I am saying. Those files were loaded into memory when you ran the test AFTER you cleared the previously cached files. The number of times you run the test is irrelevant, as is whether you start with an empty cache or not. All that matters is that the kernel caching all the files used during the emerge makes the storage medium used irrelevant. Like I said, take a step back, a deep breath and a break of an hour or two. Then read the posts again without your preconceptions of what you think we are trying to say (which is not what we are actually saying). Only when you have done that can this discussion proceed beyond the current tit-for-tat exchanges of misunderstanding. But to use that or tmpfs, you first have to have the RAM. The exact same rig reports that putting portage's work directory on tmpfs does NOT result in faster emerge times. Period. I DO NOT care why that is, I just know from testing that it does NOT make emerge work any faster. The only reason to use tmpfs for portage's work directory is to save wear and tear on a drive. There is no difference in emerge times otherwise. Others ran their own tests and got the same results. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Re: [gentoo-user] Re: aligning SSD partitions
Michael Mol wrote: On Thu, Sep 6, 2012 at 10:20 AM, Dale rdalek1...@gmail.com wrote: Nicolas Sebrecht wrote: The 06/09/12, Dale wrote: Not quite. The theory is that if you put portage's work directory on tmpfs, then all the writes and such are done in RAM, which is faster. No! That is too simplistic a view to explain what you see. In practice, _all_ the writes always happen in RAM whatever backend storage you use. The difference you could see is that if there is not enough RAM for the kernel cache, it will have to wait for the backend storage. OK. Step by step here so hopefully you and Neil can follow. Freshly booted system. Clear caches just to be sure. emerge foo with portage's work directory on tmpfs. Clear caches again. emerge foo with portage's work directory on disk. Clear caches again. emerge foo with portage's work directory on tmpfs. Clear caches again. emerge foo with portage's work directory on disk. You repeat this enough times and you see that it doesn't matter if portage's work directory is on disk or on tmpfs. If you have enough RAM, then this is certainly true. Nobody is disputing that. They've been trying to explain that there's a difference when you _don't_ have that much RAM, and they've been trying to explain the mechanism behind that. But if you don't have enough RAM to compile a package, then you can't use tmpfs anyway. So that point is not really a point. If you try to compile OOo on a machine with 512MB of RAM, you can't use tmpfs because you don't have enough RAM to even consider it. The amount of RAM wasn't what I was testing; I was testing whether using tmpfs makes it faster, regardless of the amount of RAM. It doesn't. Once everything related to that specific emerge process is loaded, tmpfs doesn't matter. That is what I've been saying this whole time. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Re: [gentoo-user] Re: aligning SSD partitions
Paul Hartman wrote: On Thu, Sep 6, 2012 at 9:20 AM, Dale rdalek1...@gmail.com wrote: OK. Step by step here so hopefully you and Neil can follow. Freshly booted system. Clear caches just to be sure. emerge foo with portage's work directory on tmpfs. Clear caches again. emerge foo with portage's work directory on disk. Clear caches again. emerge foo with portage's work directory on tmpfs. Clear caches again. emerge foo with portage's work directory on disk. I think, based on what the others are saying, that to test it more accurately you should not use emerge but rather use ebuild to run (and time) the individual steps involved in emerging a package (unpacking, preparing, compiling, installing), clearing disk caches in between each step. So, for example, after sources are unpacked to tmpfs, clear caches before compilation begins -- this way the source files have to be read from disk rather than from cache/RAM. I didn't want to do it that way because how many people actually update their system that way? I wanted to test it the same way any other person would normally do an emerge. I suspect that 99% of users just type emerge foo and let emerge do it. I kind of get what they are saying, but at the same time using tmpfs doesn't matter. Once the tarball is read off the drive, it doesn't matter whether portage is run on tmpfs or not. The only way it would is if you ran out of RAM and it started using swap. Swap I disabled, because we all know that when you use swap, it's all over. Who in their right mind wants to compile a large program and use a LOT of swap? I hope nobody. lol Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Re: [gentoo-user] Re: aligning SSD partitions
Nicolas Sebrecht wrote: The 06/09/12, Dale wrote: Then take a look at it this way. If I emerge seamonkey with portage's work directory on disk, it takes 12 minutes the first time. Then I clear the caches and emerge seamonkey again while portage's work directory is on tmpfs, and it is 12 minutes. Then I repeat that process a few times more. If the outcome of all those emerges is 12 minutes, regardless of the order, then putting portage's work directory on tmpfs makes no difference at all in that case. We fully agree with you here. That's good. The emerge times are exactly the same regardless of emerge using cache or not, or portage's work directory being on tmpfs or not. I don't care if emerge uses cache DURING the emerge process because it is always enabled in both tests. But you *should* care. If you don't have enough memory, the kernel will reclaim memory from the pagecache, so the speed of the whole process no longer depends only on the speed of RAM. But if you are going to use tmpfs, you have to have the memory available. It doesn't matter if it is tmpfs or just memory used in the normal way. That is my point. The point is whether having portage's work directory on tmpfs makes emerges faster. The thing about what you are saying is that I ran those tests with the files in memory. What I am saying is this: that is not the case. I am clearing that memory with the drop_caches command between each test. You claim that cache is affecting the timing, but I am clearing that very same cache, the same as a reboot would. We do agree with you that you dropped the cache between the tests, with almost the same effect as a reboot. That's good. The emerge times, whether portage's work directory is on tmpfs or not, didn't change enough to make a difference. Yes, we agree. You dropped the cache, which is expected to get correct tests. What we are saying is that you dropped the cache but did NOT DISABLE the VM caches (kernel cache). You say that you don't care about that one because it was involved in all the tests. We say that you might not care in some contexts, but not in all contexts. You happen to be in a context where it does not matter much; fine. Who doing a normal update would cut off the cache? I wouldn't. I know how to clear it, but I don't know how to disable it, nor would I or most likely anyone else in normal use. The point of my test was the normal use case of emerge, with or without tmpfs, and whether there is any difference in the emerge times. There wasn't. Once emerge starts and loads all the stuff it needs, tmpfs doesn't matter at that point. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
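(A minimal way to actually watch the pagecache grow during a build; free is standard and nothing here is portage-specific:)

# free -m
(run repeatedly during an emerge: the "cached" column is the pagecache
holding the unpacked sources and build objects, and it only shrinks
under memory pressure)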
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, 06 Sep 2012 11:44:07 -0500, Dale wrote: I kind of get what they are saying but at the same time using tmpfs doesn't matter. Once the tarball is read off the drive, it doesn't matter whether portage is run on a tmpfs or not. Reading the tarball has nothing to do with this, we are discussing filesystems for PORTAGE_TMPDIR, not DISTDIR. It's where the source is unpacked, the object files compiled to, the executables linked to and the install image created that is relevant to TMPDIR. -- Neil Bothwick What's the difference between ignorance and apathy? I don't know and I don't care
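(For reference, the setup the thread keeps arguing about is typically a tmpfs mounted over portage's build area. /var/tmp/portage is the default work location; the size= value here is an assumption to tune to your RAM:)

# /etc/fstab
tmpfs   /var/tmp/portage   tmpfs   size=8G,noatime   0 0

(after adding the line, activate it with: mount /var/tmp/portage)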
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, 06 Sep 2012 11:32:41 -0500, Dale wrote: Others ran their own tests and got the same results. No one is denying the results, only the reasons given for them. -- Neil Bothwick If you can't be kind, be vague.
Re: [gentoo-user] Re: aligning SSD partitions
Neil Bothwick wrote: On Thu, 06 Sep 2012 11:44:07 -0500, Dale wrote: I kind of get what they are saying but at the same time using tmpfs doesn't matter. Once the tarball is read off the drive, it doesn't matter whether portage is run on a tmpfs or not. Reading the tarball has nothing to do with this, we are discussing filesystems for PORTAGE_TMPDIR, not DISTDIR. It's where the source is unpacked, the object files compiled to, the executables linked to and the install image created that is relevant to TMPDIR. Well, on my system, when I run emerge, it has to go read the tarball from the drive before it can unpack and do all the rest that needs to be done. I was timing from the moment I hit return on the emerge command till it was done. Actually, I used time to do the timing for me. ;-) As I said, I ran these tests the way a typical user would run them. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
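(In other words, something along these lines; seamonkey is just the example package from earlier in the thread:)

# time emerge --oneshot www-client/seamonkey
(times the entire run, from hitting return to emerge exiting, exactly
as a user would experience it)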
Re: [gentoo-user] Re: aligning SSD partitions
On Thu, 06 Sep 2012 16:09:12 -0500, Dale wrote: Reading the tarball has nothing to do with this, we are discussing filesystems for PORTAGE_TMPDIR, not DISTDIR. It's where the source is unpacked, the object files compiled to, the executables linked to and the install image created that is relevant to TMPDIR. Well, on my system, when I run emerge, it has to go read the tarball from the drive before it can unpack and do all the rest that needs to be done. Of course, but it is reading from a different filesystem that is unaffected by your choice for $PORTAGE_TMPDIR. It has about as much relevance as the brand of mouse you are using. -- Neil Bothwick Fascinating, said Spock, watching Kirk's lousy acting.
Re: [gentoo-user] Re: aligning SSD partitions
Neil Bothwick wrote: On Thu, 06 Sep 2012 16:09:12 -0500, Dale wrote: Reading the tarball has nothing to do with this, we are discussing filesystems for PORTAGE_TMPDIR, not DISTDIR. It's where the source is unpacked, the object files compiled to, the executables linked to and the install image created that is relevant to TMPDIR. Well, on my system, when I run emerge, it has to go read the tarball from the drive before it can unpack and do all the rest that needs to be done. Of course, but it is reading from a different filesystem that is unaffected by your choice for $PORTAGE_TMPDIR. It has about as much relevance as the brand of mouse you are using. But whether it is on tmpfs or just in regular memory doesn't matter. Once emerge starts, everything is in RAM, including portage's work directory, which would be on tmpfs here. That's why it doesn't matter if portage is on tmpfs or not: once emerge loads up the files, it's the same thing. I knew that the whole time. The amount of RAM on a system doesn't matter either. If you have a system that doesn't have a lot of RAM, then you can't really use tmpfs anyway. That is not something I would recommend to anyone. I just don't agree that one should *disable* cache to run the test, since no one would disable cache on a normal system. It's not a memory speed test; it's a test to see if putting it on tmpfs makes it faster. The fact that emerge loads everything up in memory when it starts is not relevant to what I am testing. It does that on its own anyway. Since portage and the kernel already do this in the most efficient way, I still say putting portage's work directory on tmpfs is not needed UNLESS a person needs to save wear and tear on a drive, such as the SSD in this thread. I just don't want someone who is new to Gentoo and compiling things to think that a package that takes 10 minutes when built on disk will take 3 minutes on tmpfs. I see that thinking from time to time, usually on the forums. Dale :-) :-) -- I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
Re: [gentoo-user] static IP issue
On Thu, Sep 06, 2012 at 02:13:04PM +0100, pat wrote: On Thu, 6 Sep 2012 14:00:22 +0100, Neil Bothwick wrote: Please post the contents of /etc/conf.d/net -- Neil Bothwick Top Oxymorons Number 5: Twelve-ounce pound cake Here it is:

config_eth0="192.168.74.101 netmask 255.255.255.0"
#routes_eth0="default via 8.8.8.8"
routes_eth0="default via 192.168.74.1"

I assume 192.168.74.1 is your modem, correct? There is no mention of a broadcast address; I don't know if that's critical or not. Try either of the following:

config_eth0="192.168.74.101 netmask 255.255.255.0 broadcast 192.168.74.255"
routes_eth0="default via 192.168.74.1"

or

config_eth0="192.168.74.101/24 broadcast 192.168.74.255"
routes_eth0="default via 192.168.74.1"

-- Walter Dnes waltd...@waltdnes.org I don't run desktop environments; I run useful applications