Neil Bothwick wrote:
> On Wed, 05 Sep 2012 12:54:51 -0500, Dale wrote:
>
>>>>>>> I might also add, I see no speed improvements in putting portages
>>>>>>> work directory on tmpfs.  I have tested this a few times and the
>>>>>>> difference in compile times is just not there.  
>>>>>> Probably because with 16GB everything stays cached anyway.  
>>>>> I cleared the cache between the compiles.  This is the command I
>>>>> use:
>>>>>
>>>>> echo 3 > /proc/sys/vm/drop_caches  
>>>> But you are still using the RAM as disk cache during the emerge, the
>>>> data doesn't stay around long enough to need to get written to disk
>>>> with so much RAM for cache.  
>>> Indeed. Try setting the mount to write-through to see the difference.
>> When I run that command, it clears all the cache.  It is the same as if
>> I rebooted.  Certainly you are not thinking that cache survives a
>> reboot?
> You clear the cache between the two emerge runs, not during them.

If I recall the process correctly, I cleared the cache, then ran emerge
with the portage work directory on disk.  Then I cleared the cache again
and ran it on tmpfs.  If the cache made any difference for the second
run, then that run would have been faster just because of that *while
using tmpfs*, since it was the second run.  Thing is, it wasn't faster.
In some tests it was actually slower. 
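For what it's worth, the procedure looks roughly like this as a script.
This is only a sketch: "foo/bar" is a placeholder package, /dev/shm
stands in for wherever the tmpfs is actually mounted, and the timed
runs are commented out since they have to be run as root.

```shell
#!/bin/sh
# Sketch of the A/B test described above.  Paths are the Gentoo
# defaults; the package name is a placeholder.

drop_caches() {
    sync                               # write out anything dirty first
    echo 3 > /proc/sys/vm/drop_caches  # drop pagecache, dentries and inodes
}

time_build() {
    # PORTAGE_TMPDIR tells portage where to unpack and compile
    PORTAGE_TMPDIR="$1" /usr/bin/time -p emerge --oneshot foo/bar
}

# Run 1: work directory on disk
# drop_caches
# time_build /var/tmp

# Run 2: work directory on tmpfs (assumes a tmpfs is mounted there)
# drop_caches
# time_build /dev/shm
```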

I'm trying to understand why you think that clearing the cache means it
is still there.  The whole point of clearing the cache is that it is
gone.  When I was checking on drop_caches, my understanding was that
clearing it was the same as a reboot.  This is from kernel.org:

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
        echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
        echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
        echo 3 > /proc/sys/vm/drop_caches

According to that, the 3 option clears all of the caches.  Since it only
drops *clean* caches, one site I found that on even recommended running
sync first, so that anything dirty in RAM gets written to disk before
the caches are dropped. 

>
>> If you are talking about ram on the drive itself, well, when it is on
>> tmpfs, it is not on the drive to be cached.  That's the whole point of
>> tmpfs is to get the slow drive out of the way.  By the way, there are
>> others that ran tests with the same results.  It just doesn't speed up
>> anything since drives are so much faster nowadays. 
> Drives are still orders of magnitude slower than RAM, that's why using
> swap is so slow. What appears to be happening here is that because
> files are written and then read again in short succession, they are still
> in the kernel's disk cache, so the speed of the disk is irrelevant. Bear
> in mind that tmpfs is basically a cached disk without the disk, so you
> are effectively comparing the same thing twice.
>
>

I agree that they are slower, but for whatever reason it doesn't seem to
matter as much as one would think.  I was expecting a huge difference
here, with tmpfs being much faster.  Thing is, that is NOT what I got.
Theory met reality, and it was not what I expected, or what others
expected either.  Putting portage's work directory on tmpfs makes
little if any difference in emerge times. 
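For reference, the tmpfs setup being compared is just an fstab line
along these lines (the 8G size and the exact options are only an
example, to be adjusted for the machine):

```
tmpfs   /var/tmp/portage   tmpfs   size=8G,uid=portage,gid=portage,mode=775,noatime   0 0
```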

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!

