[gentoo-user] Re: aligning SSD partitions

2012-09-10 Thread Nicolas Sebrecht
The 07/09/12, Dale wrote:

 The thing is tho, whether it is using the memory as cache or using it as
 tmpfs, it is the same memory.  There is no difference.  That's the whole
 point. 

Feel free to take your own assumptions as undeniable truth. The way the
kernel works with memory is the key, of course.

Now, as long as you blind yourself with statements like that, I'm not
going to respond anymore. I guess you need to do some basic research.

-- 
Nicolas Sebrecht



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-10 Thread Dale
Nicolas Sebrecht wrote:
 The 07/09/12, Dale wrote:

 The thing is tho, whether it is using the memory as cache or using it as
 tmpfs, it is the same memory.  There is no difference.  That's the whole
 point. 
 Feel free to take your own assumptions as undeniable truth. The way the
 kernel works with memory is the key, of course.

 Now, as long as you blind yourself with statements like that, I'm not
 going to respond anymore. I guess you need to do some basic research.


I understand how the kernel uses memory.  That's why it doesn't matter
if you put portage's work directory on tmpfs or not.  I've been using
Linux for a pretty good long while now.  I have a pretty good
understanding of it, especially the things that I use. 

Respond or not, I know what I tested and what the results were.  They
were not just my tests and results either. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-10 Thread Michael Mol
On Mon, Sep 10, 2012 at 7:13 AM, Dale rdalek1...@gmail.com wrote:

 Nicolas Sebrecht wrote:
  The 07/09/12, Dale wrote:
 
  The thing is tho, whether it is using the memory as cache or using it as
  tmpfs, it is the same memory.  There is no difference.  That's the whole
  point.
  Feel free to take your own assumptions as undeniable truth. The way the
  kernel works with memory is the key, of course.
 
  Now, as long as you blind yourself with statements like that, I'm not
  going to respond anymore. I guess you need to do some basic research.
 

 I understand how the kernel uses memory.  That's why it doesn't matter
 if you put portage's work directory on tmpfs or not.  I've been using
 Linux for a pretty good long while now.  I have a pretty good
 understanding of it, especially the things that I use.

 Respond or not, I know what I tested and what the results were.  They
 were not just my tests and results either.


Nobody is disagreeing with your test results. In fact, they're not even
disagreeing with you that they mean what you think they mean within the
context you're testing. They're disagreeing with your extrapolation of your
results to other contexts. In short, all other things being equal, your
test results work out for someone in the exact same circumstances as
yourself...but there are a _lot_ of other things that need to be equal!

Filesystem mount options can have an impact. For example, let's say your
filesystem is configured to make writes synchronous, for general data
integrity purposes. That would slow PORTAGE_TMPDIR down something _fierce_.

Someone might be tweaking any number of the knobs under 'vm' in /proc.
vm.swappiness, vm.dirty_* or vm.min_free_kbytes are ones that caught my
eye, but really most of them in there look relevant.

Or consider that someone else might be running drop_caches, or even sync()
while your code is running. (Heck, if there's a database, even an sqlite
database, on the same filesystem, that's almost a guarantee.)

These may seem obvious, but they are the kinds of things people were
trying to get you to acknowledge before you made blanket assertions
covering them.
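
For illustration, the knobs above can be inspected like this (a hedged
sketch; the path and the chosen sysctls are examples, not
recommendations):

  # mount options in effect for the filesystem holding the portage temp
  findmnt -no OPTIONS /var/tmp
  # a few of the vm knobs mentioned above
  sysctl vm.swappiness vm.dirty_ratio vm.dirty_background_ratio vm.min_free_kbytes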

-- 
:wq


Re: [gentoo-user] Re: aligning SSD partitions

2012-09-10 Thread Dale
Michael Mol wrote:
 On Mon, Sep 10, 2012 at 7:13 AM, Dale rdalek1...@gmail.com wrote:

 Nicolas Sebrecht wrote:
  The 07/09/12, Dale wrote:
 
  The thing is tho, whether it is using the memory as cache or using it as
  tmpfs, it is the same memory.  There is no difference.  That's the whole
  point.
  Feel free to take your own assumptions as undeniable truth. The way the
  kernel works with memory is the key, of course.
 
  Now, as long as you blind yourself with statements like that, I'm not
  going to respond anymore. I guess you need to do some basic research.
 

 I understand how the kernel uses memory.  That's why it doesn't matter
 if you put portage's work directory on tmpfs or not.  I've been using
 Linux for a pretty good long while now.  I have a pretty good
 understanding of it, especially the things that I use.

 Respond or not, I know what I tested and what the results were.  They
 were not just my tests and results either.


 Nobody is disagreeing with your test results. In fact, they're not
 even disagreeing with you that they mean what you think they mean
 within the context you're testing. They're disagreeing with your
 extrapolation of your results to other contexts. In short, all other
 things being equal, your test results work out for someone in the
 exact same circumstances as yourself...but there are a _lot_ of other
 things that need to be equal!

 Filesystem mount options can have an impact. For example, let's say
 your filesystem is configured to make writes synchronous, for general
 data integrity purposes. That would slow PORTAGE_TMPDIR down something
 _fierce_.

 Someone might be tweaking any number of the knobs under 'vm' in /proc.
 vm.swappiness, vm.dirty_* or vm.min_free_kbytes are ones that caught
 my eye, but really most of them in there look relevant.

 Or consider that someone else might be running drop_caches, or even
 sync() while your code is running. (Heck, if there's a database, even
 an sqlite database, on the same filesystem, that's almost a guarantee.)

 These may seem obvious, but they are the kinds of things people were
 trying to get you to acknowledge before you made blanket assertions
 covering them.

 -- 
 :wq


Someone could be getting rays from Mars but I am not testing that.  What
I tested was this: run emerge with portage's work directory on disk. 
Then run the same command with portage's work directory on tmpfs.  Then
compare the results.  No other changes except for where portage's work
directory is located, hard drive or RAM.  This was done on a NORMAL
system that most ANY user would be using.  I'm not concerned with some
rare or exotic setup, just a normal setup.  If someone is running some
exotic setup, then they need to test that to see whether it helps or not
because I did not test for that sort of system.  I didn't test for rays
from Mars either.  LOL
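
For reference, a minimal sketch of that comparison (the package atom and
tmpfs size are illustrative, not necessarily what Dale used):

  # run 1: portage's work directory on tmpfs
  mount -t tmpfs -o size=8G tmpfs /var/tmp/portage
  sync; echo 3 > /proc/sys/vm/drop_caches
  time emerge --oneshot app-editors/vim

  # run 2: same package, work directory back on disk
  umount /var/tmp/portage
  sync; echo 3 > /proc/sys/vm/drop_caches
  time emerge --oneshot app-editors/vim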

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-10 Thread Michael Mol
On Mon, Sep 10, 2012 at 9:52 AM, Dale rdalek1...@gmail.com wrote:

  Michael Mol wrote:

 On Mon, Sep 10, 2012 at 7:13 AM, Dale rdalek1...@gmail.com wrote:

  Nicolas Sebrecht wrote:
  The 07/09/12, Dale wrote:
 
  The thing is tho, whether it is using the memory as cache or using it as
  tmpfs, it is the same memory.  There is no difference.  That's the whole
  point.
  Feel free to take your own assumptions as undeniable truth. The way the
  kernel works with memory is the key, of course.
 
  Now, as long as you blind yourself with statements like that, I'm not
  going to respond anymore. I guess you need to do some basic research.
 

  I understand how the kernel uses memory.  That's why it doesn't matter
 if you put portage's work directory on tmpfs or not.  I've been using Linux
 for a pretty good long while now.  I have a pretty good understanding of
 it, especially the things that I use.

 Respond or not, I know what I tested and what the results were.  They
 were not just my tests and results either.


  Nobody is disagreeing with your test results. In fact, they're not even
 disagreeing with you that they mean what you think they mean within the
 context you're testing. They're disagreeing with your extrapolation of your
 results to other contexts. In short, all other things being equal, your
 test results work out for someone in the exact same circumstances as
 yourself...but there are a _lot_ of other things that need to be equal!

  Filesystem mount options can have an impact. For example, let's say your
 filesystem is configured to make writes synchronous, for general data
 integrity purposes. That would slow PORTAGE_TMPDIR down something _fierce_.

  Someone might be tweaking any number of the knobs under 'vm' in /proc.
 vm.swappiness, vm.dirty_* or vm.min_free_kbytes are ones that caught my
 eye, but really most of them in there look relevant.

  Or consider that someone else might be running drop_caches, or even
 sync() while your code is running. (Heck, if there's a database, even an
 sqlite database, on the same filesystem, that's almost a guarantee.)

  These may seem obvious, but they are the kinds of things people were
 trying to get you to acknowledge before you made blanket assertions
 covering them.

  --
 :wq



 Someone could be getting rays from Mars but I am not testing that.  What I
 tested was this: run emerge with portage's work directory on disk.  Then
 run the same command with portage's work directory on tmpfs.  Then compare
 the results.  No other changes except for where portage's work directory is
 located, hard drive or RAM.  This was done on a NORMAL system that most ANY
 user would be using.  I'm not concerned with some rare or exotic setup,
 just a normal setup.  If someone is running some exotic setup, then they
 need to test that to see whether it helps or not because I did not test for
 that sort of system.  I didn't test for rays from Mars either.  LOL


Running databases on the same filesystem as PORTAGE_TMP is not a rare or
exotic setup. Anyone who doesn't use a separate /home or separate portage
temp is in a circumstance like that.


-- 
:wq


Re: [gentoo-user] Re: aligning SSD partitions

2012-09-10 Thread Dale
Michael Mol wrote:
 On Mon, Sep 10, 2012 at 9:52 AM, Dale rdalek1...@gmail.com wrote:

 Michael Mol wrote:
  On Mon, Sep 10, 2012 at 7:13 AM, Dale rdalek1...@gmail.com wrote:

 Nicolas Sebrecht wrote:
  The 07/09/12, Dale wrote:
 
  The thing is tho, whether it is using the memory as cache or using it
  as tmpfs, it is the same memory.  There is no difference.  That's the
  whole point.
  Feel free to take your own assumptions as undeniable truth.  The way
  the kernel works with memory is the key, of course.
 
  Now, as long as you blind yourself with statements like that, I'm not
  going to respond anymore.  I guess you need to do some basic research.
 

 I understand how the kernel uses memory.  That's why it doesn't matter
 if you put portage's work directory on tmpfs or not.  I've been using
 Linux for a pretty good long while now.  I have a pretty good
 understanding of it, especially the things that I use.

 Respond or not, I know what I tested and what the results
 were.  They
 were not just my tests and results either.


 Nobody is disagreeing with your test results. In fact, they're
 not even disagreeing with you that they mean what you think they
 mean within the context you're testing. They're disagreeing with
 your extrapolation of your results to other contexts. In short,
 all other things being equal, your test results work out for
 someone in the exact same circumstances as yourself...but there
 are a _lot_ of other things that need to be equal!

 Filesystem mount options can have an impact. For example, let's
 say your filesystem is configured to make writes synchronous, for
 general data integrity purposes. That would slow PORTAGE_TMPDIR down
 something _fierce_.

 Someone might be tweaking any number of the knobs under 'vm' in
 /proc. vm.swappiness, vm.dirty_* or vm.min_free_kbytes are ones
 that caught my eye, but really most of them in there look relevant.

 Or consider that someone else might be running drop_caches, or
 even sync() while your code is running. (Heck, if there's a
 database, even an sqlite database, on the same filesystem, that's
 almost a guarantee.)

 These may seem obvious, but they are the kinds of things people
 were trying to get you to acknowledge before you made blanket
 assertions covering them.

 -- 
 :wq


 Someone could be getting rays from Mars but I am not testing
 that.  What I tested was this: run emerge with portage's work
 directory on disk.  Then run the same command with portage's work
 directory on tmpfs.  Then compare the results.  No other changes
 except for where portage's work directory is located, hard drive
 or RAM.  This was done on a NORMAL system that most ANY user would
 be using.  I'm not concerned with some rare or exotic setup, just
 a normal setup.  If someone is running some exotic setup, then
 they need to test that to see whether it helps or not because I
 did not test for that sort of system.  I didn't test for rays from
 Mars either.  LOL


 Running databases on the same filesystem as PORTAGE_TMP is not a rare
 or exotic setup. Anyone who doesn't use a separate /home or separate
 portage temp is in a circumstance like that.


 -- 
 :wq


Well, I have /home on its own partition, as most people likely do.  At
the time, I was not using LVM either.  At the time, I had a pretty much
default install, except that the portage tree was on its own partition
since I wanted to keep it from fragmenting all of /usr with all those
constantly changing little files. 

I also use the defaults when mounting file systems.  Nothing exotic or
weird or anything. 

So again, I was just testing on as normal a system as there could be, to
get some real-world results.  

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!



[gentoo-user] Re: aligning SSD partitions

2012-09-07 Thread Nicolas Sebrecht
The 06/09/12, Dale wrote:

 But whether it is on tmpfs or just regular memory doesn't matter.  Once
 emerge starts, everything is in ram, including portage's work directory,
 which would be on tmpfs here.  That's why it doesn't matter if portage
 is on tmpfs or not.  Once emerge loads up the files, it's the same
 thing.  That's why using tmpfs doesn't matter.  I knew that the whole
 time.  The amount of ram on a system doesn't matter either.  If you have
 a system that doesn't have a lot of ram, then you can't really use tmpfs
 anyway.  That is not something I would recommend to anyone. 

But you're wrong with this assumption. I guess you never tried to
upgrade a Gentoo system running as a server (a working one, with users
and a workload).

The amount of memory is only /one/ parameter that helps hide the
difference.

Like I've already said, the issue is all about the persistence strategy
used by the VM to mark memory as pinned, reclaimable or swappable. Where
tmpfs does change matters is that a file stored in it is not going to be
dropped from RAM until there is an unlink(2) call on it, or until other
running processes run out of memory and some page needs to be swapped
(so there is _already_ no more available RAM for the kernel cache).

If you are not using tmpfs, then because the memory cache is the first
place where the kernel will free up memory, you don't have to wait for
processes to run out of available memory before hitting a situation
where you'll have to wait for the disk to retrieve files. So this will
affect times.
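
One way to observe the persistence difference he describes (a hedged
sketch; the file name and tmpfs mount are illustrative):

  cp somefile /dev/shm/             # tmpfs copy: pinned in RAM until unlinked
  cat somefile > /dev/null          # disk copy: now only in the page cache
  sync; echo 3 > /proc/sys/vm/drop_caches
  df -h /dev/shm                    # the tmpfs copy is still there, still in RAM
  free -m                           # the dropped page cache shows up as free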

-- 
Nicolas Sebrecht



[gentoo-user] Re: aligning SSD partitions

2012-09-07 Thread Nicolas Sebrecht
The 06/09/12, Dale wrote:

 But this is what you guys are missing too.  If you want to use tmpfs,
 you have to have enough ram to begin with.  Whether you use tmpfs or
 not, you have to have enough ram to do the compile otherwise you start
 using swap or it just crashes.  Having ram is a prerequisite to using
 tmpfs.  

This overview is too minimal to get the point. Memory is not a static
place. It is not a cake being shared out once. Memory is living. See my
other mail.

 There is another flaw in your assumption above.  I already had the
 tarballs downloaded BEFORE even the first emerge.

This is not a flaw in the assumption. This is negligible.

 What the people wanted to test is if putting portages work directory on
 tmpfs would make emerge times faster.

Come on. We all understood your goal from the beginning.

 Do we all admit that having portage on tmpfs does not make emerge times
 faster yet? 

No. It depends on factors and underlying processes that you claim don't
matter, which is wrong. They *might* not be relevant in some cases.

-- 
Nicolas Sebrecht



[gentoo-user] Re: aligning SSD partitions

2012-09-07 Thread Nicolas Sebrecht
The 07/09/12, Nicolas Sebrecht wrote:

  There is another flaw in your assumption above.  I already had the
  tarballs downloaded BEFORE even the first emerge.
 
 This is not a flaw in the assumption. This is negligible.

Fixing myself: s/negligible/out of the scope/

-- 
Nicolas Sebrecht



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-07 Thread Dale
Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

 But this is what you guys are missing too.  If you want to use tmpfs,
 you have to have enough ram to begin with.  Whether you use tmpfs or
 not, you have to have enough ram to do the compile otherwise you start
 using swap or it just crashes.  Having ram is a prerequisite to using
 tmpfs.  
 This overview is too minimal to get the point. Memory is not a static
 place. It is not a cake being shared out once. Memory is living. See my
 other mail.

I understand that memory is not static, but that is NOT what I was
testing, or others either.  The test is whether putting portage's work
directory on tmpfs makes emerges faster, not whether emerge's use of
memory itself makes them faster.  Since emerge loads everything into
ram when you run it, regardless of whether portage's work directory is
on tmpfs or not, it doesn't matter.  This test is NOT about portage
loading things into ram WHILE emerging, it was about having the work
directory on tmpfs and speed.  Since emerge loads everything right after
you hit the enter key, it doesn't matter where the work directory is
located. 

We wanted to change only one thing for this test: where portage's work
directory was.  It was not about how much ram a system has but where the
work directory was located.  To use tmpfs, the system has to have enough
ram to begin with, so systems that do not have larger amounts of ram
were not even relevant to the question.  If a system has a small amount
of ram, then most likely it can't use tmpfs anyway. 

 There is another flaw in your assumption above.  I already had the
 tarballs downloaded BEFORE even the first emerge.
 This is not a flaw in the assumption. This is negligible.

It can make a huge difference.  The download time is included in the
emerge time if the tarball is not already in distfiles.  So, if a
tarball takes an hour to download, it adds one hour to the emerge time. 
Depending on internet speed, it can be more than negligible.  I have DSL
but it is the slower package, so this can in some cases make a HUGE
difference here.  Since I was running my tests here, I know it makes a
difference, but you assumed it didn't.  That would be incorrect.  It
does make a difference and it can be a big one depending on the tarball
size.
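
One way to keep download time out of such a comparison (an illustrative
sketch; the package atom is an example):

  emerge --fetchonly app-editors/vim     # populate DISTDIR first
  time emerge --oneshot app-editors/vim  # now times the build only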


 What the people wanted to test is if putting portages work directory on
 tmpfs would make emerge times faster.
 Come on. We all understood your goal from the beginning.

Well, great.  We, and I, were only testing one thing, not two or three
things.  We just wanted to change one setting, not disable a whole bunch
of stuff. 



 Do we all admit that having portage on tmpfs does not make emerge times
 faster yet? 
 No. It depends on factors and underlying processes that you claim don't
 matter, which is wrong. They *might* not be relevant in some cases.


Actually, they don't matter on my system, and since others got the same
results, it doesn't matter.  Again, we only wanted to change one
specific setting, tmpfs, nothing else.  That was the only thing we were
testing, it was the only thing I tested here, and those are the only
results I am reporting.  I'm not reporting on how well emerge is using
ram after the command is given. 

So, accept it or not, it makes no difference speed-wise whether
portage's work directory is on tmpfs or not.  You get the same results
either way.  In the case of the OP of this thread, it would likely be a
good idea if he can, but he should not expect emerge to be any faster. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-07 Thread Dale
Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

 But whether it is on tmpfs or just regular memory doesn't matter.  Once
 emerge starts, everything is in ram, including portage's work directory,
 which would be on tmpfs here.  That's why it doesn't matter if portage
 is on tmpfs or not.  Once emerge loads up the files, it's the same
 thing.  That's why using tmpfs doesn't matter.  I knew that the whole
 time.  The amount of ram on a system doesn't matter either.  If you have
 a system that doesn't have a lot of ram, then you can't really use tmpfs
 anyway.  That is not something I would recommend to anyone. 
 But you're wrong with this assumption. I guess you never tried to
 upgrade a Gentoo system running as a server (a working one, with users
 and a workload).

Actually, I do that a lot here, but we were not testing this on a server
but on what a regular user would have.  I ran most of my tests while in
single user mode tho.  I didn't want the fact that I use KDE, which may
be using CPU resources or even memory, to skew the results.  I went over
this in another reply long ago. 


 The amount of memory is only /one/ parameter that helps hide the
 difference.

 Like I've already said, the issue is all about the persistence strategy
 used by the VM to mark memory as pinned, reclaimable or swappable. Where
 tmpfs does change matters is that a file stored in it is not going to be
 dropped from RAM until there is an unlink(2) call on it, or until other
 running processes run out of memory and some page needs to be swapped
 (so there is _already_ no more available RAM for the kernel cache).

 If you are not using tmpfs, then because the memory cache is the first
 place where the kernel will free up memory, you don't have to wait for
 processes to run out of available memory before hitting a situation
 where you'll have to wait for the disk to retrieve files. So this will
 affect times.


Swap was disabled when I ran the tests, even tho I have it set to not
use swap unless it is a must.  Memory is memory, whether it is tmpfs or
just being used by a process or as disk cache.  I only have one type of
memory in my system here.  It is all the same no matter how the system
uses it.  Since the only setting changed was where the work directory
was located, emerge pinned memory the same way in both cases.  As it
should. 
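
For reference, disabling swap around a test run as described might look
like this (a sketch; the swappiness value is an assumption standing in
for his "only if a must" setting):

  sysctl vm.swappiness=1   # strongly prefer reclaiming cache over swapping
  swapoff -a               # take swap out of the picture for the test
  # ...run the timed emerges...
  swapon -a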

The thing is tho, whether it is using the memory as cache or using it as
tmpfs, it is the same memory.  There is no difference.  That's the whole
point. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-07 Thread Neil Bothwick
On Fri, 07 Sep 2012 07:25:42 -0500, Dale wrote:

 Since emerge loads everything into ram when you run it,
 regardless of whether portage's work directory is on tmpfs or not, it
 doesn't matter.  This test is NOT about portage loading things into ram
 WHILE emerging, it was about having the work directory on tmpfs and
 speed.

Of course it doesn't. The tarball is unpacked from DISTDIR to the work
directory. Then individual source files in the work directory are
compiled to object files, also in the work directory. Then those object
files are linked into executables, also in the work directory. Finally,
everything is installed to an image directory, also under TMPDIR. The
speed of the work directory would appear to be of critical importance -
but it isn't, as shown by your tests. The reason for this, and the point
everyone else has been making, is that the files are cached by the
kernel, so the filesystem is less important if you have enough RAM.
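
For context, the tmpfs arrangement being compared typically looks
something like this (a sketch; the size and paths are illustrative):

  # /etc/fstab
  tmpfs   /var/tmp/portage   tmpfs   size=6G,noatime   0 0

  # /etc/portage/make.conf (the default already points here)
  PORTAGE_TMPDIR="/var/tmp"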


-- 
Neil Bothwick

30 minutes of begging is not considered foreplay.




[gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Nicolas Sebrecht
The 05/09/12, Dale wrote:
 Michael Mol wrote:
  On Wed, Sep 5, 2012 at 11:17 AM, Neil Bothwick n...@digimed.co.uk wrote:
  On Wed, 05 Sep 2012 07:52:45 -0500, Dale wrote:
 
  I might also add, I see no speed improvements in putting portage's
  work directory on tmpfs.  I have tested this a few times and the
  difference in compile times is just not there.
  Probably because with 16GB everything stays cached anyway.
  I cleared the cache between the compiles.  This is the command I use:
 
   echo 3 > /proc/sys/vm/drop_caches
  But you are still using the RAM as disk cache during the emerge, the data
  doesn't stay around long enough to need to get written to disk with so
  much RAM for cache.
  Indeed. Try setting the mount to write-through to see the difference.
 
 
 
 When I run that command, it clears all the cache.  It is the same as if
 I rebooted.  Certainly you are not thinking that cache survives a reboot?

You missed the point. One of the first things emerge will do is
uncompress the package. At that time, all the files are cached in RAM.
Hence, everything needed for the build/compilation will come from the
cache, just as it would with tmpfs.

-- 
Nicolas Sebrecht



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Nicolas Sebrecht wrote:
 The 05/09/12, Dale wrote:
 Michael Mol wrote:
 On Wed, Sep 5, 2012 at 11:17 AM, Neil Bothwick n...@digimed.co.uk wrote:
 On Wed, 05 Sep 2012 07:52:45 -0500, Dale wrote:

 I might also add, I see no speed improvements in putting portage's
 work directory on tmpfs.  I have tested this a few times and the
 difference in compile times is just not there.
 Probably because with 16GB everything stays cached anyway.
 I cleared the cache between the compiles.  This is the command I use:

 echo 3 > /proc/sys/vm/drop_caches
 But you are still using the RAM as disk cache during the emerge, the data
 doesn't stay around long enough to need to get written to disk with so
 much RAM for cache.
 Indeed. Try setting the mount to write-through to see the difference.


 When I run that command, it clears all the cache.  It is the same as if
 I rebooted.  Certainly you are not thinking that cache survives a reboot?
 You missed the point. One of the first things emerge will do is
 uncompress the package. At that time, all the files are cached in RAM.
 Hence, everything needed for the build/compilation will come from the
 cache, just as it would with tmpfs.


You missed this point, not me.  I *cleared* that cache.  From kernel.org:

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches

I can confirm this is done with free, top or htop.  See my reply to Neil for 
more on this.
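
A quick way to watch the drop happen (a sketch; note that drop_caches
only discards *clean* cache, so syncing first makes the effect clearer):

  free -m                            # note the 'cached' figure
  sync
  echo 3 > /proc/sys/vm/drop_caches
  free -m                            # 'cached' should now be much smaller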

Dale

:-)  :-)  

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Neil Bothwick
On Thu, 06 Sep 2012 04:15:23 -0500, Dale wrote:

  You missed the point. One of the first things emerge will do is
  uncompress the package. At that time, all the files are cached in RAM.
  Hence, everything needed for the build/compilation will come from the
  cache, just as it would with tmpfs.
   
 
 You missed this point, not me.  I *cleared* that cache.  From kernel.org:

Sorry Dale, but you are missing the point. You cleared the cache before
running emerge, then ran emerge. The first thing emerge did was unpack
the tarball and populate the disk cache. All clearing the disk cache did
was make sure there was plenty of space to cache the new data, thus
speeding up the process.


-- 
Neil Bothwick

A snooze button is a poor substitute for no alarm clock at all.




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Neil Bothwick wrote:
 On Thu, 06 Sep 2012 04:15:23 -0500, Dale wrote:

 You missed the point. One of the first things emerge will do is
 uncompress the package. At that time, all the files are cached in RAM.
 Hence, everything needed for the build/compilation will come from the
 cache, just as it would with tmpfs.
  
 You missed this point, not me.  I *cleared* that cache.  From kernel.org:
 Sorry Dale, but you are missing the point. You cleared the cache before
 running emerge, then ran emerge. The first thing emerge did was unpack
 the tarball and populate the disk cache. All clearing the disk cache did
 was make sure there was plenty of space to cache the new data, thus
 speeding up the process.



Then explain to me why it was at times slower while on tmpfs?  Trust me,
I ran this test many times and in different orders and it did NOT make
much if any difference. 

I might add, the cache on the drive I was using is nowhere near large
enough to cache the tarball for the package.  Heck, the cache on my
current system drive is only 8 MB according to hdparm.  That is not
much, since I tested using much larger packages.  You can't cache files
larger than the cache. 
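
For reference, the figure quoted there comes from something like this
(the device name is illustrative):

  hdparm -I /dev/sda | grep -i 'buffer size'
  # typical output: cache/buffer size  = 8192 KBytes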

Do I need to run a test, reboot, run the test again to show this is not
making much if any difference?  I mean, really?  o_O

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Neil Bothwick wrote:
 On Thu, 06 Sep 2012 04:15:23 -0500, Dale wrote:

 You missed the point. One of the first things emerge will do is
 uncompress the package. At that time, all the files are cached in RAM.
 Hence, everything needed for the build/compilation will come from the
 cache, just as it would with tmpfs.
  
 You missed this point, not me.  I *cleared* that cache.  From kernel.org:
 Sorry Dale, but you are missing the point. You cleared the cache before
 running emerge, then ran emerge. The first thing emerge did was unpack
 the tarball and populate the disk cache. All clearing the disk cache did
 was make sure there was plenty of space to cache the new data, thus
 speeding up the process.



One other thing: I am not just clearing the *disk* cache.  I am clearing
all the *SYSTEM* cache.  I can have all 16 GB of memory in use, either
by programs or as cache, then run that command, and it is then only
using what is in use by programs.  It clears everything else.  That
includes any cache that was stored there, disk or otherwise.

You need to run free, run the command to clear the cache, and then run
free again so you can see for yourself.  If it was just me, I could
think I am wrong, but this was tested by others too, with the same
results.

Dale

:-)  :-)

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Neil Bothwick
On Thu, 06 Sep 2012 05:03:55 -0500, Dale wrote:

  You missed this point, not me.  I *cleared* that cache.  From
  kernel.org:  
  Sorry Dale, but you are missing the point. You cleared the cache
  before running emerge, then ran emerge. The first thing emerge did
  was unpack the tarball and populate the disk cache. All clearing the
  disk cache did was make sure there was plenty of space to cache the
  new data, thus speeding up the process.

 Then explain to me why it was at times slower while on tmpfs?  Trust me,
 I ran this test many times and in different orders and it did NOT make
 much if any difference.

So it was slower at times, but not by much? That's just general variances
caused by multi-tasking, wind direction etc.
 
 I might add, the cache on the drive I was using is nowhere near large
 enough to cache the tarball for the package.  Heck, the cache on my
 current system drive is only 8 MB according to hdparm.

We're not talking about drive caches; the kernel caches filesystem
access long before it gets anywhere near the drive. So all the real work
is done in RAM if you have enough, whether you are using a hard drive
filesystem or tmpfs. All your test demonstrates is that if you have
enough RAM, it doesn't make much difference where you put PORTAGE_TMPDIR.


-- 
Neil Bothwick

Evolution stops when stupidity is no longer fatal!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Neil Bothwick
On Thu, 06 Sep 2012 05:11:01 -0500, Dale wrote:

 You need to run free, run the command to clear and then run free again
 so you can see for yourself.  If it was just me, I could think I am
 wrong but this was tested by others too with the same results.

I'm not saying your test results are wrong, I'm explaining why I think
they are what they are. Have you tried running free *during* the emerge?
I expect you'll find plenty of cache in use then.
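
For instance (a trivial sketch), in a second terminal while the emerge
runs:

  watch -n1 free -m   # the 'cached' column grows as the build proceeds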


-- 
Neil Bothwick

We are Pentium of Borg.  You will be approximated.  Resistance may or may
not be futile, except on every other Tuesday when it is a definite maybe.




[gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Nicolas Sebrecht
The 06/09/12, Dale wrote:

 Then explain to me why it was at times slower while on tmpfs?  Trust me,
 I ran this test many times and in different orders and it did NOT make
 much if any difference. 

As explained, this is expected if you have enough RAM.

I didn't check, but I would expect that files stored in tmpfs are NOT
duplicated in the kernel cache, in order to save RAM. So the different
times could come from the fact that the kernel will first look up the
file in the kernel cache and /then/ look it up in the tmpfs.

In the scenario without tmpfs and a lot of RAM, every unpacked file is
stored in the _kernel cache_ with really fast access, long before
hitting the disk or even the disk cache (RAM speed and very little
processor calculation required). On retrieval, the file is found on the
first lookup, from the kernel cache.

In the other scenario, with tmpfs and a lot of RAM, every unpacked file
is stored in the tmpfs, allowing very fast access (due to RAM speed) but
at the price of a first negative result from the kernel cache (and
perhaps additional time needed by the kernel to access the file through
the driver of the tmpfs filesystem).

Using tmpfs will still be better, as it prevents writes to the disk in
idle moments, avoiding unnecessary mechanical movement and extending the
disk's lifetime.

 I might add, the cache on the drive I was using is nowhere near large
 enough to cache the tarball for the package.  Heck, the cache on my
 current system drive is only 8Mbs according to hdparm.  That is not much
 since I tested using much larger packages.  You can't cache files larger
 than the cache. 

The disk cache is out of scope.

 Do I need to run a test, reboot, run the test again to show this is not
 making much if any difference?  I mean, really?  o_O

It won't make any difference compared to the drop_caches approach, but
that is still not the point!

-- 
Nicolas Sebrecht



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Neil Bothwick wrote:
 On Thu, 06 Sep 2012 05:11:01 -0500, Dale wrote:

 You need to run free, run the command to clear and then run free again
 so you can see for yourself.  If it was just me, I could think I am
 wrong but this was tested by others too with the same results.
 I'm not saying your test results are wrong, I'm explaining why I think
 they are what they are. Have you tried running free *during* the emerge?
 I expect you'll find plenty of cache in use then.



The point isn't about using cache DURING the emerge.  The point was
whether having portage's work directory on tmpfs resulted in speed
increases.  If you have portage's work directory on tmpfs, of course it
uses ram.  That's what tmpfs is.  It's taking what might normally be put
on the disk and putting it in ram because ram is faster.  The point is,
cache or not, having portage's work directory on tmpfs doesn't result in
the speed improvements one would expect.  Actual tests gave unexpected
results.  Tests show that putting portage's work directory on tmpfs did
not result in speed increases for emerging packages. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Neil Bothwick wrote:
 On Thu, 06 Sep 2012 05:03:55 -0500, Dale wrote:

 You miss this point not me.  I *cleared* that cache.  From
 kernel.org:  
 Sorry Dale, but you are missing the point. You cleared the cache
 before running emerge, then ran emerge. The first thing emerge did
 was unpack the tarball and populate the disk cache. All clearing the
 disk cache did was make sure there was plenty of space to cache the
 new data, thus speeding up the process.
 Then explain to me why it was at times slower while on tmpfs?  Trust me,
 I ran this test many times and in different orders and it did NOT make
 much if any difference.
 So it was slower at times, but not by much? That's just general variances
 caused by multi-tasking, wind direction etc.

That's the point.  It doesn't make any difference whether you have
portage's work directory on tmpfs or not.  For the point of this thread,
it would be a good idea to save wear and tear on the SSD, but one should
NOT expect that emerge will compile packages any faster because of it
being on tmpfs instead of on disk.  I might also add, I ran some of my
tests in single user mode.  That is about as raw as Linux gets, but
there is still the chance of variances here and there.  That's why I
said not much.  Sometimes one run would be a second or two faster, then
the next time a second or two slower.  Basically, just normal variances
that may not be related to one another. 

  
 I might add, the cache on the drive I was using is nowhere near large
 enough to cache the tarball for the package.  Heck, the cache on my
 current system drive is only 8 MB according to hdparm.
 We're not talking about drive caches; the kernel caches filesystem access
 long before it gets anywhere near the drive. So all the real work is done in
 RAM if you have enough, whether you are using a hard drive filesystem or
 tmpfs. All your test demonstrates is that if you have enough RAM, it
 doesn't make much difference where you put PORTAGE_TMPDIR.



The command mentioned several replies back CLEARS that cache.  When you
run that command to clear the cache, from my understanding, it is at
that point as if nothing has been cached since the last reboot. 
Meaning, the command, emerge in this case, and its children are NOT
cached in ram, nor is anything else.  I posted that from kernel.org. 
That's their claim, not mine.  If you don't accept that clearing the
cache works, you need to talk to the kernel people, because they are
saying it there and I'm just repeating it here.  A link for you to read:

http://www.kernel.org/doc/Documentation/sysctl/vm.txt

Just scroll down to the section about drop_caches.  Read it for yourself
if you can't/won't accept me saying it. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

 Then explain to me why it was at times slower while on tmpfs?  Trust me,
 I ran this test many times and in different orders and it did NOT make
 much if any difference. 
 As explained, this is expected if you have enough RAM.

 I didn't check, but I would expect that files stored in tmpfs are NOT
 duplicated in the kernel cache, in order to save RAM. So the different
 times could come from the fact that the kernel will first look up the
 file in the kernel cache and /then/ look it up in the tmpfs.

 In the scenario without tmpfs and a lot of RAM, every unpacked file is
 stored in the _kernel cache_ with really fast access, long before
 hitting the disk or even the disk cache (RAM speed and very little
 processor calculation required). On retrieval, the file is found on the
 first lookup, from the kernel cache.


The point you are missing is this.  Between those tests, I CLEARED that
cache.  The thing you and Neil claim makes a difference does not exist
after you clear the cache.  I CLEARED that cache between EACH and every
test that was run, whether using tmpfs or not.  I did this instead of
rebooting my system after each test. 



 In the other scenario, with tmpfs and a lot of RAM, every unpacked file
 is stored in the tmpfs, allowing very fast access (due to RAM speed) but
 at the price of a first negative result from the kernel cache (and
 perhaps additional time needed by the kernel to access the file through
 the driver of the tmpfs filesystem).

 Using tmpfs will still be better, as it prevents writes to the disk in
 idle moments, avoiding unnecessary mechanical movement and extending the
 disk's lifetime.

The thing is, this was tested because people wanted to see what the
improvement was.  When tested, it turned out that there was very little
if any difference.  So, in theory, I would say that using tmpfs would
result in faster compile times.  After testing, theory left the building
and reality showed that it did not make much if any difference. 

 I might add, the cache on the drive I was using is nowhere near large
 enough to cache the tarball for the package.  Heck, the cache on my
 current system drive is only 8 MB according to hdparm.  That is not much,
 since I tested using much larger packages.  You can't cache files larger
 than the cache. 
 The disk cache is out of scope.

True, just wanted to make sure we were talking about the same cache here. 


 Do I need to run a test, reboot, run the test again to show this is not
 making much if any difference?  I mean, really?  o_O
 It won't make any difference compared to the drop_caches approach, but
 that is still not the point!


Well, why say that caching makes a difference, then say it doesn't
matter when those caches are cleared?  Either caches matter or they
don't. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




[gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Nicolas Sebrecht
The 06/09/12, Dale wrote:

   The point was
 whether having portage's work directory on tmpfs resulted in speed
 increases.  If you have portage's work directory on tmpfs, of course it
 uses ram.  That's what tmpfs is.  It's taking what might normally be put
 on the disk and putting it in ram because ram is faster.

Please understand that without tmpfs and with a lot of RAM, the kernel
_won't_ work with the files from the disk but with the files stored in
the _kernel cache_, which IS RAM, too.

This explains why you get this result:

   The point is,
 cache or not, having portage's work directory on tmpfs doesn't result in
 speed improvements as one would expect.

Restating your last sentence with precise semantics:

  The point is,
/tmpfs cache (RAM)/ or /kernel cache (RAM)/, having portage's work on
tmpfs doesn't result in speed improvements.


-- 
Nicolas Sebrecht



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

   The point was
 whether having portage's work directory on tmpfs resulted in speed
 increases.  If you have portage's work directory on tmpfs, of course it
 uses ram.  That's what tmpfs is.  It's taking what might normally be put
 on the disk and putting it in ram because ram is faster.
 Please understand that without tmpfs and with a lot of RAM, the kernel
 _won't_ work with the files from the disk but with the files stored in
 the _kernel cache_, which IS RAM, too.

 This explains why you get this result:

   The point is,
 cache or not, having portage's work directory on tmpfs doesn't result in
 speed improvements as one would expect.
 Restating your last sentence with precise semantics:

   The point is,
 /tmpfs cache (RAM)/ or /kernel cache (RAM)/, having portage's work on
 tmpfs doesn't result in speed improvements.



Not quite.  The theory is that if you put portage's work directory on
tmpfs, then all the writes and such are done in ram, which is faster. 
If you have portage's work directory on disk, it will be slower because
the disk is slower.  That is the theory and it was what I and others
expected to happen. 

This is reality.  Even when portage's work directory is on tmpfs, it is
not much, if any, faster when compared to portage's work directory being
on disk.  The two are essentially the same as far as emerge times go. 

Look, I have portage's work directory on tmpfs.  The only time my hard
drive light comes on to amount to anything is when loading the tarball
or installing the package after the compile is done.  If I take portage
off tmpfs, just unmount the directory so that it is back on disk like
most normal folks have it, then the hard drive light blinks all during
the compile.  It doesn't make sense; however, I can either accept what I
think or what actually happens.  In this case, I just have to accept
that putting portage's work directory on tmpfs doesn't really do much
good except save wear and tear on the disk drive.  Which is why I now
keep mine on tmpfs.  It's also a good idea when using SSDs, as in this
thread.

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




[gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Nicolas Sebrecht
The 06/09/12, Dale wrote:

 The point you are missing is this.  Between those tests, I CLEARED that
 cache.  The thing you and Neil claim makes a difference does not exist
 after you clear the cache.  I CLEARED that cache between EACH and every
 test that was run, whether using tmpfs or not.  I did this instead of
 rebooting my system after each test. 

We clearly understand that you cleared the cache between the tests. We
contend that it is not very relevant to your tests, because of another
process.

 So, in theory I would say that using tmpfs would
 result in faster compile times.  After testing, theory left the building
 and reality showed that it did not make much if any difference. 

Yes, because you did the tests on a system with a lot of RAM.

If the kernel needs to retrieve a file, there is basically the following
workflow:

1. retrieve file from kernel cache;
2. if not found, retrieve file from tmpfs cache;
3. if not found, retrieve file from swap cache;
4. if not found, retrieve file from disk cache;
5. if not found, retrieve file from disk.

This is a simplified workflow, but you get the idea.

Now, what we are saying is that *when you have a lot of RAM*, the kernel
never hits 2, 3, 4 and 5. The problem with the kernel cache is that
files stored in this cache are dropped from it very fast. tmpfs allows
better file persistence in RAM. But if you have a lot of RAM, the files
stored in the kernel cache are /not/ dropped from it, which allows the
kernel to work with files in RAM only.

Clearing the kernel cache between the tests does not change much, since
the files are stored in RAM again at unpack time. What makes compilation
from disk very slow is all the _subsequent reads and writes_ required by
the compilation.

 Well, why say that caching makes a difference, then say it doesn't matter
 when those caches are cleared?  Either caches matter or they don't. 

It does make a difference if you don't have enough RAM for the kernel
cache to store all the files involved in the whole emerge process and
every other process run by the kernel during the emerge.

-- 
Nicolas Sebrecht



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Neil Bothwick
On Thu, 06 Sep 2012 06:31:24 -0500, Dale wrote:

 Not quite.  The theory is that if you put portages work directory on
 tmpfs, then all the writes and such are done in ram which is faster.  If
 you have portages work directory on disk, it will be slower because the
 disk is slower.

But the disk is not used when you have enough RAM to keep everything
cached. So you are comparing the speed of storing all files in RAM with
the speed of storing all files in RAM, so it is hardly surprising that
the two tests give similar results.

The fact that in one scenario the files do end up on disk is irrelevant,
you are working from RAM copies of the files in both instances.

By running the test on a lightly loaded machine, you are also removing
the possibility of files being flushed from the cache in the
tmpdir-on-disk setup, so I would expect you to get comparable results
either way.

The only real benefit of using tmpfs is the one you mentioned elsewhere,
that the disks don't get bothered at all.


-- 
Neil Bothwick

Is there another word for synonym?




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

 The point you are missing is this.  Between those tests, I CLEARED that
 cache.  The thing you and Neil claim that makes a difference does not
 exist after you clear the cache.  I CLEARED that cache between EACH and
 every test that was ran whether using tmpfs or not.  I did this instead
 of rebooting my system after each test. 
 We clearly understand that you cleared the cache between the tests. We
 contend that it is not very relevant to your tests, because of another
 process.

 So, in theory I would say that using tmpfs would
 result in faster compile times.  After testing, theory left the building
 and reality showed that it did not make much if any difference. 
 Yes, because you did the tests on a system with a lot of RAM.

 If the kernel needs to retrieve a file, there is basically the following
 workflow:

 1. retrieve file from kernel cache;
 2. if not found, retrieve file from tmpfs cache;
 3. if not found, retrieve file from swap cache;
 4. if not found, retrieve file from disk cache;
 5. if not found, retrieve file from disk.

 This is a simplified workflow, but you get the idea.

I do get it.  I CLEARED #1 and #2, there is no usage of #3, and #4 is
not large enough here to matter.  So that leaves #5. 

See the point?  The test was a NORMAL emerge with portage's work
directory on tmpfs and a NORMAL emerge with portage's work directory on
disk, comparing the results.  The test resulted in little if any
difference. 

If I ran the test and did not clear the cache, then I would expect
skewed results, because after the first emerge some files would be
cached in ram and the drive would not be used.  If you clear the cache,
then it has to take the same steps regardless of whether it was run the
first, second or third time. 


 Now, what we are saying is that *when you have a lot of RAM*, the kernel
 never hits 2, 3, 4 and 5. The problem with the kernel cache is that
 files stored in this cache are dropped from it very fast. tmpfs allows
 better file persistence in RAM. But if you have a lot of RAM, the files
 stored in the kernel cache are /not/ dropped from it, which allows the
 kernel to work with files in RAM only.

 Clearing the kernel cache between the tests does not change much, since
 the files are stored in RAM again at unpack time. What makes compilation
 from disk very slow is all the _subsequent reads and writes_ required by
 the compilation.

 Well, why say that caching makes a difference, then say it doesn't matter
 when those caches are cleared?  Either caches matter or they don't. 
 It does make a difference if you don't have enough RAM for the kernel
 cache to store all the files involved in the whole emerge process and
 every other process run by the kernel during the emerge.


But if you CLEAR the kernel cache between each test, then it doesn't
matter either.  I am clearing the KERNEL cache which includes pagecache,
dentries and inodes.  I can see the difference in gkrellm, top and in
what the command free gives me. 

Put another way: I run an emerge on tmpfs and note the emerge time.  I
reboot.  I run the same emerge again with it not on tmpfs.  Do we agree
that that would produce an actual, real result?  If yes, then using the
command to clear the cache is the same as rebooting.  It's the whole
point of having the feature in the kernel.  The file drop_caches, when
set to 3 with the echo command, erases, deletes or whatever you want to
call it, the caches.  That's from the kernel folks, as linked to in
another reply.  That's not me saying it, it is the kernel folks saying it. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




[gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Nicolas Sebrecht
The 06/09/12, Neil Bothwick wrote:

 The only real benefit of using tmpfs is the one you mentioned elsewhere,
 that the disks don't get bothered at all.

Benefits also depend on what the system does during the emerge. If
another process is intensively using the kernel cache, and the kernel
cache can't keep all the cached files for all the processes because it
is short of RAM, then the speed of the underlying storage (tmpfs vs.
bare-metal HDD) will slightly change the results.

-- 
Nicolas Sebrecht



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Neil Bothwick wrote:
 On Thu, 06 Sep 2012 06:31:24 -0500, Dale wrote:

 Not quite.  The theory is that if you put portages work directory on
 tmpfs, then all the writes and such are done in ram which is faster.  If
 you have portages work directory on disk, it will be slower because the
 disk is slower.
 But the disk is not used when you have enough RAM to keep everything
 cached. So you are comparing the speed of storing all files in RAM with
 the speed of storing all files in RAM, so it is hardly surprising that
 the two tests give similar results.

 The fact that in one scenario the files do end up on disk is irrelevant,
 you are working from RAM copies of the files in both instances.

 By running the test on a lightly loaded machine, you are also removing
 the possibility of files being flushed from the cache in the
 tmpdir-on-disk setup, so I would expect you to get comparable results
 either way.

 The only real benefit of using tmpfs is the one you mentioned elsewhere,
 that the disks don't get bothered at all.



I don't think that is correct.  I am clearing the files in ram.  The
point of drop_caches is to clear the kernel's cached files.  See my post
to Nicolas Sebrecht a bit ago. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Neil Bothwick
On Thu, 06 Sep 2012 07:48:59 -0500, Dale wrote:

 I don't think that is correct.  I am clearing the files in ram.  The
 point of drop_caches is to clear the kernel's cached files.  See my post
 to Nicolas Sebrecht a bit ago. 

Take a step back, Dale, and read the posts again. This is not about the
state of the cache at the start of the emerge but during it. You may
clear the cache before starting, but that doesn't stop it filling up
again as soon as the emerge reaches src_unpack().

This has nothing to do with caching the data from the previous emerge
run; it is all from the currently running emerge. You may think you are
unpacking the tarball to disk and then loading those files into the
compiler, but you are only using the copies that were cached when you
unpacked.


-- 
Neil Bothwick

This universe is sold by mass, not by volume.
Some expansion may have occurred during shipment




[gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Nicolas Sebrecht
The 06/09/12, Dale wrote:

 I do get it.  I CLEARED #1 and #2, there is no usage of #3, and #4 is
 not large enough here to matter.  So that leaves #5.
 
 See the point?  The test was a NORMAL emerge with portage's work
 directory on tmpfs and a NORMAL emerge with portage's work directory on
 disk, comparing the results.  The test resulted in little if any
 difference. 
 
 If I ran the test and did not clear the cache, then I would expect
 skewed results, because after the first emerge some files would be
 cached in RAM and the drive would not be used.  If you clear the cache,
 then it has to take the same steps regardless of whether it was run the
 first, second or third time. 

What you want to measure is the difference in the time required by
emerge depending on whether you use a real disk or tmpfs as the backend.

What you would expect is a difference, because a disk is much slower than
RAM.

What you see is no difference. You won't conclude that disk is as fast
as RAM, right? Can you explain why you don't see much difference? No.

Here is the explanation: if you have enough RAM, emerge speed will NOT
depend on disk speed, whatever storage backend you use. It will only
depend on RAM speed, because of the kernel cache.

Now, pretending that the backend you use (real disk or tmpfs) never
changes the emerge time is WRONG, because of the persistence strategy
the kernel uses for its cache.

With as much RAM as you have, the persistence strategy of the kernel
cache is NEVER triggered in the process.

This is exactly what your tests demonstrate: if you have enough RAM, the
persistence strategy of the kernel cache is not triggered, so everything
happens in RAM, so the emerge times do not differ.

-- 
Nicolas Sebrecht



[gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Nicolas Sebrecht
The 06/09/12, Dale wrote:

 Not quite.  The theory is that if you put portage's work directory on
 tmpfs, then all the writes and such are done in RAM, which is faster.

No! That view is too simplistic to explain what you see.

In practice, _all_ the writes always happen in RAM, whatever backend
storage you use.

The difference you could see is when there is not enough RAM for the
kernel cache; then it has to wait for the backing storage.

-- 
Nicolas Sebrecht



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Neil Bothwick wrote:
 On Thu, 06 Sep 2012 07:48:59 -0500, Dale wrote:

 I don't think that is correct.  I am clearing the files in RAM.  That's
 the point of drop_caches: to clear the kernel's cached files.  See my
 post to Nicolas Sebrecht a bit ago. 
 Take a step back, Dale, and read the posts again. This is not about the
 state of the cache at the start of the emerge but during it. You may
 clear the cache before starting, but that doesn't stop it filling up
 again as soon as the emerge reaches src_unpack().

 This has nothing to do with caching the data from the previous emerge
 run, it is all from the currently running emerge. You may think you are
 unpacking the tarball to disk and then loading those files into the
 compiler, but you are only using the copies that are cached when you
 unpack.




Then take a look at it this way.  If I emerge seamonkey with portage's
work directory on disk, the first time it takes 12 minutes.  Then I
clear the caches and emerge seamonkey again while portage's work
directory is on tmpfs, and it is 12 minutes.  Then repeat that process a
few more times.  If the outcome of all those emerges is 12 minutes,
regardless of the order, then putting portage's work directory on tmpfs
makes no difference at all in that case.  The emerge times are exactly
the same whether emerge uses cache or not and whether portage's work
directory is on tmpfs or not.  I don't care if emerge uses cache
DURING the emerge process because it is always enabled in both tests. 
The point is whether portage's work directory being on tmpfs makes
emerges faster.

The thing about what you are saying is that I ran those tests with the
files in memory.  What I am saying is that this is not the case.  I am
clearing that memory with drop_caches between each test.  You claim
that cache is affecting the timing, but I am clearing that very same
cache, the same as a reboot would.  The emerge times, whether portage's
work directory is on tmpfs or not, didn't change enough to make a
difference.  That is what I am saying the tests resulted in.  It was not
what I expected, but it is what I got.  It is also what others got as well. 

I provided a link to the information that should be as clear as it
gets.  Can you provide a link that shows that the command does not clear
the kernel cache?  I'm going by what I linked to on kernel.org.  Since
they are the ones that make the kernels, I think they should know what
it is and what it does. 

Here are some more links with the same info:

http://linux-mm.org/Drop_Caches 

http://www.linuxinsight.com/proc_sys_vm_drop_caches.html

http://bjdean.id.au/wiki/LinuxMemoryManagement

Those are all among the first links in a Google search for drop_caches
kernel.  See if you can find anything that says otherwise. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

 Not quite.  The theory is that if you put portage's work directory on
 tmpfs, then all the writes and such are done in RAM, which is faster.
 No! That view is too simplistic to explain what you see.

 In practice, _all_ the writes always happen in RAM, whatever backend
 storage you use.

 The difference you could see is when there is not enough RAM for the
 kernel cache; then it has to wait for the backing storage.



OK.  Step by step here, so hopefully you and Neil can follow (see the
shell sketch below the list):

Freshly booted system.
Clear caches just to be sure.

emerge foo with portage's work directory on tmpfs
clear caches again
emerge foo with portage's work directory on disk
clear caches again
emerge foo with portage's work directory on tmpfs
clear caches again
emerge foo with portage's work directory on disk
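
In shell terms, one cycle looks roughly like this (a sketch; foo and
the two directories are placeholders, and dropping caches needs root):

    for dir in /mnt/tmpfs /mnt/disk; do
        sync; echo 3 > /proc/sys/vm/drop_caches
        time PORTAGE_TMPDIR=$dir emerge --oneshot foo
    done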

You repeat this enough times and you see that it doesn't matter if
portage's work directory is on disk or on tmpfs.  As I said before, when
I have portage's work directory on disk, I see the drive light blinking
like crazy, so it is doing something, reading or writing.  When portage's
work directory is on tmpfs, it only blinks when I first start the
process, which should be unpacking the tarball, and then at the end when
it is installing the package.  In between, it is just the normal
stuff of my wallpaper changing or it checking my emails.  So, it may
store something in RAM as it does in both cases, but it is also storing
things on the drive or else the light would not be blinking so much. 

I'm not real big on rebooting, but you and Neil are about to make me test
this and reboot between each and every test.  If nothing else, just to
show that drop_caches does the same as rebooting like kernel.org says,
except that the programs are still actually running. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Michael Mol
On Thu, Sep 6, 2012 at 10:07 AM, Dale rdalek1...@gmail.com wrote:
 Neil Bothwick wrote:
 On Thu, 06 Sep 2012 07:48:59 -0500, Dale wrote:

 I don't think that is correct.  I am clearing the files in RAM.  That's
 the point of drop_caches: to clear the kernel's cached files.  See my
 post to Nicolas Sebrecht a bit ago.
 Take a step back, Dale, and read the posts again. This is not about the
 state of the cache at the start of the emerge but during it. You may
 clear the cache before starting, but that doesn't stop it filling up
 again as soon as the emerge reaches src_unpack().

 This has nothing to do with caching the data from the previous emerge
 run, it is all from the currently running emerge. You may think you are
 unpacking the tarball to disk and then loading those files into the
 compiler, but you are only using the copies that are cached when you
 unpack.




 Then take a look at it this way.  If I emerge seamonkey with portage's
 work directory on disk, the first time it takes 12 minutes.  Then I
 clear the caches and emerge seamonkey again while portage's work
 directory is on tmpfs, and it is 12 minutes.  Then repeat that process a
 few more times.  If the outcome of all those emerges is 12 minutes,
 regardless of the order, then putting portage's work directory on tmpfs
 makes no difference at all in that case.  The emerge times are exactly
 the same whether emerge uses cache or not and whether portage's work
 directory is on tmpfs or not.  I don't care if emerge uses cache
 DURING the emerge process because it is always enabled in both tests.
 The point is whether portage's work directory being on tmpfs makes
 emerges faster.

 The thing about what you are saying is that I ran those tests with the
 files in memory.  What I am saying is that this is not the case.  I am
 clearing that memory with drop_caches between each test.

Dale, here's what you're missing:

emerge first downloads the source tarball and drops it on disk. Once
the tarball has been placed on disk, the time required to read the
tarball back into memory is negligible; it's a streamed format.

The next step is what's important: the tarball gets extracted into
PORTAGE_TMPDIR. From that moment onward, all the files that were inside
that tarball are in your file cache until something bumps them out.

If you have enough RAM, then the files will not be bumped out as a
consequence of build-time memory usage. As a consequence, if you have
enough RAM, you won't see much (if any) difference in build times if
you're comparing tmpfs to a normal filesystem...which means tmpfs (for
you) won't have any benefit beyond being self-cleaning on a reboot or
remount.

So your drop_caches has no influence over build times, since the only
cache behavior that matters is whatever happens between the time
emerge unpacks the tarball and the time emerge exits.

To see the difference, try something like running drop_caches under
watch; leave that running while you let a few builds fly. You should
see an increase in build times.
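
For instance (a rough sketch; run as root in a second terminal, the
one-second interval is arbitrary):

    watch -n1 'sync; echo 3 > /proc/sys/vm/drop_caches'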

-- 
:wq



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

 I do get it.  I CLEARED #1 and #2, there is no usage of #3, and #4 is
 not large enough here to matter.  So that leaves #5.

 See the point?  The test was a NORMAL emerge with portage's work
 directory on tmpfs and a NORMAL emerge with portage's work directory on
 disk, comparing the results.  The test resulted in little if any
 difference.

 If I ran the test and did not clear the cache, then I would expect
 skewed results, because after the first emerge some files would be
 cached in RAM and the drive would not be used.  If you clear the cache,
 then it has to take the same steps regardless of whether it was run the
 first, second or third time. 
 What you want to measure is the difference in the time required by
 emerge depending on whether you use a real disk or tmpfs as the backend.

 What you would expect is a difference, because a disk is much slower than
 RAM.

 What you see is no difference. You won't conclude that disk is as fast
 as RAM, right? Can you explain why you don't see much difference? No.

 Here is the explanation: if you have enough RAM, emerge speed will NOT
 depend on disk speed, whatever storage backend you use. It will only
 depend on RAM speed, because of the kernel cache.

 Now, pretending that the backend you use (real disk or tmpfs) never
 changes the emerge time is WRONG, because of the persistence strategy
 the kernel uses for its cache.

 With as much RAM as you have, the persistence strategy of the kernel
 cache is NEVER triggered in the process.

 This is exactly what your tests demonstrate: if you have enough RAM, the
 persistence strategy of the kernel cache is not triggered, so everything
 happens in RAM, so the emerge times do not differ.


The end result is this: it doesn't matter if portage's work directory is
on tmpfs or not.  You just concluded that yourself, which is what I have
been saying.  It doesn't matter WHY it doesn't matter, it just matters
that it DOESN'T matter.  It takes just as long on a system with
portage's work directory on tmpfs as it does with it on disk.  Very
little difference at all.  The variance I had was minimal at best.  It
was basically seconds of difference, not minutes.

I might add, I got the same results on my older system, which has a LOT
less RAM.  I think it only has 2GB or so. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Michael Mol
On Thu, Sep 6, 2012 at 10:20 AM, Dale rdalek1...@gmail.com wrote:
 Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

 Not quite.  The theory is that if you put portage's work directory on
 tmpfs, then all the writes and such are done in RAM, which is faster.
 No! That view is too simplistic to explain what you see.

 In practice, _all_ the writes always happen in RAM, whatever backend
 storage you use.

 The difference you could see is when there is not enough RAM for the
 kernel cache; then it has to wait for the backing storage.



 OK.  Step by step here, so hopefully you and Neil can follow.

 Freshly booted system.
 Clear caches just to be sure.

 emerge foo with portage's work directory on tmpfs
 clear caches again
 emerge foo with portage's work directory on disk
 clear caches again
 emerge foo with portage's work directory on tmpfs
 clear caches again
 emerge foo with portage's work directory on disk

 You repeat this enough times and you see that it doesn't matter if
 portage's work directory is on disk or on tmpfs.

If you have enough RAM, then this is certainly true. Nobody is
disputing that. They've been trying to explain that there's a
difference when you _don't_ have that much RAM, and they've been
trying to explain the mechanism behind that.


-- 
:wq



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Neil Bothwick
On Thu, 06 Sep 2012 09:07:30 -0500, Dale wrote:

 I don't care if emerge uses cache
 DURING the emerge process because it is always enabled in both tests. 
 The point is whether portage's work directory being on tmpfs makes
 emerges faster.

It does not, if you have enough RAM, precisely because of the part you
claim not to care about.

 The thing about what you are saying is that I ran those tests with the
 files in memory.  What I am saying is that this is not the case. 

No, that is not what I am saying. Those files were loaded into memory
when you ran the test AFTER you cleared the previously cached files. The
number of times you run the test is irrelevant, as is whether you start
with an empty cache or not. All that matters is that the kernel caching
all the files used during the emerge makes the storage medium used
irrelevant.

Like I said, take a step back, a deep breath and a break of an hour or
two. Then read the posts again without your preconceptions of what you
think we are trying to say (which is not what we are actually saying).
Only when you have done that can this discussion proceed beyond the
current tit-for-tat exchanges of misunderstanding.


-- 
Neil Bothwick

Always remember you're unique, just like everyone else.


signature.asc
Description: PGP signature


[gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Nicolas Sebrecht
The 06/09/12, Dale wrote:

 Then take a look at it this way.  If I emerge seamonkey with portage's
 work directory on disk, the first time it takes 12 minutes.  Then I
 clear the caches and emerge seamonkey again while portage's work
 directory is on tmpfs, and it is 12 minutes.  Then repeat that process a
 few more times.  If the outcome of all those emerges is 12 minutes,
 regardless of the order, then putting portage's work directory on tmpfs
 makes no difference at all in that case.

We fully agree with you, here.

   The emerge times are exactly
 the same whether emerge uses cache or not and whether portage's work
 directory is on tmpfs or not.  I don't care if emerge uses cache
 DURING the emerge process because it is always enabled in both tests. 

But you *should* care.  If you don't have enough memory, the kernel will
reclaim memory from the pagecache, so the speed of the whole process
won't depend only on RAM speed anymore.

 The point is whether portage's work directory being on tmpfs makes
 emerges faster.
 
 The thing about what you are saying is that I ran those tests with the
 files in memory.  What I am saying is that this is not the case.  I am
 clearing that memory with drop_caches between each test.  You claim
 that cache is affecting the timing, but I am clearing that very same
 cache, the same as a reboot would.  The emerge times, whether portage's

 We do agree with you that you dropped the cache between the tests, with
 almost the same effect as a reboot.

   The emerge times, whether portage's
 work directory is on tmpfs or not, didn't change enough to make a
 difference.

 Yes, we agree.  You dropped the cache, which is needed to get correct
 tests.

 What we are saying is that you dropped the cache but did NOT DISABLE the
 VM caches (kernel cache).  You say that you don't care about that one
 because it was involved in all the tests.  We say that you might not care
 in some contexts, but not in all contexts.  You have reached a context
 where it does not matter much, fine.

-- 
Nicolas Sebrecht



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Paul Hartman
On Thu, Sep 6, 2012 at 9:20 AM, Dale rdalek1...@gmail.com wrote:
 OK.  Step by step here, so hopefully you and Neil can follow.

 Freshly booted system.
 Clear caches just to be sure.

 emerge foo with portage's work directory on tmpfs
 clear caches again
 emerge foo with portage's work directory on disk
 clear caches again
 emerge foo with portage's work directory on tmpfs
 clear caches again
 emerge foo with portage's work directory on disk

I think, based on what the others are saying, that for you to more
accurately test it, you should not use emerge but rather use
ebuild to run (and time) the individual steps involved in emerging a
package (unpacking, preparing, compiling, installing), clearing disk
caches in-between each step. So, for example, after sources are
unpacked to tmpfs, clear caches before compilation begins -- this way
the source files have to be read from disk rather than from cache/RAM.
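
Something along those lines (a sketch; the category, package and version
are made up, and the ebuild and cache-clearing steps need root):

    eb=/usr/portage/app-misc/foo/foo-1.0.ebuild
    time ebuild $eb unpack
    sync; echo 3 > /proc/sys/vm/drop_caches
    time ebuild $eb compile
    sync; echo 3 > /proc/sys/vm/drop_caches
    time ebuild $eb install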



Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Michael Mol wrote:
 On Thu, Sep 6, 2012 at 10:07 AM, Dale rdalek1...@gmail.com wrote:
 Neil Bothwick wrote:
 On Thu, 06 Sep 2012 07:48:59 -0500, Dale wrote:

 I don't think that is correct.  I am clearing the files in RAM.  That's
 the point of drop_caches: to clear the kernel's cached files.  See my
 post to Nicolas Sebrecht a bit ago.
 Take a step back, Dale, and read the posts again. This is not about the
 state of the cache at the start of the emerge but during it. You may
 clear the cache before starting, but that doesn't stop it filling up
 again as soon as the emerge reaches src_unpack().

 This has nothing to do with caching the data from the previous emerge
 run, it is all from the currently running emerge. You may think you are
 unpacking the tarball to disk and then loading those files into the
 compiler, but you are only using the copies that are cached when you
 unpack.



 Then take a look at it this way.  If I emerge seamonkey with portage's
 work directory on disk, the first time it takes 12 minutes.  Then I
 clear the caches and emerge seamonkey again while portage's work
 directory is on tmpfs, and it is 12 minutes.  Then repeat that process a
 few more times.  If the outcome of all those emerges is 12 minutes,
 regardless of the order, then putting portage's work directory on tmpfs
 makes no difference at all in that case.  The emerge times are exactly
 the same whether emerge uses cache or not and whether portage's work
 directory is on tmpfs or not.  I don't care if emerge uses cache
 DURING the emerge process because it is always enabled in both tests.
 The point is whether portage's work directory being on tmpfs makes
 emerges faster.

 The thing about what you are saying is that I ran those tests with the
 files in memory.  What I am saying is that this is not the case.  I am
 clearing that memory with drop_caches between each test.
 Dale, here's what you're missing:

 emerge first downloads the source tarball and drops it on disk. Once
 the tarball has been placed on disk, the time required to read the
 tarball back into memory is negligible; it's a streamed format.

 The next step is what's important: the tarball gets extracted into
 PORTAGE_TMPDIR. From that moment onward, all the files that were inside
 that tarball are in your file cache until something bumps them out.

 If you have enough RAM, then the files will not be bumped out as a
 consequence of build-time memory usage. As a consequence, if you have
 enough RAM, you won't see much (if any) difference in build times if
 you're comparing tmpfs to a normal filesystem...which means tmpfs (for
 you) won't have any benefit beyond being self-cleaning on a reboot or
 remount.

 So your drop_caches has no influence over build times, since the only
 cache behavior that matters is whatever happens between the time
 emerge unpacks the tarball and the time emerge exits.

 To see the difference, try something like running drop_caches under
 watch; leave that running while you let a few builds fly. You should
 see an increase in build times.


But this is what you guys are missing too.  If you want to use tmpfs,
you have to have enough RAM to begin with.  Whether you use tmpfs or
not, you have to have enough RAM to do the compile, otherwise you start
using swap or it just crashes.  Having RAM is a prerequisite to using
tmpfs.  You can't set tmpfs to 8GB on a machine that doesn't have 8GB
available and have it work.  I don't count swap, because when you start
using swap, it all goes out the window at that point. 
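
For reference, this is the sort of setup being discussed; a typical
fstab line (the mount point is the usual work directory, the size is
only an example and has to fit in RAM you can actually spare):

    tmpfs   /var/tmp/portage   tmpfs   size=8G,noatime   0 0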

There is another flaw in your assumption above.  I already had the
tarballs downloaded BEFORE even the first emerge.  I may not be the
sharpest tool in the shed, but I do know to download first when trying
to measure an emerge time.  I can measure my DSL speed with other
tools.  lol 

What the people wanted to test is whether putting portage's work
directory on tmpfs would make emerge times faster.  It doesn't.  The
posts people make admit to that fact now but want to argue the reason.
I don't care about the reason.  I just know that it doesn't matter.
Putting portage's work directory on tmpfs does NOT make it faster.  For
the purpose of this thread, it would be a good idea to save wear and
tear on the SSD, but one should not expect compile times to improve. 

I might also add, I didn't always have 16GB on this rig.  I started
with 4GB.  Then I went to 8 and later on went to 16GB. 

Do we all admit yet that having portage's work directory on tmpfs does
not make emerge times faster? 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Neil Bothwick wrote:
 On Thu, 06 Sep 2012 09:07:30 -0500, Dale wrote:

 I don't care if emerge uses cache
 DURING the emerge process because it is always enabled in both tests. 
 The point is whether portage's work directory being on tmpfs makes
 emerges faster.
 It does not, if you have enough RAM, precisely because of the part you
 claim not to care about.

 The thing about what you are saying is that I ran those tests with the
 files in memory.  What I am saying is that this is not the case. 
 No, that is not what I am saying. Those files were loaded into memory
 when you ran the test AFTER you cleared the previously cached files. The
 number of times you run the test is irrelevant, as is whether you start
 with an empty cache or not. All that matters is that the kernel caching
 all the files used during the emerge makes the storage medium used
 irrelevant.

 Like I said, take a step back, a deep breath and a break of an hour or
 two. Then read the posts again without your preconceptions of what you
 think we are trying to say (which is not what we are actually saying).
 Only when you have done that can this discussion proceed beyond the
 current tit-for-tat exchanges of misunderstanding.




But to use that or tmpfs, you first have to have the RAM.  The exact
same rig reports that putting portage's work directory on tmpfs does NOT
result in faster emerge times.  Period.  I DO NOT care why that is, I
just know from testing that it does NOT make emerge work any faster. 
The only reason to use tmpfs for portage's work directory is to save
wear and tear on a drive.  There is no difference in emerge times
otherwise.  Others ran their own tests and got the same results. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Michael Mol wrote:
 On Thu, Sep 6, 2012 at 10:20 AM, Dale rdalek1...@gmail.com wrote:
 Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

 Not quite.  The theory is that if you put portage's work directory on
 tmpfs, then all the writes and such are done in RAM, which is faster.
 No! That view is too simplistic to explain what you see.

 In practice, _all_ the writes always happen in RAM, whatever backend
 storage you use.

 The difference you could see is when there is not enough RAM for the
 kernel cache; then it has to wait for the backing storage.


 OK.  Step by step here, so hopefully you and Neil can follow.

 Freshly booted system.
 Clear caches just to be sure.

 emerge foo with portage's work directory on tmpfs
 clear caches again
 emerge foo with portage's work directory on disk
 clear caches again
 emerge foo with portage's work directory on tmpfs
 clear caches again
 emerge foo with portage's work directory on disk

 You repeat this enough times and you see that it doesn't matter if
 portage's work directory is on disk or on tmpfs.
 If you have enough RAM, then this is certainly true. Nobody is
 disputing that. They've been trying to explain that there's a
 difference when you _don't_ have that much RAM, and they've been
 trying to explain the mechanism behind that.



But if you don't have enough RAM to compile a package, then you can't
use tmpfs anyway.  So that point is moot.  If you try to compile OOo on
a machine with 512MB of RAM, you can't use tmpfs because you don't have
enough RAM to even consider it.  The amount of RAM wasn't what I was
testing; I was testing whether using tmpfs makes it faster regardless
of the amount of RAM.  It doesn't.  Once everything related to that
specific emerge process is loaded, tmpfs doesn't matter.  That is what
I have been saying this whole time. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Paul Hartman wrote:
 On Thu, Sep 6, 2012 at 9:20 AM, Dale rdalek1...@gmail.com wrote:
 OK.  Step by step here, so hopefully you and Neil can follow.

 Freshly booted system.
 Clear caches just to be sure.

 emerge foo with portage's work directory on tmpfs
 clear caches again
 emerge foo with portage's work directory on disk
 clear caches again
 emerge foo with portage's work directory on tmpfs
 clear caches again
 emerge foo with portage's work directory on disk
 I think, based on what the others are saying, that for you to more
 accurately test it, you should not use emerge but rather use
 ebuild to run (and time) the individual steps involved in emerging a
 package (unpacking, preparing, compiling, installing), clearing disk
 caches in-between each step. So, for example, after sources are
 unpacked to tmpfs, clear caches before compilation begins -- this way
 the source files have to be read from disk rather than from cache/RAM.



I didn't want to do it that way because how many people actually update
their system that way?  I wanted to test it the same way any other
person would normally do an emerge.  I suspect that 99% of users just
type emerge foo and let emerge do it. 

I kind of get what they are saying, but at the same time using tmpfs
doesn't matter.  Once the tarball is read off the drive, it doesn't
matter whether portage is run on a tmpfs or not.  The only way it would
is if you ran out of RAM and it started using swap.  That I disabled,
because we all know that when you use swap, it's all over.  Who in their
right mind wants to compile a large program and use a LOT of swap?  I
hope nobody.  lol 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Nicolas Sebrecht wrote:
 The 06/09/12, Dale wrote:

 Then take a look at it this way.  If I emerge seamonkey with portage's
 work directory on disk, the first time it takes 12 minutes.  Then I
 clear the caches and emerge seamonkey again while portage's work
 directory is on tmpfs, and it is 12 minutes.  Then repeat that process a
 few more times.  If the outcome of all those emerges is 12 minutes,
 regardless of the order, then putting portage's work directory on tmpfs
 makes no difference at all in that case.
 We fully agree with you, here.


That's good. 


   The emerge times are exactly
 the same whether emerge uses cache or not and whether portage's work
 directory is on tmpfs or not.  I don't care if emerge uses cache
 DURING the emerge process because it is always enabled in both tests. 
 But you *should* care.  If you don't have enough memory, the kernel will
 reclaim memory from the pagecache, so the speed of the whole process
 won't depend only on RAM speed anymore.

But if you are going to use tmpfs, you have to have the memory
available.  It doesn't matter if it is tmpfs or just used in the normal
way.  That is my point. 

 The point is whether portage's work directory being on tmpfs makes
 emerges faster.

 The thing about what you are saying is that I ran those tests with the
 files in memory.  What I am saying is that this is not the case.  I am
 clearing that memory with drop_caches between each test.  You claim
 that cache is affecting the timing, but I am clearing that very same
 cache, the same as a reboot would.  The emerge times, whether portage's
 We do agree with you that you dropped the cache between the tests, with
 almost the same effect as a reboot.

That's good.

   The emerge times, whether portage's
 work directory is on tmpfs or not, didn't change enough to make a
 difference.
 Yes, we agree.  You dropped the cache, which is needed to get correct
 tests.

 What we are saying is that you dropped the cache but did NOT DISABLE the
 VM caches (kernel cache).  You say that you don't care about that one
 because it was involved in all the tests.  We say that you might not care
 in some contexts, but not in all contexts.  You have reached a context
 where it does not matter much, fine.


Who doing a normal update would turn off the cache?  I wouldn't.  I know
how to clear it, but I don't know how to disable it, nor would I, or
most likely anyone else, in normal use.  The point of my test was the
normal use case of emerge, with or without tmpfs, and whether there is
any difference in the emerge times.  There wasn't.  Once emerge starts
and loads all the stuff it needs, tmpfs doesn't matter at that point. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Neil Bothwick
On Thu, 06 Sep 2012 11:44:07 -0500, Dale wrote:

 I kind of get what they are saying but at the same time using tmpfs
 doesn't matter.  Once the tarball is read off the drive, it doesn't
 matter whether portage is run on a tmpfs or not.

Reading the tarball has nothing to do with this, we are discussing
filesystems for PORTAGE_TMPDIR, not DISTDIR. It's where the source is
unpacked, the object files compiled to, the executables linked to and the
install image created that is relevant to TMPDIR.
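
For reference, the two are separate variables in make.conf; the usual
defaults are something like:

    DISTDIR=/usr/portage/distfiles   # where the downloaded tarball lives
    PORTAGE_TMPDIR=/var/tmp          # where it is unpacked and built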


-- 
Neil Bothwick

What's the difference between ignorance and apathy?
I don't know and I don't care


signature.asc
Description: PGP signature


Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Neil Bothwick
On Thu, 06 Sep 2012 11:32:41 -0500, Dale wrote:

 Others ran their own tests and got the same results.

No one is denying the results, only the reasons given for them.

-- 
Neil Bothwick

If you can't be kind, be vague.


signature.asc
Description: PGP signature


Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Neil Bothwick wrote:
 On Thu, 06 Sep 2012 11:44:07 -0500, Dale wrote:

 I kind of get what they are saying but at the same time using tmpfs
 doesn't matter.  Once the tarball is read off the drive, it doesn't
 matter whether portage is run on a tmpfs or not.
 Reading the tarball has nothing to do with this, we are discussing
 filesystems for PORTAGE_TMPDIR, not DISTDIR. It's where the source is
 unpacked, the object files compiled to, the executables linked to and the
 install image created that is relevant to TMPDIR.



Well, on my system, when I run emerge, it has to go read the tarball
from the drive before it can unpack and do all the rest that needs to be
done.  I was timing from the moment I hit return on the emerge command
till it was done.  Actually, I used the time command to time it for me.  ;-) 
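
That is, something like:

    time emerge --oneshot seamonkey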

As I said, I ran these tests on what a typical user would be using. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Neil Bothwick
On Thu, 06 Sep 2012 16:09:12 -0500, Dale wrote:

  Reading the tarball has nothing to do with this, we are discussing
  filesystems for PORTAGE_TMPDIR, not DISTDIR. It's where the source is
  unpacked, the object files compiled to, the executables linked to and
  the install image created that is relevant to TMPDIR.

 Well, on my system, when I run emerge, it has to go read the tarball
 from the drive before it can unpack and do all the rest that needs to be
 done.

Of course, but it is reading from a different filesystem that is
unaffected by your choice for $PORTAGE_TMPDIR. It has about as much
relevance as the brand of mouse you are using.


-- 
Neil Bothwick

Fascinating, said Spock, watching Kirk's lousy acting.


signature.asc
Description: PGP signature


Re: [gentoo-user] Re: aligning SSD partitions

2012-09-06 Thread Dale
Neil Bothwick wrote:
 On Thu, 06 Sep 2012 16:09:12 -0500, Dale wrote:

 Reading the tarball has nothing to do with this, we are discussing
 filesystems for PORTAGE_TMPDIR, not DISTDIR. It's where the source is
 unpacked, the object files compiled to, the executables linked to and
 the install image created that is relevant to TMPDIR.
 Well, on my system, when I run emerge, it has to go read the tarball
 from the drive before it can unpack and do all the rest that needs to be
 done.
 Of course, but it is reading from a different filesystem that is
 unaffected by your choice for $PORTAGE_TMPDIR. It has about as much
 relevance as the brand of mouse you are using.




But whether it is on tmpfs or just regular memory doesn't matter.  Once
emerge starts, everything is in RAM, including portage's work directory,
which would be on tmpfs here.  That's why it doesn't matter if portage
is on tmpfs or not.  Once emerge loads up the files, it's the same
thing.  That's why using tmpfs doesn't matter.  I knew that the whole
time.  The amount of RAM on a system doesn't matter either.  If you have
a system that doesn't have a lot of RAM, then you can't really use tmpfs
anyway.  That is not something I would recommend to anyone. 

I just don't agree that one should *disable* the cache to run the test,
since no one would disable the cache on a normal system.  It's not a
memory speed test.  It's a test to see if putting it on tmpfs makes it
faster.  The fact that emerge loads everything up in memory when it
starts is not relevant to what I am testing.  It does that on its own
anyway.

Since portage and the kernel do this in the most efficient way
already, I still say putting portage's work directory on tmpfs is not
needed UNLESS a person needs to save wear and tear on a drive, such as
the SSD in this thread.  I just don't want someone who is new to Gentoo
and compiling things to think that a package that takes 10 minutes when
done on disk will take 3 minutes on tmpfs.  I see that thinking from
time to time, usually on the forums. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!