I've been toying with zram swap on the desktop under Ubuntu Precise. It looks like a good candidate for a major feature in the next version; Precise is currently in feature freeze. Yes, implementing this would involve just a single Upstart script, but in practice it's a major change to the memory system, and thus a major feature. To be clear, I am not pushing this for implementation in Precise.

Looking for comments on whether I should spec this out and put it up on Launchpad as a goal for Precise+1, and perhaps on additional testing if anyone cares to do some.

The justification for this is as follows:

The zram device stores a block device in RAM as compressed blocks, allocating that RAM dynamically. The device has a fixed nominal size, but zram only allocates RAM when it needs it. When the kernel frees swap, it now informs the underlying device, so zram releases that RAM back to the system. Similarly, when a file system frees a block, it now also informs the underlying device; zram supports this.

Because of all this, you can feasibly have a swap device in RAM the size of your entire RAM, and nothing happens until you start swapping: with 2GB of RAM you can create a 2GB zram block device, mkswap it, and activate it. Nothing happens until it's time to start swapping anyway, at which point it just works. That means zram swap devices have no negative impact by design. They could only have a negative impact if they were slower than regular RAM (defeating the purpose) or if the driver had bugs (obviously wrong and should be fixed).
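To make that concrete, here's a minimal sketch of sizing zram0 to all of physical RAM (run as root; assumes the zram module is already loaded and exposes /dev/zram0, as in the tests below):

```shell
# Size zram0 to the whole of physical RAM, then swap on it.
# MemTotal is reported in kB, disksize is taken in bytes.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo $(( mem_kb * 1024 )) > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon /dev/zram0
```

Nothing is allocated at this point; RAM is only consumed as pages are actually swapped out.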

All this basically means a number of things, in theory:

 - The LiveCD should activate a zram0 swap device immediately at boot, equivalent to the size of physical RAM.
   * "Immediately at boot" is feasible as soon as /sys and /dev are accessible.
   * tmpfs swaps, so tmpfs inherits compression!

 - Desktops may benefit by eschewing physical swap for RAM
   * But this breaks suspend-to-disk; then again, so does everything:
      + Who has a 16GB swap partition?  Many people have 4GB, 8GB, 16GB RAM
      + The moment RAM + SWAP - CACHE > TOTAL_SWAP, suspend to disk breaks

 - Servers may benefit greatly by eschewing physical swap for RAM
   * More RAM means an even bigger impact
   * Suspend to disk isn't important

Also for consideration: Linux doesn't (to my knowledge) differentiate between "fast" and "slow" swap, and won't start migrating really old pages out of zram0 and onto regular disk swap if you have both. This naturally means that the oldest, least-used pages would tend to accumulate in zram0 if you had both.
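What Linux does offer is per-device swap priorities, which at least let you control which device fills first if you keep both (a sketch, run as root; /dev/sda5 here stands in for whatever disk swap partition you have):

```shell
# Higher priority is used first; pages are never migrated
# between devices afterwards.
swapon -p 100 /dev/zram0   # preferred: compressed RAM
swapon -p 10  /dev/sda5    # overflow only once zram0 is full
```

So zram0 can be made strictly preferred, but there's still no mechanism to demote cold pages from it to disk later.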


ON TO THE TESTS!


The test conditions are as such:

 - Limited RAM to 2GB with mem=2G
 - Running a normal desktop: Xchat, bitlbee, Thunderbird, Rhythmbox, Chromium
 - Memory pressure occasionally supplied by VirtualBox

In this case, I ran under 2GB with a 1.5GB zram swap, set up as follows:

$ sudo -i
# echo 1610612736 > /sys/block/zram0/disksize   # 1.5GB
# mkswap /dev/zram0
# swapon /dev/zram0
# swapoff /dev/sda5

My swap device is now solely zram0. I have attached a script that analyzes /sys/block/zram0/.
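For those who don't want to open the attachment, the core of it is just reading the zram sysfs counters; a hypothetical sketch of the idea (the attribute names match the zram driver of this era, but check your kernel):

```shell
#!/bin/sh
# Rough sketch of zram_stat.sh's raw numbers (the real script is attached).
z=/sys/block/zram0
orig=$(cat "$z/orig_data_size")    # uncompressed data currently stored (bytes)
compr=$(cat "$z/compr_data_size")  # compressed payload (bytes)
used=$(cat "$z/mem_used_total")    # RAM actually held, incl. fragmentation/padding
M=1048576
echo "Original:      $(( orig  / M ))M"
echo "Compressed:    $(( compr / M ))M"
echo "Total mem use: $(( used  / M ))M"
echo "Saved:         $(( (orig - used) / M ))M"
[ "$orig" -gt 0 ] && echo "Ratio:         $(( used * 100 / orig ))%"
```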



CURSORY RESULTS:


===STAGE 1===
Condition: two VirtualBox VMs running: Windows XP, and Ubuntu 11.04 booted from the LiveCD.


Memory looks like:

Mem:   2051396k total,  1997028k used,    54368k free,       92k buffers
Swap:  1572860k total,   945784k used,   627076k free,    51576k cached

Every 2.0s: ./zram_stat.sh Mon Feb 27 17:09:40 2012

                Current         Predicted
Original:       922M            1536M
Compressed:     262M
Total mem use:  269M            448M
Saved:          652M            1087M
Ratio:          29%     (28%)

Physical RAM:   2003M
Effective RAM:  2656M           (3090M)

[Explanation of zram_stat.sh: Original is the current data in zram0; Compressed is the compressed data size; Total mem use is the total size of zram0 in memory, including fragmentation, padding, etc.; Saved is the RAM saved by compression; Ratio is in-memory size versus original size, e.g. if 4M becomes 1M then the ratio is 25%; Effective RAM = physical RAM + saved RAM. The Predicted column assumes the same ratio with a full swap device on zram0.]


In this case, the machine was extremely slow due to constant hammering of the disk. The reason was clear: not enough disk cache, so libraries were constantly being re-read from disk. I tried setting /proc/sys/vm/swappiness to 100, but no luck.
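For reference, the knob I was adjusting (as root):

```shell
# Raise swappiness so the kernel prefers swapping out application
# pages over dropping page cache (the default is 60).
echo 100 > /proc/sys/vm/swappiness
# equivalently: sysctl vm.swappiness=100
```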



===STAGE 2===

I closed the VirtualBox machine running the Ubuntu installer.

Mem:   2051396k total,  1614096k used,   437300k free,      192k buffers
Swap:  1572860k total,   860040k used,   712820k free,   138940k cached

                Current         Predicted
Original:       821M            1536M
Compressed:     240M
Total mem use:  267M            499M
Saved:          554M            1036M
Ratio:          32%     (29%)

Physical RAM:   2003M
Effective RAM:  2558M           (3040M)

Notice that about 100M left swap and about 475M of actual RAM got freed, almost 90MB of which went directly to cache. That's how much cache pressure there was.

The machine is now quite peppy, tapping the disk frequently but not enough to cause a problem.


===STAGE 3===

And of course a little while later:

Mem:   2051396k total,  1982984k used,    68412k free,     6284k buffers
Swap:  1572860k total,   717232k used,   855628k free,   200620k cached

And even after that:

Mem:   2051396k total,  1956432k used,    94964k free,    12792k buffers
Swap:  1572860k total,   630040k used,   942820k free,   167204k cached

Every 2.0s: ./zram_stat.sh Mon Feb 27 18:10:31 2012

                Current         Predicted
Original:       590M            1536M
Compressed:     175M
Total mem use:  234M            611M
Saved:          355M            924M
Ratio:          39%     (29%)

Physical RAM:   2003M
Effective RAM:  2358M           (2927M)


Cache pressure has stabilized. My machine is now running faster than it was before, by a lot. The reason is simple: almost no disk activity since most libraries are cached.

It looks to me like fragmentation fluffs up as the kernel frees swap. To my knowledge, fragmented data eventually gets repacked (decompressed, recompressed, and shuffled around for consolidation) as the device is swapped onto again, so this is less of a problem than it seems: it will go away under high memory pressure. Indeed, the ratio came back to 32%/28% an hour later under minimal load, so it looks like Nitin did a good job when he wrote this (right now I have 704M in 228M of RAM, with 203M of that being compressed data).


Attachment: zram_stat.sh
Description: application/shellscript

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss