On 08/11/2014 08:00 AM, Gordan Bobic wrote:
On 08/11/2014 12:37 PM, Robert Moskowitz wrote:

On 08/11/2014 06:41 AM, Gordan Bobic wrote:
On 08/11/2014 11:33 AM, Robert Moskowitz wrote:

On 08/11/2014 06:19 AM, Gordan Bobic wrote:
On 08/11/2014 11:13 AM, Robert Moskowitz wrote:

On 08/11/2014 04:50 AM, Ron Yorston wrote:
Gordan Bobic <[email protected]> wrote:
If you are using an image, please make sure you do:

dd if=/dev/zero of=/scrub bs=8MB; rm -f /scrub

before you create the image to zero out the free space.
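(It may also be worth putting a sync between the dd and the rm, so the
zeros are actually flushed to the device before the file is deleted,
e.g.:

    dd if=/dev/zero of=/scrub bs=8MB; sync; rm -f /scrub

otherwise some of the dirty pages may never reach the disk.)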
I wrote the zerofree utility for just this sort of thing. It looks
for non-zero free blocks in an ext2/3/4 filesystem and zeroes them.
Because it doesn't write to blocks that are already zeroed it can be
more efficient than using dd.  It only works on unmounted (or
read-only)
filesystems, though.

My write-up on zerofree is here:

    http://intgat.tigress.co.uk/rmy/uml/sparsify.html

For the terminally impatient:

    # e2fsck -f /dev/sdx1
    # zerofree -v /dev/sdx1
    # e2fsck -f /dev/sdx1

The -v flag shows progress and prints the number of blocks zeroed,
total free blocks and total blocks at the end of the process. Use '-n
-v' to perform a dry run to see if it's worth bothering to zero a
given
filesystem.
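If the filesystem lives inside an image file rather than on a raw
device, one way (just a sketch - names are placeholders, and it needs a
losetup new enough to support --partscan) is to attach it to a loop
device first:

    # losetup -f --show --partscan sdcard.img   (prints e.g. /dev/loop0)
    # e2fsck -f /dev/loop0p2
    # zerofree -v /dev/loop0p2
    # losetup -d /dev/loop0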

It's available in the standard repositories for Fedora and Debian,
plus
EPEL for CentOS/RHEL 5, though not 6. I really ought to make an RPM
for 6.
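In the meantime, rebuilding the Fedora source package on an EL6 box
might be enough - an untested guess on my part, package names from
memory:

    # yum install rpm-build gcc e2fsprogs-devel
    # rpmbuild --rebuild zerofree-*.src.rpm
    # rpm -ivh ~/rpmbuild/RPMS/*/zerofree-*.rpm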
Please do, and perhaps send it my way today or so?

It took ~5 hours to apply the updates.

5 hours sounds excessive. Are you running on an SD/CF card? Or USB
stick?

SD card.  There were 598 things to do: first the update/install, then
the cleanup/erase.

For running off an SD card and similar you may be interested in this
page:

http://www.altechnative.net/2012/01/25/flash-module-benchmark-collection-sd-cards-cf-cards-usb-sticks/


In my quest for decently performing SD cards I have so far found one
that performs comparably to a 5400rpm disk. All the others were
unbelievably bad at the random writes of the sort that happen all the
time on a typical rootfs.

You should bear this in mind when you are replacing production servers
with ARM machines - SD/CF cards are a real performance killer. Apart
from a very few models, most are completely unusable.

Testing on SD cards.  Production on HDDs.  The Cubieboards have a SATA
interface.  I have a couple of drives pulled from notebooks (that I
replaced with SSDs).  I also have a few IDE drives worth using, and with:

http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=360615680757

I can put the IDE drives on the SATA connector.  But the power
connector is the wrong one, and I have to find the 'right' 2-wire
connector the Cubieboard uses.

So the SD card is for testing and allowing others to get started.  My
HOPE is that I can xzcat that SD card image onto an HDD and then expand
the partition.  That is something else I have to learn to do...

resize2fs /dev/$partition
will extend it to the end of the partition.
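Roughly, the whole sequence might look like this (a sketch only -
device and partition names are placeholders, and the partition itself
has to be grown with fdisk/parted before resize2fs can use the extra
space):

    # xzcat sdcard.img.xz | dd of=/dev/sdX bs=8M   (write the image to the HDD)
    # fdisk /dev/sdX        (recreate the root partition: same start, bigger end)
    # e2fsck -f /dev/sdX2
    # resize2fs /dev/sdX2   (grow the filesystem to fill the partition)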

the compressed image grew by ~500MB; no good. I am going to have to
review Gordan's comments about how to make better saved images to see
if I can tighten it up a bit.

It helps if you can mount the disk on a different machine, delete all
the logs and zero out the free space. xz -9 also helps (use pxz -9
instead of xz -9 if you have multiple cores, but bear in mind the
memory usage will be high - best done on an x86 machine).
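Something along these lines, for example (device names and paths are
only placeholders):

    # mount /dev/sdX2 /mnt/card                (card plugged into another machine)
    # find /mnt/card/var/log -type f -delete   (drop the log files)
    # dd if=/dev/zero of=/mnt/card/scrub bs=8M; sync; rm -f /mnt/card/scrub
    # umount /mnt/card
    # dd if=/dev/sdX of=sdcard.img bs=8M       (read the card back into an image)
    # pxz -9 sdcard.img                        (or xz -9 on a single core)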

My notebook is an x86 dual-core Lenovo X120e; I ran xz with the
defaults, so my understanding is it used -6.  I DO need it for work
while the compression is running.  The logs were hardly any size, but
that will be more free space to zero out.  Don't bother grabbing the
updated .xz image at this time...

Acknowledged.

You can nice -n 19 pxz, but I can't remember off the top of my head
how much memory xz -9 takes (and pxz will spawn an xz thread per core,
so double it on a dual-core machine).
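For example, to keep it out of the way of your other work:

    $ nice -n 19 pxz -9 sdcard.img

(If in doubt, 'xz --info-memory' reports the machine's RAM and the
memory usage limits xz will apply.)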

'nice -n 19 pxz' ???

man nice

Oh... I am still a little slow this morning and did not get that
'nice' is a command and not a comment by you about '-n 19 pxz'!


