Greg,

I'm trying to find ways to speed up the build process.

Before moving to distcc I tried ccache, but it was not working for me:
make allmodconfig enables GCOV, which adds the -fprofile-arcs and
-ftest-coverage gcc flags, and ccache is not compatible with those flags.

After running allmodconfig, I changed .config from:

CONFIG_GCOV_KERNEL=y
CONFIG_GCOV_PROFILE_ALL=y

to:

CONFIG_GCOV_KERNEL=n
CONFIG_GCOV_PROFILE_ALL=n
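This edit can also be scripted. A minimal sketch using sed (run here
against a stand-in .config so the example is self-contained; in a real
tree you would run it on the .config produced by make allmodconfig,
followed by make oldconfig to re-resolve dependent options):

```shell
# Stand-in for the .config generated by "make allmodconfig"
printf 'CONFIG_GCOV_KERNEL=y\nCONFIG_GCOV_PROFILE_ALL=y\n' > .config

# Flip both GCOV options from y to n
sed -i -e 's/^CONFIG_GCOV_KERNEL=y$/CONFIG_GCOV_KERNEL=n/' \
       -e 's/^CONFIG_GCOV_PROFILE_ALL=y$/CONFIG_GCOV_PROFILE_ALL=n/' .config

cat .config
```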

Then ccache started to work. With the ccache files also on tmpfs, the
second build of the same Kernel version, after make clean, speeds up
considerably: from 5 minutes to 1 minute on a cc2.8xlarge, and from 30
minutes to 3 minutes on my notebook. It is useful to increase the ccache
cache size so it can store more than one complete build. I'm using:

$ mkdir /tmp/tmpfs-ccache
$ sudo mount -t tmpfs -o noatime,mode=777 tmpfs /tmp/tmpfs-ccache
$ export CCACHE_DIR=/tmp/tmpfs-ccache
$ ccache -F 0 -M 9G # -F 0 allows an unlimited number of files in the cache
$ ccache -s # -s shows statistics and configuration for the current user
...

I wrote scripts that make the tmpfs-backed ccache persistent across
boots using a simple systemd service. The scripts are at:
https://github.com/petersenna/tmpfs-ccache
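For reference, the core of such a setup can be sketched as a systemd
mount unit. The unit body below is my guess at what the repository's
scripts amount to, not copied from the repo. Note that systemd requires
the unit file name to encode the mount path, i.e. /tmp/tmpfs-ccache
becomes tmp-tmpfs\x2dccache.mount:

```shell
# Write a sketch of the mount unit to the current directory for inspection.
# Installing it for real would mean copying it to /etc/systemd/system/ under
# the escaped name, then: systemctl daemon-reload && systemctl enable <unit>
cat > tmpfs-ccache.mount.example <<'EOF'
[Unit]
Description=tmpfs for the ccache directory

[Mount]
What=tmpfs
Where=/tmp/tmpfs-ccache
Type=tmpfs
Options=noatime,mode=777,size=9G

[Install]
WantedBy=local-fs.target
EOF
```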

Is testing the build with

CONFIG_GCOV_KERNEL=n
CONFIG_GCOV_PROFILE_ALL=n

a problem?

[]'s

Peter

On Sat, Nov 24, 2012 at 1:07 PM, Peter Senna Tschudin
<[email protected]> wrote:
> On Sat, Nov 24, 2012 at 3:51 PM, Peter Senna Tschudin
> <[email protected]> wrote:
>> On Sat, Nov 24, 2012 at 12:58 AM, Greg KH <[email protected]> wrote:
>>> On Fri, Nov 23, 2012 at 06:22:26PM +0100, Peter Senna Tschudin wrote:
>>>> On Fri, Nov 23, 2012 at 5:52 PM, Greg KH <[email protected]> 
>>>> wrote:
>>>> > On Fri, Nov 23, 2012 at 02:43:47PM +0100, Peter Senna Tschudin wrote:
>>>> >> On Fri, Nov 23, 2012 at 3:48 AM, Greg KH <[email protected]> 
>>>> >> wrote:
>>>> >> > On Thu, Nov 22, 2012 at 10:01:02AM +0100, Peter Senna Tschudin wrote:
>>>> >> >> I've written these scripts because I want to test both the build and the
>>>> >> >> boot of -rc stable Kernels. I would like some feedback on the
>>>> >> >> directions I'm going.
>>>> >> >>
>>>> >> >> My goal is to use cloud infrastructure like Amazon EC2 or Google
>>>> >> >> Compute Engine, to build and boot stable -rc Kernels.
>>>> >> >
>>>> >> > EC2 makes it pretty hard to boot your own kernels, right?  Does Google
>>>> >> > make it any easier?
>>>> >> I found a simple way of creating instances for testing Kernels at EC2.
>>>> >
>>>> > You did?  Any pointers to it?  I would love to be able to do this as
>>>> > part of my daily stable test builds that I do today on EC2.
>>>>
>>>> I do not have instructions yet, but I can do an image/AMI for you, so
>>>> you can create instances of it. What distro do you want? I already
>>>> have a clean and minimum Fedora17 install that I've used successfully
>>>> today for compiling and testing 3.6.8-rc1.
>>>
>>> If I use an AMI like this, can I successfully replace the kernel and
>>> have it boot properly?  If so, sure, I'd love to see it, but note I will
>>> probably not be able to do anything with it until late next week due to
>>> the holidays here.
>>
>> It is possible to boot any Kernel because instances based on this
>> image will be of type hvm and not paravirtual. The problem is that I
>> do not know how to interact with the boot loader, and do not know how
>> to see the console. If the Kernel does not boot or hangs, it is not
>> easy to recover. Amazon does not allow small hvm Linux instances; the
>> smallest available is m3.xlarge.
>>
>> For creating a new instance using the minimum Fedora17 image:
>> Launch Instance -> Classic Wizard -> Community AMIs ->
>> 375440392274/fedora17-x86_64-minimum-hvm
>>
>> Root password: aws
>>
>> Recommended after changing the root password:
>> # acpid is important so Amazon EC2 can shutdown the VM gracefully
>> yum install yum-plugin-fastestmirror
>> yum install @"Development Tools" acpid wget
>>
>> # Installing dependencies for building the Kernel
>> yum install rpmdevtools yum-utils
>> cd /tmp
>> yumdownloader --source kernel
>> yum-builddep kernel-<version>.src.rpm
>>
>>
>> How did I create the hvm image?
>>
>> Creating a local VM and exporting it to EC2
>> 1 - Created a virtual machine using KVM on my notebook. It is mandatory
>> to use the VM disk in RAW mode***
>> 2 - Installed a minimum Fedora17 without swap and everything in a single
>> partition. No LVM, no encryption. Shut down the VM.
>> 3 - Compressed the VM disk image: gzip -9 ...
>> 4 - Scp'd the compressed disk image of the local VM to an instance
>> running @EC2. Let's name the instance that receives the image Blue.
>>
>> EC2 Magic
>> 1 - Create an instance based on the
>> 099720109477/ubuntu/images-testing/hvm/ubuntu-raring-daily image. Let's
>> name this instance Green***
>> 2 - Wait for Green to boot, then stop it.
>> 3 - Detach Green's disk.
>> 4 - Attach Green's disk to the Blue instance. No need to stop Blue
>> to do this.
>> 5 - Connect over ssh to Blue, uncompress the file sent from the
>> notebook and dd it to Green's recently attached disk.
>> 6 - Detach Green's disk from the Blue instance and attach it back to
>> the Green instance.
>>
>> At this point Green is ready, but before using it there is one useful step:
>>
>> Creating a template / AMI
>> 1 - At the EC2 console, right click on Green, then Create Image
>> (EBS AMI). This will take some time.
>> 2 - Create a new instance based on the AMI just created and test it.
>> If it works, delete Green.
>>
>> *** This AMI has 8GB disk. So your life will be easier if you create a
>> disk for your local VM with the correct disk size. I did with:
>> dd if=/dev/zero of=aws8gb.img bs=128k count=65535
>
> Tip: Green and Blue should be in the same availability zone, like
> us-east-1d. If not, the disk dance is not allowed.
>
>>
>>
>>>
>>>> > Hm, no, I _wish_ I could build in one minute on an EC2 image, right now
>>>> > it's about 5 minutes, as you have found.  I too put everything into a
>>>> > tmpfs to get the speed up (EC2 disk speeds suck), and I'm also using a
>>>> > cc2.8xlarge type, as that's the fastest one I could find.
>>>>
>>>> I'll do some testing with distcc and EC2. Using distcc I was able to
>>>> reduce the build time by half using 2 desktops and my notebook instead
>>>> of only my notebook. Maybe we can have half minute if we use some
>>>> cc2.8xlarge... :-)
>>>
>>> If you use a cluster of cc2.8xlarge images and distcc, you might be able
>>> to get the speed up, but that depends on the speed of the network as
>>> well.  I'll play around with that idea later next week if I get the
>>> chance.
>>>
>>> Although I can't imagine what the cost will be for doing something like
>>> this, pretty soon it will just make more sense to buy a real box and use
>>> it instead of the cloud :)
>>
>> I do not know how accurate the article is, but the author claims to
>> have a single-core machine building the Kernel in ~60 seconds:
>> http://www.phoronix.com/scan.php?page=news_item&px=MTAyNjU
>>
>>>
>>> thanks,
>>>
>>> greg k-h
>>
>>
>>
>> --
>> Peter
>
>
>
> --
> Peter



--
Peter