Re: [systemd-devel] Bootchart speeding up boot time

2016-02-25 Thread Martin Townsend
A bit of an update: disabling the second core didn't make much difference,
a couple of seconds at most.
I played around with my own init task based on bootchart and tracked the
effect down to the fact that nanosleep was being called.  I basically have
the code below, which gives me the same boot-time improvement.
If I take out the nanosleep, boot times slow down.  If I put, say, 3
seconds in the nanosleep, the boot speeds up for 3 seconds and then slows
down.  I have no explanation as to why; the only things I know about
nanosleep are that it's a syscall and that it uses hrtimers.  If anyone has
experience in this area and can shed some light on this problem, it would
be much appreciated.
I first thought that the hrtimer might be having some effect on the
scheduler, or that idle dynticks was broken somehow, but after disabling
dynticks completely and upping the periodic rate to 1000 Hz it made no
difference to boot times.
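(For reference, the knobs I toggled were, if I have the 3.14 config symbol
names right:
CONFIG_HZ_PERIODIC=y   # dynticks fully disabled
CONFIG_HZ_1000=y       # periodic tick at 1000 Hz
The 'nohz=off' boot parameter should have a similar effect.)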
I would like to understand why this is happening; systemd probably isn't
the right forum, so I would also appreciate any pointers to where I might
get some answers.

Out of interest, does anyone else see this behaviour with a 3.14 kernel?

- Martin.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
        /*
         * If the kernel executed us through
         * init=/usr/lib/systemd/systemd-bootchart, then fork:
         * - parent execs the executable specified via init_path[]
         *   (/usr/lib/systemd/systemd by default) as pid=1
         * - child logs data
         */
        if (getpid() == 1) {
                pid_t pid;

                pid = fork();
                if (pid) {
                        /* parent (also reached if fork() failed) */
                        if (pid < 0)
                                fprintf(stderr, "Failed to create child\n");
                        execl("/lib/systemd/systemd",
                              "/lib/systemd/systemd", (char *) NULL);
                } else if (pid == 0) {
                        /* child: just sleep for 20 seconds, then exit */
                        struct timespec req;
                        int res;

                        req.tv_sec = 20;
                        req.tv_nsec = 0;

                        /* TODO: Catch interruption and carry on sleeping */
                        res = nanosleep(&req, NULL);
                        exit(EXIT_SUCCESS);
                }
        } else {
                fprintf(stderr,
                        "Failed to start init\n"
                        "Must be started as PID 1.\n");
                exit(EXIT_FAILURE);
        }
        return 0;
}
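
(As an aside, the TODO above could be handled by resuming the sleep with
the remaining time that nanosleep() reports on EINTR.  A rough sketch,
where sleep_full() is just a name I made up:

#include <errno.h>
#include <time.h>

/* Sleep for the full requested duration, resuming after signal
 * interruptions using the remaining time reported by nanosleep(). */
static void sleep_full(time_t seconds)
{
        struct timespec req = { .tv_sec = seconds, .tv_nsec = 0 };
        struct timespec rem;

        while (nanosleep(&req, &rem) < 0 && errno == EINTR)
                req = rem;      /* carry on sleeping for the remainder */
}
)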



On Tue, Feb 23, 2016 at 3:33 PM, Umut Tezduyar Lindskog wrote:

> On Mon, Feb 22, 2016 at 8:51 PM, Martin Townsend
>  wrote:
> > Hi,
> >
> > Thanks for your reply.  I wouldn't really call this system stripped
> down, it
> > has an nginx webserver, DHCP server, postgresql-server, sftp server, a
> few
> > mono (C#) daemons running, loads quite a few kernel modules during boot,
> > dbus, sshd, avahi, and a bunch of other stuff I can't quite remember.  I
> > would imagine glibc will be a tiny portion of what gets loaded during
> boot.
> > I have another arm system which has a similar boot time with systemd,
> it's
> > only a single cortex A9 core, it's running a newer 4.1 kernel with a new
> > version of systemd as it's built with the Jethro version of Yocto so
> > probably a newer version of glibc and this doesn't speed up when using
> > bootchart and in fact slows down slightly (which is what I would expect).
> > So my current thinking is that it's either be down to the fact that it's
> a
> > dual core and only one core is being used during boot unless a fork/execl
> > occurs? Or it's down to the newer kernel/systemd/glibc or some other
> > component.
>
> Are you sure both cores have the same speed and same size of L1
> data cache?
> You could try to force the OS to run systemd on the first core by A)
> make the second one unavailable B) play with control groups and pin
> systemd to first core.
>
> Umut
>
> >
> > Is there anyway of seeing what the CPU usage for each core is for
> systemd on
> > boot without using bootchart then I can rule in/out the first idea.
> >
> > Many Thanks,
> > Martin.
> >
> >
> > On Mon, Feb 22, 2016 at 6:52 PM, Kok, Auke-jan H <
> auke-jan.h@intel.com>
> > wrote:
> >>
> >> On Fri, Feb 19, 2016 at 7:15 AM, Martin Townsend
> >>  wrote:
> >> > Hi,
> >> >
> >> > I'm new to systemd and have just enabled it for my Xilinx based dual
> >> > core
> >> > cortex A-9 platform.  The linux system is built using Yocto (Fido
> >> > branch)
> >> > which is using version 219 of systemd.
> >> >
> >> > The main reason for moving over to systemd was to see if we could
> >> > improve
> >> > boot times and the good news was that by just moving over to systemd
> we
> >> > halved the boot time.  So I read that I could analyse the boot times
> in
> >> > detail using bootchart so I set init=//bootchart in my kernel
> >> > command
> >> > line and was really suprised to see my boot time halved again.
> Thinking
> >> > some 

Re: [systemd-devel] Bootchart speeding up boot time

2016-02-23 Thread Martin Townsend
I'm pretty sure they are; they are part of the Xilinx Zynq SoC platform.
From their specs:
32 KB Level 1 4-way set-associative instruction and data caches
(independent for each CPU)
512 KB 8-way set-associative Level 2 cache (shared between the CPUs)

Good idea on disabling a core; this could then prove/disprove my first
theory.  A bit of googling tells me that there's a kernel boot arg
'nosmp', so I'll give this a try.
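For the record: your option A would just be adding 'nosmp' or 'maxcpus=1'
to the kernel command line.  For option B, here is a minimal, untested
sketch of pinning PID 1 to the first core from a privileged helper run
early in boot (not something I've wired up yet):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);       /* first core only */

        /* pin PID 1 (systemd) to CPU 0 */
        if (sched_setaffinity(1, sizeof(set), &set) < 0) {
                perror("sched_setaffinity");
                return 1;
        }
        return 0;
}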

Cheers, Martin.


On Tue, Feb 23, 2016 at 3:33 PM, Umut Tezduyar Lindskog wrote:

> On Mon, Feb 22, 2016 at 8:51 PM, Martin Townsend
>  wrote:
> > Hi,
> >
> > Thanks for your reply.  I wouldn't really call this system stripped
> down, it
> > has an nginx webserver, DHCP server, postgresql-server, sftp server, a
> few
> > mono (C#) daemons running, loads quite a few kernel modules during boot,
> > dbus, sshd, avahi, and a bunch of other stuff I can't quite remember.  I
> > would imagine glibc will be a tiny portion of what gets loaded during
> boot.
> > I have another arm system which has a similar boot time with systemd,
> it's
> > only a single cortex A9 core, it's running a newer 4.1 kernel with a new
> > version of systemd as it's built with the Jethro version of Yocto so
> > probably a newer version of glibc and this doesn't speed up when using
> > bootchart and in fact slows down slightly (which is what I would expect).
> > So my current thinking is that it's either be down to the fact that it's
> a
> > dual core and only one core is being used during boot unless a fork/execl
> > occurs? Or it's down to the newer kernel/systemd/glibc or some other
> > component.
>
> Are you sure both cores have the same speed and same size of L1
> data cache?
> You could try to force the OS to run systemd on the first core by A)
> make the second one unavailable B) play with control groups and pin
> systemd to first core.
>
> Umut
>
> >
> > Is there anyway of seeing what the CPU usage for each core is for
> systemd on
> > boot without using bootchart then I can rule in/out the first idea.
> >
> > Many Thanks,
> > Martin.
> >
> >
> > On Mon, Feb 22, 2016 at 6:52 PM, Kok, Auke-jan H <
> auke-jan.h@intel.com>
> > wrote:
> >>
> >> On Fri, Feb 19, 2016 at 7:15 AM, Martin Townsend
> >>  wrote:
> >> > Hi,
> >> >
> >> > I'm new to systemd and have just enabled it for my Xilinx based dual
> >> > core
> >> > cortex A-9 platform.  The linux system is built using Yocto (Fido
> >> > branch)
> >> > which is using version 219 of systemd.
> >> >
> >> > The main reason for moving over to systemd was to see if we could
> >> > improve
> >> > boot times and the good news was that by just moving over to systemd
> we
> >> > halved the boot time.  So I read that I could analyse the boot times
> in
> >> > detail using bootchart so I set init=//bootchart in my kernel
> >> > command
> >> > line and was really suprised to see my boot time halved again.
> Thinking
> >> > some weird caching must have occurred on the first boot I reverted
> back
> >> > to
> >> > normal systemd boot and boot time jumped back to normal (around 17/18
> >> > seconds), putting bootchart back in again reduced it to ~9/10 seconds.
> >> >
> >> > So I created my own init using bootchart as a template that just slept
> >> > for
> >> > 20 seconds using nanosleep and this also had the same effect of
> speeding
> >> > up
> >> > the boot time.
> >> >
> >> > So the only difference I can see is that the kernel is not starting
> >> > /sbin/init -> /lib/systemd/systemd directly but via another program
> that
> >> > is
> >> > performing a fork and then in the parent an execl to run
> >> > /lib/systemd/systemd.  What I would really like to understand is why
> it
> >> > runs
> >> > faster when started this way?
> >>
> >>
> >> systemd-bootchart is a dynamically linked binary. In order for it to
> >> run, it needs to dynamically link and load much of glibc into memory.
> >>
> >> If your system is really stripped down, then the portion of data
> >> that's loaded from disk that is coming from glibc is relatively large,
> >> as compared to the rest of the system. In an absolute minimal system,
> >> I expect it to be well over 75% of the total data loaded from disk.
> >>
> >> It seems in your system, glibc is about 50% of the stuff that needs to
> >> be paged in from disk, hence, by starting systemd-bootchart before
> >> systemd, you've "removed" 50% of the total data to be loaded from the
> >> vision of bootchart, since, bootchart cannot start logging data until
> >> it's loaded all those glibc bits.
> >>
> >> Ultimately, your system isn't likely booting faster, you're just
> >> forcing it to load glibc before systemd starts.
> >>
> >> systemd-analyze may actually be a much better way of looking at the
> >> problem: it reports CLOCK_MONOTONIC timestamps for the various parts
> >> involved, including, possibly, firmware, kernel time, etc.. In
> >> conjunction with bootchart, this should give a full picture.

Re: [systemd-devel] Bootchart speeding up boot time

2016-02-23 Thread Umut Tezduyar Lindskog
On Mon, Feb 22, 2016 at 8:51 PM, Martin Townsend wrote:
> Hi,
>
> Thanks for your reply.  I wouldn't really call this system stripped down, it
> has an nginx webserver, DHCP server, postgresql-server, sftp server, a few
> mono (C#) daemons running, loads quite a few kernel modules during boot,
> dbus, sshd, avahi, and a bunch of other stuff I can't quite remember.  I
> would imagine glibc will be a tiny portion of what gets loaded during boot.
> I have another arm system which has a similar boot time with systemd, it's
> only a single cortex A9 core, it's running a newer 4.1 kernel with a new
> version of systemd as it's built with the Jethro version of Yocto so
> probably a newer version of glibc and this doesn't speed up when using
> bootchart and in fact slows down slightly (which is what I would expect).
> So my current thinking is that it's either be down to the fact that it's a
> dual core and only one core is being used during boot unless a fork/execl
> occurs? Or it's down to the newer kernel/systemd/glibc or some other
> component.

Are you sure both cores have the same speed and the same size of L1
data cache?
You could try to force the OS to run systemd on the first core by either
A) making the second one unavailable or B) playing with control groups
and pinning systemd to the first core.

Umut

>
> Is there anyway of seeing what the CPU usage for each core is for systemd on
> boot without using bootchart then I can rule in/out the first idea.
>
> Many Thanks,
> Martin.
>
>
> On Mon, Feb 22, 2016 at 6:52 PM, Kok, Auke-jan H 
> wrote:
>>
>> On Fri, Feb 19, 2016 at 7:15 AM, Martin Townsend
>>  wrote:
>> > Hi,
>> >
>> > I'm new to systemd and have just enabled it for my Xilinx based dual
>> > core
>> > cortex A-9 platform.  The linux system is built using Yocto (Fido
>> > branch)
>> > which is using version 219 of systemd.
>> >
>> > The main reason for moving over to systemd was to see if we could
>> > improve
>> > boot times and the good news was that by just moving over to systemd we
>> > halved the boot time.  So I read that I could analyse the boot times in
>> > detail using bootchart so I set init=//bootchart in my kernel
>> > command
>> > line and was really suprised to see my boot time halved again.  Thinking
>> > some weird caching must have occurred on the first boot I reverted back
>> > to
>> > normal systemd boot and boot time jumped back to normal (around 17/18
>> > seconds), putting bootchart back in again reduced it to ~9/10 seconds.
>> >
>> > So I created my own init using bootchart as a template that just slept
>> > for
>> > 20 seconds using nanosleep and this also had the same effect of speeding
>> > up
>> > the boot time.
>> >
>> > So the only difference I can see is that the kernel is not starting
>> > /sbin/init -> /lib/systemd/systemd directly but via another program that
>> > is
>> > performing a fork and then in the parent an execl to run
>> > /lib/systemd/systemd.  What I would really like to understand is why it
>> > runs
>> > faster when started this way?
>>
>>
>> systemd-bootchart is a dynamically linked binary. In order for it to
>> run, it needs to dynamically link and load much of glibc into memory.
>>
>> If your system is really stripped down, then the portion of data
>> that's loaded from disk that is coming from glibc is relatively large,
>> as compared to the rest of the system. In an absolute minimal system,
>> I expect it to be well over 75% of the total data loaded from disk.
>>
>> It seems in your system, glibc is about 50% of the stuff that needs to
>> be paged in from disk, hence, by starting systemd-bootchart before
>> systemd, you've "removed" 50% of the total data to be loaded from the
>> vision of bootchart, since, bootchart cannot start logging data until
>> it's loaded all those glibc bits.
>>
>> Ultimately, your system isn't likely booting faster, you're just
>> forcing it to load glibc before systemd starts.
>>
>> systemd-analyze may actually be a much better way of looking at the
>> problem: it reports CLOCK_MONOTONIC timestamps for the various parts
>> involved, including, possibly, firmware, kernel time, etc.. In
>> conjunction with bootchart, this should give a full picture.
>>
>> Auke
>
>
>


Re: [systemd-devel] Bootchart speeding up boot time

2016-02-23 Thread Martin Townsend
I'm using a physical stopwatch and running it from the moment U-Boot hands
over until I get a prompt, so I'm not taking any timing information from
systemd or even from the system itself.  I'm sure that glibc does indeed
take some time to load into memory, but I can't see it being the culprit
of an 8-9 second difference.  Even without a stopwatch you can easily see
the difference by how quickly the systemd messages appear, so I think it's
something more fundamental.  Or am I missing something here?

Cheers, Martin.

On Mon, Feb 22, 2016 at 9:20 PM, Kok, Auke-jan H wrote:

> On Mon, Feb 22, 2016 at 11:51 AM, Martin Townsend
>  wrote:
> > Hi,
> >
> > Thanks for your reply.  I wouldn't really call this system stripped
> down, it
> > has an nginx webserver, DHCP server, postgresql-server, sftp server, a
> few
> > mono (C#) daemons running, loads quite a few kernel modules during boot,
> > dbus, sshd, avahi, and a bunch of other stuff I can't quite remember.  I
> > would imagine glibc will be a tiny portion of what gets loaded during
> boot.
> > I have another arm system which has a similar boot time with systemd,
> it's
> > only a single cortex A9 core, it's running a newer 4.1 kernel with a new
> > version of systemd as it's built with the Jethro version of Yocto so
> > probably a newer version of glibc and this doesn't speed up when using
> > bootchart and in fact slows down slightly (which is what I would expect).
> > So my current thinking is that it's either be down to the fact that it's
> a
> > dual core and only one core is being used during boot unless a fork/execl
> > occurs? Or it's down to the newer kernel/systemd/glibc or some other
> > component.
> >
> > Is there anyway of seeing what the CPU usage for each core is for
> systemd on
> > boot without using bootchart then I can rule in/out the first idea.
>
> Not that I know of, but, to work around the issue of dynamic linking,
> one can link systemd-bootchartd statically. It'll become larger, but
> you can then clearly ascern that the impact of glibc bits being loaded
> are properly recorded by bootchart. And, it's fairly trivial link it
> statically.
>
> Auke
>


Re: [systemd-devel] Bootchart speeding up boot time

2016-02-22 Thread Kok, Auke-jan H
On Mon, Feb 22, 2016 at 11:51 AM, Martin Townsend wrote:
> Hi,
>
> Thanks for your reply.  I wouldn't really call this system stripped down, it
> has an nginx webserver, DHCP server, postgresql-server, sftp server, a few
> mono (C#) daemons running, loads quite a few kernel modules during boot,
> dbus, sshd, avahi, and a bunch of other stuff I can't quite remember.  I
> would imagine glibc will be a tiny portion of what gets loaded during boot.
> I have another arm system which has a similar boot time with systemd, it's
> only a single cortex A9 core, it's running a newer 4.1 kernel with a new
> version of systemd as it's built with the Jethro version of Yocto so
> probably a newer version of glibc and this doesn't speed up when using
> bootchart and in fact slows down slightly (which is what I would expect).
> So my current thinking is that it's either be down to the fact that it's a
> dual core and only one core is being used during boot unless a fork/execl
> occurs? Or it's down to the newer kernel/systemd/glibc or some other
> component.
>
> Is there anyway of seeing what the CPU usage for each core is for systemd on
> boot without using bootchart then I can rule in/out the first idea.

Not that I know of, but to work around the issue of dynamic linking, one
can link systemd-bootchart statically.  It'll become larger, but you can
then clearly ascertain that the impact of the glibc bits being loaded is
properly recorded by bootchart.  And it's fairly trivial to link it
statically.
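(As a quick test, one could presumably also statically link a minimal
init, such as the sleeping one Martin described, with something like
'gcc -O2 -static -o sleep-init sleep-init.c' (file names illustrative),
point init= at it, and see whether the apparent speed-up disappears.)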

Auke


Re: [systemd-devel] Bootchart speeding up boot time

2016-02-22 Thread Martin Townsend
Hi,

Thanks for your reply.  I wouldn't really call this system stripped down:
it has an nginx webserver, DHCP server, postgresql-server, sftp server, a
few mono (C#) daemons running, loads quite a few kernel modules during
boot, dbus, sshd, avahi, and a bunch of other stuff I can't quite
remember.  I would imagine glibc will be a tiny portion of what gets
loaded during boot.
I have another ARM system which has a similar boot time with systemd.
It's only a single Cortex-A9 core, running a newer 4.1 kernel with a newer
version of systemd (it's built with the Jethro version of Yocto, so
probably a newer glibc too), and that one doesn't speed up when using
bootchart; in fact it slows down slightly, which is what I would expect.
So my current thinking is that it's either down to the fact that this is a
dual core and only one core is being used during boot unless a fork/execl
occurs, or it's down to the newer kernel/systemd/glibc or some other
component.

Is there any way of seeing what the CPU usage for each core is for systemd
during boot without using bootchart, so that I can rule the first idea in
or out?
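One rough way I could check this myself, I think, is to snapshot the
per-CPU lines of /proc/stat early in boot and again once the prompt
appears.  A minimal sketch, not tied to systemd at all:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        char line[256];
        FILE *f = fopen("/proc/stat", "r");

        if (!f) {
                perror("/proc/stat");
                return 1;
        }

        /* per-CPU lines look like "cpuN user nice system idle ..." */
        while (fgets(line, sizeof(line), f)) {
                unsigned long long user, nice, sys, idle;
                int cpu;

                if (strncmp(line, "cpu", 3) != 0 ||
                    !isdigit((unsigned char) line[3]))
                        continue;       /* skip the aggregate "cpu" line */

                if (sscanf(line, "cpu%d %llu %llu %llu %llu",
                           &cpu, &user, &nice, &sys, &idle) == 5)
                        printf("cpu%d busy=%llu idle=%llu (jiffies)\n",
                               cpu, user + nice + sys, idle);
        }

        fclose(f);
        return 0;
}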

Many Thanks,
Martin.


On Mon, Feb 22, 2016 at 6:52 PM, Kok, Auke-jan H wrote:

> On Fri, Feb 19, 2016 at 7:15 AM, Martin Townsend
>  wrote:
> > Hi,
> >
> > I'm new to systemd and have just enabled it for my Xilinx based dual core
> > cortex A-9 platform.  The linux system is built using Yocto (Fido branch)
> > which is using version 219 of systemd.
> >
> > The main reason for moving over to systemd was to see if we could improve
> > boot times and the good news was that by just moving over to systemd we
> > halved the boot time.  So I read that I could analyse the boot times in
> > detail using bootchart so I set init=//bootchart in my kernel command
> > line and was really suprised to see my boot time halved again.  Thinking
> > some weird caching must have occurred on the first boot I reverted back
> to
> > normal systemd boot and boot time jumped back to normal (around 17/18
> > seconds), putting bootchart back in again reduced it to ~9/10 seconds.
> >
> > So I created my own init using bootchart as a template that just slept
> for
> > 20 seconds using nanosleep and this also had the same effect of speeding
> up
> > the boot time.
> >
> > So the only difference I can see is that the kernel is not starting
> > /sbin/init -> /lib/systemd/systemd directly but via another program that
> is
> > performing a fork and then in the parent an execl to run
> > /lib/systemd/systemd.  What I would really like to understand is why it
> runs
> > faster when started this way?
>
>
> systemd-bootchart is a dynamically linked binary. In order for it to
> run, it needs to dynamically link and load much of glibc into memory.
>
> If your system is really stripped down, then the portion of data
> that's loaded from disk that is coming from glibc is relatively large,
> as compared to the rest of the system. In an absolute minimal system,
> I expect it to be well over 75% of the total data loaded from disk.
>
> It seems in your system, glibc is about 50% of the stuff that needs to
> be paged in from disk, hence, by starting systemd-bootchart before
> systemd, you've "removed" 50% of the total data to be loaded from the
> vision of bootchart, since, bootchart cannot start logging data until
> it's loaded all those glibc bits.
>
> Ultimately, your system isn't likely booting faster, you're just
> forcing it to load glibc before systemd starts.
>
> systemd-analyze may actually be a much better way of looking at the
> problem: it reports CLOCK_MONOTONIC timestamps for the various parts
> involved, including, possibly, firmware, kernel time, etc.. In
> conjunction with bootchart, this should give a full picture.
>
> Auke
>


Re: [systemd-devel] Bootchart speeding up boot time

2016-02-22 Thread Kok, Auke-jan H
On Fri, Feb 19, 2016 at 7:15 AM, Martin Townsend wrote:
> Hi,
>
> I'm new to systemd and have just enabled it for my Xilinx based dual core
> cortex A-9 platform.  The linux system is built using Yocto (Fido branch)
> which is using version 219 of systemd.
>
> The main reason for moving over to systemd was to see if we could improve
> boot times and the good news was that by just moving over to systemd we
> halved the boot time.  So I read that I could analyse the boot times in
> detail using bootchart so I set init=//bootchart in my kernel command
> line and was really suprised to see my boot time halved again.  Thinking
> some weird caching must have occurred on the first boot I reverted back to
> normal systemd boot and boot time jumped back to normal (around 17/18
> seconds), putting bootchart back in again reduced it to ~9/10 seconds.
>
> So I created my own init using bootchart as a template that just slept for
> 20 seconds using nanosleep and this also had the same effect of speeding up
> the boot time.
>
> So the only difference I can see is that the kernel is not starting
> /sbin/init -> /lib/systemd/systemd directly but via another program that is
> performing a fork and then in the parent an execl to run
> /lib/systemd/systemd.  What I would really like to understand is why it runs
> faster when started this way?


systemd-bootchart is a dynamically linked binary. In order for it to
run, it needs to dynamically link and load much of glibc into memory.

If your system is really stripped down, then the portion of data loaded
from disk that comes from glibc is relatively large compared to the rest
of the system.  In an absolutely minimal system, I would expect it to be
well over 75% of the total data loaded from disk.

It seems that in your system glibc is about 50% of the data that needs to
be paged in from disk; hence, by starting systemd-bootchart before
systemd, you've "removed" 50% of the total data to be loaded from
bootchart's view, since bootchart cannot start logging data until it has
loaded all those glibc bits.

Ultimately, your system isn't likely booting faster; you're just forcing
it to load glibc before systemd starts.

systemd-analyze may actually be a much better way of looking at the
problem: it reports CLOCK_MONOTONIC timestamps for the various parts
involved, possibly including firmware, kernel time, etc.  In conjunction
with bootchart, this should give a full picture.
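For instance, assuming the usual systemd-analyze verbs are available in
this version of systemd, something along these lines:

systemd-analyze time            # overall kernel/userspace startup time
systemd-analyze blame           # per-unit start-up times
systemd-analyze critical-chain  # units on the critical path
systemd-analyze plot > boot.svg # bootchart-like SVG timeline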

Auke


[systemd-devel] Bootchart speeding up boot time

2016-02-19 Thread Martin Townsend
Hi,

I'm new to systemd and have just enabled it for my Xilinx-based dual-core
Cortex-A9 platform.  The Linux system is built using Yocto (Fido branch),
which uses version 219 of systemd.

The main reason for moving over to systemd was to see if we could improve
boot times, and the good news was that just by moving over to systemd we
halved the boot time.  I then read that I could analyse the boot times in
detail using bootchart, so I set init=//bootchart in my kernel command
line and was really surprised to see my boot time halved again.  Thinking
some weird caching must have occurred on the first boot, I reverted back
to a normal systemd boot and the boot time jumped back to normal (around
17-18 seconds); putting bootchart back in again reduced it to ~9-10
seconds.

So I created my own init, using bootchart as a template, that just slept
for 20 seconds using nanosleep, and this had the same effect of speeding
up the boot time.

So the only difference I can see is that the kernel is not starting
/sbin/init -> /lib/systemd/systemd directly but via another program that
performs a fork and then, in the parent, an execl to run
/lib/systemd/systemd.  What I would really like to understand is why it
runs faster when started this way.

I'm using glibc v2.21
Linux kernel v3.14
gcc v4.9.2


Let me know if you require any more information.

Any help appreciated,
Martin.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel