Re: when startup delays become bugs

2013-05-20 Thread Bill Nottingham
Adam Williamson (awill...@redhat.com) said: 
 Isn't anaconda-tools the group used to ensure that things anaconda may
 need to install for particular hardware are present on images?

Live and DVD/install images, yes.

Bill
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel

Re: when startup delays become bugs (dmraid)

2013-05-20 Thread Przemek Klosowski

On 05/18/2013 05:17 AM, Nicolas Mailhot wrote:


On Sat, 18 May 2013 at 05:39, T.C. Hollingsworth wrote:

On Fri, May 17, 2013 at 7:34 PM, Adam Williamson awill...@redhat.com
wrote:

You could transfer the install to a system which contains a dmraid
array, or add a dmraid array to an existing install (I think this thread
has been considering only the case of the installed system itself being
on the RAID array). Of course, the case where the installed system is
not on the RAID array is much less 'urgent' - your system doesn't stop
working, you just have to figure out you need to enable the service /
install the tool in order to see the array.


Yeah, but you'd need to do some manual configuration there anyway.
Adding `yum install dmraid` to that isn't really a massive burden.


That's really a terrible argument. It's why you get so many one thousand
paper cut moments in IT nowadays.


IT failures tend to be either because the design is too simplistic or 
too complex. Windows is in the first category: it only works in the 
specific hardware configuration it was originally installed on---although I 
suspect that is a deliberate choice to prevent illegal mass cloning, 
rather than an inability to make it more portable.


Of course Linux has built-in drivers for most popular chipsets so it 
tends to boot on any hardware you put it on, at least in a basic mode.
I am quite happy with that---it allows me to do what I need to, in an 
incremental way, rather than telling me that I am SOL and should have 
started with a 'sysprep' but it's too late now.


I would say let's count our blessings and not expect that every possible 
combination of hardware components will be supported in an optimal way. 
Just as long as the whole thing reliably comes up, I'd be happy.


Re: when startup delays become bugs

2013-05-19 Thread Reindl Harald

On 17.05.2013 14:45, Lennart Poettering wrote:
 Here we go again:
 https://www.dropbox.com/s/gdrdvq0kovucpsp/bootchart-20130517-1530.svg
 
 There is something really fishy... For example, from 8s to 10s the
 machine goes pretty much entirely idle... As if there was something
 doing sleep(2) or so... 

maybe it's related to the bug report I filed a
few minutes ago, where systemd-analyze blame does
not display services with Type=simple:

https://bugzilla.redhat.com/show_bug.cgi?id=964760




Re: when startup delays become bugs (dmraid)

2013-05-18 Thread Nicolas Mailhot

On Sat, 18 May 2013 at 05:39, T.C. Hollingsworth wrote:
 On Fri, May 17, 2013 at 7:34 PM, Adam Williamson awill...@redhat.com
 wrote:
 You could transfer the install to a system which contains a dmraid
 array, or add a dmraid array to an existing install (I think this thread
 has been considering only the case of the installed system itself being
 on the RAID array). Of course, the case where the installed system is
 not on the RAID array is much less 'urgent' - your system doesn't stop
 working, you just have to figure out you need to enable the service /
 install the tool in order to see the array.

 Yeah, but you'd need to do some manual configuration there anyway.
 Adding `yum install dmraid` to that isn't really a massive burden.

That's really a terrible argument. It's why you get so many one thousand
paper cut moments in IT nowadays.

-- 
Nicolas Mailhot


Re: when startup delays become bugs (dmraid)

2013-05-18 Thread Reindl Harald


On 18.05.2013 11:17, Nicolas Mailhot wrote:
 
 On Sat, 18 May 2013 at 05:39, T.C. Hollingsworth wrote:
 On Fri, May 17, 2013 at 7:34 PM, Adam Williamson awill...@redhat.com
 wrote:
 You could transfer the install to a system which contains a dmraid
 array, or add a dmraid array to an existing install (I think this thread
 has been considering only the case of the installed system itself being
 on the RAID array). Of course, the case where the installed system is
 not on the RAID array is much less 'urgent' - your system doesn't stop
 working, you just have to figure out you need to enable the service /
 install the tool in order to see the array.

 Yeah, but you'd need to do some manual configuration there anyway.
 Adding `yum install dmraid` to that isn't really a massive burden.
 
 That's really a terrible argument. It's why you get so many one thousand
 paper cut moments in IT nowadays

No, it is a *very good argument*.

Nobody needs *everything and all* in a *core setup*, and I get my
paper-cut moments when people try to make defaults and core setups
idiot-proof, which will *never* succeed.




Re: when startup delays become bugs (dmraid)

2013-05-17 Thread Hans de Goede

Hi,

On 05/17/2013 01:21 AM, Lennart Poettering wrote:

On Thu, 16.05.13 13:44, Adam Williamson (awill...@redhat.com) wrote:


On Thu, 2013-05-16 at 20:41 +, Jóhann B. Guðmundsson wrote:

On 05/16/2013 08:17 PM, Bill Nottingham wrote:


We *could* drop all the assorted local storage tools from @standard and just 
leave
them to be installed by anaconda if they're being used to create such
storage. They'd have to remain on the live image, though, and so this would
not help installs done from the live images.


Aren't the live images exactly the place where we don't want enterprise
storage daemons?


dmraid is hardly enterprise, though. It's used for all firmware RAID
implementations aside from Intel's, and it's not unusual for any random
system to be using firmware RAID.


As I understood it, non-Intel fakeraid is actually pretty much the
exception...


I'm afraid that is not the case. Back when I worked on anaconda we had
tons and tons of dmraid (i.e. non-Intel firmware RAID) related bugs, mostly
in dmraid itself (*). Yes, that was two years ago, but I don't believe
the situation has changed drastically: all AMD motherboards, for example,
will still need dmraid if they are using the motherboard's built-in RAID,
as will JMicron controllers slapped on for extra SATA ports, and even
some PCI(-Express) add-in RAID cards need dmraid because they are just
firmware RAID. Also, various RAID + SSD all-in-one contraptions are being
built, which likely also need dmraid.

So at least in the live images, having dmraid by default is a must IMHO;
otherwise we may end up writing to one disk of a mirror without marking the
mirror dirty, which is not good, not good at all.

But I think we can still do better here while keeping dmraid by default:
it is highly unlikely for an installed system which is not using
dmraid to suddenly get a dmraid set without a re-install. It is
possible, but it requires an advanced user doing all kinds of
manual tweaking, and we can just document that in this case one more
tweak is necessary.

So I would like to suggest that we simply make the dmraid service write
a (very simple) config file at the end of its first run (i.e. it writes
the config file only if it does not already exist), and then use the
contents of that file to conditionally start dmraid from then on.

Note I believe this really needs to be a config file, and not "if
no dmraid sets were found, disable the service" kind of magic, so that if
a dmraid set was available but temporarily becomes unavailable, we
don't end up disabling the service and then, once the set is fixed /
restored, still not doing the right thing.

If others (especially Heinz and Peter, added to the CC) agree this would be
a good solution to no longer dragging systemd-udev-settle into
every system's boot, I think we should schedule fixing this for F-20.
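[Editorial note: the proposal above can be sketched roughly as a first-run
probe script. Everything here is a hypothetical illustration, not the actual
Fedora unit: the config path, the variable name, and the wiring are invented
for the sketch; only the dmraid commands themselves are real.]

```shell
#!/bin/sh
# Hypothetical sketch of the proposal: on the very first run, probe for
# dmraid sets once and persist the answer; on later boots, only activate
# dmraid (and thus drag in udev-settle) if the recorded answer says so.
CONF="${CONF:-/tmp/dmraid-sets.conf}"   # illustrative; a real unit would use /etc

if [ ! -e "$CONF" ]; then
    # First boot: `dmraid -s -c` lists discovered sets; treat failure
    # (or dmraid not being installed at all) as "no sets present".
    if dmraid -s -c >/dev/null 2>&1; then
        echo "DMRAID_SETS=yes" > "$CONF"
    else
        echo "DMRAID_SETS=no" > "$CONF"
    fi
fi

. "$CONF"
if [ "$DMRAID_SETS" = "yes" ]; then
    dmraid -ay    # activate all sets, as dmraid-activation does today
fi
```

A matching unit could then point an EnvironmentFile= or ConditionPathExists=
at the recorded file, so systems recorded as having no sets skip activation
entirely.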

Regards,

Hans


*) Not dmraid's fault, just the price we pay for reverse-engineering
these firmware RAID on-disk metadata formats.

Re: when startup delays become bugs

2013-05-17 Thread Hans de Goede

Hi,

On 05/17/2013 01:31 AM, Lennart Poettering wrote:

On Thu, 16.05.13 16:17, Bill Nottingham (nott...@redhat.com) wrote:


Lennart Poettering (mzerq...@0pointer.de) said:

RequiredBy=
WantedBy=dmraid-activation.service

I'm not using dmraid, or md raid, or any kind of raid at the moment. I also 
have this entry, previously explained in this thread as probably not being 
needed unless dmraid is being used, so is the likely offender for udev-settle.

   2.823s dmraid-activation.service


Given that this is really only needed for exotic stuff I do wonder why
we need to install this by default even.


We *could* drop all the assorted local storage tools from @standard and just 
leave
them to be installed by anaconda if they're being used to create such
storage. They'd have to remain on the live image, though, and so this would
not help installs done from the live images.


Does anaconda have infrastructure for this?


Yes, it already drags in, for example, iscsi-initiator-utils automatically when
iSCSI disks are used during install. IIRC doing the same for dmraid should
not be hard.

Also see my comment in another thread, where I offer a solution which would
work with the always popular livecd installs too, and thus is better IMHO.

Regards,

Hans


Re: when startup delays become bugs

2013-05-17 Thread Lennart Poettering
On Fri, 17.05.13 07:09, Heiko Adams (heiko.ad...@gmail.com) wrote:

 On 17.05.2013 01:09, Lennart Poettering wrote:
  
  For the super slow run above I'd be quite interested to have a look at
  the bootchart actually. (Heiko? Can you upload that?)
  
  Lennart
  
 https://www.dropbox.com/s/95dzgklajrgaz4n/bootchart-20130517-0756.svg
 
 BTW: Is it correct that /dev/mapper/VolGroup-lv_home can't be found with
 systemd-bootchart?

Hmm, for some reason the whole thing is truncated...

There are intervals of 2s where the whole machine doesn't do anything
apparently...

Can you rerun this with initcall_debug?
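[Editorial note: for readers following along, a hedged sketch of one way to
combine the two, using the documented systemd-bootchart invocation and the
standard initcall_debug kernel parameter; bootloader details vary.]

```shell
# Append to the kernel command line (e.g. GRUB_CMDLINE_LINUX in
# /etc/default/grub, then regenerate grub.cfg):
#
#   initcall_debug printk.time=y init=/usr/lib/systemd/systemd-bootchart
#
# initcall_debug makes the kernel log the duration of every driver init
# call, so kernel-side stalls show up alongside the userspace services
# in the resulting /run/log/bootchart-*.svg
```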

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs (dmraid)

2013-05-17 Thread Lennart Poettering
On Fri, 17.05.13 09:05, Hans de Goede (hdego...@redhat.com) wrote:

 Hi,
 
 On 05/17/2013 01:21 AM, Lennart Poettering wrote:
 On Thu, 16.05.13 13:44, Adam Williamson (awill...@redhat.com) wrote:
 
 On Thu, 2013-05-16 at 20:41 +, Jóhann B. Guðmundsson wrote:
 On 05/16/2013 08:17 PM, Bill Nottingham wrote:
 
 We *could* drop all the assorted local storage tools from @standard and 
 just leave
 them to be installed by anaconda if they're being used to create such
 storage. They'd have to remain on the live image, though, and so this 
 would
 not help installs done from the live images.
 
 Is not the live images exactly the place where we dont want enterprise
 storage daemons?
 
 dmraid is hardly enterprise, though. It's used for all firmware RAID
 implementations aside from Intel's, and it's not unusual for any random
 system to be using firmware RAID.
 
 As I understood this non-intel fakeraid is actually pretty much the
 exception...
 
 I'm afraid that is not the case, back when I worked on anaconda we had
 tons and tons of dmraid (so non intel firmware raid) related bugs, mostly
 in dmraid itself (*), and yes this is 2 years ago but I don't believe
 the situation will have changed drastically all amd motherboards for
 example will still need dmraid if they are using the motherboard build
 in raid, as well as jmicron controllers slapped on for extra sata ports,
 and even some pci(-express) add in raid cards need dmraid because they
 are just firmware raid. Also various raid + ssd all in one contraptions
 are being build, which likely also need dmraid.
 
 So at least in the live images having dmraid by default is a must IMHO,
 otherwise we may end up writing to one disk of a mirror without marking the
 mirror dirty, which is not good, not good at all.
 
 But I think we can still do better here while keeping dmraid by default,
 it is highly highly unlikely for an installed system which is not using
 dmraid to all of a sudden get a dmraid set without a re-install. It is
 possible, but this requires an advanced user to be doing all kind of
 manual tweaking, and we can just document that in this case one more
 tweak is necessary.
 
 So I would like to suggest that we simple make the dmraid service write
 a (very simple) config file at the end of its first run (so its writes
 the config file only if it does not exist), and then use the contents
 of that file to conditionally start dmraid from then on.
 
 Note I believe this really needs to be a config file, and not a if
 no dmraid-sets where found disable service kind of magic, so that if
 a dmraid set was available but temporarily becomes unavailable, we
 don't end up disabling the service and then if the set is fixed /
 restored, we still don't do the right thing.
 
 If others (esp Heinz and Peter, added to the CC) agree this would be
 a good solution to no longer dragging in systemd-udev-settle into
 every systems boot, I think we should schedule fixing this for F-20.

I have filed a bug for this now:

https://bugzilla.redhat.com/show_bug.cgi?id=964172

I also filed this bug against anaconda, so that for the non-livecd
installs we don't even get dmraid installed...

https://bugzilla.redhat.com/show_bug.cgi?id=964175

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-17 Thread Heiko Adams
On 17.05.2013 14:18, Lennart Poettering wrote:
 On Fri, 17.05.13 07:09, Heiko Adams (heiko.ad...@gmail.com) wrote:
 
 On 17.05.2013 01:09, Lennart Poettering wrote:

 For the super slow run above I'd be quite interested to have a look at
 the bootchart actually. (Heiko? Can you upload that?)

 Lennart

 https://www.dropbox.com/s/95dzgklajrgaz4n/bootchart-20130517-0756.svg

 BTW: Is it correct that /dev/mapper/VolGroup-lv_home can't be found with
 systemd-bootchart?
 
 Hmm, for some reason the whole thing is truncated...
 
 There are intervals of 2s where the whole machine doesn't do anything
 apparently...
 
 Can you rerun this with initcall_debug?
 
 Lennart
 
Here we go again:
https://www.dropbox.com/s/gdrdvq0kovucpsp/bootchart-20130517-1530.svg
-- 
Regards,

Heiko Adams

Re: when startup delays become bugs

2013-05-17 Thread Lennart Poettering
On Fri, 17.05.13 14:38, Heiko Adams (heiko.ad...@gmail.com) wrote:

 On 17.05.2013 14:18, Lennart Poettering wrote:
  On Fri, 17.05.13 07:09, Heiko Adams (heiko.ad...@gmail.com) wrote:
  
  On 17.05.2013 01:09, Lennart Poettering wrote:
 
  For the super slow run above I'd be quite interested to have a look at
  the bootchart actually. (Heiko? Can you upload that?)
 
  Lennart
 
  https://www.dropbox.com/s/95dzgklajrgaz4n/bootchart-20130517-0756.svg
 
  BTW: Is it correct that /dev/mapper/VolGroup-lv_home can't be found with
  systemd-bootchart?
  
  Hmm, for some reason the whole thing is truncated...
  
  There are intervals of 2s where the whole machine doesn't do anything
  apparently...
  
  Can you rerun this with initcall_debug?
  
  Lennart
  
 Here we go again:
 https://www.dropbox.com/s/gdrdvq0kovucpsp/bootchart-20130517-1530.svg

There is something really fishy... For example, from 8s to 10s the
machine goes pretty much entirely idle... As if there was something
doing sleep(2) or so... 

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs (dmraid)

2013-05-17 Thread Chris Adams
Once upon a time, Lennart Poettering mzerq...@0pointer.de said:
 I also filed this bug against anaconda, so that for the non-livecd
 installs we don't even get dmraid installed...

I know there's always a goal of shrinking the base install, but I'm
afraid we're going overboard.  One nice thing about Linux vs. Windows
has always been how easy it is to move an install from one physical
system to another (install on one machine and move the hard drive; the
system dies but the drive is okay, so you replace the motherboard; etc.).

By cutting out all the hardware support except for what is in the system
at install, it becomes more difficult (like Windows) to deal with any
problems, hardware upgrades, etc. down the road.

-- 
Chris Adams li...@cmadams.net

Re: when startup delays become bugs (dmraid)

2013-05-17 Thread Matthew Miller
On Fri, May 17, 2013 at 08:43:36AM -0500, Chris Adams wrote:
 By cutting out all the hardware support except for what is in the system
 at install, it becomes more difficult (like Windows) to deal with any
 problems, hardware upgrades, etc. down the road.

Agreed, but I think hardware-which-requires-weird-daemons is an okay place
to draw the line for @core, at least. Maybe it *should* be in @standard?



-- 
Matthew Miller  ☁☁☁  Fedora Cloud Architect  ☁☁☁  mat...@fedoraproject.org

Re: when startup delays become bugs (dmraid)

2013-05-17 Thread Bruno Wolff III

On Fri, May 17, 2013 at 08:43:36 -0500,
  Chris Adams li...@cmadams.net wrote:


By cutting out all the hardware support except for what is in the system
at install, it becomes more difficult (like Windows) to deal with any
problems, hardware upgrades, etc. down the road.


Note that the change to defaulting to hostonly for dracut has already 
sent us down that path. People who want to be able to move disks without 
needing to use rescue mode to fix things up on the new hardware will 
already need to do some work.
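[Editorial note: for reference, the hostonly default mentioned here can be
switched back to generic initramfs images with a small dracut drop-in. A
sketch; the drop-in file name is arbitrary.]

```shell
# /etc/dracut.conf.d/generic-image.conf  (file name is arbitrary)
# Build portable initramfs images that include drivers for hardware
# beyond the currently-running machine:
hostonly="no"

# Or rebuild just the current image once, without changing the default:
#   dracut -f --no-hostonly
```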


Re: when startup delays become bugs

2013-05-17 Thread David Lehman
On Fri, 2013-05-17 at 09:07 +0200, Hans de Goede wrote:
 Hi,
 
 On 05/17/2013 01:31 AM, Lennart Poettering wrote:
  On Thu, 16.05.13 16:17, Bill Nottingham (nott...@redhat.com) wrote:
 
  Lennart Poettering (mzerq...@0pointer.de) said:
  RequiredBy=
  WantedBy=dmraid-activation.service
 
  I'm not using dmraid, or md raid, or any kind of raid at the moment. I 
  also have this entry, previously explained in this thread as probably 
  not being needed unless dmraid is being used, so is the likely offender 
  for udev-settle.
 
 2.823s dmraid-activation.service
 
  Given that this is really only needed for exotic stuff I do wonder why
  we need to install this by default even.
 
  We *could* drop all the assorted local storage tools from @standard and 
  just leave
  them to be installed by anaconda if they're being used to create such
  storage. They'd have to remain on the live image, though, and so this would
  not help installs done from the live images.
 
  Does anaconda have infrastructure for this?
 
 Yes, it already drags in for example iscsi-initiator-utils automatically when
 iscsi disks are used during install. IIRC doing the same for dmraid should
 not be hard.

This has been in place since somewhere around F11 or F12. You just can't
tell since dmraid is in base. There is still the anaconda-tools group,
which I forget the purpose of (live media?).



Re: when startup delays become bugs

2013-05-17 Thread Lennart Poettering
On Fri, 17.05.13 10:18, David Lehman (dleh...@redhat.com) wrote:

 On Fri, 2013-05-17 at 09:07 +0200, Hans de Goede wrote:
  Hi,
  
  On 05/17/2013 01:31 AM, Lennart Poettering wrote:
   On Thu, 16.05.13 16:17, Bill Nottingham (nott...@redhat.com) wrote:
  
   Lennart Poettering (mzerq...@0pointer.de) said:
   RequiredBy=
   WantedBy=dmraid-activation.service
  
   I'm not using dmraid, or md raid, or any kind of raid at the moment. I 
   also have this entry, previously explained in this thread as probably 
   not being needed unless dmraid is being used, so is the likely 
   offender for udev-settle.
  
  2.823s dmraid-activation.service
  
   Given that this is really only needed for exotic stuff I do wonder why
   we need to install this by default even.
  
   We *could* drop all the assorted local storage tools from @standard and 
   just leave
   them to be installed by anaconda if they're being used to create such
   storage. They'd have to remain on the live image, though, and so this 
   would
   not help installs done from the live images.
  
   Does anaconda have infrastructure for this?
  
  Yes, it already drags in for example iscsi-initiator-utils automatically 
  when
  iscsi disks are used during install. IIRC doing the same for dmraid should
  not be hard.
 
 This has been in place since somewhere around F11 or F12. You just can't
 tell since dmraid is in base. There is still the anaconda-tools group,
 which I forget the purpose of (live media?).

So, are you saying we could simply drop dmraid from base, and anaconda
would do the right thing and install it when dmraid is used for the
installation?

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-17 Thread David Lehman
On Fri, 2013-05-17 at 17:45 +0200, Lennart Poettering wrote:
 On Fri, 17.05.13 10:18, David Lehman (dleh...@redhat.com) wrote:
 
  On Fri, 2013-05-17 at 09:07 +0200, Hans de Goede wrote:
   Hi,
   
   On 05/17/2013 01:31 AM, Lennart Poettering wrote:
On Thu, 16.05.13 16:17, Bill Nottingham (nott...@redhat.com) wrote:
   
Lennart Poettering (mzerq...@0pointer.de) said:
RequiredBy=
WantedBy=dmraid-activation.service
   
I'm not using dmraid, or md raid, or any kind of raid at the moment. 
I also have this entry, previously explained in this thread as 
probably not being needed unless dmraid is being used, so is the 
likely offender for udev-settle.
   
   2.823s dmraid-activation.service
   
Given that this is really only needed for exotic stuff I do wonder why
we need to install this by default even.
   
We *could* drop all the assorted local storage tools from @standard 
and just leave
them to be installed by anaconda if they're being used to create such
storage. They'd have to remain on the live image, though, and so this 
would
not help installs done from the live images.
   
Does anaconda have infrastructure for this?
   
   Yes, it already drags in for example iscsi-initiator-utils automatically 
   when
   iscsi disks are used during install. IIRC doing the same for dmraid should
   not be hard.
  
  This has been in place since somewhere around F11 or F12. You just can't
  tell since dmraid is in base. There is still the anaconda-tools group,
  which I forget the purpose of (live media?).
 
 So, are you saying we could simply drop dmraid from base, and anaconda
 would do the right thing and install it when dmraid is used for the
 installation?

Yes. Likewise lvm2, mdadm, cryptsetup, e2fsprogs, and whatever other
storage-specific packages are in there.

 
 Lennart
 
 -- 
 Lennart Poettering - Red Hat, Inc.



Re: when startup delays become bugs

2013-05-17 Thread poma
On 16.05.2013 21:29, Heiko Adams wrote:
 My top 6 of extreme long starting services are:
 $ systemd-analyze blame
  27.652s NetworkManager.service
  27.072s chronyd.service
  27.015s avahi-daemon.service
  26.899s tuned.service
  26.647s restorecond.service
  23.512s lightdm.service
 
 And I don't have a clue why the hell I've got an avahi-daemon.service
 running on every startup and why it takes so extremely long to start.
…

Also, your 'lightdm' seems to be trying to compete with 'gdm'; I mean,
really, what are you two doing!? :)

systemd-analyze blame | grep lightdm
   467ms lightdm.service


poma




Re: when startup delays become bugs

2013-05-17 Thread poma
On 17.05.2013 14:45, Lennart Poettering wrote:

 There is something really fishy...
…

Yeah!
The week *end* has just begun! :)


poma


Re: when startup delays become bugs

2013-05-17 Thread Adam Williamson
On Fri, 2013-05-17 at 10:18 -0500, David Lehman wrote:
 On Fri, 2013-05-17 at 09:07 +0200, Hans de Goede wrote:
  Hi,
  
  On 05/17/2013 01:31 AM, Lennart Poettering wrote:
   On Thu, 16.05.13 16:17, Bill Nottingham (nott...@redhat.com) wrote:
  
   Lennart Poettering (mzerq...@0pointer.de) said:
   RequiredBy=
   WantedBy=dmraid-activation.service
  
   I'm not using dmraid, or md raid, or any kind of raid at the moment. I 
   also have this entry, previously explained in this thread as probably 
   not being needed unless dmraid is being used, so is the likely 
   offender for udev-settle.
  
  2.823s dmraid-activation.service
  
   Given that this is really only needed for exotic stuff I do wonder why
   we need to install this by default even.
  
   We *could* drop all the assorted local storage tools from @standard and 
   just leave
   them to be installed by anaconda if they're being used to create such
   storage. They'd have to remain on the live image, though, and so this 
   would
   not help installs done from the live images.
  
   Does anaconda have infrastructure for this?
  
  Yes, it already drags in for example iscsi-initiator-utils automatically 
  when
  iscsi disks are used during install. IIRC doing the same for dmraid should
  not be hard.
 
 This has been in place since somewhere around F11 or F12. You just can't
 tell since dmraid is in base. There is still the anaconda-tools group,
 which I forget the purpose of (live media?).

Isn't anaconda-tools the group used to ensure that things anaconda may
need to install for particular hardware are present on images?
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-17 Thread Ric Wheeler

On 05/16/2013 02:39 PM, Lennart Poettering wrote:

On Thu, 16.05.13 12:20, Chris Murphy (li...@colorremedies.com) wrote:


There have been no crashes, so ext4 doesn't need fsck on every boot:

   4.051s systemd-fsck-root.service
515ms

systemd-fsck@dev-disk-by\x2duuid-09c66d01\x2d8126\x2d39c2\x2db7b8\x2d25f14cbd35af.service

Well, but only fsck itself knows that and can determine this from the
superblock. Hence we have to start it first and it will then exit
quickly if the fs wasn't dirty.

Note that these times might be misleading: if fsck takes this long to
check the superblock and exit this might be a result of something else
which runs in parallel monopolizing CPU or IO (for example readahead),
and might not actually be fsck's own fault.


We really should not need to run fsck on boot unless the mount fails. Might save 
some time at the cost of a bit of extra complexity?


Ric




and no oops, so this seems unnecessary:

   1.092s abrt-uefioops.service

https://bugzilla.redhat.com/show_bug.cgi?id=963182


and I'm not using LVM so these seem unnecessary:


   2.783s lvm2-monitor.service
489ms systemd-udev-settle.service
 15ms lvm2-lvmetad.service

How do I determine what component to file a bug against? I guess I have to find 
the package that caused these .service files to be installed?

$ repoquery --qf=%{sourcerpm} --whatprovides 
'*/lib/systemd/system/lvm2-monitor.service'
lvm2-2.02.98-8.fc19.src.rpm

Please file a bug against the lvm2 package. And make sure to add it to:

https://bugzilla.redhat.com/show_bug.cgi?id=963210

Hmm, on your machine, what does systemctl show -p WantedBy -p
RequiredBy systemd-udev-settle.service show? This will tell us which
package is actually responsible for pulling in
systemd-udev-settle.service.

Thanks!

Lennart




Re: when startup delays become bugs

2013-05-17 Thread Ric Wheeler

On 05/15/2013 05:09 PM, Rahul Sundaram wrote:

On 05/15/2013 04:39 PM, Przemek Klosowski wrote:
I was planning to upgrade to F19 soon, and I kind-of care about the data on 
that system (I have backup, but corruption would not be welcome, just for the 
lost time reason). Do people recommend sticking with it? holding off the 
upgrade? switching back to ext4?


https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable.3F is a reasonably 
good answer to that question. I would note that traditional Linux filesystems, 
including XFS and ext4, have a very long history and time to mature, and only 
check for metadata integrity, unlike Btrfs. If you have a good (and frequent) 
backup system, it might be worth riding it out, but you will have to evaluate 
the risks and make that determination on your own.


FWIW, I have been using Btrfs without any (known) issues for a while, but I 
also have almost no data on my laptop that I cannot afford to lose, since 
everything important is on an external backup disk and really important files 
are on another laptop as well.


Rahul



The right answer is of course to always back up every file system, since 
disks die, bugs eat data, etc. :)


If you have backups and want to help btrfs settle in, it is really valuable to 
have people on it.


btrfs is getting more stable, but for critical data I would suggest using xfs or 
ext4.


Ric


Re: when startup delays become bugs

2013-05-17 Thread Chris Murphy

On May 17, 2013, at 2:38 PM, Ric Wheeler rwhee...@redhat.com wrote:

 On 05/16/2013 02:39 PM, Lennart Poettering wrote:
 On Thu, 16.05.13 12:20, Chris Murphy (li...@colorremedies.com) wrote:
 
 There have been no crashes, so ext4 doesn't need fsck on every boot:
 
   4.051s systemd-fsck-root.service
515ms

 systemd-fsck@dev-disk-by\x2duuid-09c66d01\x2d8126\x2d39c2\x2db7b8\x2d25f14cbd35af.service
 Well, but only fsck itself knows that and can determine this from the
 superblock. Hence we have to start it first and it will then exit
 quickly if the fs wasn't dirty.
 
 Note that these times might be misleading: if fsck takes this long to
 check the superblock and exit this might be a result of something else
 which runs in parallel monopolizing CPU or IO (for example readahead),
 and might not actually be fsck's own fault.
 
 We really should not need to run fsck on boot unless the mount fails. Might 
 save some time at the cost of a bit of extra complexity?

Seems some extra complexity is needed anyway, since the way to deal with file 
system problems differs among the various filesystems. The XFS and Btrfs 
fscks are no-ops; XFS needs xfs_repair run, and Btrfs may need to be 
remounted with -o degraded, depending on the nature of the mount failure, 
since most problems are autorecovered during mount.


Chris Murphy

Re: when startup delays become bugs

2013-05-17 Thread Eric Sandeen
On 5/17/13 3:38 PM, Ric Wheeler wrote:
 On 05/16/2013 02:39 PM, Lennart Poettering wrote:
 On Thu, 16.05.13 12:20, Chris Murphy (li...@colorremedies.com) wrote:

 There have been no crashes, so ext4 doesn't need fsck on every boot:

    4.051s systemd-fsck-root.service
     515ms systemd-fsck@dev-disk-by\x2duuid-09c66d01\x2d8126\x2d39c2\x2db7b8\x2d25f14cbd35af.service
 Well, but only fsck itself knows that and can determine this from the
 superblock. Hence we have to start it first and it will then exit
 quickly if the fs wasn't dirty.

 Note that these times might be misleading: if fsck takes this long to
 check the superblock and exit this might be a result of something else
 which runs in parallel monopolizing CPU or IO (for example readahead),
 and might not actually be fsck's own fault.
 
 We really should not need to run fsck on boot unless the mount fails. Might 
 save some time at the cost of a bit of extra complexity?

well, ext[34] are special little snowflakes.  ;)

Since forever, we've called fsck on boot, and e2fsck replays the log in 
userspace if needed.  If the fs isn't marked as having encountered an error 
during the previous mount (or, historically, having had too many mounts or too 
much time since last fsck), then nothing else happens.

It shouldn't take a whole ton of time, but it could, depending on whether the log 
was dirty, and then on the size of the log and the speed of the disk, I suppose.
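That clean/dirty decision can be read directly out of the superblock. A minimal sketch (assuming mkfs.ext4 and dumpe2fs from e2fsprogs are available; the file-backed image is just a stand-in for a real device):

```shell
# Sketch: inspect the ext4 superblock fields e2fsck consults at boot,
# using a throwaway file-backed image so no real device or root is needed.
PATH="$PATH:/usr/sbin:/sbin"
img=$(mktemp)
truncate -s 16M "$img"
mkfs.ext4 -F -q "$img"
# "Filesystem state: clean" is what lets fsck exit immediately; the
# mount-count fields drove the historical periodic full checks.
dumpe2fs -h "$img" 2>/dev/null |
    grep -E 'Filesystem state|Mount count|Maximum mount count'
rm -f "$img"
```

On a freshly created image the state reports clean and the mount count is zero, which is exactly the fast-exit case described above.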

When it took 4s above, was that from a clean reboot (i.e. was the journal 
dirty?)

-Eric

 Ric
 

 and no oops, so this seems unnecessary:

1.092s abrt-uefioops.service
 https://bugzilla.redhat.com/show_bug.cgi?id=963182

 and I'm not using LVM so these seem unnecessary:


2.783s lvm2-monitor.service
 489ms systemd-udev-settle.service
  15ms lvm2-lvmetad.service

 How do I determine what component to file a bug against? I guess I have to 
 find the package that caused these .service files to be installed?
 $ repoquery --qf=%{sourcerpm} --whatprovides 
 '*/lib/systemd/system/lvm2-monitor.service'
 lvm2-2.02.98-8.fc19.src.rpm

 Please file a bug against the lvm2 package. And make sure to add it to:

 https://bugzilla.redhat.com/show_bug.cgi?id=963210

 Hmm, on your machine, what does systemctl show -p WantedBy -p
 RequiredBy systemd-udev-settle.service show? This will tell us which
 package is actually responsible for pulling in
 systemd-udev-settle.service.

 Thanks!

 Lennart

 


Re: when startup delays become bugs

2013-05-17 Thread Eric Sandeen
On 5/17/13 3:58 PM, Chris Murphy wrote:
 
 On May 17, 2013, at 2:38 PM, Ric Wheeler rwhee...@redhat.com wrote:
 
 On 05/16/2013 02:39 PM, Lennart Poettering wrote:
 On Thu, 16.05.13 12:20, Chris Murphy (li...@colorremedies.com) wrote:

 There have been no crashes, so ext4 doesn't need fsck on every boot:

    4.051s systemd-fsck-root.service
     515ms systemd-fsck@dev-disk-by\x2duuid-09c66d01\x2d8126\x2d39c2\x2db7b8\x2d25f14cbd35af.service
 Well, but only fsck itself knows that and can determine this from the
 superblock. Hence we have to start it first and it will then exit
 quickly if the fs wasn't dirty.

 Note that these times might be misleading: if fsck takes this long to
 check the superblock and exit this might be a result of something else
 which runs in parallel monopolizing CPU or IO (for example readahead),
 and might not actually be fsck's own fault.

 We really should not need to run fsck on boot unless the mount fails. Might 
 save some time at the cost of a bit of extra complexity?
 
 Seems some extra complexity is needed anyway, since the way to deal
 with file system problems differs across the various filesystems. XFS and
 Btrfs fscks are no-ops. XFS needs xfs_repair run, and Btrfs may
 need to be remounted with -o degraded, depending on the nature of
 the mount failure, since most problems are auto-recovered during
 mount.

fsck.xfs is a no-op because of the XFS approach that it's a journaling
filesystem, so the mount-time recovery is simply: replay the log and you're
good.

If you have a corrupt filesystem (as opposed to a not-cleanly-unmounted
filesystem), xfs_repair is an administrative action,
not a boot-time auto-initiated initscript action.

-Eric


 
 Chris Murphy
 


Re: when startup delays become bugs

2013-05-17 Thread Chris Murphy

On May 17, 2013, at 3:56 PM, Eric Sandeen sand...@redhat.com wrote:

 On 5/17/13 3:58 PM, Chris Murphy wrote:
 
 Seems some extra complexity is needed anyway, since the way to deal
 with file system problems differs across the various filesystems. XFS and
 Btrfs fscks are no-ops. XFS needs xfs_repair run, and Btrfs may
 need to be remounted with -o degraded, depending on the nature of
 the mount failure, since most problems are auto-recovered during
 mount.
 
 fsck.xfs is a no-op because of the XFS approach that it's a journaling
 filesystem, so the mount-time recovery is simply: replay the log and you're
 good.
 
 If you have a corrupt filesystem (as opposed to a not-cleanly-unmounted
 filesystem), xfs_repair is an administrative action,
 not a boot-time auto-initiated initscript action.

So if the boot fails due to reasons other than an unclean mount, with xfs the 
user needs a rescue environment of some sort. At the moment, it's similar with 
Btrfs in that what to do next depends on the problem.


Chris Murphy

Re: when startup delays become bugs

2013-05-17 Thread Chris Murphy

On May 17, 2013, at 3:53 PM, Eric Sandeen sand...@redhat.com wrote:

 On Thu, 16.05.13 12:20, Chris Murphy (li...@colorremedies.com) wrote:
 
 There have been no crashes, so ext4 doesn't need fsck on every boot:
 
    4.051s systemd-fsck-root.service
     515ms systemd-fsck@dev-disk-by\x2duuid-09c66d01\x2d8126\x2d39c2\x2db7b8\x2d25f14cbd35af.service


 When it took 4s above, was that from a clean reboot (i.e. was the journal 
 dirty?)

Clean. And it's a new file system, created within the hour of the time test. I 
also don't understand why there are two instances. There's only one ext4 file 
system on the computer.


Chris Murphy

Re: when startup delays become bugs

2013-05-17 Thread Eric Sandeen
On 5/17/13 5:29 PM, Chris Murphy wrote:
 
 On May 17, 2013, at 3:56 PM, Eric Sandeen sand...@redhat.com wrote:
 
 On 5/17/13 3:58 PM, Chris Murphy wrote:

 Seems some extra complexity is needed anyway, since the way to deal
 with file system problems differs across the various filesystems. XFS and
 Btrfs fscks are no-ops. XFS needs xfs_repair run, and Btrfs may
 need to be remounted with -o degraded, depending on the nature of
 the mount failure, since most problems are auto-recovered during
 mount.

 fsck.xfs is a no-op because of the XFS approach that it's a journaling
 filesystem, so the mount-time recovery is simply: replay the log and you're
 good.

 If you have a corrupt filesystem (as opposed to a not-cleanly-unmounted
 filesystem), xfs_repair is an administrative action,
 not a boot-time auto-initiated initscript action.
 
 So if the boot fails due to reasons other than an unclean mount, 

An unclean mount doesn't fail boot ;)

 with
 xfs the user needs a rescue environment of some sort. At the moment,
 it's similar with Btrfs in that what to do next depends on the
 problem.

Isn't that always the case? :)

Same is true for e2fsck, TBH.  Things can go wrong...

initramfs contains tools for all these filesystems, AFAIK.

-Eric (sorry, I think I'm taking this thread off-topic)

 Chris Murphy
 


Re: when startup delays become bugs

2013-05-17 Thread Eric Sandeen
On 5/17/13 5:39 PM, Chris Murphy wrote:
 
 On May 17, 2013, at 3:53 PM, Eric Sandeen sand...@redhat.com wrote:
 
 On Thu, 16.05.13 12:20, Chris Murphy (li...@colorremedies.com) wrote:

 There have been no crashes, so ext4 doesn't need fsck on every boot:

    4.051s systemd-fsck-root.service
     515ms systemd-fsck@dev-disk-by\x2duuid-09c66d01\x2d8126\x2d39c2\x2db7b8\x2d25f14cbd35af.service
 
 
 When it took 4s above, was that from a clean reboot (i.e. was the journal 
 dirty?)
 
 Clean. And it's a new file system, created within the hour of the time test. 
 I also don't understand why there are two instances. There's only one ext4 
 file system on the computer.

Can't imagine why it takes 4s then, need finer-grained tracing I guess...

If I e2fsck a clean fs here it takes about 0.1s.

Most of its time is spent in read(); several reads get about 100k of data from 
the device, and that's about it.

As a sanity check I suppose you could try e2fsck from a rescue environment and 
see if it still takes that long, or if there is other overhead / interaction 
slowing it down.

-Eric

 
 Chris Murphy
 


Re: when startup delays become bugs

2013-05-17 Thread Michael Scherer
Le jeudi 16 mai 2013 à 14:44 -0400, john.flor...@dart.biz a écrit :
  From: Adam Williamson awill...@redhat.com 
   How do I determine what component to file a bug against? I guess I
   have to find the package that caused these .service files to be
   installed?
  
  Yes. 'rpm -qf /lib/systemd/system/foo.service'.
 
 I'd actually suggest doing: 
 
 rpm -qif /lib/systemd/system/foo.service 
 
 ... and noting the source rpm. 

For the sake of over-optimisation:
$ rpm --queryformat '%{SOURCERPM}\n' -qf /lib/systemd/system/foo.service
foo-1.2-7.fc19.src.rpm

-- 
Michael Scherer


Re: when startup delays become bugs (dmraid)

2013-05-17 Thread T.C. Hollingsworth
On May 17, 2013 6:43 AM, Chris Adams li...@cmadams.net wrote:
 Once upon a time, Lennart Poettering mzerq...@0pointer.de said:
  I also filed this bug against anaconda, so that for the non-livecd
  installs we don't even get dmraid installed...

 I know there's always a goal of shrinking the base install, but I'm
 afraid we're going overboard.  One nice thing about Linux vs. Windows
 has always been how easy it is to move an install from one physical
 system to another (install in one and move the hard drive, system dies
 but drive is okay so you replace mboard, etc.).

 By cutting out all the hardware support except for what is in the system
 at install, it becomes more difficult (like Windows) to deal with any
 problems, hardware upgrades, etc. down the road.

How does this change affect that?  If you slap a disk into another machine
it's not going to magically start using RAID...

-T.C.

Re: when startup delays become bugs (dmraid)

2013-05-17 Thread Adam Williamson
On Fri, 2013-05-17 at 19:17 -0700, T.C. Hollingsworth wrote:
 On May 17, 2013 6:43 AM, Chris Adams li...@cmadams.net wrote:
  Once upon a time, Lennart Poettering mzerq...@0pointer.de said:
   I also filed this bug against anaconda, so that for the non-livecd
   installs we don't even get dmraid installed...
 
  I know there's always a goal of shrinking the base install, but I'm
  afraid we're going overboard.  One nice thing about Linux vs.
 Windows
  has always been how easy it is to move an install from one physical
  system to another (install in one and move the hard drive, system
 dies
  but drive is okay so you replace mboard, etc.).
 
  By cutting out all the hardware support except for what is in the
 system
  at install, it becomes more difficult (like Windows) to deal with
 any
  problems, hardware upgrades, etc. down the road.
 
 How does this change affect that?  If you slap a disk into another
 machine it's not going to magically start using RAID...

You could transfer the install to a system which contains a dmraid
array, or add a dmraid array to an existing install (I think this thread
has been considering only the case of the installed system itself being
on the RAID array). Of course, the case where the installed system is
not on the RAID array is much less 'urgent' - your system doesn't stop
working, you just have to figure out you need to enable the service /
install the tool in order to see the array.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs (dmraid)

2013-05-17 Thread T.C. Hollingsworth
On Fri, May 17, 2013 at 7:34 PM, Adam Williamson awill...@redhat.com wrote:
 You could transfer the install to a system which contains a dmraid
 array, or add a dmraid array to an existing install (I think this thread
 has been considering only the case of the installed system itself being
 on the RAID array). Of course, the case where the installed system is
 not on the RAID array is much less 'urgent' - your system doesn't stop
 working, you just have to figure out you need to enable the service /
 install the tool in order to see the array.

Yeah, but you'd need to do some manual configuration there anyway.
Adding `yum install dmraid` to that isn't really a massive burden.

I could have another drive in my new system that uses ZFS, but I don't
think we should add zfs-fuse to @core just so any random hard drive
with Fedora installed that I happen to want to pop in there can read
it off the bat.

I love that I can put a drive with Fedora on it into any system and
have it boot, but my expectations in this regard end with being able
to log in.  I do not expect any random new hardware I have in that new
system to just magically work without a little prodding, beyond the
hardware that *always* just magically works, of course.  Asking for
more is madness.

-T.C.

Re: when startup delays become bugs

2013-05-16 Thread Lennart Poettering
On Wed, 15.05.13 15:28, Orion Poplawski (or...@cora.nwra.com) wrote:

 On 05/15/2013 03:26 PM, Bill Nottingham wrote:
 Orion Poplawski (or...@cora.nwra.com) said:
 On 05/15/2013 05:36 AM, Lennart Poettering wrote:
 On Tue, 14.05.13 20:43, Adam Williamson (awill...@redhat.com) wrote:
 479ms iprupdate.service
 385ms iprinit.service
 
 These appear to be untis that are only necessary for IBM RAID. It's
 hugely disappointing if this is pulled into all boots. I wonder if we
 can find a different logic for this, for example pulling it in from a
 udev rules or so. Or by changing anaconda to install this only onb IBM
 RAID systemd... Anyone knows what this is about?
 
 Also, I don't see either of these listed in the default preset
 file. This really shouldnt be started by default.
 
  78ms iprdump.service
 
 IBM RAID again? Jeez...
 
 These are from the iprutils package:
 
 Summary : Utilities for the IBM Power Linux RAID adapters
 Description :
 Provides a suite of utilities to manage and configure SCSI devices
 supported by the ipr SCSI storage device driver.
 
 And are sysv init scripts:
 
 /etc/rc.d/init.d/iprdump
 /etc/rc.d/init.d/iprinit
 /etc/rc.d/init.d/iprupdate
 
 Which is listed a mandatory in the core group of comps.
 
 CCing iprutils-owner for comment.
 
 99% sure those should only be built on PPC.
 
 Bill
 
 
 * Wed Sep 05 2012 Karsten Hopp kars...@redhat.com 2.3.11-1
 - update to 2.3.11
 - enable on all archs as it now supports some adapters on them, too.

I filed a bug about this now:

https://bugzilla.redhat.com/show_bug.cgi?id=963679

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-16 Thread Daniel J Walsh
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 05/14/2013 06:30 PM, Dan Williams wrote:
 On Tue, 2013-05-14 at 15:51 -0600, Chris Murphy wrote:
 This is not intended to be snarky, but I admit it could sound like it is.
 When are long startup times for services considered to be bugs in their
 own right?
 
 
 [root@f19q ~]# systemd-analyze blame
 1min 444ms sm-client.service
 1min 310ms sendmail.service
18.602s firewalld.service
13.882s avahi-daemon.service
12.944s NetworkManager-wait-online.service
 
 Is anything waiting on NetworkManager-wait-online in your install?  That 
 target is really intended for servers where you want to block Apache or 
 Sendmail or Database from starting until you're sure your networking is 
 fully configured.  If you don't have any services that require a network to
 be up, then you can mask NetworkManager-wait-online and things will be more
 parallel.
 
 12.715s restorecond.service
I have no idea why restorecond would take this much time, and it really should
not be enabled on most machines now that we have file name transitions.

Not sure why it is enabled.  Could someone check on a fresh install if it is
enabled?

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlGU2roACgkQrlYvE4MpobMWkgCePCWW4miPesJMTl3OyQIAue4r
zu8AnR2iilICj6NqhjDfWZl4l7gMaXe6
=2nKR
-END PGP SIGNATURE-

Re: when startup delays become bugs

2013-05-16 Thread Orion Poplawski

On 05/16/2013 07:10 AM, Daniel J Walsh wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 05/14/2013 06:30 PM, Dan Williams wrote:

On Tue, 2013-05-14 at 15:51 -0600, Chris Murphy wrote:

12.715s restorecond.service

I have no idea why restorecond would take this much time, and it really should
not be enabled for most machines, with file name transitions.

Not sure why it is enabled.  Could someone check on a fresh install if it is
enabled?


policycoreutils-restorecond is marked mandatory in the gnome-desktop comp 
groups.  I don't have it installed on my machines (KDE).





--
Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA, Boulder/CoRA Office FAX: 303-415-9702
3380 Mitchell Lane   or...@nwra.com
Boulder, CO 80301   http://www.nwra.com

Re: when startup delays become bugs

2013-05-16 Thread Chris Murphy

On May 16, 2013, at 7:10 AM, Daniel J Walsh dwa...@redhat.com wrote:
 
 
 12.715s restorecond.service
 I have no idea why restorecond would take this much time, and it really should
 not be enabled for most machines, with file name transitions.
 
 Not sure why it is enabled.  Could someone check on a fresh install if it is
 enabled?

That result was from a fresh install in a qemu-kvm VM.

Freshly installed (wipefs -a on the F19 partitions, then remove the partitions 
with fdisk, reinstall) on bare metal, and on the fifth reboot I get:

 17.495s restorecond.service



Chris Murphy


Re: when startup delays become bugs

2013-05-16 Thread Chris Murphy

These are the remaining items that I don't understand why they run at all, on 
every boot. 


There have been no crashes, so ext4 doesn't need fsck on every boot:

  4.051s systemd-fsck-root.service
   515ms systemd-fsck@dev-disk-by\x2duuid-09c66d01\x2d8126\x2d39c2\x2db7b8\x2d25f14cbd35af.service



and no oops, so this seems unnecessary:

  1.092s abrt-uefioops.service


and I'm not using LVM so these seem unnecessary:


  2.783s lvm2-monitor.service
   489ms systemd-udev-settle.service
15ms lvm2-lvmetad.service


How do I determine what component to file a bug against? I guess I have to find 
the package that caused these .service files to be installed?


Chris Murphy


Re: when startup delays become bugs

2013-05-16 Thread Adam Williamson
On Thu, 2013-05-16 at 12:20 -0600, Chris Murphy wrote:
 These are the remaining items that I don't understand why they run at all, on 
 every boot. 
 
 
 There have been no crashes, so ext4 doesn't need fsck on every boot:
 
   4.051s systemd-fsck-root.service
    515ms systemd-fsck@dev-disk-by\x2duuid-09c66d01\x2d8126\x2d39c2\x2db7b8\x2d25f14cbd35af.service
 
 
 
 and no oops, so this seems unnecessary:
 
   1.092s abrt-uefioops.service

So abrt-uefioops.service always runs (it doesn't have any systemd
conditionals), and it does this:

# Wait for abrtd to start. Give it at least 1 second to initialize.
i=10
while ! pidof abrtd >/dev/null; do
    if test $((i--)) = 0; then
        exit 1
    fi
    sleep 1
done
sleep 1

cd /sys/fs/pstore 2>/dev/null || exit 0

abrt-merge-uefioops -o * | abrt-dump-oops -D
if test $? = 0; then
    abrt-merge-uefioops -d *
fi

Now, the ordering there looks wonky to me. It waits for abrtd to start
and *then* checks if /sys/fs/pstore even exists. So I see various
possible improvements here.

1. Why is it waiting for abrtd to 'initialize' at all? That seems like a
messy hack. If abrtd.service is returning early, surely that should be
fixed, rather than putting ugly sleep loops in things that depend on it.

2. Move the 'wait for abrtd' stuff to under the 'cd /sys/fs/pstore
2>/dev/null || exit 0' line. Then at least if /sys/fs/pstore doesn't
exist at all, we can exit without waiting for a second.

3. Make the whole service conditional on there being anything
in /sys/fs/pstore at all. In practice, just doing this would resolve the
problems and make 1 and 2 unnecessary (though still possibly desirable).
To do that, just add this to the abrt-uefioops.service file:

ConditionDirectoryNotEmpty=/sys/fs/pstore
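One way to try that condition locally, without patching the packaged unit, is a drop-in override. A sketch only; the drop-in path and root access are assumptions:

```shell
# Sketch: add the condition through a local drop-in instead of editing
# the packaged unit file. Run as root; paths are assumptions.
mkdir -p /etc/systemd/system/abrt-uefioops.service.d
cat > /etc/systemd/system/abrt-uefioops.service.d/pstore.conf <<'EOF'
[Unit]
ConditionDirectoryNotEmpty=/sys/fs/pstore
EOF
systemctl daemon-reload
```

With the drop-in in place, the service is skipped entirely on machines where pstore is empty, which is the common case.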

 How do I determine what component to file a bug against? I guess I
 have to find the package that caused these .service files to be
 installed?

Yes. 'rpm -qf /lib/systemd/system/foo.service'.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-16 Thread Lennart Poettering
On Thu, 16.05.13 12:20, Chris Murphy (li...@colorremedies.com) wrote:

 There have been no crashes, so ext4 doesn't need fsck on every boot:
 
   4.051s systemd-fsck-root.service
    515ms systemd-fsck@dev-disk-by\x2duuid-09c66d01\x2d8126\x2d39c2\x2db7b8\x2d25f14cbd35af.service

Well, but only fsck itself knows that and can determine this from the
superblock. Hence we have to start it first and it will then exit
quickly if the fs wasn't dirty.

Note that these times might be misleading: if fsck takes this long to
check the superblock and exit this might be a result of something else
which runs in parallel monopolizing CPU or IO (for example readahead),
and might not actually be fsck's own fault.

 and no oops, so this seems unnecessary:
 
   1.092s abrt-uefioops.service

https://bugzilla.redhat.com/show_bug.cgi?id=963182

 and I'm not using LVM so these seem unnecessary:
 
 
   2.783s lvm2-monitor.service
489ms systemd-udev-settle.service
 15ms lvm2-lvmetad.service
 
 How do I determine what component to file a bug against? I guess I have to 
 find the package that caused these .service files to be installed?

$ repoquery --qf=%{sourcerpm} --whatprovides 
'*/lib/systemd/system/lvm2-monitor.service'
lvm2-2.02.98-8.fc19.src.rpm

Please file a bug against the lvm2 package. And make sure to add it to:

https://bugzilla.redhat.com/show_bug.cgi?id=963210

Hmm, on your machine, what does "systemctl show -p WantedBy -p
RequiredBy systemd-udev-settle.service" show? This will tell us which
package is actually responsible for pulling in
systemd-udev-settle.service.
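The two steps, the dependency query and the package lookup, can be chained; a sketch (the unit name and unit-file path are examples only):

```shell
# Sketch: list what pulls a unit in, then map each dependent unit file
# back to its owning package. Unit name and path are examples only.
unit=systemd-udev-settle.service
deps=$(systemctl show -p WantedBy -p RequiredBy "$unit" | cut -d= -f2)
for u in $deps; do
    rpm -qf "/usr/lib/systemd/system/$u"
done
```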

Thanks!

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-16 Thread John . Florian
 From: Adam Williamson awill...@redhat.com
  How do I determine what component to file a bug against? I guess I
  have to find the package that caused these .service files to be
  installed?
 
 Yes. 'rpm -qf /lib/systemd/system/foo.service'.

I'd actually suggest doing:

rpm -qif /lib/systemd/system/foo.service

... and noting the source rpm.  I've had some cases where the source rpm 
(bugzilla component) results in numerous binary rpms and sometimes the 
connections aren't so obvious at first.  For example, note the Name vs. 
the Source RPM with:

$ rpm -qif /usr/lib/python2.7/site-packages/imgcreate/fs.py
Name: python-imgcreate
Epoch   : 1
Version : 18.15
Release : 1.fc18
Architecture: x86_64
Install Date: Wed 24 Apr 2013 11:47:37 AM EDT
Group   : System Environment/Base
Size: 285483
License : GPLv2
Signature   : RSA/SHA256, Fri 05 Apr 2013 01:34:00 AM EDT, Key ID 
ff01125cde7f38bd
Source RPM  : livecd-tools-18.15-1.fc18.src.rpm
Build Date  : Wed 03 Apr 2013 01:55:56 PM EDT
Build Host  : buildvm-13.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager: Fedora Project
Vendor  : Fedora Project
URL : http://git.fedorahosted.org/git/livecd
Summary : Python modules for building system images
Description :
Python modules that can be used for building images for things
like live image or appliances.


--
John Florian

Re: when startup delays become bugs

2013-05-16 Thread Lennart Poettering
On Thu, 16.05.13 09:28, Orion Poplawski (or...@cora.nwra.com) wrote:

 On 05/16/2013 07:10 AM, Daniel J Walsh wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 05/14/2013 06:30 PM, Dan Williams wrote:
 On Tue, 2013-05-14 at 15:51 -0600, Chris Murphy wrote:
 12.715s restorecond.service
 I have no idea why restorecond would take this much time, and it really 
 should
 not be enabled for most machines, with file name transitions.
 
 Not sure why it is enabled.  Could someone check on a fresh install if it is
 enabled?
 
 policycoreutils-restorecond is marked mandatory in the gnome-desktop
 comp groups.  I don't have it installed on my machines (KDE).

I have now filed a bug about this:

https://bugzilla.redhat.com/show_bug.cgi?id=963919

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-16 Thread Adam Williamson
On Thu, 2013-05-16 at 14:44 -0400, john.flor...@dart.biz wrote:
  From: Adam Williamson awill...@redhat.com 
   How do I determine what component to file a bug against? I guess I
   have to find the package that caused these .service files to be
   installed?
  
  Yes. 'rpm -qf /lib/systemd/system/foo.service'.
 
 I'd actually suggest doing: 
 
 rpm -qif /lib/systemd/system/foo.service 
 
 ... and noting the source rpm.  I've had some cases where the source
 rpm (bugzilla component) results in numerous binary rpms and sometimes
 the connections aren't so obvious at first.  For example, note the
 Name vs. the Source RPM with: 

Yeah, I usually do that in two steps, but doing it in one seems more
logical. :)
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-16 Thread Chris Murphy

On May 16, 2013, at 12:39 PM, Lennart Poettering mzerq...@0pointer.de wrote:
 
 
 Hmm, on your machine, what does systemctl show -p WantedBy -p
 RequiredBy systemd-udev-settle.service show? This will tell us which
 package is actually responsible for pulling in
 systemd-udev-settle.service.

RequiredBy=
WantedBy=dmraid-activation.service

I'm not using dmraid, or md raid, or any kind of RAID at the moment. I also 
have this entry, previously explained in this thread as probably not being 
needed unless dmraid is in use, so it is the likely offender for udev-settle.

  2.823s dmraid-activation.service


And I missed one other I don't understand, which is a top 3 offender:

  9.855s accounts-daemon.service
# We pull this in by graphical.target instead of waiting for the bus
# activation, to speed things up a little: gdm uses this anyway so it is nice
# if it 

Fine, but 10 seconds seems like a lot of time. Or is it?


Chris Murphy

Re: when startup delays become bugs

2013-05-16 Thread Adam Williamson
On Thu, 2013-05-16 at 12:51 -0600, Chris Murphy wrote:

 And I missed one other I don't understand, which is a top 3 offender:
 
   9.855s accounts-daemon.service
 # We pull this in by graphical.target instead of waiting for the bus
 # activation, to speed things up a little: gdm uses this anyway so it is nice
 # if it 
 
 Fine, but 10 seconds seems like a lot of time, but is it?

Honestly, it does seem like *everything* seems to start up slow on your
F19 system, compared to F18. I wonder if there's some kind of bug
causing general slowness on your system?

On my desktop accounts-daemon.service takes 21ms, lvm2-monitor takes
70ms, lvm2-lvmetad 5ms, NetworkManager.service 454ms...the only thing
that takes more than 1s is abrt-uefioops.service, and I explained why
that was in my last mail.

This is a fast system with an SSD, but even so, it seems like
_everything_ is sluggish on your system. I think that's why someone
asked if you were on a debug kernel.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-16 Thread Lennart Poettering
On Thu, 16.05.13 12:51, Chris Murphy (li...@colorremedies.com) wrote:

 
 
 On May 16, 2013, at 12:39 PM, Lennart Poettering mzerq...@0pointer.de wrote:
  
  
  Hmm, on your machine, what does systemctl show -p WantedBy -p
  RequiredBy systemd-udev-settle.service show? This will tell us which
  package is actually responsible for pulling in
  systemd-udev-settle.service.
 
 RequiredBy=
 WantedBy=dmraid-activation.service
 
 I'm not using dmraid, or md raid, or any kind of raid at the moment. I also 
 have this entry, previously explained in this thread as probably not being 
 needed unless dmraid is being used, so is the likely offender for udev-settle.
 
   2.823s dmraid-activation.service

Given that this is really only needed for exotic stuff, I do wonder why
we even need to install it by default.

Good to know though that LVM isn't the primary reason why we are
slow. Now it's dmraid.

And thank you, LVM guys, for getting rid of the udev-settle dep. Much 
appreciated!

 And I missed one other I don't understand, which is a top 3 offender:
 
   9.855s accounts-daemon.service
 # We pull this in by graphical.target instead of waiting for the bus
 # activation, to speed things up a little: gdm uses this anyway so it is nice
 # if it 
 
 Fine, but 10 seconds seems like a lot of time, but is it?

Hmm, yeah, 10s is a lot, especially since this service is loaded kinda
late and shouldn't really do anything.

Matthias, any clue what this might be about? Does accounts-daemon do
anything on start-up that might take long? Network stuff?

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-16 Thread Adam Williamson
On Thu, 2013-05-16 at 11:35 -0700, Adam Williamson wrote:

 3. Make the whole service conditional on there being anything
 in /sys/fs/pstore at all. In practice, just doing this would resolve the
 problems and make 1 and 2 unnecessary (though still possibly desirable).
 To do that, just add this to the abrt-uefioops.service file:
 
 ConditionDirectoryNotEmpty=/sys/fs/pstore

I'll submit a patch for abrt with this change after I test it, working
on it ATM.
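In the meantime, anyone wanting to try the condition locally can use a drop-in
rather than editing the packaged unit file directly (a sketch; the drop-in
path and file name are illustrative):

```ini
# /etc/systemd/system/abrt-uefioops.service.d/pstore.conf
# Skip the service entirely when /sys/fs/pstore is empty or absent.
[Unit]
ConditionDirectoryNotEmpty=/sys/fs/pstore
```

Run `systemctl daemon-reload` afterwards so systemd picks the drop-in up.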
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-16 Thread Lennart Poettering
On Thu, 16.05.13 12:15, Adam Williamson (awill...@redhat.com) wrote:

 On Thu, 2013-05-16 at 11:35 -0700, Adam Williamson wrote:
 
  3. Make the whole service conditional on there being anything
  in /sys/fs/pstore at all. In practice, just doing this would resolve the
  problems and make 1 and 2 unnecessary (though still possibly desirable).
  To do that, just add this to the abrt-uefioops.service file:
  
  ConditionDirectoryNotEmpty=/sys/fs/pstore
 
 I'll submit a patch for abrt with this change after I test it, working
 on it ATM.

Please reference this bug for the patch! Thanks!

https://bugzilla.redhat.com/show_bug.cgi?id=963182

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-16 Thread Heiko Adams
My top 6 of extremely long-starting services are:
$ systemd-analyze blame
 27.652s NetworkManager.service
 27.072s chronyd.service
 27.015s avahi-daemon.service
 26.899s tuned.service
 26.647s restorecond.service
 23.512s lightdm.service

And I don't have a clue why the hell I've got an avahi-daemon.service
running on every startup and why it takes so extremely long to start.

$ systemctl show -p WantedBy -p RequiredBy avahi-daemon.service
RequiredBy=
WantedBy=multi-user.target
-- 
Regards

Heiko Adams

Re: when startup delays become bugs

2013-05-16 Thread Adam Williamson
On Thu, 2013-05-16 at 21:29 +0200, Heiko Adams wrote:
 My top 6 of extremely long-starting services are:
 $ systemd-analyze blame
  27.652s NetworkManager.service
  27.072s chronyd.service
  27.015s avahi-daemon.service
  26.899s tuned.service
  26.647s restorecond.service
  23.512s lightdm.service
 
 And I don't have a clue why the hell I've got an avahi-daemon.service
 running on every startup and why it takes so extremely long to start.

It's enabled by default because zeroconf is considered a useful thing to
have. If you don't want it, you can happily disable it, I think.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-16 Thread Chris Adams
Once upon a time, Heiko Adams heiko.ad...@gmail.com said:
 My top 6 of extremely long-starting services are:
 $ systemd-analyze blame
  27.652s NetworkManager.service
  27.072s chronyd.service
  27.015s avahi-daemon.service
  26.899s tuned.service
  26.647s restorecond.service
  23.512s lightdm.service

Some people seem to be seeing some extremely long startup times for some
services.  I can't see what would make some of these take that long.
I'm wondering:

- is there some common bug that is causing a major slowdown for some of
  these services?

- is there possibly a bug in how systemd is measuring and/or reporting
  these times?

Some of the services you list could be running into some type of network
timeout, but tuned? restorecond?  Those shouldn't be hitting the network
AFAIK.

Rather than focusing on individual services, it would seem to me like a
good idea to see if there is some underlying issue at work here.

-- 
Chris Adams li...@cmadams.net

Re: when startup delays become bugs

2013-05-16 Thread Heiko Adams
On 16.05.2013 21:41, Chris Adams wrote:
 Once upon a time, Heiko Adams heiko.ad...@gmail.com said:
 My top 6 of extremely long-starting services are:
 $ systemd-analyze blame
  27.652s NetworkManager.service
  27.072s chronyd.service
  27.015s avahi-daemon.service
  26.899s tuned.service
  26.647s restorecond.service
  23.512s lightdm.service
 
 Some people seem to be seeing some extremely long startup times for some
 services.  I can't see what would make some of these take that long.
 I'm wondering:
 
 - is there some common bug that is causing a major slowdown for some of
   these services?
 
 - is there possibly a bug in how systemd is measuring and/or reporting
   these times?
 
 Some of the services you list could be running into some type of network
 timeout, but tuned? restorecond?  Those shouldn't be hitting the network
 AFAIK.
 
 Rather than focusing on individual services, it would seem to me like a
 good idea to see if there is some underlying issue at work here.
 
For me the most annoying thing is the long delay of lightdm.service, which
causes a black screen for several seconds and could make you think
something went wrong.

But you're right, it looks like a more general problem.
-- 
Regards

Heiko Adams

Re: when startup delays become bugs

2013-05-16 Thread Chris Murphy

On May 16, 2013, at 12:57 PM, Adam Williamson awill...@redhat.com wrote:

 On Thu, 2013-05-16 at 12:51 -0600, Chris Murphy wrote:
 
 And I missed one other I don't understand, which is a top 3 offender:
 
  9.855s accounts-daemon.service
 # We pull this in by graphical.target instead of waiting for the bus
 # activation, to speed things up a little: gdm uses this anyway so it is nice
 # if it 
 
 Fine, but 10 seconds seems like a lot of time, but is it?
 
 Honestly, it does seem like *everything* seems to start up slow on your
 F19 system, compared to F18. I wonder if there's some kind of bug
 causing general slowness on your system?

No idea. Aside from the very high (5s to 1m) service times, I get really 
non-deterministic numbers for the other services. For example, over 10 reboots, 
colord.service is all over the map, with a high of 1.2 seconds and a low of 
22ms! That's a huge spread. And it's not the only one. sshd.service is 23ms in 
the current baremetal boot, but has been as high as 12s. Massive differences.



 This is a fast system with an SSD, but even so, it seems like
 _everything_ is sluggish on your system. I think that's why someone
 asked if you were on a debug kernel.

I'm not using a debug kernel. SSD makes a huge difference. Even Fedora 19 on 
VirtualBox on OS X on an SSD is faster than what I'm getting on bare metal 
with an HDD. By a lot.

[0.051627] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[0.717606] [Firmware Bug]: efi: Inconsistent initial sizes



Chris Murphy

Re: when startup delays become bugs

2013-05-16 Thread Chris Murphy

On May 16, 2013, at 1:41 PM, Chris Adams li...@cmadams.net wrote:

 Once upon a time, Heiko Adams heiko.ad...@gmail.com said:
 My top 6 of extremely long-starting services are:
 $ systemd-analyze blame
 27.652s NetworkManager.service
 27.072s chronyd.service
 27.015s avahi-daemon.service
 26.899s tuned.service
 26.647s restorecond.service
 23.512s lightdm.service
 
 Some people seem to be seeing some extremely long startup times for some
 services.  I can't see what would make some of these take that long.

I wonder if wireless connectivity is related? When I reboot with and without a 
wired connection attached, I get the same result. But maybe wireless is causing 
delays just by being configured, even if a wired connection is available?


 
 Rather than focusing on individual services, it would seem to me like a
 good idea to see if there is some underlying issue at work here.

That's why I'm not trying to focus on individual services beyond understanding 
what they do or what they need, to see if there's a pattern. My top suspect is 
network-related, because I get the weirdest changes just by changing the 
hostname and rebooting, which is baffling to me.


Chris Murphy

Re: when startup delays become bugs

2013-05-16 Thread Adam Williamson
On Thu, 2013-05-16 at 11:35 -0700, Adam Williamson wrote:

 2. Move the 'wait for abrtd' stuff to under the 'cd /sys/fs/pstore
 2>/dev/null || exit 0' line. Then at least if /sys/fs/pstore doesn't
 exist at all, we can exit without waiting for a second.
 
 3. Make the whole service conditional on there being anything
 in /sys/fs/pstore at all. In practice, just doing this would resolve the
 problems and make 1 and 2 unnecessary (though still possibly desirable).
 To do that, just add this to the abrt-uefioops.service file:
 
 ConditionDirectoryNotEmpty=/sys/fs/pstore

I've sent patches for both of these to the ABRT ML.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-16 Thread Chris Murphy

On May 16, 2013, at 1:51 PM, Chris Murphy li...@colorremedies.com wrote:
 
 I wonder if wireless connectivity is related? When I reboot with and without 
 a wired connection attached, I get the same result. But maybe wireless is 
 causing delays just by being configured, even if a wired connection is 
 available?

OK well this isn't right. On every reboot, wireless is activated and connected 
to. But the wired connection is disabled. I just removed the b43 wireless 
firmware files to prevent it from being involved, and on every wired reboot, 
Gnome has the wired connection set to off. I turn it on, I connect, I reboot, 
and it's off again. That's not how Live media works (it's on at every boot), 
and it isn't how F18 works either. So I don't know what's going on…

journalctl | grep p5p1
May 16 14:13:50 F19.localdomain systemd-udevd[187]: renamed network interface 
eth0 to p5p1
May 16 14:14:06 F19.localdomain NetworkManager[403]: info (p5p1): carrier is 
OFF
May 16 14:14:06 F19.localdomain NetworkManager[403]: info (p5p1): new 
Ethernet device (driver: 'sky2' ifindex: 2)
May 16 14:14:06 F19.localdomain NetworkManager[403]: info (p5p1): exported as 
/org/freedesktop/NetworkManager/Devices/0
May 16 14:14:06 F19.localdomain NetworkManager[403]: info (p5p1): device 
state change: unmanaged -> unavailable (reason 'managed') [10 20 2]
May 16 14:14:06 F19.localdomain NetworkManager[403]: info (p5p1): bringing up 
device.
May 16 14:14:06 F19.localdomain kernel: sky2 :0c:00.0 p5p1: enabling 
interface
May 16 14:14:06 F19.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): p5p1: link 
is not ready
May 16 14:14:06 F19.localdomain NetworkManager[403]: info (p5p1): preparing 
device.
May 16 14:14:06 F19.localdomain NetworkManager[403]: info (p5p1): 
deactivating device (reason 'managed') [2]
May 16 14:14:08 F19.localdomain kernel: sky2 :0c:00.0 p5p1: Link is up at 
100 Mbps, full duplex, flow control both
May 16 14:14:08 F19.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): p5p1: 
link becomes ready
May 16 14:14:08 F19.localdomain NetworkManager[403]: info (p5p1): carrier now 
ON (device state 20)
May 16 14:14:08 F19.localdomain NetworkManager[403]: info (p5p1): device 
state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40]
May 16 14:14:10 F19.localdomain avahi-daemon[308]: Registering new address 
record for fe80::21e:c2ff:fe1d:507e on p5p1.*.


This is where I manually slide the Wired setting from Off to On in Gnome:


May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
starting connection 'ens5'
May 16 14:14:36 F19.localdomain NetworkManager[403]: info (p5p1): device 
state change: disconnected -> prepare (reason 'none') [30 40 0]
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 1 of 5 (Device Prepare) scheduled...
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 1 of 5 (Device Prepare) started...
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 2 of 5 (Device Configure) scheduled...
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 1 of 5 (Device Prepare) complete.
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 2 of 5 (Device Configure) starting...
May 16 14:14:36 F19.localdomain NetworkManager[403]: info (p5p1): device 
state change: prepare -> config (reason 'none') [40 50 0]
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 2 of 5 (Device Configure) successful.
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 2 of 5 (Device Configure) complete.
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 3 of 5 (IP Configure Start) scheduled.
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 3 of 5 (IP Configure Start) started...
May 16 14:14:36 F19.localdomain NetworkManager[403]: info (p5p1): device 
state change: config -> ip-config (reason 'none') [50 70 0]
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Beginning DHCPv4 transaction (timeout in 45 seconds)
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Beginning IP6 addrconf.
May 16 14:14:36 F19.localdomain avahi-daemon[308]: Withdrawing address record 
for fe80::21e:c2ff:fe1d:507e on p5p1.
May 16 14:14:36 F19.localdomain NetworkManager[403]: info Activation (p5p1) 
Stage 3 of 5 (IP Configure Start) complete.
May 16 14:14:36 F19.localdomain NetworkManager[403]: info (p5p1): DHCPv4 
state changed nbi -> preinit
May 16 14:14:36 F19.localdomain dhclient[1526]: Listening on 
LPF/p5p1/00:1e:c2:1d:50:7e
May 16 14:14:36 F19.localdomain dhclient[1526]: Sending on   
LPF/p5p1/00:1e:c2:1d:50:7e
May 16 14:14:36 F19.localdomain dhclient[1526]: DHCPREQUEST on p5p1 to 
255.255.255.255 port 67 (xid=0x5cc85c0c)
May 16 14:14:36 F19.localdomain NetworkManager[403]: info (p5p1): DHCPv4 

Re: when startup delays become bugs

2013-05-16 Thread Bill Nottingham
Lennart Poettering (mzerq...@0pointer.de) said: 
  RequiredBy=
  WantedBy=dmraid-activation.service
  
  I'm not using dmraid, or md raid, or any kind of raid at the moment. I also 
  have this entry, previously explained in this thread as probably not being 
  needed unless dmraid is being used, so is the likely offender for 
  udev-settle.
  
2.823s dmraid-activation.service
 
 Given that this is really only needed for exotic stuff I do wonder why
 we need to install this by default even.

We *could* drop all the assorted local storage tools from @standard and just
leave them to be installed by anaconda if they're being used to create such
storage. They'd have to remain on the live image, though, and so this would
not help installs done from the live images.

Bill

Re: when startup delays become bugs

2013-05-16 Thread Adam Williamson
On Thu, 2013-05-16 at 16:17 -0400, Bill Nottingham wrote:
 Lennart Poettering (mzerq...@0pointer.de) said: 
   RequiredBy=
   WantedBy=dmraid-activation.service
   
   I'm not using dmraid, or md raid, or any kind of raid at the moment. I 
   also have this entry, previously explained in this thread as probably not 
   being needed unless dmraid is being used, so is the likely offender for 
   udev-settle.
   
 2.823s dmraid-activation.service
  
  Given that this is really only needed for exotic stuff I do wonder why
  we need to install this by default even.
 
 We *could* drop all the assorted local storage tools from @standard and just
 leave them to be installed by anaconda if they're being used to create such
 storage. They'd have to remain on the live image, though, and so this would
 not help installs done from the live images.

dmraid looks like it's only in the anaconda-tools group (and a critpath
group) in comps, not in standard. But there's a possibly problematic dep
chain here too:

initial-setup - anaconda - python-blivet - dmraid

so for non-GNOME systems, we're getting dmraid via that dep chain.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-16 Thread Jóhann B. Guðmundsson

On 05/16/2013 08:17 PM, Bill Nottingham wrote:

We *could* drop all the assorted local storage tools from @standard and just
leave them to be installed by anaconda if they're being used to create such
storage. They'd have to remain on the live image, though, and so this would
not help installs done from the live images.


Aren't the live images exactly the place where we don't want enterprise 
storage daemons?


JBG

Re: when startup delays become bugs

2013-05-16 Thread Adam Williamson
On Thu, 2013-05-16 at 20:41 +, Jóhann B. Guðmundsson wrote:
 On 05/16/2013 08:17 PM, Bill Nottingham wrote:
 
  We *could* drop all the assorted local storage tools from @standard and 
  just leave
  them to be installed by anaconda if they're being used to create such
  storage. They'd have to remain on the live image, though, and so this would
  not help installs done from the live images.
 
 Aren't the live images exactly the place where we don't want enterprise
 storage daemons?

dmraid is hardly enterprise, though. It's used for all firmware RAID
implementations aside from Intel's, and it's not unusual for any random
system to be using firmware RAID.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-16 Thread Chris Murphy

On May 16, 2013, at 1:51 PM, Chris Murphy li...@colorremedies.com wrote:

 But maybe wireless is causing delays just by being configured, even if a 
 wired connection is available?

Not really. The one improvement is in NetworkManager-wait-online.service, which 
with a wired connection instead of wireless takes 1/10th of the time. So at the 
moment I'm still no further along in finding the overall cause of slowness.

 16.656s plymouth-quit-wait.service
 13.750s accounts-daemon.service
 12.924s avahi-daemon.service
 12.852s chronyd.service
 12.724s firewalld.service


Chris Murphy


Re: when startup delays become bugs

2013-05-16 Thread Lennart Poettering
On Thu, 16.05.13 14:41, Chris Adams (li...@cmadams.net) wrote:

 Once upon a time, Heiko Adams heiko.ad...@gmail.com said:
  My top 6 of extremely long-starting services are:
  $ systemd-analyze blame
   27.652s NetworkManager.service
   27.072s chronyd.service
   27.015s avahi-daemon.service
   26.899s tuned.service
   26.647s restorecond.service
   23.512s lightdm.service
 
 Some people seem to be seeing some extremely long startup times for some
 services.  I can't see what would make some of these take that long.
 I'm wondering:
 
 - is there some common bug that is causing a major slowdown for some of
   these services?
 
 - is there possibly a bug in how systemd is measuring and/or reporting
   these times?
 
 Some of the services you list could be running into some type of network
 timeout, but tuned? restorecond?  Those shouldn't be hitting the network
 AFAIK.
 
 Rather than focusing on individual services, it would seem to me like a
 good idea to see if there is some underlying issue at work here.

So, the blame chart should not be misunderstood. It simply tells you how
much time passed between the time systemd forked off the process until
it completed initialization and told systemd about it. Now, within that
time there might be many things happening and the service might simply
wait for some resource to become available rather than being slow on its
own. That resource could be the CPU or IO or some other service or
device. For example, readahead might monopolize IO for some time during
boot, so that other processes get starved. These dependencies and
resource constraints are not visible in systemd-analyze blame.

If you want to track this down, try systemd-bootchart. Simply boot with
init=/usr/lib/systemd/systemd-bootchart, see systemd-bootchart(1) for
details. It will plot your CPU and IO consumption and can show you
resource usage by process, which is usually a more useful tool than
a simple systemd-analyze blame.
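One practical wrinkle when comparing blame output across boots (as several
people do in this thread) is that the times mix units, e.g. `987ms` next to
`2.745s`. A minimal normalization sketch, assuming only the plain `s`/`ms`
forms appear (entries like `1min 30.5s` would need another branch):

```shell
# Normalize `systemd-analyze blame`-style times to integer milliseconds so
# mixed-unit entries can be compared or sorted numerically.
to_ms() {
  awk '{
    t = $1; svc = $2
    if (t ~ /ms$/)     { sub(/ms$/, "", t); ms = t + 0 }     # already ms
    else if (t ~ /s$/) { sub(/s$/,  "", t); ms = t * 1000 }  # seconds -> ms
    printf "%.0f %s\n", ms, svc
  }'
}

# Example with two entries quoted elsewhere in the thread:
printf '2.745s plymouth-quit-wait.service\n987ms avahi-daemon.service\n' | to_ms
# prints:
# 2745 plymouth-quit-wait.service
# 987 avahi-daemon.service
```

Piping the full blame output through `to_ms | sort -rn` gives one comparable
ranking per boot.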

For the super slow run above I'd be quite interested to have a look at
the bootchart actually. (Heiko? Can you upload that?)

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-16 Thread Lennart Poettering
On Thu, 16.05.13 13:44, Adam Williamson (awill...@redhat.com) wrote:

 On Thu, 2013-05-16 at 20:41 +, Jóhann B. Guðmundsson wrote:
  On 05/16/2013 08:17 PM, Bill Nottingham wrote:
  
   We *could* drop all the assorted local storage tools from @standard and 
   just leave
   them to be installed by anaconda if they're being used to create such
   storage. They'd have to remain on the live image, though, and so this would
   not help installs done from the live images.
  
  Aren't the live images exactly the place where we don't want enterprise
  storage daemons?
 
 dmraid is hardly enterprise, though. It's used for all firmware RAID
 implementations aside from Intel's, and it's not unusual for any random
 system to be using firmware RAID.

As I understand it, non-Intel fakeraid is actually pretty much the
exception...

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-16 Thread Lennart Poettering
On Thu, 16.05.13 16:17, Bill Nottingham (nott...@redhat.com) wrote:

 Lennart Poettering (mzerq...@0pointer.de) said: 
   RequiredBy=
   WantedBy=dmraid-activation.service
   
   I'm not using dmraid, or md raid, or any kind of raid at the moment. I 
   also have this entry, previously explained in this thread as probably not 
   being needed unless dmraid is being used, so is the likely offender for 
   udev-settle.
   
 2.823s dmraid-activation.service
  
  Given that this is really only needed for exotic stuff I do wonder why
  we need to install this by default even.
 
 We *could* drop all the assorted local storage tools from @standard and just 
 leave
 them to be installed by anaconda if they're being used to create such
 storage. They'd have to remain on the live image, though, and so this would
 not help installs done from the live images.

Does anaconda have infrastructure for this?

I'd certainly prefer if this could be done that way, even if it doesn't
help livecd installs...

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-16 Thread Adam Williamson
On Fri, 2013-05-17 at 01:21 +0200, Lennart Poettering wrote:
 On Thu, 16.05.13 13:44, Adam Williamson (awill...@redhat.com) wrote:
 
  On Thu, 2013-05-16 at 20:41 +, Jóhann B. Guðmundsson wrote:
   On 05/16/2013 08:17 PM, Bill Nottingham wrote:
   
We *could* drop all the assorted local storage tools from @standard and 
just leave
them to be installed by anaconda if they're being used to create such
storage. They'd have to remain on the live image, though, and so this would
not help installs done from the live images.
   
   Aren't the live images exactly the place where we don't want enterprise
   storage daemons?
  
  dmraid is hardly enterprise, though. It's used for all firmware RAID
  implementations aside from Intel's, and it's not unusual for any random
  system to be using firmware RAID.
 
 As I understood this non-intel fakeraid is actually pretty much the
 exception...

Intel firmware RAID is becoming more popular, yeah, simply because
Intel-based motherboards are becoming more popular; but there are still
plenty of others out there.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-16 Thread Heiko Adams
On 17.05.2013 01:09, Lennart Poettering wrote:
 
 For the super slow run above I'd be quite interested to have a look at
 the bootchart actually. (Heiko? Can you upload that?)
 
 Lennart
 
https://www.dropbox.com/s/95dzgklajrgaz4n/bootchart-20130517-0756.svg

BTW: Is it correct that /dev/mapper/VolGroup-lv_home can't be found with
systemd-bootchart?
-- 
Regards

Heiko Adams

Re: when startup delays become bugs

2013-05-15 Thread Alexey Torkhov
2013/5/15 Chris Adams li...@cmadams.net:
 Once upon a time, Chris Murphy li...@colorremedies.com said:
 The only things related to networking I change is setting a hostname from 
 localhost to f18s, f19s or f19q; and occasionally adding the b43 firmware if 
 I decide to go wireless.

 Do you add the changed hostname to /etc/hosts?  If not, sendmail (and
 some other things) will stop for some time trying to resolve the local
 hostname to an IP address.

Why doesn't nss_myhostname resolve the local hostname even when it is not in
/etc/hosts?
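nss-myhostname does resolve the local hostname when it is listed on the
`hosts` line of nsswitch.conf; whether it is enabled there by default is a
distribution decision. For reference, the two ways to avoid the resolver
stall look roughly like this (the hostname follows the thread; both snippets
are sketches, not the shipped defaults):

```
# /etc/hosts -- map the static hostname to loopback:
127.0.0.1   localhost f19q.localdomain f19q

# /etc/nsswitch.conf -- or let nss-myhostname answer as a fallback:
hosts: files myhostname dns
```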


Alexey

Re: when startup delays become bugs

2013-05-15 Thread Michal Schmidt

On 05/15/2013 12:56 AM, Jeffrey Bastian wrote:

The slowest component on F19 is this new wait service:
  10.102s NetworkManager-wait-online.service

This service doesn't seem to do anything other than wait until the
network is fully up.  This really skews the boot time reported by
systemd-analyze unfavorably.  But if I mask that service and reboot,
everything still appears to work fine and systemd-analyze reports a boot
time of only 6.2 seconds.


A bit OT...

This is not the first time in this thread that someone mentions masking 
a service. Do you guys have a reason why you mask the service instead of 
simply disabling it? Is there anything pulling the service into your 
boot even when it's disabled?
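For context on the difference: `systemctl disable` only removes the
[Install]-section symlinks, so a dependency or socket/bus activation can
still pull the unit in, while `systemctl mask` points the unit name at
/dev/null so it cannot be started at all. A sketch simulating in a temporary
directory what mask does on disk (illustrative only; it does not touch any
real unit):

```shell
# `systemctl mask foo.service` effectively does this under
# /etc/systemd/system: the unit name becomes a symlink to /dev/null,
# so systemd loads an empty unit and refuses to start it.
unitdir=$(mktemp -d)   # stand-in for /etc/systemd/system
ln -s /dev/null "$unitdir/NetworkManager-wait-online.service"
readlink "$unitdir/NetworkManager-wait-online.service"
# prints: /dev/null
```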


Thanks,
Michal


Re: when startup delays become bugs

2013-05-15 Thread Paolo Bonzini
On 15/05/2013 05:43, Adam Williamson wrote:
 On Tue, 2013-05-14 at 15:51 -0600, Chris Murphy wrote:
 
 But firewalld goes from 7 seconds to 18 seconds? Why? avahi-daemon,
 restorecond, all are an order of magnitude longer on F19 than F18.
 It's a 3+ minute userspace hit on the startup time where the kernel
 takes 1.9 seconds. Off hand this doesn't seem reasonable, especially
 sendmail. If the time can't be brought down by a lot, can it ship
 disabled by default?
 
 FWIW, I found your results here interesting, so I did a little test of
 my own. I did default DVD installs of F17, F18 and F19 Beta TC4, ran
 through firstboot, rebooted, then rebooted again and ran systemd-analyze
 (to let prelink kick in). Results are that F18's slightly slower than
 F17 and F19 is somewhat slower again. My numbers are way way faster than
 your numbers overall, though; VMs do seem to perform very quickly on
 this box for whatever reason.

Can you include here the QEMU command line or libvirt XML definition?

Unless you have something like cache=none or cache='none' in it
(respectively for QEMU and libvirt), chances are that you're using the
host memory as a very fast and large disk cache for your VM. :)
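For reference, the cache mode lives on the disk's `<driver>` element in the
libvirt domain XML (a minimal sketch; the image path and target device are
illustrative):

```xml
<disk type='file' device='disk'>
  <!-- cache='none' opens the image with O_DIRECT, bypassing the host
       page cache, so guest disk numbers resemble bare metal -->
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/f19.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```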

(VMs tend to be fast in the initramfs anyway, because they do not have
much hardware to initialize, especially graphics cards).

Paolo

 F17
 ---
 
 Startup finished in 493ms (kernel) + 794ms (initramfs) + 2751ms
 (userspace) = 4039ms
 
446ms udev-settle.service
345ms NetworkManager.service
268ms systemd-logind.service
266ms ip6tables.service
262ms avahi-daemon.service
261ms iptables.service
173ms mcelog.service
170ms nfs-lock.service
145ms udev-trigger.service
136ms udev.service
135ms abrt-ccpp.service
122ms spice-vdagentd.service
117ms sendmail.service
114ms sm-client.service
110ms media.mount
106ms sys-kernel-debug.mount
105ms fedora-loadmodules.service
105ms dev-hugepages.mount
103ms dev-mqueue.mount
100ms sys-kernel-security.mount
 98ms rsyslog.service
 91ms remount-rootfs.service
 90ms dbus.service
 86ms sys-kernel-config.mount
 84ms systemd-vconsole-setup.service
 84ms acpid.service
 77ms boot.mount
 62ms systemd-tmpfiles-setup.service
 56ms abrt-vmcore.service
 53ms fedora-storage-init.service
 53ms systemd-user-sessions.service
 49ms sshd.service
 47ms auditd.service
 44ms systemd-remount-api-vfs.service
 41ms colord.service
 41ms systemd-sysctl.service
 38ms bluetooth.service
 32ms fedora-storage-init-late.service
 28ms fedora-readonly.service
 28ms lvm2-monitor.service
 27ms systemd-readahead-collect.service
 25ms colord-sane.service
 18ms udisks2.service
 18ms mdmonitor-takeover.service
 13ms upower.service
 13ms accounts-daemon.service
 11ms rtkit-daemon.service
 10ms rpcbind.service
  9ms fedora-wait-storage.service
 
 F18
 ---
 
 Startup finished in 521ms (kernel) + 616ms (initramfs) + 3348ms
 (userspace) = 4485ms
 
742ms iscsid.service
607ms firewalld.service
460ms systemd-udev-settle.service
372ms chronyd.service
369ms restorecond.service
321ms gdm.service
292ms abrt-ccpp.service
279ms ksmtuned.service
231ms accounts-daemon.service
208ms spice-vdagentd.service
208ms auditd.service
182ms systemd-logind.service
174ms avahi-daemon.service
167ms rtkit-daemon.service
163ms sm-client.service
116ms fedora-readonly.service
110ms fedora-loadmodules.service
109ms systemd-udev-trigger.service
104ms NetworkManager.service
101ms mcelog.service
 95ms systemd-udevd.service
 87ms sendmail.service
 84ms sys-kernel-debug.mount
 82ms dev-hugepages.mount
 82ms dev-mqueue.mount
 79ms iscsi.service
 74ms systemd-remount-fs.service
 64ms sys-kernel-config.mount
 56ms systemd-vconsole-setup.service
 52ms colord.service
 42ms fedora-storage-init.service
 41ms systemd-user-sessions.service
 35ms udisks2.service
 34ms ksm.service
 34ms polkit.service
 29ms systemd-tmpfiles-setup.service
 28ms rpcbind.service
 26ms bluetooth.service
 24ms fedora-storage-init-late.service
 23ms sshd.service
 16ms abrt-vmcore.service
 15ms upower.service
 15ms lvm2-monitor.service
 15ms systemd-sysctl.service
 13ms systemd-modules-load.service
 13ms mdmonitor-takeover.service
 10ms boot.mount
  4ms tmp.mount
 
 F19
 ---
 
 Startup finished in 411ms (kernel) + 745ms (initrd) + 4.704s (userspace)
 = 5.861s
 
   2.745s plymouth-quit-wait.service
   2.389s NetworkManager-wait-online.service
   1.078s accounts-daemon.service
   1.026s firewalld.service
   1.007s restorecond.service
987ms avahi-daemon.service
479ms iprupdate.service
385ms iprinit.service
356ms systemd-udev-settle.service
  

Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Tue, 14.05.13 17:58, Chris Murphy (li...@colorremedies.com) wrote:

 Static hostname: f19q.localdomain
 Takes 1.5 minutes before I can ssh into the VM, but the
 sendmail.service is now about 2 seconds instead of a minute.

Is this a non-debug kernel?

 [root@f19q ~]# systemd-analyze blame
  51.688s firewalld.service
  50.696s restorecond.service
  16.989s avahi-daemon.service
  16.023s systemd-logind.service
  11.908s NetworkManager-wait-online.service

With the recommendations from

https://bugzilla.redhat.com/show_bug.cgi?id=787314#c37

NM-w-o.s would become a service that is only pulled in when actually
needed. It would be good to get this included in F19, as it is
completely bogus to delay the boot for everybody just because a few
people happen to have static NFS mounts listed in fstab.

BTW, systemd-analyze critical-chain (possibly with --fuzz=500ms) is a
new tool in F19, which allows a bit more insight into which units
actually gated the boot.
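For scripting on top of these numbers, here is a minimal sketch of
filtering `systemd-analyze blame`-style output. The sample data is
inlined so the script is self-contained; the 1-second threshold is an
arbitrary choice, and `min`-suffixed entries are not handled.

```shell
#!/bin/sh
# Sample of `systemd-analyze blame` output; in practice this would be
# captured from the real command.
blame_output='51.688s firewalld.service
50.696s restorecond.service
135ms abrt-ccpp.service'

# Print the units that took longer than 1 second to start.
# "ms"-suffixed times are taken as-is; "s"-suffixed times are
# converted to milliseconds. ("1min ..." entries are not handled.)
slow_services() {
  printf '%s\n' "$blame_output" | awk '
    { ms = ($1 ~ /ms$/) ? $1 + 0 : $1 * 1000 }
    ms > 1000 { print $2 }'
}

slow_services
```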

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel

Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Tue, 14.05.13 17:30, Dan Williams (d...@redhat.com) wrote:

 On Tue, 2013-05-14 at 15:51 -0600, Chris Murphy wrote:
  This is not intended to be snarky, but I admit it could sound like it is.  
  When are long startup times for services considered to be bugs in their own 
  right?
  
  
  [root@f19q ~]# systemd-analyze blame
1min 444ms sm-client.service
1min 310ms sendmail.service
  18.602s firewalld.service
   13.882s avahi-daemon.service
   12.944s NetworkManager-wait-online.service
 
 Is anything waiting on NetworkManager-wait-online in your install?  That
 target is really intended for servers where you want to block Apache or
 Sendmail or Database from starting until you're sure your networking is
 fully configured.  If you don't have any services that require a network
 to be up, then you can mask NetworkManager-wait-online and things will
 be more parallel.

Dan, could you please implement these recommendations for F19:

https://bugzilla.redhat.com/show_bug.cgi?id=787314#c37

This really shouldn't be pulled in unconditionally...

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Tue, 14.05.13 15:51, Chris Murphy (li...@colorremedies.com) wrote:

   2.911s abrt-uefioops.service

I don't really understand why abrt needs to run at all by default. It
should be spawned automatically when an actual crash happens, so that
by default nobody has to have this service running.

Abrt guys, any chance you can make use of socket/bus/path activation so
that abrt is auto-spawned when needed and not before?

You don't even need to patch abrt itself for that (dropping in unit
files is sufficient) if it needs only bus or path activation. Only
socket activation needs patching.
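For the path-activation case, a drop-in unit pair could look roughly
like the following. This is a hedged sketch: the unit names, the
watched directory, and the ExecStart helper are illustrative, not
abrt's actual layout.

```ini
# abrt-collect.path (illustrative) -- starts the service only once a
# crash dump actually lands in the spool directory.
[Unit]
Description=Watch for new crash dumps

[Path]
DirectoryNotEmpty=/var/spool/abrt

[Install]
WantedBy=multi-user.target

# abrt-collect.service (illustrative) -- runs on demand, then exits.
[Unit]
Description=Process queued crash dumps

[Service]
Type=oneshot
ExecStart=/usr/libexec/abrt-process-dumps
```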

I filed this now, to keep track of this:

https://bugzilla.redhat.com/show_bug.cgi?id=963184

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Tue, 14.05.13 20:43, Adam Williamson (awill...@redhat.com) wrote:

Let me take the opportunity to dissect some of this:

 Startup finished in 411ms (kernel) + 745ms (initrd) + 4.704s (userspace)
 = 5.861s
 
   2.745s plymouth-quit-wait.service

This one is misleading, as this just makes sure the bootsplash is
terminated when start-up is complete.

   2.389s NetworkManager-wait-online.service

https://bugzilla.redhat.com/show_bug.cgi?id=787314#c37

479ms iprupdate.service
385ms iprinit.service

These appear to be units that are only necessary for IBM RAID. It's
hugely disappointing if this is pulled into all boots. I wonder if we
can find a different logic for this, for example pulling it in from a
udev rule or so. Or by changing anaconda to install this only on IBM
RAID systems... Does anyone know what this is about?

Also, I don't see either of these listed in the default preset
file. This really shouldn't be started by default.

356ms systemd-udev-settle.service

This is exclusively LVM's fault. There's really no need to ever have
that in the boot unless you run LVM.

On many machines LVM is the primary reason why things are slow. I can
only recommend everybody to not install on LVM if you can. It's a real
shame that LVM hasn't gotten its act together still, after all these
years.

A service that needs systemd-udev-settle is a broken service! LVM, you
are broken!

I am hoping for the day LVM gets kicked out of the default install. Oh
btrfs, why are you still not around?

297ms ksmtuned.service

I thought the kernel did this without user interaction these days?

236ms spice-vdagentd.service

I don't see why this is on by default. According to this bug report this
should have been pulled in on-demand only:

https://bugzilla.redhat.com/show_bug.cgi?id=876237

233ms abrt-ccpp.service

https://bugzilla.redhat.com/show_bug.cgi?id=963184

160ms fedora-loadmodules.service

Somebody should file bugs against all packages that still drop something
into /etc/sysconfig/modules/.

I see that kvm and bluez do this still. I filed these bugs now:

https://bugzilla.redhat.com/show_bug.cgi?id=963193
https://bugzilla.redhat.com/show_bug.cgi?id=963198

138ms sm-client.service

Oh my, sendmail. What an embarrassment that we still ship this enabled
by default... If we could at least turn it into something that is
activated on-demand. But I guess touching these sources to implement
that would be too awful...

112ms iscsi.service

This really sounds like something that should be socket-activated on
demand rather than run by default.

 97ms sshd.service

Ditto. This is something to start by default only on hosts where a ton
of people log in all the time.


 80ms isdn.service

In very new systemd 'isdn.service' is not enabled any more in the preset
file, so this is already gone now.

 79ms systemd-modules-load.service

Ah, cute, on my machine the only reason this is pulled in is
/etc/modules-load.d/spice-vdagentd.conf -- which also pulls in our old
friend uinput (see above).

https://bugzilla.redhat.com/show_bug.cgi?id=963201

 78ms iprdump.service

IBM RAID again? Jeez...

 64ms sendmail.service

Urks...

 56ms systemd-localed.service

Hmm, this one is weird... It should only be loaded when needed, so I am
surprised that there's something that needs this during boot-up.

 54ms ksm.service

I thought the kernel would do this internally these days without
userspace interaction.

 54ms abrt-vmcore.service

https://bugzilla.redhat.com/show_bug.cgi?id=963184

 45ms rpcbind.service

https://bugzilla.redhat.com/show_bug.cgi?id=963189

 33ms dmraid-activation.service

I wonder if we can change this to pull this in only if DMRAID is
actually used... Sounds wrong to spawn this unconditionally...

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Wed, 15.05.13 12:07, Alexey Torkhov (atork...@gmail.com) wrote:

 2013/5/15 Chris Adams li...@cmadams.net:
  Once upon a time, Chris Murphy li...@colorremedies.com said:
  The only things related to networking I change is setting a hostname from 
  localhost to f18s, f19s or f19q; and occasionally adding the b43 firmware 
  if I decide to go wireless.
 
  Do you add the changed hostname to /etc/hosts?  If not, sendmail (and
  some other things) will stop for some time trying to resolve the local
  hostname to an IP address.
 
 Why doesn't nss_myhostname resolve local hostname even if it is not in
 /etc/hosts?

My suspicion is that sendmail uses res_query() and friends since it
needs MX records and such... This means NSS is bypassed, and hence
nss_myhostname and /etc/hosts won't take effect anyway...
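One way to see the NSS side of this distinction (a sketch; exact output
depends on the local configuration):

```shell
#!/bin/sh
# `getent hosts` resolves through NSS (nsswitch.conf), the same path
# nss_myhostname plugs into. A res_query()-based lookup like
# sendmail's bypasses NSS entirely, which is why the two can disagree
# for the local hostname.
getent hosts localhost
```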

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-15 Thread Richard W.M. Jones
On Wed, May 15, 2013 at 01:36:57PM +0200, Lennart Poettering wrote:
 On Tue, 14.05.13 20:43, Adam Williamson (awill...@redhat.com) wrote:
 356ms systemd-udev-settle.service
 
 This is exclusively LVM's fault. There's really no need to ever have that
 in the boot unless you run LVM. 
 
 On many machines LVM is the primary reason why things are slow. I can
 only recommend everybody to not install on LVM if you can. It's a real
 shame that LVM hasn't gotten its act together still, after all these
 years. 

Is there a constructive summary anywhere of what LVM needs
to do / change?  This problem affects libguestfs too.

 I am hoping for the day LVM gets kicked out of the default install. Oh
 btrfs, why are you still not around?

I Want To Believe in btrfs, but unfortunately it's still excessively
buggy.  It's actually got worse in Rawhide recently -- it reliably
fails in mkfs.btrfs :-(  I stopped bothering filing bugs about this
because they just get ignored because Rawhide isn't new enough (they
want us to retest everything on some non-upstream btrfs-next kernel).
My colleague ran btrfs on his laptop and had daily crashes.  Last week
he went back to ext4.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org

Re: when startup delays become bugs

2013-05-15 Thread Igor Gnatenko
On Wed, 2013-05-15 at 13:36 +0200, Lennart Poettering wrote:
 snip -- full quote of Lennart's per-service dissection above

Lennart, good work.
I've CC'ed myself on the bugs I'm interested in.

-- 
Best Regards,
Igor Gnatenko


Re: when startup delays become bugs

2013-05-15 Thread Tomasz Torcz
On Wed, May 15, 2013 at 01:36:57PM +0200, Lennart Poettering wrote:
 112ms iscsi.service
 
 This really sounds like something that should be socket-activated on
 demand rather than run by default.

  There are a few parts of the iSCSI stack in Fedora. This iscsi.service
brings up all previously configured targets. Nevertheless, it should only
be pulled in when some targets have actually been discovered:

  https://bugzilla.redhat.com/show_bug.cgi?id=951951#c3

  The other part of the stack, iscsid.service, is already socket-activatable
(at least upstream since Mike pulled my patches; in Fedora since 6.2.0.873-4).
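The socket-activation pattern mentioned here looks roughly like the
following. This is an illustrative sketch, not the actual Fedora unit;
in particular the socket path is an assumption.

```ini
# iscsid.socket (illustrative sketch): systemd listens on the control
# socket and starts iscsid only on the first connection.
[Unit]
Description=Open-iSCSI iscsid activation socket

[Socket]
ListenStream=/run/iscsid.socket

[Install]
WantedBy=sockets.target
```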

-- 
Tomasz Torcz (xmpp: zdzich...@chrome.pl)
,,If you try to upissue this patchset I shall be seeking
an IP-routable hand grenade.'' -- Andrew Morton (LKML)


Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Wed, 15.05.13 13:36, Lennart Poettering (mzerq...@0pointer.de) wrote:

I have now created a tracker bug WhyWeBootSoSlow to track all these
little issues that cause things to be in our boot-up sequence that
really shouldn't be there...

https://bugzilla.redhat.com/show_bug.cgi?id=963210

Would be fantastic if we could find some volunteers to keep an eye on
this and put some pressure behind this!

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Wed, 15.05.13 12:55, Richard W.M. Jones (rjo...@redhat.com) wrote:

 On Wed, May 15, 2013 at 01:36:57PM +0200, Lennart Poettering wrote:
  On Tue, 14.05.13 20:43, Adam Williamson (awill...@redhat.com) wrote:
  356ms systemd-udev-settle.service
  
  This is exlusively LVM's fault. There's really no need to ever have that
  in the boot unless you run LVM. 
  
  On many machines LVM is the primary reason why things are slow. I can
  only recommend everybody to not install on LVM if you can. It's a real
  shame that LVM hasn't gotten their things together still, after all
  those years. 
 
 Is there a constructive summary anywhere of what LVM needs
 to do / change?  This problem affects libguestfs too.

It should subscribe to block devices coming and going and then assemble
things as they appear. Right now it expects to be called at a point in
time where everything that might show up has shown up. Of course, that's
not how modern hardware works (everything is hotpluggable now; hardware
gives no guarantees about when devices show up, and we have no idea what
might still come), and hence it requires pulling in udev-settle. That
forces us to delay the boot in order to keep the promise to LVM that
"everything has shown up now", a promise we cannot actually keep, since
we don't know whether something is missing...

LVM should subscribe to udev events like any other daemon, and assemble
things immediately as soon as all the devices it needs have shown up.
This would guarantee minimal boot times, since we would wait exactly for
the devices that are actually needed, and that's it. We wouldn't have to
wait for devices that nobody actually needs and that might never appear.

If it wasn't for LVM we'd have removed the udev-settle functionality
from udev already, since it's just broken and slows things down too.

LVM of course has been broken like this for 5 years, and they know
that. And they did nothing about it. And that's so disappointing...

(Of course, btrfs got assembly right from day 1: it will assemble disks
exclusively via udev events and as soon as a device is complete it is
made available to the OS. The btrfs folks understood how modern hardware
and storage technology works, the LVM folks not so much I fear...)

  I am hoping for the day LVM gets kicked out of the default install. Oh
  btrfs, why are you still not around?
 
 I Want To Believe in btrfs, but unfortunately it's still excessively
 buggy.  It's actually got worse in Rawhide recently -- it reliably
 fails in mkfs.btrfs :-(  I stopped bothering filing bugs about this
 because they just get ignored because Rawhide isn't new enough (they
 want us to retest everything on some non-upstream btrfs-next kernel).
 My colleague ran btrfs on his laptop and had daily crashes.  Last week
 he went back to ext4.

Well, it works fine for myself and for quite a few other folks I
know. I am pretty sure it's time to grow some, make this the default in
Fedora, and then fix everything that pops up quickly, with the necessary
pressure. As it appears, not having any pressure to stabilize btrfs
certainly doesn't work at all for the project...

Lennart

PS: I am very good at bitching about LVM. You can book me for your party
at very low rates!

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-15 Thread Michal Schmidt

On 05/15/2013 02:25 PM, Lennart Poettering wrote:

On Wed, 15.05.13 12:55, Richard W.M. Jones (rjo...@redhat.com) wrote:

Is there a constructive summary anywhere of what LVM needs
to do / change?  This problem affects libguestfs too.


It should subscribe to block devices coming and going and then assemble
things as stuff appears.


They have added the lvmetad daemon which does this. And it seems to be 
used by default on my test F19 installation. I.e. I am not seeing 
udev-settle being pulled in anymore.


Michal


Re: when startup delays become bugs

2013-05-15 Thread Dan Williams
On Wed, 2013-05-15 at 12:52 +0200, Lennart Poettering wrote:
 On Tue, 14.05.13 17:30, Dan Williams (d...@redhat.com) wrote:
 
  On Tue, 2013-05-14 at 15:51 -0600, Chris Murphy wrote:
   This is not intended to be snarky, but I admit it could sound like it is. 
When are long startup times for services considered to be bugs in their 
   own right?
   
   
   [root@f19q ~]# systemd-analyze blame
 1min 444ms sm-client.service
 1min 310ms sendmail.service
   18.602s firewalld.service
13.882s avahi-daemon.service
12.944s NetworkManager-wait-online.service
  
  Is anything waiting on NetworkManager-wait-online in your install?  That
  target is really intended for servers where you want to block Apache or
  Sendmail or Database from starting until you're sure your networking is
  fully configured.  If you don't have any services that require a network
  to be up, then you can mask NetworkManager-wait-online and things will
  be more parallel.
 
 Dan, could you please implement these recommendations for F19:
 
 https://bugzilla.redhat.com/show_bug.cgi?id=787314#c37
 
 This really shouldn't be pulled in unconditionally...

The changes you suggest there seem fine; one question about
After=syslog.target being removed though: intentional?  I pretty much
just take your guidance on the .service files.

Dan


Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Wed, 15.05.13 07:45, Dan Williams (d...@redhat.com) wrote:

   Is anything waiting on NetworkManager-wait-online in your install?  That
   target is really intended for servers where you want to block Apache or
   Sendmail or Database from starting until you're sure your networking is
   fully configured.  If you don't have any services that require a network
   to be up, then you can mask NetworkManager-wait-online and things will
   be more parallel.
  
  Dan, could you please implement these recommendations for F19:
  
  https://bugzilla.redhat.com/show_bug.cgi?id=787314#c37
  
  This really shouldn't be pulled in unconditionally...
 
 The changes you suggest there seem fine; one question about
 After=syslog.target being removed though: intentional?  I pretty much
 just take your guidance on the .service files.

After=syslog.target is unnecessary these days and should be removed from
all unit files, to keep things simple.

We nowadays require syslog implementations to be socket activatable, and
that socket is around before normal services start, and that's
guaranteed, hence nobody has to depend on syslog explicitly anymore. All
services will have logging available anyway, out-of-the-box, as default,
with no manual configuration necessary, and without any referring to
syslog.target.
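In practice the cleanup is a one-line deletion per unit file; a sketch
with a hypothetical service:

```ini
# Before: explicit ordering against syslog, now redundant.
[Unit]
Description=Example daemon
After=network.target syslog.target

# After: the syslog socket is guaranteed to exist before normal
# services start, so the explicit dependency can simply be dropped.
[Unit]
Description=Example daemon
After=network.target
```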

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-15 Thread Farkas Levente
On 05/15/2013 02:48 PM, Lennart Poettering wrote:
 On Wed, 15.05.13 07:45, Dan Williams (d...@redhat.com) wrote:
 
 Is anything waiting on NetworkManager-wait-online in your install?  That
 target is really intended for servers where you want to block Apache or
 Sendmail or Database from starting until you're sure your networking is
 fully configured.  If you don't have any services that require a network
 to be up, then you can mask NetworkManager-wait-online and things will
 be more parallel.

 Dan, could you please implement these recommendations for F19:

 https://bugzilla.redhat.com/show_bug.cgi?id=787314#c37

 This really shouldn't be pulled in unconditionally...

 The changes you suggest there seem fine; one question about
 After=syslog.target being removed though: intentional?  I pretty much
 just take your guidance on the .service files.
 
 After=syslog.target is unnecessary these days and should be removed from
 all unit files, to keep things simple.
 
 We nowadays require syslog implementations to be socket activatable, and
 that socket is around before normal services start, and that's
 guaranteed, hence nobody has to depend on syslog explicitly anymore. All
 services will have logging available anyway, out-of-the-box, as default,
 with no manual configuration necessary, and without any referring to
 syslog.target.

Is this documented?

In many of your mails in this thread you write "nowadays", "anymore",
"default", "modern system", "modern hardware", etc.
Is it documented anywhere what the assumptions are and how Fedora is
supposed to work nowadays?


-- 
  Levente   Si vis pacem para bellum!

Re: when startup delays become bugs

2013-05-15 Thread Michal Schmidt

On 05/15/2013 02:58 PM, Farkas Levente wrote:

On 05/15/2013 02:48 PM, Lennart Poettering wrote:

We nowadays require syslog implementations to be socket activatable, and
that socket is around before normal services start, and that's
guaranteed, hence nobody has to depend on syslog explicitly anymore. All
services will have logging available anyway, out-of-the-box, as default,
with no manual configuration necessary, and without any referring to
syslog.target.


is this documented?

in many of your mails in this thread you write "nowadays", "anymore",
"default", "modern system", "modern hardware", etc.
is it documented anywhere what the assumptions are and how Fedora is
supposed to work nowadays?


The requirements on syslog implementations to be compatible with systemd 
are described in:

http://www.freedesktop.org/wiki/Software/systemd/syslog

Michal


Re: when startup delays become bugs

2013-05-15 Thread Matthew Garrett
On Wed, May 15, 2013 at 01:01:18PM +0200, Lennart Poettering wrote:
 On Tue, 14.05.13 15:51, Chris Murphy (li...@colorremedies.com) wrote:
 
2.911s abrt-uefioops.service
 
 I don't really understand why abrt needs to run at all by default. It
 should be spawned automatically when an actual crash happens, so that
 by default nobody has to have this service running.

This specific one is scraping previous kernel crash reports out of 
pstore, so it should really only be doing something if there's anything 
there.

-- 
Matthew Garrett | mj...@srcf.ucam.org

Re: when startup delays become bugs

2013-05-15 Thread Hans de Goede

Hi,

On 05/15/2013 01:36 PM, Lennart Poettering wrote:

On Tue, 14.05.13 20:43, Adam Williamson (awill...@redhat.com) wrote:

Let me take the opportunity to dissect some of this:



snip


236ms spice-vdagentd.service


I don't see why this is on by default. According to this bug report this
should have been pulled in on-demand only:


Adam is running a spice-enabled VM, so it is getting started
because he has the required virtual hardware.




https://bugzilla.redhat.com/show_bug.cgi?id=876237


233ms abrt-ccpp.service


https://bugzilla.redhat.com/show_bug.cgi?id=963184


160ms fedora-loadmodules.service


Somebody should file bugs against all packages that still drop something
into /etc/sysconfig/modules/.

I see that kvm and bluez do this still. I filed these bugs now:

https://bugzilla.redhat.com/show_bug.cgi?id=963193
https://bugzilla.redhat.com/show_bug.cgi?id=963198


 79ms systemd-modules-load.service


Ah, cute, on my machine the only reason this is pulled in is
/etc/modules-load.d/spice-vdagentd.conf -- which also pulls in our old
friend uinput (see above).

https://bugzilla.redhat.com/show_bug.cgi?id=963201


I'm fine with dropping the config file from spice-vdagent as long as
something then creates the /dev/uinput device node, so that module
auto-loading will actually work; without the device node pre-existing,
trying to use /dev/uinput will just fail with -ENOENT.

snip


 54ms ksm.service


I thought the kernel would do this internally these days without
userspace interaction.


At a minimum we should consider not starting these *inside* VMs.

Regards,

Hans

Re: when startup delays become bugs

2013-05-15 Thread Chris Adams
Once upon a time, Lennart Poettering mzerq...@0pointer.de said:
 112ms iscsi.service
 
 This really sounds like something that should be socket-activated on
 demand rather than run by default.

This is attaching to configured iSCSI devices (which at a minimum
requires parsing configuration files to see if there are any devices
configured), not running a listening daemon.

  97ms sshd.service
 
 Ditto. This is something to start by default only on hosts where a ton of
 people log in all the time.

SSH host key generation needs to be done in advance (don't want a
connecting socket to wait for that).  Maybe that could be done with a
separate firstboot-like service that gets disabled once run?

-- 
Chris Adams li...@cmadams.net

Re: when startup delays become bugs

2013-05-15 Thread Michael Catanzaro
On Tue, 2013-05-14 at 22:21 -0600, Chris Murphy wrote:
 I've tried both, but when I rename it I'm using 'hostnamectl set-hostname 
 XXX' as I'm not seeing anything obvious in Gnome that does this.
This is on the Details panel of System Settings.  You're expected to
type a pretty hostname with spaces and caps. You'll end up with
something like

[michael@victory-road ~]$ hostnamectl
   Static hostname: victory-road
   Pretty hostname: Victory Road

It doesn't magically become fully qualified.



Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Wed, 15.05.13 08:53, Chris Adams (li...@cmadams.net) wrote:

 Once upon a time, Lennart Poettering mzerq...@0pointer.de said:
  112ms iscsi.service
  
  This really sounds like something that should be socket-activated on
  demand rather than run by default.
 
 This is attaching to configured iSCSI devices (which at a minimum
 requires parsing configuration files to see if there are any devices
 configured), not running a listening daemon.

It should be possible to come up with some form of ConditionPathExists=
or ConditionDirectoryNotEmpty= that causes this to be skipped if no
targets are configured.

https://bugzilla.redhat.com/show_bug.cgi?id=951951
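As a sketch, the condition could live in the unit (or a drop-in) like
this; the node-database path follows open-iscsi's usual layout but
should be treated as an assumption here.

```ini
# iscsi.service drop-in (illustrative): skip the unit entirely when no
# iSCSI targets have ever been discovered on this host.
[Unit]
ConditionDirectoryNotEmpty=/var/lib/iscsi/nodes
```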

   97ms sshd.service
  
  Ditto. This is something to start by default only on hosts where a ton of
  people log in all the time.
 
 SSH host key generation needs to be done in advance (don't want a
 connecting socket to wait for that).  Maybe that could be done with a
 separate firstboot-like service that gets disabled once run?

We really should be as stateless as possible here and not require
write access to /etc, which a solution like this would require.

Instead I'd propose to split the key generation into its own service,
then pull it in on the first connection and conditionalize it with
ConditionPathExists= or so:

ConditionPathExists=!/etc/ssh/ssh_host_rsa_key

I filed this now:

https://bugzilla.redhat.com/show_bug.cgi?id=963268
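
A minimal sketch of the proposed split (the unit name is illustrative, not a shipped unit; `ssh-keygen -A` generates any host key types that are missing):

```ini
# sshd-keygen.service (hypothetical name) -- oneshot, runs only when keys are absent
[Unit]
Description=Generate SSH host keys if absent
ConditionPathExists=!/etc/ssh/ssh_host_rsa_key
Before=sshd.service

[Service]
Type=oneshot
ExecStart=/usr/bin/ssh-keygen -A
```

sshd.service (or the socket unit) would then pull this in with Wants= so generation happens lazily before the first start.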

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel

Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Wed, 15.05.13 15:38, Hans de Goede (hdego...@redhat.com) wrote:

 Ah, cute, on my machine the only reason this is pulled in is
 /etc/modules-load.d/spice-vdagentd.conf -- which also pulls in our old
 friend uinput (see above).
 
 https://bugzilla.redhat.com/show_bug.cgi?id=963201
 
 I'm fine with dropping the config file from spice-vdagent as long as
 something then creates the /dev/uinput device node, so that module
 auto-load will actually work, without the device-node pre-existing
 trying to use /dev/uinput will just cause a -ENOENT failure.

Yeah, kmod/udev/systemd get this right these days: the kernel modules
export via modalias what device node they want created, kmod extracts
that, and udev/systemd create the device node for it. Then, when
userspace opens the device, the kernel makes sure the module is
actually auto-loaded.
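
For reference, the metadata kmod extracts ends up in modules.devname; an illustrative excerpt (the minor number is what uinput has traditionally used, but treat the exact values as an assumption for your kernel):

```
# /lib/modules/$(uname -r)/modules.devname (excerpt, illustrative)
# module   devname   type+major:minor
uinput     uinput    c10:223
```

udev reads this at boot and creates /dev/uinput up front, so the first open() can trigger the module load.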

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-15 Thread Adam Williamson
On Wed, 2013-05-15 at 11:58 +0200, Paolo Bonzini wrote:
 On 15/05/2013 05:43, Adam Williamson wrote:
  On Tue, 2013-05-14 at 15:51 -0600, Chris Murphy wrote:
  
  But firewalld goes from 7 seconds to 18 seconds? Why? avahi-daemon,
  restorecond, all are an order of magnitude longer on F19 than F18.
  It's a 3+ minute userspace hit on the startup time where the kernel
  takes 1.9 seconds. Off hand this doesn't seem reasonable, especially
  sendmail. If the time can't be brought down by a lot, can it ship
  disabled by default?
  
  FWIW, I found your results here interesting, so I did a little test of
  my own. I did default DVD installs of F17, F18 and F19 Beta TC4, ran
  through firstboot, rebooted, then rebooted again and ran systemd-analyze
  (to let prelink kick in). Results are that F18's slightly slower than
  F17 and F19 is somewhat slower again. My numbers are way way faster than
  your numbers overall, though; VMs do seem to perform very quickly on
  this box for whatever reason.
 
 Can you include here the QEMU command line or libvirt XML definition?
 
 Unless you have something like cache=none or cache='none' in it
 (respectively for QEMU and libvirt), chances are that you're using the
 host memory as a very fast and large disk cache for your VM. :)

That's likely the case, yeah, I'm using a standard VM definition, no
tweaks, and I have 16GB of RAM.
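
For anyone wanting to rule that out, the setting Paolo refers to goes on the disk's driver element in the libvirt domain XML; an illustrative definition (image path and format are assumptions):

```xml
<disk type='file' device='disk'>
  <!-- cache='none' bypasses the host page cache, so guest I/O timings
       reflect the real disk rather than host RAM -->
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/f19.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```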
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-15 Thread Adam Williamson
On Wed, 2013-05-15 at 13:36 +0200, Lennart Poettering wrote:

 236ms spice-vdagentd.service
 
 I don't see why this is on by default. According to this bug report this
 should have been pulled in on-demand only:
 
 https://bugzilla.redhat.com/show_bug.cgi?id=876237

It probably was. I was testing in a SPICE/qxl KVM. Please don't anyone
take it out; I really appreciate having it in all my test VMs.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: when startup delays become bugs

2013-05-15 Thread Matthew Miller
On Wed, May 15, 2013 at 08:53:44AM -0500, Chris Adams wrote:
 SSH host key generation needs to be done in advance (don't want a
 connecting socket to wait for that).  Maybe that could be done with a
 separate firstboot-like service that gets disabled once run?

Bugzilla # ? :)

-- 
Matthew Miller  ☁☁☁  Fedora Cloud Architect  ☁☁☁  mat...@fedoraproject.org

Re: when startup delays become bugs

2013-05-15 Thread Matthew Miller
On Wed, May 15, 2013 at 10:57:52AM -0400, Matthew Miller wrote:
 On Wed, May 15, 2013 at 08:53:44AM -0500, Chris Adams wrote:
  SSH host key generation needs to be done in advance (don't want a
  connecting socket to wait for that).  Maybe that could be done with a
  separate firstboot-like service that gets disabled once run?
 Bugzilla # ? :)

Oh -- Lennart is ahead of me. 

(mumbles about the good advice of reading all the messages before sending
replies)

-- 
Matthew Miller  ☁☁☁  Fedora Cloud Architect  ☁☁☁  mat...@fedoraproject.org

Re: when startup delays become bugs

2013-05-15 Thread Matthew Miller
On Wed, May 15, 2013 at 04:21:20PM +0200, Lennart Poettering wrote:
 https://bugzilla.redhat.com/show_bug.cgi?id=963268

In that bug you suggest that administrators could change back to standalone
mode. Given that what you say about frequency of access is true (even on
systems where ssh login is a primary activity, time between accesses is
generally on a human scale rather than hundreds per second), is there a
disadvantage to socket activation beyond a very slight delay on the first
access?

-- 
Matthew Miller  ☁☁☁  Fedora Cloud Architect  ☁☁☁  mat...@fedoraproject.org

Re: when startup delays become bugs

2013-05-15 Thread Chris Adams
Once upon a time, Matthew Miller mat...@fedoraproject.org said:
 In that bug you suggest that administrators could change back to standalone
 mode. Given that what you say about frequency of access is true (even on
 systems where ssh login is a primary activity, time between accesses is
 generally on a human scale rather than hundreds per second), is there a
 disadvantage to socket activation beyond a very slight delay in the first
 access?

Well, SSH connection rate is on human scale until your system is the
target of a scan, which can result in at least dozens per second.
-- 
Chris Adams li...@cmadams.net

Re: when startup delays become bugs

2013-05-15 Thread Matthew Miller
On Wed, May 15, 2013 at 10:05:40AM -0500, Chris Adams wrote:
 Once upon a time, Matthew Miller mat...@fedoraproject.org said:
  In that bug you suggest that administrators could change back to standalone
  mode. Given that what you say about frequency of access is true (even on
  systems where ssh login is a primary activity, time between accesses is
  generally on a human scale rather than hundreds per second), is there a
  disadvantage to socket activation beyond a very slight delay in the first
  access?
 Well, SSH connection rate is on human scale until your system is the
 target of a scan, which can result in at least dozens per second.

Oh, I see. I missed that the idea is to activate sshd each time rather than
as a singleton service.


-- 
Matthew Miller  ☁☁☁  Fedora Cloud Architect  ☁☁☁  mat...@fedoraproject.org

Re: when startup delays become bugs

2013-05-15 Thread Jóhann B. Guðmundsson

On 05/15/2013 02:21 PM, Lennart Poettering wrote:
 On Wed, 15.05.13 08:53, Chris Adams (li...@cmadams.net) wrote:

  SSH host key generation needs to be done in advance (don't want a
  connecting socket to wait for that).  Maybe that could be done with a
  separate firstboot-like service that gets disabled once run?

 Instead I'd propose to split the key generation into its own service,
 then pull it in on the first connection, and conditionalize it with
 ConditionPathExists= or so:

 ConditionPathExists=!/etc/ssh/ssh_host_rsa_key

 I filed this now:

 https://bugzilla.redhat.com/show_bug.cgi?id=963268

I created such a service (ssh-keygen.service) when I migrated ssh to
unit files.

For some reason the maintainers in the distribution chose not to
include it.

JBG

Re: when startup delays become bugs

2013-05-15 Thread Lennart Poettering
On Wed, 15.05.13 10:05, Chris Adams (li...@cmadams.net) wrote:

 Once upon a time, Matthew Miller mat...@fedoraproject.org said:
  In that bug you suggest that administrators could change back to standalone
  mode. Given that what you say about frequency of access is true (even on
  systems where ssh login is a primary activity, time between accesses is
  generally on a human scale rather than hundreds per second), is there a
  disadvantage to socket activation beyond a very slight delay in the first
  access?
 
 Well, SSH connection rate is on human scale until your system is the
 target of a scan, which can result in at least dozens per second.

Note that systemd puts a (configurable) limit on the number of
concurrent connections, much like sshd does, so there's very little
difference... Also sshd forks off per-connection processes anyway, so
the difference is probably even smaller...
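
As a sketch, the cap Lennart mentions is the MaxConnections= setting on the socket unit (the unit name and limit value here are assumptions):

```ini
# sshd.socket (illustrative) -- one service instance per connection, capped
[Socket]
ListenStream=22
Accept=yes
# Refuse further connections once this many instances are running
MaxConnections=64

[Install]
WantedBy=sockets.target
```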

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

Re: when startup delays become bugs

2013-05-15 Thread Chris Adams
Once upon a time, Lennart Poettering mzerq...@0pointer.de said:
 Note that systemd puts a (configurable) limit on the number of
 concurrent connections, much like sshd does it, so there's very little
 difference... Also ssh forks of per-connection processes too, so the
 difference is probably even smaller...

The difference is that when sshd is running as a traditional daemon,
it can do all of its start-up processing once (parsing config files,
loading host keys, etc.), instead of for each connection.
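
For comparison, a per-connection activated sshd runs in inetd style, so that start-up work repeats on every accepted connection; a rough sketch of the instanced unit used with an Accept=yes socket (unit name assumed):

```ini
# sshd@.service (hypothetical instanced unit)
[Service]
# -i: inetd mode -- sshd speaks the protocol on the accepted socket
# passed in as stdin/stdout, then exits when the session ends
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket
```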
-- 
Chris Adams li...@cmadams.net
