Re: [systemd-devel] Help needed for optimizing my boot time

2015-06-11 Thread Andrei Borzenkov
On Thu, Jun 11, 2015 at 2:26 PM, Francis Moreau francis.m...@gmail.com wrote:

$ systemd-analyze critical-chain

graphical.target @7.921s
  multi-user.target @7.921s
autofs.service @7.787s +132ms
  network-online.target @7.786s
network.target @7.786s
  NetworkManager.service @675ms +184ms
basic.target @674ms
  ...

...
 Is NetworkManager-wait-online.service enabled and active?


 It seems it's enabled but no longer active:

 $ systemctl status NetworkManager-wait-online.service
 ● NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded
 (/usr/lib/systemd/system/NetworkManager-wait-online.service; disabled;
 vendor preset: disabled)
Active: inactive (dead) since Thu 2015-06-11 11:54:37 CEST; 1h 4min ago
   Process: 583 ExecStart=/usr/bin/nm-online -s -q --timeout=30
 (code=exited, status=0/SUCCESS)
  Main PID: 583 (code=exited, status=0/SUCCESS)

 Jun 11 11:54:30 cyclone systemd[1]: Starting Network Manager Wait Online...
 Jun 11 11:54:37 cyclone systemd[1]: Started Network Manager Wait Online.

 This seems correct to me, doesn't it?


Actually it says disabled, which makes me wonder why it ran. But this
is the service that is likely responsible for the long delay you observe.
If disabling it does not help, you can try masking it (systemctl mask)
for a test.

OTOH, disabled here just means the links from the [Install] section are
not present. Could you show

systemctl show NetworkManager-wait-online.service -p WantedBy -p RequiredBy
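For reference, masking as suggested above boils down to a symlink: "systemctl mask" links the unit name to /dev/null so systemd treats it as unstartable. A harmless sketch of that mechanism, using a temporary directory instead of the real /etc/systemd/system:

```shell
# What "systemctl mask NetworkManager-wait-online.service" effectively does:
# create a symlink to /dev/null under the unit's name. A temp directory
# stands in for /etc/systemd/system so this sketch touches nothing real.
dir=$(mktemp -d)
ln -s /dev/null "$dir/NetworkManager-wait-online.service"
readlink "$dir/NetworkManager-wait-online.service"   # -> /dev/null
rm -r "$dir"
```

Undoing it with "systemctl unmask" simply removes that symlink.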
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Help needed for optimizing my boot time

2015-06-11 Thread Francis Moreau
Hello,

I'm interested in optimizing my boot time on my laptop.

So I looked at the big picture first:

   $ systemd-analyze
   Startup finished in 3.994s (firmware) + 7.866s (loader) + 8.226s
(kernel) + 7.921s (userspace) = 28.007s

and noticed that the boot time spent in userspace is quite high.

I looked at the details:

   $ systemd-analyze critical-chain

   graphical.target @7.921s
 multi-user.target @7.921s
   autofs.service @7.787s +132ms
 network-online.target @7.786s
   network.target @7.786s
 NetworkManager.service @675ms +184ms
   basic.target @674ms
 ...

If I understand that correctly, NetworkManager takes more than 7 seconds
to start and seems to be the culprit.

However, I'm not sure I understand why the service following NM
(autofs), and thus multi-user.target, needs to wait for the network to
be available.

Especially since:

 - nothing requires a network connection in order to boot and set up my
system, including mounting the /home partition

 - autofs should still work if there's no network connection, and
detect if the network becomes ready later

So my question is: in this case, is autofs wrongly waiting for the
network to be started, or is NM taking too much time to start?

Thanks.


Re: [systemd-devel] nspawn dependencies

2015-06-11 Thread Lennart Poettering
On Thu, 11.06.15 09:40, Richard Weinberger (richard.weinber...@gmail.com) wrote:

 Hi!
 
 Recent systemd-nspawn seems to support unprivileged containers (user
 namespaces). That's awesome, thank you guys for working on that!

Well, the name "unprivileged containers" is usually used for the
concept where you don't need any privileges to start and run a
container. We don't support that, and it's turned off in the kernel
of Fedora at least, for good reasons.

We do support user namespaces now, but we require privileges on the host
to set them up. I doubt that UID namespacing as it is now is really that
useful, though: you have to prep your images first and apply a UID shift
to all file ownership and ACLs of your tree, and this needs to be done
manually. This makes it pretty hard to deploy, since you cannot boot
unmodified container images you download from the internet this way.
Also, there is no sane, established scheme for allocating UID ranges for
the containers automatically. So far UID namespaces hence appear mostly
like a useless exercise, far from being deployable in real life.
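The manual prep step described above can be sketched as follows; the base offset of 100000 is purely a made-up example, since, as noted, no established allocation scheme exists:

```shell
# Hedged sketch of the manual UID shift: map every in-container UID/GID
# to a host ID by adding a per-container base offset. BASE=100000 is an
# arbitrary illustration, not an official scheme.
BASE=100000
shift_id() { echo $(( BASE + $1 )); }
shift_id 0    # container root maps to host UID 100000
shift_id 33   # container UID 33 maps to host UID 100033
# A real tree would then be rewritten along these lines (not run here):
#   walk $TREE with find, applying chown/setfacl with shift_id per owner
```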

 Maybe you can help me to sort this out: can I run any systemd-enabled
 distribution using the most current systemd-nspawn?
 Say, my host is FC22 using systemd-nspawn from git; can it spawn an
 openSUSE 13.2 container which has only systemd v210?

 Or does the systemd version on the container side have to match the
 systemd version on the host side?

It generally does not have to match. We try to maintain compatibility
there (though we make no guarantees -- the stuff is too new). That
said, newer systemd versions work much better in nspawn than older
ones, and v210 is pretty old already.

Lennart

-- 
Lennart Poettering, Red Hat


[systemd-devel] Static test coverage

2015-06-11 Thread Daniel Mack
Hi,

Now that we're using Semaphore CI for building all pull requests and
pushes to the master branch, I've set up a second VM instance to also
use their service for static code analysis on a nightly basis.

We've had the systemd project registered with Coverity for a while, and
so far, new builds were manually uploaded once in a while by Philippe de
Swert (thanks for that!). This is now done automatically every night.
The results can be seen here:

  https://scan.coverity.com/projects/350/

While at it, I also taught the build bot to use LLVM's scan-build, and
sync the output with a new repository:

  https://github.com/systemd/systemd-build-scan

The patches are pushed to the 'gh-pages' branch, hence the HTML files
are published here:

  https://systemd.github.io/systemd-build-scan/

Unfortunately, scan-build does not seem to understand the _cleanup_*
variable annotations, so it currently reports lots of false-positive
memory leaks.


Hope this helps give these collections of possible issues more
exposure. If you want me to add more automated static testing, please
let me know.


Daniel


Re: [systemd-devel] Help needed for optimizing my boot time

2015-06-11 Thread Francis Moreau
Hi,

On 06/11/2015 12:44 PM, Andrei Borzenkov wrote:
 On Thu, Jun 11, 2015 at 1:08 PM, Francis Moreau francis.m...@gmail.com 
 wrote:
 Hello,

 I'm interested in optimizing my boot time on my laptop.

 So I looked at the big picture first:

$ systemd-analyze
Startup finished in 3.994s (firmware) + 7.866s (loader) + 8.226s
 (kernel) + 7.921s (userspace) = 28.007s

 and noticed that the boot time spent in userspace is quite high.

 I looked at the details:

$ systemd-analyze critical-chain

graphical.target @7.921s
  multi-user.target @7.921s
autofs.service @7.787s +132ms
  network-online.target @7.786s
network.target @7.786s
  NetworkManager.service @675ms +184ms
basic.target @674ms
  ...

 If I understand that correctly, NetworkManager takes more than 7 seconds
 to start and seems to be the culprit.

 However, I'm not sure I understand why the service following NM
 (autofs), and thus multi-user.target, needs to wait for the network to
 be available.

 Especially since:

  - nothing requires a network connection in order to boot and set up my
 system, including mounting the /home partition

  - autofs should still work if there's no network connection, and
 detect if the network becomes ready later

 So my question is: in this case, is autofs wrongly waiting for the
 network to be started, or is NM taking too much time to start?

 
 Is NetworkManager-wait-online.service enabled and active?
 

It seems it's enabled but no longer active:

$ systemctl status NetworkManager-wait-online.service
● NetworkManager-wait-online.service - Network Manager Wait Online
   Loaded: loaded
(/usr/lib/systemd/system/NetworkManager-wait-online.service; disabled;
vendor preset: disabled)
   Active: inactive (dead) since Thu 2015-06-11 11:54:37 CEST; 1h 4min ago
  Process: 583 ExecStart=/usr/bin/nm-online -s -q --timeout=30
(code=exited, status=0/SUCCESS)
 Main PID: 583 (code=exited, status=0/SUCCESS)

Jun 11 11:54:30 cyclone systemd[1]: Starting Network Manager Wait Online...
Jun 11 11:54:37 cyclone systemd[1]: Started Network Manager Wait Online.

This seems correct to me, doesn't it?

Thanks


Re: [systemd-devel] nspawn dependencies

2015-06-11 Thread Richard Weinberger
Lennart,

Am 11.06.2015 um 12:08 schrieb Lennart Poettering:
 On Thu, 11.06.15 09:40, Richard Weinberger (richard.weinber...@gmail.com) 
 wrote:
 
 Hi!

 Recent systemd-nspawn seems to support unprivileged containers (user
 namespaces). That's awesome, thank you guys for working on that!
 
 Well, the name "unprivileged containers" is usually used for the
 concept where you don't need any privileges to start and run a
 container. We don't support that, and it's turned off in the kernel
 of Fedora at least, for good reasons.

Depends. Container tech is so hyped these days that the naming changes
all the time. :-)
I understand unprivileged containers as containers which do not run as
root; I don't care whether you have to be root to spawn them.

 We do support user namespaces now, but we require privileges on the host
 to set them up. I doubt that UID namespacing as it is now is really that
 useful, though: you have to prep your images first and apply a UID shift
 to all file ownership and ACLs of your tree, and this needs to be done
 manually. This makes it pretty hard to deploy, since you cannot boot
 unmodified container images you download from the internet this way.
 Also, there is no sane, established scheme for allocating UID ranges for
 the containers automatically. So far UID namespaces hence appear mostly
 like a useless exercise, far from being deployable in real life.

What I care about is that root within the container is not the real root.
Hence, what user namespaces provide.

 Maybe you can help me to sort this out: can I run any systemd-enabled
 distribution using the most current systemd-nspawn?
 Say, my host is FC22 using systemd-nspawn from git; can it spawn an
 openSUSE 13.2 container which has only systemd v210?

 Or does the systemd version on the container side have to match the
 systemd version on the host side?
 
 It generally does not have to match. We try to maintain compatibility
 there (though we make no guarantees -- the stuff is too new). That
 said, newer systemd versions work much better in nspawn than older
 ones, and v210 is pretty old already.

Okay. Thanks for the clarification.

From reading the source it seems like you mount the whole cgroup
hierarchy into the container's mount namespace, rebind
/sys/fs/cgroup/systemd/yadda/.../yadda/ to /sys/fs/cgroup/systemd,
and remount some parts read-only.
Does this play well with the cgroup release_agent/notify_on_release mechanism?
Some time ago I played with that and found that only systemd on the
host side ever receives the notification.
Mostly due to the broken design of cgroups. ;-\

One more question: how does systemd-nspawn depend on the host systemd?
This machine runs openSUSE with systemd v210. I built current
systemd-nspawn and gave it a try with no luck.

rw@sandpuppy:~/work/systemd (master) sudo ./systemd-nspawn -bD /fc22
Spawning container fc22 on /fc22.
Press ^] three times within 1s to kill container.
Failed to register machine: Unknown method 'CreateMachineWithNetwork' or 
interface 'org.freedesktop.machine1.Manager'

I suspect I was too naive to think it would work out. :-)

Thanks,
//richard


Re: [systemd-devel] Static test coverage

2015-06-11 Thread Ronny Chevalier
On Thu, Jun 11, 2015 at 11:07 AM, Daniel Mack dan...@zonque.org wrote:
 Hi,

 Now that we're using Semaphore CI for building all pull requests and
 pushes to the master branch, I've set up a second VM instance to also
 use their service for static code analysis on a nightly basis.

 We've had the systemd project registered with Coverity for a while, and
 so far, new builds were manually uploaded once in a while by Philippe de
 Swert (thanks for that!). This is now done automatically every night.
 The results can be seen here:

   https://scan.coverity.com/projects/350/

 While at it, I also taught the build bot to use LLVM's scan-build, and
 sync the output with a new repository:

   https://github.com/systemd/systemd-build-scan

 The patches are pushed to the 'gh-pages' branch, hence the HTML files
 are published here:

   https://systemd.github.io/systemd-build-scan/

 Unfortunately, scan-build does not seem to understand the _cleanup_*
 variable annotations, so it currently reports lots of false-positive
 memory leaks.


 Hope this helps give these collections of possible issues more
 exposure. If you want me to add more automated static testing, please
 let me know.

Hi,

Maybe it would also be useful to add automatic code coverage?



 Daniel


Re: [systemd-devel] [PATCH] ima-setup: write policy one line at a time

2015-06-11 Thread Lennart Poettering
On Thu, 11.06.15 00:34, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) wrote:

 On Thu, Jun 11, 2015 at 01:16:47AM +0200, Lennart Poettering wrote:
  On Wed, 10.06.15 15:38, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) 
  wrote:
  
   ima_write_policy() expects data to be written as one or more
   rules, no more than PAGE_SIZE at a time. Easiest way to ensure
    that we are not splitting rules is to read and write one line at
    a time.
   
   https://bugzilla.redhat.com/show_bug.cgi?id=1226948
   ---
src/core/ima-setup.c | 39 +--
1 file changed, 17 insertions(+), 22 deletions(-)
   
   diff --git a/src/core/ima-setup.c b/src/core/ima-setup.c
   index 4d8b638115..5b3d16cd31 100644
   --- a/src/core/ima-setup.c
   +++ b/src/core/ima-setup.c
   @@ -23,9 +23,6 @@

     #include <unistd.h>
     #include <errno.h>
    -#include <fcntl.h>
    -#include <sys/stat.h>
    -#include <sys/mman.h>

     #include "ima-setup.h"
     #include "util.h"
    @@ -36,20 +33,19 @@
     #define IMA_POLICY_PATH "/etc/ima/ima-policy"

int ima_setup(void) {
   -int r = 0;
   -
#ifdef HAVE_IMA
   -_cleanup_close_ int policyfd = -1, imafd = -1;
   -struct stat st;
   -char *policy;
   +_cleanup_fclose_ FILE *input = NULL;
   +_cleanup_close_ int imafd = -1;
   +char line[LINE_MAX];
  
  Hmm, I wonder if this might bite us. LINE_MAX is a good choice as the
  max line length for formats we define in systemd, but the question of
  course is what the max line length is for IMA...

 It's PAGE_SIZE ;) Making this dynamic doesn't make much sense to me,
 but we could make it 4096, as this is the lowest (and common) size.

I don't think this is actually really that bad:

_cleanup_free_ void *line = NULL;
line = malloc(page_size());

Or, we could even just do alloca(page_size())...
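The line-at-a-time write the patch implements can be sketched in shell (illustrative only; temp files stand in for /etc/ima/ima-policy and the securityfs policy node, and the real implementation is the C code above):

```shell
# One write per policy rule, since ima_write_policy() rejects writes that
# split a rule. Temp files stand in for /etc/ima/ima-policy and
# /sys/kernel/security/ima/policy; the rule strings are just samples.
policy=$(mktemp)
sink=$(mktemp)
printf 'measure func=BPRM_CHECK\naudit func=FILE_CHECK\n' > "$policy"
while IFS= read -r rule; do
    printf '%s\n' "$rule" >> "$sink"   # one rule per write
done < "$policy"
wc -l < "$sink"   # number of rules written
rm -f "$policy" "$sink"
```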

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Help needed for optimizing my boot time

2015-06-11 Thread Andrei Borzenkov
On Thu, Jun 11, 2015 at 1:08 PM, Francis Moreau francis.m...@gmail.com wrote:
 Hello,

 I'm interested in optimizing my boot time on my laptop.

 So I looked at the big picture first:

$ systemd-analyze
Startup finished in 3.994s (firmware) + 7.866s (loader) + 8.226s
 (kernel) + 7.921s (userspace) = 28.007s

 and noticed that the boot time spent in userspace is quite high.

 I looked at the details:

$ systemd-analyze critical-chain

graphical.target @7.921s
  multi-user.target @7.921s
autofs.service @7.787s +132ms
  network-online.target @7.786s
network.target @7.786s
  NetworkManager.service @675ms +184ms
basic.target @674ms
  ...

 If I understand that correctly, NetworkManager takes more than 7 seconds
 to start and seems to be the culprit.

 However, I'm not sure I understand why the service following NM
 (autofs), and thus multi-user.target, needs to wait for the network to
 be available.

 Especially since:

  - nothing requires a network connection in order to boot and set up my
 system, including mounting the /home partition

  - autofs should still work if there's no network connection, and
 detect if the network becomes ready later

 So my question is: in this case, is autofs wrongly waiting for the
 network to be started, or is NM taking too much time to start?


Is NetworkManager-wait-online.service enabled and active?


Re: [systemd-devel] Help needed for optimizing my boot time

2015-06-11 Thread Francis Moreau
On 06/11/2015 01:40 PM, Andrei Borzenkov wrote:
 On Thu, Jun 11, 2015 at 2:26 PM, Francis Moreau francis.m...@gmail.com 
 wrote:

$ systemd-analyze critical-chain

graphical.target @7.921s
  multi-user.target @7.921s
autofs.service @7.787s +132ms
  network-online.target @7.786s
network.target @7.786s
  NetworkManager.service @675ms +184ms
basic.target @674ms
  ...

 ...
 Is NetworkManager-wait-online.service enabled and active?


 It seems it's enabled but no longer active:

 $ systemctl status NetworkManager-wait-online.service
 ● NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded
 (/usr/lib/systemd/system/NetworkManager-wait-online.service; disabled;
 vendor preset: disabled)
Active: inactive (dead) since Thu 2015-06-11 11:54:37 CEST; 1h 4min ago
   Process: 583 ExecStart=/usr/bin/nm-online -s -q --timeout=30
 (code=exited, status=0/SUCCESS)
  Main PID: 583 (code=exited, status=0/SUCCESS)

 Jun 11 11:54:30 cyclone systemd[1]: Starting Network Manager Wait Online...
 Jun 11 11:54:37 cyclone systemd[1]: Started Network Manager Wait Online.

 This seems correct to me, doesn't it?

 
 Actually it says disabled, which makes me wonder why it ran. But this
 is the service that is likely responsible for the long delay you observe.
 If disabling it does not help, you can try masking it (systemctl mask)
 for a test.
 

Masking this service helps:

$ systemd-analyze
Startup finished in 3.323s (firmware) + 6.795s (loader) + 8.342s
(kernel) + 1.470s (userspace) = 19.932s

$ systemd-analyze critical-chain
The time after the unit is active or started is printed after the @
character.
The time the unit takes to start is printed after the + character.

graphical.target @1.470s
  multi-user.target @1.470s
autofs.service @1.024s +445ms
  network-online.target @1.023s
network.target @1.021s
  NetworkManager.service @731ms +289ms
   basic.target @731ms

and the system seems to run fine (specially autofs, ntpd).

But I think the time given by systemd-analyze (1.470s) is not correct.
When booting I can see that userspace is doing an fsck on root, which
takes more than 2s, and the login screen takes at least 5s to appear
after the fsck starts.

Is the time spent in the initrd included in userspace?

Thanks




Re: [systemd-devel] nspawn dependencies

2015-06-11 Thread Lennart Poettering
On Thu, 11.06.15 12:48, Richard Weinberger (rich...@nod.at) wrote:

  Maybe you can help me to sort this out: can I run any systemd-enabled
  distribution using the most current systemd-nspawn?
  Say, my host is FC22 using systemd-nspawn from git; can it spawn an
  openSUSE 13.2 container which has only systemd v210?

  Or does the systemd version on the container side have to match the
  systemd version on the host side?
  
  It generally does not have to match. We try to maintain compatibility
  there (though we make no guarantees -- the stuff is too new). That
  said, newer systemd versions work much better in nspawn than older
  ones, and v210 is pretty old already.
 
 Okay. Thanks for the clarification.
 
 From reading the source it seems like you mount the whole cgroup hierarchy 
 into the
 container's mount namespace, rebind /sys/fs/cgroup/systemd/yadda/.../yadda/ 
 to /sys/fs/cgroup/systemd
 and remount some parts read only.
 Does this play well with the cgroup release_agent/notify_on_release
 mechanism?

No, cgroup notification is fucked in containers, and even on the host
it is broken, just not as badly.

The new unified hierarchy handles all this *much* much better, as it
has an inotify-based notification scheme that covers hierarchies really
nicely.

Note that more recent systemd versions can handle non-working cgroup
notifications in containers much better than older ones.

 One more question: how does systemd-nspawn depend on the host systemd?
 This machine runs openSUSE with systemd v210. I built current
 systemd-nspawn and gave it a try with no luck.

v210 is really old.

We don't support half upgrades. If you do half upgrades, where the
utilities do not match the daemons, then you are on your own.

That said, rkt actually uses nspawn and supports that on their own
downstream on all distros, even old ones that do not have systemd at
all. But that's on them; we don't want to be bothered with that
upstream.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Help needed for optimizing my boot time

2015-06-11 Thread Dan Williams
On Thu, 2015-06-11 at 15:15 +0200, Francis Moreau wrote:
 On 06/11/2015 02:22 PM, Andrei Borzenkov wrote:
  On Thu, Jun 11, 2015 at 3:10 PM, Francis Moreau francis.m...@gmail.com 
  wrote:
  On 06/11/2015 01:40 PM, Andrei Borzenkov wrote:
  On Thu, Jun 11, 2015 at 2:26 PM, Francis Moreau francis.m...@gmail.com 
  wrote:
 
 $ systemd-analyze critical-chain
 
 graphical.target @7.921s
   multi-user.target @7.921s
 autofs.service @7.787s +132ms
   network-online.target @7.786s
 network.target @7.786s
   NetworkManager.service @675ms +184ms
 basic.target @674ms
   ...
 
  ...
  Is NetworkManager-wait-online.service enabled and active?
 
 
  It seems it's enabled but no longer active:
 
  $ systemctl status NetworkManager-wait-online.service
  ● NetworkManager-wait-online.service - Network Manager Wait Online
 Loaded: loaded
  (/usr/lib/systemd/system/NetworkManager-wait-online.service; disabled;
  vendor preset: disabled)
 Active: inactive (dead) since Thu 2015-06-11 11:54:37 CEST; 1h 4min 
  ago
Process: 583 ExecStart=/usr/bin/nm-online -s -q --timeout=30
  (code=exited, status=0/SUCCESS)
   Main PID: 583 (code=exited, status=0/SUCCESS)
 
  Jun 11 11:54:30 cyclone systemd[1]: Starting Network Manager Wait 
  Online...
  Jun 11 11:54:37 cyclone systemd[1]: Started Network Manager Wait Online.
 
  This seems correct to me, doesn't it?
 
 
  Actually it says disabled, which makes me wonder why it ran. But this
  is the service that is likely responsible for the long delay you observe.
 
  I think it runs because of this:
 
  $ ls /usr/lib/systemd/system/network-online.target.wants/
  NetworkManager-wait-online.service
 
  BTW, why isn't it shown by 'systemd-analyze critical-chain'?
 
  
  My best guess is that it has no direct dependency on NetworkManager so
  it is not counted as part of chain. You could try adding
  
  After=NetworkManager.service
  
  to see if it changes anything in systemd-analyze output.
  
  If disabling it does not help, you can try masking it (systemctl mask)
  for a test.
 
  Actually, I'm still not sure why autofs.service is waiting for
  network-online.target to be activated, IOW why this service has
  'After=network-online.target'.
 
  
  You can discuss it on autofs list; systemd is just a messenger here :)
  
 
 Well it's more a systemd configuration question. I think the
 'After=network-online.target' in its service file is not really needed.
 
 I tried to disable autofs service and got a similar issue with ntpd one
 (except network-online.target is not involved here):
 
   $ systemd-analyze critical-chain
   graphical.target @7.921s
 multi-user.target @7.921s
   ntpd.service @7.790s +20ms
 network.target @7.786s
   NetworkManager.service @675ms +184ms
 basic.target @674ms
 
   $ systemctl show ntpd -p After
   After=network.target...
 
 Does the ntpd service really need 'After=network.target'? I'm not sure.

The 'network online' targets are really just there for ignorant services
that don't respond to network events themselves, that expect the network
to be up and running before they start.  Of course, those services don't
have any way to say *which* network interface they care about, so if you
have more than one interface in your system they still get it wrong.

But anyway, if ntpd or autofs can respond to network events using
netlink or listening on D-Bus to NetworkManager/connman/etc or getting
triggered by eg NetworkManager dispatcher scripts, then they probably
don't need to block on network-online.  But if they can't, and they
expect the network to be up and running before they start, then yes they
will block startup until some kind of networking is running.
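For a service that genuinely cannot cope with the network coming up later, the documented systemd pattern is an ordering plus a pull-in of network-online.target; the drop-in path below is just an example:

```ini
# Example drop-in, e.g. /etc/systemd/system/ntpd.service.d/wait-online.conf
# Only worth it for services that cannot react to network events themselves;
# it trades boot time for a guaranteed "network is up" at start.
[Unit]
Wants=network-online.target
After=network-online.target
```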

Dan



Re: [systemd-devel] [ANNOUNCE] Git development moved to github

2015-06-11 Thread Filipe Brandenburger
On Wed, Jun 10, 2015 at 9:52 AM, Lennart Poettering
lenn...@poettering.net wrote:
 On Wed, 10.06.15 08:25, Filipe Brandenburger (filbran...@google.com) wrote:
 On Wed, Jun 10, 2015 at 6:31 AM, Alban Crequy al...@endocode.com wrote:
  FWIW it only loses the comments if people comment on individual
  commits instead of commenting on the Files changed tab of a PR. I
  usually comment in this way on purpose instead of commenting on
  commits, so that the history of comments are kept in the PR, even
  after rebase (it might be folded if the chunk of the patch is not
  there anymore, but the comment is still in the PR). If you really want
  to comment on an individual commit (but I don't recommend it), you can
  include the reference of the PR in your comment (#42), then github
  will keep your comment attached to the PR.

 Ah that makes sense!

 Indeed as I explained I like to look at the individual commits, so
 that would explain why my comments would get lost as a new version is
 pushed...

  I think it is fine as it is as long as people comment in the Files
  changed tab.

 Lennart, do you think setting that rule is better than the one PR per
 version of patchset?

 No. We should review commits, not diffs. We also should review commit
 msgs. (see other mail)

Another downside of adding comments to the commits is that e-mail
notifications are not sent for them (I just noticed that while lurking
on #164, I got e-mails for the main thread but not for Lennart's
comments on commit 5f33680.)

I think adding comments on the "Files changed" tab would work in cases such as:

1) The PR contains only a single commit, in which case the diff in
Files changed will match the commit itself. (You still need to look
at the commit description, but even if you do it from the Commits
tab you can't really add any line comments directly to it anyways.)

2) If the commits change disjoint sets of files (you could check that
first, and then review the code in the Files changed tab.)

I think the exception is when a PR is both introducing new code and
later changing it in a follow up commit but I guess that's not really
too frequent (though I'm clearly guilty of it on #44.)

Can we try to add comments to "Files changed"? I'm not asking you not to
look at the commits; yes, looking at the commits is important! It's just
that if we could have e-mail notifications for the line comments, make
sure they are kept in the same thread, and be able to keep multiple
versions of a patchset around in the same PR (instead of the wonky PR
linking), I think that would be a huge win... We can always fall back to
opening a new PR and closing the old one, but I'd prefer that to be the
exception and not the rule...

It really sounds like what you want is Gerrit... I think
gerrithub.io (which I haven't tried personally) might be what bridges
these two worlds... It makes it easy for submitters to send you commits,
easy for reviewers to adopt the new code, and it tracks pending
requests.

Cheers!
Filipe


Re: [systemd-devel] Use a specific device ?

2015-06-11 Thread Dan Williams
On Thu, 2015-06-11 at 10:36 +0200, Jean-Christian de Rivaz wrote:
 Le 11. 06. 15 09:29, Bjørn Mork a écrit :
  Jean-Christian de Rivaz j...@eclis.ch writes:
  Le 10. 06. 15 23:37, Bjørn Mork a écrit :
  Jean-Christian de Rivaz j...@eclis.ch writes:
 
  There is not so
  much modem manufacturers and each of them don't even release a new
  product range per year.
  Ehh... I don't think we live on the same planet.  Did you know Toshiba
  is a modem manufacturer? Dell? HP? There are 43 (damn - I would have
  loved to see 42) different vendor IDs just in the option driver:
 
 bjorn@nemi:/usr/local/src/git/linux$ git grep -E '^#define.*VENDOR' 
  drivers/usb/serial/option.c |wc -l
 43
  Please provide a complete picture:
  git grep -E '^#define.*VENDOR' drivers/usb/serial/* | wc -l
  174
 
  Not such a big number. There are various vendor/product databases on
  the internet; I failed to identify an unmanageable number of modems in
  them.
  Please go ahead and complete the driver whitelists, then.  It will be
  appreciated.
 
 I am pleased by your welcome. It's now important to know whether your
 view is shared by the other key contributors of the ModemManager
 project, because having only a whitelist will not be enough if the
 ModemManager project is not willing to change code to use it.

I don't mind an optional whitelist scheme, but I don't really have the
time nor the inclination to maintain it.  And I worry that it will get
out-of-date very quickly and thus provide a sub-optimal experience for
the majority of users.  But sure, if you want to do that yourself
locally, that's fine.

  I tried to explain to you why a whitelist design makes an incomplete
  list into a failure, while a blocklist-based design ensures that stuff
  works whether the list is complete or not. That is something to take
  advantage of whenever you can. It's also used in e.g. USB class drivers.
 
  But whatever.
 
  You are advocating a change here. That's fine. But you shouldn't
  expect everyone to jump at your redesign ideas just like that. There
  are often reasons for the existing design, and those who designed it
  have often been through multiple incarnations already.
 
  You haven't convinced me yet, to put it mildly...
 
 
 
 You can at least agree that I am not alone in trying to find a solution:
 
 https://bugzilla.gnome.org/show_bug.cgi?id=688213
 https://bugs.freedesktop.org/show_bug.cgi?id=85007
 
 What are your observations that need to be addressed to get closer to
 an acceptable solution?

Updated that bug with some thoughts.

dan

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Use a specific device ?

2015-06-11 Thread Dan Williams
On Thu, 2015-06-11 at 11:19 +0200, Jean-Christian de Rivaz wrote:
 Le 11. 06. 15 10:48, Bjørn Mork a écrit :
  Jean-Christian de Rivaz j...@eclis.ch writes:
 
  In my experience this is not true.  Many vendors, many of them no-name
  Asian ones, release many devices each year, especially when rebranding
  the same device between network operators.  Even in the United States
  there can be 3 or 4 models of the same hardware, differentiated only by
  firmware and external branding, but with different VID/PID combinations.
  Please provide a real, substantial example.
  Go look it up in the modem database you have access to.
 
 Why? You are the one claiming that this reality exists from your
 experience, not me!
 
  You can use almost any modem present in any laptop as example, or any
  modem marketing name from any of the major asian vendors.  They will
  *all* have a number of different VID/PID combinations.  If you should
  happen to find an exception from this rule, then that would be a truly
  interesting device.
 
 
 I never claimed that there were just a few VID/PID combinations.
 I do claim that the number of combinations is not completely out of
 control, as you try to present it.

I guess our definitions of out-of-control are different then :)

The FCC database lists ~3396 FCCIDs, 164 of which have been granted so
far in 2015 alone, that are:

- PCS spectrum (e.g. 1850-1910 MHz)
- Part 24E
- Class PCB - PCS Licensed Transmitter (not held to face or worn)
- original grants
- registered from 2003 - today

That does include some tablets, MiFis, and M2M devices.  So for a more
representative sample I did a query of all the FCC Grantee codes of all
the WWAN devices I have which gives me a total of 1039 devices from 24
different manufacturers.

These are very restrictive queries because:

1) they do not include devices *not* registered with the FCC, e.g. many
European/Asian/African devices that aren't intended for sale in the US
2) they only included devices registered for the US/Americas PCS band
(1900MHz)
3) the 1039 query only includes a couple of common manufacturers
4) this is for the FCC ID, not the VID/PID of the device.  Many devices
that are re-branded by OEMs (dell, quanta, HP, Acer, etc) will change
the VID/PID for the same hardware.  So a large number of these devices
will have multiple VID/PIDs.

I also have 15+ devices (from 6 or 7 manufacturers) that have no FCC
registration because they were never intended for sale in the US.  There
are also many whitebox devices.

We have 1118 unique USB IDs in option+sierra+qcaux+qcserial+qmi_wwan
kernel drivers.  Which means we're not even close to covering the USB
IDs of just US registered devices, and that doesn't even cover the
devices that are CDC-ACM, MBIM, or NCM and don't need IDs.

Dan




Re: [systemd-devel] [ANNOUNCE] Git development moved to github

2015-06-11 Thread Ronny Chevalier
On Thu, Jun 11, 2015 at 6:31 PM, Filipe Brandenburger
filbran...@google.com wrote:
 On Wed, Jun 10, 2015 at 9:52 AM, Lennart Poettering
 lenn...@poettering.net wrote:
 On Wed, 10.06.15 08:25, Filipe Brandenburger (filbran...@google.com) wrote:
 On Wed, Jun 10, 2015 at 6:31 AM, Alban Crequy al...@endocode.com wrote:
  FWIW it only loses the comments if people comment on individual
  commits instead of commenting on the Files changed tab of a PR. I
  usually comment in this way on purpose instead of commenting on
  commits, so that the history of comments is kept in the PR, even
  after rebase (it might be folded if the chunk of the patch is not
  there anymore, but the comment is still in the PR). If you really want
  to comment on an individual commit (but I don't recommend it), you can
  include the reference of the PR in your comment (#42), then github
  will keep your comment attached to the PR.

 Ah that makes sense!

 Indeed as I explained I like to look at the individual commits, so
 that would explain why my comments would get lost as a new version is
 pushed...

  I think it is fine as it is as long as people comment in the Files
  changed tab.

 Lennart, do you think setting that rule is better than the one PR per
 version of patchset?

 No. We should review commits, not diffs. We also should review commit
 msgs. (see other mail)

 Another downside of adding comments to the commits is that e-mail
 notifications are not sent for them (I just noticed that while lurking
 on #164, I got e-mails for the main thread but not for Lennart's
 comments on commit 5f33680.)

Yes, you need to specify for each PR you are interested in that you
want to receive mail notifications for it... (I think it's the
subscribe button at the bottom)


 I think adding comments on the Files changed would work on cases such as:

 1) The PR contains only a single commit, in which case the diff in
 Files changed will match the commit itself. (You still need to look
 at the commit description, but even if you do it from the Commits
 tab you can't really add any line comments directly to it anyways.)

 2) If the commits change disjoint sets of files (you could check that
 first, and then review the code in the Files changed tab.)

 I think the exception is when a PR is both introducing new code and
 later changing it in a follow up commit but I guess that's not really
 too frequent (though I'm clearly guilty of it on #44.)

 Can we try to add comments to Files changed? Not asking not to look
 at the commits, yes looking at the commits is important! It's just
 that I think if we could have the e-mail notifications for the line
 comments, make sure they are kept in the same thread and be able to
 keep multiple versions of a patchset around in the same PR (instead of
 the wonky PR linking) I think that would be a huge win... We can
 always fall back to opening a new PR and closing the old one, but I'd
 prefer if that was the exception and not the rule...

 It really sounds like what you really want is Gerrit... I think
 gerrithub.io (which I haven't tried personally) might be what bridges
 these two worlds... Makes it easy for the submitters to send you
 commits, makes it easy for the reviewers to adopt the new code, tracks
 pending requests.

 Cheers!
 Filipe


Re: [systemd-devel] Help needed for optimizing my boot time

2015-06-11 Thread Andrei Borzenkov
On Thu, Jun 11, 2015 at 3:10 PM, Francis Moreau francis.m...@gmail.com wrote:
 On 06/11/2015 01:40 PM, Andrei Borzenkov wrote:
 On Thu, Jun 11, 2015 at 2:26 PM, Francis Moreau francis.m...@gmail.com 
 wrote:

$ systemd-analyze critical-chain

graphical.target @7.921s
  multi-user.target @7.921s
autofs.service @7.787s +132ms
  network-online.target @7.786s
network.target @7.786s
  NetworkManager.service @675ms +184ms
basic.target @674ms
  ...

 ...
 Is NetworkManager-wait-online.service enabled and active?


 It seems it's enabled but no more active:

 $ systemctl status NetworkManager-wait-online.service
 ● NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded
 (/usr/lib/systemd/system/NetworkManager-wait-online.service; disabled;
 vendor preset: disabled)
Active: inactive (dead) since Thu 2015-06-11 11:54:37 CEST; 1h 4min ago
   Process: 583 ExecStart=/usr/bin/nm-online -s -q --timeout=30
 (code=exited, status=0/SUCCESS)
  Main PID: 583 (code=exited, status=0/SUCCESS)

 Jun 11 11:54:30 cyclone systemd[1]: Starting Network Manager Wait Online...
 Jun 11 11:54:37 cyclone systemd[1]: Started Network Manager Wait Online.

 This seems correct to me, doesn't it ?


 Actually it says disabled, which makes me wonder why it ran. But this
 is the service that is likely responsible for the long time you observe.

 I think it runs because of this:

 $ ls /usr/lib/systemd/system/network-online.target.wants/
 NetworkManager-wait-online.service

 BTW, why isn't it shown by 'systemd-analyze critical-chain' ?


My best guess is that it has no direct dependency on NetworkManager so
it is not counted as part of the chain. You could try adding

After=NetworkManager.service

to see if it changes anything in systemd-analyze output.
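For reference, a clean way to add such an ordering dependency without
editing the shipped unit file is a drop-in. The directory is the usual
systemd drop-in path; the file name below is made up for this experiment:

```
# /etc/systemd/system/NetworkManager-wait-online.service.d/test-order.conf
# Hypothetical drop-in just for this test: order the wait-online
# service explicitly after NetworkManager.service.
[Unit]
After=NetworkManager.service
```

Then run 'systemctl daemon-reload' and reboot (or re-run
systemd-analyze) to see whether the unit now appears in the chain.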

 If disabling it does not help, you can try masking it (systemctl mask)
 for a test.

 Actually, I'm still not sure why autofs.service is waiting for
 network-online.target to be activated, IOW why this service has
 'After=network-online.target'.


You can discuss it on the autofs list; systemd is just a messenger here :)


Re: [systemd-devel] [PATCH] ima-setup: write policy one line at a time

2015-06-11 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Jun 11, 2015 at 11:28:06AM +0200, Lennart Poettering wrote:
 On Thu, 11.06.15 00:34, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) wrote:
 
  On Thu, Jun 11, 2015 at 01:16:47AM +0200, Lennart Poettering wrote:
   On Wed, 10.06.15 15:38, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) 
   wrote:
   
ima_write_policy() expects data to be written as one or more
rules, no more than PAGE_SIZE at a time. Easiest way to ensure
 that we are not splitting rules is to read and write one line at
a time.

https://bugzilla.redhat.com/show_bug.cgi?id=1226948
---
 src/core/ima-setup.c | 39 +--
 1 file changed, 17 insertions(+), 22 deletions(-)

diff --git a/src/core/ima-setup.c b/src/core/ima-setup.c
index 4d8b638115..5b3d16cd31 100644
--- a/src/core/ima-setup.c
+++ b/src/core/ima-setup.c
@@ -23,9 +23,6 @@
 
 #include unistd.h
 #include errno.h
-#include fcntl.h
-#include sys/stat.h
-#include sys/mman.h
 
 #include ima-setup.h
 #include util.h
@@ -36,20 +33,19 @@
 #define IMA_POLICY_PATH /etc/ima/ima-policy
 
 int ima_setup(void) {
-int r = 0;
-
 #ifdef HAVE_IMA
-_cleanup_close_ int policyfd = -1, imafd = -1;
-struct stat st;
-char *policy;
+_cleanup_fclose_ FILE *input = NULL;
+_cleanup_close_ int imafd = -1;
+char line[LINE_MAX];
   
   Hmm, I wonder if this might bite us. LINE_MAX is a good choice as max
   line length for formats we define in systemd, but the question of
   course is what the the max line length is for IMA...
 
  It's PAGE_SIZE ;) Making this dynamic doesn't make much sense to me,
  but we could make it 4096, as this is the lowest (and common) size.
 
 I don't think this is actually really that bad:
 
 _cleanup_free_ void *line = NULL;
 line = malloc(page_size());
 
 Or, we could even just do alloca(page_size())...
Either would break FOR_EACH_LINE, but line[page_size()] should work.

https://github.com/systemd/systemd/pull/167

What I don't like about having a non-fixed value for the line size is
that the syntactic validity of configuration depends on the kernel you
are running. In practice not an issue, unless you like really long
lines.

@Mimi: could you check that the patch works for you?

Zbyszek


Re: [systemd-devel] Help needed for optimizing my boot time

2015-06-11 Thread Francis Moreau
On 06/11/2015 02:22 PM, Andrei Borzenkov wrote:
 On Thu, Jun 11, 2015 at 3:10 PM, Francis Moreau francis.m...@gmail.com 
 wrote:
 On 06/11/2015 01:40 PM, Andrei Borzenkov wrote:
 On Thu, Jun 11, 2015 at 2:26 PM, Francis Moreau francis.m...@gmail.com 
 wrote:

$ systemd-analyze critical-chain

graphical.target @7.921s
  multi-user.target @7.921s
autofs.service @7.787s +132ms
  network-online.target @7.786s
network.target @7.786s
  NetworkManager.service @675ms +184ms
basic.target @674ms
  ...

 ...
 Is NetworkManager-wait-online.service enabled and active?


 It seems it's enabled but no more active:

 $ systemctl status NetworkManager-wait-online.service
 ● NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded
 (/usr/lib/systemd/system/NetworkManager-wait-online.service; disabled;
 vendor preset: disabled)
Active: inactive (dead) since Thu 2015-06-11 11:54:37 CEST; 1h 4min ago
   Process: 583 ExecStart=/usr/bin/nm-online -s -q --timeout=30
 (code=exited, status=0/SUCCESS)
  Main PID: 583 (code=exited, status=0/SUCCESS)

 Jun 11 11:54:30 cyclone systemd[1]: Starting Network Manager Wait Online...
 Jun 11 11:54:37 cyclone systemd[1]: Started Network Manager Wait Online.

 This seems correct to me, doesn't it ?


 Actually it says disabled, which makes me wonder why it ran. But this
 is the service that is likely responsible for the long time you observe.

 I think it runs because of this:

 $ ls /usr/lib/systemd/system/network-online.target.wants/
 NetworkManager-wait-online.service

 BTW, why isn't it shown by 'systemd-analyze critical-chain' ?

 
 My best guess is that it has no direct dependency on NetworkManager so
 it is not counted as part of the chain. You could try adding
 
 After=NetworkManager.service
 
 to see if it changes anything in systemd-analyze output.
 
 If disabling it does not help, you can try masking it (systemctl mask)
 for a test.

 Actually, I'm still not sure why autofs.service is waiting for
 network-online.target to be activated, IOW why this service has
 'After=network-online.target'.

 
 You can discuss it on the autofs list; systemd is just a messenger here :)
 

Well it's more a systemd configuration question. I think the
'After=network-online.target' in its service file is not really needed.

I tried disabling the autofs service and got a similar issue with the
ntpd one (except network-online.target is not involved here):

  $ systemd-analyze critical-chain
  graphical.target @7.921s
multi-user.target @7.921s
  ntpd.service @7.790s +20ms
network.target @7.786s
  NetworkManager.service @675ms +184ms
basic.target @674ms

  $ systemctl show ntpd -p After
  After=network.target...

Does the ntpd service really need 'After=network.target'? I'm not sure.

Thanks


Re: [systemd-devel] Help needed for optimizing my boot time

2015-06-11 Thread Francis Moreau
On 06/11/2015 01:40 PM, Andrei Borzenkov wrote:
 On Thu, Jun 11, 2015 at 2:26 PM, Francis Moreau francis.m...@gmail.com 
 wrote:

$ systemd-analyze critical-chain

graphical.target @7.921s
  multi-user.target @7.921s
autofs.service @7.787s +132ms
  network-online.target @7.786s
network.target @7.786s
  NetworkManager.service @675ms +184ms
basic.target @674ms
  ...

 ...
 Is NetworkManager-wait-online.service enabled and active?


 It seems it's enabled but no more active:

 $ systemctl status NetworkManager-wait-online.service
 ● NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded
 (/usr/lib/systemd/system/NetworkManager-wait-online.service; disabled;
 vendor preset: disabled)
Active: inactive (dead) since Thu 2015-06-11 11:54:37 CEST; 1h 4min ago
   Process: 583 ExecStart=/usr/bin/nm-online -s -q --timeout=30
 (code=exited, status=0/SUCCESS)
  Main PID: 583 (code=exited, status=0/SUCCESS)

 Jun 11 11:54:30 cyclone systemd[1]: Starting Network Manager Wait Online...
 Jun 11 11:54:37 cyclone systemd[1]: Started Network Manager Wait Online.

 This seems correct to me, doesn't it ?

 
 Actually it says disabled, which makes me wonder why it ran. But this
 is the service that is likely responsible for the long time you observe.

I think it runs because of this:

$ ls /usr/lib/systemd/system/network-online.target.wants/
NetworkManager-wait-online.service

BTW, why isn't it shown by 'systemd-analyze critical-chain' ?

 If disabling it does not help, you can try masking it (systemctl mask)
 for a test.

Actually, I'm still not sure why autofs.service is waiting for
network-online.target to be activated, IOW why this service has
'After=network-online.target'.

 
 OTOH disabled here just means links in [Install] section are not
 present.

   $ cat /usr/lib/systemd/system/NetworkManager-wait-online.service
 ...
 [Install]
 WantedBy=multi-user.target

and I can't find the corresponding symlink in
/usr/lib/systemd/system/multi-user.target.wants/ or
/etc/systemd/system/multi-user.target.wants/

So you seem to be right, and disabled seems a bit misleading here.

 Could you show
 systemctl show NetworkManager-wait-online.service -p WantedBy -p RequiredBy
 

$ systemctl show NetworkManager-wait-online.service -p WantedBy -p
RequiredBy
RequiredBy=
WantedBy=network-online.target

Thanks


Re: [systemd-devel] [ANNOUNCE] Git development moved to github

2015-06-11 Thread Lucas De Marchi
On Tue, Jun 9, 2015 at 7:01 PM, Lennart Poettering
lenn...@poettering.net wrote:
 Well, but it's really weird... If you start out with a patch things
 are tracked as PR. If you start out without a patch things are tracked
 as an issue. And they have quite different workflows, as PRs cannot be
 reopened and issues can, for example.

 I am pretty sure issues should be at the core of things...

 WHat really surprises me about the whole discussion is that we cannot
 be the first ones running into this. Given the success of github this
 must be a common issue. And if it is, then either github is actually
 prety bad, or I am too stuck in my bugzilla mindset and haven't really
 grokked the github way of doing things yet.

You really aren't.  I commented on this thread before, and in my quest
to understand the github model I found several people with the
same problems. It's worth reading
https://github.com/torvalds/linux/pull/17#issuecomment-5654674 -
those are not the same problems you are facing, nor the ones I really
care about, but there is much about the github model there.

Projects with proper per-commit review have a hard time with github
because it's not the github model.  The github model is: you push
lots of things, people may suggest some changes, and the original author
just pushes new code on top.  The pull request in the github UI is just a
chronological view of out-of-place comments and new pushes. There are
exceptions to this, but it pretty much covers the vast majority of
projects really using the issues/PR features in github.  Of course
there are the petty projects in which losing comments doesn't matter
much and reviews are pretty much superficial.  It's really hard to find
projects on github with good commit messages and proper commit
reviews.  And I'd say some of the github limitations push for
this kind of behavior.

Since I care about comments on each patch, what I'm doing in projects I
maintain (and I do have some private repositories) is something
similar to what you suggest: opening a second pull request
and referencing the first one.  Bear in mind, though, that the comments
are *always attached to the commit*, not the pull request. So in the
extreme case where the person sending the pull request removes *his*
remote, you lose the comments. This may not hurt now, but it really
does after one year when you are trying to find that comment.

Then people will try to convince you to comment on the pull request
rather than on the individual commits.  It's rather a sick place to be
for those who are used to proper reviews. Github does have nice features,
integration with other tools, etc. But I was really shocked when their
review system was *the* reason systemd was getting aboard.

Oh... not to mention the pullrequest doesn't show commits in order
(https://help.github.com/articles/why-are-my-commits-in-the-wrong-order/).
I was bitten by this back in 2013 when I was using github much more
and I had forgotten. Looks like things didn't change since then. Now
when I'm reviewing pullrequests I never trust to review them directly
in the browser but I rather pull all the pullrequests with a variant
of your git pullnotes:

alias.pullpr = fetch origin refs/pull/*:refs/pull/*

-- 
Lucas De Marchi


Re: [systemd-devel] [ANNOUNCE] Git development moved to github

2015-06-11 Thread Filipe Brandenburger
On Thu, Jun 11, 2015 at 12:31 PM, Ronny Chevalier
chevalier.ro...@gmail.com wrote:
 On Thu, Jun 11, 2015 at 6:31 PM, Filipe Brandenburger filbran...@google.com 
 wrote:
 Another downside of adding comments to the commits is that e-mail
 notifications are not sent for them (I just noticed that while lurking
 on #164, I got e-mails for the main thread but not for Lennart's
 comments on commit 5f33680.)

 Yes you need to specify for each PR you are interested in that you
 want to receive mail notifications for the PR... (I think it's the
 subscribe button at the bottom)

Sorry I should have explained myself better...

I watch systemd/systemd as a whole, so I get all notifications
without having to ask for them individually...

On #164, I *did* get an e-mail for @zonque's comment (Also, you
forgot to add the new files to Makefile.am and po/LINGUAS...) but I
did *not* get e-mails for @poettering's comments on commit 5f33680
(Hmm, can you please change the commit msg to say this is the catalog
translation? ...) and the replies on that thread (@s8321414 replied
@poettering How can I do this using git? etc.)

I think that's one more symptom of the fact that, for GitHub, the
commit itself doesn't directly belong to the PR, and so does not
belong to the project either...

The e-mail notifications are not really a great big deal (still, they
are annoying), but I think it's just one more sign that adding
comments to the commits will end up causing trouble in the future...

Cheers,
Filipe