Re: [systemd-devel] [RFC][PATCH] udev: net_id - support multi-function, multi-port enpo* device names

2015-07-15 Thread Michael Marineau
Found one of these fabled multi-function network devices you dropped
from the patch: an Intel I350 Gigabit device on a Supermicro
X9DRI-LN4F+ motherboard. Its four network interfaces are all
fighting over the 'eno1' name; they are functions 06:00.0, 06:00.1,
06:00.2, and 06:00.3.
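
If I'm reading the dropped logic right, these four functions would each have
gotten a distinct name instead of colliding. A rough, untested sketch of the
composition (assuming the firmware reports an on-board index of 1 for all four
functions, and that the f<function> suffix is always appended on a
multi-function device):

#include <stdio.h>

int main(void) {
        char name[16];
        unsigned idx = 1;                           /* assumed acpi_index/SMBIOS on-board index */

        for (unsigned func = 0; func < 4; func++) { /* 06:00.0 .. 06:00.3 */
                /* "en" prefix + "o<index>" + "f<function>", as composed in the RFC patch */
                snprintf(name, sizeof(name), "eno%uf%u", idx, func);
                printf("%s\n", name);               /* eno1f0 eno1f1 eno1f2 eno1f3 */
        }
        return 0;
}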

On Wed, Apr 1, 2015 at 1:58 PM, Tom Gundersen  wrote:
> I pushed a version of this only handling the multi-port devices. We
> can deal with multi-function if and when they appear in the wild.
>
> -t
>
> On Wed, Apr 1, 2015 at 4:52 PM, Tom Gundersen  wrote:
>> I'd argue that having firmware labels for such devices makes no sense, but 
>> they exist, so make sure
>> we handle them as best as we can.
>> ---
>>  src/udev/udev-builtin-net_id.c | 64 +++++++++++++++++++++++++++++++++++++++++++---------------------
>>  1 file changed, 43 insertions(+), 21 deletions(-)
>>
>> diff --git a/src/udev/udev-builtin-net_id.c b/src/udev/udev-builtin-net_id.c
>> index 71f3a59..1a72190 100644
>> --- a/src/udev/udev-builtin-net_id.c
>> +++ b/src/udev/udev-builtin-net_id.c
>> @@ -35,7 +35,7 @@
>>   * Type of names:
>>   *   b<number>                             -- BCMA bus core number
>>   *   ccw<name>                             -- CCW bus group name
>> - *   o<index>                              -- on-board device index number
>> + *   o<index>[f<function>][d<dev_port>]    -- on-board device index number
>>   *   s<slot>[f<function>][d<dev_port>]     -- hotplug slot index number
>>   *   x<MAC>                                -- MAC address
>>   *   [P<domain>]p<bus>s<slot>[f<function>][d<dev_port>]
>> @@ -126,11 +126,38 @@ struct netnames {
>>          char ccw_group[IFNAMSIZ];
>>  };
>>
>> +/* read the 256 bytes PCI configuration space to check the multi-function bit */
>> +static bool is_pci_multifunction(struct udev_device *dev) {
>> +        _cleanup_fclose_ FILE *f = NULL;
>> +        const char *filename;
>> +        uint8_t config[64];
>> +
>> +        filename = strjoina(udev_device_get_syspath(dev), "/config");
>> +        f = fopen(filename, "re");
>> +        if (!f)
>> +                return false;
>> +        if (fread(&config, sizeof(config), 1, f) != 1)
>> +                return false;
>> +
>> +        /* bit 0-6 header type, bit 7 multi/single function device */
>> +        if ((config[PCI_HEADER_TYPE] & 0x80) != 0)
>> +                return true;
>> +
>> +        return false;
>> +}
>> +
>>  /* retrieve on-board index number and label from firmware */
>>  static int dev_pci_onboard(struct udev_device *dev, struct netnames *names) {
>> +        unsigned func, dev_port = 0;
>> +        size_t l;
>> +        char *s;
>> +        const char *attr;
>>          const char *index;
>>          int idx;
>>
>> +        if (sscanf(udev_device_get_sysname(names->pcidev), "%*x:%*x:%*x.%u", &func) != 1)
>> +                return -ENOENT;
>> +
>>          /* ACPI _DSM  -- device specific method for naming a PCI or PCI Express device */
>>          index = udev_device_get_sysattr_value(names->pcidev, "acpi_index");
>>          /* SMBIOS type 41 -- Onboard Devices Extended Information */
>> @@ -141,30 +168,25 @@ static int dev_pci_onboard(struct udev_device *dev, struct netnames *names) {
>>          idx = strtoul(index, NULL, 0);
>>          if (idx <= 0)
>>                  return -EINVAL;
>> -        snprintf(names->pci_onboard, sizeof(names->pci_onboard), "o%d", idx);
>>
>> -        names->pci_onboard_label = udev_device_get_sysattr_value(names->pcidev, "label");
>> -        return 0;
>> -}
>> -
>> -/* read the 256 bytes PCI configuration space to check the multi-function bit */
>> -static bool is_pci_multifunction(struct udev_device *dev) {
>> -        _cleanup_fclose_ FILE *f = NULL;
>> -        const char *filename;
>> -        uint8_t config[64];
>> +        /* kernel provided multi-device index */
>> +        attr = udev_device_get_sysattr_value(dev, "dev_port");
>> +        if (attr)
>> +                dev_port = strtol(attr, NULL, 10);
>>
>> -        filename = strjoina(udev_device_get_syspath(dev), "/config");
>> -        f = fopen(filename, "re");
>> -        if (!f)
>> -                return false;
>> -        if (fread(&config, sizeof(config), 1, f) != 1)
>> -                return false;
>> +        s = names->pci_onboard;
>> +        l = sizeof(names->pci_onboard);
>> +        l = strpcpyf(&s, l, "o%d", idx);
>> +        if (func > 0 || is_pci_multifunction(names->pcidev))
>> +                l = strpcpyf(&s, l, "f%d", func);
>> +        if (dev_port > 0)
>> +                l = strpcpyf(&s, l, "d%d", dev_port);
>> +        if (l == 0)
>> +                names->pci_onboard[0] = '\0';
>>
>> -        /* bit 0-6 header type, bit 7 multi/single function device */
>> -        if ((config[PCI_HEADER_TYPE] & 0x80) != 0)
>> -                return true;
>> +        names->pci_onboard_label = udev_device_get_sysattr_value(names->pcidev, "label");
>>
>> -        return false;
>> +        return 0;
>>  }
>>
>>  static int dev_pci_slot(struct udev_device *dev, struct netnames *names) {
>> --
>> 2.3.4
>>

[systemd-devel] Reason for setting runqueue to IDLE priority and side effects if this is changed?

2015-07-15 Thread Hoyer, Marko (ADITG/SW2)
Hi all,

jumping from systemd v206 to systemd v211, we were faced with some issues that are
ultimately caused by a changed main-loop priority of job execution.

Our use case is the following:
--
While the system is starting up, a so-called application starter brings up a set of
applications at a certain point in a controlled way by asking systemd via D-Bus to
start the respective units. The reason is that we have to take applications' internal
states into account as synchronization points, and to better meet the timing
requirements of certain applications in a generic way. I told the story once before in
another post, about watchdog observation in the "activating" state. However ...

Up to v206, the behavior of systemd was the following:
--
- the starter sends out start requests for a batch of applications (it requests
a sequence of unit starts)
- systemd seems to work off the sequence exactly as requested, one by one, in the
same order

Systemd v211 shows a different behavior:
--
- the starter sends out a batch of requests
- the execution of the first job is significantly delayed (the system is under
stress, with high CPU load at that time)
- suddenly, systemd starts working off the jobs, but now in reverse order
- depending on the situation, a complete batch of scheduled jobs may end up
reverse-ordered; sometimes two or more sub-batches of jobs are executed in reverse
order (the jobs within each sub-batch are reverse-ordered)

I found that the behavior of systemd v206 was only accidentally the expected one.
The reason is that in this version the run-queue dispatching was a fixed part of the
main loop, located before the dispatching of events. This way, it effectively had
higher priority than the D-Bus request handling. A job was requested via D-Bus; once
that D-Bus request had been dispatched, the job was worked off immediately in the
next round of the main loop. Then the next D-Bus request was dispatched, and so on ...

Systemd v211 added the run queue as a deferred event source in the event loop, with
priority IDLE. So D-Bus requests are now preferred over job execution. The
reverse-order effect is simply because the run queue is more a stack than a queue.
I think this explains all of the observed behavior.
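
To convince myself of the priority effect, I put together a small standalone test
against the public sd-event API (this is just my own sketch, not systemd's actual
code; the handler names are made up): two defer sources are pending at the same time,
and the one at SD_EVENT_PRIORITY_IDLE is only dispatched once the one at
SD_EVENT_PRIORITY_NORMAL has been handled -- which matches what we see with the run
queue versus the D-Bus connection.

#include <stdio.h>
#include <stdint.h>
#include <systemd/sd-event.h>

static int idle_handler(sd_event_source *s, void *userdata) {
        /* stands in for the run queue source at priority IDLE */
        printf("idle source dispatched (job execution)\n");
        return 0;
}

static int normal_handler(sd_event_source *s, void *userdata) {
        /* stands in for the D-Bus connection source at priority NORMAL */
        printf("normal source dispatched (dbus request)\n");
        return 0;
}

int main(void) {
        sd_event *e = NULL;
        sd_event_source *idle = NULL, *normal = NULL;

        if (sd_event_default(&e) < 0)
                return 1;

        /* both defer sources are pending right away ... */
        sd_event_add_defer(e, &idle, idle_handler, NULL);
        sd_event_source_set_priority(idle, SD_EVENT_PRIORITY_IDLE);

        sd_event_add_defer(e, &normal, normal_handler, NULL);
        sd_event_source_set_priority(normal, SD_EVENT_PRIORITY_NORMAL);

        /* ... but each iteration dispatches the highest-priority pending source,
         * so the NORMAL one always runs before the IDLE one */
        sd_event_run(e, UINT64_MAX);
        sd_event_run(e, UINT64_MAX);

        sd_event_source_unref(idle);
        sd_event_source_unref(normal);
        sd_event_unref(e);
        return 0;
}

Built with "gcc test.c $(pkg-config --cflags --libs libsystemd)", this prints the
"normal" line first and the "idle" line second, regardless of the order in which the
sources were added.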

So much for the long story in advance. I now have a few questions:
- Am I causing any critical side effects by increasing the run-queue priority so that
it is higher than that of the D-Bus handling (which is NORMAL)? First tests showed
that this gives us back exactly the behavior we had before.

- Might situations still arise in which the jobs are reordered, even with the
increased priority?

- Is there any other good solution for ensuring the order of job execution?

Hope someone has some useful feedback.

Thx in advance.

Marko Hoyer

Advanced Driver Information Technology GmbH
Software Group II (ADITG/SW2)
Robert-Bosch-Str. 200
31139 Hildesheim
Germany

Tel. +49 5121 49 6948
Fax +49 5121 49 6999
mho...@de.adit-jv.com

ADIT is a joint venture company of Robert Bosch GmbH/Robert Bosch Car 
Multimedia GmbH and DENSO Corporation
Sitz: Hildesheim, Registergericht: Amtsgericht Hildesheim HRB 3438
Geschaeftsfuehrung: Wilhelm Grabow, Katsuyoshi Maeda


[systemd-devel] How to debug this strange issue about "systemd"?

2015-07-15 Thread sean
Hi All:
I am trying to test the latest upstream kernel, but I am running into a strange
issue with systemd.
When the systemd extracted from the initrd image mounts the real root file system
"hda.img" on "/sysroot" and switches root to the new directory, it cannot find
"/sbin/init" or "/bin/sh".
In fact, these two files exist in "hda.img".
How can I debug this issue?
Why does it not enter emergency mode? If it entered emergency mode, this issue
would probably be easier to debug.

qemu command line:
qemu-kvm -D /tmp/qemu-kvm-machine.log -m 1024M -append 
"root=UUID=20059b62-2542-4a85-80cf-41da6e0c1137 rootflags=rw rootfstype=ext4 
debug debug_objects console=ttyS0,115200n8 console=tty0
 rd.debug rd.shell=1 log_buf_len=1M systemd.unit=emergency.target 
systemd.log_level=debug systemd.log_target=console" -kernel 
./qemu_platform/bzImage -hda ./qemu_platform/hda.img 
-initrd ./qemu_platform/initrd-4.1.0-rc2-7-desktop+ -device 
e1000,netdev=network0 -netdev user,id=network0 -serial 
file:/home/sean/work/source/upstream/kernel.org/ttys0.txt

Job initrd-switch-root.target/start finished, result=done
Accepted new private connection.
Got message type=signal sender=n/a destination=n/a 
object=/org/freedesktop/systemd1/agent interface=org.freedesktop.systemd1.Agent 
member=Released cookie=1 reply_cookie=0 error=n/a
Got disconnect on private connection.
Received SIGCHLD from PID 520 (plymouth).
Child 520 (plymouth) died (code=exited, status=0/SUCCESS)
Child 520 belongs to plymouth-switch-root.service
plymouth-switch-root.service: main process exited, code=exited, status=0/SUCCESS
plymouth-switch-root.service changed start -> dead
Job plymouth-switch-root.service/start finished, result=done
plymouth-switch-root.service: cgroup is empty
ConditionPathExists=/etc/initrd-release succeeded for 
initrd-switch-root.service.
About to execute: /usr/bin/systemctl --no-block --force switch-root /sysroot
Forked /usr/bin/systemctl as 527
initrd-switch-root.service changed dead -> start
Accepted new private connection.
Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager 
member=SwitchRoot cookie=1 reply_cookie=0 error=n/a
Sent message type=method_return sender=n/a destination=n/a object=n/a 
interface=n/a member=n/a cookie=1 reply_cookie=1 error=n/a
Serializing state to /run/systemd
Switching root.
Closing left-over fd 21
Closing left-over fd 22
Closing left-over fd 23
Closing left-over fd 26
Closing left-over fd 27
No /sbin/init, trying fallback
Failed to execute /bin/sh, giving up: No such file or directory

sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform> sudo mount -o loop ./hda.img ./hda
sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform> ls -l ./hda/sbin/init
lrwxrwxrwx 1 sean users 26 Jul 14 22:49 ./hda/sbin/init -> ../usr/lib/systemd/systemd
sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform> ls -l ./hda/bin/sh
lrwxrwxrwx 1 sean users 4 Oct 26  2014 ./hda/bin/sh -> bash

sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform> lsinitrd ./initrd-4.1.0-rc2-7-desktop+ |grep "sbin\/init"
-rwxr-xr-x   1 root root 1223 Nov 27  2014 sbin/initqueue
lrwxrwxrwx   1 root root   26 Jul 14 21:00 sbin/init -> ../usr/lib/systemd/systemd
sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform> lsinitrd ./initrd-4.1.0-rc2-7-desktop+ |grep "bin\/sh"
lrwxrwxrwx   1 root root    4 Jul 14 21:00 bin/sh -> bash



-- 
Sean. XinRong Fu
Dedicated System Engineer
SUSE
x...@suse.com
(P)+86 18566229618




