[systemd-devel] networkd must start before nspawn@container

2015-05-02 Thread arnaud gaboury
My host and container networking are both managed by systemd-networkd. I
have a bridge br0 on the host and vb-MyContainer for the container. Both
have a fixed local IP.

I boot the container at host boot this way:

--
$ cat /etc/systemd/system/systemd-nspawn@.service
.
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot
--link-journal=try-guest --network-bridge=br0 --machine=%I
--

Unfortunately, systemd-nspawn@poppy sometimes fails at boot:


$ systemctl status systemd-nspawn@poppy
● systemd-nspawn@poppy.service - Container poppy
   Loaded: loaded (/etc/systemd/system/systemd-nspawn@.service;
enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2015-05-01 19:34:56
CEST; 50s ago
 Docs: man:systemd-nspawn(1)
  Process: 544 ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-bridge=br0 --machine=%I (code=exited, status=1/FAILURE)
 Main PID: 544 (code=exited, status=1/FAILURE)

May 01 19:34:55 hortensia systemd[1]: Starting Container poppy...
May 01 19:34:55 hortensia systemd-nspawn[544]: Failed to resolve
interface br0: No such device
May 01 19:34:56 hortensia systemd[1]: systemd-nspawn@poppy.service:
main process exited, code=exite...LURE
May 01 19:34:56 hortensia systemd[1]: Failed to start Container poppy.
May 01 19:34:56 hortensia systemd[1]: Unit
systemd-nspawn@poppy.service entered failed state.
May 01 19:34:56 hortensia systemd[1]: systemd-nspawn@poppy.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
--

Obviously the reason is that networkd has not been activated yet. I solved this
issue this way:

$ cat /etc/systemd/system/network.target
--
[Unit]
Description=Network
Documentation=man:systemd.special(7)
Documentation=http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget
After=network-pre.target
RefuseManualStart=yes

[Install]
WantedBy=machines.target
--
# systemctl enable machines.target

I added machines.target to the Before= option in systemd-networkd.service:
$ cat /etc/systemd/system/systemd-networkd.service
--
.
Before=network.target multi-user.target shutdown.target machines.target
..
--

My issue is now solved. I just wonder whether my setup is good practice.
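
For reference, a sketch of an alternative that orders the container unit itself
after the network instead of overriding network.target (the drop-in path and the
use of network-online.target are assumptions, not something tested in this thread):

$ cat /etc/systemd/system/systemd-nspawn@.service.d/network.conf
--
[Unit]
Wants=network-online.target
After=network-online.target
--

With networkd, network-online.target is normally only delayed until the links are
configured if systemd-networkd-wait-online.service is enabled.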

Thank you for your advice




google.com/+arnaudgabourygabx
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 2/2] udev: Allow detection of udevadm settle timeout

2015-05-02 Thread Nir Soffer
On Tue, Apr 21, 2015 at 12:41 AM, Tom Gundersen t...@jklm.no wrote:
 On Mon, Apr 13, 2015 at 3:04 PM, Nir Soffer nir...@gmail.com wrote:
 On Sat, Apr 11, 2015 at 6:58 PM, David Herrmann dh.herrm...@gmail.com 
 wrote:
 A program running this tool can detect a timeout (expected) or an error
 (unexpected), and can change the program flow based on this result.

 Without this, the only way to detect a timeout is to implement the timeout
 in the program calling udevadm.

 I cannot really see a use-case here. I mean, yeah, the commit-message
 says it warns about timeouts but fails loudly on real errors. But
 again, what's the use-case? Why is a timeout not a real error? Why do
 you need to handle it differently?

 Timeout means that the value I chose may be too small, or the machine
 is overloaded. The administrator may need to configure the system
 differently.

 Other errors are not expected, and typically unexpected errors in an
 underlying tool means getting the developer of the underlying tool
 involved.

 Anyway, if it's only about diagnostics this patch seems fine to me.

 Yes, it is mainly about diagnostics, and making it easier to debug and 
 support.

 Wouldn't a better solution be to improve the udevadm logging? If we
 change the exit codes this is basically ABI. Do we really want to make
 such promises only for diagnostics?

Improving logging is orthogonal, as it does not allow the program to change
its flow based on the exit code.

Adding a timeout exit code may break code that relies on undocumented behavior,
since the current return code is not documented.
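
For context, a minimal sketch of what callers have to do today, i.e. impose
their own timeout around udevadm settle from the calling program (the timeout
value and return strings are only illustrative):

    import subprocess

    def wait_for_settle(timeout=30):
        # Let the caller, not udevadm, decide what a timeout means.
        try:
            subprocess.check_call(["udevadm", "settle"], timeout=timeout)
            return "settled"
        except subprocess.TimeoutExpired:
            return "timeout"   # expected: maybe raise the limit or just warn
        except subprocess.CalledProcessError:
            raise              # unexpected: a real udevadm error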

Nir
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] pam_systemd.so indirectly calling pam_acct_mgmt

2015-05-02 Thread Lennart Poettering
On Fri, 01.05.15 08:29, Stephen Gallagher (sgall...@redhat.com) wrote:

 Right, so based on this information, it seems to me that in SSSD we
 need to be treating the 'systemd-user' PAM service the same way we do
 the 'cron' service. The idea being that this is meant to handle
 actions performed *as* a user but not *by* a user (for lack of a
 better distinction).
 
 In the terms of how Microsoft Active Directory would treat it (and
 when we're using AD as the identity and authorization store), it
 should be handled as the [Allow|Deny]BatchLogonRight permission which
 is described by MS as:
 
 This security setting allows a user to be logged on by means of a
 batch-queue facility.
 
 For example, when a user submits a job by means of the task scheduler,
 the task scheduler logs that user on as a batch user rather than as an
 interactive user.
 
 Does that seem to match everyone's expectation here?

Well, I guess for now. But note that eventually we hope to move most
programs invoked from .desktop into this as systemd services. This
then means that the actual sessions will become pretty empty, with
only stubs remaining that trigger services off this user instance of
systemd.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] pam_systemd.so indirectly calling pam_acct_mgmt

2015-05-02 Thread Stephen Gallagher


 On May 2, 2015, at 3:48 AM, Lennart Poettering lenn...@poettering.net wrote:
 
 On Fri, 01.05.15 08:29, Stephen Gallagher (sgall...@redhat.com) wrote:
 
 Right, so based on this information, it seems to me that in SSSD we
 need to be treating the 'systemd-user' PAM service the same way we do
 the 'cron' service. The idea being that this is meant to handle
 actions performed *as* a user but not *by* a user (for lack of a
 better distinction).
 
 In the terms of how Microsoft Active Directory would treat it (and
 when we're using AD as the identity and authorization store), it
 should be handled as the [Allow|Deny]BatchLogonRight permission which
 is described by MS as:
 
 This security setting allows a user to be logged on by means of a
 batch-queue facility.
 
 For example, when a user submits a job by means of the task scheduler,
 the task scheduler logs that user on as a batch user rather than as an
 interactive user.
 
 Does that seem to match everyone's expectation here?
 
 Well, I guess for now. But note that eventually we hope to move most
 programs invoked from .desktop into this as systemd services. This
 then means that the actual sessions will become pretty empty, with
 only stubs remaining that trigger services off this user instance of
 systemd.
 

If you do that, you will still need some way to invoke PAM with different
service identities, otherwise you'll be introducing a pretty severe
vulnerability into the system. If all services are authorized by the same PAM
service, it amounts to removing the ability for administrators to differentiate
which actions a particular user is allowed to perform.

There is definite value in allowing a user to interact with the samba 
file-sharing daemon on a system without also being allowed to log on 
interactively to that system. Similarly, just because someone can log onto a 
system, that doesn't automatically mean it's reasonable for them to be allowed 
to run automated services when they are not present.

How are you planning on handling these distinctions in this new design? If the
answer is polkit, that's its own can of worms, as polkit rules are notoriously
difficult to manage centrally (the FreeIPA and SSSD projects made an attempt
several years ago and aborted, and that was _before_ the .pkla -> JS move).

 Lennart
 
 -- 
 Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Journald logging handler for Python 3 and AsyncIO integration

2015-05-02 Thread Zbigniew Jędrzejewski-Szmek
On Sat, May 02, 2015 at 03:34:52PM +0200, Ludovic Gasc wrote:
 Ok, my bad, I didn't see JournalHandler class to use with Python logging:
 http://www.freedesktop.org/software/systemd/python-systemd/journal.html#journalhandler-class
 
 Nevertheless, my question about communication between Python and journald
 remains.
Can you rephrase the question? I don't quite understand what functionality
you're missing from
http://www.freedesktop.org/software/systemd/python-systemd/journal.html#systemd.journal.send
 .

Zbyszek

 
 --
 Ludovic Gasc (GMLudo)
 http://www.gmludo.eu/
 
 2015-05-02 15:12 GMT+02:00 Ludovic Gasc gml...@gmail.com:
 
  Hi,
 
  With the new release of Debian Jessie and systemd+journald integration,
  I'm looking for how to modernize our Python 3 toolbox to build daemons.
 
  For now on Debian Wheezy, we use a SysLogHandler with UNIX socket:
  https://docs.python.org/3.4/library/logging.handlers.html#sysloghandler
  + a custom rsyslog+logrotate configuration to split and manage log files.
 
  From sysvinit to systemd migration to start our daemons, it should be
  easy, I've found this documentation:
  http://gunicorn-docs.readthedocs.org/en/latest/deploy.html#systemd
 
  But for journald, even if I can use syslog UNIX socket provided by
  journald, I want to benefit the features of journald, especially structured
  logs.
 
  I've seen the Python binding for journald:
  http://www.freedesktop.org/software/systemd/python-systemd/journal.html
  Nevertheless, I've two questions:
 
 1. I've seen no python logging handler for journald. Is it a desired
 situation or it's because no time to implement that ? Could you be
 interested in by an handler with journald ?
 2. We use heavily AsyncIO module to have async pattern in Python,
 especially for I/O: https://docs.python.org/3/library/asyncio.html
 In the source code of python-systemd, I've seen that you use a C glue
 to interact with journald, but I don't understand what's the 
  communication
 between my Python daemon process and journald: unix sockets ? Other
 mechanism ? Depends on the mechanism, it should be have an impact for us.
 
  Thanks for your answers.
 
  Have a nice week-end.
  --
  Ludovic Gasc (GMLudo)
  http://www.gmludo.eu/
 


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Journald logging handler for Python 3 and AsyncIO integration

2015-05-02 Thread Jörg Thalheim
On Sat, 2 May 2015 16:31:44 +0200
Ludovic Gasc gml...@gmail.com wrote:

 2015-05-02 16:18 GMT+02:00 Zbigniew Jędrzejewski-Szmek
 zbys...@in.waw.pl:
 
  On Sat, May 02, 2015 at 03:34:52PM +0200, Ludovic Gasc wrote:
   Ok, my bad, I didn't see JournalHandler class to use with Python
   logging:
  
  http://www.freedesktop.org/software/systemd/python-systemd/journal.html#journalhandler-class
  
   Nevertheless, my question about communication between Python and
   journald remains.
  Can you rephrase the question? I don't quite understand what
  functionality you're missing from
 
  http://www.freedesktop.org/software/systemd/python-systemd/journal.html#systemd.journal.send
  .
 
 
 In AsyncIO, when you interact with the outside world, you need to use
 yield from, for example:
 https://docs.python.org/3/library/asyncio-stream.html#tcp-echo-client-using-streams
 
 yield from means to the Python interpreter to pause the coroutine
 to work on another coroutine until data arrive.
 To be efficient, I/O should handled by AsyncIO, even if some
 workarounds are possible if the raw socket isn't accessible.
 
 If this pattern is useful for HTTP requests, for logging where you
 shouldn't wait a return, it isn't very critical, especially if you use
 syslog protocol with UNIX socket.
 
 Nevertheless, to be sure, I wish to understand how the communication
 works between my process and journald daemon.

AsyncIO was not there when these bindings were written. Inventing an
asynchronous API on top of a synchronous API (as provided by sd_journal_sendv)
might be hard.
So I have two ideas here:

1. use a background thread which writes messages to journald from a ring
buffer, and fill this ring buffer from your application.
2. write to the unix socket /run/systemd/journal/socket directly with python
and asyncIO (see the sketch after this list);
   this is not as hard as it sounds because it is still some kind of text
protocol. The CoreOS guys did this in Go here:
   https://github.com/coreos/go-systemd/blob/master/journal/send.go#L68
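
A minimal sketch of idea 2 in Python (the helper name and the extra field names
are made up; values containing newlines would need the length-prefixed encoding
used in the Go code above, and sd_journal_sendv additionally falls back to
passing a file descriptor for very large entries, which this sketch does not do):

    import socket

    def journal_send_native(message, **fields):
        # One journal entry == one datagram of "KEY=value" lines.
        payload = "MESSAGE=%s\n" % message
        for key, value in fields.items():
            payload += "%s=%s\n" % (key.upper(), value)
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        try:
            sock.sendto(payload.encode("utf-8"), "/run/systemd/journal/socket")
        finally:
            sock.close()

    journal_send_native("hello journald", PRIORITY=6, TENANT="demo")

Because the whole entry goes out in a single datagram, the send either completes
immediately or fails; there is not much for AsyncIO to await here.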


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Journald logging handler for Python 3 and AsyncIO integration

2015-05-02 Thread Ludovic Gasc
Ok, my bad, I didn't see JournalHandler class to use with Python logging:
http://www.freedesktop.org/software/systemd/python-systemd/journal.html#journalhandler-class

Nevertheless, my question about communication between Python and journald
remains.

--
Ludovic Gasc (GMLudo)
http://www.gmludo.eu/

2015-05-02 15:12 GMT+02:00 Ludovic Gasc gml...@gmail.com:

 Hi,

 With the new release of Debian Jessie and systemd+journald integration,
 I'm looking for how to modernize our Python 3 toolbox to build daemons.

 For now on Debian Wheezy, we use a SysLogHandler with UNIX socket:
 https://docs.python.org/3.4/library/logging.handlers.html#sysloghandler
 + a custom rsyslog+logrotate configuration to split and manage log files.

 From sysvinit to systemd migration to start our daemons, it should be
 easy, I've found this documentation:
 http://gunicorn-docs.readthedocs.org/en/latest/deploy.html#systemd

 But for journald, even if I can use syslog UNIX socket provided by
 journald, I want to benefit the features of journald, especially structured
 logs.

 I've seen the Python binding for journald:
 http://www.freedesktop.org/software/systemd/python-systemd/journal.html
 Nevertheless, I've two questions:

1. I've seen no python logging handler for journald. Is it a desired
situation or it's because no time to implement that ? Could you be
interested in by an handler with journald ?
2. We use heavily AsyncIO module to have async pattern in Python,
especially for I/O: https://docs.python.org/3/library/asyncio.html
In the source code of python-systemd, I've seen that you use a C glue
to interact with journald, but I don't understand what's the communication
between my Python daemon process and journald: unix sockets ? Other
mechanism ? Depends on the mechanism, it should be have an impact for us.

 Thanks for your answers.

 Have a nice week-end.
 --
 Ludovic Gasc (GMLudo)
 http://www.gmludo.eu/

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Journald logging handler for Python 3 and AsyncIO integration

2015-05-02 Thread Zbigniew Jędrzejewski-Szmek
On Sat, May 02, 2015 at 05:22:49PM +0200, Jörg Thalheim wrote:
 On Sat, 2 May 2015 16:31:44 +0200
 Ludovic Gasc gml...@gmail.com wrote:
 
  2015-05-02 16:18 GMT+02:00 Zbigniew Jędrzejewski-Szmek
  zbys...@in.waw.pl:
  
   On Sat, May 02, 2015 at 03:34:52PM +0200, Ludovic Gasc wrote:
Ok, my bad, I didn't see JournalHandler class to use with Python
logging:
   
   http://www.freedesktop.org/software/systemd/python-systemd/journal.html#journalhandler-class
   
Nevertheless, my question about communication between Python and
journald remains.
   Can you rephrase the question? I don't quite understand what
   functionality you're missing from
  
   http://www.freedesktop.org/software/systemd/python-systemd/journal.html#systemd.journal.send
   .
  
  
  In AsyncIO, when you interact with the outside world, you need to use
  yield from, for example:
  https://docs.python.org/3/library/asyncio-stream.html#tcp-echo-client-using-streams
  
  yield from means to the Python interpreter to pause the coroutine
  to work on another coroutine until data arrive.
  To be efficient, I/O should handled by AsyncIO, even if some
  workarounds are possible if the raw socket isn't accessible.
  
  If this pattern is useful for HTTP requests, for logging where you
  shouldn't wait a return, it isn't very critical, especially if you use
  syslog protocol with UNIX socket.
  
  Nevertheless, to be sure, I wish to understand how the communication
  works between my process and journald daemon.
 
 AsyncIO was not their, when these bindings were written. Inventing a
 asynchron API on top of a synchron API (as provided by sd_journal_sendv) 
 might be hard.
 So I have two ideas here:
I'm not sure if it is worth the trouble. Sending to the journal should be very
fast, and the overhead of dispatching to a separate thread is likely to only
slow things down.

Zbyszek

 1. use a background thread, which writes messages to journald from a ring 
 buffer and fill this ringbuffer from your application.
 2. write to the unix socket /run/systemd/journal/socket directly with python 
 and asyncIO, 
this is not as hard as it sounds because it is still some kind of text 
 protocol. The coreos guys did this in go here:
https://github.com/coreos/go-systemd/blob/master/journal/send.go#L68




___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Journald logging handler for Python 3 and AsyncIO integration

2015-05-02 Thread Zbigniew Jędrzejewski-Szmek
On Sat, May 02, 2015 at 04:31:44PM +0200, Ludovic Gasc wrote:
 2015-05-02 16:18 GMT+02:00 Zbigniew Jędrzejewski-Szmek zbys...@in.waw.pl:
 
  On Sat, May 02, 2015 at 03:34:52PM +0200, Ludovic Gasc wrote:
   Ok, my bad, I didn't see JournalHandler class to use with Python logging:
  
  http://www.freedesktop.org/software/systemd/python-systemd/journal.html#journalhandler-class
  
   Nevertheless, my question about communication between Python and journald
   remains.
  Can you rephrase the question? I don't quite understand what functionality
  you're missing from
 
  http://www.freedesktop.org/software/systemd/python-systemd/journal.html#systemd.journal.send
  .
 
 
 In AsyncIO, when you interact with the outside world, you need to use
 yield from, for example:
 https://docs.python.org/3/library/asyncio-stream.html#tcp-echo-client-using-streams
 
 yield from means to the Python interpreter to pause the coroutine to
 work on another coroutine until data arrive.
 To be efficient, I/O should handled by AsyncIO, even if some workarounds
 are possible if the raw socket isn't accessible.
 
 If this pattern is useful for HTTP requests, for logging where you
 shouldn't wait a return, it isn't very critical, especially if you use
 syslog protocol with UNIX socket.
 
 Nevertheless, to be sure, I wish to understand how the communication works
 between my process and journald daemon.
systemd.journal.send is a thin wrapper around sd_journal_send, which calls 
sd_journal_sendv,
which uses sendmsg() to send to a unix socket at /run/systemd/journal/socket.
You can see the sources at
http://cgit.freedesktop.org/systemd/systemd/tree/src/journal/journal-send.c?id=HEAD#n199.
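
For completeness, a small example of the two documented entry points from Python
(module and class names as in the python-systemd documentation linked above; the
extra field names and the SYSLOG_IDENTIFIER value are just examples):

    import logging
    from systemd import journal

    # One-shot structured entry with custom fields:
    journal.send("request failed", TENANT="demo", REQUEST_ID="42")

    # Or plug journald into the stdlib logging machinery:
    log = logging.getLogger("mydaemon")
    log.addHandler(journal.JournalHandler(SYSLOG_IDENTIFIER="mydaemon"))
    log.setLevel(logging.INFO)
    log.info("daemon started")

Each entry normally ends up as a single sendmsg() on /run/systemd/journal/socket,
as described above.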

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Journald logging handler for Python 3 and AsyncIO integration

2015-05-02 Thread Ludovic Gasc
Hi,

With the new release of Debian Jessie and its systemd+journald integration, I'm
looking at how to modernize our Python 3 toolbox for building daemons.

For now, on Debian Wheezy, we use a SysLogHandler with a UNIX socket:
https://docs.python.org/3.4/library/logging.handlers.html#sysloghandler
plus a custom rsyslog+logrotate configuration to split and manage log files.

Migrating from sysvinit to systemd to start our daemons should be easy;
I've found this documentation:
http://gunicorn-docs.readthedocs.org/en/latest/deploy.html#systemd

But for journald, even if I can use the syslog UNIX socket provided by
journald, I want to benefit from the features of journald, especially
structured logs.

I've seen the Python binding for journald:
http://www.freedesktop.org/software/systemd/python-systemd/journal.html
Nevertheless, I have two questions:

   1. I've seen no Python logging handler for journald. Is this deliberate,
   or just because nobody has had time to implement it? Would you be
   interested in such a handler for journald?
   2. We use the AsyncIO module heavily to get async patterns in Python,
   especially for I/O: https://docs.python.org/3/library/asyncio.html
   In the source code of python-systemd, I've seen that you use C glue to
   interact with journald, but I don't understand how the communication
   between my Python daemon process and journald works: unix sockets? Another
   mechanism? Depending on the mechanism, it could have an impact for us.

Thanks for your answers.

Have a nice week-end.
--
Ludovic Gasc (GMLudo)
http://www.gmludo.eu/
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Journald logging handler for Python 3 and AsyncIO integration

2015-05-02 Thread Ludovic Gasc
2015-05-02 16:18 GMT+02:00 Zbigniew Jędrzejewski-Szmek zbys...@in.waw.pl:

 On Sat, May 02, 2015 at 03:34:52PM +0200, Ludovic Gasc wrote:
  Ok, my bad, I didn't see JournalHandler class to use with Python logging:
 
 http://www.freedesktop.org/software/systemd/python-systemd/journal.html#journalhandler-class
 
  Nevertheless, my question about communication between Python and journald
  remains.
 Can you rephrase the question? I don't quite understand what functionality
 you're missing from

 http://www.freedesktop.org/software/systemd/python-systemd/journal.html#systemd.journal.send
 .


In AsyncIO, when you interact with the outside world, you need to use
yield from, for example:
https://docs.python.org/3/library/asyncio-stream.html#tcp-echo-client-using-streams

yield from tells the Python interpreter to pause the coroutine and
work on another coroutine until data arrive.
To be efficient, I/O should be handled by AsyncIO, even if some workarounds
are possible when the raw socket isn't accessible.

While this pattern is useful for HTTP requests, for logging, where you
shouldn't wait for a return, it isn't very critical, especially if you use
the syslog protocol over a UNIX socket.

Nevertheless, to be sure, I wish to understand how the communication works
between my process and the journald daemon.


 Zbyszek

 
  --
  Ludovic Gasc (GMLudo)
  http://www.gmludo.eu/
 
  2015-05-02 15:12 GMT+02:00 Ludovic Gasc gml...@gmail.com:
 
   Hi,
  
   With the new release of Debian Jessie and systemd+journald integration,
   I'm looking for how to modernize our Python 3 toolbox to build daemons.
  
   For now on Debian Wheezy, we use a SysLogHandler with UNIX socket:
  
 https://docs.python.org/3.4/library/logging.handlers.html#sysloghandler
   + a custom rsyslog+logrotate configuration to split and manage log
 files.
  
   From sysvinit to systemd migration to start our daemons, it should be
   easy, I've found this documentation:
   http://gunicorn-docs.readthedocs.org/en/latest/deploy.html#systemd
  
   But for journald, even if I can use syslog UNIX socket provided by
   journald, I want to benefit the features of journald, especially
 structured
   logs.
  
   I've seen the Python binding for journald:
  
 http://www.freedesktop.org/software/systemd/python-systemd/journal.html
   Nevertheless, I've two questions:
  
  1. I've seen no python logging handler for journald. Is it a desired
  situation or it's because no time to implement that ? Could you be
  interested in by an handler with journald ?
  2. We use heavily AsyncIO module to have async pattern in Python,
  especially for I/O: https://docs.python.org/3/library/asyncio.html
  In the source code of python-systemd, I've seen that you use a C
 glue
  to interact with journald, but I don't understand what's the
 communication
  between my Python daemon process and journald: unix sockets ? Other
  mechanism ? Depends on the mechanism, it should be have an impact
 for us.
  
   Thanks for your answers.
  
   Have a nice week-end.
   --
   Ludovic Gasc (GMLudo)
   http://www.gmludo.eu/
  



___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 3/3] Use a stamp file to avoid running systemd-fsck-root.service twice

2015-05-02 Thread systemd github import bot
Patchset imported to github.
Pull request:
https://github.com/systemd-devs/systemd/compare/master...systemd-mailing-devs:1430587019-25642-4-git-send-email-zbyszek%40in.waw.pl

--
Generated by https://github.com/haraldh/mail2git
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Journald logging handler for Python 3 and AsyncIO integration

2015-05-02 Thread Ludovic Gasc
2015-05-02 17:22 GMT+02:00 Jörg Thalheim joerg.syst...@higgsboson.tk:

 On Sat, 2 May 2015 16:31:44 +0200
 Ludovic Gasc gml...@gmail.com wrote:

  2015-05-02 16:18 GMT+02:00 Zbigniew Jędrzejewski-Szmek
  zbys...@in.waw.pl:
 
   On Sat, May 02, 2015 at 03:34:52PM +0200, Ludovic Gasc wrote:
Ok, my bad, I didn't see JournalHandler class to use with Python
logging:
   
  
 http://www.freedesktop.org/software/systemd/python-systemd/journal.html#journalhandler-class
   
Nevertheless, my question about communication between Python and
journald remains.
   Can you rephrase the question? I don't quite understand what
   functionality you're missing from
  
  
 http://www.freedesktop.org/software/systemd/python-systemd/journal.html#systemd.journal.send
   .
  
  
  In AsyncIO, when you interact with the outside world, you need to use
  yield from, for example:
 
 https://docs.python.org/3/library/asyncio-stream.html#tcp-echo-client-using-streams
 
  yield from means to the Python interpreter to pause the coroutine
  to work on another coroutine until data arrive.
  To be efficient, I/O should handled by AsyncIO, even if some
  workarounds are possible if the raw socket isn't accessible.
 
  If this pattern is useful for HTTP requests, for logging where you
  shouldn't wait a return, it isn't very critical, especially if you use
  syslog protocol with UNIX socket.
 
  Nevertheless, to be sure, I wish to understand how the communication
  works between my process and journald daemon.

 AsyncIO was not their, when these bindings were written. Inventing a
 asynchron API on top of a synchron API (as provided by sd_journal_sendv)
 might be hard.
 So I have two ideas here:

 1. use a background thread, which writes messages to journald from a ring
 buffer and fill this ringbuffer from your application.


As mentioned by Zbigniew, it would be slower than sending directly,
especially because with Python we have the GIL for threads.


 2. write to the unix socket /run/systemd/journal/socket directly with
 python and asyncIO,
this is not as hard as it sounds because it is still some kind of text
 protocol. The coreos guys did this in go here:
https://github.com/coreos/go-systemd/blob/master/journal/send.go#L68


That's often the approach taken by the AsyncIO community: rewrite or cherry-pick
a protocol's business logic and handle the sockets with AsyncIO directly.
Nevertheless, in the case of a UNIX socket, especially with a datagram packet
where you don't wait after sending, there is almost no benefit to adapting
that for AsyncIO.

Thank you guys for these answers; I'm continuing to play with this, and I
think you have given me everything I need to get it working.
If I have an issue, I'll open a pull request and post the patch on this
mailing list.






___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 3/3] Use a stamp file to avoid running systemd-fsck-root.service twice

2015-05-02 Thread Andrei Borzenkov
On Sat, 2 May 2015 13:16:59 -0400,
Zbigniew Jędrzejewski-Szmek zbys...@in.waw.pl wrote:

 In the initramfs, we run systemd-fsck@sysroot-device.service.
 In the real system we run systemd-fsck-root.service. It is hard
 to pass the information that the latter should not run if the first
 succeeded using unit state only.
 
 - in the real system, we need a synchronization point between the fsck
   for root and other fscks, to express the dependency to run this
   systemd-fsck@.service before all other systemd-fsck@ units. We
   cannot express it directly, because there are no wildcard
   dependencies. We could use a target as a sychronization point, but
   then we would have to provide drop-ins to order
   systemd-fsck@-.service before the target, and all others after it,
   which becomes messy. The currently used alternative of having a
   special unit (systemd-fsck-root.service) makes it easy to express
   this dependency, and seems to be the best solution.
 
 - we cannot use systemd-fsck-root.service in the initramfs, because
   other fsck units should not be ordered after it. In the real system,
   the root device is always checked and mounted before other filesystems,
   but in the initramfs this doesn't have to be true: /sysroot might be
   stacked on other filesystems and devices.
 
 - the name of the root device can legitimately be specified in a
   different way in the initramfs (on the kernel command line, or
   automatically discovered through GPT), and in the real fs (in /etc/fstab).
   Even if we didn't need systemd-fsck-root.service as a synchronization
   point, it would be hard to ensure the same instance parameter is
   provided for systemd-fsck@.service in the initrams and the real
   system.
 
 Let's use a side channel to pass this information.
 /run/systemd/fsck-root-done is touched after fsck in the initramfs
 succeeds, through an ExecStartPost line in a drop-in for
 systemd-fsck@sysroot.service.

You probably should mention that it effectively reverts
956eaf2b8d6c024705ddadc7393bc707de02.

 
 https://bugzilla.redhat.com/show_bug.cgi?id=1201979
 ---
  src/shared/generator.c | 7 +++
  units/systemd-fsck-root.service.in | 1 +
  2 files changed, 8 insertions(+)
 
 diff --git a/src/shared/generator.c b/src/shared/generator.c
 index 7b2f846175..a71222d1cb 100644
 --- a/src/shared/generator.c
 +++ b/src/shared/generator.c
 @@ -78,6 +78,13 @@ int generator_write_fsck_deps(
  RequiresOverridable=%1$s\n
  After=%1$s\n,
  fsck);
 +
 +if (in_initrd()  path_equal(where, /sysroot))
 +return write_drop_in_format(dir, fsck, 50, stamp,
 +# Automatically 
 generated by %s\n\n
 +[Service]\n
 +
 ExecStartPost=-/bin/touch /run/systemd/fsck-root-done\n,
 +
 program_invocation_short_name);
  }
  
  return 0;
 diff --git a/units/systemd-fsck-root.service.in 
 b/units/systemd-fsck-root.service.in
 index 3617abf04a..48dacc841c 100644
 --- a/units/systemd-fsck-root.service.in
 +++ b/units/systemd-fsck-root.service.in
 @@ -11,6 +11,7 @@ Documentation=man:systemd-fsck-root.service(8)
  DefaultDependencies=no
  Before=local-fs.target shutdown.target
  ConditionPathIsReadWrite=!/
 +ConditionPathExists=!/run/systemd/fsck-root-done
  
  [Service]
  Type=oneshot

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [PATCH 0/3] avoid running fsck twice

2015-05-02 Thread Zbigniew Jędrzejewski-Szmek
This is an attempt to fix the issue with fsck running twice. Patch 3/3 is
the only important one, the other two are nice-to-have.

dracut configuration also has to be modified to include /bin/touch.
I'll work on a patch for that later.

I also tried the approach listed in TODO: generate systemd-fsck-root.service
in the initramfs. The problem is that systemd-fsck@.service has a dependency
After=systemd-fsck-root, which is wrong in the initramfs, because other
file systems might be stacked below sysroot. The solution in this patch
is also much simpler.
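
For illustration, the drop-in that patch 3/3 makes the generator write for the
sysroot fsck unit in the initramfs would look roughly like this (the instance
name dev-sda2 and the generator directory are hypothetical):

/run/systemd/generator/systemd-fsck@dev-sda2.service.d/50-stamp.conf:
[Service]
ExecStartPost=-/bin/touch /run/systemd/fsck-root-done

systemd-fsck-root.service in the real root then skips itself via
ConditionPathExists=!/run/systemd/fsck-root-done.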

Zbigniew Jędrzejewski-Szmek (3):
  generators: rename add_{root,usr}_mount to add_{sysroot,sysroot_usr}_mount
  Allow $SYSTEMD_PRETEND_INITRD to override initramfs detection
  Use a stamp file to avoid running systemd-fsck-root.service twice

 src/fstab-generator/fstab-generator.c | 21 +
 src/shared/generator.c| 28 +++-
 src/shared/generator.h| 17 +
 src/shared/util.c | 18 --
 units/systemd-fsck-root.service.in|  1 +
 5 files changed, 58 insertions(+), 27 deletions(-)

-- 
2.3.5

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [PATCH 2/3] Allow $SYSTEMD_PRETEND_INITRD to override initramfs detection

2015-05-02 Thread Zbigniew Jędrzejewski-Szmek
When testing generators and other utilities, it is extremely useful
to be able to trigger initramfs behaviour.

We already allow $SYSTEMD_UNIT_PATH to modify systemd behaviour, this
follows the same principle.
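
As a usage example, a hypothetical test run of the fstab generator outside a
real initramfs could look like this (the three arguments are the usual
generator output directories):

    SYSTEMD_PRETEND_INITRD=1 ./systemd-fstab-generator /tmp/normal /tmp/early /tmp/late
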
---
 src/shared/util.c | 18 --
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/src/shared/util.c b/src/shared/util.c
index 2c7254eeda..b110909e30 100644
--- a/src/shared/util.c
+++ b/src/shared/util.c
@@ -3734,23 +3734,29 @@ bool documentation_url_is_valid(const char *url) {
 bool in_initrd(void) {
         static int saved = -1;
         struct statfs s;
+        const char *e;
 
         if (saved >= 0)
                 return saved;
 
-        /* We make two checks here:
+        /* We make three checks here:
          *
-         * 1. the flag file /etc/initrd-release must exist
-         * 2. the root file system must be a memory file system
+         * 0. First check for SYSTEMD_PRETEND_INITRD.
+         * 1. The flag file /etc/initrd-release must exist
+         * 2. The root file system must be a memory file system
          *
          * The second check is extra paranoia, since misdetecting an
          * initrd can have bad bad consequences due the initrd
          * emptying when transititioning to the main systemd.
          */
 
-        saved = access("/etc/initrd-release", F_OK) >= 0 &&
-                statfs("/", &s) >= 0 &&
-                is_temporary_fs(&s);
+        e = getenv("SYSTEMD_PRETEND_INITRD");
+        if (e && parse_boolean(e) > 0)
+                saved = true;
+        else
+                saved = access("/etc/initrd-release", F_OK) >= 0 &&
+                        statfs("/", &s) >= 0 &&
+                        is_temporary_fs(&s);
 
         return saved;
 }
-- 
2.3.5

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [PATCH 1/3] generators: rename add_{root, usr}_mount to add_{sysroot, sysroot_usr}_mount

2015-05-02 Thread Zbigniew Jędrzejewski-Szmek
This makes it obvious that those functions are only usable in the
initramfs.

Also, add a warning when noauto, nofail, or automount is used for the
root fs, instead of silently ignoring. Using those options would be a
sign of significant misconfiguration, and if we bother to check for
them, then let's go all the way and complain.

Other various small cleanups and reformattings elsewhere.
---
 src/fstab-generator/fstab-generator.c | 21 +
 src/shared/generator.c| 21 -
 src/shared/generator.h| 17 +
 3 files changed, 38 insertions(+), 21 deletions(-)

diff --git a/src/fstab-generator/fstab-generator.c 
b/src/fstab-generator/fstab-generator.c
index 7aee3359e7..664ee2aa6f 100644
--- a/src/fstab-generator/fstab-generator.c
+++ b/src/fstab-generator/fstab-generator.c
@@ -176,6 +176,7 @@ static int write_idle_timeout(FILE *f, const char *where, 
const char *opts) {
 
 return 0;
 }
+
 static int add_mount(
 const char *what,
 const char *where,
@@ -213,10 +214,14 @@ static int add_mount(
 return 0;
 
         if (path_equal(where, "/")) {
-                /* The root disk is not an option */
-                automount = false;
-                noauto = false;
-                nofail = false;
+                if (noauto)
+                        log_warning("Ignoring \"noauto\" for root device");
+                if (nofail)
+                        log_warning("Ignoring \"nofail\" for root device");
+                if (automount)
+                        log_warning("Ignoring automount option for root device");
+
+                noauto = nofail = automount = false;
 }
 
 name = unit_name_from_path(where, .mount);
@@ -419,7 +424,7 @@ static int parse_fstab(bool initrd) {
 return r;
 }
 
-static int add_root_mount(void) {
+static int add_sysroot_mount(void) {
 _cleanup_free_ char *what = NULL;
 const char *opts;
 
@@ -453,7 +458,7 @@ static int add_root_mount(void) {
  /proc/cmdline);
 }
 
-static int add_usr_mount(void) {
+static int add_sysroot_usr_mount(void) {
 _cleanup_free_ char *what = NULL;
 const char *opts;
 
@@ -600,9 +605,9 @@ int main(int argc, char *argv[]) {
 
 /* Always honour root= and usr= in the kernel command line if we are 
in an initrd */
 if (in_initrd()) {
-r = add_root_mount();
+r = add_sysroot_mount();
 if (r == 0)
-r = add_usr_mount();
+r = add_sysroot_usr_mount();
 }
 
 /* Honour /etc/fstab only when that's enabled */
diff --git a/src/shared/generator.c b/src/shared/generator.c
index 569b25bb7c..7b2f846175 100644
--- a/src/shared/generator.c
+++ b/src/shared/generator.c
@@ -32,13 +32,13 @@
 
 int generator_write_fsck_deps(
 FILE *f,
-const char *dest,
+const char *dir,
 const char *what,
 const char *where,
 const char *fstype) {
 
 assert(f);
-assert(dest);
+assert(dir);
 assert(what);
 assert(where);
 
@@ -58,10 +58,10 @@ int generator_write_fsck_deps(
                         return log_warning_errno(r, "Checking was requested for %s, but fsck.%s cannot be used: %m", what, fstype);
         }
 
-        if (streq(where, "/")) {
+        if (path_equal(where, "/")) {
                 char *lnk;
 
-                lnk = strjoina(dest, "/" SPECIAL_LOCAL_FS_TARGET ".wants/systemd-fsck-root.service");
+                lnk = strjoina(dir, "/" SPECIAL_LOCAL_FS_TARGET ".wants/systemd-fsck-root.service");
                 mkdir_parents(lnk, 0755);
                 if (symlink(SYSTEM_DATA_UNIT_PATH "/systemd-fsck-root.service", lnk) < 0)
@@ -75,17 +75,20 @@ int generator_write_fsck_deps(
                         return log_oom();
 
                 fprintf(f,
-                        "RequiresOverridable=%s\n"
-                        "After=%s\n",
-                        fsck,
+                        "RequiresOverridable=%1$s\n"
+                        "After=%1$s\n",
                         fsck);
         }
 }
 
 return 0;
 }
 
-int generator_write_timeouts(const char *dir, const char *what, const char 
*where,
- const char *opts, char **filtered) {
+int generator_write_timeouts(
+const char *dir,
+const char *what,
+const char *where,
+const char *opts,
+char **filtered) {
 
 /* Allow configuration how long we wait for a device that
  * backs a mount point to show up. This is useful to support
diff --git a/src/shared/generator.h b/src/shared/generator.h
index 64bd28f596..6c3f38abba 100644
--- a/src/shared/generator.h
+++ b/src/shared/generator.h
@@ -23,7 +23,16 @@
 
 #include stdio.h
 
-int 

[systemd-devel] [PATCH 3/3] Use a stamp file to avoid running systemd-fsck-root.service twice

2015-05-02 Thread Zbigniew Jędrzejewski-Szmek
In the initramfs, we run systemd-fsck@sysroot-device.service.
In the real system we run systemd-fsck-root.service. It is hard
to pass the information that the latter should not run if the first
succeeded using unit state only.

- in the real system, we need a synchronization point between the fsck
  for root and other fscks, to express the dependency to run this
  systemd-fsck@.service before all other systemd-fsck@ units. We
  cannot express it directly, because there are no wildcard
  dependencies. We could use a target as a sychronization point, but
  then we would have to provide drop-ins to order
  systemd-fsck@-.service before the target, and all others after it,
  which becomes messy. The currently used alternative of having a
  special unit (systemd-fsck-root.service) makes it easy to express
  this dependency, and seems to be the best solution.

- we cannot use systemd-fsck-root.service in the initramfs, because
  other fsck units should not be ordered after it. In the real system,
  the root device is always checked and mounted before other filesystems,
  but in the initramfs this doesn't have to be true: /sysroot might be
  stacked on other filesystems and devices.

- the name of the root device can legitimately be specified in a
  different way in the initramfs (on the kernel command line, or
  automatically discovered through GPT), and in the real fs (in /etc/fstab).
  Even if we didn't need systemd-fsck-root.service as a synchronization
  point, it would be hard to ensure the same instance parameter is
  provided for systemd-fsck@.service in the initrams and the real
  system.

Let's use a side channel to pass this information.
/run/systemd/fsck-root-done is touched after fsck in the initramfs
succeeds, through an ExecStartPost line in a drop-in for
systemd-fsck@sysroot.service.

https://bugzilla.redhat.com/show_bug.cgi?id=1201979
---
 src/shared/generator.c | 7 +++
 units/systemd-fsck-root.service.in | 1 +
 2 files changed, 8 insertions(+)

diff --git a/src/shared/generator.c b/src/shared/generator.c
index 7b2f846175..a71222d1cb 100644
--- a/src/shared/generator.c
+++ b/src/shared/generator.c
@@ -78,6 +78,13 @@ int generator_write_fsck_deps(
                         "RequiresOverridable=%1$s\n"
                         "After=%1$s\n",
                         fsck);
+
+                if (in_initrd() && path_equal(where, "/sysroot"))
+                        return write_drop_in_format(dir, fsck, 50, "stamp",
+                                                    "# Automatically generated by %s\n\n"
+                                                    "[Service]\n"
+                                                    "ExecStartPost=-/bin/touch /run/systemd/fsck-root-done\n",
+                                                    program_invocation_short_name);
         }
 
         return 0;
diff --git a/units/systemd-fsck-root.service.in 
b/units/systemd-fsck-root.service.in
index 3617abf04a..48dacc841c 100644
--- a/units/systemd-fsck-root.service.in
+++ b/units/systemd-fsck-root.service.in
@@ -11,6 +11,7 @@ Documentation=man:systemd-fsck-root.service(8)
 DefaultDependencies=no
 Before=local-fs.target shutdown.target
 ConditionPathIsReadWrite=!/
+ConditionPathExists=!/run/systemd/fsck-root-done
 
 [Service]
 Type=oneshot
-- 
2.3.5

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 3/3] Use a stamp file to avoid running systemd-fsck-root.service twice

2015-05-02 Thread Zbigniew Jędrzejewski-Szmek
On Sat, May 02, 2015 at 08:26:25PM +0300, Andrei Borzenkov wrote:
 On Sat, 2 May 2015 13:16:59 -0400,
 Zbigniew Jędrzejewski-Szmek zbys...@in.waw.pl wrote:
 
  In the initramfs, we run systemd-fsck@sysroot-device.service.
  In the real system we run systemd-fsck-root.service. It is hard
  to pass the information that the latter should not run if the first
  succeeded using unit state only.
  
  - in the real system, we need a synchronization point between the fsck
for root and other fscks, to express the dependency to run this
systemd-fsck@.service before all other systemd-fsck@ units. We
cannot express it directly, because there are no wildcard
dependencies. We could use a target as a sychronization point, but
then we would have to provide drop-ins to order
systemd-fsck@-.service before the target, and all others after it,
which becomes messy. The currently used alternative of having a
special unit (systemd-fsck-root.service) makes it easy to express
this dependency, and seems to be the best solution.
  
  - we cannot use systemd-fsck-root.service in the initramfs, because
other fsck units should not be ordered after it. In the real system,
the root device is always checked and mounted before other filesystems,
but in the initramfs this doesn't have to be true: /sysroot might be
stacked on other filesystems and devices.
  
  - the name of the root device can legitimately be specified in a
different way in the initramfs (on the kernel command line, or
automatically discovered through GPT), and in the real fs (in /etc/fstab).
Even if we didn't need systemd-fsck-root.service as a synchronization
point, it would be hard to ensure the same instance parameter is
provided for systemd-fsck@.service in the initrams and the real
system.
  
  Let's use a side channel to pass this information.
  /run/systemd/fsck-root-done is touched after fsck in the initramfs
  succeeds, through an ExecStartPost line in a drop-in for
  systemd-fsck@sysroot.service.
 
 You probably should mention that it effectively reverts
 956eaf2b8d6c024705ddadc7393bc707de02.
Aaah, thanks, I completely forgot about that. I'll add a note.

Zbyszek

 
  
  https://bugzilla.redhat.com/show_bug.cgi?id=1201979
  ---
   src/shared/generator.c | 7 +++
   units/systemd-fsck-root.service.in | 1 +
   2 files changed, 8 insertions(+)
  
  diff --git a/src/shared/generator.c b/src/shared/generator.c
  index 7b2f846175..a71222d1cb 100644
  --- a/src/shared/generator.c
  +++ b/src/shared/generator.c
  @@ -78,6 +78,13 @@ int generator_write_fsck_deps(
   RequiresOverridable=%1$s\n
   After=%1$s\n,
   fsck);
  +
  +if (in_initrd()  path_equal(where, /sysroot))
  +return write_drop_in_format(dir, fsck, 50, stamp,
  +# Automatically 
  generated by %s\n\n
  +[Service]\n
  +
  ExecStartPost=-/bin/touch /run/systemd/fsck-root-done\n,
  +
  program_invocation_short_name);
   }
   
   return 0;
  diff --git a/units/systemd-fsck-root.service.in 
  b/units/systemd-fsck-root.service.in
  index 3617abf04a..48dacc841c 100644
  --- a/units/systemd-fsck-root.service.in
  +++ b/units/systemd-fsck-root.service.in
  @@ -11,6 +11,7 @@ Documentation=man:systemd-fsck-root.service(8)
   DefaultDependencies=no
   Before=local-fs.target shutdown.target
   ConditionPathIsReadWrite=!/
  +ConditionPathExists=!/run/systemd/fsck-root-done
   
   [Service]
   Type=oneshot
 
 
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] networkd: Is auto-negotiation turned off when specifying parameters in a link file?

2015-05-02 Thread Paul Menzel
Dear poma,


Thank you for the reply with a test and sorry for my late reply. I was
unfortunately unable to test this on the server, as there was no
maintenance window yet. It’s still on my todo list though.

Am Donnerstag, den 09.04.2015, 00:28 +0200 schrieb poma:
 On 08.04.2015 23:05, Lennart Poettering wrote:
  On Wed, 08.04.15 22:13, Paul Menzel wrote:
  
  Wouldn't it suffice to unplug the ethernet cable, then use ethtool to
  turn this on, then replug it, and measuring the time until networkd
  notices the link beat is back?
 
  It would. But this is a rented Hetzner server and I have no access to
  the data center. Do you have another idea. Please keep in mind that I
  also only have remote access using that NIC. ;-)
  
  Then, I figure a udev rule should do. Something like this (untested:)
  
  ACTION==add, SUBSYSTEM==net, KERNEL!=lo, RUN+=/usr/bin/ethtool $name 
  ...
  
  Replace the ... of course with the ethool options you need.
  
  This would then be run immediately when the device appears. If this
  makes a measurable difference, then let us know.

 /etc/udev/rules.d/10-speed1G-enp1s6.rules
 ACTION==add, SUBSYSTEM==net, RUN+=/usr/sbin/ethtool -s enp1s6 advertise 
 0x20
 
 :03 systemd[1]: Starting Network Service...
 :05 systemd-networkd[1612]: enp1s6  : link configured
 :05 systemd-networkd[1612]: enp1s6  : gained carrier
 :06 systemd-networkd[1612]: enp1s6  : lost carrier
 :09 systemd-networkd[1612]: enp1s6  : gained carrier
 
 ~~~
 
 /etc/udev/rules.d/10-speed1G-enp1s6.rules-
 
 :15 systemd[1]: Starting Network Service...
 :17 systemd-networkd[1633]: enp1s6  : link configured
 :17 systemd-networkd[1633]: enp1s6  : gained carrier

So in your case, `gained carrier` is indeed shown earlier saving two
seconds. The next message probably indicates a problem with the driver.

Poma, what Linux kernel do you use?

Lennart, is poma’s test sufficient to show that integrating an
`advertise` command(?) into systemd-networkd would be useful?


Thanks,

Paul


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Use python-systemd with virtualenv or pyvenv

2015-05-02 Thread Ludovic Gasc
Hi,

I'm trying to use python-systemd with a pyvenv: we use pyvenv to add a
different set of libraries on the server for each daemon without disturbing
Python at the system level.
I can't find a way to compile systemd with a pyvenv enabled.

To bypass the issue, and ideally to make it installable via PyPI or at least
with pip, I've tried to add a setup.py.
I've started something for that, but now I'm blocked; technical details:
https://github.com/systemd/systemd/pull/35

If somebody has a clue, I'm interested.
Nevertheless, if you have an alternative solution, I'm also interested;
I see at least two:

1. Find a way to compile python-systemd directly from the systemd source code
and use a pyvenv instead of the system Python.
2. Implement the journald protocol in pure Python: I don't need all of
python-systemd for now, only to push logs, so it should be quicker to
implement that than to try to install python-systemd via pip.
I've tried to find documentation about the protocol specification used on the
unix socket, and I've found nothing except source code.

Thanks for your suggestions.

BTW, I've played a little bit with the structured log mechanism; it's awesome.
I have several ideas for using it for our sysadmin needs, like easily finding
all HTTP requests from a specific tenant.
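
As a sketch of that idea: if the daemon attaches a custom field such as TENANT=
to each entry (the field and unit names below are made up), journalctl can
filter on it directly:

    journalctl -u mydaemon.service TENANT=acme -o json-pretty
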
--
Ludovic Gasc (GMLudo)
http://www.gmludo.eu/
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] RT wiki page outdated

2015-05-02 Thread Andrei Borzenkov
http://www.freedesktop.org/wiki/Software/systemd/MyServiceCantGetRealtime/
seems way outdated - none of the parameters mentioned there exist anymore.
What is the correct way to do the same on modern systemd? Thank you!
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel