[SSSD] Re: [RFC] sbus2 integration

2018-05-22 Thread Pavel Březina

On 05/21/2018 03:51 PM, Simo Sorce wrote:

On Mon, 2018-05-21 at 11:52 +0200, Pavel Březina wrote:

On 05/18/2018 09:50 PM, Simo Sorce wrote:

On Fri, 2018-05-18 at 16:11 +0200, Sumit Bose wrote:

On Fri, May 18, 2018 at 02:33:32PM +0200, Pavel Březina wrote:

Hi folks,
I sent a mail about the new sbus implementation (I'll refer to it as sbus2) [1].


Sorry Pavel,
but I need to ask, why a new bus instead of something like varlink?


This is old work; we did not know about varlink until it was already
finished. But since we still provide a public D-Bus API, we need a way to
work with it anyway.


Ack, thanks, wasn't sure how old the approach was, so I just asked :-)


Now, I'm integrating it into SSSD. The work is quite difficult since it
touches all parts of SSSD and the changes are usually interconnected, but I'm
slowly moving towards the goal [2].

At this moment, I'm trying to take a "minimum changes" path so the code can
be built and function with sbus2; however, taking full advantage of it will
require further improvements (which will not be very difficult).

There is one big change that I would like to make, though, and it needs to be
discussed. It concerns how we currently handle sbus connections.

In the current state, the monitor and each backend create a private sbus server.
The current implementation of a private sbus server is not a message bus; it
only serves as an address for creating point-to-point nameless connections. Thus
each client must maintain several connections:
   - each responder is connected to the monitor and to all backends
   - each backend is connected to the monitor
   - we have monitor + number-of-backends private servers
   - each private server maintains about 10 active connections

This has several disadvantages: there are many connections, we cannot
broadcast signals, and if a process wants to talk to another process it needs
to connect to that process's server and maintain the connection. Since
responders do not currently provide a server, they cannot talk to each other.
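
(For illustration, a minimal sketch of such a point-to-point connection using
raw libdbus is below; the socket path and the interface/method names are
invented placeholders, not the real SSSD ones.)

  #include <stdio.h>
  #include <dbus/dbus.h>

  int main(void)
  {
      DBusError err;
      DBusConnection *conn;
      DBusMessage *msg, *reply;

      dbus_error_init(&err);

      /* Each client opens its own private, nameless connection to one
       * server's unix socket -- one such connection per peer it talks to. */
      conn = dbus_connection_open_private(
                 "unix:path=/var/lib/sss/pipes/private/example_backend", &err);
      if (conn == NULL) {
          fprintf(stderr, "connect failed: %s\n", err.message);
          dbus_error_free(&err);
          return 1;
      }

      /* No bus daemon sits in between, so no destination name is set;
       * the message goes straight to the peer on the other end. */
      msg = dbus_message_new_method_call(NULL, "/example/path",
                                         "example.Interface", "Ping");
      reply = dbus_connection_send_with_reply_and_block(conn, msg, 5000, &err);
      if (reply == NULL) {
          fprintf(stderr, "call failed: %s\n", err.message);
          dbus_error_free(&err);
      } else {
          dbus_message_unref(reply);
      }

      dbus_message_unref(msg);
      dbus_connection_close(conn);
      dbus_connection_unref(conn);
      return 0;
  }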


This design has a key advantage: a single process going down does not
affect all the other processes' communication. How do you recover if the
"switch-board" goes down during message processing with sbus?


The "switch-board" will be restarted and other processes will reconnect.
The same way as it is today when one backend dies.


Yes, but what about in-flight operations?
Will both client and server abort and retry?
Will the server just keep the data around forever?
It'd be nice to understand the mechanics of recovery to make sure the
actual clients do not end up being impacted by lack of service.


See below.


sbus2 implements a proper private message bus, so it can work in the same way
as the session or system bus. It is a server that maintains the connections,
keeps track of their names and then routes messages from one connection to
another.

My idea is to have only one sbus server, managed by the monitor.


This conflicts with the idea of getting rid of the monitor process. I do
not know if this is currently still pursued, but it was brought up over
and over many times that we might want to use systemd as the "monitor"
and let socket activation deal with the rest.


I chose the monitor process for the message bus since 1) it is stable and 2)
it is idle most of the time. However, it can be a process of its own.


Not sure that moving it to another process makes a difference, the
concern would be the same I think.


Yes.


That being said, it does not conflict with removing the monitoring
functionality. We would only keep a single message bus.


Right, but at that point we might as well retain monitoring ...



Other processes
will connect to this server with a named connection (e.g. sssd.nss,
sssd.backend.dom1, sssd.backend.dom2). We can then send a message to this
message bus (only one connection) and set the destination to a name (e.g.
sssd.nss to invalidate the memcache). We can also send signals to this bus and
it will broadcast them to all connections that listen for these signals. So it
is the proper way to do it. It will simplify things and allow us to send
signals and have better IPC in general.
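
(To make the contrast concrete, here is a rough client-side sketch of the
single-bus model with raw libdbus, assuming the private bus speaks the
standard D-Bus registration protocol; the socket path, names, interfaces and
members are invented for the example and are not the actual sbus2 API.)

  #include <stdio.h>
  #include <dbus/dbus.h>

  int main(void)
  {
      DBusError err;
      DBusConnection *conn;
      DBusMessage *msg;

      dbus_error_init(&err);

      /* A single connection to the one private bus ... */
      conn = dbus_connection_open_private(
                 "unix:path=/var/lib/sss/pipes/private/sbus", &err);
      if (conn == NULL || !dbus_bus_register(conn, &err)) {
          fprintf(stderr, "bus connection failed: %s\n", err.message);
          return 1;
      }

      /* ... under a well-known name, so other peers can address us. */
      dbus_bus_request_name(conn, "sssd.backend.dom1",
                            DBUS_NAME_FLAG_DO_NOT_QUEUE, &err);

      /* Targeted message: set the destination to another named
       * connection, e.g. ask sssd.nss to invalidate its memcache. */
      msg = dbus_message_new_method_call("sssd.nss", "/example/memcache",
                                         "example.MemCache", "Invalidate");
      dbus_connection_send(conn, msg, NULL);
      dbus_message_unref(msg);

      /* Broadcast: a signal carries no destination; the bus forwards it
       * to every connection that subscribed to it. */
      msg = dbus_message_new_signal("/example/events",
                                    "example.Events", "DomainOnline");
      dbus_connection_send(conn, msg, NULL);
      dbus_message_unref(msg);

      dbus_connection_flush(conn);
      dbus_connection_close(conn);
      dbus_connection_unref(conn);
      return 0;
  }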

I know we want to eventually get rid of the monitor; the process would stay
as an sbus server. It would become a single point of failure, but the
process can be restarted automatically by systemd in case of a crash.
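
(As a rough illustration of the recovery side only, not the actual sbus2
code: clients could drive reconnection from tevent with a back-off timer once
the bus goes away; try_connect() below is a hypothetical stand-in for
whatever routine reopens the connection.)

  #include <stdbool.h>
  #include <talloc.h>
  #include <tevent.h>

  /* Hypothetical placeholder: in real code this would try to reopen the
   * sbus connection and return true on success. */
  static bool try_connect(void)
  {
      return false;
  }

  static void reconnect_handler(struct tevent_context *ev,
                                struct tevent_timer *te,
                                struct timeval now,
                                void *private_data)
  {
      unsigned int *delay = private_data;

      if (try_connect()) {
          /* Connected again: re-register names and match rules, and retry
           * whatever in-flight requests the caller decided to repeat. */
          return;
      }

      /* Still down: back off (roughly capped) and try again later. */
      if (*delay < 30) {
          *delay *= 2;
      }
      tevent_add_timer(ev, ev, tevent_timeval_current_ofs(*delay, 0),
                       reconnect_handler, delay);
  }

  void schedule_reconnect(struct tevent_context *ev)
  {
      unsigned int *delay = talloc_zero(ev, unsigned int);

      if (delay == NULL) {
          return;
      }
      *delay = 1;
      tevent_add_timer(ev, ev, tevent_timeval_current_ofs(*delay, 0),
                       reconnect_handler, delay);
  }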

Also, here is a bonus question: do any of you remember why we use a private
server at all?


In the very original design there was a "switch-board" process which
received a request from one component and forwarded it to the right
target. I guess at that time we didn't know enough about D-Bus to
implement this properly. In the end we thought it was useless overhead
and removed it. I think we didn't think about signals to all components
or the backend sending requests to the frontends.


Why don't we connect to the system message bus?


Mainly because we do not trust it to handle plain text passwords and
other credentials with the needed care.


That and because 

[SSSD] Re: [RFC] sbus2 integration

2018-05-21 Thread Simo Sorce
On Mon, 2018-05-21 at 11:52 +0200, Pavel Březina wrote:
> On 05/18/2018 09:50 PM, Simo Sorce wrote:
> > On Fri, 2018-05-18 at 16:11 +0200, Sumit Bose wrote:
> > > On Fri, May 18, 2018 at 02:33:32PM +0200, Pavel Březina wrote:
> > > > Hi folks,
> > > > I sent a mail about new sbus implementation (I'll refer to it as sbus2) 
> > > > [1].
> > 
> > Sorry Pavel,
> > but I need to ask, why a new bus instead of somthing like varlink ?
> 
> This is an old work, we did not know about varlink until this work was 
> already finished. But since we still provide public D-Bus API, we need a 
> way to work with it anyway.

Ack, thanks, wasn't sure how old the approach was, so I just asked :-)

> > > > Now, I'm integrating it into SSSD. The work is quite difficult since it
> > > > touches all parts of SSSD and the changes are usually interconnected 
> > > > but I'm
> > > > slowly moving towards the goal [2].
> > > > 
> > > > At this moment, I'm trying to take "miminum changes" paths so the code 
> > > > can
> > > > be built and function with sbus2, however to take full advantage of it, 
> > > > it
> > > > will take further improvements (that will not be very difficult).
> > > > 
> > > > There is one big change that I would like to take though, that needs to 
> > > > be
> > > > discussed. It is about how we currently handle sbus connections.
> > > > 
> > > > In current state, monitor and each backend creates a private sbus 
> > > > server.
> > > > The current implementation of a private sbus server is not a message 
> > > > bus, it
> > > > only serves as an address to create point to point nameless connection. 
> > > > Thus
> > > > each client must maintain several connections:
> > > >   - each responder is connected to monitor and to all backends
> > > >   - each backend is connected to monitor
> > > >   - we have monitor + number of backends private servers
> > > >   - each private server maintains about 10 active connections
> > > > 
> > > > This has several disadvantages - there are many connections, we cannot
> > > > broadcast signals, if a process wants to talk to other process it needs 
> > > > to
> > > > connect to its server and maintain the connection. Since responders do 
> > > > not
> > > > currently provider a server, they cannot talk between each other.
> > 
> > This design has a key advantage, a single process going down does not
> > affect all other processes communication. How do you recover if the
> > "switch-board" goes down during message processing with sbus ?
> 
> The "switch-board" will be restarted and other processes will reconnect. 
> The same way as it is today when one backend dies.

Yes, but what about in-flight operations?
Will both client and server abort and retry?
Will the server just keep the data around forever?
It'd be nice to understand the mechanics of recovery to make sure the
actual clients do not end up being impacted by lack of service.

> > > > sbus2 implements proper private message bus. So it can work in the same 
> > > > way
> > > > as session or system bus. It is a server that maintains the connections,
> > > > keep tracks of their names and then routes messages from one connection 
> > > > to
> > > > another.
> > > > 
> > > > My idea is to have only one sbus server managed by monitor.
> > 
> > This conflict wth the idea of getting rid of the monitor process, do
> > not know if this is currently still pursued but it was brought up over
> > and over many times that we might want to use systemd as the "monitor"
> > and let socket activation deal with the rest.
> 
> I chose monitor process for the message bus, since 1) it is stable, 2) 
> it is idle most of the time. However, it can be a process on its own.

Not sure that moving it to another process makes a difference, the
concern would be the same I think.

> That being said, it does not conflict with removing the monitoring 
> functionality. We only leave a single message bus.

Right, but at that point we might as well retain monitoring ...


> > > >   Other processes
> > > > will connect to this server with a named connection (e.g. sssd.nss,
> > > > sssd.backend.dom1, sssd.backend.dom2). We can then send message to this
> > > > message bus (only one connection) and set destination to name (e.g. 
> > > > sssd.nss
> > > > to invalidate memcache). We can also send signals to this bus and it 
> > > > will
> > > > broadcast it to all connections that listens to this signals. So, it is
> > > > proper way how to do it. It will simplify things and allow us to send
> > > > signals and have better IPC in general.
> > > > 
> > > > I know we want to eventually get rid of the monitor, the process would 
> > > > stay
> > > > as an sbus server. It would become a single point of failure, but the
> > > > process can be restarted automatically by systemd in case of crash.
> > > > 
> > > > Also here is a bonus question - do any of you remember why we use 
> > > > private
> > > > server at all?
> > > 
> > > In the very original design there was a 

[SSSD] Re: [RFC] sbus2 integration

2018-05-21 Thread Simo Sorce
On Mon, 2018-05-21 at 10:38 +0200, Jakub Hrozek wrote:
> > On 18 May 2018, at 21:50, Simo Sorce  wrote:
> > 
> > Sorry Pavel,
> > but I need to ask, why a new bus instead of somthing like varlink ?
> 
> Do you think there is an advantage with varlink over D-Bus as long as
> we use a private style of communication and use either varlink or D-
> Bus more or less as a marshalling mechanism?

Only if we have to start from scratch or make significant changes, I
wouldn't embark on replacing the tool just for the sake of replacing.

> > 
> > > > Now, I'm integrating it into SSSD. The work is quite difficult since it
> > > > touches all parts of SSSD and the changes are usually interconnected 
> > > > but I'm
> > > > slowly moving towards the goal [2].
> > > > 
> > > > At this moment, I'm trying to take "miminum changes" paths so the code 
> > > > can
> > > > be built and function with sbus2, however to take full advantage of it, 
> > > > it
> > > > will take further improvements (that will not be very difficult).
> > > > 
> > > > There is one big change that I would like to take though, that needs to 
> > > > be
> > > > discussed. It is about how we currently handle sbus connections.
> > > > 
> > > > In current state, monitor and each backend creates a private sbus 
> > > > server.
> > > > The current implementation of a private sbus server is not a message 
> > > > bus, it
> > > > only serves as an address to create point to point nameless connection. 
> > > > Thus
> > > > each client must maintain several connections:
> > > > - each responder is connected to monitor and to all backends
> > > > - each backend is connected to monitor
> > > > - we have monitor + number of backends private servers
> > > > - each private server maintains about 10 active connections
> > > > 
> > > > This has several disadvantages - there are many connections, we cannot
> > > > broadcast signals, if a process wants to talk to other process it needs 
> > > > to
> > > > connect to its server and maintain the connection. Since responders do 
> > > > not
> > > > currently provider a server, they cannot talk between each other.
> > 
> > This design has a key advantage, a single process going down does not
> > affect all other processes communication. How do you recover if the
> > "switch-board" goes down during message processing with sbus ?
> 
> FWIW, this is a worry I also expressed to Pavel on the phone the other day.
> 
> > 
> > > > sbus2 implements proper private message bus. So it can work in the same 
> > > > way
> > > > as session or system bus. It is a server that maintains the connections,
> > > > keep tracks of their names and then routes messages from one connection 
> > > > to
> > > > another.
> > > > 
> > > > My idea is to have only one sbus server managed by monitor.
> > 
> > This conflict wth the idea of getting rid of the monitor process, do
> > not know if this is currently still pursued but it was brought up over
> > and over many times that we might want to use systemd as the "monitor"
> > and let socket activation deal with the rest.
> 
> It’s something we’ve been talking about but never got around to
> implementing. Additionally,  there are users who are running sssd in
> a container for better or worse and I’m not sure we systemd in a
> container is something used or stable outside some test builds..

So .. we do not know :-)

Simo.

-- 
Simo Sorce
Sr. Principal Software Engineer
Red Hat, Inc


[SSSD] Re: [RFC] sbus2 integration

2018-05-21 Thread Pavel Březina

On 05/18/2018 09:50 PM, Simo Sorce wrote:

On Fri, 2018-05-18 at 16:11 +0200, Sumit Bose wrote:

On Fri, May 18, 2018 at 02:33:32PM +0200, Pavel Březina wrote:

Hi folks,
I sent a mail about the new sbus implementation (I'll refer to it as sbus2) [1].


Sorry Pavel,
but I need to ask, why a new bus instead of something like varlink?


This is old work; we did not know about varlink until it was already
finished. But since we still provide a public D-Bus API, we need a way to
work with it anyway.



Now, I'm integrating it into SSSD. The work is quite difficult since it
touches all parts of SSSD and the changes are usually interconnected, but I'm
slowly moving towards the goal [2].

At this moment, I'm trying to take a "minimum changes" path so the code can
be built and function with sbus2; however, taking full advantage of it will
require further improvements (which will not be very difficult).

There is one big change that I would like to make, though, and it needs to be
discussed. It concerns how we currently handle sbus connections.

In the current state, the monitor and each backend create a private sbus server.
The current implementation of a private sbus server is not a message bus; it
only serves as an address for creating point-to-point nameless connections. Thus
each client must maintain several connections:
  - each responder is connected to the monitor and to all backends
  - each backend is connected to the monitor
  - we have monitor + number-of-backends private servers
  - each private server maintains about 10 active connections

This has several disadvantages: there are many connections, we cannot
broadcast signals, and if a process wants to talk to another process it needs
to connect to that process's server and maintain the connection. Since
responders do not currently provide a server, they cannot talk to each other.


This design has a key advantage: a single process going down does not
affect all the other processes' communication. How do you recover if the
"switch-board" goes down during message processing with sbus?


The "switch-board" will be restarted and other processes will reconnect. 
The same way as it is today when one backend dies.



sbus2 implements a proper private message bus, so it can work in the same way
as the session or system bus. It is a server that maintains the connections,
keeps track of their names and then routes messages from one connection to
another.

My idea is to have only one sbus server, managed by the monitor.


This conflicts with the idea of getting rid of the monitor process. I do
not know if this is currently still pursued, but it was brought up over
and over many times that we might want to use systemd as the "monitor"
and let socket activation deal with the rest.


I chose the monitor process for the message bus since 1) it is stable and 2)
it is idle most of the time. However, it can be a process of its own.


That being said, it does not conflict with removing the monitoring
functionality. We would only keep a single message bus.






Other processes
will connect to this server with a named connection (e.g. sssd.nss,
sssd.backend.dom1, sssd.backend.dom2). We can then send a message to this
message bus (only one connection) and set the destination to a name (e.g.
sssd.nss to invalidate the memcache). We can also send signals to this bus and
it will broadcast them to all connections that listen for these signals. So it
is the proper way to do it. It will simplify things and allow us to send
signals and have better IPC in general.

I know we want to eventually get rid of the monitor; the process would stay
as an sbus server. It would become a single point of failure, but the
process can be restarted automatically by systemd in case of a crash.

Also, here is a bonus question: do any of you remember why we use a private
server at all?


In the very original design there was a "switch-board" process which
received a request from one component and forwarded it to the right
target. I guess at that time we didn't know enough about D-Bus to
implement this properly. In the end we thought it was useless overhead
and removed it. I think we didn't think about signals to all components
or the backend sending requests to the frontends.


Why don't we connect to the system message bus?


Mainly because we do not trust it to handle plain text passwords and
other credentials with the needed care.


That and because at some point there was a potential chicken-egg issue
at startup, and also because we didn't want to handle additional error
recovery if the system message bus was restarted.

Fundamentally the system message bus is useful only for services
offering a "public" service; otherwise it is just overhead and has
security implications.


Thank you for the explanation.


I do not see any benefit in having a private server.


There is no way to break into sssd via a bug in the system message bus.
This is one good reason, aside from the others above.

Fundamentally we needed a private structured messaging system we could
easily 

[SSSD] Re: [RFC] sbus2 integration

2018-05-21 Thread Jakub Hrozek


> On 18 May 2018, at 21:50, Simo Sorce  wrote:
> 
> Sorry Pavel,
> but I need to ask, why a new bus instead of somthing like varlink ?

Do you think there is an advantage with varlink over D-Bus as long as we use a 
private style of communication and use either varlink or D-Bus more or less as 
a marshalling mechanism?

> 
>>> Now, I'm integrating it into SSSD. The work is quite difficult since it
>>> touches all parts of SSSD and the changes are usually interconnected but I'm
>>> slowly moving towards the goal [2].
>>> 
>>> At this moment, I'm trying to take "miminum changes" paths so the code can
>>> be built and function with sbus2, however to take full advantage of it, it
>>> will take further improvements (that will not be very difficult).
>>> 
>>> There is one big change that I would like to take though, that needs to be
>>> discussed. It is about how we currently handle sbus connections.
>>> 
>>> In current state, monitor and each backend creates a private sbus server.
>>> The current implementation of a private sbus server is not a message bus, it
>>> only serves as an address to create point to point nameless connection. Thus
>>> each client must maintain several connections:
>>> - each responder is connected to monitor and to all backends
>>> - each backend is connected to monitor
>>> - we have monitor + number of backends private servers
>>> - each private server maintains about 10 active connections
>>> 
>>> This has several disadvantages - there are many connections, we cannot
>>> broadcast signals, if a process wants to talk to other process it needs to
>>> connect to its server and maintain the connection. Since responders do not
>>> currently provider a server, they cannot talk between each other.
> 
> This design has a key advantage, a single process going down does not
> affect all other processes communication. How do you recover if the
> "switch-board" goes down during message processing with sbus ?

FWIW, this is a worry I also expressed to Pavel on the phone the other day.

> 
>>> sbus2 implements proper private message bus. So it can work in the same way
>>> as session or system bus. It is a server that maintains the connections,
>>> keep tracks of their names and then routes messages from one connection to
>>> another.
>>> 
>>> My idea is to have only one sbus server managed by monitor.
> 
> This conflict wth the idea of getting rid of the monitor process, do
> not know if this is currently still pursued but it was brought up over
> and over many times that we might want to use systemd as the "monitor"
> and let socket activation deal with the rest.

It’s something we’ve been talking about but never got around to implementing.
Additionally, there are users who are running sssd in a container, for better
or worse, and I’m not sure systemd in a container is something used or stable
outside some test builds.


[SSSD] Re: [RFC] sbus2 integration

2018-05-21 Thread Fabiano Fidêncio
On Fri, May 18, 2018 at 9:50 PM, Simo Sorce  wrote:
> On Fri, 2018-05-18 at 16:11 +0200, Sumit Bose wrote:
>> On Fri, May 18, 2018 at 02:33:32PM +0200, Pavel Březina wrote:
>> > Hi folks,
>> > I sent a mail about new sbus implementation (I'll refer to it as sbus2) 
>> > [1].
>
> Sorry Pavel,
> but I need to ask, why a new bus instead of somthing like varlink ?

For those who are not familiar with varlink: https://lwn.net/Articles/742675/

>
>> > Now, I'm integrating it into SSSD. The work is quite difficult since it
>> > touches all parts of SSSD and the changes are usually interconnected but 
>> > I'm
>> > slowly moving towards the goal [2].
>> >
>> > At this moment, I'm trying to take "miminum changes" paths so the code can
>> > be built and function with sbus2, however to take full advantage of it, it
>> > will take further improvements (that will not be very difficult).
>> >
>> > There is one big change that I would like to take though, that needs to be
>> > discussed. It is about how we currently handle sbus connections.
>> >
>> > In current state, monitor and each backend creates a private sbus server.
>> > The current implementation of a private sbus server is not a message bus, 
>> > it
>> > only serves as an address to create point to point nameless connection. 
>> > Thus
>> > each client must maintain several connections:
>> >  - each responder is connected to monitor and to all backends
>> >  - each backend is connected to monitor
>> >  - we have monitor + number of backends private servers
>> >  - each private server maintains about 10 active connections
>> >
>> > This has several disadvantages - there are many connections, we cannot
>> > broadcast signals, if a process wants to talk to other process it needs to
>> > connect to its server and maintain the connection. Since responders do not
>> > currently provider a server, they cannot talk between each other.
>
> This design has a key advantage, a single process going down does not
> affect all other processes communication. How do you recover if the
> "switch-board" goes down during message processing with sbus ?
>
>> > sbus2 implements proper private message bus. So it can work in the same way
>> > as session or system bus. It is a server that maintains the connections,
>> > keep tracks of their names and then routes messages from one connection to
>> > another.
>> >
>> > My idea is to have only one sbus server managed by monitor.
>
> This conflict wth the idea of getting rid of the monitor process, do
> not know if this is currently still pursued but it was brought up over
> and over many times that we might want to use systemd as the "monitor"
> and let socket activation deal with the rest.
>
>> >  Other processes
>> > will connect to this server with a named connection (e.g. sssd.nss,
>> > sssd.backend.dom1, sssd.backend.dom2). We can then send message to this
>> > message bus (only one connection) and set destination to name (e.g. 
>> > sssd.nss
>> > to invalidate memcache). We can also send signals to this bus and it will
>> > broadcast it to all connections that listens to this signals. So, it is
>> > proper way how to do it. It will simplify things and allow us to send
>> > signals and have better IPC in general.
>> >
>> > I know we want to eventually get rid of the monitor, the process would stay
>> > as an sbus server. It would become a single point of failure, but the
>> > process can be restarted automatically by systemd in case of crash.
>> >
>> > Also here is a bonus question - do any of you remember why we use private
>> > server at all?
>>
>> In the very original design there was a "switch-board" process which
>> received a request from one component and forwarded it to the right
>> target. I guess at this time we didn't know a lot about DBus to
>> implement this properly. In the end we thought it was a useless overhead
>> and removed it. I think we didn't thought about signals to all components
>> or the backend sending requests to the frontends.
>>
>> > Why don't we connect to system message bus?
>>
>> Mainly because we do not trust it to handle plain text passwords and
>> other credentials with the needed care.
>
> That and because at some point there was a potential chicken-egg issue
> at startup, and also because we didn't want to handle additional error
> recovery if the system message bus was restarted.
>
> Fundamentally the system message bus is useful only for services
> offering a "public" service, otherwise it is just an overhead, and has
> security implications.
>
>> > I do not see any benefit in having a private server.
>
> There is no way to break into sssd via a bug in the system message bus.
> This is one good reason, aside for the other above.
>
> Fundamentally we needed a private structured messaging system we could
> easily integrate with tevent. The only usable option back then was
> dbus, and given we already had ideas about offering some plugic
> interface over the message bus we went 

[SSSD] Re: [RFC] sbus2 integration

2018-05-18 Thread Simo Sorce
On Fri, 2018-05-18 at 16:11 +0200, Sumit Bose wrote:
> On Fri, May 18, 2018 at 02:33:32PM +0200, Pavel Březina wrote:
> > Hi folks,
> > I sent a mail about new sbus implementation (I'll refer to it as sbus2) [1].

Sorry Pavel,
but I need to ask, why a new bus instead of something like varlink?

> > Now, I'm integrating it into SSSD. The work is quite difficult since it
> > touches all parts of SSSD and the changes are usually interconnected but I'm
> > slowly moving towards the goal [2].
> > 
> > At this moment, I'm trying to take "miminum changes" paths so the code can
> > be built and function with sbus2, however to take full advantage of it, it
> > will take further improvements (that will not be very difficult).
> > 
> > There is one big change that I would like to take though, that needs to be
> > discussed. It is about how we currently handle sbus connections.
> > 
> > In current state, monitor and each backend creates a private sbus server.
> > The current implementation of a private sbus server is not a message bus, it
> > only serves as an address to create point to point nameless connection. Thus
> > each client must maintain several connections:
> >  - each responder is connected to monitor and to all backends
> >  - each backend is connected to monitor
> >  - we have monitor + number of backends private servers
> >  - each private server maintains about 10 active connections
> > 
> > This has several disadvantages - there are many connections, we cannot
> > broadcast signals, if a process wants to talk to other process it needs to
> > connect to its server and maintain the connection. Since responders do not
> > currently provider a server, they cannot talk between each other.

This design has a key advantage: a single process going down does not
affect all the other processes' communication. How do you recover if the
"switch-board" goes down during message processing with sbus?

> > sbus2 implements proper private message bus. So it can work in the same way
> > as session or system bus. It is a server that maintains the connections,
> > keep tracks of their names and then routes messages from one connection to
> > another.
> > 
> > My idea is to have only one sbus server managed by monitor.

This conflicts with the idea of getting rid of the monitor process. I do
not know if this is currently still pursued, but it was brought up over
and over many times that we might want to use systemd as the "monitor"
and let socket activation deal with the rest.

> >  Other processes
> > will connect to this server with a named connection (e.g. sssd.nss,
> > sssd.backend.dom1, sssd.backend.dom2). We can then send message to this
> > message bus (only one connection) and set destination to name (e.g. sssd.nss
> > to invalidate memcache). We can also send signals to this bus and it will
> > broadcast it to all connections that listens to this signals. So, it is
> > proper way how to do it. It will simplify things and allow us to send
> > signals and have better IPC in general.
> > 
> > I know we want to eventually get rid of the monitor, the process would stay
> > as an sbus server. It would become a single point of failure, but the
> > process can be restarted automatically by systemd in case of crash.
> > 
> > Also here is a bonus question - do any of you remember why we use private
> > server at all?
> 
> In the very original design there was a "switch-board" process which
> received a request from one component and forwarded it to the right
> target. I guess at this time we didn't know a lot about DBus to
> implement this properly. In the end we thought it was a useless overhead
> and removed it. I think we didn't thought about signals to all components
> or the backend sending requests to the frontends.
> 
> > Why don't we connect to system message bus?
> 
> Mainly because we do not trust it to handle plain text passwords and
> other credentials with the needed care.

That and because at some point there was a potential chicken-egg issue
at startup, and also because we didn't want to handle additional error
recovery if the system message bus was restarted.

Fundamentally the system message bus is useful only for services
offering a "public" service; otherwise it is just overhead and has
security implications.

> > I do not see any benefit in having a private server.

There is no way to break into sssd via a bug in the system message bus.
This is one good reason, aside from the others above.

Fundamentally we needed a private structured messaging system we could
easily integrate with tevent. The only usable option back then was
dbus, and given we already had ideas about offering some public
interface over the message bus, we went that way so we could later reuse
the integration.
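
(For anyone curious what "integrate with tevent" means in practice, below is
a stripped-down sketch of the kind of glue involved: libdbus hands its file
descriptors to the application through watch callbacks, and those are
registered as tevent fd events so the main loop drives dispatching. Error
handling and watch toggling are mostly omitted, and the names are made up;
this is not the actual sbus code.)

  #include <stdint.h>
  #include <dbus/dbus.h>
  #include <talloc.h>
  #include <tevent.h>

  struct dbus_tevent_ctx {
      struct tevent_context *ev;
      DBusConnection *conn;
  };

  struct watch_state {
      DBusWatch *watch;
      DBusConnection *conn;
      struct tevent_fd *fde;
  };

  /* Feed the ready fd back to libdbus, then dispatch queued messages. */
  static void watch_fd_handler(struct tevent_context *ev,
                               struct tevent_fd *fde,
                               uint16_t flags, void *private_data)
  {
      struct watch_state *ws = private_data;
      unsigned int dflags = 0;

      if (flags & TEVENT_FD_READ)  dflags |= DBUS_WATCH_READABLE;
      if (flags & TEVENT_FD_WRITE) dflags |= DBUS_WATCH_WRITABLE;

      dbus_watch_handle(ws->watch, dflags);
      while (dbus_connection_dispatch(ws->conn) == DBUS_DISPATCH_DATA_REMAINS) {
          /* drain the incoming message queue */
      }
  }

  /* Called by libdbus whenever it wants a new fd monitored. */
  static dbus_bool_t add_watch(DBusWatch *watch, void *data)
  {
      struct dbus_tevent_ctx *ctx = data;
      struct watch_state *ws = talloc_zero(ctx->ev, struct watch_state);
      unsigned int dflags = dbus_watch_get_flags(watch);
      uint16_t flags = 0;

      if (ws == NULL) {
          return 0;
      }
      if (dflags & DBUS_WATCH_READABLE) flags |= TEVENT_FD_READ;
      if (dflags & DBUS_WATCH_WRITABLE) flags |= TEVENT_FD_WRITE;

      ws->watch = watch;
      ws->conn = ctx->conn;
      if (dbus_watch_get_enabled(watch)) {
          ws->fde = tevent_add_fd(ctx->ev, ws, dbus_watch_get_unix_fd(watch),
                                  flags, watch_fd_handler, ws);
      }

      /* Remember our state so remove_watch() can find and free it. */
      dbus_watch_set_data(watch, ws, NULL);
      return 1;
  }

  static void remove_watch(DBusWatch *watch, void *data)
  {
      talloc_free(dbus_watch_get_data(watch)); /* drops the fd event too */
  }

  static void toggle_watch(DBusWatch *watch, void *data)
  {
      /* Real code would enable or disable the fd event here; omitted. */
  }

  /* After this call, the tevent loop drives all I/O on the connection.
   * The function name is made up for the example. */
  dbus_bool_t example_attach_to_tevent(struct tevent_context *ev,
                                       DBusConnection *conn)
  {
      struct dbus_tevent_ctx *ctx = talloc_zero(ev, struct dbus_tevent_ctx);

      if (ctx == NULL) {
          return 0;
      }
      ctx->ev = ev;
      ctx->conn = conn;
      return dbus_connection_set_watch_functions(conn, add_watch, remove_watch,
                                                 toggle_watch, ctx, NULL);
  }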

Today we'd probably go with something a lot more lightweight like
varlink.

> If I understood you correctly we not only have 'a' private server but 4
> for a typically minimal setup (monitor, pam, nss, backend).
> 
> Given your 

[SSSD] Re: [RFC] sbus2 integration

2018-05-18 Thread Simo Sorce
On Fri, 2018-05-18 at 21:02 +0200, Jakub Hrozek wrote:
> > On 18 May 2018, at 14:33, Pavel Březina  wrote:
> > 
> > Also here is a bonus question - do any of you remember why we use private 
> > server at all? Why don't we connect to system message bus? I do not see any 
> > benefit in having a private server.
> 
> To expand on what Sumit said, at one point we were betting on kdbus
> to become a thing and then it didn’t. (I don’t know enough about
> bus1, but IIRC was "just" a userspace reimplementation of the D-Bus
> protocol, so the same trust limitations apply)

bus1 was also a kernel implementation, but that one also did not pan
out ...

Simo.

-- 
Simo Sorce
Sr. Principal Software Engineer
Red Hat, Inc


[SSSD] Re: [RFC] sbus2 integration

2018-05-18 Thread Jakub Hrozek


> On 18 May 2018, at 14:33, Pavel Březina  wrote:
> 
> Also here is a bonus question - do any of you remember why we use private 
> server at all? Why don't we connect to system message bus? I do not see any 
> benefit in having a private server.

To expand on what Sumit said, at one point we were betting on kdbus to become a
thing and then it didn’t. (I don’t know enough about bus1, but IIRC it was "just"
a userspace reimplementation of the D-Bus protocol, so the same trust
limitations apply)