[389-users] Re: How to containerize 389DS using Docker in production systems

2018-03-12 Thread William Brown
On Thu, 2018-03-08 at 12:24 +0100, Alberto García Sola wrote:
> It's great to know you are getting proper container support.
> Reading your message, I've found this docker folder within the
> source that I hadn't seen yet:
> https://pagure.io/389-ds-base/blob/master/f/docker , with some
> examples of how to use it beyond the demo.

Great, I would love to hear your feedback on this. 

> Thank you for the great explanation regarding the situation. 
> I'll try to report back any issues we find using Docker from the
> current MASTER branch, though there are two (IMHO) big stoppers to
> get this into production:
> The persistence part (https://pagure.io/389-ds-base/issue/49213).

It's a bit more subtle, I think. You can build the image and it creates
a /etc/dirsrv/slapd-localhost. BUT if you want persistence you have to
overlay volumes ONTO /etc/dirsrv/slapd-localhost AND
/var/lib/dirsrv/slapd-localhost.

The issue then is that ns-slapd starts and sees empty folders! So it
won't start.

If you can extract a /etc/dirsrv/slapd-localhost *AND* a
/var/lib/dirsrv/slapd-localhost onto the host and then bind mount them
into those places, you will have persistence! It's just not a friendly
user experience today, and I want it to be as simple as:

docker run -v <volumes> 389ds:latest

And you get persistence without messing about. 
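
For the record, the manual workaround today looks roughly like this
(untested sketch; the "389-poc:latest" image name, the host paths and
the "slapd-localhost" instance name are placeholders for whatever you
build locally):

# create (but do not start) a throwaway container so we can copy the
# built-in instance out of it
docker create --name ds-seed 389-poc:latest

# extract the config and data directories onto the host
mkdir -p /srv/389ds
docker cp ds-seed:/etc/dirsrv/slapd-localhost /srv/389ds/etc
docker cp ds-seed:/var/lib/dirsrv/slapd-localhost /srv/389ds/var
docker rm ds-seed

# run the "real" container with those directories bind mounted back in
docker run -d --name ds \
  -v /srv/389ds/etc:/etc/dirsrv/slapd-localhost \
  -v /srv/389ds/var:/var/lib/dirsrv/slapd-localhost \
  389-poc:latest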

> The upgrade part, which is an essential part of the containers
> philosophy, though not such a big stopper as the previous one.

This just needs some polish. This ticket, which I had kind of forgotten
about, shows we already have some mechanisms in place that can provide
upgrade support, so I think it's 90% there:

https://pagure.io/389-ds-base/issue/49447

The other patch provides attribute-level "upgrade" support, not just
"ensure these entries exist" support.

> I guess it would be difficult to say, but do you have any ETAs?

Sorry, I don't have an ETA. There are three major goals for me in the
coming weeks:

* Finish up work on connection system cleanup
* Improve and finish work on our new CLI tools (read more here:
http://www.port389.org/docs/389ds/design/dsadm-dsconf.html)
* Container support

Probably in that order.

So I can't give an ETA, but I'll be sure to post updates to 389-users
and requests for comment when I have more substantial work complete :)

I'm also still thinking about scripting of the server and how to manage
this; once I have some ideas I'll again post design documents and ask
for feedback.

Hope that helps,


> Alberto.
> 
> El 08/03/2018 a las 4:42, William Brown escribió:
> > On Wed, 2018-03-07 at 23:50 +, tda...@email.arizona.edu wrote:
> > > > On Wed, 2018-03-07 at 08:52 +0100, Alberto García Sola wrote:
> > > > 
> > > > Hi there,
> > > > 
> > > > I'm currently working on docker support in 389-ds.
> > > 
> > > William, I'm really glad to hear this. We've been running 389
> > > server in docker in EC2 instances for months now and it works
> > > great. We have home grown scripts for automating the DS
> > > installation and replication between 2 DS instances, but it would
> > > be awesome to use a supported setup instead, so I'd really like
> > > to try what you have. Our setup uses mounted EBS volumes that
> > > contain all the necessary DS folders so that the EC2s can be
> > > blown away and recreated any time we want.
> > 
> > Hope you don't mind, but this is a bit of a brain dump. We have some
> > open tickets about this. Currently we have LOADS of support here for
> > containers, like detection of container memory and process limits,
> > support for containerised installs in dscreate, and more.
> > 
> > But first I want to describe the general picture and situation.
> > 
> > It would be great to have a temporary demo instance like:
> > 
> > docker run 389ds:1.4.0
> > 
> > And that *works*.
> > 
> > Now, when you want to really use it in production something more
> > like:
> > 
> > docker run -v /etc/dirsrv:/etc/dirsrv -v
> > /var/lib/dirsrv:/var/lib/dirsrv 389ds:1.4.0
> > 
> > And now you have persistence, and can pull, upgrade, destroy,
> > everything.
> > 
> > If you want a readonly ephemeral replica, something maybe like:
> > 
> > docker run -e replication_manager=12345 389ds:1.4.0
> > 
> > Which would trigger the replica ID to become 65535 and set the
> > replication manager password (which could now be pushed to from
> > another
> > instance).
> > 
> > So what are the challenges to these scenarios? 
> > 
> > Well, the first scenario "kinda works" today, but you don't get
> > persistence, and we have to ship a known password. The barrier here
> > is that ns-slapd (our server binary) needs assistance from
> > dscreate/setup-ds.pl to create dse.ldif and its related instance
> > parts.
> > 
> > So we need to move the *SETUP* logic of DS out of python and INTO
> > an early runtime part of ns-slapd, to be able to process a .inf +
> > environment variables to create dse.ldif on startup if it does not
> > exist.
> > 
> > Thankfully 

[389-users] Re: How to containerize 389DS using Docker in production systems

2018-03-08 Thread Alberto García Sola

It's great to know you are getting proper container support.

Reading your message, I've found this docker folder within the source
that I hadn't seen yet:
https://pagure.io/389-ds-base/blob/master/f/docker , with some examples
of how to use it beyond the demo.


Thank you for the great explanation regarding the situation.

I'll try to report back any issues we find using Docker from the current 
MASTER branch, though there are two (IMHO) big stoppers to get this into 
production:


 * The persistence part (https://pagure.io/389-ds-base/issue/49213).
 * The upgrade part, which is an essential part of the containers
   philosophy, though not such a big stopper as the previous one.

I guess it would be difficult to say, but do you have any ETAs?

Alberto.


El 08/03/2018 a las 4:42, William Brown escribió:

On Wed, 2018-03-07 at 23:50 +, tda...@email.arizona.edu wrote:

On Wed, 2018-03-07 at 08:52 +0100, Alberto García Sola wrote:

Hi there,

I'm currently working on docker support in 389-ds.

William, I'm really glad to hear this. We've been running 389 server
in docker in EC2 instances for months now and it works great. We have
home grown scripts for automating the DS installation and replication
between 2 DS instances, but it would be awesome to use a supported
setup instead, so I'd really like to try what you have. Our setup
uses mounted EBS volumes that contain all the necessary DS folders so
that the EC2s can be blown away and recreated any time we want.

Hope you don't mind, but this is a bit of a brain dump. We have some
open tickets about this. Currently we have LOADS of support here for
containers, like detection of container memory and process limits,
support for containerised installs in dscreate, and more.

But first I want to describe the general picture and situation.

It would be great to have a temporary demo instance like:

docker run 389ds:1.4.0

And that *works*.

Now, when you want to really use it in production something more like:

docker run -v /etc/dirsrv:/etc/dirsrv -v
/var/lib/dirsrv:/var/lib/dirsrv 389ds:1.4.0

And now you have persistence, and can pull, upgrade, destroy,
everything.

If you want a readonly ephemeral replica, something maybe like:

docker run -e replication_manager=12345 389ds:1.4.0

Which would trigger the replica ID to become 65535 and set the
replication manager password (which could now be pushed to from another
instance).

So what are the challenges to these scenarios?

Well, the first scenario "kinda works" today, but you don't get
persistence, and we have to ship a known password. The barrier here is
that ns-slapd (our server binary) needs assistance from dscreate/setup-
ds.pl to create dse.ldif and its related instance parts.

So we need to move the *SETUP* logic of DS out of python and INTO an
early runtime part of ns-slapd, to be able to process a .inf +
environment variables to create dse.ldif on startup if it does not exist.

Thankfully this also solves the second case: a persistent image with
backing storage.

The challenge here is the in-place upgrade. When you do, say:

docker run -v ... 389ds:1.4.0
docker kill ...
docker run -v ... 389ds:1.5.0

Because our current upgrade scripts run in perl at RPM upgrade time,
when we launch the 1.5.0 container, it would NOT have the upgraded
configuration/plugin/other data that we may need.

Thankfully, this is in the process of being fixed via some patches that
are currently under review, so this concern is "mostly" fixed, and the
team is pretty aware that upgrade perl scripts aren't an acceptable
thing going forward.


Finally, there is the stateless instance - again, this requires more
interaction at start up to get the replica setup like this, but it also
requires us to coordinate docker networking / others for "what IP do we
replicate to?". This is a tougher challenge. Today we could solve this
externally by just reconfiguring our various instances, but this
automation would be nice to achieve.


Now there are still other issues - certificates and load balancing are
big ones. We have the concept of "SSF" in the server (despite SSF's
flaws). We won't let you do password changes or other operations
WITHOUT a secure connection, but today that means putting cert and key
material INTO the container.

So another area we need to improve is load balancer support for
haproxy. There is an open ticket for parsing HAproxy metadata for
proper log data, but we need to have an "SSF override" value so that DS
on plaintext 389 "treats it" like it's a secure connection, and haproxy
ONLY advertises 636 (ldaps).


Another concern is backups and how to take them effectively, or how to
do data restore correctly. I haven't decided on a good method for this
yet (we could have different containers that just use the same volumes
and handle it correctly, or we could rely on the online tasks).


 But William, show me the code!!! 

Okay, okay. Today, you can build and test our docker container from git
master ONLY. We rely on a few too 

[389-users] Re: How to containerize 389DS using Docker in production systems

2018-03-07 Thread William Brown
On Wed, 2018-03-07 at 23:50 +, tda...@email.arizona.edu wrote:
> > On Wed, 2018-03-07 at 08:52 +0100, Alberto García Sola wrote:
> > 
> > Hi there,
> > 
> > I'm currently working on docker support in 389-ds.
> 
> William, I'm really glad to hear this. We've been running 389 server
> in docker in EC2 instances for months now and it works great. We have
> home grown scripts for automating the DS installation and replication
> between 2 DS instances, but it would be awesome to use a supported
> setup instead, so I'd really like to try what you have. Our setup
> uses mounted EBS volumes that contain all the necessary DS folders so
> that the EC2s can be blown away and recreated any time we want.

Hope you don't mind, but this is a bit of a brain dump. We have some
open tickets about this. Currently we have LOADS of support here for
containers, like detection of container memory and process limits,
support for containerised installs in dscreate, and more.

But first I want to describe the general picture and situation.

It would be great to have a temporary demo instance like:

docker run 389ds:1.4.0

And that *works*.

Now, when you want to really use it in production something more like:

docker run -v /etc/dirsrv:/etc/dirsrv -v
/var/lib/dirsrv:/var/lib/dirsrv 389ds:1.4.0

And now you have persistence, and can pull, upgrade, destroy,
everything.

If you want a readonly ephemeral replica, something maybe like:

docker run -e replication_manager=12345 389ds:1.4.0

Which would trigger the replica ID to become 65535 and set the
replication manager password (which could now be pushed to from another
instance).

So what are the challenges to these scenarios? 

Well, the first scenario "kinda works" today, but you don't get
persistence, and we have to ship a known password. The barrier here is
that ns-slapd (our server binary) needs assistance from dscreate/setup-
ds.pl to create dse.ldif and its related instance parts.

So we need to move the *SETUP* logic of DS out of python and INTO an
early runtime part of ns-slapd, to be able to process a .inf +
environment variables to create dse.ldif on startup if it does not exist.
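
To make that concrete, the kind of .inf I mean is roughly the dscreate
style one - treat the sections and keys below as illustrative only, the
authoritative list is whatever the dscreate template in your tree
defines:

# write a minimal setup .inf (the values here are examples, not defaults)
cat > /data/ds-setup.inf << 'EOF'
[general]
config_version = 2

[slapd]
instance_name = localhost
root_password = directory manager password

[backend-userroot]
suffix = dc=example,dc=com
EOF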

Thankfully this also solves the second case: a persistent image with
backing storage.

The challenge here is the in-place upgrade. When you do, say:

docker run -v ... 389ds:1.4.0
docker kill ...
docker run -v ... 389ds:1.5.0

Because our current upgrade scripts run in perl at RPM upgrade time,
when we launch the 1.5.0 container, it would NOT have the upgraded
configuration/plugin/other data that we may need.

Thankfully, this is in the process of being fixed via some patches that
are currently under review, so this concern is "mostly" fixed, and the
team is pretty aware that upgrade perl scripts aren't an acceptable
thing going forward.


Finally, there is the stateless instance - again, this requires more
interaction at start up to get the replica setup like this, but it also
requires us to coordinate docker networking / others for "what IP do we
replicate to?". This is a tougher challenge. Today we could solve this
externally by just reconfiguring our various instances, but this
automation would be nice to achieve.


Now there are still other issues - certificates and load balancing are
big ones. We have the concept of "SSF" in the server (despite SSF's
flaws). We won't let you do password changes or other operations
WITHOUT a secure connection, but today that means putting cert and key
material INTO the container.

So another area we need to improve is load balancer support for
haproxy. There is an open ticket for parsing HAproxy metadata for
proper log data, but we need to have an "SSF override" value so that DS
on plaintext 389 "treats it" like it's a secure connection, and haproxy
ONLY advertises 636 (ldaps). 
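
To illustrate the topology I mean, a sketch of the haproxy side could
look like the below (only an example - the cert path and addresses are
made up, and nothing here is something we ship or document yet):

# append a TCP-mode ldaps frontend that terminates TLS on 636 and
# forwards to the container's plaintext 389 port
cat >> /etc/haproxy/haproxy.cfg << 'EOF'
frontend ldaps_in
    mode tcp
    bind *:636 ssl crt /etc/haproxy/ldap.pem
    default_backend ds_plain

backend ds_plain
    mode tcp
    # send-proxy adds the PROXY protocol header, i.e. the "HAproxy
    # metadata" the open ticket is about parsing on the DS side
    server ds1 10.0.0.10:389 send-proxy
EOF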


Another concern is backups and how to take them effectively, or how to
do data restore correctly. I haven't decided on a good method for this
yet (we could have different containers that just use the same volumes
and handle it correctly, or we could rely on the online tasks).
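
As a strawman for the "different containers that use the same volumes"
idea (the image, names and paths below are made up, and it assumes the
instance is stopped or quiesced while you copy the data):

# run a short-lived container that shares the data volume and archives it
docker run --rm \
  --volumes-from ds \
  -v /srv/backups:/backups \
  fedora:latest \
  tar czf /backups/ds-$(date +%F).tar.gz /var/lib/dirsrv/slapd-localhost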


 But William, show me the code!!! 

Okay, okay. Today, you can build and test our docker container from git
master ONLY. We rely on a few too many things that are only in 1.4.0
and this is a fast-ish moving target today. I won't promise we have a
stable solution for you, but I'd love to hear your thoughts on how we
can improve.

If you want to test this today:

http://www.port389.org/docs/389ds/contributing.html#get-the-code

git clone https://pagure.io/389-ds-base.git
cd 389-ds-base
make -f docker.mk poc

This builds a container called "389-poc:latest", which functions like
the "demo" instance. We statically create an instance in the container
called "localhost" with the dm password of "directory manager
password". There is an updated to this poc in pagure in the following
ticket: https://pagure.io/389-ds-base/issue/49570
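
If it helps, a quick smoke test once the build finishes could look like
this - the "-p" port mapping and the container name are assumptions on
my part, and the dm password is the static one mentioned above:

docker run -d --name 389-poc -p 3389:389 389-poc:latest

# query the root DSE over the mapped port
ldapsearch -H ldap://localhost:3389 -x \
  -D "cn=Directory Manager" -w "directory manager password" \
  -s base -b "" "(objectClass=*)"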

There is still quite a bit of integration work to go, but I'd love some
feedback and review of 

[389-users] Re: How to containerize 389DS using Docker in production systems

2018-03-07 Thread tdarby
> On Wed, 2018-03-07 at 08:52 +0100, Alberto García Sola wrote:
> 
> Hi there,
> 
> I'm currently working on docker support in 389-ds.

William, I'm really glad to hear this. We've been running 389 server in docker 
in EC2 instances for months now and it works great. We have home grown scripts 
for automating the DS installation and replication between 2 DS instances, but 
it would be awesome to use a supported setup instead, so I'd really like to try 
what you have. Our setup uses mounted EBS volumes that contain all the 
necessary DS folders so that the EC2s can be blown away and recreated any time 
we want.


[389-users] Re: How to containerize 389DS using Docker in production systems

2018-03-07 Thread William Brown
On Wed, 2018-03-07 at 08:52 +0100, Alberto García Sola wrote:
> Reading the documentation I find little or no information regarding
> containers and Docker, but I've found a few comments in the changelog
> regarding Docker. I plan to use them in a highly scalable and elastic
> environment.
> I wonder, what's the best way to containerize 389DS using Docker to
> use in production systems? 
> Any considerations regarding storage (beyond being persistent)? 
> Any experiences using Docker and 389DS in production systems?

Hi there,

I'm currently working on docker support in 389-ds.

The current status is "almost there", but before I approve it for use I
have a high standard I want to meet. I want our support to be the best
possible. 

I want to be able to redefine the root password via environment
variables, and I want to be able to configure the server via the
environment or some other means that would work nicely in something
like Kubernetes, so you can then have scripted scaling for replicas.
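
None of this exists yet, but the sort of thing I have in mind is along
these lines (the variable names are purely hypothetical):

# hypothetical - nothing reads these variables today
docker run -d \
  -e DS_DM_PASSWORD=SomeSecret \
  -e DS_SUFFIX=dc=example,dc=com \
  389ds:latest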

We also have some plans for in-place upgrades (rather than the perl
scripts) so that containers can upgrade at run time rather than relying
on rpm/deb to run upgrade scripts. 

Today, you can run ds in a container, but you need to build the
/etc/dirsrv/slapd-<instance> directory "separately" and then bring that
to the container.

If you are interested in testing the current image, I can provide steps
on using it, and if you have some use cases or ideas, I'd love to hear
them. I really want our docker support ready in the next few months.

Thanks!


> Regards,
> Alberto.
-- 
Thanks,

William Brown