Re: [atomic-devel] firewalld in atomic host

2017-04-23 Thread Jason DeTiberus
On Fri, Apr 21, 2017 at 10:16 AM, Dusty Mabe <du...@dustymabe.com> wrote:

> NOTE: if you respond to this message please 'reply-all'.
>
> I'd like to discuss firewalld on atomic host. Recently I was trying to
> figure out the best way to explain to other users how to set firewall rules
> on atomic host.
>
> Usually I would say add your rules and then iptables-save, but on Atomic
> Host docker has added its firewall rules in there dynamically, so if you
> run iptables-save
> you'll get a bunch of stuff that you don't want in your static
> configuration.
>
> There are ways around this: manually create your config file, or use
> iptables-save
> and then rip the docker stuff out. Either way it's a bit of a pain. I think
> firewalld would make this easier on the user. Not sure of the pro/con
> ratio though.
>

While I can see firewalld improving the situation wrt documenting how to
add/persist firewall changes for Atomic Host (especially when using
moby/docker), I think there is a bigger concern with firewalld being
absent. If a user is running multiple applications that modify the host
firewall (docker, Kubernetes, OpenShift, etc), firewalld provides a way to
make firewall modifications in a consistent and repeatable manner, where
iptables does not. There is the --wait flag for iptables; however, any
applications/users interacting with iptables need to ensure they use it
consistently.

--
Jason DeTiberus


Re: [atomic-devel] How to apply non-atomic tuned profiles to atomic host

2016-10-14 Thread Jason DeTiberus
On Fri, Oct 14, 2016 at 7:40 AM, Jeremy Eder <je...@redhat.com> wrote:

> On Wed, Oct 12, 2016 at 10:29 AM, Colin Walters <walt...@verbum.org>
> wrote:
>
>>
>> On Tue, Oct 11, 2016, at 02:45 PM, Jeremy Eder wrote:
>>
>> Because layered products (not just OpenShift) do not want to be coupled
>> to the RHEL release schedule to update their profiles.  They want to own
>> their profiles and rely on the tuned daemon to be there.
>>
>>
>> I see two aspects to this discussion:
>>
>> 1) Generic tradeoffs with host configuration
>> 2) The specific discussion about tuned profiles
>>
>> Following 2) if I run:
>>
>> $ cd ~/src/github/openshift/origin
>> $ git describe --tags --always
>> v1.3.0-rc1-14-ge9081ae
>> $ git log --follow contrib/tuned/origin-node-host/tuned.conf
>>
>> There are a grand total of *two* commits that aren't mere
>> code reorganization:
>>
>> commit d959d25a405bb28568a17f8bf1b79e7d427ae0dc
>> Author: Jeremy Eder <je...@redhat.com>
>> AuthorDate: Tue Mar 29 10:40:03 2016 -0400
>> Commit: Jeremy Eder <je...@redhat.com>
>> CommitDate: Tue Mar 29 10:40:03 2016 -0400
>>
>> bump inotify watches
>>
>> commit c11cb47c07e24bfeec22a7cf94b0d6d693a00883
>> Author: Scott Dodson <sdod...@redhat.com>
>> AuthorDate: Thu Feb 12 13:06:57 2015 -0500
>> Commit: Scott Dodson <sdod...@redhat.com>
>> CommitDate: Wed Mar 11 16:41:08 2015 -0400
>>
>> Provide both a host and guest profile
>>
>> That level of change seems quite sufficient for the slower
>> RHEL cadence, no?
>>
>
> Decoupling profiles from RHEL has already been negotiated with many
> different engineering teams.  As you can imagine, it has ties into our
> channels and distribution mechanics.  Making an exception here doesn't make
> sense to me when it's working fine everywhere else.
>

Provided the reboot issue gets addressed, I think I would prefer this approach
as well. We are working as best we can to decouple the underlying host
management from the cluster management, especially around upgrades. Being
able to update and ship the tuned profiles as needed would allow us to
manage them as part of the cluster management without having to query the
underlying host state to determine whether we need to make temporary
modifications. The other issue is that we don't require users to manage
their environments with Ansible, so our temporary modifications would also
need to be documented and implemented separately for non-Ansible users.
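
For anyone unfamiliar with the profiles in question, a minimal sketch of
what one looks like on disk (the path, include target, and values here are
illustrative, not the shipped profile):

# /etc/tuned/openshift-node-host/tuned.conf -- illustrative only
[main]
summary=Example node profile layered on top of the Atomic Host profile
include=atomic-host

[sysctl]
# e.g. the inotify watch bump discussed in this thread
fs.inotify.max_user_watches=65536

Shipping and updating a file like this from the cluster side is exactly the
kind of change we'd like to make without waiting on a host update.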


>> Particularly when one considers that something like the
>> inotify watch bump could easily be part of a "tuned updates"
>> in the installer that would live there until the base tuned
>> profile updates.
>>
>> Right?
>>
>
> Personally I would prefer to keep tuning centralized into tuned and not
> have 5 different places where it's being done...but to your point around
> having two commits ... I'm losing that consolidation battle because
> Kubernetes has hardcoded certain sysctl adjustments that ideally we really
> should have carried in tuned :-/  But if we can at least avoid doing things
> in openshift-ansible at least that's one less place to track.
>

I can understand why Kubernetes wouldn't want to require tuned, but maybe
we can drive changes upstream to make sysctl management optional. Then we
would be able to add the tuned requirement in our packaging and handle it
there without forcing the tuned dependency on upstream.



>
>
>
>> Before we go the layered RPM route I just want to make sure you're
>> onboard with it, as I was not aware of any existing in-product users of
>> that feature.  Are there any? If we're the first that's not an issue, just
>> want to make sure we get it right.
>>
>>
>> In this particular case of tuned, I'd argue that Atomic Host should come
>> out of the box with these profiles,
>> and that any async updates could be done via the openshift-ansible
>> installer.
>>
>
> Realistically speaking -- we may want to use AH with another
> product...we've developed realtime and NFV profiles which again exist in
> another channel and there is no such thing as openshift-ansible there.
> What would be your approach if the openshift-ansible option did not exist?
> (back to scattered tuning)
>
>


-- 
Jason DeTiberus


Re: [atomic-devel] How to apply non-atomic tuned profiles to atomic host

2016-10-11 Thread Jason DeTiberus
On Tue, Oct 11, 2016 at 1:53 PM, Scott Dodson <sdod...@redhat.com> wrote:

> On Tue, Oct 11, 2016 at 1:44 PM, Jason DeTiberus <jdeti...@redhat.com>
> wrote:
> >
> >
> > On Tue, Oct 11, 2016 at 1:36 PM, Jeremy Eder <je...@redhat.com> wrote:
> >>
> >> Hi,
> >>
> >> Right now we've got the tuned package in the base atomic content.  There
> >> are atomic-host and atomic-guest tuned profiles which are currently
> >> identical to the atomic-openshift ones.  We'd like to make a change to
> the
> >> atomic-openshift-node/master profiles (which are distributed with the
> >> openshift product).
> >>
> >> Going fwd, I think we would rather not maintain two locations (atomic-*
> >> and atomic-openshift-* tuned profiles) with identical content.
> >>
> >> So, trying to reason a way to get those profiles onto an AH since we
> can't
> >> install the tuned-atomic-openshift RPM...We could copy them to
> /etc/tuned
> >> and enable them manually...but I'm not sure that jives with how we're
> >> supposed to use AH and it seems kind of hacky since there would be
> "orphan
> >> files" in /etc.  Thoughts?
> >
> >
> > With a sufficiently recent version of atomic host, you could use layered
> > packages:
> > http://www.projectatomic.io/blog/2016/08/new-centos-
> atomic-host-with-package-layering-support/
>
> Is that a solution we want to support OCP customers on?
>

I don't see why not. As we go down the path of reducing the size of atomic
host, we'll need to install dependencies that may not make sense to deploy
containerized (storage dependencies for example), so package layering seems
like the proper place to manage those dependencies. I would view the tuned
profiles in the same way.


-- 
Jason DeTiberus


Re: [atomic-devel] How to apply non-atomic tuned profiles to atomic host

2016-10-11 Thread Jason DeTiberus
On Tue, Oct 11, 2016 at 1:36 PM, Jeremy Eder <je...@redhat.com> wrote:

> Hi,
>
> Right now we've got the tuned package in the base atomic content.  There
> are atomic-host and atomic-guest tuned profiles which are currently
> identical to the atomic-openshift ones.  We'd like to make a change to the
> atomic-openshift-node/master profiles (which are distributed with the
> openshift product).
>
> Going fwd, I think we would rather not maintain two locations (atomic-*
> and atomic-openshift-* tuned profiles) with identical content.
>
> So, trying to reason a way to get those profiles onto an AH since we can't
> install the tuned-atomic-openshift RPM...We could copy them to /etc/tuned
> and enable them manually...but I'm not sure that jives with how we're
> supposed to use AH and it seems kind of hacky since there would be "orphan
> files" in /etc.  Thoughts?
>

With a sufficiently recent version of atomic host, you could use layered
packages:
http://www.projectatomic.io/blog/2016/08/new-centos-atomic-host-with-package-layering-support/
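
A rough sketch of the workflow, assuming the profiles ship in an RPM
(package and profile names here are illustrative):

$ rpm-ostree pkg-add tuned-profiles-atomic-openshift
$ systemctl reboot   # layered packages take effect in the new deployment
$ tuned-adm profile atomic-openshift-node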


-- 
Jason DeTiberus


Re: [atomic-devel] ARM builds?

2016-08-25 Thread Jason DeTiberus
On Thu, Aug 25, 2016 at 5:19 PM, Josh Berkus <jber...@redhat.com> wrote:

> On 08/25/2016 05:08 PM, Jason DeTiberus wrote:
> >
> >
> > On Thu, Aug 25, 2016 at 4:59 PM, Jason Brooks <jbro...@redhat.com
> > <mailto:jbro...@redhat.com>> wrote:
> >
> > On Thu, Aug 25, 2016 at 1:55 PM, Josh Berkus <jber...@redhat.com
> > <mailto:jber...@redhat.com>> wrote:
> > > Folks,
> > >
> > > Partly due to lugging my micro-cluster around, I've been getting a
> lot
> > > of interest in having an ARM port of Atomic Host (ARM64, mostly).
> Not
> > > just hobby user interest, but interest from SDN and IoT makers.
> > >
> > > I'm raising this on atomic-devel because I'm not sure if it makes
> more
> > > sense to start on this via Fedora or CentOS.  Thoughts?
> >
> > I'm into it. I imagine there are parties on the CentOS and Fedora
> > sides who'd be into this too, and any progress on one side will
> > probably benefit the other side.
> >
> > I think docker and kube can both run on ARM, and CentOS and Fedora
> run
> > on ARM. I don't know about the ostree bits...
> >
> >
> > I like the idea, as long as the focus is on arm64. Unfortunately, I
> > think arm64 support for Raspberry Pis is still a work in progress
> > (though, I think CentOS has been making progress in this area as of
> > late).
>
> Well, enabling the Raspberry Pi would be nice, but not really the main
> target.  As fun as the Pis are to play with, I'm a lot more concerned
> with enabling folks who want to really build systems, and that means
> "real" ARM64.
>

+1. My concern is that there have been quite a few requests for the Pi,
mainly for the demo/wow factor.


-- 
Jason DeTiberus


Re: [atomic-devel] ARM builds?

2016-08-25 Thread Jason DeTiberus
On Thu, Aug 25, 2016 at 4:59 PM, Jason Brooks <jbro...@redhat.com> wrote:

> On Thu, Aug 25, 2016 at 1:55 PM, Josh Berkus <jber...@redhat.com> wrote:
> > Folks,
> >
> > Partly due to lugging my micro-cluster around, I've been getting a lot
> > of interest in having an ARM port of Atomic Host (ARM64, mostly).  Not
> > just hobby user interest, but interest from SDN and IoT makers.
> >
> > I'm raising this on atomic-devel because I'm not sure if it makes more
> > sense to start on this via Fedora or CentOS.  Thoughts?
>
> I'm into it. I imagine there are parties on the CentOS and Fedora
> sides who'd be into this too, and any progress on one side will
> probably benefit the other side.
>
> I think docker and kube can both run on ARM, and CentOS and Fedora run
> on ARM. I don't know about the ostree bits...
>

I like the idea, as long as the focus is on arm64. Unfortunately, I think
arm64 support for Raspberry Pis is still a work in progress (though, I
think CentOS has been making progress in this area as of late).

-- 
Jason DeTiberus


Re: [atomic-devel] Moving towards containerized Kube/layered packages

2016-08-22 Thread Jason DeTiberus
On Mon, Aug 22, 2016 at 10:13 AM, Colin Walters <walt...@verbum.org> wrote:

> Hi, I'd like to propose a fairly fundamental rework of Atomic Host.  TL;DR:
>
> - Move towards "system containers" (or layered packages) for flannel/etcd
>

+1


> - Move towards containers (system, or Docker) for kubernetes-master
>

Unless we can run on docker 1.12+, where the docker daemon can be restarted
without impacting running containers, I would suggest this would have to be
a system container or layered package.
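
(For concreteness, a rough sketch of the system-container route as I
understand the tooling -- the image reference is purely illustrative:

$ atomic install --system --name=kube-master registry.example.com/kube-master
$ systemctl start kube-master

The container runs under its own systemd unit via runc rather than the
docker daemon, so a docker restart doesn't take it down.)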


> - Move towards layered packages for kubernetes-node and storage
> (ceph/gluster)
>

+1


> In progress PR:
>
> https://github.com/CentOS/sig-atomic-buildscripts/pull/144
>
> There are advantages to this and disadvantages; I think we'll have some
> short term transition pain, but past that short term the advantages
> outweigh the disadvantages a lot.
>
> == Advantage: Version flexibility ==
>
> etcd should really have its own identity in a clustered environment, and
> not necessarily roll forwards/backwards with the underlying host version.
> I've had users report things like hitting, e.g., a Kubernetes or Docker
> issue and rolling back their host, which rolled back etcd as well, but the
> store isn't forwards-compatible, which then breaks.  There's also a
> transition to etcd2 coming, which again one should really manage distinct
> from host upgrades.


> Another major example is that while we chose to include Kubernetes in
> the host, it's a fast moving project, and many people want to use a newer
> version, or a different distribution like OpenShift.  The version
> flexibility
> also applies to other components like Ceph/Gluster and flannel.
>
> == Advantage: Size and fit to purpose ==
>
> We included things like the Ceph and GlusterFS drivers in the base
> host, but that was before we had layered packages, and there's
> also continuing progress on containerized drivers.  If one is using
> an existing IaaS environment like OpenStack or AWS, many users
> want to reuse Cinder/AWS, rather than maintaining their own storage.
>
> Similarly, while flannel is a good general purpose tool, there are
> lots of alternatives, and some users already have existing SDN solutions.
>
> == Disadvantage: More assembly required ==
>
> This is a superficial disadvantage I think - in practice, since we didn't
> pick a single official installation/upgrade system (like OpenShift has
> openshift-ansible), if you want to run a Kubernetes cluster, you need
> to do a lot of assembly anyways.  Adding a bit more to that I suspect
> isn't going to be too bad for most users.
>
> Down the line I'd like to revisit the installation/upgrade story - there's
> work happening upstream in
> https://github.com/kubernetes/contrib/tree/master/ansible
> and I think there's also interest and some work in
> having parts of openshift-ansible be available for baseline Kubernetes
> and accessible on Galaxy etc.


> == Disadvantage: Dependency on new tooling ==
>
> Both `rpm-ostree pkg-add` and `atomic install --system` are pretty new.
> They both could use better documentation, more real world testing, and
> in particular haven't gained management tool awareness yet (for example,
> they need
> better Ansible support).
>
> == Summary ==
>
> If people agree, I'd like to merge the PR pretty soon and do a new CentOS
> AH Alpha,
> and we can collaborate on updating docs/tools around this.  For
> Fedora...it's
> probably simplest to leave 24 alone and just do 25 for now.
>
> What I'd like to focus on is having AH be more of a good "building block"
> rather than positioning it as a complete solution.  We ensure that the base
> Docker/kernel/SELinux/systemd block works together, system management tools
> work, and look at working more in the upstream Kubernetes (and OpenShift)
> communities, particularly around Ansible.
>
>


-- 
Jason DeTiberus


Re: [atomic-devel] Introducing Commissaire

2016-05-19 Thread Jason DeTiberus
On May 19, 2016 2:59 PM, "Derek Carr" <dec...@redhat.com> wrote:
>
> Related: https://github.com/kubernetes/kubernetes/pull/23343
>
> This is the model proposed by CoreOS for supporting cluster-upgrades.
> Basically, a run-once kubelet is launched by the init system, and pulls
> down the real kubelet to run as a container, then all other requisite host
> services are provisioned as a DaemonSet derived set of pods on the node.
> This does not cover things like kernel updates, but definitely does enable
> a lot of scenarios for updates of kubelet/openshift-node if we adopted the
> pattern.

Definitely solves a large chunk of the problem. We still need to worry
about host upgrades, data center maintenance, etc.

I'm all for the cluster owning all cluster upgrade related tasks, though.
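
For concreteness, a loose sketch of the run-once bootstrap pattern Derek
describes above (unit name, image, and mounts are illustrative, not taken
from the PR):

# /etc/systemd/system/kubelet-bootstrap.service -- illustrative sketch
[Unit]
Description=Run-once bootstrap that pulls and starts the real kubelet container
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/usr/bin/docker pull registry.example.com/kubelet:v1.3.0
ExecStart=/usr/bin/docker run -d --name kubelet --net=host --pid=host \
    --privileged -v /etc/kubernetes:/etc/kubernetes:ro \
    registry.example.com/kubelet:v1.3.0

[Install]
WantedBy=multi-user.target

Everything beyond that first unit (proxy, SDN, etc.) would then arrive as
DaemonSet pods once the kubelet registers with the cluster.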

>
> Thanks,
> Derek
>
>
>
>
>
>
> On Thu, May 19, 2016 at 12:44 PM, Jason DeTiberus <jdeti...@redhat.com>
wrote:
>>
>>
>> On Thu, May 19, 2016 at 12:18 PM, Chmouel Boudjnah <chmo...@redhat.com>
wrote:
>>>
>>> Hello, thanks for releasing this blog post. From a first impression
>>> there is a bit of an overlap if you are already using CloudForms to do
>>> that, isn't there?
>>
>>
>> With current implementations, yes. That said, CloudForms could
>> eventually switch to using Commissaire for managing clusters of hosts.
>>
>> As commissaire matures, I see great promise for it to handle a lot of the
>> complexity involved in managing complex cluster upgrades (think
>> OpenShift), where even something like applying kernel updates and
>> orchestrating a reboot of hosts requires much more consideration than
>> simply applying and restarting, or just performing the operations
>> serially. Long term we need something that can be more integrated with
>> Kubernetes/OpenShift that will allow for making ordering/restarting
>> decisions on things like pod placement, scheduler configuration, and
>> disruption budgets (when they are implemented). Having a centralized
>> place to manage that complexity is much better than having multiple
>> external tools do the same.
>>
>>
>>>
>>>
>>> Chmouel
>>>
>>> On Thu, May 19, 2016 at 3:55 PM, Stephen Milner <smil...@redhat.com>
wrote:
>>> > Hello all,
>>> >
>>> > Have you heard about some kind of cluster host manager project and
>>> > want to learn more? Curious about what this Commissaire thing is that
>>> > has shown up in the Project Atomic GitHub repos?
>>> > The short answer is it is a lightweight REST interface for cluster
>>> > host management. For more information check out the introductory blog
>>> > post ...
>>> >
>>> > http://www.projectatomic.io/blog/2016/05/introducing_commissaire/
>>> >
>>> > ... and stay tuned for more in-depth posts for development and
>>> > operations in the near future!
>>> >
>>> > --
>>> > Thanks,
>>> > Steve Milner
>>> >
>>>
>>
>>
>>
>> --
>> Jason DeTiberus
>
>


Re: [atomic-devel] Parallel installing 1.9 and 1.10

2016-03-28 Thread Jason DeTiberus
Does it make sense to configure it through alternatives?
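
(Something along these lines, borrowing the paths from Colin's example
below; the priorities are arbitrary:

$ alternatives --install /usr/bin/docker docker /usr/libexec/docker-1.10 20
$ alternatives --install /usr/bin/docker docker /usr/libexec/docker-1.9 10
$ alternatives --set docker /usr/libexec/docker-1.9

That would let users flip versions without editing /etc/sysconfig/docker by
hand.)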

On Mon, Mar 28, 2016 at 9:41 AM, Andy Goldstein <agold...@redhat.com> wrote:

> Ok, makes sense.
>
> I'm +1 to having the ability to test out newer Docker versions. How would
> they ship - in 1 RPM, or multiple?
>
> On Mon, Mar 28, 2016 at 9:38 AM, Colin Walters <walt...@verbum.org> wrote:
>
>> On Mon, Mar 28, 2016, at 09:31 AM, Andy Goldstein wrote:
>> > Would this be with SCL, or some other means?
>>
>> The SCL model/tools become more useful when dynamic linking is in play,
>> but currently
>> in our usage of golang there aren't any beyond a few system ones.  So I
>> think it would
>> work to just have e.g.
>> /usr/libexec/docker-1.10
>> /usr/libexec/docker-1.9
>>
>> And choose via a config file in /etc/sysconfig/docker which to run.
>>
>> (And even if we did introduce dynamic linking, using rpath I think is
>> saner for this case)
>>
>>
>>
>>
>


-- 
Jason DeTiberus


Re: [atomic-devel] Who can do Slack?

2016-03-23 Thread Jason DeTiberus
On Wed, Mar 23, 2016 at 2:07 PM, Josh Berkus <jber...@redhat.com> wrote:

> On 03/23/2016 07:50 AM, Jason DeTiberus wrote:
> > Is this targeted for project atomic only or the larger atomic/openshift
> > community?
>
> I'd *like* to do it for all of AOS.  Do you know anyone from Origin who
> will commit to being on Slack regularly?
>

I'd be willing to hang out there, and could occasionally help with
installation-related issues, but I'm sure we could get some more of the
team to join as well.


>
> --
> --
> Josh Berkus
> Project Atomic
> Red Hat OSAS
>



-- 
Jason DeTiberus


Re: [atomic-devel] Who can do Slack?

2016-03-23 Thread Jason DeTiberus
Is this targeted for project atomic only or the larger atomic/openshift
community?

On Wed, Mar 23, 2016 at 4:19 AM, Brian (bex) Exelbierd <b...@pobox.com>
wrote:

> +1 and willing
>
>
> On 03/22/2016 11:23 PM, Josh Berkus wrote:
>
>> Atomic Legion,
>>
>> So we want to have a presence on Slack in order to reach users and
>> developers who don't do IRC or email so much.  However, this only makes
>> sense if several of us are willing to log into the channel every day.
>>
>> Who's up for it?  Note that you can use many IRC clients to log in, once
>> you've set up an account.
>>
>>
> --
> Brian (bex) Exelbierd | b...@pobox.com
> +420-606-055-877 | @bexelbie
> http://www.winglemeyer.org
>
>


-- 
Jason DeTiberus