Re: [atomic-devel] tools and systemtap containers are available in Fedora

2017-10-05 Thread Jeremy Eder
Whoops, sorry Dan, my bad.  That was a relic from earlier, when I tried
SYS_ADMIN.

Looks like --security-opt label:disable is enough to get it going.

# docker run --security-opt label:disable --cap-add SYS_MODULE -v
/sys/kernel/debug:/sys/kernel/debug -v /usr/src/kernels:/usr/src/kernels -v
/usr/lib/modules/:/usr/lib/modules/ -v /usr/lib/debug:/usr/lib/debug -t -i
--name systemtap candidate-registry.fedoraproject.org/f26/systemtap
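
A quick way to confirm the option landed on the running container (a sketch,
assuming the container is named systemtap as above; the bracketed output is
illustrative):

# docker inspect --format '{{ .HostConfig.SecurityOpt }}' systemtap
[label:disable]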

On Thu, Oct 5, 2017 at 1:47 PM, Frank Ch. Eigler <f...@redhat.com> wrote:

> Hi, Dan -
>
>
> > Could you show the docker line that atomic run is executing?
>
> % atomic run --spc candidate-registry.fedoraproject.org/f26/systemtap
> /usr/share/systemtap/examples/io/iotop.stp
> docker run --cap-add SYS_MODULE -v /sys/kernel/debug:/sys/kernel/debug -v
> /usr/src/kernels:/usr/src/kernels -v /usr/lib/modules/:/usr/lib/modules/
> -v /usr/lib/debug:/usr/lib/debug -t -i --name systemtap-spc
> candidate-registry.fedoraproject.org/f26/systemtap
> /usr/share/systemtap/examples/io/iotop.stp
>
> ... which fails.  But a hand-run % docker run, with "--security-opt
> label:disable" added at the front, works for me.
>
>
> > The LABEL would be the preferred way.
>
> Sure, just someone(tm) needs to find the Dockerfile in git.  I
> couldn't find it from a dozen minutes reading
> https://fedoraproject.org/wiki/Changes/Layered_Docker_Image_Build_Service
> and pals.
>
>
> - FChE
>



-- 

-- Jeremy Eder


Re: [atomic-devel] tools and systemtap containers are available in Fedora

2017-10-05 Thread Jeremy Eder
I don't see any AVC when it fails while label:disable is set.
I ran semodule -DB and retried; I now see the dontaudit denials, but still
nothing interesting.
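
For the record, the dontaudit round-trip looked like this (a sketch; the
ausearch call assumes auditd is running):

# semodule -DB                  # rebuild policy with dontaudit rules disabled
# ausearch -m avc -ts recent    # look for fresh denials
# semodule -B                   # restore the dontaudit rules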

I'm not sure if you were talking to me or Frank with the atomic command
line...

I pulled the label out of docker inspect on the systemtap image so I can run
it manually.  Here is what I am running; all I have added is the
--security-opt label:disable part.

# docker run --security-opt label:disable --cap-add SYS_ADMIN -v
/sys/kernel/debug:/sys/kernel/debug -v /usr/src/kernels:/usr/src/kernels -v
/usr/lib/modules/:/usr/lib/modules/ -v /usr/lib/debug:/usr/lib/debug -t -i
--name systemtap candidate-registry.fedoraproject.org/f26/systemtap

I also tried with --security-opt seccomp:unconfined.  That did not help.

With --privileged added to the above command line, systemtap works.

This is likely why systemtap has always worked in the rhel-tools
container: the label on that image includes --privileged.
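
That label can be read straight out of the image metadata (a sketch,
assuming the label key is RUN, which is what atomic(1) consumes):

# docker inspect --format '{{ index .Config.Labels "RUN" }}' \
    registry.access.redhat.com/rhel7/rhel-tools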



On Thu, Oct 5, 2017 at 1:25 PM, Daniel Walsh <dwa...@redhat.com> wrote:

> On 10/05/2017 01:18 PM, Jeremy Eder wrote:
>
> setenforce 0 works...security-opt label:disable does not.
>
> On Thu, Oct 5, 2017 at 1:06 PM, Daniel Walsh <dwa...@redhat.com> wrote:
>
>> On 10/05/2017 01:00 PM, Frank Ch. Eigler wrote:
>>
>>> wcohen forwarded:
>>>
>>> [...]
>>>>
>>>>> [root@dhcp23-91 ~]# atomic run --spc candidate-registry.fedoraproject.org/f26/systemtap
>>>>> docker run --cap-add SYS_MODULE -v /sys/kernel/debug:/sys/kernel/debug
>>>>> -v /usr/src/kernels:/usr/src/kernels -v /usr/lib/modules/:/usr/lib/modules/
>>>>> -v /usr/lib/debug:/usr/lib/debug -t -i --name systemtap-spc
>>>>> candidate-registry.fedoraproject.org/f26/systemtap
>>>>> [...]
>>>>> ERROR: Couldn't insert module '/tmp/stapNEjJDX/stap_4f013e7562b546a0316af840de9f0713_8509.ko': Operation not permitted
>>>>> [...]
>>>>>
>>>> I bet
>>> # setenforce 0
>>> makes it work for you.  As per audit.log:
>>>
>>> type=AVC msg=audit(1507222590.683:7940): avc:  denied  { module_load }
>>> for  pid=7595 comm="staprun" scontext=system_u:system_r:container_t:s0:c534,c921
>>> tcontext=system_u:system_r:container_t:s0:c534,c921 tclass=system
>>> permissive=1
>>>
>>>
>>> - FChE
>>>
>>
>> Rather than putting the system into permissive mode, you should run a
>> privileged container or at least disable SELinux protections.
>>
>>
>> docker run -ti --security-opt label:disable ...
>>
>>
>>
>
>
> --
>
> -- Jeremy Eder
>
> Could you show me the AVC you get when you do the label:disable?
>
>
>


-- 

-- Jeremy Eder


Re: [atomic-devel] tools and systemtap containers are available in Fedora

2017-10-05 Thread Jeremy Eder
Forgot to add Will Cohen (discussed stap errors with him briefly).  Also, my
replies won't make it to the dev list since I am not subscribed (just FYI).

On Thu, Oct 5, 2017 at 9:10 AM, Jeremy Eder <je...@redhat.com> wrote:

> First of all, that readme is awesome.
>
> spot checking the tools container...seems to all "just work" when I run it
> with atomic run ...
> blktrace works
> ethtool works (-K -i -c -S specifically)
> netstat works
> pstack works
> perf top,record,report works
> iotop works
> slabtop works
> lstopo works
> htop works (wish this was in rhel)
> nstat works
> ss works (-tmpie)
> ifpps works (wish this was in rhel)
> numastat works (-mczs)
> pmap works
> all the sysstat tools work
> strace works
> tcpdump works
> sar works but you have to prepend the /host directory (so, sar -f
> /host/var/log/sa/sa05)
> my god tmux is in here?? yes!
>
>
> systemtap (aww, no readme?)
>
> doesn't work:
> [root@8b7437fed211 /]# cd /usr/share/systemtap/examples/process/
>
>
> [root@8b7437fed211 process]# stap cycle_thief.stp
> ERROR: Couldn't insert module '/tmp/stapslabb9/stap_0811c9eea1bbb81f2fbc5f7bf9df4506_8509.ko': Operation not permitted
> WARNING: /usr/bin/staprun exited with status: 1
> Pass 5: run failed.  [man error::pass5]
> [root@8b7437fed211 process]#
>
>
>
> [root@dhcp23-91 ~]# atomic run --spc candidate-registry.fedoraproject.org/f26/systemtap
> docker run --cap-add SYS_MODULE -v /sys/kernel/debug:/sys/kernel/debug -v
> /usr/src/kernels:/usr/src/kernels -v /usr/lib/modules/:/usr/lib/modules/
> -v /usr/lib/debug:/usr/lib/debug -t -i --name systemtap-spc
> candidate-registry.fedoraproject.org/f26/systemtap
>
> This container uses privileged security switches:
>
> INFO: --cap-add
>   Adding capabilities to your container could allow processes from the
> container to break out onto your host system.
>
> For more information on these switches and their security implications,
> consult the manpage for 'docker run'.
>
> [root@10accce504c2 /]# cd /usr/share/systemtap/examples/process/
> [root@10accce504c2 process]# stap cycle_thief.stp
> ERROR: Couldn't insert module '/tmp/stapNEjJDX/stap_4f013e7562b546a0316af840de9f0713_8509.ko': Operation not permitted
> WARNING: /usr/bin/staprun exited with status: 1
> Pass 5: run failed.  [man error::pass5]
>
>
>
> On Thu, Oct 5, 2017 at 3:09 AM, Tomas Tomecek <ttome...@redhat.com> wrote:
>
>> Not sure if the question is for me -- I literally have no idea how to do
>> that.
>>
>>
>> Let me know how I can help,
>>
>> Tomas
>>
>>
>> On Thu, Oct 5, 2017 at 5:04 AM, Dusty Mabe <du...@dustymabe.com> wrote:
>>
>>>
>>>
>>> On 09/18/2017 10:48 AM, Tomas Tomecek wrote:
>>> > Hello,
>>> >
>>> > we managed to move the tools container from the Fedora Dockerfiles GitHub
>>> > repo to Fedora infra [1]. As a side effect, we put systemtap in a dedicated
>>> > container.
>>> >
>>> > We would very much appreciate your feedback here: so if you have some
>>> time to take a look at these containers and try them out, it would mean a
>>> lot to us.
>>> >
>>> > Repos:
>>> > https://src.fedoraproject.org/container/systemtap
>>> > https://src.fedoraproject.org/container/tools
>>> >
>>> > The way to access the images:
>>> > docker pull candidate-registry.fedoraproject.org/f26/tools
>>>
>>> just tested out the tools container. can we get this into the official
>>> registry?
>>>
>>> > docker pull candidate-registry.fedoraproject.org/f26/systemtap
>>> >
>>> > Both images have help files, so please read them prior to using the
>>> > containers:
>>> > https://src.fedoraproject.org/container/tools/blob/master/f/root/README.md
>>> > https://github.com/container-images/systemtap/blob/master/help/help.md
>>> >
>>> > (or `atomic help $the_container_image`)
>>> >
>>> > [1] https://pagure.io/atomic-wg/issue/214
>>>
>>
>>
>
>
> --
>
> -- Jeremy Eder
>



-- 

-- Jeremy Eder


Re: [atomic-devel] tools and systemtap containers are available in Fedora

2017-10-05 Thread Jeremy Eder
First of all, that readme is awesome.

spot checking the tools container...seems to all "just work" when I run it
with atomic run ...
blktrace works
ethtool works (-K -i -c -S specifically)
netstat works
pstack works
perf top,record,report works
iotop works
slabtop works
lstopo works
htop works (wish this was in rhel)
nstat works
ss works (-tmpie)
ifpps works (wish this was in rhel)
numastat works (-mczs)
pmap works
all the sysstat tools work
strace works
tcpdump works
sar works but you have to prepend the /host directory (so, sar -f
/host/var/log/sa/sa05)
my god tmux is in here?? yes!


systemtap (aww, no readme?)

doesn't work:
[root@8b7437fed211 /]# cd /usr/share/systemtap/examples/process/


[root@8b7437fed211 process]# stap cycle_thief.stp
ERROR: Couldn't insert module
'/tmp/stapslabb9/stap_0811c9eea1bbb81f2fbc5f7bf9df4506_8509.ko': Operation
not permitted
WARNING: /usr/bin/staprun exited with status: 1
Pass 5: run failed.  [man error::pass5]
[root@8b7437fed211 process]#



[root@dhcp23-91 ~]# atomic run --spc
candidate-registry.fedoraproject.org/f26/systemtap
docker run --cap-add SYS_MODULE -v /sys/kernel/debug:/sys/kernel/debug -v
/usr/src/kernels:/usr/src/kernels -v /usr/lib/modules/:/usr/lib/modules/ -v
/usr/lib/debug:/usr/lib/debug -t -i --name systemtap-spc
candidate-registry.fedoraproject.org/f26/systemtap

This container uses privileged security switches:

INFO: --cap-add
  Adding capabilities to your container could allow processes from the
container to break out onto your host system.

For more information on these switches and their security implications,
consult the manpage for 'docker run'.

[root@10accce504c2 /]# cd /usr/share/systemtap/examples/process/
[root@10accce504c2 process]# stap cycle_thief.stp
ERROR: Couldn't insert module
'/tmp/stapNEjJDX/stap_4f013e7562b546a0316af840de9f0713_8509.ko': Operation
not permitted
WARNING: /usr/bin/staprun exited with status: 1
Pass 5: run failed.  [man error::pass5]
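
(One way to sanity-check whether SYS_MODULE actually landed in the container
is to decode the effective capability set; a sketch, with an illustrative
CapEff value, assuming libcap's capsh is available:)

[root@10accce504c2 /]# grep CapEff /proc/self/status
CapEff: 00000000a80435fb
[root@10accce504c2 /]# capsh --decode=00000000a80435fb | grep -o cap_sys_module
cap_sys_module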



On Thu, Oct 5, 2017 at 3:09 AM, Tomas Tomecek <ttome...@redhat.com> wrote:

> Not sure if the question is for me -- I literally have no idea how to do
> that.
>
>
> Let me know how I can help,
>
> Tomas
>
>
> On Thu, Oct 5, 2017 at 5:04 AM, Dusty Mabe <du...@dustymabe.com> wrote:
>
>>
>>
>> On 09/18/2017 10:48 AM, Tomas Tomecek wrote:
>> > Hello,
>> >
>> > we managed to move the tools container from the Fedora Dockerfiles GitHub
>> > repo to Fedora infra [1]. As a side effect, we put systemtap in a dedicated
>> > container.
>> >
>> > We would very much appreciate your feedback here: so if you have some
>> time to take a look at these containers and try them out, it would mean a
>> lot to us.
>> >
>> > Repos:
>> > https://src.fedoraproject.org/container/systemtap
>> > https://src.fedoraproject.org/container/tools
>> >
>> > The way to access the images:
>> > docker pull candidate-registry.fedoraproject.org/f26/tools
>>
>> just tested out the tools container. can we get this into the official
>> registry?
>>
>> > docker pull candidate-registry.fedoraproject.org/f26/systemtap
>> >
>> > Both images have help files, so please read them prior to using the
>> > containers:
>> > https://src.fedoraproject.org/container/tools/blob/master/f/root/README.md
>> > https://github.com/container-images/systemtap/blob/master/help/help.md
>> >
>> > (or `atomic help $the_container_image`)
>> >
>> > [1] https://pagure.io/atomic-wg/issue/214
>>
>
>


-- 

-- Jeremy Eder


Re: [atomic-devel] Fedora 26 change: using overlayfs as default

2017-01-06 Thread Jeremy Eder
It would be good to confirm ... if you still have the devicemapper setup,
run docker info.
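
For example (a sketch; the loop-file lines only appear on loopback-backed
devicemapper):

# docker info | grep -E 'Storage Driver|loop file'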

I was pointed to this https://github.com/openshift/origin/issues/11016 but
I don't think it's the same issue.  That one is about docker daemon <-->
kubelet interaction being slower.

On Fri, Jan 6, 2017 at 1:07 PM, Josh Berkus <jber...@redhat.com> wrote:

> On 01/05/2017 05:15 PM, Jeremy Eder wrote:
> > On Thu, Jan 5, 2017 at 7:22 PM, Josh Berkus <jber...@redhat.com
> > <mailto:jber...@redhat.com>>wrote:
> >
> > Also, performance is MUCH better on PostgreSQL pgbench than
> devicemapper
> > is.  Like 3X better.
> >
> >
> > Details please?  Were you using the loopback dm driver?
>
> I was using whatever is the default with Atomic docker-storage-setup.
>
> It wasn't actually a rigorous test; I was mostly interested in
> verifying that Overlay wasn't *slower*.  If we have a reason to want a
> rigorous test, I'll playbook something and get numbers.
>
> --
> --
> Josh Berkus
> Project Atomic
> Red Hat OSAS
>



-- 

-- Jeremy Eder


Re: [atomic-devel] Fedora 26 change: using overlayfs as default

2017-01-05 Thread Jeremy Eder
On Thu, Jan 5, 2017 at 7:22 PM, Josh Berkus  wrote:

> Also, performance is MUCH better on PostgreSQL pgbench than devicemapper
> is.  Like 3X better.


Details please?  Were you using the loopback dm driver?


Re: [atomic-devel] Docker project: Can you have overlay2 speed and density with devicemapper? Yep.

2016-10-26 Thread Jeremy Eder
If a user specifies read-only in their podspec, what does that translate
to? (It might be a distro-specific question.)  IMO the --shared-rootfs
behavior should be the default when --read-only is specified, but it's not
at the moment.

Vivek has implemented it for devicemapper first.  But the intent is that it
will be added to most or all graph drivers, including overlay/overlay2.  It
has the most benefit on devicemapper or btrfs which have unique inodes per
container.




On Wed, Oct 26, 2016 at 2:20 PM, Vishnu Kannan <vish...@google.com> wrote:

> *What* do you intend to surface to users? IIUC, this discussion is
> specific to device mapper storage drivers right?
>
> On Tue, Oct 25, 2016 at 5:03 AM, Jeremy Eder <je...@redhat.com> wrote:
>
>> Hi,
>>
>> Vivek Goyal (cc) and I were discussing ways to deliver page cache
>> sharing, POSIX compliance and SELinux support with a single docker graph
>> driver, using existing kernel facilities.  We decided to go with a
>> bind-mount technique, and Vivek has posted a first cut here:
>> https://github.com/docker/docker/pull/27364
>>
>> Testing of the prototype looks like a great improvement:
>> http://developerblog.redhat.com/2016/10/25/docker-project-can-you-have-overlay2-speed-and-density-with-devicemapper-yep/
>>
>> Assuming this type of feature is merged in a container run-time, what
>> preference would Kube folks have for surfacing this to users ... currently
>> it's a daemon runtime flag that says ... if you use --read-only then you
>> get the shared-rootfs as well.  Obviously this requires "12factor-ish"
>> design up front, because you can no longer scribble in the container
>> filesystem in places that are not persistent volumes, but we think
>> read-only container hygiene is well worth the security and performance
>> improvements to be had.
>>
>> https://twitter.com/rhdevelopers/status/790870667008757760
>>
>>
>
>


-- 

-- Jeremy Eder


Re: [atomic-devel] We have a bugzilla requesting that we change the default CMD to systemd for base images in RHEL

2016-10-25 Thread Jeremy Eder
When you "docker pull golang", the image is over 600MB (and it's built on
alpine).
Same with docker pull java...also > 600MB.

docker pull alpine is not apples:apples.  If you're pulling alpine it's
because you're about to shove in a ton of other stuff.
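
A quick way to eyeball the loaded-up sizes (a sketch; sizes vary by tag and
over time):

# for i in alpine golang java; do docker pull $i; done
# docker images | grep -E 'alpine|golang|java'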

On Tue, Oct 25, 2016 at 2:58 PM, Joe Brockmeier <j...@redhat.com> wrote:

> On Tue, Oct 25, 2016 at 2:12 PM, Matthew Miller
> <mat...@fedoraproject.org> wrote:
> >
> > We've definitely got a ways to go here — our base image is very large
> > by any standard, let alone compared to alpine or busybox.
>
> It is, and I support un-embiggening it, but ... IIRC Scott McCarty did
> some apples:apples comparisons w/Alpine and once you load up an
> application the size differential is seriously reduced.
>
> (e.g.,  try adding PHP to Alpine and then see how it compares, size-wise.)
>
> Best,
>
> jzb
>
>
> --
> Joe Brockmeier
> Senior Evangelist, Linux Containers
> j...@redhat.com
> Twitter: @jzb
>
>


-- 

-- Jeremy Eder


[atomic-devel] Docker project: Can you have overlay2 speed and density with devicemapper? Yep.

2016-10-25 Thread Jeremy Eder
Hi,

Vivek Goyal (cc) and I were discussing ways to deliver page cache sharing,
POSIX compliance and SELinux support with a single docker graph driver,
using existing kernel facilities.  We decided to go with a bind-mount
technique, and Vivek has posted a first cut here:
https://github.com/docker/docker/pull/27364

Testing of the prototype looks like a great improvement:
http://developerblog.redhat.com/2016/10/25/docker-project-can-you-have-overlay2-speed-and-density-with-devicemapper-yep/

Assuming this type of feature is merged in a container run-time, what
preference would Kube folks have for surfacing this to users ... currently
it's a daemon runtime flag that says ... if you use --read-only then you
get the shared-rootfs as well.  Obviously this requires "12factor-ish"
design up front, because you can no longer scribble in the container
filesystem in places that are not persistent volumes, but we think
read-only container hygiene is well worth the security and performance
improvements to be had.
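
At the docker level the pattern looks roughly like this (a sketch; the
volume and image names are hypothetical, and --tmpfs needs docker >= 1.10):

# docker run --read-only --tmpfs /run --tmpfs /tmp \
    -v appdata:/var/lib/app example/myapp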

https://twitter.com/rhdevelopers/status/790870667008757760


Re: [atomic-devel] We have a bugzilla requesting that we change the default CMD to systemd for base images in RHEL

2016-10-21 Thread Jeremy Eder
Sorry, hit send too soon.  The other thing I was going to mention: if we have
two bases, then the minimal one should take advantage of Colin's research
into "yum micro" (I forget the exact name).
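
For reference, a minimal sketch of the systemd-as-CMD pattern discussed
downthread, including the stop-signal caveat Dan raises (assumes Docker >=
1.9 for STOPSIGNAL; the image and app names are hypothetical):

# cat Dockerfile
FROM rhel7
RUN yum -y install httpd && systemctl enable httpd
STOPSIGNAL SIGRTMIN+3
CMD ["/usr/sbin/init"]
# docker build -t myapp .
# docker run -d --name myapp myapp
# docker stop myapp    # systemd receives SIGRTMIN+3 and shuts down cleanly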

On Fri, Oct 21, 2016 at 12:52 PM, Jeremy Eder <je...@redhat.com> wrote:

> rhel7:pet
>
> On Fri, Oct 21, 2016 at 12:48 PM, Daniel J Walsh <dwa...@redhat.com>
> wrote:
>
>> Well we got to figure out how/if upstart can run in a non-privileged
>> container. but yes.
>>
>> rhel7-init or rhel7-system
>>
>> rhel6-init or rhel6-system
>>
>> Perhaps
>>
>> On 10/21/2016 12:42 PM, Daniel Riek wrote:
>>
>> We will need the same for rhel6 (with upstart). We should think about a
>> consistent naming model.
>>
>> D.
>>
>> On Oct 21, 2016 12:38 PM, "Mrunal Patel" <mpa...@redhat.com> wrote:
>>
>>>
>>>
>>> On Fri, Oct 21, 2016 at 9:29 AM, Daniel J Walsh <dwa...@redhat.com>
>>> wrote:
>>>
>>>> That might make the most sense.
>>>>
>>>> RHEL7 == Base Image
>>>>
>>>> RHEL7Systemd == BaseImage + Config to run systemd as pid1.
>>>>
>>> +1, maybe call it RHEL7-system image?
>>>
>>>>
>>>>
>>>>
>>>> On 10/21/2016 12:26 PM, Daniel Riek wrote:
>>>>
>>>> Question: should we separate a true minimal base image that as default
>>>> run's a shell and the default iamge that runs systemd and behaves more like
>>>> a linux system?
>>>>
>>>> On Fri, Oct 21, 2016 at 12:02 PM, Clayton Coleman <ccole...@redhat.com>
>>>> wrote:
>>>>
>>>>> This seems like a breaking API change (as you note) for downstream
>>>>> consumers.  Seems more correct to create a new image for that.
>>>>>
>>>>> > On Oct 21, 2016, at 11:50 AM, Daniel J Walsh <dwa...@redhat.com>
>>>>> wrote:
>>>>> >
>>>>> > If we make this change, we would want to do it in Fedora and Centos
>>>>> also.
>>>>> >
>>>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1387282
>>>>> >
>>>>> > The benefits of making this change are that people new to containers
>>>>> > could follow a simple workflow similar to what the do on the OS,
>>>>> where
>>>>> > all they need to do is install an rpm service and enable and it is
>>>>> ready
>>>>> > to go.
>>>>> >
>>>>> >
>>>>> > # cat Dockerfile
>>>>> >
>>>>> > FROM rhel7
>>>>> >
>>>>> > RUN dnf -y install httpd; systemctl enable httpd
>>>>> >
>>>>> > ADD MYAPP /
>>>>> >
>>>>> >
>>>>> > # docker build -t MYAPP .
>>>>> >
>>>>> >
>>>>> > And they are done.  Now if they run their container
>>>>> >
>>>>> > docker run -d MYAPP
>>>>> >
>>>>> > And their app runs with systemd/journald and httpd, with their app
>>>>> > running inside of it.
>>>>> >
>>>>> >
>>>>> > For users who don't want to use systemd, they would just override the
>>>>> > CMD field and their container would work fine.
>>>>> >
>>>>> > Since the current default is bash, they would need to do this
>>>>> anyways.
>>>>> >
>>>>> >
>>>>> > A couple of things will break,
>>>>> >
>>>>> > docker run -ti rhel7
>>>>> >
>>>>> > Currently runs a shell.  With the new change, systemd would start up
>>>>> > inside of the container.
>>>>> >
>>>>> >
>>>>> > Users who want a shell would need to execute
>>>>> >
>>>>> > docker run -ti rhel7 /bin/sh
>>>>> >
>>>>> > (I always do this anyways, but I guess some people do not)
>>>>> >
>>>>> >
>>>>> > The other big issue is on stopping of containers. docker stop
>>>>> currently
>>>>> > defaults to sending SIGTERM to the pid 1 of the container.
>>>>> >
>>>>> > systemd requires that SIGRTMIN+3 be sent to it to close down properly.
>>>>> > If we want to have systemd work by default, we would
>>>>> > need to change the default stop signal (STOPSIGNAL) of the base image.
>>>>> > This means any application based on the base image that does not
>>>>> > override the STOPSIGNAL would get SIGRTMIN+3.  Most apps will die when
>>>>> > they get this signal, but if the app had a signal handler for
>>>>> > SIGTERM, the signal handler will not work correctly.
>>>>> >
>>>>> >
>>>>> > Adding
>>>>> >
>>>>> > STOPSIGNAL SIGTERM
>>>>> >
>>>>> > to a Dockerfile would fix the issue, but it will cause unexpected
>>>>> > breakage.  I don't see an easy solution for this.
>>>>> >
>>>>> >
>>>>> >
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Daniel Riek <r...@redhat.com>
>>>> * Sr. Director Systems Design & Engineering
>>>> * Red Hat Inc, Tel. +1-617-863-6776
>>>>
>>>>
>>>>
>>>
>>
>
>
> --
>
> -- Jeremy Eder
>



-- 

-- Jeremy Eder


Re: [atomic-devel] How to apply non-atomic tuned profiles to atomic host

2016-10-14 Thread Jeremy Eder
On Wed, Oct 12, 2016 at 10:29 AM, Colin Walters <walt...@verbum.org> wrote:

>
> On Tue, Oct 11, 2016, at 02:45 PM, Jeremy Eder wrote:
>
> Because layered products (not just OpenShift) do not want to be coupled to
> the RHEL release schedule to update their profiles.  They want to own their
> profiles and rely on the tuned daemon to be there.
>
>
> I see two aspects to this discussion:
>
> 1) Generic tradeoffs with host configuration
> 2) The specific discussion about tuned profiles
>
> Following 2) if I run:
>
> $ cd ~/src/github/openshift/origin
> $ git describe --tags --always
> v1.3.0-rc1-14-ge9081ae
> $ git log --follow contrib/tuned/origin-node-host/tuned.conf
>
> There are a grand total of *two* commits that aren't mere
> code reorganization:
>
> commit d959d25a405bb28568a17f8bf1b79e7d427ae0dc
> Author: Jeremy Eder <je...@redhat.com>
> AuthorDate: Tue Mar 29 10:40:03 2016 -0400
> Commit: Jeremy Eder <je...@redhat.com>
> CommitDate: Tue Mar 29 10:40:03 2016 -0400
>
> bump inotify watches
>
> commit c11cb47c07e24bfeec22a7cf94b0d6d693a00883
> Author: Scott Dodson <sdod...@redhat.com>
> AuthorDate: Thu Feb 12 13:06:57 2015 -0500
> Commit: Scott Dodson <sdod...@redhat.com>
> CommitDate: Wed Mar 11 16:41:08 2015 -0400
>
> Provide both a host and guest profile
>
> That level of change seems quite sufficient for the slower
> RHEL cadence, no?
>

Decoupling profiles from RHEL has already been negotiated with many
different engineering teams.  As you can imagine, it has ties into our
channels and distribution mechanics.  Making an exception here doesn't make
sense to me when it's working fine everywhere else.

Particularly when one considers that something like the
> inotify watch bump could easily be part of a "tuned updates"
> in the installer that would live there until the base tuned
> profile updates.
>
> Right?
>

Personally I would prefer to keep tuning centralized in tuned and not
have five different places where it's being done... but to your point about
having two commits: I'm losing that consolidation battle, because
Kubernetes has hardcoded certain sysctl adjustments that ideally we really
should have carried in tuned :-/  But if we can at least avoid doing things
in openshift-ansible, that's one less place to track.



> Before we go the layered RPM route I just want to make sure you're onboard
> with it, as I was not aware of any existing in-product users of that
> feature.  Are there any? If we're the first that's not an issue, just want
> to make sure we get it right.
>
>
> In this particular case of tuned, I'd argue that Atomic Host should come
> out of the box with these profiles,
> and that any async updates could be done via the openshift-ansible
> installer.
>

Realistically speaking -- we may want to use AH with another product... we've
developed realtime and NFV profiles which again exist in another channel, and
there is no such thing as openshift-ansible there.
What would be your approach if the openshift-ansible option did not exist?
(Back to scattered tuning?)


Re: [atomic-devel] How to apply non-atomic tuned profiles to atomic host

2016-10-11 Thread Jeremy Eder
On Tue, Oct 11, 2016 at 2:14 PM, Colin Walters <walt...@verbum.org> wrote:

> On Tue, Oct 11, 2016, at 01:36 PM, Jeremy Eder wrote:
>
> Going forward, I think we would rather not maintain two locations (atomic-*
> and atomic-openshift-*) of tuned profiles with identical content.
>
>
> Yes, agreed.
>
>
> So, trying to reason a way to get those profiles onto an AH since we can't
> install the tuned-atomic-openshift RPM
>
>
> That's not true.  We've been shipping package layering for quite a while.
>

OK, I hadn't seen it.  Just read the blog that Jason sent.  Looks good.

> ...We could copy them to /etc/tuned and enable them manually...but I'm not
> sure that jives with how we're supposed to use AH and it seems kind of
> hacky since there would be "orphan files" in /etc.  Thoughts?
>
>
> I wouldn't say they're orphaned if something "owns" it.  Ownership doesn't
> have to just be RPM, it can also be Ansible.
>
> Although a common trap with management systems like Ansible and Puppet is
> (by default) they're subject
> to https://en.wikipedia.org/wiki/Hysteresis - if one version of the
> installer creates a tuned snippet, then
> we later don't want it to apply, the Ansible rules have to carry code to
> explicitly ensure it's deleted.  Whereas
> with RPM (and ostree) the system does synchronize to the new state,
> automatically deleting files
> no longer shipped.
>
> Anyways, I'm a bit confused here - why isn't the fix to:
>
> 1) Put the profile in the tuned RPM
> 2) Atomic Host installs it by default
> 3) Installers like openshift-ansible ensure it's installed (noop on AH)
>

Because layered products (not just OpenShift) do not want to be coupled to
the RHEL release schedule to update their profiles.  They want to own their
profiles and rely on the tuned daemon to be there.

Before we go the layered RPM route I just want to make sure you're onboard
with it, as I was not aware of any existing in-product users of that
feature.  Are there any? If we're the first that's not an issue, just want
to make sure we get it right.

Now, what would the implementation look like ... basically
openshift-ansible would do what the blog does?
http://www.projectatomic.io/blog/2016/08/new-centos-atomic-host-with-package-layering-support/

Also, using layered RPMs seems to currently have a reboot requirement.
Is that correct?  At least until we have
https://bugzilla.gnome.org/show_bug.cgi?id=767977 ?

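For reference, a sketch of what the layering flow would look like (the
package name here is hypothetical):

# rpm-ostree install tuned-profiles-atomic-openshift   # hypothetical name
# systemctl reboot                                     # layering currently requires a reboot
# rpm-ostree status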

[atomic-devel] How to apply non-atomic tuned profiles to atomic host

2016-10-11 Thread Jeremy Eder
Hi,

Right now we've got the tuned package in the base atomic content.  There
are atomic-host and atomic-guest tuned profiles which are currently
identical to the atomic-openshift ones.  We'd like to make a change to the
atomic-openshift-node/master profiles (which are distributed with the
openshift product).

Going forward, I think we would rather not maintain two locations (atomic-*
and atomic-openshift-*) of tuned profiles with identical content.

So, I'm trying to work out a way to get those profiles onto an AH, since we
can't install the tuned-atomic-openshift RPM... We could copy them to
/etc/tuned and enable them manually, but I'm not sure that jibes with how
we're supposed to use AH, and it seems kind of hacky since there would be
"orphan files" in /etc.  Thoughts?
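
For concreteness, the manual workaround would look roughly like this (a
sketch, not a recommendation; the profile name mirrors the openshift-shipped
ones):

# mkdir -p /etc/tuned/atomic-openshift-node
# cp tuned.conf /etc/tuned/atomic-openshift-node/
# tuned-adm profile atomic-openshift-node
# tuned-adm active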


Re: [atomic-devel] How to handle crashes

2016-09-14 Thread Jeremy Eder
Anyone know?  There's a node-problem-detector proposed in Kubernetes but
... abrt is far more comprehensive.
https://github.com/kubernetes/node-problem-detector

The difference is that node-problem-detector has hooks to call back to the
kubernetes control plane to inform it that a node has problems.
We could create an abrt container that does the same for RH-based ecosystem.
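
For what it's worth, wiring that up per Jakub's instructions below would
look roughly like this on an Atomic host (a sketch; assumes package layering
is available):

# rpm-ostree install abrt-atomic
# systemctl enable --now abrt-coredump-helper
# ls /var/tmp/abrt/    # core dumps land in sub-directories here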

On Fri, Sep 9, 2016 at 11:21 AM, Jeremy Eder <je...@redhat.com> wrote:

> Hmm, appears this was not integrated into Fedora Atomic?  Is there a plan
> to do so?
>
> On Fri, Mar 20, 2015 at 5:50 AM, Jakub Filak <jfi...@redhat.com> wrote:
>
>> Hello,
>>
>>
>>
>> I've been working on integration of ABRT with Project Atomic and, today,
>> my work landed in Fedora 22 [1].
>>
>>
>>
>> To enable the abrt core_dump helper on Atomic hosts, it is necessary to
>> install the abrt-atomic package and enable the abrt-coredump-helper
>> service.  After doing so, core dump files will be stored in
>> sub-directories of /var/tmp/abrt/.
>>
>>
>>
>> You can find more technical details here:
>>
>> https://github.com/abrt/abrt/wiki/Containers-and-chroots#abrt---project-atomic
>>
>>
>>
>>
>>
>> Should I write a new proposal for the oversight repository or should I
>> just open a new pull request for fedora-atomic repository?
>>
>>
>>
>>
>>
>>
>>
>> Regards,
>>
>> Jakub
>>
>>
>>
>>
>>
>> 1: https://admin.fedoraproject.org/updates/gnome-abrt-1.1.0-1.fc22,abrt-2.5.0-2.fc22,libreport-2.5.0-1.fc22
>>
>
>
>
> --
>
> -- Jeremy Eder
>



-- 

-- Jeremy Eder


Re: [atomic-devel] Introducing Commissaire

2016-05-19 Thread Jeremy Eder
On May 19, 2016 17:03, "Jason DeTiberus" <jdeti...@redhat.com> wrote:
>
>
>
> On Thu, May 19, 2016 at 4:31 PM, Jeremy Eder <je...@redhat.com> wrote:
>>
>> Would commissaire be intended to address the case where I want to
>> adjust config options across a cluster? (openshift node or master configs)
>
>
> From my biased perspective, I would say no.  I would expect that to be
> handled by Ansible or some other configuration management tool.
>
> That said, there are proposals and work being done upstream to move a
> good bit of that configuration into the cluster itself.
>

Yeah, the configmaps. That's specifically what I'm trying to dedupe w/
commissaire. Thanks Jason.

>
>>
>>
>> On Thu, May 19, 2016 at 3:57 PM, Derek Carr <dec...@redhat.com> wrote:
>>>
>>> Yep, definitely agreed - and it's not even implemented yet so I can't
>>> recommend people use it as a part of the overall procedure, but something
>>> to keep in mind in this problem space: this is one potential way of
>>> updating the cluster node agent and the daemons it manages in the future.
>>>
>>> On Thu, May 19, 2016 at 3:25 PM, Jason DeTiberus <jdeti...@redhat.com>
wrote:
>>>>
>>>>
>>>> On May 19, 2016 2:59 PM, "Derek Carr" <dec...@redhat.com> wrote:
>>>> >
>>>> > Related: https://github.com/kubernetes/kubernetes/pull/23343
>>>> >
>>>> > This is the model proposed by CoreOS for supporting
cluster-upgrades.  Basically, a run-once kubelet is launched by the init
system, and pulls down the real kubelet to run as a container, then all
other requisite host services are provisioned as a DaemonSet derived set of
pods on the node.  This does not cover things like kernel updates, but
definitely does enable a lot of scenarios for updates of
kubelet/openshift-node if we adopted the pattern.
>>>>
>>>> Definitely solves a large chunk of the problem. We still need to worry
about host upgrades, data center maintenance, etc.
>>>>
>>>> I'm all for the cluster owning all cluster upgrade related tasks,
though.
>>>>
>>>> >
>>>> > Thanks,
>>>> > Derek
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On Thu, May 19, 2016 at 12:44 PM, Jason DeTiberus <
jdeti...@redhat.com> wrote:
>>>> >>
>>>> >>
>>>> >> On Thu, May 19, 2016 at 12:18 PM, Chmouel Boudjnah <
chmo...@redhat.com> wrote:
>>>> >>>
>>>> >>> Hello, thanks for releasing this blog post.  From a first impression
>>>> >>> there is a bit of an overlap if you are already using CloudForms to do
>>>> >>> that, isn't it?
>>>> >>
>>>> >>
>>>> >> With current implementations, yes. That said, Cloud Forms could
eventually switch to using Commissaire for managing clusters of hosts.
>>>> >>
>>>> >> As commissaire matures, I see great promise for it to handle a lot
of the complexity involved in managing complex cluster upgrades (think
OpenShift), where even something like applying kernel updates and
orchestrating a reboot of hosts requires much more consideration than apply
and restart or just performing the operations serially. Long term we need
something that can be more integrated with Kubernetes/OpenShift that will
allow for making ordering/restarting decisions on things like pod
placement, scheduler configuration, and disruption budgets (when they are
implemented). Having a centralized place to manage that complexity is much
better than having multiple external tools do the same.
>>>> >>
>>>> >>
>>>> >>>
>>>> >>>
>>>> >>> Chmouel
>>>> >>>
>>>> >>> On Thu, May 19, 2016 at 3:55 PM, Stephen Milner <smil...@redhat.com>
wrote:
>>>> >>> > Hello all,
>>>> >>> >
>>>> >>> > Have you heard about some kind of cluster host manager project
and
>>>> >>> > want to learn more? Curious about what this Commissaire thing is
that
>>>> >>> > has shown up in the Project Atomic GitHub repos?
>>>> >>> > The short answer is it is a lightweight REST interface for
cluster
>>>> >>> > host management. For more information check out the introductory
blog
>>>> >>> > post ...
>>>> >>> >
>>>> >>> >
http://www.projectatomic.io/blog/2016/05/introducing_commissaire/
>>>> >>> >
>>>> >>> > ... and stay tuned for more in-depth posts for development and
>>>> >>> > operations in the near future!
>>>> >>> >
>>>> >>> > --
>>>> >>> > Thanks,
>>>> >>> > Steve Milner
>>>> >>> >
>>>> >>>
>>>> >>
>>>> >>
>>>> >>
>>>> >> --
>>>> >> Jason DeTiberus
>>>> >
>>>> >
>>>
>>>
>>
>>
>>
>> --
>>
>> -- Jeremy Eder
>
>
>
>
> --
> Jason DeTiberus


Re: [atomic-devel] Introducing Commissaire

2016-05-19 Thread Jeremy Eder
Would commissaire be intended to address the case where I want to adjust
config options across a cluster? (openshift node or master configs)

On Thu, May 19, 2016 at 3:57 PM, Derek Carr <dec...@redhat.com> wrote:

> Yep, definitely agreed - and it's not even implemented yet so I can't
> recommend people use it as a part of the overall procedure, but something
> to keep in mind in this problem space that this is one potential way of
> updating the cluster node agent and the daemons it manages in the future.
>
> On Thu, May 19, 2016 at 3:25 PM, Jason DeTiberus <jdeti...@redhat.com>
> wrote:
>
>>
>> On May 19, 2016 2:59 PM, "Derek Carr" <dec...@redhat.com> wrote:
>> >
>> > Related: https://github.com/kubernetes/kubernetes/pull/23343
>> >
>> > This is the model proposed by CoreOS for supporting cluster-upgrades.
>> Basically, a run-once kubelet is launched by the init system, and pulls
>> down the real kubelet to run as a container, then all other requisite host
>> services are provisioned as a DaemonSet derived set of pods on the node.
>> This does not cover things like kernel updates, but definitely does enable
>> a lot of scenarios for updates of kubelet/openshift-node if we adopted the
>> pattern.
>>
>> Definitely solves a large chunk of the problem. We still need to worry
>> about host upgrades, data center maintenance, etc.
>>
>> I'm all for the cluster owning all cluster upgrade related tasks, though.
>>
>> >
>> > Thanks,
>> > Derek
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Thu, May 19, 2016 at 12:44 PM, Jason DeTiberus <jdeti...@redhat.com>
>> wrote:
>> >>
>> >>
>> >> On Thu, May 19, 2016 at 12:18 PM, Chmouel Boudjnah <chmo...@redhat.com>
>> wrote:
>> >>>
>> >>> Hello, thanks for releasing this blog post.  From a first impression
>> >>> there is a bit of an overlap if you are already using CloudForms to do
>> >>> that, isn't it?
>> >>
>> >>
>> >> With current implementations, yes. That said, Cloud Forms could
>> eventually switch to using Commissaire for managing clusters of hosts.
>> >>
>> >> As commissaire matures, I see great promise for it to handle a lot of
>> the complexity involved in managing complex cluster upgrades (think
>> OpenShift), where even something like applying kernel updates and
>> orchestrating a reboot of hosts requires much more consideration than apply
>> and restart or just performing the operations serially. Long term we need
>> something that can be more integrated with Kubernetes/OpenShift that will
>> allow for making ordering/restarting decisions on things like pod
>> placement, scheduler configuration, and disruption budgets (when they are
>> implemented). Having a centralized place to manage that complexity is much
>> better than having multiple external tools do the same.
>> >>
>> >>
>> >>>
>> >>>
>> >>> Chmouel
>> >>>
>> >>> On Thu, May 19, 2016 at 3:55 PM, Stephen Milner <smil...@redhat.com>
>> wrote:
>> >>> > Hello all,
>> >>> >
>> >>> > Have you heard about some kind of cluster host manager project and
>> >>> > want to learn more? Curious about what this Commissaire thing is
>> that
>> >>> > has shown up in the Project Atomic GitHub repos?
>> >>> > The short answer is it is a lightweight REST interface for cluster
>> >>> > host management. For more information check out the introductory
>> blog
>> >>> > post ...
>> >>> >
>> >>> >
>> http://www.projectatomic.io/blog/2016/05/introducing_commissaire/
>> >>> >
>> >>> > ... and stay tuned for more in-depth posts for development and
>> >>> > operations in the near future!
>> >>> >
>> >>> > --
>> >>> > Thanks,
>> >>> > Steve Milner
>> >>> >
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Jason DeTiberus
>> >
>> >
>>
>
>


-- 

-- Jeremy Eder


Re: [atomic-devel] Submitted ansible module for managing Atomic Host

2016-04-15 Thread Jeremy Eder
Awesome!
On Apr 15, 2016 00:32, "SGhosh"  wrote:

> On 04/14/2016 05:00 PM, Dusty Mabe wrote:
>
>> https://github.com/ansible/ansible-modules-extras/pull/1902
>>
>>
>> See link. I thought there might be some interested parties on this list :)
>>
>>
> You think? :)
>
> Thanks!
> subhendu
>
>


Re: [atomic-devel] Running docker-storage-setup from a UI

2016-04-13 Thread Jeremy Eder
On Wed, Apr 13, 2016 at 9:06 AM, Daniel J Walsh  wrote:

> COW on loopback device, how do I fix?
>
> Want to try out Overlayfs?  How?
>

Yes! These!


Re: [atomic-devel] Running docker-storage-setup from a UI

2016-04-12 Thread Jeremy Eder
On Tue, Apr 12, 2016 at 10:26 AM, Marius Vollmer <marius.voll...@redhat.com>
wrote:

> Elvir Kuric <eku...@redhat.com> writes:
>
> > On 04/12/2016 02:28 PM, Jeremy Eder wrote:
> >
> > I think --force-wipe and --init-storage ... or any destructive operation
> > on disks was not good option in past ( due by mistake selecting in
> > /etc/sysconfing/docker-storage wrong device )  .. not sure something
> > changed in meantime
>
> In our case, the UI would provide the warnings and confirmation
> dialogs.
>
> Ideally, we would run d-s-s with --force-wipe and --non-interactive.
> That would cause it to assume "yes" when asking about wiping a device,
> and assume "no" in all other cases.
>
> (d-s-s also runs during boot, right?  That's also non-interactive, no?
> How is that handled?)
>
>
The alternative is the process in that ansible playbook :/

I agree it should default to the safest possible operation.  But that
doesn't mean we can't also have a nuclear option:
--yes-i-really-want-you-to-delete-all-my-data


Re: [atomic-devel] Running docker-storage-setup from a UI

2016-04-12 Thread Jeremy Eder
Adding Vivek, not sure he's on this list.

docker-storage-setup includes an example conf file:
/usr/lib/docker-storage-setup/docker-storage-setup

Perhaps there are some other options there that should be exposed to the UI?

If it's helpful, here is an ansible playbook I use to reprovision docker
storage for testing.  I think most of this could (should?) probably be added
to docker-storage-setup (maybe, like you said, a --force-wipe or
--init-storage option).

From the UI standpoint it would be great to be able to choose the storage
driver as well (devicemapper or overlay).

---
- name: install docker
  yum: name={{ item }} state=latest
  with_items:
    - docker
    - atomic

- name: ensure docker is stopped
  service: name=docker state=stopped

- name: remove docker-pool if it exists
  lvol: vg=docker_vg lv=docker-pool state=absent force=yes

- name: remove docker_vg if it exists
  lvg: vg=docker_vg state=absent

- name: remove /dev/sdb1 pv
  command:  pvremove /dev/sdb1
  ignore_errors: True

- name: delete /dev/sdb1
  command: parted /dev/sdb rm 1
  ignore_errors: True

- name: dd over first 10MB
  command:  dd if=/dev/zero of=/dev/sdb bs=1M count=10 oflag=direct

- name: rm /etc/sysconfig/docker-storage
  file: path=/etc/sysconfig/docker-storage state=absent

- name: setup /etc/sysconfig/docker-storage-setup
  copy: src=docker-storage-setup.conf dest=/etc/sysconfig/docker-storage-setup

# https://github.com/projectatomic/docker-storage-setup/issues/114
- name: patch /usr/bin/docker-storage-setup for partitions > 2TB
  patch: >
    src=docker-storage-setup.patch
    dest=/usr/bin/docker-storage-setup

- name: start docker-storage-setup service
  service: name=docker-storage-setup state=started

- name: start docker service
  service: name=docker state=started





On Tue, Apr 12, 2016 at 8:09 AM, Marius Vollmer <marius.voll...@redhat.com>
wrote:

> Hi,
>
> I am working on this:
>
> https://github.com/cockpit-project/cockpit/wiki/Atomic:-Docker-Storage
>
> which is basically a UI inside Cockpit for docker-storage-setup.
>
> I am not super far along, but I am getting to the point where the UI
> will need to actually run docker-storage-setup.
>
> - My basic idea is to write a new /etc/sysconfig/docker-storage-setup
>   file and then run docker-storage-setup.  Is that the best approach for
>   a UI?
>
> - Docker-storage-setup needs to run non-interactively, but I think it
>   can't do that right now, and asks confirmation for various things.
>   Would it be acceptable to add a "--force-wipe" option to d-s-s, and
>   maybe others?  I can do that at the same time as I write the code for
>   the UI.
>
> - Just showing the contents of /etc/sysconfig/docker-storage-setup in
>   the UI as the current state of things is not really correct, since
>   docker-storage-setup might not have run since it was last changed, or
>   it might have failed.
>
>   So I am thinking there could be something like
>
> # docker-storage-setup status
> /dev/vda: ok, shared with OS
> /dev/sda: ok
> /dev/sdb: Not yet set up!
>
> The Docker storage pool is not fully set up.  Run
> "docker-storage-setup" to complete the set up.
>
>   This would output information about how things should be, and how they
>   actually are.  (With an option for machine readable output.)  The
>   above would be for VG="" and DEVS="/dev/sda /dev/sdb" where there was
>   some sort of problem with /dev/sdb.
>
>   The machine readable output could maybe look like
>
> # docker-storage-setup status -M
> /dev/vda:root
> /dev/sda:
> /dev/sdb:error=missing
>
>   Or should we go full JSON right away?
>
>
> What do you think?  Am I heading down the wrong path?  If nobody stops
> me, I'll hopefully make some PRs soon for this, and we can discuss the
> details there.
>
>


-- 

-- Jeremy Eder


Re: [atomic-devel] Determining if a host is an atomic host via /etc/os-release

2015-11-09 Thread Jeremy Eder
The tuned software keys off of this file:

-bash-4.2# cat /etc/system-release-cpe
cpe:/o:redhat:enterprise_linux:7.2:ga:atomic-host
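
So a script could key off the same string (a sketch):

# grep -q ':atomic-host' /etc/system-release-cpe && echo "atomic host"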


On Mon, Nov 9, 2015 at 2:13 PM, Charlie Drage <cdr...@redhat.com> wrote:

> I'm working on docker-machine integration for atomic host's. The
> problem I'm facing is the fact that "atomic" isn't defined under
> /etc/os-release, it's simply "fedora" or "centos" as the id.
>
> At the moment the only viable way to determine if an OS is an atomic
> host or not would be 'ls /ostree' or 'rpm-ostree status' (correct me
> if I'm wrong). Unfortunately it would be bad coding practice to
> require SSH'ing into a machine via the Docker machine generic driver
> in order to determine if it is atomic or not before proceeding.
> Ideally, having this information within /etc/os-release would be the
> most ideal.
>
> Is there a possibility to add a description to /etc/os-release? Or
> perhaps using fedora22-atomic, fedora23-atomic, etc as a release id?
> Creating a
> standardized way for scripts (Ansible, docker machine, etc.) to detect
> if this is an atomic host or not?
>
> I did try 'uname -r' as well to try and determine if there was a
> specifically named kernel version for atomic hosts as an alternative
> to determining if it's an atomic host, but to no avail.
>
> Best regards,
>
> --
>
> Charlie Drage
> Red Hat - OSAS Team / Project Atomic
> 4096R / 0x9B3B446C
> http://pgp.mit.edu/pks/lookup?op=get=0x622CDF119B3B446C
>
>


-- 

-- Jeremy Eder


Re: [atomic-devel] resize container

2015-08-07 Thread Jeremy Eder
That's what I meant -- you have to completely tear down all
/var/lib/docker in order for the size change to take effect.  You
would have to then pull down all your images, basically starting from
scratch.
Quite a price to pay, which is why we're bumping the default to 100G
(btw, 10G was an arbitrary, nice round number chosen when the
devicemapper driver was first implemented).

But the real thing to keep in mind here is that you should be putting
your persistent data onto a volume, when you go to production (as Matt
also mentioned).  Hope that makes sense.
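
Concretely, that teardown cycle looks like this (a sketch; 100G mirrors the
upcoming default, and the rm step destroys all local images and containers):

# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=100G"
# systemctl stop docker
# rm -rf /var/lib/docker    # destroys all images and containers
# systemctl start docker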

On Thu, Aug 6, 2015 at 12:44 PM, Carl Mosca carljmo...@gmail.com wrote:
 I have read that bumping the size has no effect on the 10GB images.

 When I tried it, I saw no change in new containers

 On Aug 6, 2015 11:34 AM, Jeremy Eder je...@redhat.com wrote:

 Unfortunately, not at this time.  We have some PRs out there to make
 it configurable.  Short term, we've bumped the default devicemapper
 container size from 10G to 100G (will show up in a future release).

 For now, you can adjust the --storage-opt dm.basesize=XYZG value
 passed to the docker daemon via /etc/sysconfig/docker-storage.  The
 downside of that is it requires users to tear down their docker
 storage and re-create it, in order to apply the new dm.basesize value.

 On Wed, Aug 5, 2015 at 11:19 PM, Carl Mosca carljmo...@gmail.com wrote:
  Is it possible to resize a container's filesystem?
 
  TIA,
  Carl
 
  --
  Carl J. Mosca



 --

 -- Jeremy Eder



-- 

-- Jeremy Eder



Re: [atomic-devel] resize container

2015-08-06 Thread Jeremy Eder
Unfortunately, not at this time.  We have some PRs out there to make
it configurable.  Short term, we've bumped the default devicemapper
container size from 10G to 100G (will show up in a future release).

For now, you can adjust the --storage-opt dm.basesize=XYZG value
passed to the docker daemon via /etc/sysconfig/docker-storage.  The
downside of that is it requires users to tear down their docker
storage and re-create it, in order to apply the new dm.basesize value.

On Wed, Aug 5, 2015 at 11:19 PM, Carl Mosca carljmo...@gmail.com wrote:
 Is it possible to resize a container's filesystem?

 TIA,
 Carl

 --
 Carl J. Mosca



-- 

-- Jeremy Eder



Re: [atomic-devel] docker binary

2015-07-22 Thread Jeremy Eder
On Tue, Jul 21, 2015 at 8:32 PM, Waldemar Augustyn walde...@astyn.com
wrote:

 No super privilege, rather, we want
 to control all those containers running on it.  Storage is another one.


Hi, what did you mean by 'storage is another one'?  You want to be able to
admin the host's storage from inside a container?



-- 

-- Jeremy Eder


Re: [atomic-devel] [CentOS-devel] Rebuild images ready for testing

2015-06-19 Thread Jeremy Eder
 I believe that *is* the correct URL, but it hasn't gone out to mirrors yet.
 
 The repo is also here:
 
 http://buildlogs.centos.org/centos/7/atomic-host/x86_64/repo/

That did it, thanks!



Re: [atomic-devel] Screen in Atomic

2015-04-17 Thread Jeremy Eder


- Original Message -
 From: James purplei...@gmail.com
 To: atomic-devel@projectatomic.io
 Cc: go...@redhat.com
 Sent: Friday, April 17, 2015 4:18:18 PM
 Subject: [atomic-devel] Screen in Atomic
 
 RE:
 https://lists.projectatomic.io/projectatomic-archives/atomic-devel/2015-April/msg00036.html
 
 There are two goals:
 
 1) Having Atomic get used and have people get comfortable with it
 2) Having it be extremely minimal
 
 While I think #2 is a great goal, I think #1 is more important *now*,
 as without any users it doesn't matter what size it is. At the moment,
 it's very difficult to use and hack on things without spawning 100
 terminals, which is why screen is a good addition.
 
 Gnome continuous actually got this right because it had a normal
 version, and a -devel version (which was bigger, but you got all the
 tools).
 
 The reasons you need a terminal multiplexer in base are well described here:
 
 https://git.fedorahosted.org/cgit/fedora-atomic.git/commit/?id=268fabe30ae6640293bc0c1a57372e0c83f504c4
 
 Now the question as to why tmux and not screen, or vice versa. Well,
 we've agreed that one is necessary, the next issue is which one? Both
 are significantly small and without any external dependencies. Screen
 is quite popular and still the most common. Some users prefer tmux and
 that is okay. So either include both, or include neither. Saying you
 will only include one is a giant f-off to users who believe strongly
 in one of them. It's the same thing as saying don't use vim, use
 emacs.
 
 I actually prefer screen, and there are a number of tools that don't
 work with tmux yet. vscreen (vagrant screen) is one example, and
 there's no tmux version yet! Also screen is my personal preference.
 Let's not be dogmatic about 100k, there are bigger issues to worry
 about.
 
 TL;DR: +1 screen

There is a long history of screen usage with industry standard benchmarks.  
There is zero chance of those ever being ported to tmux.  Perf team's own 
internal benchmark/system statistics wrapper uses screen as well, so...here's 
another +1.