[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Vivek Goyal

vgoyal added a new comment to an issue you are following:
``
> Flipping from one to the other will take free space somewhere for the 'atomic 
> storage export/import' operation to temporarily store docker images and 
> containers to.
> A way around the xfs lack of shrink issue is to put the filesystem containing 
> /var onto a thinly provisioned LV (be it a dir on rootfs or its own volume). 
> After 'atomic storage reset' wipes the docker storage, issue fstrim, and all 
> the previously used extents will be returned to the thin pool, which can then 
> be returned to the VG, which can then be reassigned to a new docker thin 
> pool. Convoluted in my opinion, but doable.

IIUC, you are saying to use a thin LV for rootfs to work around the xfs shrink 
issue? People have tried that in the past and it has been discussed many 
times. There are still issues with xfs on top of a thin LV and with how the 
no-space situation is handled, etc. Bottom line, we are not there yet.
 
So if we can't put rootfs on a thin LV and xfs can't be shrunk, then the only 
way to flip back to devicemapper is to not let rootfs use all the free space, 
and to keep free space that can be used by either overlay2 or devicemapper for 
container storage.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


About the recent failures of Atomic images on Autocloud

2017-01-09 Thread Kushal Das
Hi,

I finally managed to reproduce the error on a local box. After doing the
reboot as in [1], the tool cannot ssh back into the vm. When I tried
the same with debug mode on, it still fails for some time, and then
randomly allows ssh again.

I could not reproduce this using the same images on our OpenStack
cloud. Any tips to help find the cause would be appreciated.

[1] https://apps.fedoraproject.org/autocloud/jobs/1845/output#264

Kushal
-- 
Fedora Cloud Engineer
CPython Core Developer
https://kushaldas.in
https://dgplug.org
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


Re: About the recent failures of Atomic images on Autocloud

2017-01-09 Thread Dusty Mabe


On 01/09/2017 08:31 AM, Kushal Das wrote:
> Hi,
> 
> I finally managed to reproduce the error on a local box. After doing the
> reboot as in [1], the tool cannot ssh back into the vm. When I tried
> the same with debug mode on, it still fails for some time, and then
> randomly allows ssh again.
> 
> I could not reproduce this using the same images on our OpenStack
> cloud. Any tips to help find the cause would be appreciated.
> 

Can we get together and debug this?
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


Re: About the recent failures of Atomic images on Autocloud

2017-01-09 Thread Matthew Miller
On Mon, Jan 09, 2017 at 10:16:51AM -0500, Dusty Mabe wrote:
> > I finally managed to reproduce the error on a local box. After doing the
> > reboot as in [1], the tool cannot ssh back into the vm. When I tried
> > the same with debug mode on, it still fails for some time, and then
> > randomly allows ssh again.
> > I could not reproduce this using the same images on our OpenStack
> > cloud. Any tips to help find the cause would be appreciated.
> > 
> Can we get together and debug this?

Is this a key-generation entropy problem?

-- 
Matthew Miller

Fedora Project Leader
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[Fedocal] Reminder meeting : Fedora Cloud Workgroup

2017-01-09 Thread dusty
Dear all,

You are kindly invited to the meeting:
   Fedora Cloud Workgroup on 2017-01-11 from 17:00:00 to 18:00:00 UTC
   At fedora-meetin...@irc.freenode.net

The meeting will be about:
Standing meeting for the Fedora Cloud Workgroup


Source: https://apps.fedoraproject.org/calendar/meeting/1999/

___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Josh Berkus

jberkus added a new comment to an issue you are following:
``
So, to summarize:

1. The main reason given for keeping "two partitions" by default with 
docker-storage-setup is so that users can easily switch back to devicemapper if 
there are critical issues with OverlayFS.

2. However, there is currently no tool which actually allows users to switch 
back to devicemapper without repartitioning.

3. Therefore, the reason given for maintaining two partitions as the default is 
invalid.

4. If that reason is invalid, we should again consider making "one big 
partition" the default for Overlay2 installations.


``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Daniel J Walsh

dwalsh added a new comment to an issue you are following:
``
I disagree with 2.  

We have tools that allow you to switch back to devicemapper if there is 
partitioning, which is why we want to keep partitioning.  If it were easy to 
switch from no partitioning to partitioned, then I would agree with just 
defaulting to overlay without partitions.

``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Dusty Mabe

dustymabe added a new comment to an issue you are following:
``
> The main reason given for keeping "two partitions" by default with 
> docker-storage-setup is so that users can easily switch back to devicemapper 
> if there are critical issues with OverlayFS.

I would like to also point out that one other benefit would be to prevent 
containers from cannibalizing your root partition.

> 
> 
> However, there is currently no tool which actually allows users to switch 
> back to devicemapper without repartitioning.

IMHO, in the world of container registries and such, I would be less worried 
about the content that exists on the machine (container images, containers, 
etc.). If there are two LVs (and thus two partitions), one can choose DM or 
overlay2. To switch between the two, blow away the one that exists and start 
over. If you want to keep your containers, store them somewhere else 
temporarily.

> 
> 
> Therefore, the reason given for maintaining two partitions as the default is 
> invalid.
> 
> 
> If that reason is invalid, we should again consider making "one big 
> partition" the default for Overlay2 installations.

I prefer overlay2 and would like to see there be only one option so that we can 
have less confusion in the future. However, giving users the choice is nice as 
well. Maybe there is a way to achieve both on startup. 
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Josh Berkus

jberkus added a new comment to an issue you are following:
``
@dwalsh see comments above about why those tools won't actually work in 
practice.  If we can work around those, then that changes things.  But right 
now what I'm hearing is "you can switch back, but only if you have unallocated 
space equal to or greater than your existing docker partition".
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Dusty Mabe

dustymabe added a new comment to an issue you are following:
``

>> The main reason given for keeping "two partitions" by default with 
>> docker-storage-setup is so that users can easily switch back to devicemapper 
>> if there are critical issues with OverlayFS.
> 
> I would like to also point out that one other benefit would be to prevent 
> containers from cannibalizing your root partition.

This might be of interest to counter my previous point: [
Implement XFS quota for overlay2](https://github.com/docker/docker/pull/24771)
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Josh Berkus

jberkus added a new comment to an issue you are following:
``
Also, using partitioning to limit Docker's space consumption only makes sense 
if we can somehow automagically "right-size" the two partitions.  In our 
current code, it doesn't matter how much space docker eats up, because we've 
only given the user 3GB on their root partition, which means they're already out 
of space anyway, just from log files.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Daniel J Walsh

dwalsh added a new comment to an issue you are following:
``
@jberkus The tools will work fine if you just want to start fresh and blow away 
your container images.

```
atomic storage reset
```
That should delete everything. Then you change your default backend using

```
atomic storage modify --driver ...
```

The only time you would need extra space is if you wanted to export/import 
your images.  

Without separate partitions to begin with, I end up having to reinstall the 
system in order to set up separate partitions.

If you are using containers correctly, destroying your images should not be 
too painful. :^)
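
A minimal end-to-end sketch of that fresh-start path (untested here, and the 
exact driver names accepted by `--driver` may vary between atomic versions):

```
# Sketch: fresh-start switch, discarding existing images and containers.
systemctl stop docker
atomic storage reset                      # wipe existing docker storage
atomic storage modify --driver overlay2   # change the default backend
systemctl start docker
docker info | grep "Storage Driver"       # confirm the new driver is in use
```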

``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Josh Berkus

jberkus added a new comment to an issue you are following:
``
@dwalsh aha.  The current docs emphasize export/import, so I thought it was 
required.

Lemme test that, but if it works that's a powerful argument for maintaining 
dual partitions for backwards compatibility.  If we're doing that, though, is 
there anything we can do about sizing the rootfs better?
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Dusty Mabe

dustymabe added a new comment to an issue you are following:
``
> If we're doing that, though, is there anything we can do about sizing the 
> rootfs better?

there is a `ROOT_SIZE` variable in docker-storage-setup that allows you to 
specify the size of the root partition. We could theoretically get more 
sophisticated about how we determine the "default" size, though; e.g. default 
to the lesser of `6G` or `30% of the entire disk`, or something like that. So 
for systems with a big disk the root size would be `6G`. 
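
As a concrete illustration (a sketch only; `ROOT_SIZE` is an existing 
docker-storage-setup variable, and `6G` is just the example value from above):

```
# /etc/sysconfig/docker-storage-setup
# Ask docker-storage-setup to size the root LV at 6G.
ROOT_SIZE=6G
```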
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Chris Murphy

chrismurphy added a new comment to an issue you are following:
``

>vgoyal
>IIUC, you are saying that use a thin LV for rootfs to work around xfs shrink 
>issue? People have tried that in the past and there have been talks about that 
>many a times. There are still issues with xfs on top of thin lv and how no 
>space situation is handled etc. Bottom line, we are not there yet.

You mean thin pool exhaustion? Right now the atomic host default uses the 
docker devicemapper driver which is XFS on a dm-thin pool. So I don't 
understand why one is OK and the other isn't.

>So if we can't use rootfs on thin LV and if xfs can't be shrinked, then only 
>way to flip back to devicemapper is don't allow rootfs to use all free space.

When hosted in the cloud, isn't it typical to charge for allocated space 
whether it's actively used or not?


>jberkus
>If that reason is invalid, we should again consider making "one big partition" 
>the default for Overlay2 installations.

Yes. It's the same effort to add more space (partition, LV, raw/qcow2), make it 
an LVM PV, and add to the VG and then let docker-storage-setup create a 
docker-pool thin pool from that extra space.
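
As a rough sketch of that flow (the device name `/dev/vdb` is an assumption; 
the VG name `atomicos` matches the lsblk output shown later in this thread):

```
# Sketch: attach extra space and hand it to docker-storage-setup.
pvcreate /dev/vdb                        # make the new device an LVM PV
vgextend atomicos /dev/vdb               # add it to the existing VG
systemctl restart docker-storage-setup   # re-run d-s-s (or reboot) so it can
                                         # carve a docker-pool thin pool or
                                         # grow rootfs from the new free space
```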


>dwalsh
>We have tools that allow you to switch back to devicemapper if there is 
>partitioning, which is why we want to keep partitioning. If it were easy to 
>switch from no partitioning to partitioned, then I would agree with just 
>defaulting to overlay without partitions.

My interpretation of jberkus's "one big partition" is a rootfs LV that uses all 
available space in the VG, reserving nothing. But it's still possible to add a 
PV to that VG and either grow rootfs for continued use of overlay2, or fall 
back to devicemapper. I don't interpret it literally to mean dropping LVM. 
You'd probably want some way of doing online fs resize as an option, and that 
requires rootfs on LVM or Btrfs, not a plain partition.

I think it's a coin toss having this extra space already available in the VG, 
vs expecting the admin to enlarge the backing storage or add an additional 
device, which is then added to the VG, which can then grow rootfs (overlay2) or 
be used as fallback with the Docker devicemapper driver. 

>dustymabe
>I would like to also point out that one other benefit would be to prevent 
>containers from cannibalizing your root partition.

That's not possible just by making /var a separate file system; you'd have to 
use quotas. Ostree owns /var, and it must be a directory on rootfs at present.

>I prefer overlay2 and would like to see there be only one option so that we 
>can have less confusion in the future. However, giving users the choice is 
>nice as well. Maybe there is a way to achieve both on startup.

You could have two kickstarts: overlay2 and devicemapper, and each kickstart is 
specified using a GRUB menu entry on the installation media. The devicemapper 
case uses the existing kickstart and depends on the existing 
docker-storage-setup "use 40% of VG free space for a dm-thin pool"; the 
overlay2 kickstart would cause the installer to use all available space for 
rootfs, leaving no unused space in the VG.
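
For example, the overlay2 variant's storage section might look roughly like 
this (a hypothetical kickstart fragment; the sizes and the `atomicos` VG name 
are illustrative, not the actual Fedora Atomic kickstart):

```
# Hypothetical kickstart storage fragment for the overlay2 case:
# give rootfs all remaining space in the VG, reserving nothing.
part /boot --size=300 --fstype=ext4
part pv.01 --size=1 --grow
volgroup atomicos pv.01
logvol / --vgname=atomicos --name=root --fstype=xfs --size=3000 --grow
```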


``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Vivek Goyal

vgoyal added a new comment to an issue you are following:
``
> 
> vgoyal
> IIUC, you are saying that use a thin LV for rootfs to work around xfs shrink 
> issue? People have tried that in the past and there have been talks about 
> that many a times. There are still issues with xfs on top of thin lv and how 
> no space situation is handled etc. Bottom line, we are not there yet.
> 
> You mean thin pool exhaustion? Right now the atomic host default uses the 
> docker devicemapper driver which is XFS on a dm-thin pool. So I don't 
> understand why one is OK and the other isn't.

There are outstanding bugs and issues against that.  Error handling was not 
graceful, and there were instances of containers hanging when the thin pool was 
full where the only solution was to reboot the system. So it is not fine as 
such; it's just that we don't seem to have better options. People have been 
talking about much closer interaction between xfs and the thin pool for quite 
some time. 

Anaconda developers tried setting up a thin pool out of the box in the past and 
eventually backed it out due to various issues.

In short, putting rootfs on a thin LV increases the complexity of the default 
setup and makes it difficult to recover if something goes bad (thin pool full). 
A lot of people don't like the idea of over-provisioning rootfs; they would 
rather have peace of mind with a pre-allocated rootfs. 
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Chris Murphy

chrismurphy added a new comment to an issue you are following:
``
> vgoyal 
> In short, putting rootfs on a thin LV increases the complexity of the default 
> setup and makes it difficult to recover if something goes bad (thin pool 
> full). A lot of people don't like the idea of over-provisioning rootfs; they 
> would rather have peace of mind with a pre-allocated rootfs.

OK, got it. Without over-provisioning, rootfs on a thin LV carries the same 
risk as the Docker devicemapper driver using a dm-thin pool; what unacceptably 
increases the risk is over-provisioning rootfs, which is necessarily what 
happens when using fstrim on it to recoup extents for a devicemapper-based 
reversion. Fair enough. I have seen thin pool exhaustion explode spectacularly, 
with total data loss of all LVs using the thin pool and repair tools unable to 
repair it; not just the fs that was writing at the time the exhaustion 
happened.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Vivek Goyal

vgoyal added a new comment to an issue you are following:
``
> When hosted in the cloud, isn't it typical to charge for allocated space 
> whether it's actively used or not?

Not sure what this has to do with how we partition the storage between rootfs 
and docker.

> 
> jberkus
> If that reason is invalid, we should again consider making "one big 
> partition" the default for Overlay2 installations.
> 
> Yes. It's the same effort to add more space (partition, LV, raw/qcow2), make 
> it an LVM PV, and add to the VG and then let docker-storage-setup create a 
> docker-pool thin pool from that extra space.

I think the server variant also adds all space to a VG, then carves out an LV 
for rootfs and leaves the rest of the space free in the VG. docker-storage-setup 
can then use this space for image/container storage. For now devicemapper makes 
use of it, and going forward by default it will be used for overlay2.
> 
> dwalsh
> We have tools that allow you to switch back to devicemapper if there is 
> partitioning, which is why we want to keep partitioning. If it were easy to 
> switch from no partitioning to partitioned, then I would agree with just 
> defaulting to overlay without partitions.
> 
> My interpretation of jberkus "one big partition" is a rootfs LV that uses all 
> available space in the VG, reserving nothing. But it's still possible to add 
> a PV to that VG and either grow rootfs for continued use of overlay2; or to 
> fallback to devicemapper. I don't interpret it literally to mean dropping 
> LVM. You'd probably want some way of doing online fs resize as an option, and 
> that requires rootfs on LVM or Btrfs, not a plain partition.
> I think it's a coin toss having this extra space already available in the VG, 
> vs expecting the admin to enlarge the backing storage or add an additional 
> device, which is then added to the VG, which can then grow rootfs (overlay2) 
> or be used as fallback with the Docker devicemapper driver.

Asking users to either grow the existing disk or add more disks to the VG to 
make space for devicemapper might not always be feasible. For example, somebody 
might give me a VM to work with where I have no control over the management of 
that VM and am expected to work with the resources provided in it. I think we 
need to provide the option of going back to devicemapper without requiring more 
disk space to be added to the VM.

> I prefer overlay2 and would like to see there be only one option so that we 
> can have less confusion in the future. However, giving users the choice is 
> nice as well. Maybe there is a way to achieve both on startup.
> 
> You could have two kickstarts: overlay2 and devicemapper, and each kickstart 
> is specified using a GRUB menu entry on the installation media. The 
> devicemapper case uses the existing kickstart and depends on the existing 
> docker-storage-setup "use 40% of VG free space for a dm-thin pool"; the 
> overlay2 kickstart would cause the installer to use all available space for 
> rootfs, leaving no unused space in the VG.

So this is the image-generation part? We would generate and ship two kinds of 
images, and users would download one of them based on the default storage 
driver they want to use? 
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Dusty Mabe

dustymabe added a new comment to an issue you are following:
``

>>dustymabe
>>I would like to also point out that one other benefit would be to prevent 
>>containers from cannibalizing your root partition.
> 
> Not possible by making /var a separate file system, you'd have to use quotas. 
> Ostree owns /var, it must be a directory on rootfs at present.
> 

with 
[DOCKER_ROOT_VOLUME](https://github.com/projectatomic/docker-storage-setup/pull/175/commits/ee035598cbbd8c194ab3f6830e38972dee24744a)
 and `overlayfs` using that volume, all of `/var/lib/docker` would be taken 
care of. Please let me know if I'm wrong. 

>> I prefer overlay2 and would like to see there be only one option so that we 
>> can have less confusion in the future. However, giving users the choice is 
>> nice as well. Maybe there is a way to achieve both on startup.
> 
> You could have two kickstarts: overlay2 and devicemapper, and each kickstart 
> is specified using a GRUB menu entry on the installation media. The 
> devicemapper case uses the existing kickstart and depends on the existing 
> docker-storage-setup "use 40% of VG free space for a dm-thin pool"; the 
> overlay2 kickstart would cause the installer to use all available space for 
> rootfs, leaving no unused space in the VG.

So I hardly ever use interactive installs, but that is a valid case. I would 
think most people installing a server fresh would be using their own kickstart 
file and would set up storage the way they want it, right? I tend to think more 
about the cloud use case where you spin up a preconfigured image. What I was 
referring to is having `docker-storage-setup` be able to make the switch for 
us. It turns out that we have the storage configured like this in the baked 
images (note this is before `docker-storage-setup` runs):

```text
-bash-4.3# lsblk
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb 8:16   0   10G  0 disk 
sdc 8:32   0  368K  0 disk 
sda 8:00   20G  0 disk 
├─sda2  8:20 19.7G  0 part 
│ └─atomicos-root 253:009G  0 lvm  /sysroot
└─sda1  8:10  300M  0 part /boot
```

This means we can essentially look at whether the user provided `overlay` or 
`DM` and do whatever they asked (rough sketch after this list):
- If they provided overlay then we can just extend the root partition and go on 
our merry way.
- If they also specified `DOCKER_ROOT_VOLUME=yes` then they want overlay on 
another partition; did they specify a partition? Yes: use that one. No: create 
an LV. 
- If they provided DM then create new LVs and set it up just like we have been 
doing before this discussion started. 
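
A rough sketch of that branching (purely illustrative, not actual 
docker-storage-setup code; only the `STORAGE_DRIVER` and `DOCKER_ROOT_VOLUME` 
variables are real, and the helper names are placeholders):

```
# Illustrative pseudocode of the proposed docker-storage-setup behaviour.
if [ "$STORAGE_DRIVER" = "devicemapper" ]; then
    setup_dm_thin_pool_lvs                  # existing behaviour: new LVs + thin pool
elif [ "$DOCKER_ROOT_VOLUME" = "yes" ]; then
    setup_separate_docker_root_volume       # overlay on its own partition or LV
else
    grow_root_lv_and_filesystem             # overlay on one big root partition
fi
```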

``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Josh Berkus

jberkus added a new comment to an issue you are following:
``
And what happens if the user doesn't provide anything?  That's the "defaults" 
case.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Dusty Mabe

dustymabe added a new comment to an issue you are following:
``
> And what happens if the user doesn't provide anything?  That's the "defaults" 
> case.

This is the big question. We are essentially debating whether we should enable 
`DOCKER_ROOT_VOLUME=yes` by default or not. 
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Vivek Goyal

vgoyal added a new comment to an issue you are following:
``
> 
> And what happens if the user doesn't provide anything?  That's the "defaults" 
> case.
> 
> This is the big question. We are essentially debating whether we should 
> enable DOCKER_ROOT_VOLUME=yes by default or not.

Right. And I think it is a good idea to enable it by default, because that 
allows users to switch back to devicemapper easily without having to add more 
disk space to the VM.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Chris Murphy

chrismurphy added a new comment to an issue you are following:
``
> dustymabe
> with DOCKER_ROOT_VOLUME and overlayfs using that then all of /var/lib/docker 
> would be taken care of. Please let me know if I'm wrong.

It'll work on a conventional installation. I'm skeptical it'll work on an 
rpm-ostree installation because /var is already a bind mount performed by 
ostree during the startup process. So I'm pretty sure ostree is going to have 
to know about the "true nature" of a separate var partition, mount it, then 
bind mount it correctly.

>I tend to think more about the cloud use case where you spin up a 
>preconfigured image. What I was referring to is having docker-storage-setup be 
>able to make the switch for us.

I don't have a strong opinion on where the proper hinting belongs to indicate 
which driver to use. The user already has to set up #cloud-config, so maybe the 
hint belongs in there, and either it does something to storage which is then 
understood by docker-storage-setup, or the hint is just a baton handed to 
docker-storage-setup to act on; it just depends on which is more flexible and 
maintainable.

> This means we can essentially look at if the user provided overlay or DM and 
> do whatever they asked.
> - If they provided overlay then we can just extend the root partition and go 
> on our merry way.
> - If they also specified DOCKER_ROOT_VOLUME=yes then they want overlay on 
> another partition; did they specify a partition? Yes: use that one. No: create 
> an LV.
> - If they provided DM then create new LVs and set it up just like we have 
> been doing before this discussion started.

Seems reasonable. But I have zero confidence at the moment that ostree can 
handle a separate /var file system; it's a question for Colin what assumptions 
are being made. I think it assumes /var is a directory that it bind mounts 
somewhere, and if it's really a separate volume, then something has to mount it 
first before it can be bind mounted elsewhere.

An additional trick is testing any changes against Btrfs, where mounting 
subvolumes explicitly is actually a bind mount behind the scenes. That should 
just work, but...
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Josh Berkus

jberkus added a new comment to an issue you are following:
``
Dusty:

Right, and that question comes down to "how much do we care about revertability 
vs. user experience".  It's not an easy question to answer.  In the long run, 
DOCKER_ROOT_VOLUME=no as the default is the obvious answer.  But for F26?  Not 
so sure.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #187 `Remove-Kube: Make fully containerized install work`

2017-01-09 Thread Josh Berkus

jberkus reported a new issue against the project: `atomic-wg` that you are 
following:
``
This is one of several issues which need to be overcome in order to remove 
Kubernetes from the base Atomic Host image.

This issue is a tracking issue for the various technical problems and bugs 
which prevent us from running the full Kubernetes infrastructure in containers. 
 @jasonbrooks should have detail on what the specific blockers are.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/187
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #188 `Remove-Kube: produce official containers with Kubernetes components`

2017-01-09 Thread Josh Berkus

jberkus reported a new issue against the project: `atomic-wg` that you are 
following:
``
In order to remove Kubernetes from the AH base image and have a full 
containerized Kube install, we need to have "official" kubernetes containers 
produced by the Fedora project.  They need to be automatically updated and 
mirrored, so we'll want to use FDLIBS for them.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/188
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #189 `Remove-Kube: document new Kubernetes install process`

2017-01-09 Thread Josh Berkus

jberkus reported a new issue against the project: `atomic-wg` that you are 
following:
``
Once we have a working, containerized Kubernetes install process, we'll need to 
write documentation for it.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/189
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #190 `Remove-Kube: Determine and document migration process`

2017-01-09 Thread Josh Berkus

jberkus reported a new issue against the project: `atomic-wg` that you are 
following:
``
We will need to figure out, and then fully document, a migration process for 
users of the old built-in Kubernetes binaries to upgrade to F26 with 
containerized binaries.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/190
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


Prerequisites to removing Kube from base image

2017-01-09 Thread Josh Berkus
Atomic WG:

I've filed four issues which I think represent the prerequisites to
making removal of the kubernetes binaries from the base Atomic image work.

https://pagure.io/atomic-wg/issues?status=Open&tags=remove-kube

Of course, those issues are pretty broad requirements.  Realistically, I
don't see getting this done by F26 just because of the outstanding
issues with a containerized install. But we'll see.

-- 
--
Josh Berkus
Project Atomic
Red Hat OSAS
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


Re: Prerequisites to removing Kube from base image

2017-01-09 Thread Jason Brooks
On Mon, Jan 9, 2017 at 1:21 PM, Josh Berkus  wrote:
> Atomic WG:
>
> I've filed four issues which I think represent the prerequisites to
> making removal of the kubernetes binaries from the base Atomic image work.
>
> https://pagure.io/atomic-wg/issues?status=Open&tags=remove-kube
>
> Of course, those issues are pretty broad requirements.  Realistically, I
> don't see getting this done by F26 just because of the outstanding
> issues with a containerized install. But we'll see.

I'm working on these -- the kube containers are approved and I'm doing
the uploading / building now. I'll update the relevant tickets as I
go.

>
> --
> --
> Josh Berkus
> Project Atomic
> Red Hat OSAS
> ___
> cloud mailing list -- cloud@lists.fedoraproject.org
> To unsubscribe send an email to cloud-le...@lists.fedoraproject.org
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


Re: Prerequisites to removing Kube from base image

2017-01-09 Thread Dusty Mabe


On 01/09/2017 04:21 PM, Josh Berkus wrote:
> Of course, those issues are pretty broad requirements.  Realistically, I
> don't see getting this done by F26 just because of the outstanding
> issues with a containerized install.

What are the outstanding issues? I know https://pagure.io/atomic-wg/issue/187
is for that. I guess let's fill that out sooner rather than later and have
the discussion there.

It would be really nice not to miss f26 on this. Can we prepare a
change request for f26 anyway? I'd like to be able to
make this change if we can.

Dusty
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Dusty Mabe

dustymabe added a new comment to an issue you are following:
``
> chrismurphy
> Seems reasonable. But I have zero confidence at the moment that ostree can 
> handle a separate /var file system; it's a question for Colin what 
> assumptions are being made and I think it assumes it's directory that it bind 
> mounts somewhere, and if it's really a separate volume, then something has to 
> mount it first before it can be bind mounted elsewhere.

hmm. so I'm not sure about everything you've said because you've thrown around 
some concepts that I might not understand fully. However, what I can do is 
test. I grabbed a fedora 25 atomic system and did not allow docker to run on 
first boot (`systemd.mask=docker systemd.mask=docker-storage-setup` on kernel 
command line). I then did `ostree admin unlock --hotfix` so I could modify the 
contents of the tree. I then grabbed latest upstream 
[docker-storage-setup](https://api.github.com/repos/projectatomic/docker-storage-setup/tarball)
 and installed everything to the system with `make install`. 

I then configured /etc/sysconfig/docker-storage-setup with:
```
STORAGE_DRIVER=overlay2
DOCKER_ROOT_VOLUME=yes
```

and rebooted the system. Now I get:

```text
-bash-4.3# lsblk
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb 8:16   0   10G  0 disk 
sdc 8:32   0  368K  0 disk 
sda 8:00   20G  0 disk 
├─sda2  8:20  5.7G  0 part 
│ ├─atomicos-docker--root--lv 253:10  1.1G  0 lvm  /var/lib/docker
│ └─atomicos-root 253:003G  0 lvm  /sysroot
└─sda1  8:10  300M  0 part /boot
-bash-4.3# 
-bash-4.3# blkid
/dev/sda1: UUID="1cffb3b3-f5c4-4c73-9e4c-adb168f1cefa" TYPE="ext4" 
PARTUUID="82b21228-01"
/dev/sda2: UUID="l5jqv8-ZxTX-jIfh-ve4J-aqID-mAZi-O5mU5n" TYPE="LVM2_member" 
PARTUUID="82b21228-02"
/dev/mapper/atomicos-root: UUID="96a6e82b-98e5-4ab3-8034-72b61540c166" 
TYPE="xfs"
/dev/sdc: UUID="2017-01-09-18-25-56-00" LABEL="cidata" TYPE="iso9660"
/dev/mapper/atomicos-docker--root--lv: 
UUID="3f5ee97d-f612-46c4-abe5-21799e4830b1" TYPE="xfs"
-bash-4.3# 
-bash-4.3# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.5
Storage Driver: overlay2
 Backing Filesystem: xfs
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: null host bridge overlay
Swarm: inactive
Runtimes: oci runc
Default Runtime: oci
Security Options: seccomp selinux
Kernel Version: 4.8.15-300.fc25.x86_64
Operating System: Fedora 25 (Atomic Host)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 2
Total Memory: 3.859 GiB
Name: cloudhost.localdomain
ID: YKSF:TWGT:FNJH:B553:F3FK:RFHJ:OUAK:AYOO:T5NP:WBTL:KZFI:MYSY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8
Registries: docker.io (secure)
```

The mount is handled by the `var-lib-docker.mount` systemd unit:
```text
-bash-4.3# systemctl cat var-lib-docker.mount 
# /etc/systemd/system/var-lib-docker.mount
[Unit]
Description=Mount docker-root-lv on docker root directory.
Before=docker-storage-setup.service

[Mount]
What=/dev/atomicos/docker-root-lv
Where=/var/lib/docker
Type=xfs
Options=defaults

[Install]
WantedBy=docker-storage-setup.service
```


Am I missing something? Did I make some bad assumptions somewhere in this test?
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org


[atomic-wg] Issue #186 `switch to overlay2`

2017-01-09 Thread Chris Murphy

chrismurphy added a new comment to an issue you are following:
``
>dustymabe 
>Am I missing something? Did I make some bad assumptions somewhere in this test?
Nope, works for me as well. /var is still a directory on the ext4 rootfs, but 
it looks like a new LV is created using 40% of the free space in the VG, 
formatted XFS, and var-lib-docker.mount mounts it at /var/lib/docker; that 
mount file is created by the code triggered by DOCKER_ROOT_VOLUME=yes.

I did additionally try a migration from devicemapper to overlay2 using atomic 
storage export + reset + modify + import, and it does work. There is no 
automatic space recapture of the docker-root-lv LV; however, the user could 
delete it after the modify step, reboot so docker-storage-setup sets up the 
dm-thin pool, and then do the import. I'm assuming in any case that there needs 
to be temp space somewhere for the exported containers.
``

To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/186
___
cloud mailing list -- cloud@lists.fedoraproject.org
To unsubscribe send an email to cloud-le...@lists.fedoraproject.org