Thanks all.

docker-gc looks like a viable solution for the interim. Cheers

On Sat, 10 Jun 2017 at 00:57 Alex Creek <[email protected]> wrote:

> I used to use Spotify's docker-gc too on build nodes running Docker, but
> there's a storage leak somewhere between Docker (1.12.6) and the thin LV.
> After a while the thin LV wouldn't shrink anymore after purging
> images/containers. I looked into thin_trim from the
> device-mapper-persistent-data package, but it wasn't trivial to set up the
> nodes to support it. I settled on cleaning up by destroying the Docker
> directory and the LVM volumes and recreating them with docker-storage-setup.
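>
> A rough sketch of that kind of full reset; the service name, Docker
> directory and VG/LV names below are assumptions, not necessarily what is
> used here:
>
>   systemctl stop docker
>   rm -rf /var/lib/docker                # wipe the Docker directory
>   lvremove -f docker-vg/docker-pool     # drop the thin pool LV
>   docker-storage-setup                  # recreate it from /etc/sysconfig/docker-storage-setup
>   systemctl start docker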
>
> Alex
>
> *From: *<[email protected]> on behalf of
> Aleksandar Lazic <[email protected]>
> *Organization: *ME2Digital e. U.
> *Date: *Friday, June 9, 2017 at 10:20 AM
> *To: *Mateus Caruccio <[email protected]>, Gary Franczyk <
> [email protected]>
> *Cc: *"[email protected]" <[email protected]
> >
> *Subject: *Re: [EXTERNAL] Re: garbage collection docker metadata
>
>
>
> Hi Mateus Caruccio.
>
> on Friday, 9 June 2017 at 14:50 was written:
>
> I do basically the same in a node cron job: docker rmi $(docker images -q)
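>
> (A minimal sketch of such a cron entry; limiting it to dangling images is
> an assumption here, whereas the command above tries to remove every image
> that is not in use:)
>
>   0 3 * * * root docker rmi $(docker images -q -f dangling=true) 2>/dev/null || true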
>
>
> We do the same.
>
> Like Andrew, I think the Kubernetes GC does not take care of the metadata
> part of the thin pool.
>
> Maybe there is already an issue open in Kubernetes for this.
>
> Regards
> Aleks
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
>
> 2017-06-09 9:30 GMT-03:00 Gary Franczyk <[email protected]>:
>
> I regularly run an app named “docker-gc” to clean up unused images and
> containers.
>
> https://github.com/spotify/docker-gc
>
>
>
> Gary Franczyk
> Senior Unix Administrator, Infrastructure
>
> Availity | 10752 Deerwood Park Blvd S. Ste 110, Jacksonville FL 32256
> W 904.470.4953 | M 561.313.2866
> [email protected]
>
> *From: *<[email protected]> on behalf of Andrew
> Lau <[email protected]>
> *Date: *Friday, June 9, 2017 at 8:27 AM
> *To: *Fernando Lozano <[email protected]>
> *Cc: *"[email protected]" <[email protected]
> >
> *Subject: *[EXTERNAL] Re: garbage collection docker metadata
>
>
> The error was from a different node.
>
> `docker info` reports plenty of data storage free. Manually removing
> images from the node has always fixed the metadata storage issue, which is
> why I was asking whether garbage collection takes metadata into account or
> only data storage.
>
> On Fri, 9 Jun 2017 at 22:11 Fernando Lozano <[email protected]> wrote:
>
> If the Docker GC complains images are in use and you get out of disk space
> errors, I'd assume you need more space for docker storage.
>
> On Fri, Jun 9, 2017 at 8:37 AM, Andrew Lau <[email protected]> wrote:
>
>
> On Fri, 9 Jun 2017 at 21:10 Aleksandar Lazic <[email protected]> wrote:
>
> Hi Andrew Lau.
>
> on Friday, 9 June 2017 at 12:35 was written:
>
> Does garbage collection get triggered when the docker metadata storage is
> full? Every few days I see some nodes fail to create new containers due to
> the docker metadata storage being full. Docker data storage has plenty of
> capacity.
>
> I've been cleaning out the images manually as the garbage collection
> doesn't seem to trigger.
>
>
> Have you tried changing the default settings?
>
>
> https://docs.openshift.org/latest/admin_guide/garbage_collection.html#image-garbage-collection
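>
> (For reference, a minimal sketch of the relevant kubelet arguments in the
> node config, typically /etc/origin/node/node-config.yaml; the threshold
> values are just examples:)
>
>   kubeletArguments:
>     image-gc-high-threshold:
>       - "80"
>     image-gc-low-threshold:
>       - "70"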
>
> How was the LVM thin pool created?
>
> https://docs.openshift.org/latest/install_config/install/host_preparation.html#configuring-docker-storage
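>
> (A minimal sketch of a typical /etc/sysconfig/docker-storage-setup, with
> the device and volume group names as placeholders; docker-storage-setup is
> then run once before Docker is started:)
>
>   DEVS=/dev/vdb
>   VG=docker-vg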
>
> docker-storage-setup normally calculates about 0.1% for metadata, as
> described in this line:
>
> https://github.com/projectatomic/container-storage-setup/blob/master/container-storage-setup.sh#L380
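>
> (If it is the metadata LV that keeps filling up, it can be inspected and
> grown independently of the data LV; a sketch, with the VG and pool names
> as assumptions:)
>
>   lvs -a -o lv_name,lv_size,data_percent,metadata_percent docker-vg
>   lvextend --poolmetadatasize +512M docker-vg/docker-pool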
>
>
> Garbage collection is set to 80 (high) and 70 (low).
>
> Garbage collection is running; I see it complain about images in use on
> other nodes:
>
> ImageGCFailed: wanted to free 3289487769, but freed 3466304680
> space with errors in image deletion: [Error response
> from daemon: {"message":"conflict: unable to delete 96f1d6e26029 (cannot be
> forced) - image is being used by running container 3ceb5410db59"}, Error
> response from daemon: {"message":"conflict: unable to delete 4e390ce4fc8b
> (cannot be forced) - image is being used by running container
> 0040546d8f73"}, Error response from daemon: {"message":"conflict: unable to
> delete 60b78ced07a8 (cannot be forced) - image has dependent child
> images"}, Error response from daemon: {"message":"conflict: unable to
> delete 2aebdcf9297e (cannot be forced) - image has dependent child images"}]
>
> docker-storage-setup was run with a 99% data volume. I'm wondering if maybe
> only the data volume is watched.
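>
> (With the devicemapper driver, docker info reports data and metadata space
> separately, which is one way to see which of the two is actually full; a
> quick check, assuming that driver:)
>
>   docker info 2>/dev/null | egrep 'Data Space|Metadata Space'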
>
>
> --
> Best Regards
> Aleks
>
>
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
