What about cleaning up the local docker storage? We have had several
"incidents" where deployments of a pod failed due to "out of disk" errors
on the node.
For example on an infrastructure node:
$ sudo docker info
Containers: 8
Images: 23
Storage Driver: devicemapper
Pool Name: docker--vg-docker--pool
Pool Blocksize: 524.3 kB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 78.4 GB
Data Space Total: 80.08 GB
Data Space Available: 1.674 GB
Metadata Space Used: 5.714 MB
Metadata Space Total: 83.89 MB
Metadata Space Available: 78.17 MB
Udev Sync Supported: true
Deferred Removal Enabled: false
Library Version: 1.02.107-RHEL7 (2015-10-14)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: OpenShift Enterprise
$ sudo docker images --all
REPOSITORY                                                        TAG        IMAGE ID       CREATED        VIRTUAL SIZE
registry.access.redhat.com/openshift3/metrics-heapster            3.1.1      2ff00fc36375   8 weeks ago    230 MB
registry.access.redhat.com/openshift3/metrics-hawkular-metrics    3.1.1      5c02894a36cd   8 weeks ago    1.435 GB
<none>                                                            <none>     7c80ddb514e4   8 weeks ago    1.209 GB
<none>                                                            <none>     6aecaabbf532   8 weeks ago    697 MB
<none>                                                            <none>     49535b796b0c   8 weeks ago    410.2 MB
<none>                                                            <none>     af3c7ada9b2b   8 weeks ago    215.2 MB
<none>                                                            <none>     bf63a676257a   8 weeks ago    203.2 MB
registry.access.redhat.com/openshift3/ose-f5-router               latest     9448e7137a67   3 months ago   440.3 MB
registry.access.redhat.com/openshift3/ose-f5-router               v3.1.1.6   9448e7137a67   3 months ago   440.3 MB
registry.access.redhat.com/openshift3/ose-deployer                v3.1.1.6   c86dcc5d726f   3 months ago   440.3 MB
registry.access.redhat.com/openshift3/ose-docker-registry         v3.1.1.6   8e574c74fd8a   3 months ago   475.3 MB
registry.access.redhat.com/openshift3/ose-pod                     v3.1.1.6   ec2a195f885e   3 months ago   424.8 MB
<none>                                                            <none>     e6fcc9f19fb0   3 months ago   270.7 MB
<none>                                                            <none>     ca8d16d50502   3 months ago   440.3 MB
<none>                                                            <none>     f5b63152ffd3   3 months ago   270.7 MB
registry.access.redhat.com/openshift3/ose-deployer                v3.1.0.4   9580a28b3e18   4 months ago   395.6 MB
registry.access.redhat.com/openshift3/ose-f5-router               v3.1.0.4   f90b61070942   4 months ago   395.6 MB
<none>                                                            <none>     62208f151337   4 months ago   395.6 MB
registry.access.redhat.com/openshift3/ose-docker-registry         v3.1.0.4   eb7a879607cf   4 months ago   419.9 MB
registry.access.redhat.com/openshift3/ose-keepalived-ipfailover   v3.1.0.4   7fb6478ccf8f   4 months ago   292 MB
<none>                                                            <none>     b6e110bbfe5d   4 months ago   270.7 MB
registry.access.redhat.com/openshift3/ose-pod                     v3.1.0.4   758ab73ad286   4 months ago   327.4 MB
<none>                                                            <none>     50749f3bbd45   4 months ago   270.7 MB
<none>                                                            <none>     6c3a84d798dc   4 months ago   201.7 MB
Roughly summing those sizes gives a total of ~12-13 GB, not the 78 GB that
"docker info" reports as used. What has happened to the missing ~60 GB of
storage?
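The best we have come up with so far is a manual cleanup along these lines
(a sketch only; this docker version predates "docker system prune", and the
VG name below is derived from the "Pool Name" reported above):

# remove exited containers first, since they pin image layers
sudo docker ps -a -q -f status=exited | xargs -r sudo docker rm
# then remove the dangling <none>:<none> image layers
sudo docker images -q -f dangling=true | xargs -r sudo docker rmi
# and check what the thin pool reports afterwards
sudo docker info | grep 'Data Space'
sudo lvs -o lv_name,data_percent,metadata_percent docker-vg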
I thought (naively) that Kubernetes garbage collection (
https://docs.openshift.com/enterprise/3.1/admin_guide/garbage_collection.html)
would take care of this. We are, by the way, using the default parameters,
i.e. we haven't configured any kubeletArguments in node-config.yaml.
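If we wanted more aggressive image garbage collection, my reading of that
doc is that it would be configured roughly like this in node-config.yaml (a
sketch; the thresholds are percentages of disk usage, and these values are
made up):

kubeletArguments:
  image-gc-high-threshold:
  - "70"
  image-gc-low-threshold:
  - "50"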
On 29 April 2016 at 12:20, v <[email protected]> wrote:
> Hello,
>
> thank you for all the interesting insights!
> We were indeed able to identify a few images that can't be pulled (the
> error message from docker pull is "manifest unknown: manifest unknown").
> However, 95% of the images are not in use any more and are never going
> to be pulled again, so the unpullable images haven't been an issue for
> us so far.
>
> Our main issue is that the registry keeps growing and that "oadm prune
> images" doesn't do anything about it. The issue that you have described
> might be related to this; if you have any ideas on how we could
> investigate this further and get "oadm prune images" to work, I'd be
> very interested in trying them out.
>
> BTW I tried to find the registry layer blobs and indeed I did find all of
> them:
>
> $ oadm prune images --keep-tag-revisions=2
> I0427 11:29:14.205485 558627 imagepruner.go:430] Unable to find image
> "sha256:750b9cf638df0e9401b8305ba922a230c9e5be60832f13a7f7b5679d11b899e6"
> in the graph
> I0427 11:29:14.227040 558627 imagepruner.go:430] Unable to find image
> "sha256:750b9cf638df0e9401b8305ba922a230c9e5be60832f13a7f7b5679d11b899e6"
> in the graph
> I0427 11:29:14.230952 558627 imagepruner.go:430] Unable to find image
> "sha256:750b9cf638df0e9401b8305ba922a230c9e5be60832f13a7f7b5679d11b899e6"
> in the graph
> Dry run enabled - no modifications will be made. Add --confirm to remove
> images
> *snip*
> Deleting registry layer blobs ...
> BLOB
> sha256:26cb28a2d521429917baf3518a1590dd33d538e44b30ab6e814baebe83549111
> sha256:326e3193f83b9d481797a0d1f70962f18873aced53dabf69fefae8dd456a9e46
> sha256:0326956423daf9466ab0bac9b616c1338a6040b81191829e76946c73f067f5c2
> sha256:948f9feeef886428a08a1fdebf3c3534a6aaa42a183281a6347b49cfbb40e04b
> sha256:ed38eed867b3758bc061ea5a7093a91f985c269c53be2eabaa3e0aa2374622f5
> sha256:e2193d52a9a83cfc583f7892fd8b856b3af264d22b112eca526660b6c7dc0c7d
>
> $ find /data -type d | grep 26cb28a2d52142*
>
> /data/registry/docker/registry/v2/repositories/openshift/haproxy/_layers/sha256/26cb28a2d521429917baf3518a1590dd33d538e44b30ab6e814baebe83549111
>
> /data/registry/docker/registry/v2/blobs/sha256/26/26cb28a2d521429917baf3518a1590dd33d538e44b30ab6e814baebe83549111
> $ find /data -type d | grep 326e3193f83b9d4*
>
> /data/registry/docker/registry/v2/repositories/openshift/haproxy/_layers/sha256/326e3193f83b9d481797a0d1f70962f18873aced53dabf69fefae8dd456a9e46
>
> /data/registry/docker/registry/v2/blobs/sha256/32/326e3193f83b9d481797a0d1f70962f18873aced53dabf69fefae8dd456a9e46
> etc.
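>
> To check all of the listed blobs in one go, something like this should
> work (a sketch; blobs.txt is a hypothetical file holding the digests
> printed under "Deleting registry layer blobs"):
>
> while read d; do
>     d=${d#sha256:}   # strip the algorithm prefix
>     test -e /data/registry/docker/registry/v2/blobs/sha256/${d:0:2}/$d \
>         && echo "$d present" || echo "$d MISSING"
> done < blobs.txt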
>
> Regards,
> v
>
>
> On 2016-04-26 at 12:58, Michal Minář wrote:
>
> Don't worry, all you've pasted is actually correct.
> We don't store manifests in /docker/registry/v2/blobs. Only layer blobs
> and signatures can be found there.
>
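> For orientation, the on-disk layout is roughly (a sketch of the paths,
> not a complete listing):
>
> /docker/registry/v2/blobs/sha256/<xx>/<digest>/data         # layer data
> /docker/registry/v2/repositories/<ns>/<name>/_layers/...    # links to blobs
> /docker/registry/v2/repositories/<ns>/<name>/_manifests/... # revisions/tags
>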
> If you have output like this:
>
> Deleting registry layer blobs ...
> BLOB
>
> sha256:a844c88431c581881114773680304d177d7b8fa983bfc4299b0b5b15ad68b71b
> ...
>
> then you can start looking into `/docker/registry/v2/blobs`.
>
> To get this output, you may consider changing the defaults of the prune
> command: `oadm prune images --keep-tag-revisions=2` (make sure not to
> add --confirm).
>
> The problem is that you have some images referring to nonexistent blobs,
> which makes them unpullable. These could be fixed by re-pushing the
> affected images manually with the docker client, if you still have them
> on some of your nodes.
>
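> A manual re-push would look roughly like this (a sketch; the registry
> host matches the script below, and <local-image-id>, <namespace>,
> <isname> and <tag> are placeholders to fill in):
>
> registry_url=our.domain.ltd:5000
> docker login -u admin -e [email protected] -p "$(oc whoami -t)" $registry_url
> docker tag <local-image-id> $registry_url/<namespace>/<isname>:<tag>
> docker push $registry_url/<namespace>/<isname>:<tag>
>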
> The easiest (if inefficient) way to identify them is to try to pull them
> one by one with a clean docker daemon (one that has no local images):
>
> registry_url=our.domain.ltd:5000
> # all projects; narrow this down if you only care about some of them
> namespaces=$(oc get projects --no-headers | awk '{print $1}')
> for nm in $namespaces; do
>     for isname in $(oc get is -n $nm --no-headers | awk '{print $1}'); do
>         # turn every image digest recorded in the image stream into a
>         # pullable reference: <registry>/<namespace>/<name>@sha256:...
>         for image in $(oc get -o yaml -n $nm is/$isname | sed -n \
>             "s,^\s\+image:\s*\(.*\),$registry_url/$nm/$isname@\1,p"); do
>             # log the failures so the broken images can be re-pushed later
>             docker pull "$image" || echo "$image" >> pull-failures.txt
>         done
>     done
> done
>
> They can be deleted using `oc rmi $image`. However, that won't purge
> unreferenced blobs from the storage. As I said earlier, we'll deal with
> them in future versions.
>
> I'd be interested to find out why the blobs aren't available anymore. I
> have trouble believing it's the registry misbehaving.
>
> Regards,
> Michal
>
> On 04/26/2016 11:46 AM, v wrote:
>
> Hello,
>
> here is what I found:
> $ find /data -type d -name 26429*
> /data/registry/docker/registry/v2/repositories/testnamespace/index/_manifests/revisions/sha256/26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1
>
>
>
> Now, we didn't expect to find anything in /v2/blobs/ because that data
> should be gone after the pruning, right?
> Well, it turns out those paths in /v2/blobs/ never existed in the first
> place. See the following output:
>
> $ oadm prune images
> Dry run enabled - no modifications will be made. Add --confirm to remove
> images
> Deleting references from image streams to images ...
> STREAM IMAGE TAGS
> testnamespace/index
> sha256:5b2d03a9c0a6bece7944753870c17709f2e24af7963bc61f8034c8910aca1510
> latest
> testnamespace/index
> sha256:5c9a3bff2b461f2be8f31eaf47a5ff0b027c6e8d19442f97c4a067da130eb65c
> latest
> *snip*
>
> Deleting registry repository manifest data ...
> REPO IMAGE
> testnamespace/index
> sha256:5b2d03a9c0a6bece7944753870c17709f2e24af7963bc61f8034c8910aca1510
> testnamespace/index
> sha256:5c9a3bff2b461f2be8f31eaf47a5ff0b027c6e8d19442f97c4a067da130eb65c
> *snip*
>
> Deleting images from server ...
> IMAGE
> sha256:5b2d03a9c0a6bece7944753870c17709f2e24af7963bc61f8034c8910aca1510
> sha256:5c9a3bff2b461f2be8f31eaf47a5ff0b027c6e8d19442f97c4a067da130eb65c
> *snip*
>
> $ find /data -type d -name 5b2d03a9c0*
> /data/registry/docker/registry/v2/repositories/testnamespace/index/_manifests/revisions/sha256/5b2d03a9c0a6bece7944753870c17709f2e24af7963bc61f8034c8910aca1510
>
>
> $ find /data -type d -name 5c9a3bff2*
> /data/registry/docker/registry/v2/repositories/testnamespace/index/_manifests/revisions/sha256/5c9a3bff2b461f2be8f31eaf47a5ff0b027c6e8d19442f97c4a067da130eb65c
>
>
>
> As we can see, the folders in /v2/blobs/ have different names/hashes
> than the folders in /v2/repositories/. That at least explains the error
> messages we are seeing in the logs of the docker registry.
> It is possible that we changed the volume mount from hostMount to NFS,
> but if we did, it was over six months ago when we first installed
> OpenShift.
>
> Do you have any idea how we could fix this?
> Can we just delete the DC and create a new registry with "oadm registry"
> or do we risk losing all our images?
>
> Regards,
> v
>
>
>
>
> On 2016-04-26 at 09:26, Michal Minář wrote:
>
> Thank you once again.
>
> The blobs aren't at the location where they should have been stored.
> Have you perhaps changed the volume mount (path or hostMount -> NFS) or
> done some data migration? Could you perhaps try to `find /data -type d
> -name 26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1`
> on your NFS server and/or the node running the registry? According to
> your setup, it should have been stored as
> /data/registry/docker/registry/v2/blobs/sha256/26/26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1
> on your NFS server.
>
> Even if you manage to find some dislocated digests and move them to the
> right place, the prune won't clean them up because they're no longer
> referenced in etcd. We will be able to deal with them in a future
> version, when the pruning is moved into the registry itself.
>
> Also, the NFS server may delay the writes, depending on your config. If
> there were some successful deletes, you might only see a relevant size
> update after a considerable amount of time (even minutes).
>
> Regards,
> Michal
>
> On 04/25/2016 05:11 PM, v wrote:
>
> Hello,
>
> here comes the stuff you asked for:
>
> On 2016-04-25 at 10:26, Michal Minář wrote:
>
> Thank you v for all the data so far.
>
> That's very suspicious. The deletions seem to run through without a
> problem. The amount of disk space freed should be significant with 70+
> layer blobs removed.
>
> I've got a few more questions for you, if you don't mind:
>
> - Could you please also include a snip from the registry log from the
> time the prune ran? `oc logs dc/docker-registry`
>
> I have attached this to the bottom of the email.
>
>
> - And the version of your registry image?
> `oc describe dc/docker-registry | grep -i image:`
>
> $ oc describe dc/docker-registry | grep -i image
> Image: openshift/origin-docker-registry:v1.1.4
>
>
> - Is there just one replica of the registry?
>
> Yes, just one.
>
>
> - Do you use a custom config file? If yes, could you please share the
> storage section?
>
> No custom config file, everything default.
>
>
> - Also do you define some registry storage environment variables?
> `oc env --list dc/docker-registry | grep REGISTRY_STORAGE`
>
> The environment variable "REGISTRY_STORAGE" does not exist on either my
> live cluster or my test cluster. Only the following variables are set:
> $ oc env --list dc/docker-registry
> OPENSHIFT_CA_DATA=
> OPENSHIFT_CERT_DATA=
> OPENSHIFT_INSECURE=false
> OPENSHIFT_KEY_DATA=
> OPENSHIFT_MASTER=https://domain.com:8443
>
>
> - Does your mount volume look like this?
>
> $ oc volume dc/docker-registry --name registry-storage
> deploymentconfigs/docker-registry
> host path /data/registry as registry-storage
> mounted at /registry
>
> $ oc volume dc/docker-registry
> deploymentconfigs/docker-registry
> NFS nfs.domain.com:/data/registry as registry-storage
> mounted at /registry
>
> Thank you very much for your support. Here are the log entries from the
> docker registry while the pruning process was
> taking place:
>
> time="2016-04-22T06:45:38.017050821Z" level=info msg="blobHandler:
> ignoring driver.PathNotFoundError error:
> filesystem: Path not found:
> /docker/registry/v2/blobs/sha256/a8/a844c88431c581881114773680304d177d7b8fa983bfc4299b0b5b15ad68b71b"
>
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=94f2e04e-1dd4-4620-ad05-4c3a7c8fb33e
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:a844c88431c581881114773680304d177d7b8fa983bfc4299b0b5b15ad68b71b"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:a844c88431c581881114773680304d177d7b8fa983bfc4299b0b5b15ad68b71b"
>
> time="2016-04-22T06:45:38.017151041Z" level=info msg="response completed"
> go.version=go1.4.2
> http.request.host="172.30.22.72:5000"
> http.request.id=94f2e04e-1dd4-4620-ad05-4c3a7c8fb33e
> http.request.method=DELETE
> http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:a844c88431c581881114773680304d177d7b8fa983bfc4299b0b5b15ad68b71b"
>
> http.request.useragent="Go 1.1 package http"
> http.response.duration=106.234291ms http.response.status=204
> http.response.written=0 instance.id=d76dd935-0640-455d-a710-3829ac737bd7
> 10.1.3.1 - - [22/Apr/2016:06:45:37 +0000] "DELETE
> /admin/blobs/sha256:a844c88431c581881114773680304d177d7b8fa983bfc4299b0b5b15ad68b71b
> HTTP/1.1" 204 0 "" "Go 1.1
> package http"
> time="2016-04-22T06:45:38.020975623Z" level=debug msg="authorizing
> request" go.version=go1.4.2
> http.request.host="172.30.22.72:5000"
> http.request.id=bfc2158d-f202-497b-a485-cbb62c6cfe5b
> http.request.method=DELETE
> http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> time="2016-04-22T06:45:38.021100109Z" level=debug msg="Origin auth:
> checking for access to admin::prune"
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=bfc2158d-f202-497b-a485-cbb62c6cfe5b
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> time="2016-04-22T06:45:38.02363199Z" level=info msg="Deleting blob path:
> /docker/registry/v2/blobs/sha256/44/449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=bfc2158d-f202-497b-a485-cbb62c6cfe5b
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> time="2016-04-22T06:45:38.024332007Z" level=debug
> msg="filesystem.Delete(\"/docker/registry/v2/blobs/sha256/44/449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e\")"
>
>
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=bfc2158d-f202-497b-a485-cbb62c6cfe5b
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
> trace.duration=577.546µs
> trace.file="/go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/docker/distribution/registry/storage/driver/base/base.go"
> trace.func="github.com/docker/distribution/registry/storage/driver/base.(*Base).Delete"
> trace.id=3abc527e-1dc4-49b3-8fbd-616024235c15 trace.line=181
> vars.digest="sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
> time="2016-04-22T06:45:38.024425679Z" level=info msg="blobHandler:
> ignoring driver.PathNotFoundError error:
> filesystem: Path not found:
> /docker/registry/v2/blobs/sha256/44/449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=bfc2158d-f202-497b-a485-cbb62c6cfe5b
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> 10.1.3.1 - - [22/Apr/2016:06:45:38 +0000] "DELETE
> /admin/blobs/sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e
> HTTP/1.1" 204 0 "" "Go 1.1
> package http"
> time="2016-04-22T06:45:38.024523934Z" level=info msg="response completed"
> go.version=go1.4.2
> http.request.host="172.30.22.72:5000"
> http.request.id=bfc2158d-f202-497b-a485-cbb62c6cfe5b
> http.request.method=DELETE
> http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e"
>
> http.request.useragent="Go 1.1 package http"
> http.response.duration=5.481672ms http.response.status=204
> http.response.written=0 instance.id=d76dd935-0640-455d-a710-3829ac737bd7
> time="2016-04-22T06:45:38.027541503Z" level=debug msg="authorizing
> request" go.version=go1.4.2
> http.request.host="172.30.22.72:5000"
> http.request.id=9dba4180-b492-4873-a46b-cf0b05264746
> http.request.method=DELETE
> http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> time="2016-04-22T06:45:38.02764781Z" level=debug msg="Origin auth:
> checking for access to admin::prune"
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=9dba4180-b492-4873-a46b-cf0b05264746
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> time="2016-04-22T06:45:38.038036629Z" level=info msg="Deleting blob path:
> /docker/registry/v2/blobs/sha256/26/26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=9dba4180-b492-4873-a46b-cf0b05264746
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> time="2016-04-22T06:45:38.038590916Z" level=debug
> msg="filesystem.Delete(\"/docker/registry/v2/blobs/sha256/26/26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1\")"
>
>
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=9dba4180-b492-4873-a46b-cf0b05264746
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
> trace.duration=469.742µs
> trace.file="/go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/docker/distribution/registry/storage/driver/base/base.go"
> trace.func="github.com/docker/distribution/registry/storage/driver/base.(*Base).Delete"
> trace.id=e7a73988-e5b9-47f6-9a30-df5c3f4920a1 trace.line=181
> vars.digest="sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
> time="2016-04-22T06:45:38.038668137Z" level=info msg="blobHandler:
> ignoring driver.PathNotFoundError error:
> filesystem: Path not found:
> /docker/registry/v2/blobs/sha256/26/26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=9dba4180-b492-4873-a46b-cf0b05264746
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> time="2016-04-22T06:45:38.038737605Z" level=info msg="response completed"
> go.version=go1.4.2
> http.request.host="172.30.22.72:5000"
> http.request.id=9dba4180-b492-4873-a46b-cf0b05264746
> http.request.method=DELETE
> http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1"
>
> http.request.useragent="Go 1.1 package http"
> http.response.duration=13.165897ms http.response.status=204
> http.response.written=0 instance.id=d76dd935-0640-455d-a710-3829ac737bd7
> 10.1.3.1 - - [22/Apr/2016:06:45:38 +0000] "DELETE
> /admin/blobs/sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1
> HTTP/1.1" 204 0 "" "Go 1.1
> package http"
> time="2016-04-22T06:45:38.042240219Z" level=debug msg="authorizing
> request" go.version=go1.4.2
> http.request.host="172.30.22.72:5000"
> http.request.id=78eea48b-7549-4a0d-a1d0-369481bedb47
> http.request.method=DELETE
> http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:96f46a6fac98e5cba293ab788795b12e0c7abb8a7eda0834cb5b2d78620a96e6"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:96f46a6fac98e5cba293ab788795b12e0c7abb8a7eda0834cb5b2d78620a96e6"
>
> time="2016-04-22T06:45:38.042369591Z" level=debug msg="Origin auth:
> checking for access to admin::prune"
> go.version=go1.4.2 http.request.host="172.30.22.72:5000"
> http.request.id=78eea48b-7549-4a0d-a1d0-369481bedb47
>
> http.request.method=DELETE http.request.remoteaddr="10.1.3.1:42320"
> http.request.uri="/admin/blobs/sha256:96f46a6fac98e5cba293ab788795b12e0c7abb8a7eda0834cb5b2d78620a96e6"
>
> http.request.useragent="Go 1.1 package http"
> instance.id=d76dd935-0640-455d-a710-3829ac737bd7
>
> vars.digest="sha256:96f46a6fac98e5cba293ab788795b12e0c7abb8a7eda0834cb5b2d78620a96e6"
>
>
> Regards,
> v
>
> Thank you,
> Michal
>
> On 04/22/2016 09:17 AM, v wrote:
>
>
>
> On 2016-04-21 at 20:24, Andy Goldstein wrote:
>
>
>
> On Thu, Apr 21, 2016 at 2:22 PM, v <[email protected]> wrote:
>
> On 2016-04-21 at 13:49, Andy Goldstein wrote:
>
>
>
> On Thursday, April 21, 2016, v <[email protected]> wrote:
>
>
>
> On 2016-04-21 at 09:44, aleks wrote:
>
> Hi Lorenz
>
> On 21-04-2016 09:01, Lorenz Vanthillo wrote:
>
> Thanks Aleks,
>
> Is this deleting images on your nodes or also on your
> openshift-registry?
>
>
> As far as I have seen, only in the registry, not on the nodes.
> That's the reason why we afterwards execute:
>
> ansible -m shell -a 'docker rmi $(docker images -q)' all
>
> And for example:
>
> oadm prune images --keep-younger-than=60m
>
> Will this only delete images older than 60m which aren't in use?
> Or will this also delete images which are in use (maybe only on the
> node but no longer in the registry)?
>
> Unfortunately this will not delete any images at all, it will only
> delete the references to those images.
> You
> will not get any disk space back with this.
>
>
> This is incorrect. oadm prune images does free up disk space in the
> registry pod's storage.
>
> Hello,
>
> this is interesting. We've tried executing
> oadm prune builds --confirm and then
> oadm prune deployments --confirm and then
> oadm prune images --confirm
> and it never freed up a meaningful amount of disk space. We tried it
> with Origin 1.0.6 and just recently with
> Origin 1.1.4 (our registry is currently 50 GiB in size).
>
> Does that mean that we have encountered a bug?
>
>
> We will need more information to determine if there's a bug or not. Could
> you please provide:
>
> * The command(s) you ran, and output, showing the registry's size before
> pruning
> * the output from running 'oadm prune images' (including the exact
> command line you ran)
> * The command(s) and output showing the registry's size after pruning
>
> Dear Mr. Goldstein,
>
> thank you very much for taking interest in our issues with Origin 1.1.4.
> Here comes the output you requested:
>
> root@master01 ~ # du -s /data/registry/docker/registry/v2/
> 61184824 /data/registry/docker/registry/v2/
>
> root@master01 ~ # oadm prune images --confirm
> Deleting references from image streams to images ...
> STREAM IMAGE TAGS
> test/booking
> sha256:f98a1b3e04483ce2aef8db3fe60f96cff332b5e248f4392f1f8f77683a0bfa72
> latest
> test/booking
> sha256:b2ecf6a32a95921b8789eaf8e6f9ecd5141cf06d3bea0a1b4c803180aae90eee
> latest
> prod/coder
> sha256:e87719b4bf9a8d92e96f78fc60f2cb3ce7040ef1a1ed8ae649faeee414520505
> latest
> test/booking
> sha256:b025722494ca4ef05e74734d647795dcefe94f4b29476c0d882b4488d212fb6d
> latest
> test/booking
> sha256:b7e2daa5732f2ac9da160a32ec4865434986628d352345fc3f492ed4c2554065
> latest
> test/booking
> sha256:2c792289c69dfae91f068bac14a8b3fec10128e3cfcfb0b6037357908458de34
> latest
> test/booking
> sha256:ac06de98d7cb3f16ad7c92cb25aeff0a2a759cfd06d307629bf6ba1a87a0c446
> latest
> test/booking
> sha256:e214b820f295b35f4da2e67e3f94328c5a18d94b55f9b97823a08d3777cc4bf8
> latest
> prod/booking
> sha256:60faf646109cf90ec8d5eb071c5a066346eb3b5291a79772b776c8cb36e6f3f3
> latest
> test/booking
> sha256:616d6ca9965a181da31d4ae5a639a795292e0cefb25eb8aede65ce561745660c
> latest
> prod/offer
> sha256:74daebd81689786cbccf98f1ee1f97641f3321da8c7f8dbeaf673500aa8df0a8
> latest
> test/booking
> sha256:0561a33c88e786b1740ac36cf8a6fb41a22a0b76de5e4efbc4af8b22ef61f19a
> latest
> prod/gate
> sha256:39783ed3166c9db846be6f5ce88af3d81bfbc38db5550407281eea0d609327a2
> latest
>
> Deleting registry layer blobs ...
> BLOB
> sha256:a844c88431c581881114773680304d177d7b8fa983bfc4299b0b5b15ad68b71b
> sha256:449fff85ad5bc6c5f9b3b6dbb0965db3e02b7fc4ba998cbed78a351120c3f68e
> sha256:26429e2d9c67622ec7e86cb02b70d6ba54012d942f8479d5c80e0602ab605eb1
> sha256:96f46a6fac98e5cba293ab788795b12e0c7abb8a7eda0834cb5b2d78620a96e6
> ###
> ### *snip* I cut out 70 lines
> ###
>
> Deleting registry repository manifest data ...
> REPO IMAGE
> test/booking
> sha256:f98a1b3e04483ce2aef8db3fe60f96cff332b5e248f4392f1f8f77683a0bfa72
> test/booking
> sha256:b2ecf6a32a95921b8789eaf8e6f9ecd5141cf06d3bea0a1b4c803180aae90eee
> prod/geocoder
> sha256:e87719b4bf9a8d92e96f78fc60f2cb3ce7040ef1a1ed8ae649faeee414520505
> test/booking
> sha256:b025722494ca4ef05e74734d647795dcefe94f4b29476c0d882b4488d212fb6d
> test/booking
> sha256:b7e2daa5732f2ac9da160a32ec4865434986628d352345fc3f492ed4c2554065
> test/booking
> sha256:2c792289c69dfae91f068bac14a8b3fec10128e3cfcfb0b6037357908458de34
> test/booking
> sha256:ac06de98d7cb3f16ad7c92cb25aeff0a2a759cfd06d307629bf6ba1a87a0c446
> test/booking
> sha256:e214b820f295b35f4da2e67e3f94328c5a18d94b55f9b97823a08d3777cc4bf8
> prod/booking
> sha256:60faf646109cf90ec8d5eb071c5a066346eb3b5291a79772b776c8cb36e6f3f3
> test/booking
> sha256:616d6ca9965a181da31d4ae5a639a795292e0cefb25eb8aede65ce561745660c
> prod/offer
> sha256:74daebd81689786cbccf98f1ee1f97641f3321da8c7f8dbeaf673500aa8df0a8
> test/booking
> sha256:0561a33c88e786b1740ac36cf8a6fb41a22a0b76de5e4efbc4af8b22ef61f19a
> prod/offergate
> sha256:39783ed3166c9db846be6f5ce88af3d81bfbc38db5550407281eea0d609327a2
>
> Deleting images from server ...
> IMAGE
> sha256:fe601337970c8f2c753a31d96832bf88f0a1e1782a4235df07d1b63036e40a2e
> sha256:56ede46935d31bca6553edc43edb73f93583b782eed4cc03a46390d9105461c3
> ###
> ### *snip* I cut out 400 lines
> ###
>
> root@master01 ~ # du -s /data/registry/docker/registry/v2/
> 61184564 /data/registry/docker/registry/v2/
>
> As you can see, the difference in size before/after the pruning is only
> about 260 KiB (du -s reports 1 KiB blocks). We were hoping that our
> upgrade from 1.0.6 to 1.1.4 would fix this problem, but unfortunately
> the issue remains. I'd be glad to provide you with more details or
> debug information at your request.
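>
> If it helps, this is how I would break the usage down further (a sketch
> of the commands only, without output):
>
> du -sh /data/registry/docker/registry/v2/blobs \
>        /data/registry/docker/registry/v2/repositories
> du -sh /data/registry/docker/registry/v2/blobs/sha256/* | sort -h | tail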
>
> Regards,
> v
>
>
> As for your second question:
> If a pod, RC or DC is using the image, it will not get deleted; you can
> read the docs for more details:
>
> https://docs.openshift.com/enterprise/3.1/admin_guide/pruning_resources.html#pruning-images
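>
> A typical dry run from those docs would look something like this (add
> --confirm only once you are happy with the output):
>
> oadm prune images --keep-younger-than=60m --keep-tag-revisions=3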
>
>
>
> Well, due to the fact that I'm not using such an option, you can try it
> on your own; as long as you don't add '--confirm' to the command,
> nothing will be deleted.
>
> Best Regards
> Aleks
>
> PS: Please keep the list on CC, thanks.
>
> To: [email protected]
> Subject: Re: Best way to delete images (local and in registry)
> Date: Thu, 21 Apr 2016 08:42:50 +0200
> From: [email protected]
> CC: [email protected]
>
> Hi Lorenz.
>
> On 20-04-2016 14:33, Lorenz Vanthillo wrote:
> > I'm searching for the best way to delete unused
> docker images in my
> > cluster.
> > Because we're rebuilding images + pushing them to
> the registry.
> >
> > When we perform
> > docker images -q | xargs docker rmi
> >
> > We get:
> > REPOSITORY   TAG      IMAGE ID       CREATED       VIRTUAL SIZE
> > <none>       <none>   0fd6f6a7d8fb   6 days ago    660.1 MB
> > <none>       <none>   cdcb32f9b621   2 weeks ago   743.2 MB
> > <none>       <none>   9df362e36242   2 weeks ago   794 MB
> > <none>       <none>   67de4dbed60e   2 weeks ago   704 MB
> > <none>       <none>   999e0047a070   2 weeks ago   543.6 MB
> >
> > But oc get images gave us:
>
> [snip]
>
> > Is this fine?
> >
> > And what's the best way to delete old images out of
> the registry?
>
> Have you tried this way?
>
>
>
> https://docs.openshift.com/enterprise/3.1/admin_guide/pruning_resources.html#pruning-images
>
>
> Afterwards we have run
>
> docker rmi $(docker images -q)
>
> on every node.
>
> I'm not sure if the last step is still necessary in the current version.
>
>
> Best Regards
> Aleks
>
>
--
Pelle
Research is what I'm doing when I don't know what I'm doing.
- Wernher von Braun
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users