[
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898342#comment-16898342
]
Eric Badger commented on YARN-9564:
-----------------------------------
{noformat}
[ebadger@foo bin]$ ./docker-to-squash.py -h
usage: docker-to-squash.py [-h] [--working-dir WORKING_DIR]
                           [--skopeo-format SKOPEO_FORMAT]
                           [--pull-format PULL_FORMAT] [-l LOG_LEVEL]
                           [--hdfs-root HDFS_ROOT]
                           [--image-tag-to-hash IMAGE_TAG_TO_HASH]
                           [-r REPLICATION] [--hadoop-prefix HADOOP_PREFIX]
                           [-f] [--check-magic-file] [--magic-file MAGIC_FILE]
                           {pull-build-push-update,pull-build,push-update,remove-image,remove-tag,add-tag,copy-update,query-tag,list-tags}
                           ...

positional arguments:
  {pull-build-push-update,pull-build,push-update,remove-image,remove-tag,add-tag,copy-update,query-tag,list-tags}
                        sub help
    pull-build-push-update
                        Pull an image, build its squashfs layers, push it to
                        hdfs, and atomically update the image-tag-to-hash file
    pull-build          Pull an image and build its squashfs layers
    push-update         Push the squashfs layers to hdfs and update the image-
                        tag-to-hash file
    remove-image        Remove an image (manifest, config, layers) from hdfs
                        based on its tag or manifest hash
    remove-tag          Remove an image to tag mapping in the image-tag-to-
                        hash file
    add-tag             Add an image to tag mapping in the image-tag-to-hash
                        file
    copy-update         Copy an image from hdfs in one cluster to another and
                        then update the image-tag-to-hash file
    query-tag           Get the manifest, config, and layers associated with a
                        tag
    list-tags           List all tags in image-tag-to-hash file

optional arguments:
  -h, --help            show this help message and exit
  --working-dir WORKING_DIR
                        Name of working directory
  --skopeo-format SKOPEO_FORMAT
                        Output format for skopeo copy
  --pull-format PULL_FORMAT
                        Pull format for skopeo
  -l LOG_LEVEL, --log LOG_LEVEL
                        Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
  --hdfs-root HDFS_ROOT
                        The root directory in HDFS for all of the squashfs
                        images
  --image-tag-to-hash IMAGE_TAG_TO_HASH
                        image-tag-to-hash filepath or filename in hdfs
  -r REPLICATION, --replication REPLICATION
                        Replication factor for all files uploaded to HDFS
  --hadoop-prefix HADOOP_PREFIX
                        hadoop_prefix value for environment
  -f, --force           Force overwrites in HDFS
  --check-magic-file    Check for a specific magic file in the image before
                        uploading
  --magic-file MAGIC_FILE
                        The magic file to check for in the image
{noformat}
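The "atomically update the image-tag-to-hash file" step in pull-build-push-update can be sketched as the usual write-to-temp-then-rename pattern. This is a minimal local-filesystem sketch only, assuming a plain-text file of `tag,hash` lines (the delimiter and helper name are hypothetical); the real tool performs the equivalent against HDFS.

```python
import os
import tempfile

def update_image_tag_to_hash(path, tag, manifest_hash):
    """Atomically add or replace a tag -> hash mapping in a text file.

    Hypothetical sketch: each line is "tag,hash". The new contents are
    written to a temporary file in the same directory, then os.replace()
    swaps it in, so readers never observe a half-written file.
    """
    entries = {}
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line:
                    t, h = line.split(",", 1)
                    entries[t] = h
    entries[tag] = manifest_hash

    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        for t, h in sorted(entries.items()):
            f.write("%s,%s\n" % (t, h))
    os.replace(tmp, path)  # atomic rename on POSIX filesystems
```

The rename step is what makes the update atomic: a reader opening the file sees either the old mapping or the new one, never a partial write.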
{noformat:title=Building and pushing a new image to HDFS}
./docker-to-squash.py --log=DEBUG pull-build-push-update <docker image uri>,<docker image tag>
{noformat}
{noformat:title=Example}
./docker-to-squash.py --log=DEBUG pull-build-push-update registry.hub.docker.com/library/busybox,busybox:latest
{noformat}
Note that the busybox image won't be enough to run runC containers, since it
lacks Java, the required native libraries, and the other Hadoop dependencies
needed to start a container. So you should point this at a Docker image that
you can already run with DockerLinuxContainerRuntime.
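Since each layer of the Docker image becomes its own squashfs file, the conversion is driven by the layer digests listed in the image manifest. A minimal sketch of that enumeration, assuming a Docker image manifest v2-style dict; the `.sqsh` suffix and digest-based naming are illustrative assumptions, not the tool's actual layout under --hdfs-root:

```python
def layer_squashfs_names(manifest):
    """Map each layer digest in a v2-style manifest to a squashfs filename.

    The ".sqsh" suffix and hex-digest naming are assumptions for
    illustration only.
    """
    names = []
    for layer in manifest.get("layers", []):
        digest = layer["digest"]            # e.g. "sha256:abc123..."
        algo, hexdigest = digest.split(":", 1)
        names.append("%s.sqsh" % hexdigest)
    return names

# Example manifest fragment (structure per the Docker image manifest v2 schema)
manifest = {
    "schemaVersion": 2,
    "layers": [
        {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 760770, "digest": "sha256:aaa111"},
        {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 120, "digest": "sha256:bbb222"},
    ],
}
```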
> Create docker-to-squash tool for image conversion
> -------------------------------------------------
>
> Key: YARN-9564
> URL: https://issues.apache.org/jira/browse/YARN-9564
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Eric Badger
> Assignee: Eric Badger
> Priority: Major
> Attachments: YARN-9564.001.patch
>
>
> The new runc runtime uses docker images that are converted into multiple
> squashfs images. Each layer of the docker image will get its own squashfs
> image. We need a tool to help automate the creation of these squashfs images
> when all we have is a docker image.
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]