ericbadger commented on pull request #2513:
URL: https://github.com/apache/hadoop/pull/2513#issuecomment-744750186


   @insideo parallel layer conversion would certainly be helpful. I am somewhat 
worried about some images, though. Generally, Docker images are built with 
fewer layers rather than many. And in the runc implementation, there's actually 
a limit of 37 layers because of how we name the mounts as well as the 4KB limit 
on the arguments to the mount command. So that gives opposing incentives. On 
one hand, you want more, smaller layers to decrease image conversion time. On 
the other hand, you want fewer, larger layers to stay under the layer limit and 
to follow the general conventions around Docker images (e.g. chaining RUN yum 
install && yum install && ... into a single layer).
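   
   For intuition on where a limit like 37 layers comes from, here is a rough 
back-of-the-envelope sketch, not the actual Hadoop/runc code: it assumes the 
layers end up as lowerdir entries in a single overlay mount, a one-page 
(~4096-byte) cap on the mount options string, and a made-up per-layer mount 
path length.

```python
# Hypothetical estimate of how many overlay lowerdir entries fit under an
# assumed one-page (4096 byte) limit on the mount options string.
PAGE_SIZE = 4096                      # assumed cap on the mount options string
FIXED = len("lowerdir=") + len(",upperdir=/run/runc/upper,workdir=/run/runc/work")
LAYER_PATH_LEN = 100                  # assumed length of each layer's mount path

budget = PAGE_SIZE - FIXED
# each additional lowerdir entry costs its path plus a ':' separator
max_layers = budget // (LAYER_PATH_LEN + 1)
print(f"roughly {max_layers} layers fit under the {PAGE_SIZE}-byte limit")
```

   With longer real mount paths the budget shrinks further, which is roughly 
how you land on a hard cap in the high 30s.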
   
   This is especially true for anybody who starts building their images with 
Buildah or starts taking advantage of the newer Docker feature for defining 
their own layer points; they would likely be inclined to create fewer layers 
rather than more.
   
   When you say the tool is streaming, what exactly do you mean? I asked you 
this before and I thought you said that it would start converting the layers as 
they came in instead of waiting for them to be fully downloaded. But looking at 
the log, it seems like there is a download stage, a conversion stage, and then 
an upload stage, and that those stages run sequentially.
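   
   To illustrate the distinction, here is a minimal sketch of what I would 
consider "streaming" (not the tool's actual code; download_layer, 
convert_layer, and upload_layer are hypothetical stubs): each layer flows 
through download -> convert -> upload on its own, so conversion of early 
layers overlaps with the download of later ones, instead of three sequential 
whole-image stages.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical stand-ins for the real registry/squashfs steps, stubbed out so
# the sketch runs on its own.
def download_layer(layer_id):
    time.sleep(0.1)                      # pretend to pull the tar.gz layer
    return f"blob-{layer_id}"

def convert_layer(blob):
    time.sleep(0.1)                      # pretend to run mksquashfs
    return f"squashfs-{blob}"

def upload_layer(layer_id, squashed):
    time.sleep(0.05)                     # pretend to push the converted layer

def process_layer(layer_id):
    # one layer runs the whole pipeline by itself
    blob = download_layer(layer_id)
    squashed = convert_layer(blob)
    upload_layer(layer_id, squashed)
    return layer_id

def convert_image(layer_ids, workers=4):
    # layers run the pipeline independently, so conversion of early layers
    # overlaps with the download of later ones
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(process_layer, lid) for lid in layer_ids]
        for fut in as_completed(futures):
            print(f"layer {fut.result()} done")

if __name__ == "__main__":
    convert_image(range(8))
```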
   
   Also, I just realized that I am using squashfs-tools 4.3, which doesn't have 
reproducible builds enabled. So it's a slightly unfair comparison, since 4.4 
slows things down by removing some (all?) of the multithreading in mksquashfs. 
I will retest with squashfs-tools 4.4 with reproducible builds enabled.

