alexandru-bagu commented on issue #5697:
URL: https://github.com/apache/cloudstack/issues/5697#issuecomment-1035531550


   > The optimisation you mention could be done, if you wish. But I would 
definitely opt for opt-out (maybe to be reversed by a global setting). It 
sounds strange to me that after downloading, the network would still be fully 
occupied by the status query that checks the status of the install. That 
should be minimal.
   
   I doubt it's the status check that uses the network. My thinking was that, 
since shared storage is required for templates (afaik), network bandwidth has 
to be used whenever the SSVM performs an operation on them. Meaning:
   1. when downloading the template, traffic should look like this: [public net] 
-download-> [ssvm] -save-> [storage] (the SSVM downloads from the public net and 
writes to storage)
   2. when the download is complete and the template is being extracted, traffic 
is probably: [storage] -read-> [ssvm] -process-> [storage] (the SSVM reads from 
storage and writes back to storage)
   3. when the hash is being computed, traffic is probably: [storage] -read-> 
[ssvm], and once the SSVM finishes reading and computing the hash it pushes the 
result to CloudStack
   
   If the hash is computed separately from the download, then additional 
bandwidth has to be used just to read the file back and compute it.
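   To illustrate the point, here is a minimal sketch of hashing while streaming 
(this is not CloudStack's actual code; the URL, path and chunk size are made-up 
placeholders). The digest is updated on each chunk as it arrives, so the 
template is written to storage once and never re-read just to compute the 
checksum:

   ```python
   import hashlib
   import urllib.request

   # Hypothetical sketch: hash the template while it streams from the public
   # network to secondary storage, avoiding a second read pass later.
   def download_with_checksum(url, dest_path, chunk_size=8 * 1024 * 1024):
       digest = hashlib.sha256()
       with urllib.request.urlopen(url) as resp, open(dest_path, "wb") as out:
           while True:
               chunk = resp.read(chunk_size)
               if not chunk:
                   break
               digest.update(chunk)   # hash the bytes already held in memory
               out.write(chunk)       # single write pass to storage
       return digest.hexdigest()

   # checksum = download_with_checksum("http://example.org/template.qcow2",
   #                                   "/mnt/secondary/template.qcow2")
   ```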
   Additionally, I hope that when the template is not compressed, the file is 
simply renamed into place rather than being copied under the proper name with 
the original removed afterwards.
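   For illustration, a minimal sketch of that distinction (the paths are made 
up, and this is not the SSVM's actual code): on the same filesystem a rename is 
a metadata-only operation, while copy + delete rewrites every byte of a 
multi-terabyte file.

   ```python
   import os
   import shutil

   # Hypothetical paths on secondary storage.
   src = "/mnt/secondary/tmp/download.tmp"
   dst = "/mnt/secondary/template/201/abc.qcow2"

   try:
       os.rename(src, dst)      # cheap: no data is moved within the same mount
   except OSError:
       shutil.move(src, dst)    # cross-mount fallback: copies then deletes
   ```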
   
   For more context, we are currently looking for a way to import a template of 
about 4 TB. Importing such a template alone would take a long time even over a 
10 Gbps connection. Having the system spend additional bandwidth to compute a 
hash that is never validated again afterwards is not useful.
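
   As a rough, idealized estimate of the scale involved (assuming the link runs 
at full line rate with no protocol overhead):

   ```python
   # Back-of-envelope transfer time for the 4 TB example above.
   size_bits = 4 * 10**12 * 8      # 4 TB expressed in bits
   link_bps = 10 * 10**9           # 10 Gbps link
   seconds = size_bits / link_bps  # ~3200 s, roughly 53 minutes per pass
   print(seconds / 60)
   ```

   Every extra pass over the template (for example, re-reading it just to hash 
it) adds that much time and traffic again.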


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
