adoroszlai commented on pull request #1142:
URL: https://github.com/apache/hadoop-ozone/pull/1142#issuecomment-659356744


   > Overall it looks good to me, and a really impressive approach. I have a few comments -- none of them are blockers, but I'd like to discuss technical details...
   
   Thanks for taking a look.  I held off merging precisely to have this kind of discussion. ;)
   
   > 1. Can you please help me understand why you removed `-f "${compose_file}"`?
   
   Each `-f` accepts only a single file, so running the same command with a varying number of compose files is easier with the `COMPOSE_FILE` environment variable, which accepts a whole list of files.  Initially I used two separate files (including the one from the `ozone` env), so I needed this fix, but then abandoned that approach.  This part of the change could be extracted to a separate issue if you prefer to keep this one simpler.  (It allows running `ozone/test.sh` with monitoring enabled, so I'd rather not drop it completely; roughly as sketched below.)
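   
   For illustration (the file names here are made up), the two ways of combining compose files look roughly like this:
   
   ```bash
   # Each -f takes exactly one file, so combining files means repeating the flag:
   docker-compose -f docker-compose.yaml -f monitoring.yaml up -d
   
   # With COMPOSE_FILE the file list lives in the environment (colon-separated
   # on Linux/macOS), so every later docker-compose call sees the same set of
   # files without repeating -f:
   export COMPOSE_FILE=docker-compose.yaml:monitoring.yaml
   docker-compose up -d
   docker-compose down
   ```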
   
   > 2. The fixed IP / dedicated network in the docker-compose file seems to be unnecessary in this cluster (IMHO).
   > 4. You create external volume directories, but `/data` is already a volume inside the docker containers. If you use a simple `docker-compose stop` instead of `down`, it can be reused. Did you consider using this approach?
   > 
   > Why do you prefer external volumes? (I found two arguments: easier to debug + easier to execute commands when the cluster is down. But I'm interested whether you had any other motivations...)
   
   After `stop`/`start`, this is what `ozone version` prints:
   
   ```
                     //////////////
                  ////////////////////
               ////////     ////////////////
              //////      ////////////////
             /////      ////////////////  /
            /////            ////////   ///
            ////           ////////    /////
           /////         ////////////////
           /////       ////////////////   //
            ////     ///////////////   /////
            /////  ///////////////     ////
             /////       //////      /////
              //////   //////       /////
                ///////////     ////////
                  //////  ////////////
                  ///   //////////
                 /    0.5.0-beta(Crater Lake)
   
   Source code repository [email protected]:apache/hadoop-ozone.git -r 
9b4f8fd49fa15946994bccc6c6ac50a560cfb0ea
   Compiled by dchitlangia on 2020-03-16T00:54Z
   Compiled with protoc 2.5.0
   From source with checksum 4cde4c7a7aaa250bfbaf58220cb8e2c
   
   Using HDDS 0.5.0-beta
   Source code repository [email protected]:apache/hadoop-ozone.git -r 
9b4f8fd49fa15946994bccc6c6ac50a560cfb0ea
   Compiled by dchitlangia on 2020-03-16T00:53Z
   Compiled with protoc 2.5.0
   From source with checksum 9df32efd56424ab869a0acd0124e4bf5
   ```
   
   So `docker-compose down`/`up` is needed: changes to the compose file (Docker image, etc.) are not picked up by `stop`/`start`, and the upgrade test needs different images before and after the upgrade.
   
   That's the reason for both the external volumes and the network settings.  I had started out without the network/IP settings, but the containers did not always get the same address after `down`/`up`, nor would they reuse their volumes.
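   
   A minimal sketch of the idea (the service name, subnet, and paths are illustrative, not the exact values from this PR):
   
   ```bash
   # Pre-create a per-datanode directory on the host; `docker-compose down`
   # discards the containers' anonymous /data volumes, but a bind mount
   # survives.
   mkdir -p data/dn1
   
   # Pin the address and bind-mount the data dir, so the recreated container
   # comes back with the same identity and state:
   cat > docker-compose.override.yaml <<'EOF'
   services:
     dn1:
       networks:
         net:
           ipv4_address: 10.9.0.11
       volumes:
         - ./data/dn1:/data
   networks:
     net:
       ipam:
         config:
           - subnet: 10.9.0.0/16
   EOF
   ```
   
   With that in place, `docker-compose down` followed by `up -d` with a different image picks up the new image while keeping both the address and the on-disk state.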
   
   > 3. It seems to be a big restriction that we can't start multiple datanodes on the same file system without configuring the datanode path. This is the reason why you need the dn1..dn3 directories. I am wondering if we can provide a generic solution for this. Maybe we could support `${env...}` notation when setting the datanode directory?
   
   That would be nice; I think we can explore it later.
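   
   If it helps the discussion, here is roughly what I imagine (a sketch only: `hdds.datanode.dir` is the real config key, but the `${...}` expansion is the proposed feature, `DATANODE_ID` is a made-up variable, and I'm assuming the `docker-config` convention used by the compose clusters):
   
   ```bash
   # Proposed, not currently supported: let the configured value reference an
   # environment variable, so all datanode containers can share one config
   # while each writes to its own directory.
   cat >> docker-config <<'EOF'
   OZONE-SITE.XML_hdds.datanode.dir=/data/hdds/${DATANODE_ID}
   EOF
   # Each container would then just export a distinct DATANODE_ID.
   ```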

