[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833006#comment-16833006
 ] 

Elek, Marton commented on HDDS-1458:
------------------------------------

Thank you very much, [~eyang], for repeating the same arguments in more detail. 
Unfortunately, I don't feel that my concerns have been addressed. Let me quickly 
summarize the pro and con arguments discussed until now, to make it clear why I 
think this. (And please correct me if I am wrong.)

First of all, this patch is not about "Creating a maven profile to run fault 
injection tests". It's mainly about modifying the build process to use inline 
docker image creation instead of the current solution.

Pros:

 1. The compose folder of the distribution can be copied to another machine 
_without_ copying the distribution folder.
 2. Another argument is that this approach is more aligned with the original 
spirit/intention of containerization.

Cons:

 1. The builds are slower; we would have a huge number of unnecessary IO 
operations.
 2. It's harder to test patches locally. Reproducibility is decreased (a local 
container is used instead of the final build).
 3. It's harder to test a release package from the smoketest directory.
 4. Security fixes can be applied later to the docker containers.
 5. It conflicts when more than one build is executed on the same machine 
(docker images are shared but volume mounts are separated). (Even this one 
alone is a blocker for me.)
 6. Some tests can't be executed from the final tarball. (I would like to 
execute tests from the released binary, as discussed earlier in the original 
jira.)
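
To make con (5) concrete, consider the following hypothetical sketch (the image 
name is an assumption for illustration, not taken from the patch):

{code}
# Build #1 creates an image with a fixed, machine-wide name:
docker build -t apache/ozone:dev .

# Build #2, running in parallel on the same machine, executes the
# same step and silently overwrites the shared tag:
docker build -t apache/ozone:dev .

# Build #1's compose cluster now starts containers from build #2's
# bits; only the volume mounts stay separated:
docker-compose up -d
{code}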

I wrote this down to help explain why I can't see any benefit in doing it the 
other way, and why I feel it's a dangerous step. And let me clarify again: I am 
not against executing the fault injection tests as part of the build. I very 
much like your idea. I am against replacing the current way of using the 
smoketest/compose files without achieving consensus.

And now let me answer your points in more detail:
 
bq. There was no one disputed against using dist profile and I filed YARN-9523 
to correct that mistake

I have a different view. I have concerns about using docker in this way (even 
in the dist profile). It could be my fault, but I don't feel that my questions 
were addressed on the discussion mailing list. (See my last mail, which got no 
answers: 
https://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201903.mbox/%3C5bfeb864-3f26-1ccc-3300-2680e1b94f34%40apache.org%3E)

bq. Dev can be tested with tarball. There is no need to involve docker until 
the finished goods are ready for transport

Thank you very much for writing this down, because it helped me to understand 
the fundamental difference between our views. I think it's not true at all. 
During Ozone development we use docker-compose to start pseudo clusters every 
hour. docker-compose based clusters are used to test patches, test local 
changes, etc. I think this is the most important point: this is more a dev tool, 
and a tool to try out Ozone from the dist folder, than a production tool.

May I ask you to try out this kind of development workflow to understand my 
view? Just try to test all the reviewed patches in pseudo clusters.
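
For reference, the workflow looks roughly like this (the profile name and 
compose directory below are from my local setup and may differ slightly):

{code}
# Build the distribution once:
mvn clean install -DskipTests -Pdist

# Start a pseudo cluster directly from the dist output:
cd hadoop-ozone/dist/target/ozone-*/compose/ozone
docker-compose up -d

# Exercise the patched bits, then tear the cluster down:
docker-compose exec scm ozone version
docker-compose down
{code}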

bq. Dev can be tested with tarball.

Let me ask again, because I may have misunderstood your points. What is your 
suggested way to test a security patch in a pseudo-cluster?

bq. Docker image is used as a configuration file transport mechanism. I think 
it's convoluted process. There are more efficient way to transport config files 
IMHO. Those instructions requires to run docker rm command to destroy the 
instances as well. 

I understand your concerns, but why is it better to use scp to copy 
configuration from one location to another? The docker image is NOT created 
here to be used as a configuration file transport mechanism. It's an optional 
feature to provide a _default_ set of configuration. After the first run you 
can copy the docker-compose file exactly the same way as you wished earlier.

bq. Wouldn't it be better that we just give UX team yaml file, and let their 
docker-compose fetch self contained docker images from dockerhub without 
downloading the tarball?  

I am not sure I understand. What is the UX team? Maybe I used the wrong words 
(sorry for that). When I wrote UX, I meant the developer user experience.

bq. It seems that we are using the tools in most inappropriate way of it's 
design that created more problems.

Could you please explain the "created problems" in more detail? To be honest, 
for me everything just works. I recommend defining the problems first and then 
trying to find answers for them together. For example, if the problem is 
running the blockade tests during the build, it can be addressed in a very 
easy way by executing the blockade tests after the distribution assembly in 
the hadoop-ozone/dist project.
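
As a sketch, such a step could be as simple as the following (the paths and the 
pytest invocation are assumptions about the blockade test layout):

{code}
# Assemble the distribution first:
mvn clean install -DskipTests

# Then run the blockade tests against the assembled output:
cd hadoop-ozone/dist/target/ozone-*/tests/blockade
python -m pytest -s .
{code}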

bq. I am quite puzzle on the route chosen to mount binaries, it is like 
ordering a container for moving between houses and put a key in container while 
leaving all furniture outside of container. It works, but completely 
impractical.

Sorry, I don't understand how it can be similar; can you please elaborate? I 
think there is more than one way to use a docker image:

 1. One option is to create images which contain all the required bits and can 
be started anywhere.
 2. The other option is to use docker only to provide a pre-defined, stable 
__environment__.

We use option (2) here, and I strongly believe that this is not a wrong usage 
of docker. (Especially as it's hard to define what is theoretically wrong.)
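
The difference is visible in the compose file itself. With option (2), a 
generic runner image provides only the environment and the freshly built bits 
arrive via a volume mount (the image name and paths below are illustrative, 
not the exact ones we use):

{code}
services:
  datanode:
    image: apache/ozone-runner:latest   # stable environment only
    volumes:
      - ../..:/opt/hadoop               # bits come from the dist folder
    command: ["ozone", "datanode"]
{code}

With option (1), the same service would reference a self-contained image, and 
the volumes entry would disappear.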

But I have no problem with adding __additional__ support for option (1). Just 
please don't replace the existing and working toolset.

In fact, I have already created a profile which creates the docker image from 
maven. Please check the k8s-dev profile in hadoop-ozone/dist.

> Create a maven profile to run fault injection tests
> ---------------------------------------------------
>
>                 Key: HDDS-1458
>                 URL: https://issues.apache.org/jira/browse/HDDS-1458
>             Project: Hadoop Distributed Data Store
>          Issue Type: Test
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Major
>         Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have ability to start docker compose and exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  This is 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeout.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
