ad-m opened a new issue #18127:
URL: https://github.com/apache/superset/issues/18127


   **Is your feature request related to a problem? Please describe.**
   
   I would like to discuss the testing strategy for the Apache Superset Helm chart, to improve the developer experience and avoid issues like #17920.
   
   I think DevEx improvements in this area could increase development efficiency, which would allow us to address [the numerous issues](https://github.com/apache/superset/issues?q=is%3Aissue+is%3Aopen+helm) that exist for the Helm chart. That, in turn, would give us the confidence needed to publish the chart on [Artifact Hub](https://artifacthub.io/packages/search?ts_query_web=superset&sort=relevance&page=1) as an official artifact.
   
   **Describe the solution you'd like**
   
   I see several possible solutions in this area. I am afraid I do not have enough experience with the project to see all the required aspects, so I would greatly appreciate comments from experienced people, both users and developers. Comments and additional requirements are welcome!
   
   First of all, it would be great to provide a `values.schema.json` file (https://helm.sh/docs/topics/charts/#schema-files), i.e. a JSON Schema for the `values.yaml` file. I already have a draft in this area. This would improve the developer experience, but above all it would prevent end users from installing the chart with a wrongly formatted or mis-indented `values.yaml` file.
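   As a rough illustration only (not the draft mentioned above), the sketch below shows the kind of constraints such a schema could express. The property names are assumptions about the chart's `values.yaml` layout, and the `jsonschema` Python package is used here just to try the schema outside of Helm; in the chart itself the same schema would live in `values.schema.json` and Helm would apply it automatically on `helm install`/`helm upgrade`.

```python
# Hypothetical sketch of constraints a values.schema.json could enforce,
# exercised with the `jsonschema` package for illustration only.
from jsonschema import validate, ValidationError

VALUES_SCHEMA = {
    "$schema": "https://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {
        "replicaCount": {"type": "integer", "minimum": 1},
        "image": {
            "type": "object",
            "properties": {
                "repository": {"type": "string"},
                "tag": {"type": "string"},
            },
            "required": ["repository"],
        },
    },
}

# A deliberately broken values fragment: replicaCount given as a string.
bad_values = {"replicaCount": "two", "image": {"repository": "apache/superset"}}

try:
    validate(instance=bad_values, schema=VALUES_SCHEMA)
except ValidationError as err:
    print(f"values.yaml would be rejected: {err.message}")
```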
   
   Secondly, I am thinking about providing unit tests that render the Helm chart in order to:
   
   * verify the generated manifests against the expected Kubernetes schema (I think this would avoid situations like #17920)
   * use `pytest` to render the Helm manifests and serve as the test framework, e.g. Apache Airflow uses this approach (a rough sketch follows after this list).
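   A minimal sketch of what such a `pytest` test could look like, assuming `helm` is available on the PATH and that `helm/superset` is the chart path in the repository (adjust if it differs). It simply renders the chart with `helm template` and asserts some basic structure on every manifest:

```python
# Sketch of a pytest-based rendering test, in the spirit of Apache Airflow's
# chart tests. Assumes `helm` is on PATH and the chart lives at helm/superset.
import subprocess
import yaml

CHART_PATH = "helm/superset"  # assumption: adjust to the actual chart location


def render_chart(extra_args=()):
    """Render the chart with `helm template` and return the parsed manifests."""
    result = subprocess.run(
        ["helm", "template", "superset", CHART_PATH, *extra_args],
        capture_output=True,
        text=True,
        check=True,
    )
    # Drop empty documents produced by templates that render to nothing.
    return [doc for doc in yaml.safe_load_all(result.stdout) if doc]


def test_every_manifest_has_kind_and_api_version():
    for manifest in render_chart():
        assert "kind" in manifest
        assert "apiVersion" in manifest
```

   Rendering via `helm template` keeps such tests fast (no cluster needed), and each test can pass different `--set`/`--values` arguments to cover specific configuration scenarios.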
   
   **Describe alternatives you've considered**
   
   We might also use kubeval to validate the manifest schemas, but that is yet another tool when we already have a few testing frameworks in place. A large number of different frameworks raises the entry barrier and lowers DevEx by forcing constant context switching.
   
   We might also use conftest, but again, that is one more testing framework which does not bring much more value than `pytest`.
   
   We might also run integration tests on CI, e.g. in minikube or, as a test environment, on AWS. This could be a great topic on its own, but such tests are long-running and slow, and therefore cover a limited number of scenarios (although they provide very realistic validation of correctness). I think we should start with something smaller.
   
   **Additional context**
   
   @wiktor2200 @nytai @mvoitko might be interested in this area, as they are involved in the development of our Helm chart.
   @mik-laj @potiuk may want to share their thoughts based on their experience with Apache Airflow (another Apache Software Foundation project).

