dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r340250670
##########
File path: CONTRIBUTION.md
##########
@@ -0,0 +1,174 @@

Apache Hadoop Ozone Contribution guideline
===

Ozone is a part of the Apache Hadoop project. The bug tracking system for Ozone is under the [Apache Jira project named HDDS](https://issues.apache.org/jira/projects/HDDS/).

If you are familiar with contributing to Apache Hadoop, then you already know everything you need to know to start filing Ozone bugs and submitting patches.

If you have never contributed to Apache Hadoop before, then you may find it useful to read [How To Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).

This document summarizes the contribution process and describes the differences.

## What can I contribute?

We welcome contributions of:

 * **Code**. File a bug and submit a patch, or pick up any one of the unassigned Jiras.
   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
   * [All open and unassigned Ozone jiras](https://s.apache.org/OzoneUnassignedJiras)
 * **Documentation Improvements**: You can submit improvements to either:
   * Ozone website. Instructions are here: [Modifying the Ozone Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
   * Developer docs. These are markdown files [checked into the Apache Hadoop Source tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
   * The [wiki pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide): Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write access to the wiki.
 * **Testing**: We always need help to improve our testing
   * Unit Tests (JUnit / Java)
   * Acceptance Tests (docker + robot framework)
   * Blockade tests (python + blockade)
   * Performance: We have multiple types of load generator / benchmark tools (`ozone freon`, `ozone genesis`) which can be used to test the cluster and report problems.
 * **Bug reports** pointing out broken functionality, docs, or suggestions for improvements are always welcome!

## Who To Contact

If you have any questions, please don't hesitate to contact us:

 * in **mail**: use hdfs-dev@hadoop.apache.org.
 * in **chat**: You can find the #ozone channel at the ASF slack. Invite link is [here](http://s.apache.org/slack-invite)
 * **meeting**: [We have weekly meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls) which are open to anybody. Feel free to join and ask any questions.

## Building from the source code

### Requirements

Requirements to compile the code:

* Unix System
* JDK 1.8
* Maven 3.5 or later
* Protocol Buffers 2.5
* Internet connection for first build (to fetch all Maven and Hadoop dependencies)

Additional requirements to run your first pseudo cluster:

* docker
* docker-compose

Additional requirements to execute different types of tests:

* [Robot framework](https://robotframework.org/) (for executing acceptance tests)
* docker-compose (to start the pseudo cluster, also used for blockade and acceptance tests)
* [blockade](https://pypi.org/project/blockade/) to execute network fault-injection testing.

Optional dependencies:

* [hugo](https://gohugo.io/) to include the documentation in the web UI.
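A quick way to verify that the prerequisites are in place is to check the installed versions. This is only a sanity-check sketch; the expected minimums are the ones listed above, and the exact output format differs between platforms:

```
java -version             # expect 1.8
mvn --version             # expect 3.5 or later
protoc --version          # expect libprotoc 2.5.x
docker --version
docker-compose --version
```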
### Build the project

The build is as simple as:

```
mvn clean install -DskipTests
```

And you can start your first cluster:

```
cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
docker-compose up -d --scale datanode=3
```

### Helper scripts

The `hadoop-ozone/dev-support/checks` directory contains helper scripts to build and check your code (including findbugs and checkstyle). Use them if you don't know the exact maven goals / parameters.

These scripts are executed by the CI servers, so it's always good to run them locally before creating a PR.

### Maven build options

 * Use `-DskipShade` to exclude the ozonefs jar file creation from the release. It's much faster, but you can't test the Hadoop Compatible file system.
 * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from the build. It saves about 2 additional minutes.
 * Use `-Pdist` to build a distribution (without this profile you won't have the final tar file).
 * Use `-Pdocker-build` to build a docker image which includes Ozone.
 * Use `-Ddocker.image=repo/name` to define the name of your docker image.
 * Use `-Pdocker-push` to push the created docker image to the docker registry.

## Contribute your modifications

We use github pull requests instead of uploading patches to JIRA. The main contribution workflow is the following:

 1. Fork the `apache/hadoop-ozone` github repository (first time).
 2. Create a new Jira in the HDDS project (eg. HDDS-1234).
 3. Create a local branch for your contribution (eg. `git checkout -b HDDS-1234`).
 4. Create your commits and push your branches to your personal fork.
 5. Create a pull request on the github UI.
    * Please include the Jira link, problem description and testing instructions.
 6. Set the Jira to "Patch Available" state.

## Code convention and tests

We follow the code conventions of the Hadoop project (2 spaces instead of tabs, 80 char line width, ASF licence headers). The code is checked with checkstyle, findbugs and various test frameworks.

Please don't post / commit any code with code violations (unlike the checks in Hadoop, our checks report all existing violations, not only the newly introduced ones. One committed checkstyle issue will be reported in all of the next pull requests.)

### Check your contribution

The easiest way to check your contribution is to use the simplified shell scripts under `hadoop-ozone/dev-support/checks`. The problems will be printed out to the standard output.

For example:
```
hadoop-ozone/dev-support/checks/rat.sh
hadoop-ozone/dev-support/checks/checkstyle.sh
hadoop-ozone/dev-support/checks/findbugs.sh
```

Execution of rat and checkstyle is very fast. Findbugs is slightly slower. Executing unit.sh takes about 30 minutes.

The same scripts are executed by the github PR checker.

It's always good practice (and fast) to test with the related docker-compose based pseudo clusters:

```
cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
./test.sh
```

(To test S3 use `compose/ozones3`, to test security use `compose/ozonesecure`, etc.)

### False positive findbugs violation

If you have __very good__ reasons, you can ignore any Findbugs warning. Your good reason can be persisted with the `@SuppressFBWarnings` annotation.

```java
@SuppressFBWarnings(value="AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION",
    justification="The method is synchronized and this is the only place "+
        "dnsToUuidMap is modified")
private synchronized void addEntryTodnsToUuidMap(
...
```
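As a recap of this chapter, a typical local verification before opening a pull request could look like the following. This is just a sketch combining the helper scripts and the compose based test shown above; adjust it to the parts your change actually touches:

```
# static checks: license headers, code style, findbugs
hadoop-ozone/dev-support/checks/rat.sh
hadoop-ozone/dev-support/checks/checkstyle.sh
hadoop-ozone/dev-support/checks/findbugs.sh

# smoke test with the docker-compose based pseudo cluster
cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
./test.sh
```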
## Using IDE

As Ozone uses Apache Maven, it can be developed from any IDE. As IntelliJ is a common choice, here are some suggestions for using it for Ozone development:

### Run Ozone from IntelliJ

Ozone components depend on the maven classpath. We generate classpath descriptors from the maven pom.xml files to use exactly the same classpath at runtime.

As a result, it's easy to start _all_ the components from the IDE as the right classpath (without provided scope) has already been set.

To start Ozone from IntelliJ:

1. Stop your IDE.
2. Execute the `./hadoop-ozone/dev-support/intellij/install-runconfigs.sh` helper script.
3. Start the IDE.
4. The new runner definitions are available from the Run menu.

### Common problems

Review comment:
    @anuengineer After reading your comment, I checked my local setup and I sense this is definitely related to the checkstyle version. I am using IntelliJ's Checkstyle Plugin 5.24.2 which in turn uses Checkstyle 8.16 and I am still able to add the checkstyle file.
    @elek What version do you have where it says invalid? My IntelliJ says I have 4 pending updates to the plugin, I am holding off on that :)

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org