lenoxzhao opened a new pull request, #3952:
URL: https://github.com/apache/incubator-streampark/pull/3952
<!--
Thank you for contributing to StreamPark! Please make sure that your code
changes
are covered with tests. And in case of new features or big changes
remember to adjust the documentation.
## Contribution Checklist
- If this is your first time, please read our contributor guidelines:
[Submit Code](https://streampark.apache.org/community/submit_guide/submit_code).
- Make sure that the pull request corresponds to a [GITHUB
issue](https://github.com/apache/incubator-streampark/issues).
- Name the pull request in the form "[Feature] Title of the pull request",
where *Feature* can be replaced by `Hotfix`, `Bug`, etc.
- Fill out the template below to describe the changes contributed by the
pull request. That will give reviewers the context they need to do the review.
- If the PR is unfinished, add `[WIP]` in your PR title, e.g.,
`[WIP][Feature] Title of the pull request`.
-->
## What changes were proposed in this pull request
<!--(For example: This pull request proposed to add checkstyle plugin).-->
## Brief change log
1. Redesigned the Spark application entity class `SparkApplication` and
persisted some scheduling parameters, such as **spark.driver.cores** and
**spark.executor.cores**, so users can access them more easily.
```java
/** scheduling */
private String driverCores;
private String driverMemory;
private String executorCores;
private String executorMemory;
private String executorMaxNums;
/** metrics of running job */
private Long numTasks;
private Long numCompletedTasks;
private Long numStages;
private Long numCompletedStages;
private Long usedMemory;
private Long usedVCores;
```
2. Enhanced application submission configuration settings:
- Added support for setting Spark configurations through the configuration
template (**spark-application.conf**) and custom configurations (**--conf** or
**-c**), with the following priority: **custom configurations** >
**configuration template** > **default configurations**.
- Added support for setting arguments for the main-class entry point.
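The precedence described above can be sketched as a layered map merge. This is a minimal illustration of the stated priority, not StreamPark's actual implementation; the class and method names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the described precedence: custom --conf values override the
 * template (spark-application.conf), which overrides the defaults.
 */
public class ConfPriorityDemo {
    static Map<String, String> mergeConf(Map<String, String> defaults,
                                         Map<String, String> template,
                                         Map<String, String> custom) {
        Map<String, String> merged = new HashMap<>(defaults);
        merged.putAll(template); // template overrides defaults
        merged.putAll(custom);   // custom --conf/-c overrides everything
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> defaults = Map.of("spark.executor.cores", "1");
        Map<String, String> template = Map.of("spark.executor.cores", "2",
                                              "spark.driver.memory", "1g");
        Map<String, String> custom   = Map.of("spark.executor.cores", "4");

        Map<String, String> merged = mergeConf(defaults, template, custom);
        System.out.println(merged.get("spark.executor.cores")); // 4
        System.out.println(merged.get("spark.driver.memory"));  // 1g
    }
}
```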
3. Added support for obtaining a metrics dashboard of running applications for
display, including the following information:
- From Spark: number of total/completed tasks, number of total/completed
stages.
- From YARN: memory and vcores used.
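As a rough sketch of how the dashboard fields above could be aggregated from per-stage data reported by Spark, the following illustrative code sums task and stage counts. The record and method names are hypothetical, not StreamPark's actual classes.

```java
import java.util.List;

/** Hypothetical aggregation of per-stage data into the dashboard counters. */
public class MetricsDemo {
    record Stage(long numTasks, long numCompletedTasks, boolean completed) {}

    /** Returns {numTasks, numCompletedTasks, numStages, numCompletedStages}. */
    static long[] aggregate(List<Stage> stages) {
        long numTasks = 0, numCompletedTasks = 0, numCompletedStages = 0;
        for (Stage s : stages) {
            numTasks += s.numTasks();
            numCompletedTasks += s.numCompletedTasks();
            if (s.completed()) numCompletedStages++;
        }
        return new long[] {numTasks, numCompletedTasks, stages.size(), numCompletedStages};
    }

    public static void main(String[] args) {
        List<Stage> stages = List.of(
            new Stage(100, 100, true),   // finished stage
            new Stage(50, 20, false));   // still running
        long[] m = aggregate(stages);
        System.out.println(m[0] + "/" + m[1] + " tasks, "
                + m[3] + "/" + m[2] + " stages"); // 150/120 tasks, 1/2 stages
    }
}
```

The memory and vcore figures would come separately from the YARN ResourceManager, as the bullet above notes.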
4. Refactored `SqlCommandParser` based on Spark SQL syntax, introducing
**org.apache.spark.sql.execution.SparkSqlParser** for SQL validation.
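A first step such a parser typically performs, before handing each statement to Spark's `SparkSqlParser` for validation, is splitting the SQL script into statements. The sketch below is illustrative only (it is not StreamPark's `SqlCommandParser`) and handles just semicolons outside single-quoted literals.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative statement splitter; real parsing is delegated to Spark. */
public class SqlSplitDemo {
    static List<String> splitStatements(String script) {
        List<String> out = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuote = false; // inside a single-quoted string literal
        for (char c : script.toCharArray()) {
            if (c == '\'') inQuote = !inQuote;
            if (c == ';' && !inQuote) {
                String stmt = cur.toString().trim();
                if (!stmt.isEmpty()) out.add(stmt);
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        String tail = cur.toString().trim();
        if (!tail.isEmpty()) out.add(tail);
        return out;
    }

    public static void main(String[] args) {
        // The ';' inside the quoted literal is not a statement boundary.
        List<String> stmts = splitStatements("SET k=v; SELECT 'a;b' AS s; ");
        System.out.println(stmts.size()); // 2
        System.out.println(stmts.get(1)); // SELECT 'a;b' AS s
    }
}
```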
## Verifying this change
<!--*(Please pick either of the following options)*-->
This change is a trivial rework / code cleanup without any test coverage.
*(or)*
This change is already covered by existing tests, such as *(please describe
tests)*.
*(or)*
This change added tests and can be verified as follows:
<!--*(example:)*
- *Added integration tests for end-to-end.*
- *Added *Test to verify the change.*
- *Manually verified the change by testing locally.* -->
## Does this pull request potentially affect one of the following parts
- Dependencies (does it add or upgrade a dependency): (yes / no)
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]